| input (string, 14–315 chars) | answer (string, 9–2.16k chars) | gold_ctxs (list, 1–15 items) | ctxs (list, 11–186 items) |
|---|---|---|---|
The authors claim that performance increases with the number of Attention Modules; is that true, knowing that they tried only m = {1, 2, 3, 4}?
|
It seems true as they also tried m = 5 and 6 and performance still improved, as seen in Table 6 [6].
|
[
6
] |
[
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representations of objects at that location. Previous works formulate attention drift as a sequential process to capture different attended aspects. However, as far as we know, no attention mechanism has been applied to feedforward network structure to achieve state-of-art results in image classification task. Recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_1",
"text": " Inspired by the attention mechanism and recent advances in the deep neural network, we propose Residual Attention Network, a convolutional network that adopts mixed attention mechanism in “very deep” structure. The Residual Attention Network is composed of multiple Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_2",
"text": " Apart from more discriminative feature representation brought by the attention mechanism, our model also exhibits following appealing properties: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_3",
"text": " (1) Increasing Attention Modules lead to consistent performance improvement, as different types of attention are captured extensively. Fig.1 shows an example of different types of attentions for a hot air balloon image. The sky attention mask diminishes background responses while the balloon instance mask highlighting the bottom part of the balloon. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_4",
"text": " (2) It is able to incorporate with state-of-the-art deep network structures in an end-to-end training fashion. Specifically, the depth of our network can be easily extended to hundreds of layers. Our Residual Attention Network outperforms state-of-the-art residual networks on CIFAR-10, CIFAR-100 and challenging ImageNet image classification dataset with significant reduction of computation (69% forward FLOPs). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_5",
"text": " All of the aforementioned properties, which are challenging to achieve with previous approaches, are made possible with following contributions: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_6",
"text": " (1) Stacked network structure: Our Residual Attention Network is constructed by stacking multiple Attention Modules. The stacked structure is the basic application of mixed attention mechanism. Thus, different types of attention are able to be captured in different Attention Modules. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_7",
"text": " (2) Attention Residual Learning: Stacking Attention Modules directly would lead to the obvious performance drop. Therefore, we propose attention residual learning mechanism to optimize very deep Residual Attention Network with hundreds of layers. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_8",
"text": " (3) Bottom-up top-down feedforward attention: Bottom-up top-down feedforward structure has been successfully applied to human pose estimation and image segmentation (22, 25, 1). We use such structure as part of Attention Module to add soft weights on features. This structure can mimic bottom-up fast feedforward process and top-down attention feedback in a single feedforward process which allows us to develop an end-to-end trainable network with top-down attention. The bottom-up top-down structure in our work differs from stacked hourglass network in its intention of guiding feature learning. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_9",
"text": " Evidence from human perception process shows the importance of attention mechanism, which uses top information to guide bottom-up feedforward process. Recently, tentative efforts have been made towards applying attention into deep neural network. Deep Boltzmann Machine (DBM) contains top-down attention by its reconstruction process in the training stage. Attention mechanism has also been widely applied to recurrent neural networks (RNN) and long short term memory (LSTM) to tackle sequential decision tasks (25, 29, 21, 18). Top information is gathered sequentially and decides where to attend for the next feature learning steps. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_10",
"text": " Residual learning is proposed to learn residual of identity mapping. This technique greatly increases the depth of feedforward neuron network. Similar to our work, (25, 29, 21, 18) use residual learning with attention mechanism to benefit from residual learning. Two information sources (query and query context) are captured using attention mechanism to assist each other in their work. While in our work, a single information source (image) is split into two different ones and combined repeatedly. And residual learning is applied to alleviate the problem brought by repeated splitting and combining. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_11",
"text": " In image classification, top-down attention mechanism has been applied using different methods: sequential process, region proposal and control gates. Sequential process (23, 12, 37, 7) models image classification as a sequential decision. Thus attention can be applied similarly with above. This formulation allows end-to-end optimization using RNN and LSTM and can capture different kinds of attention in a goal-driven way. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_12",
"text": " Region proposal (26, 4, 8, 38) has been successfully adopted in image detection task. In image classification, an additional region proposal stage is added before feedforward classification. The proposed regions contain top information and are used for feature learning in the second stage. Unlike image detection whose region proposals rely on large amount of supervision, e.g. the ground truth bounding boxes or detailed segmentation masks , unsupervised learning is usually used to generate region proposals for image classification. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_13",
"text": " Control gates have been extensively used in LSTM. In image classification with attention, control gates for neurones are updated with top information and have influence on the feedforward process during training (2, 30). However, a new process, reinforcement learning or optimization is involved during the training step. Highway Network extends control gate to solve gradient degradation problem for deep convolutional neural network. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_14",
"text": " However, recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). The feedforward convolutional network mimics the bottom-up paths of human cortex. Various approaches have been proposed to further improve the discriminative ability of deep convolutional neural network. VGG , Inception and residual learning are proposed to train very deep neural networks. Stochastic depth , Batch Normalization and Dropout exploit regularization for convergence and avoiding overfitting and degradation. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_15",
"text": " Soft attention developed in recent work (3, 17) can be trained end-to-end for convolutional network. Our Residual Attention Network incorporates the soft attention in fast developing feedforward network structure in an innovative way. Recent proposed spatial transformer module achieves state-of-the-art results on house number recognition task. A deep network module capturing top information is used to generate affine transformation. The affine transformation is applied to the input image to get attended region and then feed to another deep network module. The whole process can be trained end-to-end by using differentiable network layer which performs spatial transformation. Attention to scale uses soft attention as a scale selection mechanism and gets state-of-the-art results in image segmentation task. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_16",
"text": " The design of soft attention structure in our Residual Attention Network is inspired by recent development of localization oriented task, i.e. segmentation (22, 25, 1) and human pose estimation . These tasks motivate researchers to explore structure with fined-grained feature maps. The frameworks tend to cascade a bottom-up and a top-down structure. The bottom-up feedforward structure produces low resolution feature maps with strong semantic information. After that, a top-down network produces dense features to inference on each pixel. Skip connection is employed between bottom and top feature maps and achieved state-of-the-art result on image segmentation. The recent stacked hourglass network fuses information from multiple scales to predict human pose, and benefits from encoding both global and local information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_17",
"text": " Our Residual Attention Network is constructed by stacking multiple Attention Modules. Each Attention Module is divided into two branches: mask branch and trunk branch. The trunk branch performs feature processing and can be adapted to any state-of-the-art network structures. In this work, we use pre-activation Residual Unit , ResNeXt and Inception as our Residual Attention Networks basic unit to construct Attention Module. Given trunk branch output T(x)𝑇𝑥T(x) with input x𝑥x, the mask branch uses bottom-up top-down structure (22, 25, 1, 24) to learn same size mask M(x)𝑀𝑥M(x) that softly weight output features T(x)𝑇𝑥T(x). The bottom-up top-down structure mimics the fast feedforward and feedback attention process. The output mask is used as control gates for neurons of trunk branch similar to Highway Network . The output of Attention Module H𝐻H is: Hi,c(x)=Mi,c(x)∗Ti,c(x)subscript𝐻𝑖𝑐𝑥subscript𝑀𝑖𝑐𝑥subscript𝑇𝑖𝑐𝑥H_{i,c}(x)=M_{i,c}(x)*T_{i,c}(x) (1) where i ranges over all spatial positions and c∈{1,…,C}𝑐1…𝐶c\\in\\{1,...,C\\} is the index of the channel. The whole structure can be trained end-to-end. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_18",
"text": " In Attention Modules, the attention mask can not only serve as a feature selector during forward inference, but also as a gradient update filter during back propagation. In the soft mask branch, the gradient of mask for input feature is: ∂M(x,θ)T(x,ϕ)∂ϕ=M(x,θ)∂T(x,ϕ)∂ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ\\frac{\\partial M(x,\\theta)T(x,\\phi)}{\\partial\\phi}=M(x,\\theta)\\frac{\\partial T(x,\\phi)}{\\partial\\phi} (2) where the θ𝜃\\theta are the mask branch parameters and the ϕitalic-ϕ\\phi are the trunk branch parameters. This property makes Attention Modules robust to noisy labels. Mask branches can prevent wrong gradients (from noisy labels) to update trunk parameters. Experiment in Sec.4.1 shows the robustness of our Residual Attention Network against noisy labels. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_19",
"text": " Instead of stacking Attention Modules in our design, a simple approach would be using a single network branch to generate soft weight mask, similar to spatial transformer layer . However, these methods have several drawbacks on challenging datasets such as ImageNet. First, images with clutter background, complex scenes, and large appearance variations need to be modeled by different types of attentions. In this case, features from different layers need to be modeled by different attention masks. Using a single mask branch would require exponential number of channels to capture all combinations of different factors. Second, a single Attention Module only modify the features once. If the modification fails on some parts of the image, the following network modules do not get a second chance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_20",
"text": " The Residual Attention Network alleviates above problems. In Attention Module, each trunk branch has its own mask branch to learn attention that is specialized for its features. As shown in Fig.1, in hot air balloon images, blue color features from bottom layer have corresponding sky mask to eliminate background, while part features from top layer are refined by balloon instance mask. Besides, the incremental nature of stacked network structure can gradually refine attention for complex images. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_21",
"text": " However, naive stacking Attention Modules leads to the obvious performance drop. First, dot production with mask range from zero to one repeatedly will degrade the value of features in deep layers. Second, soft mask can potentially break good property of trunk branch, for example, the identical mapping of Residual Unit. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_22",
"text": " We propose attention residual learning to ease the above problems. Similar to ideas in residual learning, if soft mask unit can be constructed as identical mapping, the performances should be no worse than its counterpart without attention. Thus we modify output H𝐻H of Attention Module as Hi,c(x)=(1+Mi,c(x))∗Fi,c(x)subscript𝐻𝑖𝑐𝑥1subscript𝑀𝑖𝑐𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=(1+M_{i,c}(x))*F_{i,c}(x) (3) M(x)𝑀𝑥M(x) ranges from (0,1)01(0,1), with M(x)𝑀𝑥M(x) approximating 0, H(x)𝐻𝑥H(x) will approximate original features F(x)𝐹𝑥F(x). We call this method attention residual learning. Our stacked attention residual learning is different from residual learning. In the origin ResNet, residual learning is formulated as Hi,c(x)=x+Fi,c(x)subscript𝐻𝑖𝑐𝑥𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=x+F_{i,c}(x), where Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) approximates the residual function. In our formulation, Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) indicates the features generated by deep convolutional networks. The key lies on our mask branches M(x)𝑀𝑥M(x). They work as feature selectors which enhance good features and suppress noises from trunk features. In addition, stacking Attention Modules backs up attention residual learning by its incremental nature. Attention residual learning can keep good properties of original features, but also gives them the ability to bypass soft mask branch and forward to top layers to weaken mask branch’s feature selection ability. Stacked Attention Modules can gradually refine the feature maps. As show in Fig.1, features become much clearer as depth going deeper. By using attention residual learning, increasing depth of the proposed Residual Attention Network can improve performance consistently. As shown in the experiment section, the depth of Residual Attention Network is increased up to 452 whose performance surpasses ResNet-1001 by a large margin on CIFAR dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_23",
"text": " Following previous attention mechanism idea in DBN , our mask branch contains fast feed-forward sweep and top-down feedback steps. The former operation quickly collects global information of the whole image, the latter operation combines global information with original feature maps. In convolutional neural network, the two steps unfold into bottom-up top-down fully convolutional structure. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_24",
"text": " From input, max pooling are performed several times to increase the receptive field rapidly after a small number of Residual Units. After reaching the lowest resolution, the global information is then expanded by a symmetrical top-down architecture to guide input features in each position. Linear interpolation up sample the output after some Residual Units. The number of bilinear interpolation is the same as max pooling to keep the output size the same as the input feature map. Then a sigmoid layer normalizes the output range to (0,1)01(0,1) after two consecutive 1×1111\\times 1 convolution layers. We also added skip connections between bottom-up and top-down parts to capture information from different scales. The full module is illustrated in Fig.2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_25",
"text": " The bottom-up top-down structure has been applied to image segmentation and human pose estimation. However, the difference between our structure and the previous one lies in its intention. Our mask branch aims at improving trunk branch features rather than solving a complex problem directly. Experiment in Sec.4.1 is conducted to verify above arguments. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_26",
"text": " In our work, attention provided by mask branch changes adaptably with trunk branch features. However, constrains to attention can still be added to mask branch by changing normalization step in activation function before soft mask output. We use three types of activation functions corresponding to mixed attention, channel attention and spatial attention. Mixed attention f1subscript𝑓1f_{1} without additional restriction use simple sigmoid for each channel and spatial position. Channel attention f2subscript𝑓2f_{2} performs L2𝐿2L2 normalization within all channels for each spatial position to remove spatial information. Spatial attention f3subscript𝑓3f_{3} performs normalization within feature map from each channel and then sigmoid to get soft mask related to spatial information only. f1(xi,c)=11+exp(−xi,c)subscript𝑓1subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐\\displaystyle f_{1}(x_{i,c})=\\frac{1}{1+exp(-x_{i,c})} (4) f2(xi,c)=xi,c‖xi‖subscript𝑓2subscript𝑥𝑖𝑐subscript𝑥𝑖𝑐normsubscript𝑥𝑖\\displaystyle f_{2}(x_{i,c})=\\frac{x_{i,c}}{\\|x_{i}\\|} (5) f3(xi,c)=11+exp(−(xi,c−meanc)/stdc)subscript𝑓3subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐subscriptmean𝑐subscriptstd𝑐\\displaystyle f_{3}(x_{i,c})=\\frac{1}{1+exp(-(x_{i,c}-\\text{mean}_{c})/\\text{std}_{c})} (6) Where i𝑖i ranges over all spatial positions and c𝑐c ranges over all channels. meancsubscriptmean𝑐\\text{mean}_{c} and stdcsubscriptstd𝑐\\text{std}_{c} denotes the mean value and standard deviation of feature map from c𝑐c-th channel. xisubscript𝑥𝑖x_{i} denotes the feature vector at the i𝑖ith spatial position. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_27",
"text": " The experiment results are shown in Table 1, the mixed attention has the best performance. Previous works normally focus on only one type of attention, for example scale attention or spatial attention , which puts additional constrain on soft mask by weight sharing or normalization. However, as supported by our experiments, making attention change adaptively with features without additional constraint leads to the best performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_28",
"text": " In this section, we evaluate the performance of proposed Residual Attention Network on a series of benchmark datasets including CIFAR-10, CIFAR-100 , and ImageNet . Our experiments contain two parts. In the first part, we analyze the effectiveness of each component in the Residual Attention Network including attention residual learning mechanism and different architectures of soft mask branch in the Attention Module. After that, we explore the noise resistance property. Given limited computation resources, we choose CIFAR-10 and CIFAR-100 dataset to conduct these experiments. Finally, we compare our network with state-of-the-art results in CIFAR dataset. In the second part, we replace the Residual Unit with Inception Module and ResNeXt to demonstrate our Residual Attention Network surpasses origin networks both in parameter efficiency and final performance. We also compare image classification performance with state-of-the-art ResNet and Inception on ImageNet dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_29",
"text": " The CIFAR-10 and CIFAR-100 datasets consist of 60,0006000060,000 32×32323232\\times 32 color images of 101010 and 100100100 classes respectively, with 50,0005000050,000 training images and 10,0001000010,000 test images. The broadly applied state-of-the-art network structure ResNet is used as baseline method. To conduct fair comparison, we keep most of the settings same as ResNet paper . The image is padded by 4 pixels on each side, filled with 00 value resulting in 40×40404040\\times 40 image. A 32×32323232\\times 32 crop is randomly sampled from an image or its horizontal flip, with the per-pixel RGB mean value subtracted. We adopt the same weight initialization method following previous study and train Residual Attention Network using nesterov SGD with a mini-batch size of 64. We use a weight decay of 0.00010.00010.0001 with a momentum of 0.90.90.9 and set the initial learning rate to 0.1. The learning rate is divided by 10 at 646464k and 969696k iterations. We terminate training at 160160160k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_30",
"text": " The overall network architecture and the hyper parameters setting are described in Fig.2. The network consists of 3 stages and similar to ResNet , equal number of Attention Modules are stacked in each stage. Additionally, we add two Residual Units at each stage. The number of weighted layers in trunk branch is 36m𝑚m+20 where m𝑚m is the number of Attention Module in one stage. We use original 32×32323232\\times 32 image for testing. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_31",
"text": " In this experiment, we evaluate the effectiveness of attention residual learning mechanism. Since the notion of attention residual learning (ARL) is new, no suitable previous methods are comparable therefore we use “naive attention learning” (NAL) as baseline. Specifically, “naive attention learning” uses Attention Module where features are directly dot product by soft mask without attention residual learning. We set the number of Attention Module in each stage m𝑚m = {1, 2, 3, 4}. For Attention Module, this leads to Attention-56 (named by trunk layer depth), Attention-92, Attention-128 and Attention-164 respectively. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_32",
"text": " We train these networks using different mechanisms and summarize the results in the Table 3. As shown in Table 3, the networks trained using attention residual learning technique consistently outperform the networks trained with baseline method which proves the effectiveness of our method. The performance increases with the number of Attention Module when applying attention residual learning. In contrast, the performance of networks trained with “naive attention learning” method suffers obvious degradation with increased number of Attention Module. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_33",
"text": " To understand the benefit of attention residual learning, we calculate mean absolute response value of output layers for each stage. We use Attention-164 to conduct this experiment. As shown in the Fig. 4, the response generated by the network trained using naive attention learning quickly vanishes in the stage 2 after four Attention Modules compared with network trained using attention residual learning. The Attention Module is designed to suppress noise while keeping useful information by applying dot product between feature and soft mask. However, repeated dot product will lead to severe degradation of both useful and useless information in this process. The attention residual learning can relieve signal attenuation using identical mapping, which enhances the feature contrast. Therefore, it gains benefits from noise reduction without significant information loss, which makes optimization much easier while improving the discrimination of represented features. In the rest of the experiments, we apply this technique to train our networks. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_34",
"text": " We conduct experiments to validate the effectiveness of encoder-decoder structure by comparing with local convolutions without any down sampling or up sampling. The local convolutions soft mask consists of three Residual Units using the same number of FLOPs. The Attention-56 is used to construct Attention-Encoder-Decoder-56 and Attention-Local-Conv-56 respectively. Results are shown in Table 4. The Attention-Encoder-Decoder-56 network achieves lower test error 5.52%percent5.525.52\\% compared with Attention-Local-Conv-56 network 6.48%percent6.486.48\\% with a considerable margin 0.94%percent0.940.94\\%. The result suggests that the soft attention optimization process will benefit from multi-scale information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_35",
"text": " In this experiment, we show our Residual Attention Network enjoys noise resistant property on CIFAR-10 dataset following the setting of paper . The confusion matrix Q𝑄Q in our experiment is set as follows: Q=(r1−r9⋯1−r91−r9r⋯1−r9⋮⋮⋱⋮1−r91−r9⋯r)10×10𝑄subscriptmatrix𝑟1𝑟9⋯1𝑟91𝑟9𝑟⋯1𝑟9⋮⋮⋱⋮1𝑟91𝑟9⋯𝑟1010Q=\\left(\\begin{matrix}r&\\frac{1-r}{9}&\\cdots&\\frac{1-r}{9}\\\\ \\frac{1-r}{9}&r&\\cdots&\\frac{1-r}{9}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\frac{1-r}{9}&\\frac{1-r}{9}&\\cdots&r\\\\ \\end{matrix}\\right)_{10\\times 10} (7) ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_36",
"text": " where r𝑟r denotes the clean label ratio for the whole dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_37",
"text": " We compare ResNet-164 network with Attention-92 network under different noise levels. The Table 5 shows the results. The test error of Attention-92 network is significantly lower than ResNet-164 network with the same noise level. In addition, when we increase the ratio of noise, test error of Attenion-92 declines slowly compared with ResNet-164 network. These results suggest that our Residual Attention Network can perform well even trained with high level noise data. When the label is noisy, the corresponding mask can prevent gradient caused by label error to update trunk branch parameters in the network. In this way, only the trunk branch is learning the wrong supervision information and soft mask branch masks the wrong label. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_38",
"text": " We compare our Residual Attention Network with state-of-the-art methods including ResNet and Wide ResNet on CIFAR-10 and CIFAR-100 datasets. The results are shown in Table 6. Our Attention-452 outperforms all the baseline methods on CIFAR-10 and CIFAR-100 datasets. Note that Attention-92 network achieves 4.99%percent4.994.99\\% test error on CIFAR-10 and 21.71%percent21.7121.71\\% test error on CIFAR-100 compared with 5.46%percent5.465.46\\% and 24.33%percent24.3324.33\\% test error on CIFAR-10 and CIFAR-100 for ResNet-164 network under similar parameter size. In addition, Attention-236 outperforms ResNet-1001 using only half of the parameters. It suggests that our Attention Module and attention residual learning scheme can effectively reduce the number of parameters in the network while improving the classification performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_39",
"text": " In this section, we conduct experiments using ImageNet LSVRC 201220122012 dataset , which contains 1,00010001,000 classes with 1.21.21.2 million training images, 50,0005000050,000 validation images, and 100,000100000100,000 test images. The evaluation is measured on the non-blacklist images of the ImageNet LSVRC 201220122012 validation set. We use Attention-56 and Attention-92 to conduct the experiments. The network structures and hyper parameters can be found in the Table 2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_40",
"text": " Our implementation generally follows the practice in the previous study . We apply scale and aspect ratio augmentation to the original image. A 224×224224224224\\times 224 crop is randomly sampled from an augment image or its horizontal flip, with the per-pixel RGB scale to (0,1)01(0,1) and mean value subtracted and standard variance divided. We adopt standard color augmentation . The network is trained using SGD with a momentum of 0.90.90.9. We set initial learning rate to 0.1. The learning rate is divided by 10 at 200200200k, 400400400k, 500500500k iterations. We terminate training at 530530530k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_41",
"text": " In this experiment, we explore the efficiency of proposed Residual Attention Network. We compare Attention-56 with ResNet-152 . The ResNet-152 has 50 trunk Residual Units and 60.2×106absentsuperscript106\\times 10^{6} parameters compared with 18 trunk Residual Units and 31.9×106absentsuperscript106\\times 10^{6} parameters in Attention-56. We evaluate our model using single crop scheme on the ImageNet validation set and show results in Table 7. The Attention-56 network outperforms ResNet-152 by a large margin with a 0.4%percent0.40.4\\% reduction on top-1 error and a 0.26%percent0.260.26\\% reduction on top-5 error. More importantly, Attention-56 network achieves better performance with only 52% parameters and 56% FLOPs compared with ResNet-152, which suggests that the proposed attention mechanism can significantly improve network performance while reducing the model complexity. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_42",
"text": " In this experiment, we show Residual Attention Network can generalize well using different basic unit. We apply three popular basic units: Residual Unit, ResNeXt , and Inception to construct our Residual Attention Networks. To keep the number of parameters and FLOPs in the same scale, we simplify the Inception. Results are shown in Table 7. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_43",
"text": " When the basic unit is ResNeXt, the AttentionNeXt-56 network performance is the same as ResNeXt-101 while the parameters and FLOPs are significantly fewer than ResNeXt-101. For Inception, The AttentionIncepiton-56 outperforms Inception-ResNet-v1 by a margin with a 0.94% reduction on top-1 error and a 0.21% reduction on top-5 error. The results show that our method can be applied on different network structures. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_44",
"text": " We compare our Attention-92 evaluated using single crop on the ILSVRC 2012 validation set with state-of-the-art algorithms. Table 7 shows the results. Our Attention-92 outperforms ResNet-200 with a large margin. The reduction on top-1 error is 0.6%percent0.60.6\\%. Note that the ResNet-200 network contains 32%percent3232\\% more parameters than Attention-92. The computational complexity of Attention-92 shown in the Table 7 suggests that our network reduces nearly half training time comparing with ResNet-200 by adding attention mechanism and reducing trunk depth. Above results suggest that our model enjoys high efficiency and good performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_45",
"text": " We propose a Residual Attention Network which stacks multiple Attention Modules. The benefits of our network are in two folds: it can capture mixed attention and is an extensible convolutional neural network. The first benefit lies in that different Attention Modules capture different types of attention to guide feature learning. Our experiments on the forms of activation function also validate this point: free form mixed attention will have better performance than constrained (including single) attention. The second benefit comes from encoding top-down attention mechanism into bottom-up top-down feedforward convolutional structure in each Attention Module. Thus, the basic Attention Modules can be combined to form larger network structure. Moreover, residual attention learning allows training very deep Residual Attention Network. The performance of our model surpasses state-of-the-art image classification methods, i.e. ResNet on CIFAR-10 (3.90% error), CIFAR-100 (20.67% error), and challenging ImageNet dataset (0.6% top-1 accuracy improvement) with only 46%percent4646\\% trunk depth and 69%percent6969\\% forward FLOPs (comparing with ResNet-200). In the future, we will exploit different applications of deep Residual Attention Network such as detection and segmentation to better explore mixed attention mechanism for specific tasks. ",
"title": "Residual Attention Network for Image Classification"
}
] |
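
To make the quoted formulas concrete: equations (1) and (3) above combine a trunk output T(x) with a soft mask M(x) as H = M(x) * T(x) (naive attention learning) or H = (1 + M(x)) * T(x) (attention residual learning). Below is a minimal PyTorch sketch of one Attention Module under those definitions; the trunk/mask layer counts, channel width, and single pooling step are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualUnit(nn.Module):
    """Simplified pre-activation residual unit (stand-in for the paper's basic unit)."""

    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.conv2(F.relu(self.bn2(out)))
        return x + out


class AttentionModule(nn.Module):
    """One Attention Module: trunk branch T(x), bottom-up top-down mask branch M(x),
    combined with attention residual learning H = (1 + M(x)) * T(x) (Eq. 3)."""

    def __init__(self, channels: int):
        super().__init__()
        self.trunk = nn.Sequential(ResidualUnit(channels), ResidualUnit(channels))
        # Bottom-up: max pooling enlarges the receptive field quickly.
        self.down = nn.Sequential(nn.MaxPool2d(2), ResidualUnit(channels))
        # Top-down: residual unit, then bilinear upsampling back to the input size.
        self.up = ResidualUnit(channels)
        # Two consecutive 1x1 convolutions and a sigmoid give a soft mask in (0, 1).
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        t = self.trunk(x)                                        # T(x)
        m = self.down(x)                                         # bottom-up path
        m = F.interpolate(self.up(m), size=x.shape[-2:],
                          mode="bilinear", align_corners=False)  # top-down path
        m = self.mask_head(m)                                    # M(x) in (0, 1)
        # Naive attention learning would return m * t (Eq. 1); the residual form
        # keeps an identity path so repeated masking cannot attenuate trunk features.
        return (1.0 + m) * t


module = AttentionModule(channels=16)
h = module(torch.randn(2, 16, 32, 32))
print(h.shape)  # torch.Size([2, 16, 32, 32])
```

Stacking several such modules per stage (m = 1 to 4 in the quoted ablation) is what the question above refers to.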
What are examples of suitable prompts for inversion?
|
Examples of inversion with prompts can be found in Figure 12, where they used mask-based editing to limit inversion distortion [27].
|
[
27
] |
[
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained on extremely large language-image datasets and use state-of-the-art image generative models including auto-regressive and diffusion models. However, these models do not provide simple editing means, and generally lack control over specific semantic regions of a given image. In particular, even the slightest change in the textual prompt may lead to a completely different output image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_1",
"text": " To circumvent this, LLI-based methods (28, 4, 33) require the user to explicitly mask a part of the image to be inpainted, and drive the edited image to change in the masked area only, while matching the background of the original image. This approach has provided appealing results, however, the masking procedure is cumbersome, hampering quick and intuitive text-driven editing. Moreover, masking the image content removes important structural information, which is completely ignored in the inpainting process. Therefore, some editing capabilities are out of the inpainting scope, such as modifying the texture of a specific object. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_2",
"text": " In this paper, we introduce an intuitive and powerful textual editing method to semantically edit images in pre-trained text-conditioned diffusion models via Prompt-to-Prompt manipulations. To do so, we dive deep into the cross-attention layers and explore their semantic strength as a handle to control the generated image. Specifically, we consider the internal cross-attention maps, which are high-dimensional tensors that bind pixels and tokens extracted from the prompt text. We find that these maps contain rich semantic relations which critically affect the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_3",
"text": " Our key idea is that we can edit images by injecting the cross-attention maps during the diffusion process, controlling which pixels attend to which tokens of the prompt text during which diffusion steps. To apply our method to various creative editing applications, we show several methods to control the cross-attention maps through a simple and semantic interface (see fig. 1). The first is to change a single token’s value in the prompt (e.g., “dog” to “cat”), while fixing the cross-attention maps, to preserve the scene composition. The second is to globally edit an image, e.g., change the style, by adding new words to the prompt and freezing the attention on previous tokens, while allowing new attention to flow to the new tokens. The third is to amplify or attenuate the semantic effect of a word in the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_4",
"text": " Our approach constitutes an intuitive image editing interface through editing only the textual prompt, therefore called Prompt-to-Prompt. This method enables various editing tasks, which are challenging otherwise, and does not requires model training, fine-tuning, extra data, or optimization. Throughout our analysis, we discover even more control over the generation process, recognizing a trade-off between the fidelity to the edited prompt and the source image. We even demonstrate that our method can be applied to real images by using an existing inversion process. Our experiments and numerous results show that our method enables seamless editing in an intuitive text-based manner over extremely diverse images. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_5",
"text": " Image editing is one of the most fundamental tasks in computer graphics, encompassing the process of modifying an input image through the use of an auxiliary input, such as a label, scribble, mask, or reference image. A specifically intuitive way to edit an image is through textual prompts provided by the user. Recently, text-driven image manipulation has achieved significant progress using GANs (15, 8, 19, 20, 21), which are known for their high-quality generation, in tandem with CLIP , which consists of a semantically rich joint image-text representation, trained over millions of text-image pairs. Seminal works (29, 14, 46, 2) which combined these components were revolutionary, since they did not require extra manual labor, and produced highly realistic manipulations using text only. Bau et al. further demonstrated how to use masks provided by the user, to localize the text-based editing and restrict the change to a specific spatial region. However, while GAN-based image editing approaches succeed on highly-curated datasets , e.g., human faces, they struggle over large and diverse datasets. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_6",
"text": " To obtain more expressive generation capabilities, Crowson et al. use VQ-GAN , trained over diverse data, as a backbone. Other works (5, 22) exploit the recent Diffusion models (17, 39, 41, 17, 40, 36), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . Kim et al. show how to perform global changes, whereas Avrahami et al. successfully perform local manipulations using user-provided masks for guidance. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_7",
"text": " While most works that require only text (i.e., no masks) are limited to global editing (9, 23), Bar-Tal et al. proposed a text-based localized editing technique without using any mask, showing impressive results. Yet, their techniques mainly allow changing textures, but not modifying complex structures, such as changing a bicycle to a car. Moreover, unlike our method, their approach requires training a network for each input. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_8",
"text": " Numerous works (11, 16, 42, 25, 26, 30, 31, 34, 49, 9, 13, 36) significantly advanced the generation of images conditioned on plain text, known as text-to-image synthesis. Several large-scale text-image models have recently emerged, such as Imagen , DALL-E2 , and Parti , demonstrating unprecedented semantic generation. However, these models do not provide control over a generated image, specifically using text guidance only. Changing a single word in the original prompt associated with the image often leads to a completely different outcome. For instance, adding the adjective “white” to “dog” often changes the dog’s shape. To overcome this, several works (28, 4) assume that the user provides a mask to restrict the area in which the changes are applied. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_9",
"text": " Unlike previous works, our method requires textual input only, by using the spatial information from the internal layers of the generative model itself. This offers the user a much more intuitive editing experience of modifying local or global details by merely modifying the text prompt. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_10",
"text": " Let ℐℐ\\mathcal{I} be an image which was generated by a text-guided diffusion model using the text prompt 𝒫𝒫\\mathcal{P} and a random seed s𝑠s. Our goal is editing the input image guided only by the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}, resulting in an edited image ℐ∗superscriptℐ\\mathcal{I}^{*}. For example, consider an image generated from the prompt “my new bicycle”, and assume that the user wants to edit the color of the bicycle, its material, or even replace it with a scooter while preserving the appearance and structure of the original image. An intuitive interface for the user is to directly change the text prompt by further describing the appearance of the bikes, or replacing it with another word. As opposed to previous works, we wish to avoid relying on any user-defined mask to assist or signify where the edit should occur. A simple, but an unsuccessful attempt is to fix the internal randomness and regenerate using the edited text prompt. Unfortunately, as fig. 2 shows, this results in a completely different image with a different structure and composition. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_11",
"text": " Our key observation is that the structure and appearances of the generated image depend not only on the random seed, but also on the interaction between the pixels to the text embedding through the diffusion process. By modifying the pixel-to-text interaction that occurs in cross-attention layers, we provide Prompt-to-Prompt image editing capabilities. More specifically, injecting the cross-attention maps of the input image ℐℐ\\mathcal{I} enables us to preserve the original composition and structure. In section 3.1, we review how cross-attention is used, and in section 3.2 we describe how to exploit the cross-attention for editing. For additional background on diffusion models, please refer to appendix A. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_12",
"text": " We use the Imagen text-guided synthesis model as a backbone. Since the composition and geometry are mostly determined at the 64×64646464\\times 64 resolution, we only adapt the text-to-image diffusion model, using the super-resolution process as is. Recall that each diffusion step t𝑡t consists of predicting the noise ϵitalic-ϵ\\epsilon from a noisy image ztsubscript𝑧𝑡z_{t} and text embedding ψ(𝒫)𝜓𝒫\\psi(\\mathcal{P}) using a U-shaped network . At the final step, this process yields the generated image ℐ=z0ℐsubscript𝑧0\\mathcal{I}=z_{0}. Most importantly, the interaction between the two modalities occurs during the noise prediction, where the embeddings of the visual and textual features are fused using Cross-attention layers that produce spatial attention maps for each textual token. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_13",
"text": " More formally, as illustrated in fig. 3(Top), the deep spatial features of the noisy image ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}) are projected to a query matrix Q=ℓQ(ϕ(zt))𝑄subscriptℓ𝑄italic-ϕsubscript𝑧𝑡Q=\\ell_{Q}(\\phi(z_{t})), and the textual embedding is projected to a key matrix K=ℓK(ψ(𝒫))𝐾subscriptℓ𝐾𝜓𝒫K=\\ell_{K}(\\psi(\\mathcal{P})) and a value matrix V=ℓV(ψ(𝒫))𝑉subscriptℓ𝑉𝜓𝒫V=\\ell_{V}(\\psi(\\mathcal{P})), via learned linear projections ℓQ,ℓK,ℓVsubscriptℓ𝑄subscriptℓ𝐾subscriptℓ𝑉\\ell_{Q},\\ell_{K},\\ell_{V}. The attention maps are then M=Softmax(QKTd),𝑀Softmax𝑄superscript𝐾𝑇𝑑M=\\text{Softmax}\\left(\\frac{QK^{T}}{\\sqrt{d}}\\right), (1) where the cell Mijsubscript𝑀𝑖𝑗M_{ij} defines the weight of the value of the j𝑗j-th token on the pixel i𝑖i, and where d𝑑d is the latent projection dimension of the keys and queries. Finally, the cross-attention output is defined to be ϕ^(zt)=MV^italic-ϕsubscript𝑧𝑡𝑀𝑉\\widehat{\\phi}\\left(z_{t}\\right)=MV, which is then used to update the spatial features ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_14",
"text": " Intuitively, the cross-attention output MV𝑀𝑉MV is a weighted average of the values V𝑉V where the weights are the attention maps M𝑀M, which are correlated to the similarity between Q𝑄Q and K𝐾K. In practice, to increase their expressiveness, multi-head attention is used in parallel, and then the results are concatenated and passed through a learned linear layer to get the final output. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_15",
"text": " Imagen , similar to GLIDE , conditions on the text prompt in the noise prediction of each diffusion step (see section A.2) through two types of attention layers: i) cross-attention layers. ii) hybrid attention that acts both as self-attention and cross-attention by simply concatenating the text embedding sequence to the key-value pairs of each self-attention layer. Throughout the rest of the paper, we refer to both of them as cross-attention since our method only intervenes in the cross-attention part of the hybrid attention. That is, only the last channels, which refer to text tokens, are modified in the hybrid attention modules. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_16",
"text": " We return to our key observation — the spatial layout and geometry of the generated image depend on the cross-attention maps. This interaction between pixels and text is illustrated in fig. 4, where the average attention maps are plotted. As can be seen, pixels are more attracted to the words that describe them, e.g., pixels of the bear are correlated with the word “bear”. Note that averaging is done for visualization purposes, and attention maps are kept separate for each head in our method. Interestingly, we can see that the structure of the image is already determined in the early steps of the diffusion process. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_17",
"text": " Since the attention reflects the overall composition, we can inject the attention maps M𝑀M that were obtained from the generation with the original prompt 𝒫𝒫\\mathcal{P}, into a second generation with the modified prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. This allows the synthesis of an edited image ℐ∗superscriptℐ\\mathcal{I}^{*} that is not only manipulated according to the edited prompt, but also preserves the structure of the input image ℐℐ\\mathcal{I}. This example is a specific instance of a broader set of attention-based manipulations leading to different types of intuitive editing. We, therefore, start by proposing a general framework, followed by the details of the specific editing operations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_18",
"text": " Let DM(zt,𝒫,t,s)𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠DM(z_{t},\\mathcal{P},t,s) be the computation of a single step t𝑡t of the diffusion process, which outputs the noisy image zt−1subscript𝑧𝑡1z_{t-1}, and the attention map Mtsubscript𝑀𝑡M_{t} (omitted if not used). We denote by DM(zt,𝒫,t,s){M←M^}𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠←𝑀^𝑀DM(z_{t},\\mathcal{P},t,s)\\{M\\leftarrow\\widehat{M}\\} the diffusion step where we override the attention map M𝑀M with an additional given map M^^𝑀\\widehat{M}, but keep the values V𝑉V from the supplied prompt. We also denote by Mt∗superscriptsubscript𝑀𝑡M_{t}^{*} the produced attention map using the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. Lastly, we define Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) to be a general edit function, receiving as input the t𝑡t’th attention maps of the original and edited images during their generation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_19",
"text": " Our general algorithm for controlled image generation consists of performing the iterative diffusion process for both prompts simultaneously, where an attention-based manipulation is applied in each step according to the desired editing task. We note that for the method above to work, we must fix the internal randomness. This is due to the nature of diffusion models, where even for the same prompt, two random seeds produce drastically different outputs. Formally, our general algorithm is: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_20",
"text": " Notice that we can also define image ℐℐ\\mathcal{I}, which is generated by prompt 𝒫𝒫\\mathcal{P} and random seed s𝑠s, as an additional input. Yet, the algorithm would remain the same. For editing real images, see section 4. Also, note that we can skip the forward call in line 777 by applying the edit function inside the diffusion forward function. Moreover, a diffusion step can be applied on both zt−1subscript𝑧𝑡1z_{t-1} and zt∗superscriptsubscript𝑧𝑡z_{t}^{*} in the same batch (i.e., in parallel), and so there is only one step overhead with respect to the original inference of the diffusion model. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_21",
"text": " We now turn to address specific editing operations, filling the missing definition of the Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) function. An overview is presented in fig. 3(Bottom). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_22",
"text": " In this case, the user swaps tokens of the original prompt with others, e.g., 𝒫=𝒫absent\\mathcal{P}=“a big red bicycle” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“a big red car”. The main challenge is to preserve the original composition while also addressing the content of the new prompt. To this end, we inject the attention maps of the source image into the generation with the modified prompt. However, the proposed attention injection may over constrain the geometry, especially when a large structural modification, such as “car” to “bicycle”, is involved. We address this by suggesting a softer attention constrain: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_23",
"text": " Edit(Mt,Mt∗,t):={Mt∗ift<τMtotherwise.assign𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡casessuperscriptsubscript𝑀𝑡if𝑡𝜏subscript𝑀𝑡otherwise.Edit(M_{t},M_{t}^{*},t):=\\begin{cases}M_{t}^{*}&\\quad\\text{if}\\;t<\\tau\\\\ M_{t}&\\quad\\text{otherwise.}\\\\ \\end{cases} where τ𝜏\\tau is a timestamp parameter that determines until which step the injection is applied. Note that the composition is determined in the early steps of the diffusion process. Therefore, by limiting the number of injection steps, we can guide the composition of the newly generated image while allowing the necessary geometry freedom for adapting to the new prompt. An illustration is provided in section 4. Another natural relaxation for our algorithm is to assign a different number of injection timestamps for the different tokens in the prompt. In case the two words are represented using a different number of tokens, the maps can be duplicated/averaged as necessary using an alignment function as described in the next paragraph. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_24",
"text": " In another setting, the user adds new tokens to the prompt, e.g., 𝒫=𝒫absent\\mathcal{P}=“a castle next to a river” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“children drawing of a castle next to a river”. To preserve the common details, we apply the attention injection only over the common tokens from both prompts. Formally, we use an alignment function A𝐴A that receives a token index from target prompt 𝒫∗superscript𝒫\\mathcal{P}^{*} and outputs the corresponding token index in 𝒫𝒫\\mathcal{P} or None if there isn’t a match. Then, the editing function is given by: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_25",
"text": " (Edit(Mt,Mt∗,t))i,j:={(Mt∗)i,jifA(j)=None(Mt)i,A(j)otherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗casessubscriptsuperscriptsubscript𝑀𝑡𝑖𝑗if𝐴𝑗𝑁𝑜𝑛𝑒subscriptsubscript𝑀𝑡𝑖𝐴𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}(M_{t}^{*})_{i,j}&\\quad\\text{if}\\;A(j)=None\\\\ (M_{t})_{i,A(j)}&\\quad\\text{otherwise.}\\\\ \\end{cases} Recall that index i𝑖i corresponds to a pixel value, where j𝑗j corresponds to a text token. Again, we may set a timestamp τ𝜏\\tau to control the number of diffusion steps in which the injection is applied. This kind of editing enables diverse Prompt-to-Prompt capabilities such as stylization, specification of object attributes, or global manipulations as demonstrated in section 4. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_26",
"text": " Lastly, the user may wish to strengthen or weakens the extent to which each token is affecting the resulting image. For example, consider the prompt 𝒫=𝒫absent\\mathcal{P}= “a fluffy red ball”, and assume we want to make the ball more or less fluffy. To achieve such manipulation, we scale the attention map of the assigned token j∗superscript𝑗j^{*} with parameter c∈(−2,2)𝑐22c\\in(-2,2), resulting in a stronger/weaker effect. The rest of the attention maps remain unchanged. That is: (Edit(Mt,Mt∗,t))i,j:={c⋅(Mt)i,jif j=j∗(Mt)i,jotherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗cases⋅𝑐subscriptsubscript𝑀𝑡𝑖𝑗if 𝑗superscript𝑗subscriptsubscript𝑀𝑡𝑖𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}c\\cdot(M_{t})_{i,j}&\\quad\\text{if }j=j^{*}\\\\ (M_{t})_{i,j}&\\quad\\text{otherwise.}\\\\ \\end{cases} As described in section 4, the parameter c𝑐c allows fine and intuitive control over the induced effect. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_27",
"text": " Our method, described in section 3, enables intuitive text-only editing by controlling the spatial layout corresponding to each word in the user-provided prompt. In this section, we show several applications using this technique. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_28",
"text": " Text-Only Localized Editing. We first demonstrate localized editing by modifying the user-provided prompt without requiring any user-provided mask. In fig. 2, we depict an example where we generate an image using the prompt “lemon cake”. Our method allows us to retain the spatial layout, geometry, and semantics when replacing the word “lemon” with “pumpkin” (top row). Observe that the background is well-preserved, including the top-left lemons transforming into pumpkins. On the other hand, naively feeding the synthesis model with the prompt “pumpkin cake” results in a completely different geometry (333rd row), even when using the same random seed in a deterministic setting (i.e., DDIM ). Our method succeeds even for a challenging prompt such as “pasta cake.” (222nd row) — the generated cake consists of pasta layers with tomato sauce on top. Another example is provided in fig. 5 where we do not inject the attention of the entire prompt but only the attention of a specific word – “butterfly”. This enables the preservation of the original butterfly while changing the rest of the content. Additional results are provided in the appendix (fig. 13). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_29",
"text": " As can be seen in fig. 6, our method is not confined to modifying only textures, and it can perform structural modifications, e.g., change a “bicycle” to a “car”. To analyze our attention injection, in the left column we show the results without cross-attention injection, where changing a single word leads to an entirely different outcome. From left to right, we then show the resulting generated image by injecting attention to an increasing number of diffusion steps. Note that the more diffusion steps in which we apply cross-attention injection, the higher the fidelity to the original image. However, the optimal result is not necessarily achieved by applying the injection throughout all diffusion steps. Therefore, we can provide the user with even better control over the fidelity to the original image by changing the number of injection steps. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_30",
"text": " Instead of replacing one word with another, the user may wish to add a new specification to the generated image. In this case, we keep the attention maps of the original prompt, while allowing the generator to address the newly added words. For example, see fig. 7 (top), where we add “crushed” to the “car”, resulting in the generation of additional details over the original image while the background is still preserved. See the appendix (fig. 14) for more examples. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_31",
"text": " Global editing. Preserving the image composition is not only valuable for localized editing, but also an important aspect of global editing. In this setting, the editing should affect all parts of the image, but still retain the original composition, such as the location and identity of the objects. As shown in fig. 7 (bottom), we retain the image content while adding “snow” or changing the lightning. Additional examples appear in fig. 8, including translating a sketch into a photo-realistic image and inducing an artistic style. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_32",
"text": " Fader Control using Attention Re-weighting. While controlling the image by editing the prompt is very effective, we find that it still does not allow full control over the generated image. Consider the prompt “snowy mountain”. A user may want to control the amount of snow on the mountain. However, it is quite difficult to describe the desired amount of snow through text. Instead, we suggest a fader control , where the user controls the magnitude of the effect induced by a specific word, as depicted in fig. 9. As described in section 3, we achieve such control by re-scaling the attention of the specified word. Additional results are in the appendix (fig. 15). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_33",
"text": " Real Image Editing. Editing a real image requires finding an initial noise vector that produces the given input image when fed into the diffusion process. This process, known as inversion, has recently drawn considerable attention for GANs, e.g., (51, 1, 3, 35, 50, 43, 45, 47), but has not yet been fully addressed for text-guided diffusion models. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_34",
"text": " In the following, we show preliminary editing results on real images, based on common inversion techniques for diffusion models. First, a rather naïve approach is to add Gaussian noise to the input image, and then perform a predefined number of diffusion steps. Since this approach results in significant distortions, we adopt an improved inversion approach (10, 40), which is based on the deterministic DDIM model rather than the DDPM model. We perform the diffusion process in the reverse direction, that is x0⟶xT⟶subscript𝑥0subscript𝑥𝑇x_{0}\\longrightarrow x_{T} instead of xT⟶x0⟶subscript𝑥𝑇subscript𝑥0x_{T}\\longrightarrow x_{0}, where x0subscript𝑥0x_{0} is set to be the given real image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_35",
"text": " This inversion process often produces satisfying results, as presented in fig. 10. However, the inversion is not sufficiently accurate in many other cases, as in fig. 11. This is partially due to a distortion-editability tradeoff , where we recognize that reducing the classifier-free guidance parameter (i.e., reducing the prompt influence) improves reconstruction but constrains our ability to perform significant manipulations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_36",
"text": " To alleviate this limitation, we propose to restore the unedited regions of the original image using a mask, directly extracted from the attention maps. Note that here the mask is generated with no guidance from the user. As presented in fig. 12, this approach works well even using the naïve DDPM inversion scheme (adding noise followed by denoising). Note that the cat’s identity is well-preserved under various editing operations, while the mask is produced only from the prompt itself. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_37",
"text": " In this work, we uncovered the powerful capabilities of the cross-attention layers within text-to-image diffusion models. We showed that these high-dimensional layers have an interpretable representation of spatial maps that play a key role in tying the words in the text prompt to the spatial layout of the synthesized image. With this observation, we showed how various manipulations of the prompt can directly control attributes in the synthesized image, paving the way to various applications including local and global editing. This work is a first step towards providing users with simple and intuitive means to edit images, leveraging textual semantic power. It enables users to navigate through a semantic, textual, space, which exhibits incremental changes after each step, rather than producing the desired image from scratch after each text manipulation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_38",
"text": " While we have demonstrated semantic control by changing only textual prompts, our technique is still subject to a few limitations to be addressed in follow-up work. First, the current inversion process results in a visible distortion over some of the test images. In addition, the inversion requires the user to come up with a suitable prompt. This could be challenging for complicated compositions. Note that the challenge of inversion for text-guided diffusion models is an orthogonal endeavor to our work, which will be thoroughly studied in the future. Second, the current attention maps are of low resolution, as the cross-attention is placed in the network’s bottleneck. This bounds our ability to perform even more precise localized editing. To alleviate this, we suggest incorporating cross-attention also in higher-resolution layers. We leave this for future works since it requires analyzing the training procedure which is out of our current scope. Finally, we recognize that our current method cannot be used to spatially move existing objects across the image and also leave this kind of control for future work. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_39",
"text": " We thank Noa Glaser, Adi Zicher, Yaron Brodsky and Shlomi Fruchter for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained models of Imagen . Special thanks to Yossi Matias for early inspiring discussion on the problem and for motivating and encouraging us to develop technologies along the avenue of intuitive interaction. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
}
] |
What are the problems associated with class imbalance for single stage detectors ?
|
The problems associated with class imbalance for single stage detectors are: 1) Training is inefficient as most locations are easy negatives that contribute no useful learning signal [11].
|
[
11
] |
[
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground classes or as background using a convolutional neural network. Through a sequence of advances (10, 28, 20, 14), this two-stage framework consistently achieves top accuracy on the challenging COCO benchmark . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_1",
"text": " Despite the success of two-stage detectors, a natural question to ask is: could a simple one-stage detector achieve similar accuracy? One stage detectors are applied over a regular, dense sampling of object locations, scales, and aspect ratios. Recent work on one-stage detectors, such as YOLO (26, 27) and SSD (22, 9), demonstrates promising results, yielding faster detectors with accuracy within 10-40% relative to state-of-the-art two-stage methods. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_2",
"text": " This paper pushes the envelop further: we present a one-stage object detector that, for the first time, matches the state-of-the-art COCO AP of more complex two-stage detectors, such as the Feature Pyramid Network (FPN) or Mask R-CNN variants of Faster R-CNN . To achieve this result, we identify class imbalance during training as the main obstacle impeding one-stage detector from achieving state-of-the-art accuracy and propose a new loss function that eliminates this barrier. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_3",
"text": " Class imbalance is addressed in R-CNN-like detectors by a two-stage cascade and sampling heuristics. The proposal stage (e.g., Selective Search , EdgeBoxes , DeepMask (24, 25), RPN ) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio (1:3), or online hard example mining (OHEM) , are performed to maintain a manageable balance between foreground and background. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_4",
"text": " In contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. In practice this often amounts to enumerating ∼similar-to\\scriptstyle\\sim100k locations that densely cover spatial positions, scales, and aspect ratios. While similar sampling heuristics may also be applied, they are inefficient as the training procedure is still dominated by easily classified background examples. This inefficiency is a classic problem in object detection that is typically addressed via techniques such as bootstrapping (33, 29) or hard example mining (37, 8, 31). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_5",
"text": " In this paper, we propose a new loss function that acts as a more effective alternative to previous approaches for dealing with class imbalance. The loss function is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases, see Figure 1. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. Experiments show that our proposed Focal Loss enables us to train a high-accuracy, one-stage detector that significantly outperforms the alternatives of training with the sampling heuristics or hard example mining, the previous state-of-the-art techniques for training one-stage detectors. Finally, we note that the exact form of the focal loss is not crucial, and we show other instantiations can achieve similar results. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_6",
"text": " To demonstrate the effectiveness of the proposed focal loss, we design a simple one-stage object detector called RetinaNet, named for its dense sampling of object locations in an input image. Its design features an efficient in-network feature pyramid and use of anchor boxes. It draws on a variety of recent ideas from (22, 6, 28, 20). RetinaNet is efficient and accurate; our best model, based on a ResNet-101-FPN backbone, achieves a COCO test-dev AP of 39.1 while running at 5 fps, surpassing the previously best published single-model results from both one and two-stage detectors, see Figure 2. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_7",
"text": " The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al. who applied convolutional neural networks to handwritten digit recognition (19, 36). Viola and Jones used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG and integral channel features gave rise to effective methods for pedestrian detection. DPMs helped extend dense detectors to more general object categories and had top results on PASCAL for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning , two-stage detectors, described next, quickly came to dominate object detection. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_8",
"text": " The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN upgraded the second-stage classifier to a convolutional network yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed (15, 10) and by using learned object proposals (6, 24, 28). Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolution network, forming the Faster R-CNN framework . Numerous extensions to this framework have been proposed, e.g. (20, 31, 32, 16, 14). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_9",
"text": " OverFeat was one of the first modern one-stage object detector based on deep networks. More recently SSD (22, 9) and YOLO (26, 27) have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. See Figure 2. Recent work showed that two-stage detectors can be made fast simply by reducing input image resolution and the number of proposals, but one-stage methods trailed in accuracy even with a larger compute budget . In contrast, the aim of this work is to understand if one-stage detectors can match or surpass the accuracy of two-stage detectors while running at similar or faster speeds. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_10",
"text": " The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of ‘anchors’ introduced by RPN and use of features pyramids as in SSD and FPN . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_11",
"text": " Both classic one-stage object detection methods, like boosted detectors (37, 5) and DPMs , and more recent methods, like SSD , face a large class imbalance during training. These detectors evaluate 104superscript10410^{4}-105superscript10510^{5} candidate locations per image but only a few locations contain objects. This imbalance causes two problems: (1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models. A common solution is to perform some form of hard negative mining (33, 37, 8, 31, 22) that samples hard examples during training or more complex sampling/reweighing schemes . In contrast, we show that our proposed focal loss naturally handles the class imbalance faced by a one-stage detector and allows us to efficiently train on all examples without sampling and without easy negatives overwhelming the loss and computed gradients. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_12",
"text": " There has been much interest in designing robust loss functions (e.g., Huber loss ) that reduce the contribution of outliers by down-weighting the loss of examples with large errors (hard examples). In contrast, rather than addressing outliers, our focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. In other words, the focal loss performs the opposite role of a robust loss: it focuses training on a sparse set of hard examples. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_13",
"text": " The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). We introduce the focal loss starting from the cross entropy (CE) loss for binary classification111Extending the focal loss to the multi-class case is straightforward and works well; for simplicity we focus on the binary loss in this work.: CE(p,y)={−log(p)if y=1−log(1−p)otherwise.CE𝑝𝑦cases𝑝if y=11𝑝otherwise.\\textrm{CE}(p,y)=\\begin{cases}-\\log(p)&\\text{if $y=1$}\\\\ -\\log(1-p)&\\text{otherwise.}\\end{cases} (1) In the above y∈{±1}𝑦plus-or-minus1y\\in\\{\\pm 1\\} specifies the ground-truth class and p∈(0,1)𝑝01p\\in(0,1) is the model’s estimated probability for the class with label y=1𝑦1y=1. For notational convenience, we define ptsubscript𝑝tp_{\\textrm{t}}: pt={pif y=11−potherwise,subscript𝑝tcases𝑝if y=11𝑝otherwise,p_{\\textrm{t}}=\\begin{cases}p&\\text{if $y=1$}\\\\ 1-p&\\text{otherwise,}\\end{cases} (2) and rewrite CE(p,y)=CE(pt)=−log(pt)CE𝑝𝑦CEsubscript𝑝tsubscript𝑝t\\textrm{CE}(p,y)=\\textrm{CE}(p_{\\textrm{t}})=-\\log(p_{\\textrm{t}}). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_14",
"text": " The CE loss can be seen as the blue (top) curve in Figure 1. One notable property of this loss, which can be easily seen in its plot, is that even examples that are easily classified (pt≫.5much-greater-thansubscript𝑝t.5p_{\\textrm{t}}\\gg.5) incur a loss with non-trivial magnitude. When summed over a large number of easy examples, these small loss values can overwhelm the rare class. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_15",
"text": " A common method for addressing class imbalance is to introduce a weighting factor α∈(0,1)𝛼01\\alpha\\in(0,1) for class 111 and 1−α1𝛼1-\\alpha for class −11-1. In practice α𝛼\\alpha may be set by inverse class frequency or treated as a hyperparameter to set by cross validation. For notational convenience, we define αtsubscript𝛼t\\alpha_{\\textrm{t}} analogously to how we defined ptsubscript𝑝tp_{\\textrm{t}}. We write the α𝛼\\alpha-balanced CE loss as: CE(pt)=−αtlog(pt).CEsubscript𝑝tsubscript𝛼tsubscript𝑝t\\textrm{CE}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}\\log(p_{\\textrm{t}}). (3) This loss is a simple extension to CE that we consider as an experimental baseline for our proposed focal loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_16",
"text": " As our experiments will show, the large class imbalance encountered during training of dense detectors overwhelms the cross entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While α𝛼\\alpha balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus focus training on hard negatives. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_17",
"text": " More formally, we propose to add a modulating factor (1−pt)γsuperscript1subscript𝑝t𝛾(1-p_{\\textrm{t}})^{\\gamma} to the cross entropy loss, with tunable focusing parameter γ≥0𝛾0\\gamma\\geq 0. We define the focal loss as: FL(pt)=−(1−pt)γlog(pt).FLsubscript𝑝tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (4) ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_18",
"text": " The focal loss is visualized for several values of γ∈(0,5)𝛾05\\gamma\\in(0,5) in Figure 1. We note two properties of the focal loss. (1) When an example is misclassified and ptsubscript𝑝tp_{\\textrm{t}} is small, the modulating factor is near 111 and the loss is unaffected. As pt→1→subscript𝑝t1p_{\\textrm{t}}\\rightarrow 1, the factor goes to 0 and the loss for well-classified examples is down-weighted. (2) The focusing parameter γ𝛾\\gamma smoothly adjusts the rate at which easy examples are down-weighted. When γ=0𝛾0\\gamma=0, FL is equivalent to CE, and as γ𝛾\\gamma is increased the effect of the modulating factor is likewise increased (we found γ=2𝛾2\\gamma=2 to work best in our experiments). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_19",
"text": " Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss. For instance, with γ=2𝛾2\\gamma=2, an example classified with pt=0.9subscript𝑝t0.9p_{\\textrm{t}}=0.9 would have 100×100\\times lower loss compared with CE and with pt≈0.968subscript𝑝t0.968p_{\\textrm{t}}\\approx 0.968 it would have 1000×1000\\times lower loss. This in turn increases the importance of correcting misclassified examples (whose loss is scaled down by at most 4×4\\times for pt≤.5subscript𝑝t.5p_{\\textrm{t}}\\leq.5 and γ=2𝛾2\\gamma=2). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_20",
"text": " In practice we use an α𝛼\\alpha-balanced variant of the focal loss: FL(pt)=−αt(1−pt)γlog(pt).FLsubscript𝑝tsubscript𝛼tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (5) We adopt this form in our experiments as it yields slightly improved accuracy over the non-α𝛼\\alpha-balanced form. Finally, we note that the implementation of the loss layer combines the sigmoid operation for computing p𝑝p with the loss computation, resulting in greater numerical stability. ",
"title": "Focal Loss for Dense Object Detection"
},
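A minimal NumPy sketch of Eq. (5), operating on probabilities for clarity; the paper notes that its implementation fuses the sigmoid with the loss for numerical stability, which this sketch does not do:

    import numpy as np

    def focal_loss(p, y, gamma=2.0, alpha=0.25):
        # Per-example alpha-balanced focal loss for binary labels y in {+1, -1}.
        p_t = np.where(y == 1, p, 1.0 - p)
        alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
        return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

    # Sanity check of the down-weighting quoted in the text: with gamma = 2, an easy
    # example with p_t = 0.9 is modulated by (1 - 0.9)**2 = 0.01 (100x lower than CE),
    # and p_t ~ 0.968 gives roughly a 1000x reduction.
    print((1 - 0.9) ** 2, (1 - 0.968) ** 2)   # -> 0.01, ~0.001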
{
"id": "1708.02002_all_21",
"text": " While in our main experimental results we use the focal loss definition above, its precise form is not crucial. In the appendix we consider other instantiations of the focal loss and demonstrate that these can be equally effective. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_22",
"text": " Binary classification models are by default initialized to have equal probability of outputting either y=−1𝑦1y=-1 or 111. Under such an initialization, in the presence of class imbalance, the loss due to the frequent class can dominate total loss and cause instability in early training. To counter this, we introduce the concept of a ‘prior’ for the value of p𝑝p estimated by the model for the rare class (foreground) at the start of training. We denote the prior by π𝜋\\pi and set it so that the model’s estimated p𝑝p for examples of the rare class is low, e.g. 0.010.010.01. We note that this is a change in model initialization (see §4.1) and not of the loss function. We found this to improve training stability for both the cross entropy and focal loss in the case of heavy class imbalance. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_23",
"text": " Two-stage detectors are often trained with the cross entropy loss without use of α𝛼\\alpha-balancing or our proposed loss. Instead, they address class imbalance through two mechanisms: (1) a two-stage cascade and (2) biased minibatch sampling. The first cascade stage is an object proposal mechanism (35, 24, 28) that reduces the nearly infinite set of possible object locations down to one or two thousand. Importantly, the selected proposals are not random, but are likely to correspond to true object locations, which removes the vast majority of easy negatives. When training the second stage, biased sampling is typically used to construct minibatches that contain, for instance, a 1:3 ratio of positive to negative examples. This ratio is like an implicit α𝛼\\alpha-balancing factor that is implemented via sampling. Our proposed focal loss is designed to address these mechanisms in a one-stage detection system directly via the loss function. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_24",
"text": " RetinaNet is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone’s output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that we propose specifically for one-stage, dense detection, see Figure 3. While there are many possible choices for the details of these components, most design parameters are not particularly sensitive to exact values as shown in the experiments. We describe each component of RetinaNet next. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_25",
"text": " We adopt the Feature Pyramid Network (FPN) from as the backbone network for RetinaNet. In brief, FPN augments a standard convolutional network with a top-down pathway and lateral connections so the network efficiently constructs a rich, multi-scale feature pyramid from a single resolution input image, see Figure 3(a)-(b). Each level of the pyramid can be used for detecting objects at a different scale. FPN improves multi-scale predictions from fully convolutional networks (FCN) , as shown by its gains for RPN and DeepMask-style proposals , as well at two-stage detectors such as Fast R-CNN or Mask R-CNN . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_26",
"text": " Following , we build FPN on top of the ResNet architecture . We construct a pyramid with levels P3subscript𝑃3P_{3} through P7subscript𝑃7P_{7}, where l𝑙l indicates pyramid level (Plsubscript𝑃𝑙P_{l} has resolution 2lsuperscript2𝑙2^{l} lower than the input). As in all pyramid levels have C=256𝐶256C=256 channels. Details of the pyramid generally follow with a few modest differences.222RetinaNet uses feature pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, where P3subscript𝑃3P_{3} to P5subscript𝑃5P_{5} are computed from the output of the corresponding ResNet residual stage (C3subscript𝐶3C_{3} through C5subscript𝐶5C_{5}) using top-down and lateral connections just as in , P6subscript𝑃6P_{6} is obtained via a 3×\\times3 stride-2 conv on C5subscript𝐶5C_{5}, and P7subscript𝑃7P_{7} is computed by applying ReLU followed by a 3×\\times3 stride-2 conv on P6subscript𝑃6P_{6}. This differs slightly from : (1) we don’t use the high-resolution pyramid level P2subscript𝑃2P_{2} for computational reasons, (2) P6subscript𝑃6P_{6} is computed by strided convolution instead of downsampling, and (3) we include P7subscript𝑃7P_{7} to improve large object detection. These minor modifications improve speed while maintaining accuracy. While many design choices are not crucial, we emphasize the use of the FPN backbone is; preliminary experiments using features from only the final ResNet layer yielded low AP. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_27",
"text": " We use translation-invariant anchor boxes similar to those in the RPN variant in . The anchors have areas of 322superscript32232^{2} to 5122superscript5122512^{2} on pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, respectively. As in , at each pyramid level we use anchors at three aspect ratios {1\\{1:2,22, 111:111, 222:1}1\\}. For denser scale coverage than in , at each level we add anchors of sizes {20superscript202^{0}, 21/3superscript2132^{1/3}, 22/3superscript2232^{2/3}} of the original set of 3 aspect ratio anchors. This improve AP in our setting. In total there are A=9𝐴9A=9 anchors per level and across levels they cover the scale range 32 - 813 pixels with respect to the network’s input image. ",
"title": "Focal Loss for Dense Object Detection"
},
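A small, purely illustrative check (not the reference implementation) that the anchor configuration described above spans roughly 32 to 813 pixels, assuming base sizes of 32·2^l for the five levels P3-P7 and the three octave-fraction scales:

    base_sizes = [32 * 2 ** l for l in range(5)]        # 32, 64, 128, 256, 512 (P3-P7)
    scales = [2 ** (k / 3) for k in range(3)]           # 2**0, 2**(1/3), 2**(2/3)
    anchor_sizes = sorted(b * s for b in base_sizes for s in scales)
    print(anchor_sizes[0], round(anchor_sizes[-1]))     # -> 32 813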
{
"id": "1708.02002_all_28",
"text": " Each anchor is assigned a length K𝐾K one-hot vector of classification targets, where K𝐾K is the number of object classes, and a 4-vector of box regression targets. We use the assignment rule from RPN but modified for multi-class detection and with adjusted thresholds. Specifically, anchors are assigned to ground-truth object boxes using an intersection-over-union (IoU) threshold of 0.5; and to background if their IoU is in (0, 0.4). As each anchor is assigned to at most one object box, we set the corresponding entry in its length K𝐾K label vector to 111 and all other entries to 00. If an anchor is unassigned, which may happen with overlap in (0.4, 0.5), it is ignored during training. Box regression targets are computed as the offset between each anchor and its assigned object box, or omitted if there is no assignment. ",
"title": "Focal Loss for Dense Object Detection"
},
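A minimal sketch of the assignment rule above (function and label names are illustrative), taking an anchor's best IoU against all ground-truth boxes:

    def assign_anchor(best_iou, fg_thresh=0.5, bg_thresh=0.4):
        # Foreground at IoU >= 0.5, background below 0.4, ignored in between.
        if best_iou >= fg_thresh:
            return "foreground"
        if best_iou < bg_thresh:
            return "background"
        return "ignore"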
{
"id": "1708.02002_all_29",
"text": " The classification subnet predicts the probability of object presence at each spatial position for each of the A𝐴A anchors and K𝐾K object classes. This subnet is a small FCN attached to each FPN level; parameters of this subnet are shared across all pyramid levels. Its design is simple. Taking an input feature map with C𝐶C channels from a given pyramid level, the subnet applies four 3×\\times3 conv layers, each with C𝐶C filters and each followed by ReLU activations, followed by a 3×\\times3 conv layer with KA𝐾𝐴KA filters. Finally sigmoid activations are attached to output the KA𝐾𝐴KA binary predictions per spatial location, see Figure 3 (c). We use C=256𝐶256C=256 and A=9𝐴9A=9 in most experiments. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_30",
"text": " In contrast to RPN , our object classification subnet is deeper, uses only 3×\\times3 convs, and does not share parameters with the box regression subnet (described next). We found these higher-level design decisions to be more important than specific values of hyperparameters. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_31",
"text": " In parallel with the object classification subnet, we attach another small FCN to each pyramid level for the purpose of regressing the offset from each anchor box to a nearby ground-truth object, if one exists. The design of the box regression subnet is identical to the classification subnet except that it terminates in 4A4𝐴4A linear outputs per spatial location, see Figure 3 (d). For each of the A𝐴A anchors per spatial location, these 444 outputs predict the relative offset between the anchor and the ground-truth box (we use the standard box parameterization from R-CNN ). We note that unlike most recent work, we use a class-agnostic bounding box regressor which uses fewer parameters and we found to be equally effective. The object classification subnet and the box regression subnet, though sharing a common structure, use separate parameters. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_32",
"text": " RetinaNet forms a single FCN comprised of a ResNet-FPN backbone, a classification subnet, and a box regression subnet, see Figure 3. As such, inference involves simply forwarding an image through the network. To improve speed, we only decode box predictions from at most 1k top-scoring predictions per FPN level, after thresholding detector confidence at 0.05. The top predictions from all levels are merged and non-maximum suppression with a threshold of 0.5 is applied to yield the final detections. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_33",
"text": " We use the focal loss introduced in this work as the loss on the output of the classification subnet. As we will show in §5, we find that γ=2𝛾2\\gamma=2 works well in practice and the RetinaNet is relatively robust to γ∈(0.5,5)𝛾0.55\\gamma\\in(0.5,5). We emphasize that when training RetinaNet, the focal loss is applied to all ∼similar-to\\scriptstyle\\sim100k anchors in each sampled image. This stands in contrast to common practice of using heuristic sampling (RPN) or hard example mining (OHEM, SSD) to select a small set of anchors (e.g., 256) for each minibatch. The total focal loss of an image is computed as the sum of the focal loss over all ∼similar-to\\scriptstyle\\sim100k anchors, normalized by the number of anchors assigned to a ground-truth box. We perform the normalization by the number of assigned anchors, not total anchors, since the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss. Finally we note that α𝛼\\alpha, the weight assigned to the rare class, also has a stable range, but it interacts with γ𝛾\\gamma making it necessary to select the two together (see Tables 1a and 1b). In general α𝛼\\alpha should be decreased slightly as γ𝛾\\gamma is increased (for γ=2𝛾2\\gamma=2, α=0.25𝛼0.25\\alpha=0.25 works best). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_34",
"text": " We experiment with ResNet-50-FPN and ResNet-101-FPN backbones . The base ResNet-50 and ResNet-101 models are pre-trained on ImageNet1k; we use the models released by . New layers added for FPN are initialized as in . All new conv layers except the final one in the RetinaNet subnets are initialized with bias b=0𝑏0b=0 and a Gaussian weight fill with σ=0.01𝜎0.01\\sigma=0.01. For the final conv layer of the classification subnet, we set the bias initialization to b=−log((1−π)/π)𝑏1𝜋𝜋b=-\\log((1-\\pi)/\\pi), where π𝜋\\pi specifies that at the start of training every anchor should be labeled as foreground with confidence of ∼similar-to\\scriptstyle\\simπ𝜋\\pi. We use π=.01𝜋.01\\pi=.01 in all experiments, although results are robust to the exact value. As explained in §3.3, this initialization prevents the large number of background anchors from generating a large, destabilizing loss value in the first iteration of training. ",
"title": "Focal Loss for Dense Object Detection"
},
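A tiny worked example of the prior-based bias initialization above, showing that b = -log((1 - pi)/pi) makes the initial sigmoid output equal to the prior pi:

    import math

    pi = 0.01
    b = -math.log((1 - pi) / pi)
    print(b)                           # about -4.595
    print(1 / (1 + math.exp(-b)))      # 0.01, i.e. sigmoid(b) recovers the prior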
{
"id": "1708.02002_all_35",
"text": " RetinaNet is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 16 images per minibatch (2 images per GPU). Unless otherwise specified, all models are trained for 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and again at 80k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. Weight decay of 0.0001 and momentum of 0.9 are used. The training loss is the sum the focal loss and the standard smooth L1subscript𝐿1L_{1} loss used for box regression . Training time ranges between 10 and 35 hours for the models in Table 1e. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_36",
"text": " We present experimental results on the bounding box detection track of the challenging COCO benchmark . For training, we follow common practice (1, 20) and use the COCO trainval35k split (union of 80k images from train and a random 35k subset of images from the 40k image val split). We report lesion and sensitivity studies by evaluating on the minival split (the remaining 5k images from val). For our main results, we report COCO AP on the test-dev split, which has no public labels and requires use of the evaluation server. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_37",
"text": " We run numerous experiments to analyze the behavior of the loss function for dense detection along with various optimization strategies. For all experiments we use depth 50 or 101 ResNets with a Feature Pyramid Network (FPN) constructed on top. For all ablation studies we use an image scale of 600 pixels for training and testing. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_38",
"text": " Our first attempt to train RetinaNet uses standard cross entropy (CE) loss without any modifications to the initialization or learning strategy. This fails quickly, with the network diverging during training. However, simply initializing the last layer of our model such that the prior probability of detecting an object is π=.01𝜋.01\\pi=.01 (see §4.1) enables effective learning. Training RetinaNet with ResNet-50 and this initialization already yields a respectable AP of 30.2 on COCO. Results are insensitive to the exact value of π𝜋\\pi so we use π=.01𝜋.01\\pi=.01 for all experiments. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_39",
"text": " Our next attempt to improve learning involved using the α𝛼\\alpha-balanced CE loss described in §3.1. Results for various α𝛼\\alpha are shown in Table 1a. Setting α=.75𝛼.75\\alpha=.75 gives a gain of 0.9 points AP. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_40",
"text": " Results using our proposed focal loss are shown in Table 1b. The focal loss introduces one new hyperparameter, the focusing parameter γ𝛾\\gamma, that controls the strength of the modulating term. When γ=0𝛾0\\gamma=0, our loss is equivalent to the CE loss. As γ𝛾\\gamma increases, the shape of the loss changes so that “easy” examples with low loss get further discounted, see Figure 1. FL shows large gains over CE as γ𝛾\\gamma is increased. With γ=2𝛾2\\gamma=2, FL yields a 2.9 AP improvement over the α𝛼\\alpha-balanced CE loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_41",
"text": " For the experiments in Table 1b, for a fair comparison we find the best α𝛼\\alpha for each γ𝛾\\gamma. We observe that lower α𝛼\\alpha’s are selected for higher γ𝛾\\gamma’s (as easy negatives are down-weighted, less emphasis needs to be placed on the positives). Overall, however, the benefit of changing γ𝛾\\gamma is much larger, and indeed the best α𝛼\\alpha’s ranged in just (.25,.75) (we tested α∈(.01,.999)𝛼.01.999\\alpha\\in(.01,.999)). We use γ=2.0𝛾2.0\\gamma=2.0 with α=.25𝛼.25\\alpha=.25 for all experiments but α=.5𝛼.5\\alpha=.5 works nearly as well (.4 AP lower). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_42",
"text": " To understand the focal loss better, we analyze the empirical distribution of the loss of a converged model. For this, we take take our default ResNet-101 600-pixel model trained with γ=2𝛾2\\gamma=2 (which has 36.0 AP). We apply this model to a large number of random images and sample the predicted probability for ∼similar-to\\scriptstyle\\sim107superscript10710^{7} negative windows and ∼similar-to\\scriptstyle\\sim105superscript10510^{5} positive windows. Next, separately for positives and negatives, we compute FL for these samples, and normalize the loss such that it sums to one. Given the normalized loss, we can sort the loss from lowest to highest and plot its cumulative distribution function (CDF) for both positive and negative samples and for different settings for γ𝛾\\gamma (even though model was trained with γ=2𝛾2\\gamma=2). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_43",
"text": " Cumulative distribution functions for positive and negative samples are shown in Figure 4. If we observe the positive samples, we see that the CDF looks fairly similar for different values of γ𝛾\\gamma. For example, approximately 20% of the hardest positive samples account for roughly half of the positive loss, as γ𝛾\\gamma increases more of the loss gets concentrated in the top 20% of examples, but the effect is minor. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_44",
"text": " The effect of γ𝛾\\gamma on negative samples is dramatically different. For γ=0𝛾0\\gamma=0, the positive and negative CDFs are quite similar. However, as γ𝛾\\gamma increases, substantially more weight becomes concentrated on the hard negative examples. In fact, with γ=2𝛾2\\gamma=2 (our default setting), the vast majority of the loss comes from a small fraction of samples. As can be seen, FL can effectively discount the effect of easy negatives, focusing all attention on the hard negative examples. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_45",
"text": " proposed to improve training of two-stage detectors by constructing minibatches using high-loss examples. Specifically, in OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples. The nms threshold and batch size are tunable parameters. Like the focal loss, OHEM puts more emphasis on misclassified examples, but unlike FL, OHEM completely discards easy examples. We also implement a variant of OHEM used in SSD : after applying nms to all examples, the minibatch is constructed to enforce a 1:3 ratio between positives and negatives to help ensure each minibatch has enough positives. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_46",
"text": " We test both OHEM variants in our setting of one-stage detection which has large class imbalance. Results for the original OHEM strategy and the ‘OHEM 1:3’ strategy for selected batch sizes and nms thresholds are shown in Table 1d. These results use ResNet-101, our baseline trained with FL achieves 36.0 AP for this setting. In contrast, the best setting for OHEM (no 1:3 ratio, batch size 128, nms of .5) achieves 32.8 AP. This is a gap of 3.2 AP, showing FL is more effective than OHEM for training dense detectors. We note that we tried other parameter setting and variants for OHEM but did not achieve better results. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_47",
"text": " Finally, in early experiments, we attempted to train with the hinge loss on ptsubscript𝑝tp_{\\textrm{t}}, which sets loss to 0 above a certain value of ptsubscript𝑝tp_{\\textrm{t}}. However, this was unstable and we did not manage to obtain meaningful results. Results exploring alternate loss functions are in the appendix. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_48",
"text": " One of the most important design factors in a one-stage detection system is how densely it covers the space of possible image boxes. Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation . In contrast, as one-stage detectors use a fixed sampling grid, a popular approach for achieving high coverage of boxes in these approaches is to use multiple ‘anchors’ at each spatial position to cover boxes of various scales and aspect ratios. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_49",
"text": " We sweep over the number of scale and aspect ratio anchors used at each spatial position and each pyramid level in FPN. We consider cases from a single square anchor at each location to 12 anchors per location spanning 4 sub-octave scales (2k/4superscript2𝑘42^{k/4}, for k≤3𝑘3k\\leq 3) and 3 aspect ratios (0.5, 1, 2). Results using ResNet-50 are shown in Table 1c. A surprisingly good AP (30.3) is achieved using just one square anchor. However, the AP can be improved by nearly 4 points (to 34.0) when using 3 scales and 3 aspect ratios per location. We used this setting for all other experiments in this work. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_50",
"text": " Finally, we note that increasing beyond 6-9 anchors did not shown further gains. Thus while two-stage systems can classify arbitrary boxes in an image, the saturation of performance w.r.t. density implies the higher potential density of two-stage systems may not offer an advantage. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_51",
"text": " Larger backbone networks yield higher accuracy, but also slower inference speeds. Likewise for input image scale (defined by the shorter image side). We show the impact of these two factors in Table 1e. In Figure 2 we plot the speed/accuracy trade-off curve for RetinaNet and compare it to recent methods using public numbers on COCO test-dev. The plot reveals that RetinaNet, enabled by our focal loss, forms an upper envelope over all existing methods, discounting the low-accuracy regime. RetinaNet with ResNet-101-FPN and a 600 pixel image scale (which we denote by RetinaNet-101-600 for simplicity) matches the accuracy of the recently published ResNet-101-FPN Faster R-CNN , while running in 122 ms per image compared to 172 ms (both measured on an Nvidia M40 GPU). Using larger scales allows RetinaNet to surpass the accuracy of all two-stage approaches, while still being faster. For faster runtimes, there is only one operating point (500 pixel input) at which using ResNet-50-FPN improves over ResNet-101-FPN. Addressing the high frame rate regime will likely require special network design, as in , and is beyond the scope of this work. We note that after publication, faster and more accurate results can now be obtained by a variant of Faster R-CNN from . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_52",
"text": " We evaluate RetinaNet on the challenging COCO dataset and compare test-dev results to recent state-of-the-art methods including both one-stage and two-stage models. Results are presented in Table 2 for our RetinaNet-101-800 model trained using scale jitter and for 1.5×\\times longer than the models in Table 1e (giving a 1.3 AP gain). Compared to existing one-stage methods, our approach achieves a healthy 5.9 point AP gap (39.1 vs. 33.2) with the closest competitor, DSSD , while also being faster, see Figure 2. Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM . Plugging in ResNeXt-32x8d-101-FPN as the RetinaNet backbone further improves results another 1.7 AP, surpassing 40 AP on COCO. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_53",
"text": " In this work, we identify class imbalance as the primary obstacle preventing one-stage object detectors from surpassing top-performing, two-stage methods. To address this, we propose the focal loss which applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. Our approach is simple and highly effective. We demonstrate its efficacy by designing a fully convolutional one-stage detector and report extensive experimental analysis showing that it achieves state-of-the-art accuracy and speed. Source code is available at https://github.com/facebookresearch/Detectron . ",
"title": "Focal Loss for Dense Object Detection"
}
] |
What are two kinds of pretrained language models?
|
They are monolingual and multilingual [0].
|
[
0
] |
[
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monolingual pretrained models are generally applied to tasks in the same language, whereas multilingual ones are used for cross-lingual tasks or transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_1",
"text": " Recent work has claimed that monolingual pretrained models are also surprisingly good at transferring between languages, despite ostensibly having never seen the target language before (Gogoulou et al., 2021; Li et al., 2021, inter alia). However, because of the large scale of pretraining data and because many pretraining corpora are not publicly available, it is currently unknown how much foreign language data exists in monolingual pretraining corpora. In this paper, we show that (1) these data are almost certainly contaminated with very small percentages of text from other languages and that (2) cross-lingual transfer is possible from such data leakage in the pretraining corpus. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_2",
"text": " More specifically, we quantify how multilingual English pretrained models are in two steps. First, we analyze common English pretraining corpora with a large-scale automatic evaluation to estimate their language composition, as well as a smaller-scale manual analysis. Second, we perform experiments across fifty languages on masked language modeling and part-of-speech (POS) tagging to measure how well the models trained on these pretraining corpora perform outside of English. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_3",
"text": " Our analysis finds that these corpora include very small percentages that amount to overall significant amounts of non-English text (Figure 1), particularly those derived from web-crawled data. Furthermore, the models trained on this data perform surprisingly well on other languages; this transfer is strongly correlated with the amount of target language data seen during pretraining. Notably, we find that the English T5 outperforms mBERT on POS tagging in multiple languages with no finetuning. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_4",
"text": " Overall, these results indicate that the considered models are actually multilingual and that their ability to transfer across languages is not zero-shot, despite what has been recently claimed. Given the effort required to fully remove all non-English data, we question whether it is practically possible to train truly monolingual models at scale. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_5",
"text": " We first measure how much non-English text exists in commonly used English pretraining corpora with two analyses: an automatic language identification to estimate the amount of foreign language data in these corpora, and a manual qualitative analysis of the text classified as non-English. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_6",
"text": " We consider the following pretraining datasets: English Wikipedia (11.8GB); BookCorpus (Zhu et al. 2015, 4.2GB); Stories (Trinh and Le 2018, 31GB); OpenWebText (Gokaslan and Cohen 2019, 38GB), which is an open-source version of WebText Radford et al. (2019); CC-NEWS (Liu et al. 2019, 76 GB); and C4.En (Raffel et al. 2020, 305GB), as provided by Dodge et al. (2021). We use the versions of Wikipedia, BookCorpus, and CC-NEWS used to pretrain RoBERTa. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_7",
"text": " We use the FastText language identification model Joulin et al. (2017) to label every line in each corpus and keep lines as non-English if they score above a set confidence threshold (0.6). Due to the large size of C4, we subsample the first 50M examples (or 14%); we classify the entirety of all other datasets. Since language detection is imperfect, particularly for low-resource languages Caswell et al. (2021), we present the results of this analysis as an estimate of the non-English data in each dataset and perform a qualitative analysis of potential errors in the following section. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
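A hedged sketch of the per-line language screening described above. The 0.6 confidence threshold comes from the text; the model file name ("lid.176.bin", the publicly released fastText LID model) and the exact preprocessing are assumptions rather than the paper's pipeline:

    import fasttext

    model = fasttext.load_model("lid.176.bin")

    def non_english_lines(lines, threshold=0.6):
        # Yield (language, confidence, line) for lines confidently predicted non-English.
        for line in lines:
            text = line.strip()
            if not text:
                continue
            labels, probs = model.predict(text)
            lang, conf = labels[0].replace("__label__", ""), float(probs[0])
            if lang != "en" and conf >= threshold:
                yield lang, conf, line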
{
"id": "2204.08110_all_8",
"text": " A summary of the language identification experiments is presented in Figure 1.111Full results of this evaluation are detailed in Appendix C. We see that every corpus contains notable quantities of non-English data, with our estimates ranging between 300k to 406M tokens. An obvious factor that affects the amount of non-English data in each corpus is the overall size of the dataset; however, even when controlling for size by looking at the percentage of non-English data, we still see that the smaller corpora (Wikipedia, BookCorpus, and Stories) have relatively less non-English data. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_9",
"text": " Indeed, a major factor of language leakage is the method in which the data was collected: the datasets derived from web crawls contain higher percentages of non-English text (OpenWebText andCCNews). This is true even for C4, where the dataset was filtered with a classifier to exclude non-English text Raffel et al. (2020). Since automatic methods for language identification are imperfect, the datasets with more manual filtering (such as Wikipedia, which has human editors curating its content) are less prone to non-English data than those relying on classifiers. Due to these challenges, it is likely impossible to fully remove non-English text from a web-crawled dataset at scale. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_10",
"text": " We also see that non-English text makes up small percentages of the overall data, though this still leads to millions of tokens in large datasets. The largest individual languages after English only make up 0.01%, 0.15%, and 0.05% of the BERT, RoBERTa, and T5 training data, respectively. Multilingual pretraining work has shown that models generalize to new languages from varying amounts of data Delvin (2019); Lample and Conneau (2019); Conneau et al. (2020); however, these approaches intentionally select data across languages, and most upsample low-resource languages during training. Without these considerations, it is an open question how well the models trained on these relatively small amounts of non-English data generalize. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_11",
"text": " We also perform a closer analysis on a random subset (200 per corpus) of non-English lines predicted by the language classifier (Table 2.1). Each example is manually coded into one of six categories. The first set covers various kinds of foreign language data: NE, where the line contains only non-English language text; BiL, or bilingual, where the line contains both English and non-English text; Trans., in which the English and non-English data that are translations of each other; and Ent., where the line is primarily English but contains non-English entities. The last two codes pertain to errors made by the language classifier: En., where the line only contains English text, and XX, which refers to lines that contain no natural language. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_12",
"text": " The majority of lines across datasets consist only of non-English text. The next most common type of non-English data is BiL; this contains many subtypes of data, such as codeswitching and foreign language dialogue within English text. These datasets also include parallel data at both the sentence- and word-level.222e.g., ”大学 【だい・がく】– college”, OpenWebText We note that all observed translations are between English and another language. Finally, some of the examples classified as non-English are actually English texts with non-English phrases. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_13",
"text": " Our analysis also shows that the language classifier performs worse on the non-web crawled data. For example, it misclassified a quarter of the sampled lines from Stories as non-English when they in fact only contain English text; many of these lines stem from snippets of dialogue in the dataset. We generally observe that lines coded as En tend to be shorter than the correctly labeled lines and often contain non-standard English. The language classifier also struggles to handle noisy lines, for which it has no appropriate language label. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_14",
"text": " We now ask: how well do models pretrained on these putatively English corpora perform on non-English tasks? While the English data is more multilingual than previously thought, there are many differences between monolingual and multilingual pretraining; non-English data are often tokenized into more subword units333For example, the Basque UD treebank requires on average 1.78, 2.59, and 2.66 tokens per word to be encoded by XLMR, RoBERTa, and BERT, respectively. and are much less frequently observed during monolingual training. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_15",
"text": " We evaluate popular English pretrained models on tasks in more than 50 languages: (masked) language modeling, POS probing, and finetuned POS tagging. We compare the performance of monolingual BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), and T5 Raffel et al. (2020) against multilingual mBERT Delvin (2019) and XLM-R Conneau et al. (2020). We report average performance across five runs with different random seeds for the POS evaluations. The full results and all languages can be found in Appendix D. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_16",
"text": " We first measure the perplexity of English pretrained MLMs in other languages. We use Wiki-40B, a multilingual language modeling dataset that covers 41 languages Guo et al. (2020). Following the Wiki-40B paper, we report bits per character (BPC) to allow comparison between models with different tokenizations of the text. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
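A minimal sketch of the bits-per-character conversion mentioned in the passage above, assuming a summed token-level negative log-likelihood in nats is already available (the usual output of an MLM loss); the numbers in the example are made up.

```python
import math

def bits_per_character(total_nll_nats: float, num_characters: int) -> float:
    # Convert the summed negative log-likelihood from nats to bits, then
    # normalize by character count so differently tokenized models compare.
    return total_nll_nats / (math.log(2) * num_characters)

# Hypothetical example: 15,000 nats of total loss over a 10,000-character text.
print(round(bits_per_character(15_000, 10_000), 3))  # -> 2.164
```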
{
"id": "2204.08110_all_17",
"text": " We find that both BERT models perform notably worse on modeling other languages; however, RoBERTa, reduces the gap with the multilingual models from 2.51 BPC to 0.87 BPC (Figure 2(a)). This finding is consistent with Tran (2020), who also found RoBERTa transfers well cross-lingually. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_18",
"text": " Next, we evaluate how well monolingual English models perform on non-English downstream tasks, using part-of-speech (POS) tagging as a case study. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_19",
"text": " We first consider the performance of the encoders when probed for POS knowledge (Figure 2(b)).444For T5, this means that we evaluate the output of the encoder and discard the decoder. Unsurprisingly, on average all of the English models underperform the multilingual models. Similar to MLM, we find that RoBERTa performs better than BERT when probed for POS features on other languages; surprisingly, it also strongly outperforms T5, despite C4 containing more absolute non-English data than the RoBERTa corpus. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_20",
"text": " This difference is likely due to two factors. First, in terms of relative percentages, RoBERTa is exposed to more non-English text than T5 (0.78% compared to only 0.22%). Secondly, RoBERTa’s subword vocabulary is robust to unexpected inputs and does not substitute an UNK token any input tokens; in contrast, T5 and BERT have high rates of UNK tokens for some non-Latin languages (Appendix B).555UNK tokens refer to placeholder tokens used when the model receives an input not covered by its vocabulary. However, for many high-resource languages the English models perform competitively, with T5 outperforming mBERT on German and Portuguese, among others. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_21",
"text": " To test if the effects of foreign language data carry through after finetuning, we also finetune a subset of the models (BERTbase, RoBERTabase, mBERT, XLMRbase) for non-English POS tagging (Figure 2(c)). After finetuning, the gap between the mono- and multilingual models is much smaller: RoBERTa only averages 2.65 points worse than XLM-R, compared to 12.5 points when probing. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_22",
"text": " We then investigate the correlation between potential transfer causes and model performance (Table 2). Specifically, we consider the quantity of target language data found in the model’s pretraining corpus and the language similarity to English as potential causes of cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_23",
"text": " We find that across tasks, RoBERTa task performance is most strongly correlated with the amount of target language data seen during pretraining. BERT and T5 task performance are less correlated with observed pretrained data, likely due to tokenization artifacts (Appendix B). Indeed, when we control for languages not written with Latin script on T5, the correlation between performance and the amount of target pretraining data increases to ρ=𝜌absent\\rho= 0.313. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
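The correlation analysis described in the two passages above can be sketched with a Spearman rank correlation; the per-language token counts and scores below are placeholders, not the paper's data.

```python
from scipy.stats import spearmanr

# Placeholder per-language statistics: pretraining tokens seen and task score.
tokens_seen = [1.2e6, 4.5e5, 9.8e6, 3.1e4, 7.7e5]
task_scores = [71.3, 55.0, 84.2, 40.1, 60.7]

rho, p_value = spearmanr(tokens_seen, task_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```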
{
"id": "2204.08110_all_24",
"text": " We also consider the effect of language similarity on task performance, which is often hypothesized to facilitate cross-lingual transfer. We use the syntactic distance of languages calculated by Malaviya et al. (2017); more similar languages score lower. However, we generally find that this is less correlated with performance than the quantity of target text, particularly for RoBERTa. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_25",
"text": " In this paper, we demonstrate that English pretrained models are exposed to a considerable amount of non-English data during pretraining, particularly in the case of more recent models that are trained on larger corpora derived from web crawls. We also find that this non-English text acts as a significant source of signal for cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_26",
"text": " Other recent work has focused on documenting the composition of pretraining corpora Dodge et al. (2021); Gururangan et al. (2022). Caswell et al. (2021) manually audit a variety of multilingual datasets, finding data quality issues that are worse for low-resource languages and, similarly to our work, that texts for many languages are misclassified. In contrast, our focus is on the presence of foreign language data in primarily English corpora. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_27",
"text": " Prior work has also shown the ability of monolingual models to transfer to other languages across a wide range of tasks Gogoulou et al. (2021); Li et al. (2021); Tran (2020); Artetxe et al. (2020); Chi et al. (2020), but these works do not consider the effect of foreign language data leakage as a source of signal. Notably, de Souza et al. (2021) mention the presence of foreign language data in their corpora but assume the small amounts observed will not affect model performance. However, our findings demonstrate that the amount of foreign language data directly correlates with cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_28",
"text": " An obvious follow-up to our findings would be to retrain the models with text that is verified to only contain English data; this would confirm the effect the leaked non-English data has on the models. We reiterate that the standard method for filtering these datasets, automatic language classifiers, is imperfect. This, and the infeasibility of manual filtering due to the scale of the data, means that controlling for the language the model is pretrained on is nearly impossible. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_29",
"text": " However, the presence of foreign language data in pretraining corpora is not inherently problematic. Models trained on these datasets perform exceedingly well on their target languages and generalize to other languages much better than expected. Rather, it is important to remember that these models are not performing zero-shot transfer when used in other languages, given the scale and data with which they were pretrained. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_30",
"text": " Our work has a number of limitations. First, we measure the quantities of non-English data using a language classifier. The amounts of foreign language data we report are estimates for each dataset, as the classifier likely misclassified some examples. We manually audit the types of mistakes made by the language classifier in Section 2. Additionally, we evaluate downstream performance via POS tagging, and it is possible that the models would exhibit different behavior on other NLP tasks. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_31",
"text": " We also only consider the effect of foreign language contamination for English pretrained models. It is unclear to what extent this phenomenon affects monolingual models for other languages; however, since many of the resources evaluated in this work are also used to pretrain non-English monolingual models (e.g., Wikipedia), similar effects would likely be observed. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
}
] |
Why is meta-learning better than transfer learning?
|
While transfer learning relies on parameters learned from prior tasks, meta-learning (as in MAML) does not introduce additional learned parameters [35].
|
[
35
] |
[
{
"id": "1703.03400_all_0",
"text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from only a few examples, and continuing to adapt as more data becomes available. This kind of fast and flexible learning is challenging, since the agent must integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data. Furthermore, the form of prior experience and new data will depend on the task. As such, for the greatest applicability, the mechanism for learning to learn (or meta-learning) should be general to the task and the form of computation required to complete the task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_1",
"text": " In this work, we propose a meta-learning algorithm that is general and model-agnostic, in the sense that it can be directly applied to any learning problem and model that is trained with a gradient descent procedure. Our focus is on deep neural network models, but we illustrate how our approach can easily handle different architectures and different problem settings, including classification, regression, and policy gradient reinforcement learning, with minimal modification. In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data, and the model is trained by the meta-learner to be able to learn on a large number of different tasks. The key idea underlying our method is to train the model’s initial parameters such that the model has maximal performance on a new task after the parameters have been updated through one or more gradient steps computed with a small amount of data from that new task. Unlike prior meta-learning methods that learn an update function or learning rule (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our algorithm does not expand the number of learned parameters nor place constraints on the model architecture (e.g. by requiring a recurrent model (Santoro et al., 2016) or a Siamese network (Koch, 2015)), and it can be readily combined with fully connected, convolutional, or recurrent neural networks. It can also be used with a variety of loss functions, including differentiable supervised losses and non-differentiable reinforcement learning objectives. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_2",
"text": " The process of training a model’s parameters such that a few gradient steps, or even a single gradient step, can produce good results on a new task can be viewed from a feature learning standpoint as building an internal representation that is broadly suitable for many tasks. If the internal representation is suitable to many tasks, simply fine-tuning the parameters slightly (e.g. by primarily modifying the top layer weights in a feedforward model) can produce good results. In effect, our procedure optimizes for models that are easy and fast to fine-tune, allowing the adaptation to happen in the right space for fast learning. From a dynamical systems standpoint, our learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters: when the sensitivity is high, small local changes to the parameters can lead to large improvements in the task loss. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_3",
"text": " The primary contribution of this work is a simple model- and task-agnostic algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task. We demonstrate the algorithm on different model types, including fully connected and convolutional networks, and in several distinct domains, including few-shot regression, image classification, and reinforcement learning. Our evaluation shows that our meta-learning algorithm compares favorably to state-of-the-art one-shot learning methods designed specifically for supervised classification, while using fewer parameters, but that it can also be readily applied to regression and can accelerate reinforcement learning in the presence of task variability, substantially outperforming direct pretraining as initialization. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_4",
"text": " We aim to train models that can achieve rapid adaptation, a problem setting that is often formalized as few-shot learning. In this section, we will define the problem setup and present the general form of our algorithm. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_5",
"text": " The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations. To accomplish this, the model or learner is trained during a meta-learning phase on a set of tasks, such that the trained model can quickly adapt to new tasks using only a small number of examples or trials. In effect, the meta-learning problem treats entire tasks as training examples. In this section, we formalize this meta-learning problem setting in a general manner, including brief examples of different learning domains. We will discuss two different learning domains in detail in Section 3. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_6",
"text": " We consider a model, denoted f𝑓f, that maps observations 𝐱𝐱\\mathbf{x} to outputs 𝐚𝐚\\mathbf{a}. During meta-learning, the model is trained to be able to adapt to a large or infinite number of tasks. Since we would like to apply our framework to a variety of learning problems, from classification to reinforcement learning, we introduce a generic notion of a learning task below. Formally, each task 𝒯={ℒ(𝐱1,𝐚1,…,𝐱H,𝐚H),q(𝐱1),q(𝐱t+1|𝐱t,𝐚t),H}𝒯ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻𝑞subscript𝐱1𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡𝐻\\mathcal{T}=\\{\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H}),q(\\mathbf{x}_{1}),q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}),H\\} consists of a loss function ℒℒ\\mathcal{L}, a distribution over initial observations q(𝐱1)𝑞subscript𝐱1q(\\mathbf{x}_{1}), a transition distribution q(𝐱t+1|𝐱t,𝐚t)𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and an episode length H𝐻H. In i.i.d. supervised learning problems, the length H=1𝐻1H\\!=\\!1. The model may generate samples of length H𝐻H by choosing an output 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each time t𝑡t. The loss ℒ(𝐱1,𝐚1,…,𝐱H,𝐚H)→ℝ→ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻ℝ\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H})\\rightarrow\\mathbb{R}, provides task-specific feedback, which might be in the form of a misclassification loss or a cost function in a Markov decision process. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_7",
"text": " In our meta-learning scenario, we consider a distribution over tasks p(𝒯)𝑝𝒯p(\\mathcal{T}) that we want our model to be able to adapt to. In the K𝐾K-shot learning setting, the model is trained to learn a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}) from only K𝐾K samples drawn from qisubscript𝑞𝑖q_{i} and feedback ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} generated by 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. During meta-training, a task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} is sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}), the model is trained with K𝐾K samples and feedback from the corresponding loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, and then tested on new samples from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. The model f𝑓f is then improved by considering how the test error on new data from qisubscript𝑞𝑖q_{i} changes with respect to the parameters. In effect, the test error on sampled tasks 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} serves as the training error of the meta-learning process. At the end of meta-training, new tasks are sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}), and meta-performance is measured by the model’s performance after learning from K𝐾K samples. Generally, tasks used for meta-testing are held out during meta-training. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_8",
"text": " In contrast to prior work, which has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016b) or feature embeddings that can be combined with nonparametric methods at test time (Vinyals et al., 2016; Koch, 2015), we propose a method that can learn the parameters of any standard model via meta-learning in such a way as to prepare that model for fast adaptation. The intuition behind this approach is that some internal representations are more transferrable than others. For example, a neural network might learn internal features that are broadly applicable to all tasks in p(𝒯)𝑝𝒯p(\\mathcal{T}), rather than a single individual task. How can we encourage the emergence of such general-purpose representations? We take an explicit approach to this problem: since the model will be fine-tuned using a gradient-based learning rule on a new task, we will aim to learn a model in such a way that this gradient-based learning rule can make rapid progress on new tasks drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}), without overfitting. In effect, we will aim to find model parameters that are sensitive to changes in the task, such that small changes in the parameters will produce large improvements on the loss function of any task drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}), when altered in the direction of the gradient of that loss (see Figure 1). We make no assumption on the form of the model, other than to assume that it is parametrized by some parameter vector θ𝜃\\theta, and that the loss function is smooth enough in θ𝜃\\theta that we can use gradient-based learning techniques. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_9",
"text": " Formally, we consider a model represented by a parametrized function fθsubscript𝑓𝜃f_{\\theta} with parameters θ𝜃\\theta. When adapting to a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, the model’s parameters θ𝜃\\theta become θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime}. In our method, the updated parameter vector θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime} is computed using one or more gradient descent updates on task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. For example, when using one gradient update, θi′=θ−α∇θℒ𝒯i(fθ).superscriptsubscript𝜃𝑖′𝜃𝛼subscript∇𝜃subscriptℒsubscript𝒯𝑖subscript𝑓𝜃\\vspace{-0.15cm}\\theta_{i}^{\\prime}=\\theta-\\alpha\\nabla_{\\theta}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta}). The step size α𝛼\\alpha may be fixed as a hyperparameter or meta-learned. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_10",
"text": " The model parameters are trained by optimizing for the performance of fθi′subscript𝑓superscriptsubscript𝜃𝑖′f_{\\theta_{i}^{\\prime}} with respect to θ𝜃\\theta across tasks sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}). More concretely, the meta-objective is as follows: minθ∑𝒯i∼p(𝒯)ℒ𝒯i(fθi′)=∑𝒯i∼p(𝒯)ℒ𝒯i(fθ−α∇θℒ𝒯i(fθ))subscript𝜃subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓superscriptsubscript𝜃𝑖′subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓𝜃𝛼subscript∇𝜃subscriptℒsubscript𝒯𝑖subscript𝑓𝜃\\displaystyle\\vspace{-0.2cm}\\min_{\\theta}\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta_{i}^{\\prime}})=\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta-\\alpha\\nabla_{\\theta}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta})}) Note that the meta-optimization is performed over the model parameters θ𝜃\\theta, whereas the objective is computed using the updated model parameters θ′superscript𝜃′\\theta^{\\prime}. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_11",
"text": " The meta-optimization across tasks is performed via stochastic gradient descent (SGD), such that the model parameters θ𝜃\\theta are updated as follows: θ←θ−β∇θ∑𝒯i∼p(𝒯)ℒ𝒯i(fθi′)←𝜃𝜃𝛽subscript∇𝜃subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓superscriptsubscript𝜃𝑖′\\vspace{-0.2cm}\\theta\\leftarrow\\theta-\\beta\\nabla_{\\theta}\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta_{i}^{\\prime}}) (1) where β𝛽\\beta is the meta step size. The full algorithm, in the general case, is outlined in Algorithm 1. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_12",
"text": " The MAML meta-gradient update involves a gradient through a gradient. Computationally, this requires an additional backward pass through f𝑓f to compute Hessian-vector products, which is supported by standard deep learning libraries such as TensorFlow (Abadi et al., 2016). In our experiments, we also include a comparison to dropping this backward pass and using a first-order approximation, which we discuss in Section 5.2. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
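To make the update rule and meta-objective above concrete, here is a toy sketch of the MAML loop using the first-order approximation the passage mentions (Section 5.2); the 1-D linear task family, step sizes, and scalar model are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Assumed toy task family: y = a * x with slope a varying per task;
# the model is f_theta(x) = theta * x with a single scalar parameter.
rng = np.random.default_rng(0)

def loss_and_grad(theta, a, K=10):
    # K-shot MSE loss and its gradient w.r.t. theta on freshly sampled data.
    x = rng.uniform(-5.0, 5.0, size=K)
    err = theta * x - a * x
    return np.mean(err ** 2), np.mean(2.0 * err * x)

alpha, beta = 0.01, 0.001              # inner (adaptation) and meta step sizes
theta = 0.0                            # meta-learned initialization

for step in range(2000):
    meta_grad = 0.0
    for a in rng.uniform(0.1, 5.0, size=4):       # batch of sampled tasks
        _, g = loss_and_grad(theta, a)            # inner-loop gradient
        theta_adapted = theta - alpha * g         # one adaptation step
        _, g_adapted = loss_and_grad(theta_adapted, a)
        meta_grad += g_adapted                    # first-order MAML: treat
                                                  # d(theta_adapted)/d(theta) as identity
    theta -= beta * meta_grad / 4                 # meta-update (cf. Equation 1)
```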
{
"id": "1703.03400_all_13",
"text": " In this section, we discuss specific instantiations of our meta-learning algorithm for supervised learning and reinforcement learning. The domains differ in the form of loss function and in how data is generated by the task and presented to the model, but the same basic adaptation mechanism can be applied in both cases. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_14",
"text": " Few-shot learning is well-studied in the domain of supervised tasks, where the goal is to learn a new function from only a few input/output pairs for that task, using prior data from similar tasks for meta-learning. For example, the goal might be to classify images of a Segway after seeing only one or a few examples of a Segway, with a model that has previously seen many other types of objects. Likewise, in few-shot regression, the goal is to predict the outputs of a continuous-valued function from only a few datapoints sampled from that function, after training on many functions with similar statistical properties. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_15",
"text": " To formalize the supervised regression and classification problems in the context of the meta-learning definitions in Section 2.1, we can define the horizon H=1𝐻1H=1 and drop the timestep subscript on 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t}, since the model accepts a single input and produces a single output, rather than a sequence of inputs and outputs. The task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} generates K𝐾K i.i.d. observations 𝐱𝐱\\mathbf{x} from qisubscript𝑞𝑖q_{i}, and the task loss is represented by the error between the model’s output for 𝐱𝐱\\mathbf{x} and the corresponding target values 𝐲𝐲\\mathbf{y} for that observation and task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_16",
"text": " Two common loss functions used for supervised classification and regression are cross-entropy and mean-squared error (MSE), which we will describe below; though, other supervised loss functions may be used as well. For regression tasks using mean-squared error, the loss takes the form: ℒ𝒯i(fϕ)=∑𝐱(j),𝐲(j)∼𝒯i∥fϕ(𝐱(j))−𝐲(j)∥22,subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖superscriptsubscriptdelimited-∥∥subscript𝑓italic-ϕsuperscript𝐱𝑗superscript𝐲𝑗22\\displaystyle\\vspace{-0.2cm}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}}\\lVert f_{\\phi}(\\mathbf{x}^{(j)})-\\mathbf{y}^{(j)}\\rVert_{2}^{2}, (2) where 𝐱(j),𝐲(j)superscript𝐱𝑗superscript𝐲𝑗\\mathbf{x}^{(j)},\\mathbf{y}^{(j)} are an input/output pair sampled from task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. In K𝐾K-shot regression tasks, K𝐾K input/output pairs are provided for learning for each task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_17",
"text": " Similarly, for discrete classification tasks with a cross-entropy loss, the loss takes the form: ℒ𝒯i(fϕ)=∑𝐱(j),𝐲(j)∼𝒯isubscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}} 𝐲(j)logfϕ(𝐱(j))superscript𝐲𝑗subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle\\mathbf{y}^{(j)}\\log f_{\\phi}(\\mathbf{x}^{(j)}) (3) +(1−𝐲(j))log(1−fϕ(𝐱(j)))1superscript𝐲𝑗1subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle+(1-\\mathbf{y}^{(j)})\\log(1-f_{\\phi}(\\mathbf{x}^{(j)})) According to the conventional terminology, K𝐾K-shot classification tasks use K𝐾K input/output pairs from each class, for a total of NK𝑁𝐾NK data points for N𝑁N-way classification. Given a distribution over tasks p(𝒯i)𝑝subscript𝒯𝑖p(\\mathcal{T}_{i}), these loss functions can be directly inserted into the equations in Section 2.2 to perform meta-learning, as detailed in Algorithm 2. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_18",
"text": " In reinforcement learning (RL), the goal of few-shot meta-learning is to enable an agent to quickly acquire a policy for a new test task using only a small amount of experience in the test setting. A new task might involve achieving a new goal or succeeding on a previously trained goal in a new environment. For example, an agent might learn to quickly figure out how to navigate mazes so that, when faced with a new maze, it can determine how to reliably reach the exit with only a few samples. In this section, we will discuss how MAML can be applied to meta-learning for RL. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_19",
"text": " Each RL task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} contains an initial state distribution qi(𝐱1)subscript𝑞𝑖subscript𝐱1q_{i}(\\mathbf{x}_{1}) and a transition distribution qi(𝐱t+1|𝐱t,𝐚t)subscript𝑞𝑖conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q_{i}(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and the loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} corresponds to the (negative) reward function R𝑅R. The entire task is therefore a Markov decision process (MDP) with horizon H𝐻H, where the learner is allowed to query a limited number of sample trajectories for few-shot learning. Any aspect of the MDP may change across tasks in p(𝒯)𝑝𝒯p(\\mathcal{T}). The model being learned, fθsubscript𝑓𝜃f_{\\theta}, is a policy that maps from states 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t} to a distribution over actions 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each timestep t∈{1,…,H}𝑡1…𝐻t\\in\\{1,...,H\\}. The loss for task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} and model fϕsubscript𝑓italic-ϕf_{\\phi} takes the form ℒ𝒯i(fϕ)=−𝔼𝐱t,𝐚t∼fϕ,q𝒯i(∑t=1HRi(𝐱t,𝐚t)).subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscript𝔼formulae-sequencesimilar-tosubscript𝐱𝑡subscript𝐚𝑡subscript𝑓italic-ϕsubscript𝑞subscript𝒯𝑖delimited-()superscriptsubscript𝑡1𝐻subscript𝑅𝑖subscript𝐱𝑡subscript𝐚𝑡\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=-\\mathbb{E}_{\\mathbf{x}_{t},\\mathbf{a}_{t}\\sim f_{\\phi},q_{\\mathcal{T}_{i}}}\\left(\\sum_{t=1}^{H}R_{i}(\\mathbf{x}_{t},\\mathbf{a}_{t})\\right). (4) In K𝐾K-shot reinforcement learning, K𝐾K rollouts from fθsubscript𝑓𝜃f_{\\theta} and task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, (𝐱1,𝐚1,…𝐱H)subscript𝐱1subscript𝐚1…subscript𝐱𝐻(\\mathbf{x}_{1},\\mathbf{a}_{1},...\\mathbf{x}_{H}), and the corresponding rewards R(𝐱t,𝐚t)𝑅subscript𝐱𝑡subscript𝐚𝑡R(\\mathbf{x}_{t},\\mathbf{a}_{t}), may be used for adaptation on a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Since the expected reward is generally not differentiable due to unknown dynamics, we use policy gradient methods to estimate the gradient both for the model gradient update(s) and the meta-optimization. Since policy gradients are an on-policy algorithm, each additional gradient step during the adaptation of fθsubscript𝑓𝜃f_{\\theta} requires new samples from the current policy fθi′subscript𝑓subscript𝜃superscript𝑖′f_{\\theta_{i^{\\prime}}}. We detail the algorithm in Algorithm 3. This algorithm has the same structure as Algorithm 2, with the principal difference being that steps 5 and 8 require sampling trajectories from the environment corresponding to task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Practical implementations of this method may also use a variety of improvements recently proposed for policy gradient algorithms, including state or action-dependent baselines and trust regions (Schulman et al., 2015). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_20",
"text": " The method that we propose in this paper addresses the general problem of meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992), which includes few-shot learning. A popular approach for meta-learning is to train a meta-learner that learns how to update the parameters of the learner’s model (Bengio et al., 1992; Schmidhuber, 1992; Bengio et al., 1990). This approach has been applied to learning to optimize deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2017), as well as for learning dynamically changing recurrent networks (Ha et al., 2017). One recent approach learns both the weight initialization and the optimizer, for few-shot image recognition (Ravi & Larochelle, 2017). Unlike these methods, the MAML learner’s weights are updated using the gradient, rather than a learned update; our method does not introduce additional parameters for meta-learning nor require a particular learner architecture. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_21",
"text": " Few-shot learning methods have also been developed for specific tasks such as generative modeling (Edwards & Storkey, 2017; Rezende et al., 2016) and image recognition (Vinyals et al., 2016). One successful approach for few-shot classification is to learn to compare new examples in a learned metric space using e.g. Siamese networks (Koch, 2015) or recurrence with attention mechanisms (Vinyals et al., 2016; Shyam et al., 2017; Snell et al., 2017). These approaches have generated some of the most successful results, but are difficult to directly extend to other problems, such as reinforcement learning. Our method, in contrast, is agnostic to the form of the model and to the particular learning task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_22",
"text": " Another approach to meta-learning is to train memory-augmented models on many tasks, where the recurrent learner is trained to adapt to new tasks as it is rolled out. Such networks have been applied to few-shot image recognition (Santoro et al., 2016; Munkhdalai & Yu, 2017) and learning “fast” reinforcement learning agents (Duan et al., 2016b; Wang et al., 2016). Our experiments show that our method outperforms the recurrent approach on few-shot classification. Furthermore, unlike these methods, our approach simply provides a good weight initialization and uses the same gradient descent update for both the learner and meta-update. As a result, it is straightforward to finetune the learner for additional gradient steps. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_23",
"text": " Our approach is also related to methods for initialization of deep networks. In computer vision, models pretrained on large-scale image classification have been shown to learn effective features for a range of problems (Donahue et al., 2014). In contrast, our method explicitly optimizes the model for fast adaptability, allowing it to adapt to new tasks with only a few examples. Our method can also be viewed as explicitly maximizing sensitivity of new task losses to the model parameters. A number of prior works have explored sensitivity in deep networks, often in the context of initialization (Saxe et al., 2014; Kirkpatrick et al., 2016). Most of these works have considered good random initializations, though a number of papers have addressed data-dependent initializers (Krähenbühl et al., 2016; Salimans & Kingma, 2016), including learned initializations (Husken & Goerick, 2000; Maclaurin et al., 2015). In contrast, our method explicitly trains the parameters for sensitivity on a given task distribution, allowing for extremely efficient adaptation for problems such as K𝐾K-shot learning and rapid reinforcement learning in only one or a few gradient steps. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_24",
"text": " The goal of our experimental evaluation is to answer the following questions: (1) Can MAML enable fast learning of new tasks? (2) Can MAML be used for meta-learning in multiple different domains, including supervised regression, classification, and reinforcement learning? (3) Can a model learned with MAML continue to improve with additional gradient updates and/or examples? ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_25",
"text": " All of the meta-learning problems that we consider require some amount of adaptation to new tasks at test-time. When possible, we compare our results to an oracle that receives the identity of the task (which is a problem-dependent representation) as an additional input, as an upper bound on the performance of the model. All of the experiments were performed using TensorFlow (Abadi et al., 2016), which allows for automatic differentiation through the gradient update(s) during meta-learning. The code is available online111Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experiments is at github.com/cbfinn/maml_rl. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_26",
"text": " We start with a simple regression problem that illustrates the basic principles of MAML. Each task involves regressing from the input to the output of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. Thus, p(𝒯)𝑝𝒯p(\\mathcal{T}) is continuous, where the amplitude varies within (0.1,5.0)0.15.0(0.1,5.0) and the phase varies within (0,π)0𝜋(0,\\pi), and the input and output both have a dimensionality of 111. During training and testing, datapoints 𝐱𝐱\\mathbf{x} are sampled uniformly from (−5.0,5.0)5.05.0(-5.0,5.0). The loss is the mean-squared error between the prediction f(𝐱)𝑓𝐱f(\\mathbf{x}) and true value. The regressor is a neural network model with 222 hidden layers of size 404040 with ReLU nonlinearities. When training with MAML, we use one gradient update with K=10𝐾10K=10 examples with a fixed step size α=0.01𝛼0.01\\alpha=0.01, and use Adam as the meta-optimizer (Kingma & Ba, 2015). The baselines are likewise trained with Adam. To evaluate performance, we fine-tune a single meta-learned model on varying numbers of K𝐾K examples, and compare performance to two baselines: (a) pretraining on all of the tasks, which entails training a network to regress to random sinusoid functions and then, at test-time, fine-tuning with gradient descent on the K𝐾K provided points, using an automatically tuned step size, and (b) an oracle which receives the true amplitude and phase as input. In Appendix C, we show comparisons to additional multi-task and adaptation methods. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_27",
"text": " We evaluate performance by fine-tuning the model learned by MAML and the pretrained model on K={5,10,20}𝐾51020K=\\{5,10,20\\} datapoints. During fine-tuning, each gradient step is computed using the same K𝐾K datapoints. The qualitative results, shown in Figure 2 and further expanded on in Appendix B show that the learned model is able to quickly adapt with only 555 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few datapoints without catastrophic overfitting. Crucially, when the K𝐾K datapoints are all in one half of the input range, the model trained with MAML can still infer the amplitude and phase in the other half of the range, demonstrating that the MAML trained model f𝑓f has learned to model the periodic nature of the sine wave. Furthermore, we observe both in the qualitative and quantitative results (Figure 3 and Appendix B) that the model learned with MAML continues to improve with additional gradient steps, despite being trained for maximal performance after one gradient step. This improvement suggests that MAML optimizes the parameters such that they lie in a region that is amenable to fast adaptation and is sensitive to loss functions from p(𝒯)𝑝𝒯p(\\mathcal{T}), as discussed in Section 2.2, rather than overfitting to parameters θ𝜃\\theta that only improve after one step. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_28",
"text": " To evaluate MAML in comparison to prior meta-learning and few-shot learning algorithms, we applied our method to few-shot image recognition on the Omniglot (Lake et al., 2011) and MiniImagenet datasets. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. Each instance was drawn by a different person. The MiniImagenet dataset was proposed by Ravi & Larochelle (2017), and involves 64 training classes, 12 validation classes, and 24 test classes. The Omniglot and MiniImagenet image recognition tasks are the most common recently used few-shot learning benchmarks (Vinyals et al., 2016; Santoro et al., 2016; Ravi & Larochelle, 2017). We follow the experimental protocol proposed by Vinyals et al. (2016), which involves fast learning of N𝑁N-way classification with 1 or 5 shots. The problem of N𝑁N-way classification is set up as follows: select N𝑁N unseen classes, provide the model with K𝐾K different instances of each of the N𝑁N classes, and evaluate the model’s ability to classify new instances within the N𝑁N classes. For Omniglot, we randomly select 120012001200 characters for training, irrespective of alphabet, and use the remaining for testing. The Omniglot dataset is augmented with rotations by multiples of 909090 degrees, as proposed by Santoro et al. (2016). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
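A short sketch of the N-way, K-shot episode sampling protocol described in the passage above; the input data structure (a dict mapping class name to a list of examples) and the query-set size are assumptions for illustration.

```python
import random

def sample_episode(examples_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way, K-shot episode: a support set for adaptation and a
    query set of unseen instances from the same N classes for evaluation."""
    classes = random.sample(sorted(examples_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = random.sample(examples_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query
```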
{
"id": "1703.03400_all_29",
"text": " Our model follows the same architecture as the embedding function used by Vinyals et al. (2016), which has 4 modules with a 3×3333\\times 3 convolutions and 646464 filters, followed by batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2×2222\\times 2 max-pooling. The Omniglot images are downsampled to 28×28282828\\times 28, so the dimensionality of the last hidden layer is 646464. As in the baseline classifier used by Vinyals et al. (2016), the last layer is fed into a softmax. For Omniglot, we used strided convolutions instead of max-pooling. For MiniImagenet, we used 323232 filters per layer to reduce overfitting, as done by (Ravi & Larochelle, 2017). In order to also provide a fair comparison against memory-augmented neural networks (Santoro et al., 2016) and to test the flexibility of MAML, we also provide results for a non-convolutional network. For this, we use a network with 444 hidden layers with sizes 256256256, 128128128, 646464, 646464, each including batch normalization and ReLU nonlinearities, followed by a linear layer and softmax. For all models, the loss function is the cross-entropy error between the predicted and true class. Additional hyperparameter details are included in Appendix A.1. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_30",
"text": " We present the results in Table 1. The convolutional model learned by MAML compares well to the state-of-the-art results on this task, narrowly outperforming the prior methods. Some of these existing methods, such as matching networks, Siamese networks, and memory models are designed with few-shot classification in mind, and are not readily applicable to domains such as reinforcement learning. Additionally, the model learned with MAML uses fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the classifier itself. Compared to these prior methods, memory-augmented neural networks (Santoro et al., 2016) specifically, and recurrent meta-learning models in general, represent a more broadly applicable class of methods that, like MAML, can be used for other tasks such as reinforcement learning (Duan et al., 2016b; Wang et al., 2016). However, as shown in the comparison, MAML significantly outperforms memory-augmented networks and the meta-learner LSTM on 5-way Omniglot and MiniImagenet classification, both in the 111-shot and 555-shot case. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_31",
"text": " A significant computational expense in MAML comes from the use of second derivatives when backpropagating the meta-gradient through the gradient operator in the meta-objective (see Equation (1)). On MiniImagenet, we show a comparison to a first-order approximation of MAML, where these second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime}, which provides for effective meta-learning. Surprisingly however, the performance of this method is nearly the same as that obtained with full second derivatives, suggesting that most of the improvement in MAML comes from the gradients of the objective at the post-update parameter values, rather than the second order updates from differentiating through the gradient update. Past work has observed that ReLU neural networks are locally almost linear (Goodfellow et al., 2015), which suggests that second derivatives may be close to zero in most cases, partially explaining the good performance of the first-order approximation. This approximation removes the need for computing Hessian-vector products in an additional backward pass, which we found led to roughly 33%percent3333\\% speed-up in network computation. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_32",
"text": " To evaluate MAML on reinforcement learning problems, we constructed several sets of tasks based off of the simulated continuous control environments in the rllab benchmark suite (Duan et al., 2016a). We discuss the individual domains below. In all of the domains, the model trained by MAML is a neural network policy with two hidden layers of size 100100100, with ReLU nonlinearities. The gradient updates are computed using vanilla policy gradient (REINFORCE) (Williams, 1992), and we use trust-region policy optimization (TRPO) as the meta-optimizer (Schulman et al., 2015). In order to avoid computing third derivatives, we use finite differences to compute the Hessian-vector products for TRPO. For both learning and meta-learning updates, we use the standard linear feature baseline proposed by Duan et al. (2016a), which is fitted separately at each iteration for each sampled task in the batch. We compare to three baseline models: (a) pretraining one policy on all of the tasks and then fine-tuning, (b) training a policy from randomly initialized weights, and (c) an oracle policy which receives the parameters of the task as input, which for the tasks below corresponds to a goal position, goal direction, or goal velocity for the agent. The baseline models of (a) and (b) are fine-tuned with gradient descent with a manually tuned step size. Videos of the learned policies can be viewed at sites.google.com/view/maml ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_33",
"text": " 2D Navigation. In our first meta-RL experiment, we study a set of tasks where a point agent must move to different goal positions in 2D, randomly chosen for each task within a unit square. The observation is the current 2D position, and actions correspond to velocity commands clipped to be in the range (−0.1,0.1)0.10.1(-0.1,0.1). The reward is the negative squared distance to the goal, and episodes terminate when the agent is within 0.010.010.01 of the goal or at the horizon of H=100𝐻100H=100. The policy was trained with MAML to maximize performance after 111 policy gradient update using 202020 trajectories. Additional hyperparameter settings for this problem and the following RL problems are in Appendix A.2. In our evaluation, we compare adaptation to a new task with up to 4 gradient updates, each with 404040 samples. The results in Figure 4 show the adaptation performance of models that are initialized with MAML, conventional pretraining on the same set of tasks, random initialization, and an oracle policy that receives the goal position as input. The results show that MAML can learn a model that adapts much more quickly in a single gradient update, and furthermore continues to improve with additional updates. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_34",
"text": " Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo simulator (Todorov et al., 2012). The tasks require two simulated robots – a planar cheetah and a 3D quadruped (the “ant”) – to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.00.00.0 and 2.02.02.0 for the cheetah and between 0.00.00.0 and 3.03.03.0 for the ant. In the goal direction experiments, the reward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p(𝒯)𝑝𝒯p(\\mathcal{T}). The horizon is H=200𝐻200H=200, with 202020 rollouts per gradient step for all problems except the ant forward/backward task, which used 404040 rollouts per step. The results in Figure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gradient update, and continues to improve with more gradient steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining. In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_35",
"text": " We introduced a meta-learning method based on learning easily adaptable model parameters through gradient descent. Our approach has a number of benefits. It is simple and does not introduce any learned parameters for meta-learning. It can be combined with any model representation that is amenable to gradient-based training, and any differentiable objective, including classification, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be performed with any amount of data and any number of gradient steps, though we demonstrate state-of-the-art results on classification with only one or five examples per class. We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_36",
"text": " Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
}
] |
Is CodeBERT trained on natural language?
|
No, CodeBERT is trained on code from six programming languages [31].
|
[
31
] |
[
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al. (2019); Zhao et al. (2020). In particular, studies have shown that the pretrained LMs like BERT capture linguistic knowledge about syntax Lin et al. (2019); Wu et al. (2020), semantics Vulić et al. (2020b, a) and morphology Hofmann et al. (2020, 2021). In fact, Tenney et al. (2019) demonstrated that learned representations in pretrained LMs even internally reflect the classical NLP pipeline. Since most NLP benchmarks such as SuperGLUE Wang et al. (2019) naturally are focused on tasks such as textual entailment and reading comprehension that require linguistic knowledge and reasoning, it is unsurprising that LMs have achieved strong results on these tasks. On the other hand, little work so far has explored the abilities of pretrained LMs for learning non-linguistic tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_1",
"text": " In this paper, we explore whether pretraining on text is inherently about learning language, or if pretraining also imbues LMs with skills for symbolic manipulation and non-linguistic reasoning (for example, performing quantitative computation such as finding the median of a set of numbers, recognizing regular expressions, or identifying whether a string is a palindrome, as shown in Figure 1). In other words, we investigate whether and how pretraining develops helpful inductive biases for non-linguistic reasoning. For this analysis, we create a set of 19 tasks from three categories of task paradigms: quantitative computation (§3.1), recognizing regular expressions (§3.2), and string reasoning (§3.3). Figure 1 shows an example for each category, and the full list of tasks is described in the table 1. We experiment with transformer and RNN based LMs (§4) for learning these tasks, and perform a comparative analysis with (non-pretrained) neural model variants from the perspective of learning metrics such as accuracy and sample efficiency. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_2",
"text": " Our experiments (§5) reveal that pretrained models overall perform substantially better and are more sample efficient on most tasks. However, there are significant differences and patterns in performance between task types, as well as variance between different LM architectures. Since non-pretrained models do not have the benefit of regularization that comes from pretraining, a plausible reason for the discrepancy between them and pretrained LMs might be underfitting of the non-pretrained models when trained on comparatively small dataset sizes. To account for this, we also comprehensively explore the effect of model size (§6) of non-pretrained models for both transformer and RNN architectures. We find that the discrepancy in performance remains even for smaller neural models, indicating that the differences are not simply due to a mismatch in model and data sizes. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_3",
"text": " Finally, we investigate the role that pretraining data plays in influencing task performance on non-linguistic tasks (§7). We experiment with pretraining on different domains of text, pretraining on perturbed representations of natural language text (such as shuffled word order), pretraining on text of computer programs (no linguistic properties of natural languages), pretraining on multi-lingual and non-English text, and pretraining with synthetic text (data sampled from synthetic distributions). Our analysis reveals that the advantages of pretraining surprisingly persist with various degrees across these variations, suggesting hithertho unexplored connections between pretraining and the learning abilities of language models. Our contributions are: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_4",
"text": " • We compare a range of pretrained LMs and non-pretrained models on a carefully designed suite of 19 classifications tasks that require non-linguistic reasoning. • We comprehensively explore the role of the pretraining data by experimenting with models pretrained from texts with different provenances. • We establish that the positive effects of pretraining are not simply due to better model regularization by experimenting with neural models with different complexities and architectures. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_5",
"text": " A body of work has investigated contextual word embeddings to determine whether they capture aspects of mathematical meaning for numbers Naik et al. (2019). Wallace et al. (2019) probed numerical supremacy on token embeddings of contextual language models such as ELMO and BERT. Thawani et al. (2021) surveyed numerical understanding in NLP models using 7 sub-tasks such as measurement estimation and word problems. Our work diverges from these in exploring a richer set of tasks including harder tasks such as set operations. Further, previous methods explore mathematical reasoning tasks posed as language problems, which conflates the problems of language and mathematical learning and also makes the datasets susceptible to biases due to data collection. Our analysis circumvents both these issues by design. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_6",
"text": " Some previous works have explored the ability of RNN and Transformer architectures for learning regular languages Weiss et al. (2018); Sennhauser and Berwick (2018); Suzgun et al. (2019b); Bhattamishra et al. (2020), closing brackets Skachkova et al. (2018), and dynamic counting Suzgun et al. (2019a). However, they focus on the learnability of these tasks with specific architectures, and do not look at pretrained LMs, which are our focus here. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_7",
"text": " Finally, in our discussion, we conceptually stretch the notion of inductive bias. The idea of inductive bias is usually associated with specific model types McCoy et al. (2020); Kharitonov and Chaabouni (2021), architectures Xu et al. (2021); Brutzkus and Globerson (2021) and regularization approaches Helmbold and Long (2015). We believe that extending this to refer to learning tasks with pretrained LMs is both reasonable and useful. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_8",
"text": " In this section, we describe the tasks used for our analysis, which we refer to as NILM (measuring Non-linguistic Inductive bias in Language Models). The tasks correspond to three task paradigms: (1) quantitative computation, (2) regular expressions, and (3) string reasoning. Each task in NILM is posed as a classification task. The descriptions for all the tasks with input and output examples, class labels and the input range are shown in Table 1. Each task has a synthetically generated dataset with train/dev/test splits222The training set size for all tasks is 10K, dev set size is 1K and test set size is 1K, except for tasks on recognizing regular expressions, where the test set size is 2K following previous work Bhattamishra et al. (2020).. To avoid biases in the datasets, relevant numbers and strings in individual examples are uniformly sampled from the appropriate ranges. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_9",
"text": " This task paradigm focuses on tasks involving arithmetic and set statistics. Odd classification. Classify if a number is odd. Even classification. Classify if a number is even. Odd even classification. For a given number N𝑁N and a string “even” or “odd”, classify if the number satisfies the string condition. Decimal operation. Subtract or divide two numbers. Operands are represented in decimal notation. Decimal & word operation. Subtract or divide two numbers. Operands are represented in decimal or word notation. Mean. Given a set of numbers, output the mean. Median. Given a set, output the median. Mode. Given a set of numbers, output the mode. ",
"title": "What do Large Language Models Learn beyond Language?"
},
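To make the task construction concrete, here is a minimal sketch of how a synthetic example for one of these quantitative tasks (the median task) could be generated. The function name, value ranges, and text formatting are illustrative assumptions, not the NILM authors' generation code.

```python
import random

def make_median_example(set_size=5, lo=0, hi=100):
    """Generate one synthetic median-task example: the input is a set of numbers
    rendered as text, the label is their median (a sketch; the exact ranges and
    formatting used in NILM may differ)."""
    nums = random.sample(range(lo, hi), set_size)
    label = sorted(nums)[set_size // 2]  # middle element for an odd-sized set
    return {"input": " ".join(str(n) for n in nums), "label": str(label)}

print(make_median_example())  # e.g. {'input': '12 87 3 45 66', 'label': '45'}
```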
{
"id": "2210.12302_all_10",
"text": " This task paradigm focuses on recognizing regular expressions. The training data consists of positive and negative examples of strings matching a regular expression Bhattamishra et al. (2020). Recognize {0,1,2}*02*. Recognize if a pattern matches {0,1,2}*02*. The maximum length of the patterns is 20. Recognize AA*BB*CC*DD*EE*. Recognize if a pattern matches AA*BB*CC*DD*EE*. The maximum length of the patterns is 30. ",
"title": "What do Large Language Models Learn beyond Language?"
},
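A small sketch of how labeled examples for the {0,1,2}*02* recognition task could be produced with Python's re module. The helper name and naive sampling scheme are assumptions; the actual dataset presumably balances positive and negative strings explicitly, which this sampler does not.

```python
import random
import re

# The {0,1,2}*02* language: any string over {0,1,2}, then a 0, then zero or more 2s.
PATTERN = re.compile(r"^[012]*02*$")

def make_regex_example(max_len=20):
    """Sample a random string over {0,1,2} and label whether it is in the language."""
    s = "".join(random.choice("012") for _ in range(random.randint(1, max_len)))
    return {"input": s, "label": int(bool(PATTERN.match(s)))}
```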
{
"id": "2210.12302_all_11",
"text": " This task paradigm focuses on reasoning tasks over individual strings or pairs of strings. Palindrome classification. A string is a palindrome if it reads the same forward and backward. The task is to classify whether a given string is a palindrome. The string length ranges from 1 to 15. Anagram classification. Two strings are anagrams if one is formed by rearranging letters from the other. The task is to classify if a pair of strings are anagrams. The string length ranges from 2 to 15. Isogram classification. A string is an isogram if it has no repeating characters. The task is to classify whether a given string is an isogram. The string length ranges from 1 to 52. Tautonym classification. A tautonym is a word which can be broken down into two identical parts, with the same spelling. The task is to classify whether a given string is a tautonym. The string length ranges from 1 to 10. Length of a string. Output the length of a given string. The string length ranges from 1 to 10. Count of unique characters. Given a string, count the number of unique characters in it. The string lengths ranges from 10 to 30. Parity check. Given a binary string, output if the counts of ones and zeros are the same. The maximum length of the binary string is 20. Vowels classification. Given a string, classify if the string contains only vowel characters. The string length ranges from 3 to 10. Maximum frequent character. Given a string, output the character with the maximum frequency. The string length ranges from 5 to 30. ",
"title": "What do Large Language Models Learn beyond Language?"
},
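The string-reasoning predicates above are simple to state programmatically; the following sketch shows plain-Python versions of four of them. The function names are ours, and edge-case conventions (e.g. whether the empty string counts as an isogram) are assumptions rather than the dataset's specification.

```python
def is_palindrome(s):
    return s == s[::-1]

def is_anagram(a, b):
    return sorted(a) == sorted(b)

def is_isogram(s):
    return len(set(s)) == len(s)

def is_tautonym(s):
    return len(s) % 2 == 0 and s[: len(s) // 2] == s[len(s) // 2 :]

assert is_palindrome("level") and is_anagram("listen", "silent")
assert is_isogram("python") and is_tautonym("bonbon")
```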
{
"id": "2210.12302_all_12",
"text": " Next, we describe the LMs and their variants used in NILM. We experiment with four language models, based on both Transformer and RNN architectures. BERT small. This is the bert-base-uncased model with 12 transformer encoder layers and the dimension of the representations is 768. BERT tokenizer is based on the WordPiece model Wu et al. (2016). BERT large. This is the bert-large-uncased model which has 24 transformer encoders and representations have 1024 dimensions. DeBERTa. This is a transformer based language model and its tokenizer is built using Byte Pair Encoding Sennrich et al. (2016). We consider the DeBERTa base model. It has 12 transformer encoder layers and representations have 768 dimensions. ELMO. This is an LSTM based language model Peters et al. (2018). It has 3 layers and the output representations have 1024 dimensions. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_13",
"text": " Our experiments are based on pretrained and non-pretrained variants of these architectures. For pretrained variants, the weights are initialized with the pretrained weights. The tokenization on the training data is performed using the pre-built vocabulary. For the non-pretrained neural models, the weights are initialized randomly and updated during training. The tokenizer used is the same as in the pretrained variant. ",
"title": "What do Large Language Models Learn beyond Language?"
},
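As a rough illustration of the pretrained versus non-pretrained setup described above, the sketch below uses the Hugging Face transformers library (an assumption; the paper does not state its tooling) to build both variants of BERT small with the same tokenizer: one initialized from the released checkpoint, the other from randomly initialized weights with the identical architecture.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Pretrained variant: weights come from the published checkpoint.
pretrained = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Non-pretrained variant: same architecture and tokenizer, random weights.
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
non_pretrained = AutoModelForSequenceClassification.from_config(config)
```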
{
"id": "2210.12302_all_14",
"text": " All the models are trained with varying training data of sizes 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 6000, 7000, 8000, 9000 and 10000. For training set sizes of less than 1000 samples, we report the average of 10 runs. For training set sizes greater than 1000, all reported numbers are averages of 5 runs. In the next section, we present a comparative analysis of pretrained and non-pretrained models. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_15",
"text": " Next, we compare the performance of pretrained and non-pretrained models on tasks in NILM 333Details, including statistical significance results with the paired t-value test, are included in Appendix 6. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_16",
"text": " Quantitative computation: Figure 2 shows results on odd classification, even classification, odd even classification and decimal operation tasks. We find that pretrained LMs outperformed non-pretrained model for all of these tasks. Further, Transformer-based LMs outperformed the RNN-based ELMO models in all the tasks444We will focus on BERT small as representative of transformer models. Results for BERT large and DeBERTa follow similar trends, and are included in the supplementary material. We note that for the relatively easy tasks such as odd and even classifications, the pretrained LMs show more stable training. However, for harder tasks such as Decimal operations (where the baseline performance is around 10%), no models are able to learn the task well even with 10K labeled examples. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_17",
"text": " Figure 3 shows results on median, mean, mode and decimal & word operation tasks. The median task requires complex reasoning (sorting numbers and computing the middle element), and shows significantly lower performance than the mean and mode tasks for the non-pretrained models even with the maximum training set size. The pretrained LM models show little eventual difference in performance between these three tasks. On the other hand, for the easiest of these tasks (mode), non-pretrained models actually show higher performance than pretrained LMs in the low data regime. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_18",
"text": " Recognizing regular expressions: Figure 4 shows the comparative performance of pretrained LMs on non-pretrained models on the two tasks involving recognizing regular expressions. For both tasks, we note that the pretrained LMs can perfectly learn the tasks with many fewer labeled examples compared to the non-pretrained models. In both cases, the non-pretrained Transformer-based models eventually reach optimal performance as well. However, curiously the ELMO based non-pretrained models struggle with learning both tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_19",
"text": " String reasoning: Figures 6 show the results on Palindrome, Anagram, Isogram and Tautonym classification. These tasks require character comparison within the string or with another string. Again, the pretrained variants consistently outperformed non-pretrained models variants in all of these tasks. In particular, the non-pretrained models completely fail to learn the Anagram and Palindrome tasks even for the largest training set size. Again, Transformer based LMs outperform LSTM based LMs. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_20",
"text": " Figure 7 shows the results on vowels classification, maximum frequent character, length of a string and parity check tasks. These tasks don’t require intra-string comparisons. We see that most Transformer-based variants eventually achieve optimal performance. For these simpler tasks, we again observe several instances where the Transformer-based non-pretrained models actually outperform pretrained LMs in the low data regime. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_21",
"text": " As previously mentioned, a possible explanation for the underperformance of non-pretrained models ise that the large number of parameters of the architecture relative to the sizes of the training data might be leading to under-fitting. To test this, we experiment with smaller Transformer-based models with varying numbers of parameters. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_22",
"text": " Figure 5 illustrates the effect of model sizes of non-pretrained model. The original 110 million parameter model has 12 encoder layers, 12 attention heads, and 768 dimensional representations. The 42 million parameter model has 8 encoder layers, 8 attention heads and 512 dimensional representations. The 29 million parameter model has 4 encoder layers, 8 attention heads and 512 dimensional representations. The 11 million parameter model has 4 encoder layers, 4 attention heads and 256 dimensional representations. The smallest 4 million parameter model has 2 encoder layers, 2 attention heads and 128 dimensional representations. ",
"title": "What do Large Language Models Learn beyond Language?"
},
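A hedged sketch of how one of the smaller non-pretrained variants (the 42-million-parameter one: 8 encoder layers, 8 attention heads, 512-dimensional representations) could be instantiated with transformers. The intermediate (feed-forward) size is an assumption, since the text does not state it.

```python
from transformers import BertConfig, BertForSequenceClassification

# Randomly initialised, smaller BERT-style classifier approximating the 42M variant.
small_config = BertConfig(
    num_hidden_layers=8,
    num_attention_heads=8,
    hidden_size=512,
    intermediate_size=2048,  # assumption: 4x hidden size, not stated in the text
    num_labels=2,
)
model = BertForSequenceClassification(small_config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```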
{
"id": "2210.12302_all_23",
"text": " As seen in the figure, reducing the model size significantly improves the average performance of the non-pretrained models over 6 representative tasks. However, the smallest models show a performance drop. Most significantly, even the best performing intermediate-sized architectures are significantly worse than the pretrained LM models. This strongly suggests that the discrepancy between pretrained and non-pretrained models is not simply due to a mismatch between model and data sizes. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_24",
"text": " We observe that pretrained LMs consistently performed better than non-pretrained models. This leads to the natural question of what role the text data used for pretraining plays in the process. Next, we investigate this in depth by experimenting with language models pretrained on different types of text. For this, we pretrain models using the BERT-small and DeBERTa architectures and an MLM objective on different text datasets, and evaluate the performance of these models on NILM tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_25",
"text": " We first explore models pretrained on three different domains of text. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_26",
"text": " SNLI. We pretrained BERT small from scratch on SNLI data Bowman et al. (2015). It has 1000k sentences (570k pairs of text and hypothesis). Amazon reviews. We selected 500k movies and tv reviews from the larger Amazon reviews dataset He and McAuley (2016) and used for pretraining. Since reviews are in a free-text format, and their collection was not tailored with a NLP task in mind, they might be more representative of the complexity of real-world language use than SNLI. ROC. ROC is a corpora of 100K children stories, each made up of five sentences Mostafazadeh et al. (2017). The language in ROC is relatively simple in both vocabulary and sentence structure. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_27",
"text": " Tables 2 and 3 shows the average accuracy of six non-linguistic tasks (palindrome classification, isogram classification, tautonym classification, odd even classification, decimal operation and median) fine-tuned using different BERT and DeBERTA representations respectively. We note that the models pretrained on all three domains outperformed the non-pretrained model (NP). This suggests that the results of experiments in Section 5 generalize to new text corpora for pretraining, and do not rely on having access to text on specific topics during pretraining. This is a non-trivial result, since it suggests for example, that the higher performance of pretrained models on tasks such as palindrome and anagram classification is not due to the pretrained models having seen information about such concepts during pretraining. This is especially so since the results even generalize to ROC stories, which contain no information on such technical concepts. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_28",
"text": " Next, we experiment with perturbing the text used for pretraining by changing the order of words in the text. We explore the following models: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_29",
"text": " SNLI sort. The words in the sentences of SNLI dataset are sorted based on alphabetical order. SNLI shuffle. We randomly shuffle words in sentences in the SNLI dataset. Amazon reviews sort. Similar to SNLI sort, the words in sentences are alphabetically sorted. Amazon reviews shuffle. We randomly shuffle words in sentences in the Amazon reviews dataset. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_30",
"text": " We observe that models pretrained with perturbed text also significantly outperformed non-pretrained models, and perform comparably to the original pretrained LMs. For the SNLI dataset, there is 3% drop in best performance when pretrained on SNLI sort and 2% drop in performance when pretrained on SNLI shuffle for BERT (Table 2). In fact, for DeBERTa, SNLI shuffle outperformed the standard SNLI by 2% (Table 3). Similarly, the Amazon sort and Amazon shuffle versions outperformed or achieved similar performance as the standard Amazon data version. A likely explanation for this is that, even though syntactic word order is disturbed by shuffling, distributional information over sentence contexts is still preserved in the perturbed data. We describe experiments with text data having no distributional information in later sections. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_31",
"text": " A possible rationale for explaining the beneficial effect of pretraining for non-linguistic tasks is that irrespective of whether the tasks require non-linguistic reasoning, their format is in language, and hence language models should be able to learn these tasks with fewer examples. To test this hypothesis, we also experiment with models pretrained on text from languages different from English, as well as models pretrained on computer code. These include the following models: Multilingual BERT. Multilingual BERT is pretrained on text from 102 different languages. About 21% of the pretraining text is English. Chinese BERT. Chinese BERT is a BERT model pretrained on Chinese text. Code BERT. CodeBERT Feng et al. (2020) is pretrained on code from six programming languages. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_32",
"text": " In Table 2, we note that all three non-English pretrained LMs significantly outperformed non-pretrained models, with the best performance being comparable or marginally lower than English versions. In fact, Code-BERT surprisingly surpasses ROC by 5%. These findings strongly indicate that the advantages from pretraining have little to do with the format of the tasks, since they persist for scenarios with little shared linguistic structure. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_33",
"text": " Finally, to investigate what happens if we weaken the distributional properties that hold even in the perturbed text versions from Section 6.2, we experiment with pretraining models on synthetic text sampled from simple probability distributions: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_34",
"text": " Zipf distribution. We select 30k words (types) from the Amazon reviews dataset. Words are picked with a unigram probability that follows Zipf’s word frequency law, which all natural languages empirically follow Piantadosi (2014). For the Zipf distribution, we chose α𝛼\\alpha=1 and β𝛽\\beta=2.7, to match the parameters of most natural languages. The text does not follow any word order. Uniform distribution. In this dataset, words are sampled from the same vocabulary as in ‘Zipf distribution’, but with a uniform unigram probability. The text does not follow any word order. Synthetic Vocabulary. Words are selected with uniform distribution from a vocabulary to form sentences. However, instead of a vocabulary of English words, the words in the vocabulary are also synthetically generated (3 letter combinations of lower-case alphabets). In this text, the words do not possess morphology in addition to no syntax. ",
"title": "What do Large Language Models Learn beyond Language?"
},
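A minimal sketch of how such synthetic pretraining text could be sampled, assuming a Zipf–Mandelbrot rank–frequency form p(r) ∝ 1/(r + β)^α with the stated α=1 and β=2.7. The stand-in vocabulary, sentence length, and normalization details are our assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = [f"word{i}" for i in range(30_000)]  # stand-in for the 30k Amazon types

# Rank-frequency weights: p(r) proportional to 1 / (r + beta) ** alpha.
ranks = np.arange(1, len(vocab) + 1)
weights = 1.0 / (ranks + 2.7) ** 1.0
weights /= weights.sum()

def sample_sentence(n_words=12, zipf=True):
    """Draw a word-order-free 'sentence' under the Zipf or uniform unigram model."""
    p = weights if zipf else None  # None -> uniform sampling over the vocabulary
    return " ".join(rng.choice(vocab, size=n_words, p=p))

print(sample_sentence())
```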
{
"id": "2210.12302_all_35",
"text": " In Tables 2 and 3, we note that surprisingly, even models pretrained on Zipfian and uniform distribution text continue to outperform the non-pretrained models. In fact, the Zipf version’s best accuracy is 3% higher than the standard Amazon data version and 2% compared to perturbed Amazon shuffled data version in case of BERT. Zipf outperforms standard amazon data by 1% and lags behind amazon shuffle by 3% for DeBERTA. The Uniform distribution version lags behind Zipf by 9% and 2% for BERT and DeBERTa respectively. We note that the Zipf and Uniform versions still use the prebuilt vocabulary from the Amazon data, and hence this text maintains morphological structure. However, the gains finally disappear for the Synthetic vocabulary model, which cannot leverage morphological structure in the text, and its performance is similar to the non-pretrained models. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_36",
"text": " We explore the non-linguistic inductive biases of pretrained LMs. While the general trend (that pretraining helps) is unsurprising, our analysis with models pretrained on different text corpora shows that this is not due to the model seeing related topics during pretraining. We find that these gains persist even in absence of any shared linguistic structure (in cross-lingual settings). Our observation that this behavior is seen even when pretraining on synthetically generated languages is intriguing and can be explored further by future work. ",
"title": "What do Large Language Models Learn beyond Language?"
}
] |
How different would a BC version of chain of thought be from the Lambada model?
|
Lambada is an algorithm for text-based deductive logical reasoning that combines the ability of LMs to handle realistic text input with the backward chaining (BC) technique for high-level reasoning [58].
|
[
58
] |
[
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is an important building block for automated knowledge discovery and holds the key for future advances across various scientific domains. While in recent years tremendous progress has been made towards natural language understanding thanks to pretrained language models (LMs) (Brown et al., 2020; Chowdhery et al., 2022, i.a.,), the performance of these models for logical reasoning still lags behind Rae et al. (2021); Creswell et al. (2023); Valmeekam et al. (2022) compared to the advancements in other areas such as reading comprehension and question-answering. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_1",
"text": " While many problems benefit from LM scaling, scaling has been observed to provide limited benefit for solving complex reasoning problems. For example, Creswell et al. (2023) observed that for the Gopher family of LMs (Rae et al., 2021), the benefit of scaling for logic-based tasks is significantly worse than for other language tasks. Moreover, while finetuning initially seemed to enable logical reasoning in LMs Clark et al. (2021); Tafjord et al. (2021), further exploration revealed that finetuned LMs mostly exploit spurious correlations (e.g., the correlation between the number of rules and the label) as opposed to learning to reason Zhang et al. (2022b); Schlegel et al. (2022); Liu et al. (2023). Recently, prompting strategies such as Chain-of-Thought Wei et al. (2022) and Scratchpad (Nye et al., 2022) have contributed to improving performance of LMs on reasoning tasks, although they have been also shown to struggle with proof planning for more complex logical reasoning problems Saparov and He (2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_2",
"text": " One solution to the aforementioned problems is to integrate the strength and reliability of classical AI models in logical reasoning with LMs Garcez and Lamb (2020); Marcus (2020). In the literature, there are two major approaches to logical reasoning Poole and Mackworth (2010): 1. Forward Chaining (FC) where one starts from the facts and rules (“theory”), and iterates between making new inferences and adding them to the theory until the goal statement can be proved or disproved, 2. Backward Chaining (BC) where one starts from the goal and uses the rules to recursively decompose it into sub-goals until the sub-goals can be proved or disproved based on the theory. Previous approaches to reasoning with LMs mostly incorporate elements of FC into LMs Tafjord et al. (2021); Creswell et al. (2023). FC requires selecting a subset of facts and rules from the entire set, which might be difficult for an LM as it requires a combinatorial search over a large space. Moreover, deciding when to halt and declare failure to prove is challenging in FC, as also noted by Creswell et al. (2023), sometimes requiring specialized modules trained on intermediate labels Creswell and Shanahan (2022). Indeed, the classical automated reasoning literature is heavily weighted towards BC or goal-directed strategies for proof-finding. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_3",
"text": " In this paper, we show experimentally that BC is better suited for text-based deductive logical reasoning, as it does not require a combinatorial search for subset selection and there are more natural halting criteria for it. We develop a hybrid LAnguage Model augmented BAckwarD chAining technique (Lambada), where BC drives the high-level proof planning, and the LM performs the textual understanding and individual reasoning steps. We conduct experiments with challenging datasets for LM reasoning containing examples expressed in naturalistic text. The datasets contain proof chains of up to 555 hops in depth, and examples where the goal can neither be proved nor disproved from the provided theory. We show that Lambada achieves substantially higher deductive accuracy, and is considerably more likely to generate valid reasoning chains compared to other techniques which find correct conclusions with spurious proof traces, while also being more query efficient than other LM-based modular reasoning approaches. Our results strongly indicate that future work on reasoning with LMs should incorporate backward chaining or goal-directed planning strategies. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_4",
"text": " The deep learning based models that have been developed to solve text-based (logical) reasoning tasks can be categorized as follows (see Huang and Chang 2022 for a recent survey of the literature). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_5",
"text": " Pretraining on Relevant Tasks: Pretraining an LM on corpora relevant to the target reasoning task can lead to improvements Hendrycks et al. (2021); Shen et al. (2021). Pretraining is, however, costly especially for larger LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_6",
"text": " Implicit Reasoning: These approaches finetune LMs to produce the label directly given the input Clark et al. (2021); Betz et al. (2021); Saeed et al. (2021); Han et al. (2022); reasoning is expected to happen implicitly in the parameters of the LM. It has been shown that finetuning LMs on logical reasoning tasks makes them learn spurious correlations Zhang et al. (2022b); Schlegel et al. (2022), and is not robust to multi-hop reasoning Kassner et al. (2020). Besides, finetuning large LMs is costly especially when the dataset is large, and may introduce distributional shocks to the model Kazemi et al. (2023). In this paper, we focus on models that only take in-context examples as supervision. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_7",
"text": " Explicit Reasoning: Generating the intermediate reasoning steps such as the chain of reasoning Wei et al. (2022); Nye et al. (2022); Dalvi et al. (2021); Zelikman et al. (2022); Zhang et al. (2022a) has shown substantial improvement for many reasoning tasks Suzgun et al. (2022). Such chains have been explored both in the forward and the backward directions, e.g., using multiple constrained LMs for logical reasoning (Zhang et al., 2022a). Gontier et al. (2020) investigated how transformer models perform when trained to perform forward or backward chaining, and drew conclusions about their internal reasoning strategies. We compare against a popular recent prompting strategy, namely Chain-of-Thought (CoT) Wei et al. (2022), from this category. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_8",
"text": " Verifiers: To improve CoT, some works train a verifier using chain-level labels. The verifier takes a reasoning chain produced by the model as input and judges the quality of the chain Cobbe et al. (2021); Shen et al. (2021); Jhamtani and Clark (2020); Zelikman et al. (2022). Using this verifier, one can then generate multiple reasoning chains (e.g., by running the algorithm multiple times with different decoding temperatures) and use the best chain according to the verifier. Since Lambada also generates proofs, verifiers are also applicable to our algorithm. In this paper, we assume not having access to chain-level labels, and leave experiments with verifiers as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_9",
"text": " Length generalization: A number of approaches specifically look into whether LMs can generalize from examples requiring shorter reasoning chains (shown to them either as demonstration or as finetuning data) to examples requiring longer chains Anil et al. (2022); Tafjord et al. (2021). With our model, length generalization comes for free because the model learns the building blocks of solving the problem that are applied as many times as needed to solve the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_10",
"text": " Modular Reasoning: These approaches break the problem into smaller modules and use separate LMs to solve each module Zhou et al. (2022); Khot et al. (2023); Sprague et al. (2022); Zhou et al. (2023); Dua et al. (2022); Wang et al. (2022); Schlag et al. (2023). LM-based approaches to logical reasoning typically makes use of a single LM module; for example, in Tafjord et al. (2021), a single LM module iteratively and exhaustively infers all conclusions based on the facts and rules, and then the goal statement is compared against the final set of conclusions to confirm if it can be proved from the theory. Since exhaustively deriving all conclusions is computationally expensive, Creswell et al. (2023) consider a more scalable approach where the conclusions that are derived are informed by the goal; they iteratively apply two LLM modules one selecting a subset of the facts and rules informed by the goal and the other making new inferences based on the selected facts and rules and adding it back to the theory. In this paper, we compare against the second approach. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_11",
"text": " Natural Language Inference (NLI): Logical reasoning can also be understood as identifying whether a logical entailment relation holds between two propositions (premise and hypothesis; the premise is the theory and the hypothesis is the statement to be proved). In this sense, NLI models are also relevant, although inferences under NLI typically adopt a more relaxed notion of entailment rather than purely logical Dagan et al. (2013); Bowman et al. (2015); Williams et al. (2018). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_12",
"text": " We focus on performing automated reasoning over facts, i.e., natural language assertions such as ‘‘Nice people are red’’, that are coherent but not necessarily grounded in reality. A rule is a natural language statement that is either of the form, or can be rewritten in the form, ‘‘If P then Q’’; e.g., ‘‘Rough, cold people are blue’’ can be rewritten as ‘‘If a person is rough and cold, then they are blue’’. P is called the antecedent and Q is called the consequent of the rule. A theory 𝒞𝒞\\mathcal{C} consists of facts ℱ={f1,f2,…,fn}ℱsubscript𝑓1subscript𝑓2…subscript𝑓𝑛\\mathcal{F}=\\{f_{1},f_{2},\\dots,f_{n}\\} and rules ℛ={r1,r2,…,rm}ℛsubscript𝑟1subscript𝑟2…subscript𝑟𝑚\\mathcal{R}=\\{r_{1},r_{2},\\dots,r_{m}\\}. We let 𝒢𝒢\\mathcal{G} represent a goal that we would like to prove or disprove based on the theory. An example theory with fictional characters and rules is demonstrated in Figure 1. Based on the theory, one should prove or disprove the goal ‘‘Eric is nice’’. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_13",
"text": " Backward chaining (BC) is a strategy for reasoning that starts from the goal and recursively breaks the goal into sub-goals based on the rules that can be applied to it, until the sub-goals can be proved or disproved based on the facts or no more rules can be applied to break down the sub-goal further. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_14",
"text": " Figure 1 shows an example of BC applied to a theory to prove a goal. Initially, BC verifies if the goal can be proved or disproved based on the facts (this step is omitted from the figure). Since none of the facts directly prove or disprove the goal, BC next selects a rule that can be applied to break down the goal into sub-goals. Whether or not a rule applies to a goal is determined by an operation called unification in logic; Rule6 has the same consequent as the goal so the operation can be applied, but the other rules have different consequents and it cannot be applied. Using Rule6, the goal can be broken down into three sub-goals that should be proved for the goal to be proved. BC then makes recursive calls to prove each sub-goal. The algorithm continues until either a halting criterion is reached (e.g., reaching a certain depth in search), or a sub-goal can no longer be broken down (e.g., the left sub-tree under ‘‘Eric is rough’’), or all sub-goals are proved (e.g., the right sub-tree under ‘‘Eric is rough’’). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_15",
"text": " The outcome of BC for a goal is either Proved, Disproved, or Unknown; e.g., its output for the goal in Figure 1 is Proved, for ‘‘Fred is not green?’’ is Disproved (because it contradicts Fact3), and for ‘‘Fred is round?’’ is Unknown (because the theory does not entail or contradict it). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
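For intuition, here is a minimal symbolic backward-chaining sketch over propositional facts and rules. It only distinguishes proved from not-proved (no Disproved/Unknown handling, no negation, no LM calls), so it is a simplification of the procedure described above rather than the paper's method; all names are illustrative.

```python
def backward_chain(goal, facts, rules, depth=5):
    """Prove `goal` from `facts` (a set of atoms) and `rules`
    (a list of (antecedents, consequent) pairs) by recursive decomposition."""
    if goal in facts:
        return True
    if depth == 0:
        return False
    for antecedents, consequent in rules:
        # A rule applies when its consequent matches the goal; then prove all sub-goals.
        if consequent == goal and all(
            backward_chain(sub, facts, rules, depth - 1) for sub in antecedents
        ):
            return True
    return False

facts = {"eric_is_kind", "eric_is_big"}
rules = [(["eric_is_kind", "eric_is_big"], "eric_is_nice")]
print(backward_chain("eric_is_nice", facts, rules))  # True
```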
{
"id": "2212.13894_all_16",
"text": " To enable applying BC for text-based reasoning, we introduce four LM-based modules: Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement, each implemented by showing relevant in-context demonstrations to a pretrained LM (see Appendix D.3 for details). We describe these modules and then proceed to the full algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_17",
"text": " Given a set of facts ℱℱ\\mathcal{F} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Fact Check module verifies if there exists a fact f∈ℱ𝑓ℱf\\in\\mathcal{F} such that f𝑓f entails 𝒢𝒢\\mathcal{G} (in which case the goal is proved) or f𝑓f entails the negation of 𝒢𝒢\\mathcal{G} (in which case the goal is disproved). If no such fact can be found, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_18",
"text": " We implement Fact Check with two sub-modules: the first sub-module selects a fact from the set of facts that is most relevant to the goal, and the second sub-module verifies if the goal can be proved or disproved based on that fact.111Note that we select only one fact because the goals and sub-goals in the datasets we work with can be proved/disproved using single facts; The two modules can be adapted to selected multiple facts if this is not the case. Since the first sub-module may fail to identify the best fact on the first try, if the truth of the goal remained unknown after one try, the selected fact can be removed and the sub-modules can be called again. This process can be repeated multiple times. In our experiments, we call the two sub-modules twice. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_19",
"text": " Given a set of rules ℛℛ\\mathcal{R} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Rule Selection module identifies the rules r∈ℛ𝑟ℛr\\in\\mathcal{R} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}. These rules are then used for decomposing the goal into sub-goals. If no such rule can be identified, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_20",
"text": " As we did for Fact Check, we implement Rule Selection with two sub-modules: the first sub-module identifies the consequent of each rule (independent of the goal), and the second sub-module takes the rule consequents and the goal as input and identifies which one unifies with the goal. Note that due to the recursive nature of BC, the Rule Selection module may be invoked multiple times during the proof of a goal. Since identifying the consequent of each rule is independent of the goal, this sub-module only needs to be called once. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_21",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}, the Goal Decomposition module identifies the sub-goals that need to be proved in order for 𝒢𝒢\\mathcal{G} to be proved or disproved. The sub-goals are identified based on the antecedent of r𝑟r. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_22",
"text": " In the case where we succeed in proving the antecedent of r𝑟r, whether the goal is proved or disproved depends on whether the sign of the goal agrees or disagrees with the sign of the consequent of r𝑟r. For instance, in Figure 1, for the goal ‘‘Eric is nice.’’, since the sign of the goal agrees with the sign of the consequent of Rule6 and the antecedent of the rule is proved, we conclude that the goal is proved. However, if Rule6 was ‘‘(...) is not going to be a nice individual.’’, then the sign of the goal would disagree with the sign of the consequent and so we would conclude that the goal is disproved. This motivates the fourth module, Sign Agreement, described below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_23",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G}, the Sign Agreement module verifies if the sign of the consequent of r𝑟r agrees or disagrees with the sign of the goal or not. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_24",
"text": " Algorithm 1 provides a high-level description of how the four LM modules described earlier can be integrated with BC to enable text-based logical reasoning (the function calls corresponding to LM modules are color-coded). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_25",
"text": " Lambada can be understood as a depth-first search algorithm over the facts and the rules. It takes as input a theory 𝒞=(ℱ,ℛ)𝒞ℱℛ\\mathcal{C}=(\\mathcal{F},\\mathcal{R}), a goal 𝒢𝒢\\mathcal{G}, and a depth D𝐷D that defines a halting criterion for the algorithm based on the maximum allowed depth for the search. The search depth is a natural halting criterion corresponding to the maximum number of reasoning hops required for answering questions. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_26",
"text": " Initially, the algorithm uses the Fact Check module to check if 𝒢𝒢\\mathcal{G} can be proved or disproved using the facts. If this is the case, then the algorithm stops and returns the result (Proved or Disproved). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_27",
"text": " If 𝒢𝒢\\mathcal{G} cannot be proved or disproved, then the algorithm checks the depth D𝐷D: if D=0𝐷0D=0, then the algorithm stops and returns Unknown indicating that 𝒢𝒢\\mathcal{G} could not be proved or disproved. Otherwise, the algorithm proceeds with applying rules. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_28",
"text": " The Rule Selection module is used to identify the rules ℛssubscriptℛ𝑠\\mathcal{R}_{s} from ℛℛ\\mathcal{R} whose consequent unifies with 𝒢𝒢\\mathcal{G}. Once the set ℛssubscriptℛ𝑠\\mathcal{R}_{s} is identified, if Lambada can start with the rules that have a higher chance of succeeding at (dis)proving the goal, it can save computations and be less error-prone. Therefore, we include a Rerank function in Lambada. Based on the intuition that shorter rules are likely to have fewer sub-goals (hence a higher chance of success), we start the search from shorter rules and proceed to longer rules if the shorter ones fail. We leave more sophisticated ranking strategies as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_29",
"text": " For each selected rule, the algorithm uses the Goal Decomposition module to decompose 𝒢𝒢\\mathcal{G} into a set of sub-goals 𝐆𝐆\\mathbf{G} that need to be proved and checks whether those sub-goals can be proved by making recursive calls to the algorithm (with reduced depth). If the sub-goals can be proved, then the algorithm uses the Sign Agreement module to check whether the sign of the rule consequent agrees or disagrees with the sign of 𝒢𝒢\\mathcal{G}. If it does, then the algorithm returns Proved and otherwise Disproved. If there is no rule for which the sub-goals can be proved, then Unknown is returned. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
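The control flow of the algorithm described in the preceding passages can be summarized in a short sketch in which the four LM modules are passed in as plain callables. The module signatures, the string return values, and the length-based Rerank are our illustrative assumptions about how the described steps fit together, not the authors' implementation.

```python
# Structural sketch of the Lambada loop; fact_check, rule_selection,
# goal_decomposition and sign_agreement stand in for prompted-LM calls.

def lambada(theory, goal, depth,
            fact_check, rule_selection, goal_decomposition, sign_agreement):
    result = fact_check(theory.facts, goal)  # "Proved" / "Disproved" / "Unknown"
    if result in ("Proved", "Disproved"):
        return result
    if depth == 0:
        return "Unknown"
    # Rerank: try shorter rules first, as in the paper's heuristic.
    for rule in sorted(rule_selection(theory.rules, goal), key=len):
        subgoals = goal_decomposition(rule, goal)
        if all(lambada(theory, sg, depth - 1, fact_check, rule_selection,
                       goal_decomposition, sign_agreement) == "Proved"
               for sg in subgoals):
            # Sign agreement decides between Proved and Disproved.
            return "Proved" if sign_agreement(rule, goal) else "Disproved"
    return "Unknown"
```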
{
"id": "2212.13894_all_30",
"text": " During a proof, Lambada may be called multiple times with the same theory and goal; in Appendix A we explain how cycles and redundant computations can be avoided using a cache. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_31",
"text": " We describe our baselines and datasets here, and provide further implementation details in Appendix D. Unless stated otherwise, all experiments are based on the PaLM 540B model Chowdhery et al. (2022). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_32",
"text": " We compare against the following two baselines. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_33",
"text": " Chain-of-Thought (CoT) Wei et al. (2022) is a popular neural approach based on demonstrating chains of inference to the LM within the in-context prompt. In addition to the few-shot demonstrations in <INPUT>/<LABEL> format in typical in-context learning settings, in CoT, an intermediate explanation for the label is also provided (<INPUT>/<EXPLANATION>/<LABEL>). In our work, the explanation corresponds to the proof. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_34",
"text": " Selection-Inference (SI) Creswell et al. (2023) is a strong modular reasoning approach based on forward chaining. SI contains two modules: (1) selection, which, guided by the goal, selects a subset of the facts and rules from which new conclusions can be derived toward proving the goal, and (2) inference, which takes the selected facts and rules and derives a new conclusion. The two modules are called iteratively, each time producing a single conclusion that is added back to the theory before the next iteration. The iterations continue until a halting criterion is met (a fixed number of steps in Creswell et al. 2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_35",
"text": " We experiment with challenging deductive logical reasoning datasets outlined below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_36",
"text": " ProofWriter Tafjord et al. (2021) is a commonly used synthetic dataset for testing logical reasoning when facts and rules are expressed in naturalistic text. It contains two subsets: an open-world assumption (OWA) subset and a closed-world assumption (CWA) subset. In this paper, we use the OWA subset. Each example is a (theory, goal) pair and the label is one of {{\\{Proved, Disproved, Unknown}}\\} where Unknown indicates that the goal can neither be proved nor disproved. The dataset has five parts, each part requiring 00, ≤1absent1\\leq 1, ≤2absent2\\leq 2, ≤3absent3\\leq 3 and ≤5absent5\\leq 5 hops of reasoning, respectively. We report two sets of results on this dataset: (1) with examples labeled Unknown removed (for compatibility with previous work), and (2) with all three labels. Note that intermediate proof chains from ProofWriter are not used by our models in making predictions. For both cases, due to the cost of inference, we used the first 100010001000 examples in the test set. Hereafter, we refer to these two subsets as ProofWriter-PD and ProofWriter-PUD. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_37",
"text": " PrOntoQA Saparov and He (2023) is a synthetic dataset created to analyze the capacity of LM-based approaches for logical reasoning. Compared to ProofWriter, PrOntoQA has lower natural language diversity and less l fact/rule variations (e.g., no conjunctions). However, the search traces typically contain multiple paths with only one of them leading to the proof, thus enabling testing the proof planning of different models. This dataset has multiple versions; we use the fictional characters version, which is one of the hardest versions according to Saparov and He (2023). Similarly to ProofWriter, each version of PrOntoQA is divided into different parts depending on the depth of reasoning chains required (111, 333, and 555 hops). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_38",
"text": " ParaRules Tafjord et al. (2021) is a version of ProofWriter where the synthetically generated sentences in the theory are rewritten by crowdworkers to increase diversity and naturalness of the text. This lets us move beyond evaluating reasoning with templatic expressions, which is a key limitation of the other datasets. Each fact in ParaRules may be a combination of several sub-facts (see Fig. 1 for an example). The examples require proof depths of up to 555 and the label can be Proved, Disproved, or Unknown. We found some minor quality issues in ParaRules; we manually verified and fixed the first 500500500 examples of the test set (see Appendix D.2) and used this set for evaluation. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_39",
"text": " We now describe the results and compare Lambada and the baselines in detail. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_40",
"text": " The results are reported in Figure 2, (a)–(d).222Due to the low performance of SI on ProofWriter and PrOntoQA and its high number of LM calls (see Figure 7), we only compared Lambada against CoT for ParaRules. Lambada significantly outperforms the baselines, especially on ProofWriter-PUD which contains Unknown labels (44%percent4444\\% relative improvement compared to CoT and 56%percent5656\\% compared to SI on Depth-5), the higher depths of PrOntoQA (37%percent3737\\% relative improvement compared to CoT and 113%percent113113\\% compared to SI on Depth-5), and the ParaRules dataset (43%percent4343\\% relative improvement compared to CoT). These results overall show the merit of Lambada for logical reasoning. We highlight that the reasoning capacity of Lambada robustly generalizes to more naturalistic expressions, as demonstrated by the high accuracy on ParaRules, which is exactly the desired outcome of combining the strengths of an LM and a symbolic reasoning algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_41",
"text": " The results in Figure 2(a) reveal a shortcoming of the CoT approach in dealing with Unknown labels. That is, unlike the examples for which the label is Proved or Disproved, there is no natural chain of thought for the examples whose labels are Unknown. Nevertheless, the performance of CoT is competitive for the ProofWriter-PD dataset, and the accuracy does not diminish substantially with increasing depth. We investigate the reason for this behaviour of CoT in the next section. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_42",
"text": " To understand the reason behind the high accuracy of CoT on higher depths of ProofWriter-PD, we randomly selected 505050 examples from Depth-5 of the dataset where CoT predicted the label correctly, and manually verified if the proof chain is correct or not. For comparison, we also manually verified the proofs generated by Lambada following a similar procedure. The results are reported in Figure 2(e). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_43",
"text": " While Lambada mostly produces correct chains, CoT produces correct chains only for 28%percent2828\\% of the examples. We find that hallucination is the main source of error (48%percent4848\\% of the examples; see Appendix B.2 for other prominent failure modes). The hallucinated facts and rules mostly resulted in shortcuts to the correct answer. This hints at the possibility of spurious correlations in ProofWriter-PD that can be exploited by CoT (see Appendix B.2, Figure 10 for examples). This result is consistent with previous work showing that when LMs are asked to solve logical reasoning end-to-end, they rely on spurious correlations Zhang et al. (2022b). Note that for modular approaches like SI and Lambada, the intermediate modules are impervious to the spurious correlations between the input and the label and do not suffer from this issue. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_44",
"text": " As previously explained, SI is based on forward chaining and its selection module requires a combinatorial search to find the right subset of facts and rules (see Appendix C), and the search space becomes progressively larger in each iteration of the algorithm as new inferences are added to the theory. To verify whether the increase in the search space makes forward chaining progressively harder, we measured the success rate of the k𝑘k-th inference of SI for different values of k𝑘k on Depth-5 of PrOntoQA (see Appendix B.3 for details). From the results in Figure 3, we can see that the success rate indeed decreases in the later inferences of the model, where the size of the input theory is larger and therefore a larger space needs to be searched to find the right combination of facts and rules. Note that none of the components in Lambada require selecting a subset, hence no combinatorial search is required (see Appendix C for more details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_45",
"text": " SI also suffers from inferring redundant facts. Figure 4 reports the number of unique inferences from SI for the examples in ProofWriter-PD (Depth-5) where SI incorrectly predicted Unknown (i.e., examples where a proof exists but SI failed to find it). The result shows that SI inferences contained no redundant facts only 29%percent2929\\% of the time; in 7%percent77\\% of the cases, all 555 inferred facts were identical, and in another 10%percent1010\\%, only two unique inferences were made. This shows that SI, and maybe more generally forward-chaining approaches, suffer from redundant inference. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_46",
"text": " SI also over-predicts Disproved in the binary case and Unknown in the three-way classification case (see Appendix B.4), performing even worse than the majority class for Depth-5 of PrOntoQA which has more Proved labels than Disproved. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_47",
"text": " These results, together with Figure 2, show that backward chaining (which is the backbone of reasoning in Lambada) is a better choice compared to forward chaining (the backbone in SI). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_48",
"text": " Our results may raise the question of whether it is enough to directly incorporate the steps of backward chaining into CoT prompts, or if modularity (as in Lambada) is also needed. To answer this question, we experiment with a backward version of CoT where the proofs are written in the backward direction from the goal to the premises. The label accuracies are presented in Figure 5(a)–(b) for ProofWriter-PUD and ProofWriter-PD, and their proof accuracy on ProofWriter-PD (Depth-5) in Figure 5(c). The label accuracy of forward and backward CoT are comparable, but forward CoT leads to better performance on PUD and backward CoT leads to better performance on PD. For proof accuracy, however, we see a clear difference between the two versions where backward CoT produces substantially lower quality proofs compared to forward chaining. This result is consistent with the observations of Gontier et al. (2020) for finetuned LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_49",
"text": " The above results show that a modular formulation (as in Lambada) is key to successful logical reasoning and simply providing CoT in the backward direction does not suffice. We note, however, that future work can use the traces of our model to finetune (smaller) language models (e.g., Zelikman et al. 2022), or use the traces as training data in future language models to improve their performance with CoT prompting. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_50",
"text": " Taking the label and proof accuracy results together, there is also a potential that backward CoT models are more heavily relying on spurious correlations for the PD case where backward CoT outperformed CoT, as backward CoT achieves a similar label accuracy as forward CoT but with a much lower proof accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_51",
"text": " In Figure 1, we show the search trace created by Lambada for an example from ParaRules, where the answer was predicted correctly. From the figure, one can see how backward chaining helps Lambada effectively search and create the reasoning chain and how the LM helps fact checking, rule selection, goal decomposition, and sign agreement checking. In Appendix B.1, we include an example that has a much larger search trace. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
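The record above describes how Lambada's LM-backed modules (fact checking, rule selection, goal decomposition, sign agreement checking) cooperate during backward chaining. Below is a minimal, purely symbolic sketch of that control flow; the simple structural checks are illustrative stand-ins for the LM calls and are not the authors' implementation.

```python
# Toy backward chaining in the spirit of Lambada; every helper here would be
# an LM module in the real system, so this is only a control-flow sketch.
from dataclasses import dataclass

@dataclass
class Rule:
    antecedents: list   # sub-goals that must all be provable ("Goal Decomposition")
    consequent: str     # what the rule concludes

def backward_prove(goal, facts, rules, depth=0, max_depth=5):
    """Return True if `goal` follows from the facts and rules within max_depth."""
    if depth > max_depth:
        return False
    if goal in facts:                      # "Fact Check": goal stated directly
        return True
    for rule in rules:                     # "Rule Selection"
        if rule.consequent == goal:        # "Sign Agreement" trivialised to equality
            if all(backward_prove(g, facts, rules, depth + 1, max_depth)
                   for g in rule.antecedents):
                return True
    return False

# Tiny example theory in the spirit of ProofWriter.
facts = {"Erin is young.", "Erin is kind."}
rules = [Rule(["Erin is young.", "Erin is kind."], "Erin is nice.")]
print(backward_prove("Erin is nice.", facts, rules))   # -> True
```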
{
"id": "2212.13894_all_52",
"text": " To understand which components in Lambada are responsible for the failure cases, we computed the individual accuracy of the four modules described in Section 3. For this purpose, we created four datasets from the validation set of ProofWriter, each measuring only the performance of one module in isolation (see Appendix D.1 for details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_53",
"text": " Based on the results of the PaLM 540B model in Figure 6, Rule Selection is the lowest performing module followed by Goal Decomposition. It is possible that the Rule Selection module (partially) fails for some examples but Lambada still arrives at the correct conclusion and proof (e.g., if in Figure 1 the third call to Rule Selection only returned Rule5). For Fact Check, when we allow the model to only select one fact, the accuracy is 0.940.940.94 but when we allow the model to select two facts, the accuracy is near perfect. The Sign Agreement module also shows near-perfect accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_54",
"text": " We repeat the experiment from Section 5.6 with PaLM 62B and 8B to examine the effect of LM scale on Lambada. According to the results in Figure 6, when we use PaLM 62B, the performance of the Goal Decomposition and Sign Agreement modules remain comparable, but the performance for the Fact Check and Rule Selection modules drop substantially. Unlike the first two modules, the second two rely on a one-to-many comparison between the goal and each of the facts/rules which may require a larger model capacity. Moreover, we observe that in PaLM 8B, the accuracy for all components drops significantly, in some cases becoming close to random prediction. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_55",
"text": " We argue that the extent to which the higher-level reasoning algorithm breaks the problem into sub-problems should be dependent on the scale and power of the base LMs. If smaller LMs are used, then one may need finer-grained problem decomposition (e.g., further decomposing the one-to-many comparisons in the selection module). And as LMs become larger and stronger in the future, one could rely on them to solve problems with a coarser-grained decomposition of the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_56",
"text": " Another advantage of Lambada is its efficiency compared to other approaches that require multiple LM inference calls per example such as SI. In Figure 7, we compare the average number of LM calls per example, for different depths of ProofWriter-PUD. Lambada requires much fewer calls compared to SI, especially at higher depths: for Depth-1, Lambada requires 3.8x fewer calls whereas for Depth-5 it requires 11.8x fewer calls. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_57",
"text": " To analyze the lexical sensitivity of Lambada, we modified the test set of ProofWriter-PUD by replacing various lexical items (names, adjectives, and verbs) with novel tokens and the rule templates with novel ones. We then compared the performance of Lambada on the original and the modified test sets using the same few-shot examples. The details of the modifications are in Appendix B.5. As can be seen in Figure 8, the performance of Lambada remains almost unchanged, demonstrating robustness to lexical and templatic variations. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_58",
"text": " We developed Lambada, an algorithm for deductive logical reasoning with natural language that combines the capacity of LMs to handle naturalistic text input with the backward chaining algorithm for robust symbolic reasoning. We showed that Lambada achieves significant improvements over competitive approaches on challenging benchmarks, both in terms of label accuracy (predicting if a statement can be proved or disproved based on a theory) and proof accuracy. Importantly, this improvement was also observed in a dataset that expresses the theory in more naturalistic expressions, clearly illustrating the benefit of combining an LM with reasoning modules. We also demonstrated the query efficiency and lexical robustness of Lambada. Although in this paper we only experiment with formal reasoning problems and datasets, we believe our key insight on the efficacy of backward, goal-directed reasoning with LMs has broader implications and can be adapted to other NLP tasks where multi-step inference is required. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
}
] |
How does TransE learn entity and relation embeddings in an unsupervised way?
|
TransE is an unsupervised learning method that learns latent representations for a knowledge triplet [21].
|
[
21
] |
[
{
"id": "2204.11673_all_0",
"text": " Passage Re-ranking is a crucial stage in modern information retrieval systems, which aims to reorder a small set of candidate passages to be presented to users. To put the most relevant passages on top of a ranking list, a re-ranker is usually designed with powerful capacity in modeling semantic relevance, which attracted a wealth of research studies in the past decade (Guo et al., 2020). Recently, large-scale pre-trained language models (PLMs), e.g. BERT (Devlin et al., 2018), ERNIE (Sun et al., 2019) and RoBERTa (Liu et al., 2019), have dominated many natural language processing tasks, and have also achieved remarkable success on passage re-ranking. For example, PLM based re-rankers (MacAvaney et al., 2019; Li et al., 2020; Dong and Niu, 2021; Dong et al., 2022) have achieved state-of-the-art performance, which takes the concatenation of query-passage pair as input, and applies multi-layer full-attention to model their semantic relevance. Their superiority can be attributed to the expressive transformer structure and the pretrain-then-finetune paradigm, which allow the model to learn useful implicit knowledge (i.e., semantic relevance in the latent space) from massive textual corpus (Fan et al., 2021). ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_1",
"text": " However, implicit knowledge still has some inherent weaknesses, which limits the applicability of PLMs based re-rankers. First, queries and passages are usually created by different persons and have different expression ways (Nogueira et al., 2019b), such as word usage and language style. Worse still, the data distributions of search queries and web contents are highly heterogeneous (Liu et al., 2021), where various specialized domains (e.g., bio-medical) may only have few training examples in a general corpus. Domain-specific knowledge can hardly be revealed and captured by the model, and thus the processing of domain-specific queries is often inaccurate. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_2",
"text": " To overcome the limitations, it is essential to incorporate the knowledge graph as explicit knowledge to PLM based re-rankers. Thus we propose Knowledge Enhanced Re-ranking Model (KERM), which utilizes external knowledge to explicitly enhance the semantic matching process in PLM based re-rankers. Intuitively, the difference in expression ways can be mitigated by the triplet with \"synonymy\" as relation in knowledge graph, and all the triplets can enrich the domain knowledge. The overall workflow of KERM is depicted in Fig. 1. To the the best of our knowledge, this is the first attempt for knowledge enhanced PLMs for passage re-ranking. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_3",
"text": " Despite the knowledge graph is a desirable source of explicit knowledge, it is non-trivial to take advantage of explicit knowledge directly for passage re-ranking due to the following two challenges: ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_4",
"text": " • Challenge 1. Existing knowledge graph are not constructed for re-ranking task. They usually contain trivial factual triples, which can hardly bring information gain. The inappropriate selection of external knowledge could even jeopardize the re-ranker performance. How to utilize existing knowledge graph to re-ranking task is remain a challenge. • Challenge 2. The explicit knowledge and implicit knowledge are highly heterogeneous due to the different sources, which makes the aggregation of the two difficult. How to mutually refine each other and effectively aggregate explicit knowledge into implicit knowledge to alleviate the semantic gap between query and passage is still a challenge. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_5",
"text": " In general, the workflow of KERM can be divided into knowledge graph distillation and knowledge aggregation to tackle the above challenges. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_6",
"text": " For knowledge graph distillation, we propose a novel pipeline to establish knowledge meta graphs, which only retain informative knowledge for passage re-ranking. Specifically, we first distill a graph globally for passage re-ranking scenario from an existing knowledge graph by pruning some unreliable or noisy relations based on TransE embedding. Then for a specific query-passage pair, we extract entities from both the query and passage, and construct a query-document bipartite entity graph based on query and passage entities and their k-hop neighbors, namely knowledge meta graph. Challenge 1. could be addressed in this distillation process. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_7",
"text": " For knowledge aggregation, we design a novel interaction module between text and knowledge graph to combine the implicit and explicit knowledge. To derive implicit knowledge from text, we employ PLM as text encoder. To be aligned with implicit knowledge, knowledge meta graph is encoded with a multi-layer graph neural network (i.e. k-hop), namely Graph Meta Network (GMN). Each transformer layer outputs word representations. Each graph meta network layer outputs entity representations. Both word and entity representations are aggregated as the input of the following transformer and GMN layer, respectively in a novelly designed module, namely knowledge injector. Therefore through knowledge aggregation, implicit knowledge from text corpus and explicit knowledge from existing knowledge graph can mutually boost each other to achieve a better re-ranking performance, in which the issues in Challenge 2. could be mitigated. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_8",
"text": " Overall, our contributions can be summarized as follows: • It is the first attempt to solve the knowledge enhanced PLMs problem for passage re-ranking. The key motivation lies in that bridging the semantic gap between the query and passage with the help of both kinds of knowledge. • We design a novel knowledge graph distillation method. It refines a reliable knowledge graph from the existing one globally and constructs a knowledge meta graph based on the refined graph locally. • We propose a novel aggregation of PLM and graph neural network framework to model the interaction between explicit knowledge and implicit knowledge. • Experimental results show the effectiveness of KERM on both general and domain specific data, achieving state-of-the-art performance for passage re-ranking. We also conduct a comprehensive study for the effects of each module in our method. The code is available at https://github.com/DQ0408 /KERM. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_9",
"text": " In this section, we introduce several recently-proposed PLMs based re-rankers and retrievers. Moreover, we also present the general background of the related techniques involved in this paper, i.e. Knowledge Enhanced Pre-trained Language Models (KE-PLMs) and Graph Neural Network. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_10",
"text": " Existing PLMs based re-rankers typically improve ranking performance from two aspects: (1) By optimizing the ranking procedure: monoBERT (Nogueira and Cho, 2019) is the first work that re-purposed BERT as a passage re-ranker and achieves state-of-the-art results. duoBERT (Nogueira et al., 2019a) integrates monoBERT in a multistage ranking architecture and adopts a pairwise classification approach to passage relevance computation. UED (Yan et al., 2021) proposes a cascade pre-training manner that can jointly enhance the retrieval stage through passage expansion with a pre-trained query generator and thus elevate the re-ranking stage with a pre-trained transformer encoder. The two stages can facilitate each other in a unified pre-training framework. H-ERNIE (Chu et al., 2022) proposes a multi-granularity PLM for web search. (2) By designing rational distillation procedure: LM Distill + Fine-Tuning (Gao et al., 2020) explores a variety of distillation methods to equip a smaller re-ranker with both general-purpose language modeling knowledge learned in pre-training and search- specific𝑠𝑝𝑒𝑐𝑖𝑓𝑖𝑐specific relevance modeling knowledge learned in fine-tuning, and produces a faster re-ranker with better ranking performance. CAKD (Hofstätter et al., 2020) proposes a cross-architecture knowledge distillation procedure with a Margin-MSE loss, which can distill knowledge from multiple teachers at the same time. RocketQAv1 (Qu et al., 2021) trains dual-encoder and cross-encoder in a cascade manner, which leverages the powerful cross-encoder to empower the dual-encoder. RocketQAv2 (Ren et al., 2021) proposes a novel approach that jointly trains the dense passage retriever and passage re-ranker. The parameters of RocketQAv2 are inherited from RocketQAv1. Besides, RocketQAv2 utilizes a large PLM for data augmentation and denoising, which can also be regarded as a distillation procedure. Notably, these two types of studies anticipate more insightful information to be captured by the advanced ranking and training procedures, while neglecting the limitations of implicit knowledge extracted from noisy and heterogeneous data. Therefore, in this paper, we proposed the first knowledge-enhanced PLM based re-ranker, which thoughtfully leverages explicit external knowledge that improve the effectiveness of the model. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_11",
"text": " The low-dimensional dense representations for query and passage are computed by PLMs based retrievers from the dual-encoder architecture. Afterward, the candidate passage set could be retrieved efficiently via approximate nearest neighbor algorithms. Existing studies could be categorized into two parts: (1) By optimizing the matching stage: DPR (Karpukhin et al., 2020) is the first study to leverage PLM to empower the retriever by a single vector. Other researches, such as RepBERT (Zhan et al., 2020), ColBERT (Khattab and Zaharia, 2020), COIL (Gao et al., 2021) and Interactor (Ye et al., 2022), obtain multiple vectors for query and passage for matching. (2) By optimizing the representation learning module: RocketQAv1 (Qu et al., 2021) and RocketQAv2 (Ren et al., 2021) boost the representation learning of retriever by leveraging the power of cross-encoder in a cascade or joint manner. Other studies boost the representation learning by designed IR-oriented pre-training tasks. ICT (Lee et al., 2019) treats sentences as pseudo-queries and matched them to the passage they originate from. Condenser (Gao and Callan, 2021) utilizes a novel pre-training task, which can produces an information-rich representation to condense an input sequence. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_12",
"text": " Existing KE-PLMs can be categorized by the granularity of knowledge they incorporate from knowledge graph (KG), as text-based knowledge, entity knowledge and KG meta-graphs. To integrate text-based knowledge, RAG (Lewis et al., 2020) and KIF (Fan et al., 2020) first retrieve top-k documents from Wikipedia using KNN-based retrieval, and the PLM model is employed to generate the output conditioned on these retrieved documents. Entity-level information can be highly useful for a variety of natural language understanding tasks. Hence, many existing KE-PLMs target this type of simple yet powerful knowledge. ERNIE(BAIDU) (Sun et al., 2019) introduces a new pre-training strategy of language model which masking phrases or entities in order to implicitly learn both synaptic and semantic knowledge from these units. ERNIE(THU) (Zhang et al., 2019) integrates informative entity representations in the knowledge module into the underlying layers of the semantic module based on the alignments between text and entity to equip the model with the ability of knowledge awareness. As knowledge graphs provide richer information than simply entity, more and more researchers start to explore integration of more sophisticated knowledge, such as meta-graphs in KG. CokeBERT (Su et al., 2021) proposes a novel semantic-driven Graph Neural Network (GNN) to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text. CoLake (Sun et al., 2020a) also uses GNN to aggregate information from the constructed meta-graph in both pre-training and inference. CoLake converts the meta-graph into token sequence and appends it to input sequence for PLMs, which is distinctive to CokeBERT. Although extensive research has been proposed up to now to address the knowledge-aware problem, none exists which constrained on how to use knowledge to empower PLMs particularly for re-ranking tasks. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_13",
"text": " Existing Graph Neural Networks (GNNs) mainly fall into two categories: graph-based and path-based. Graph-based GNNs learn the structured information by directly passing nodes massage on the graph structure. GCNs (Kipf and Welling, 2016) introduce a novel approach on graph-structured data by aggregating messages from its direct neighbors to learn the graph-structured feature efficiently and effectively. R-GCNs (Schlichtkrull et al., 2018) are developed specifically to encode the highly multi-relational graphs by defining relation-specific weight matrix for each edge type. In contrast, path-based GNNs first decompose the graph into paths and then pass nodes massage on the path level, which can naturally utilize the relationship between neighbors to transmit messages. RNs (Santoro et al., 2017) use MLPs to encode all paths in a graph and then pool the representation of paths to generate a global representation for the graph. KagNet (Lin et al., 2019) is a combination of GCNs, LSTMs and a hierarchical path-based attention mechanism, which forms an architecture for modeling nondegenerate paths in a graph. In this work, we use path-based GNNs to formulate our GMN module for its good scalability on modeling relationship information in heterogeneous graphs. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_14",
"text": " Given a query q, passage re-ranking aims at ordering a set of ϰitalic-ϰ\\varkappa passages, i.e., 𝒫={pκ}κ=1ϰ𝒫superscriptsubscriptsubscriptp𝜅𝜅1italic-ϰ\\mathcal{P}=\\left\\{\\textbf{p}_{\\kappa}\\right\\}_{\\kappa=1}^{\\varkappa}, which is usually retrieved from a large-scale passage collection by a retriever, e.g. BM25 (Yang et al., 2017), DPR (Karpukhin et al., 2020) etc. In particular, a passage is a sequence of words p={wp}p=1|p|psuperscriptsubscriptsubscript𝑤𝑝𝑝1p\\textbf{p}=\\{w_{p}\\}_{p=1}^{|\\textbf{p}|}, where |p|p|\\textbf{p}| is the length of passage p. Similarly, a query is a sequence of words q={wq}q=1|q|qsuperscriptsubscriptsubscript𝑤𝑞𝑞1q\\textbf{q}=\\{w_{q}\\}_{q=1}^{|\\textbf{q}|}. Note that a passage p consists of T𝑇T sentences p={sτ}τ=1Tpsuperscriptsubscriptsubscripts𝜏𝜏1𝑇\\textbf{p}=\\{\\textbf{s}_{\\tau}\\}_{\\tau=1}^{T}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_15",
"text": " Following a previous study (Zou et al., 2021), a desirable re-ranker is a scoring function f∗(⋅,⋅)superscript𝑓⋅⋅f^{*}(\\cdot,\\cdot) that maximizes the consistency between its predictions (denoted as Y^q,𝒫={f(𝐪,𝐩κ)|𝐩κ∈𝒫}subscript^𝑌q𝒫conditional-set𝑓𝐪subscript𝐩𝜅subscript𝐩𝜅𝒫\\hat{Y}_{\\textbf{q},\\mathcal{P}}=\\{f(\\mathbf{q},\\mathbf{p}_{\\kappa})\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathbf{p}_{\\kappa}\\in\\mathcal{P}\\}) and the ground truth labels (denoted as Y={yκ}κ=1ϰ𝑌superscriptsubscriptsubscript𝑦𝜅𝜅1italic-ϰY=\\{y_{\\kappa}\\}_{\\kappa=1}^{\\varkappa}), i.e., ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_16",
"text": " (1) f∗=maxf𝔼{q,𝒫,Y}ϑ(Y,Y^q,𝒫),superscript𝑓subscript𝑓subscript𝔼q𝒫𝑌italic-ϑ𝑌subscript^𝑌q𝒫f^{*}=\\max_{f}\\mathbb{E}_{\\{\\textbf{q},\\mathcal{P},Y\\}}{\\vartheta(Y,\\hat{Y}_{\\textbf{q},\\mathcal{P}})}, where ϑitalic-ϑ\\vartheta is a ranking metric (e.g., MRR@10) that measures the consistency between the predictions and the labels. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
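Since the objective above is stated in terms of a ranking metric such as MRR@10, a small sketch of how that metric is commonly computed may help; the function names and data layout below are illustrative assumptions, not code from the paper.

```python
# Minimal MRR@10 sketch: reciprocal rank of the first relevant passage in the
# top 10, averaged over queries.

def mrr_at_10(ranked_passage_ids, relevant_ids):
    """Reciprocal rank of the first relevant passage within the top 10 (0 if none)."""
    for rank, pid in enumerate(ranked_passage_ids[:10], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0

def mean_mrr_at_10(run):
    """`run` maps a query id to (ranked passage ids, set of relevant ids)."""
    scores = [mrr_at_10(ranking, rels) for ranking, rels in run.values()]
    return sum(scores) / len(scores)

# Example: one query whose first relevant passage sits at rank 3.
run = {"q1": (["p9", "p4", "p7", "p2"], {"p7"})}
print(mean_mrr_at_10(run))   # -> 0.333...
```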
{
"id": "2204.11673_all_17",
"text": " A knowledge base is usually represented as a directed graph 𝒢={ℰ,ℛ}𝒢ℰℛ\\mathcal{G}=\\{\\mathcal{E},\\mathcal{R}\\}, where the node set ℰℰ\\mathcal{E} represents entities, and the edge set ℛℛ\\mathcal{R} is composed of relations between entities. A triplet (eh,r,et)subscript𝑒ℎ𝑟subscript𝑒𝑡(e_{h},r,e_{t}) is the basic unit in the knowledge graph, where eh,et∈ℰsubscript𝑒ℎsubscript𝑒𝑡ℰe_{h},e_{t}\\in\\mathcal{E} are head and tail entity respectively, and r∈ℛ𝑟ℛr\\in\\mathcal{R} refers to their relations. For example, (apple,used_for,eating)𝑎𝑝𝑝𝑙𝑒𝑢𝑠𝑒𝑑_𝑓𝑜𝑟𝑒𝑎𝑡𝑖𝑛𝑔(apple,used\\_{for},eating) means that \"apple is used for eating\". ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_18",
"text": " To leverage explicit knowledge in 𝒢𝒢\\mathcal{G} for passage re-ranking, we anticipate building a novel knowledge-enhanced passage re-ranker, whose objective can be defined as (2) f∗=maxf𝔼{q,𝒫,Y}ϑ(Y,Y^q,𝒫,𝒢),superscript𝑓subscript𝑓subscript𝔼q𝒫𝑌italic-ϑ𝑌subscript^𝑌q𝒫𝒢f^{*}=\\max_{f}\\mathbb{E}_{\\{\\textbf{q},\\mathcal{P},Y\\}}{\\vartheta(Y,\\hat{Y}_{\\textbf{q},\\mathcal{P},\\mathcal{G}})}, where Y^q,𝒫,𝒢={f(𝐪,𝐩κ|𝒢)|𝐩κ∈𝒫}subscript^𝑌q𝒫𝒢conditional𝑓𝐪conditionalsubscript𝐩𝜅𝒢subscript𝐩𝜅𝒫\\hat{Y}_{\\textbf{q},\\mathcal{P},\\mathcal{G}}=\\{f(\\mathbf{q},\\mathbf{p}_{\\kappa}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathbf{p}_{\\kappa}\\in\\mathcal{P}\\}, and f(𝐪,𝐩κ|𝒢)𝑓𝐪conditionalsubscript𝐩𝜅𝒢f(\\mathbf{q},\\mathbf{p}_{\\kappa}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G}) represents the ranking score that is aware of the explicit knowledge extracted from 𝒢𝒢\\mathcal{G}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_19",
"text": " In this section, we introduce Knowledge Enhanced Re-ranking Model (KERM), which leverages explicit knowledge that improves conventional cross-encoder for passage re-ranking. Notably, the main challenges of incorporating explicit knowledge are to 1) distill a knowledge graph that is useful for re-ranking task, and 2) aggregate the explicit knowledge with the current implicit knowledge in an appropriate manner that can improve the overall performance. Hence our proposed approach is mainly composed of two parts, i.e., knowledge graph distillation and knowledge aggregation, to tackle two challenges respectively. In the rest of this section, we first describe how to distill a reliable knowledge graph globally and build a knowledge meta graph locally from it for a specific query-passage pair. Then, we present how to combine the distilled knowledge graph and existing text corpus to derive a knowledge enhanced passage re-ranker. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_20",
"text": " Existing knowledge graphs are usually incomplete and noisy. It is unsuitable for direct introduction of them to the current model. Specially, there is no knowledge base particularly for passage re-ranking task. For example, ConceptNet (Speer et al., 2017) is a general knowledge graph that contains common sense knowledge, where the information might not be useful for our passage re-ranking task. Therefore, it is critical for us to propose a knowledge graph distillation process from both global and local perspectives. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_21",
"text": " Given a global knowledge graph 𝒢𝒢\\mathcal{G}, the first step is to eliminate those knowledge that might be noisy to be applied. To achieve this, we use TransE (Bordes et al., 2013) to measure the reliability of a given knowledge triplet. In particular, TransE is an unsupervised learning method that learns latent representations for a knowledge triplet (eh,r,et)subscript𝑒ℎ𝑟subscript𝑒𝑡(e_{h},r,e_{t}). Intuitively, it models the latent distribution of knowledge in a given knowledge graph, and those who are out of this distribution can be viewed as less informative knowledge, which should not be used. Based on this, we use the entity embeddings pre-trained by TransE to calculate a distance metric between two linked entities as (3) Rele(eh,r,et)=𝐄(eh)⋅𝐄(r)+𝐄(eh)⋅𝐄(et)+𝐄(r)⋅𝐄(et),𝑅𝑒subscript𝑙𝑒subscript𝑒ℎ𝑟subscript𝑒𝑡⋅𝐄subscript𝑒ℎ𝐄𝑟⋅𝐄subscript𝑒ℎ𝐄subscript𝑒𝑡⋅𝐄𝑟𝐄subscript𝑒𝑡Rel_{e}(e_{h},r,e_{t})=\\mathbf{E}({e_{h}})\\cdot\\mathbf{E}(r)+\\mathbf{E}({e_{h}})\\cdot\\mathbf{E}({e_{t}})+\\mathbf{E}({r})\\cdot\\mathbf{E}({e_{t}}), (4) Dist(eh,et)=1Rele(eh,r,et),𝐷𝑖𝑠𝑡subscript𝑒ℎsubscript𝑒𝑡1𝑅𝑒subscript𝑙𝑒subscript𝑒ℎ𝑟subscript𝑒𝑡Dist(e_{h},e_{t})=\\frac{1}{Rel_{e}(e_{h},r,e_{t})}, where 𝐄(e)𝐄𝑒\\mathbf{E}({e}) and 𝐄(r)𝐄𝑟\\mathbf{E}({r}) are the TransE embeddings of entity and relation, respectively, and the inner product measures the relevance between two vectors. As the objective of TranE is aligned with minimizing the distance shown in Eq.(4), we can consider those knowledge triplets with small distance values as informative knowledge. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
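A small numpy sketch of the reliability score in Eq.(3) and the distance in Eq.(4) described above; the random embeddings are placeholders for pre-trained TransE vectors, and the entity and relation names are illustrative only.

```python
# Illustrative reliability/distance scoring in the spirit of Eq.(3)-(4).
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Placeholder embedding table; a real run would load TransE vectors.
E = {name: rng.normal(size=dim) for name in
     ["hepatitis", "infectious_disease", "adult", "is_a", "at_location"]}

def reliability(head, rel, tail, E):
    """Eq.(3): sum of pairwise inner products among head, relation and tail."""
    h, r, t = E[head], E[rel], E[tail]
    return h @ r + h @ t + r @ t

def distance(head, rel, tail, E):
    """Eq.(4): reciprocal of the reliability score (smaller = more reliable)."""
    return 1.0 / reliability(head, rel, tail, E)

print(distance("hepatitis", "is_a", "infectious_disease", E))  # placeholder value
```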
{
"id": "2204.11673_all_22",
"text": " After measuring the reliability of knowledge, we prune 𝒢𝒢\\mathcal{G} by only keep the top-ΠΠ\\Pi neighboring entities 𝒩(eh)𝒩subscript𝑒ℎ\\mathcal{N}(e_{h}) of a given entity ehsubscript𝑒ℎe_{h}, which can formally be defined as (5) 𝒩(eh)=∪π=1Π{etπ},whereDist(eh,etπ)≤Dist(eh,etπ+1).formulae-sequence𝒩subscript𝑒ℎsuperscriptsubscript𝜋1Πsuperscriptsubscript𝑒𝑡𝜋𝑤ℎ𝑒𝑟𝑒𝐷𝑖𝑠𝑡subscript𝑒ℎsuperscriptsubscript𝑒𝑡𝜋𝐷𝑖𝑠𝑡subscript𝑒ℎsuperscriptsubscript𝑒𝑡𝜋1\\mathcal{N}(e_{h})=\\cup_{\\pi=1}^{\\Pi}\\{e_{t}^{\\pi}\\},where\\,Dist(e_{h},e_{t}^{\\pi})\\leq Dist(e_{h},e_{t}^{\\pi+1}). Thus, the pruned global graph 𝒢¯¯𝒢\\overline{\\mathcal{G}} can be denoted as (6) 𝒢¯={(eh,r,et)|eh,et∈ℰ∧r∈ℛ∧et∈𝒩(eh)}.¯𝒢conditional-setsubscript𝑒ℎ𝑟subscript𝑒𝑡subscript𝑒ℎsubscript𝑒𝑡ℰ𝑟ℛsubscript𝑒𝑡𝒩subscript𝑒ℎ\\overline{\\mathcal{G}}=\\{(e_{h},r,e_{t})|e_{h},e_{t}\\in\\mathcal{E}\\wedge r\\in\\mathcal{R}\\wedge e_{t}\\in\\mathcal{N}(e_{h})\\}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
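The top-Π pruning of Eq.(5)-(6) can be sketched as follows, assuming a plain triplet list and a distance function in the spirit of Eq.(4); both the data layout and the toy distance are assumptions for illustration.

```python
# Keep, for each head entity, only its top_pi nearest tail neighbours.
from collections import defaultdict

def prune_graph(triplets, distance_fn, top_pi=3):
    """Illustrative Eq.(5)-(6): retain the top_pi most reliable neighbours per head."""
    by_head = defaultdict(list)
    for h, r, t in triplets:
        by_head[h].append((distance_fn(h, r, t), r, t))
    pruned = []
    for h, neighbours in by_head.items():
        neighbours.sort(key=lambda x: x[0])            # smaller distance = more reliable
        pruned.extend((h, r, t) for _, r, t in neighbours[:top_pi])
    return pruned

# Dummy distance standing in for Eq.(4); a real run would use TransE embeddings.
toy_distance = lambda h, r, t: len(t)
triplets = [("hepatitis", "is_a", "disease"),
            ("hepatitis", "is_a", "infectious_disease"),
            ("hepatitis", "at_location", "adult")]
print(prune_graph(triplets, toy_distance, top_pi=2))
```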
{
"id": "2204.11673_all_23",
"text": " Fig. 2 shows a real case of our global graph pruning method on ConceptNet, i.e., a general knowledge graph. In this case, the entity hepatitis has various relations to disease, infectious disease, adult, etc. From the distance of nodes in Fig. 2, we can clearly observe that the knowledge hepatitis is an infectious disease is more reliable and informative than hepatitis is located at adult. To hepatitis, the concept adult is more general than infectious disease. This indicates that our pruning method can effectively eliminate less informative knowledge. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_24",
"text": " Different from existing knowledge-enhanced PLMs for other NLP tasks, our aim for the re-ranking task is particularly on the relevance modeling between query and passage. Thus, we further leverage the knowledge in the global graph 𝒢¯¯𝒢\\overline{\\mathcal{G}} to construct “bridges” between query and passage, which alleviates the semantic gap and improves semantic modeling. More specifically, for a given query-passage pair (i.e., (𝐪,𝐩)𝐪𝐩(\\mathbf{q},\\mathbf{p})), we propose to construct a bipartite meta-graph that connects those entities in the 𝐪𝐪\\mathbf{q} and those in 𝐩𝐩\\mathbf{p}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_25",
"text": " The construction process is shown in Alg. 1, which contains three sub-steps: key sentence selection, target entity recognition and path discovery. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_26",
"text": " (1) Key sentence selection. The actual information need of a user usually concentrates on a small part of a relevant passage (Guo et al., 2020). To this end, we mimic human judgment and only focus on the sentence of each passage that is the most related to a query (Zou et al., 2021). In particular, we define the relevance score between a query q and a sentence sisubscripts𝑖\\textbf{s}_{i} as (7) Relqs(q,si)=∑q=1|q|E(wq)|q|⋅∑s=1|si|E(ws)|si|.𝑅𝑒subscript𝑙𝑞𝑠qsubscripts𝑖⋅superscriptsubscript𝑞1qEsubscript𝑤𝑞qsuperscriptsubscript𝑠1subscripts𝑖Esubscript𝑤𝑠subscripts𝑖Rel_{qs}(\\textbf{q},\\textbf{s}_{i})=\\frac{\\sum_{q=1}^{|\\textbf{q}|}\\textbf{E}(w_{q})}{|\\textbf{q}|}\\cdot\\frac{\\sum_{s=1}^{|\\textbf{s}_{i}|}\\textbf{E}(w_{s})}{|\\textbf{s}_{i}|}. For the sake of efficiency, we initialize E(w)E𝑤\\textbf{E}(w) from Word2Vec (Mikolov et al., 2013) embedding. Based on Eq.(7), we select the most relevant sentence s∗superscripts\\textbf{s}^{*} in p to build the meta-graph for 𝐪𝐪\\mathbf{q} and 𝐩𝐩\\mathbf{p}. (2) Target entity recognition. Next, we select the entities in q and s∗superscripts\\textbf{s}^{*} to construct the meta-graph. Specifically, we only consider the entities that exactly match in ℰℰ\\mathcal{E}. Meanwhile, we omit those entity phrases that are sub-sequences of other recognized entities. For example, in the query \"what causes low liver enzymes\", both \"liver\" and \"liver enzyme\" are entities, but the entity \"liver enzyme\" is more informative to be recognized as the target entity, and \"liver\" should be omitted. (3) Path discovery. Finally, given the target entities of q and s∗superscripts\\textbf{s}^{*} (denoted as ϕ𝐪subscriptitalic-ϕ𝐪\\phi_{\\mathbf{q}} and ϕ𝐬∗subscriptitalic-ϕsuperscript𝐬\\phi_{\\mathbf{s}^{*}}, respectively), we perform Breadth First Search (BFS) on 𝒢¯¯𝒢\\overline{\\mathcal{G}} to discover the paths within K𝐾K-hop between ϕ𝐪subscriptitalic-ϕ𝐪\\phi_{\\mathbf{q}} and ϕ𝐬∗subscriptitalic-ϕsuperscript𝐬\\phi_{\\mathbf{s}^{*}}. Note that we only keep the within-K𝐾K-hop paths that might be the most useful for the downstream re-ranking task. Meanwhile, the knowledge could be complemented from the K𝐾K-hop paths. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
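The path-discovery step above can be illustrated with a small breadth-first search that keeps only paths of at most K hops between query entities and key-sentence entities; the adjacency-list graph format and the entity names are assumptions, not the paper's data structures.

```python
# Illustrative BFS for within-K-hop path discovery on the pruned graph.
from collections import deque

def paths_within_k_hops(graph, sources, targets, k=2):
    """Return all simple paths of length <= k hops from any source to any target."""
    targets = set(targets)
    found = []
    queue = deque([(s, [s]) for s in sources])
    while queue:
        node, path = queue.popleft()
        if node in targets and len(path) > 1:
            found.append(path)
        if len(path) - 1 == k:          # hop budget used up
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in path:   # keep paths simple (no revisits)
                queue.append((neighbour, path + [neighbour]))
    return found

# Toy pruned graph: query entity "liver_enzyme", key-sentence entity "hepatitis".
graph = {"liver_enzyme": ["liver", "enzyme"],
         "liver": ["hepatitis"],
         "hepatitis": ["infectious_disease"]}
print(paths_within_k_hops(graph, ["liver_enzyme"], ["hepatitis"], k=2))
# -> [['liver_enzyme', 'liver', 'hepatitis']]
```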
{
"id": "2204.11673_all_27",
"text": " After taking the series of processes, the meta-graph 𝐆𝐪,𝐩={ℰ𝐪,𝐩,ℛ𝐪,𝐩}subscript𝐆𝐪𝐩subscriptℰ𝐪𝐩subscriptℛ𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}=\\{\\mathcal{E}_{\\mathbf{q},\\mathbf{p}},\\mathcal{R}_{\\mathbf{q},\\mathbf{p}}\\} is constructed with the multi-hop paths discovered between ϕ𝐪subscriptitalic-ϕ𝐪\\phi_{\\mathbf{q}} and ϕ𝐬∗subscriptitalic-ϕsuperscript𝐬\\phi_{\\mathbf{s}^{*}}. Fig. 3 shows an example of the meta-graph, which contains rich knowledge about the semantic relevance between the query and passage. Notably, a better key sentence selector or entity linker such as Sentence-BERT (Reimers and Gurevych, 2019) and DER (Wu et al., 2019) may benefit the ranking performance, but can burden the entire model inference time, which is infeasible to a qualified re-ranker. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_28",
"text": " Given a meta-graph 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}, we propose a PLM based re-ranker that performs knowledge-enhanced relevance computation, i.e., f(𝐪,𝐩|𝒢)𝑓𝐪conditional𝐩𝒢f(\\mathbf{q},\\mathbf{p}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G}). In the following, we first introduce the text encoder, and then present how we inject explicit knowledge from 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}} into the encoder. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_29",
"text": " We adopt the commonly-used cross-encoder as the text encoder. The input is formulated as the concatenation of a query-passage pair and the input layer converts the token indexes to a set of token embeddings (Vaswani et al., 2017) (i.e., 𝐎0subscript𝐎0\\mathbf{O}_{0}) as (8) 𝐎0=InputLayer(((CLS),{wq}q=1|𝐪|,(SEP),{wp}p=1|𝐩|,(SEP))).subscript𝐎0InputLayerdelimited-()𝐶𝐿𝑆superscriptsubscriptsubscript𝑤𝑞𝑞1𝐪delimited-()𝑆𝐸𝑃superscriptsubscriptsubscript𝑤𝑝𝑝1𝐩delimited-()𝑆𝐸𝑃\\mathbf{O}_{0}=\\text{InputLayer}(((CLS),\\{w_{q}\\}_{q=1}^{|\\mathbf{q}|},(SEP),\\{w_{p}\\}_{p=1}^{|\\mathbf{p}|},(SEP))). In the l𝑙l-th transformer layer, text context features are extracted via multi-head self-attention and Feed Forward Network (FFN) as (9) 𝐇^l=MultiHeadAttention(𝐎l−1),subscript^𝐇𝑙MultiHeadAttentionsubscript𝐎𝑙1\\hat{\\mathbf{H}}_{l}=\\operatorname{MultiHeadAttention}(\\mathbf{O}_{l-1}), (10) 𝐎l=σ(𝐇l^𝐖l1+bl1)𝐖l2+bl2,subscript𝐎𝑙𝜎^subscript𝐇𝑙superscriptsubscript𝐖𝑙1superscriptsubscript𝑏𝑙1superscriptsubscript𝐖𝑙2superscriptsubscript𝑏𝑙2\\mathbf{O}_{l}=\\sigma\\left(\\hat{\\mathbf{H}_{l}}\\mathbf{W}_{l}^{1}+b_{l}^{1}\\right)\\mathbf{W}_{l}^{2}+b_{l}^{2}, where 𝐖lsubscript𝐖𝑙\\mathbf{W}_{l} and blsubscript𝑏𝑙b_{l} are the parameters of FFN and σ𝜎\\sigma is an activation function, and 𝐎lsubscript𝐎𝑙\\mathbf{O}_{l} is the output of layer l𝑙l. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_30",
"text": " Based on the text encoder, we develop a knowledge injector that can seamlessly integrate explicit knowledge. Moreover, inspired by CokeBERT (Su et al., 2021), our knowledge injector is equipped with a GMN module to dynamically refine the knowledge context on the basis of text context features learned by text encoder, which further improves the flexibility and usability of the knowledge enhancement. Besides, our method allows the text context and knowledge context to interact and mutually boost each other. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_31",
"text": " Knowledge injection. As shown in Fig. 4, the knowledge injector consists of multiple transformer layers, which is the same as the text encoder. Given a query-passage pair (𝐪,𝐩)𝐪𝐩(\\mathbf{q},\\mathbf{p}), we first find the entities in 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}} that can be enhanced by external knowledge. For these entities, we define 𝐄𝐄\\mathbf{E} as the knowledge embeddings to be applied in the knowledge injection layers, where 𝐄𝐄\\mathbf{E} is initialized by TransE embeddings extracted from the pruned global graph 𝒢¯¯𝒢\\overline{\\mathcal{G}}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_32",
"text": " Next, we align each entity with the first token of the corresponding phrase in the selected key sentence (Zhang et al., 2019), and define the knowledge injection process as (11) 𝐇^l=MultiHeadAttention(𝐎l−1),subscript^𝐇𝑙MultiHeadAttentionsubscript𝐎𝑙1\\hat{\\mathbf{H}}_{l}=\\operatorname{MultiHeadAttention}(\\mathbf{O}_{l-1}), (12) 𝐅l=σ((𝐇l^𝐖l1+bl1)⊕Λ(𝐄𝐖l3+bl3)),subscript𝐅𝑙𝜎direct-sum^subscript𝐇𝑙superscriptsubscript𝐖𝑙1superscriptsubscript𝑏𝑙1Λsuperscriptsubscript𝐄𝐖𝑙3superscriptsubscript𝑏𝑙3\\mathbf{F}_{l}=\\sigma\\left((\\hat{\\mathbf{H}_{l}}\\mathbf{W}_{l}^{1}+b_{l}^{1})\\oplus\\Lambda(\\mathbf{E}\\mathbf{W}_{l}^{3}+b_{l}^{3})\\right), (13) 𝐎l=𝐅l𝐖l2+bl2.subscript𝐎𝑙subscript𝐅𝑙subscriptsuperscript𝐖2𝑙subscriptsuperscript𝑏2𝑙\\mathbf{O}_{l}=\\mathbf{F}_{l}\\mathbf{W}^{2}_{l}+b^{2}_{l}. In Eq. (12), ⊕direct-sum\\oplus means element-wise addition and Λ(⋅)Λ⋅\\Lambda(\\cdot) represents the alignment function maps the entities to the corresponding positions of the tokens. By doing this, the external knowledge 𝐄𝐄\\mathbf{E} is integrated in the output 𝐎lsubscript𝐎𝑙\\mathbf{O}_{l} of the knowledge injection layer. The final relevance score of this query-passage pair is defined as (14) f(𝐪,𝐩|𝒢)=σ(𝐎M(CLS)𝐖4+b4).𝑓𝐪conditional𝐩𝒢𝜎superscriptsubscript𝐎M(CLS)superscript𝐖4superscript𝑏4f(\\mathbf{q},\\mathbf{p}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})=\\sigma\\left(\\mathbf{O}_{\\textrm{M}}^{\\textrm{(CLS)}}\\mathbf{W}^{4}+b^{4}\\right). ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
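A numpy sketch of the fusion in Eq.(12): projected entity embeddings are aligned to the first token of their mention and added element-wise to the projected token states. The shapes, the ReLU activation, and the alignment format are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative knowledge-injection fusion in the spirit of Eq.(12).
import numpy as np

def inject_knowledge(H_hat, W1, b1, E, W3, b3, alignment):
    """
    H_hat:      (seq_len, d)   token states after self-attention
    E:          (n_ent, d_e)   entity embeddings for this layer
    alignment:  list of (entity_index, token_position) pairs (the Lambda map)
    """
    token_part = H_hat @ W1 + b1                      # projected token states
    projected_entities = E @ W3 + b3                  # projected entity states
    entity_part = np.zeros_like(token_part)
    for ent_idx, tok_pos in alignment:                # place entities on aligned tokens
        entity_part[tok_pos] += projected_entities[ent_idx]
    return np.maximum(token_part + entity_part, 0.0)  # sigma taken as ReLU (assumed)

# Toy shapes: 4 tokens, hidden size 6, 2 entities.
rng = np.random.default_rng(1)
F = inject_knowledge(rng.normal(size=(4, 6)),
                     rng.normal(size=(6, 6)), np.zeros(6),
                     rng.normal(size=(2, 6)),
                     rng.normal(size=(6, 6)), np.zeros(6),
                     alignment=[(0, 1), (1, 3)])
print(F.shape)   # (4, 6)
```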
{
"id": "2204.11673_all_33",
"text": " Knowledge propagation via meta-graph. It is worth noting that, the above-defined knowledge injection process only leverages knowledge embeddings learned by TransE on the global graph 𝒢¯¯𝒢\\overline{\\mathcal{G}}. Particularly, it lacks considering the knowledge that bridges the semantics between query and passage. To this end, we introduce a Graph Meta Network (GMN) module that refines knowledge with the constructed meta-graph 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}, The multi-hop paths of 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}} allow the knowledge to be propagated between query and passage, which can enhance the relevance signal to be captured by the model, and thus alleviate the semantic gap. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_34",
"text": " More specifically, each knowledge injection layer has a multi-layer GMN (as shown in Fig. 4) to propagate knowledge on 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}. First, the input of GMN is formulated with the fused feature 𝐅lsubscript𝐅𝑙\\mathbf{F}_{l} as (15) 𝐄^l(0)=Γ(𝐅l𝐖l5+bl5),superscriptsubscript^𝐄𝑙0Γsubscript𝐅𝑙subscriptsuperscript𝐖5𝑙subscriptsuperscript𝑏5𝑙\\hat{\\mathbf{E}}_{l}^{(0)}=\\Gamma(\\mathbf{F}_{l}\\mathbf{W}^{5}_{l}+b^{5}_{l}), where ΓΓ\\Gamma represents the slice operation that extracts the fused information of the target entities in 𝐆𝐪,𝐩={ℰ𝐪,𝐩,ℛ𝐪,𝐩}subscript𝐆𝐪𝐩subscriptℰ𝐪𝐩subscriptℛ𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}}=\\{\\mathcal{E}_{\\mathbf{q},\\mathbf{p}},\\mathcal{R}_{\\mathbf{q},\\mathbf{p}}\\}, and thus 𝐄^l(0)superscriptsubscript^𝐄𝑙0\\hat{\\mathbf{E}}_{l}^{(0)} consists of fused entities representation 𝐄^e1(0),𝐄^e2(0),…,𝐄^eΨ(0)subscriptsuperscript^𝐄0subscript𝑒1subscriptsuperscript^𝐄0subscript𝑒2…subscriptsuperscript^𝐄0subscript𝑒Ψ\\hat{\\mathbf{E}}^{(0)}_{e_{1}},\\hat{\\mathbf{E}}^{(0)}_{e_{2}},...,\\hat{\\mathbf{E}}^{(0)}_{e_{\\Psi}}, i.e., Ψ=|ℰ𝐪,𝐩|Ψsubscriptℰ𝐪𝐩\\Psi=|\\mathcal{E}_{\\mathbf{q},\\mathbf{p}}|. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_35",
"text": " Next, in the k𝑘k-th layer of GMN, an entity embedding ehsubscript𝑒ℎe_{h} is updated via an attentive aggregation from its neighbors 𝒩(eh)𝒩subscript𝑒ℎ\\mathcal{N}(e_{h}) as (16) 𝐄^eh(k)=𝐄^eh(k−1)+∑et∈𝒩(eh)𝐚ht(k)𝐄^et(k−1).superscriptsubscript^𝐄subscript𝑒ℎ𝑘superscriptsubscript^𝐄subscript𝑒ℎ𝑘1subscriptsubscript𝑒𝑡𝒩subscript𝑒ℎsuperscriptsubscript𝐚ℎ𝑡𝑘superscriptsubscript^𝐄subscript𝑒𝑡𝑘1\\hat{\\mathbf{E}}_{e_{h}}^{(k)}=\\hat{\\mathbf{E}}_{e_{h}}^{(k-1)}+\\sum_{e_{t}\\in\\mathcal{N}(e_{h})}\\mathbf{a}_{ht}^{(k)}\\hat{\\mathbf{E}}_{e_{t}}^{(k-1)}. Here, 𝐚ht(k)superscriptsubscript𝐚ℎ𝑡𝑘\\mathbf{a}_{ht}^{(k)} is the attention value, which can be defined as (17) 𝐚ht(k)=exp(𝐦ht(k))∑en∈𝒩(eh)exp(𝐦hn(k)),superscriptsubscript𝐚ℎ𝑡𝑘𝑒𝑥𝑝superscriptsubscript𝐦ℎ𝑡𝑘subscriptsubscript𝑒𝑛𝒩subscript𝑒ℎ𝑒𝑥𝑝superscriptsubscript𝐦ℎ𝑛𝑘\\mathbf{a}_{ht}^{(k)}=\\frac{exp(\\mathbf{m}_{ht}^{(k)})}{\\sum_{e_{n}\\in\\mathcal{N}(e_{h})}exp(\\mathbf{m}_{hn}^{(k)})}, and the logits 𝐦ht(k)superscriptsubscript𝐦ℎ𝑡𝑘\\mathbf{m}_{ht}^{(k)} is computed as (18) 𝐦ht(k)=σsuperscriptsubscript𝐦ℎ𝑡𝑘𝜎\\displaystyle\\mathbf{m}_{ht}^{(k)}=\\sigma (α(𝐄^eh(k−1)∥𝐄^et(k−1))+β(𝐄^eh(k−1)∥𝐄^rht(k−1))\\displaystyle\\left(\\alpha\\left(\\hat{\\mathbf{E}}_{e_{h}}^{(k-1)}\\|\\hat{\\mathbf{E}}_{e_{t}}^{(k-1)}\\right)+\\beta\\left(\\hat{\\mathbf{E}}_{e_{h}}^{(k-1)}\\|\\hat{\\mathbf{E}}_{r_{ht}}^{(k-1)}\\right)\\right. +γ(𝐄^rht(k−1)∥𝐄^et(k−1))).\\displaystyle\\left.+\\gamma\\left(\\hat{\\mathbf{E}}_{r_{ht}}^{(k-1)}\\|\\hat{\\mathbf{E}}_{e_{t}}^{(k-1)}\\right)\\right). In Eq. (18), the functions α(⋅)𝛼⋅\\alpha(\\cdot), β(⋅)𝛽⋅\\beta(\\cdot) and γ(⋅)𝛾⋅\\gamma(\\cdot) are full-connected layers, and ⋅∥⋅\\cdot\\|\\cdot represents concatenation operation. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
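One GMN step (Eq.(16)-(18)) can be sketched as an attention-weighted aggregation over an entity's neighbours. The scoring MLPs α, β and γ are collapsed into plain dot products here, which is a simplification for illustration rather than the paper's exact parameterisation.

```python
# Compact sketch of one GMN propagation step over the meta-graph.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def gmn_step(entity_states, neighbours):
    """
    entity_states: (n, d) array, one row per entity in the meta-graph
    neighbours:    dict entity_index -> list of neighbour indices
    """
    updated = entity_states.copy()
    for h, nbrs in neighbours.items():
        if not nbrs:
            continue
        # Simplified Eq.(18): score each neighbour by a dot product with the head.
        logits = np.array([entity_states[h] @ entity_states[t] for t in nbrs])
        attn = softmax(logits)                                    # Eq.(17)
        updated[h] = entity_states[h] + attn @ entity_states[nbrs]  # Eq.(16)
    return updated

rng = np.random.default_rng(2)
states = rng.normal(size=(4, 8))
print(gmn_step(states, {0: [1, 2], 3: [2]}).shape)   # (4, 8)
```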
{
"id": "2204.11673_all_36",
"text": " By applying a K𝐾K-layer GMN in each layer of the knowledge injector, the output entity representation 𝐄^eh(K)superscriptsubscript^𝐄subscript𝑒ℎ𝐾\\hat{\\mathbf{E}}_{e_{h}}^{(K)} can ensemble knowledge from all the K𝐾K-hop neighbors. As described in Section 4.1.2 that all the paths of 𝐆𝐪,𝐩subscript𝐆𝐪𝐩\\mathbf{G}_{\\mathbf{q},\\mathbf{p}} between 𝐪𝐪\\mathbf{q} and 𝐩𝐩\\mathbf{p} is within K𝐾K hops, the GMN module can attentively propagate knowledge along the paths from entities in 𝐩𝐩\\mathbf{p} to those in 𝐪𝐪\\mathbf{q}, and vice versa, which can enrich the semantics of the entities that benefit the relevance modeling. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_37",
"text": " Subsequently, the updated entity embeddings could be used as the knowledge to be injected in the next layer, i.e., 𝐄:=𝐄^(K)assign𝐄superscript^𝐄𝐾\\mathbf{E}:=\\hat{\\mathbf{E}}^{(K)}. In other words, we can re-define Eq. (12) as (19) 𝐅l=σ((𝐇l^𝐖l1+bl1)⊕Λ(𝐄l𝐖l3+bl3)),subscript𝐅𝑙𝜎direct-sum^subscript𝐇𝑙superscriptsubscript𝐖𝑙1superscriptsubscript𝑏𝑙1Λsubscript𝐄𝑙superscriptsubscript𝐖𝑙3superscriptsubscript𝑏𝑙3\\mathbf{F}_{l}=\\sigma\\left((\\hat{\\mathbf{H}_{l}}\\mathbf{W}_{l}^{1}+b_{l}^{1})\\oplus\\Lambda(\\mathbf{E}_{l}\\mathbf{W}_{l}^{3}+b_{l}^{3})\\right), where 𝐄lsubscript𝐄𝑙\\mathbf{E}_{l} is defined as (20) 𝐄l={𝐄^l−1(K),l∈(2,M)TransE embeddings.l=1subscript𝐄𝑙casessuperscriptsubscript^𝐄𝑙1𝐾𝑙2𝑀TransE embeddings𝑙1\\mathbf{E}_{l}=\\begin{cases}\\hat{\\mathbf{E}}_{l-1}^{(K)},&l\\in(2,M)\\\\ \\text{TransE embeddings}.&l=1\\end{cases} ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_38",
"text": " Knowledge-enhanced pre-training. Following previous studies (Nogueira et al., 2019a; Yan et al., 2021; Kim and Ko, 2021), we conduct continual pre-training on MSMARCO corpus to warm up the parameters of GMN module. We apply Masked Language Model (MLM) (Devlin et al., 2018) and Sentence Relation Prediction (SRP) (Wang et al., 2019) as the pre-training tasks in KERM. Compared to conventional Next Sentence Prediction (NSP) (Devlin et al., 2018), the task of SRP is to predict whether a given sentence is the next sentence, previous sentence relation or no relation with another sentence. To incorporate knowledge during the pre-training stage, we construct a meta-graph for each sentence pair, and apply the knowledge aggregation process as introduced above. The pre-training loss is defined as ℒp=ℒMLM+ℒSRPsubscriptℒ𝑝subscriptℒ𝑀𝐿𝑀subscriptℒ𝑆𝑅𝑃\\mathcal{L}_{p}=\\mathcal{L}_{MLM}+\\mathcal{L}_{SRP}. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_39",
"text": " Knowledge-enhanced fine-tuning. We adopt a cross-entropy loss to fine-tune KERM: (21) ℒf=−1|𝒬|∑q∈𝒬logexp(f(𝐪,𝐩+|𝒢))exp(f(𝐪,𝐩+|𝒢))+∑p−exp(f(𝐪,𝐩−|𝒢))subscriptℒ𝑓1𝒬subscript𝑞𝒬exp𝑓𝐪conditionalsuperscript𝐩𝒢exp𝑓𝐪conditionalsuperscript𝐩𝒢subscriptsuperscript𝑝exp𝑓𝐪conditionalsuperscript𝐩𝒢\\mathcal{L}_{f}=-\\frac{1}{|\\mathcal{Q}|}\\sum_{q\\in\\mathcal{Q}}\\log\\frac{\\mathrm{exp}({f(\\mathbf{q},\\mathbf{p}^{+}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})})}{\\mathrm{exp}({f(\\mathbf{q},\\mathbf{p}^{+}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})})+\\sum_{p^{-}}\\mathrm{exp}({f(\\mathbf{q},\\mathbf{p}^{-}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})})} where |𝒬|𝒬|\\mathcal{Q}| is the number of queries in training set, and p+superscript𝑝p^{+} and p−superscript𝑝p^{-} denote the positive passage and negative passage in ℙℙ\\mathbb{P} for current query 𝐪𝐪\\mathbf{q}, respectively. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
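A numpy sketch of the listwise cross-entropy in Eq.(21), contrasting the positive passage's score against sampled negatives per query; the 1:3 negative ratio below is purely illustrative (the paper's experiments use a 1:19 ratio with hard negatives).

```python
# Illustrative listwise cross-entropy in the spirit of Eq.(21).
import numpy as np

def kerm_finetune_loss(pos_scores, neg_scores):
    """
    pos_scores: (num_queries,)            score f(q, p+ | G) for each query
    neg_scores: (num_queries, num_negs)   scores f(q, p- | G) for each query
    """
    all_scores = np.concatenate([pos_scores[:, None], neg_scores], axis=1)
    # log-softmax of the positive against {positive + negatives}, per query
    log_norm = np.log(np.exp(all_scores).sum(axis=1))
    log_prob_pos = pos_scores - log_norm
    return -log_prob_pos.mean()

rng = np.random.default_rng(3)
print(kerm_finetune_loss(rng.normal(size=5), rng.normal(size=(5, 3))))
```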
{
"id": "2204.11673_all_40",
"text": " We use a large-scale public available corpus, i.e., MSMARCO-Passage collection (Nguyen et al., 2016), as our passage collection. This collection contains approximately 8.8 million passages extracted from 3.2 million web documents covering multiple fields. We train our model on the MSMARCO-TRAIN query set of 502,939 queries and evaluate KERM on three query sets. Table 1 provides the detailed information of these query sets. The first test set is MSMARCO-DEV, which includes 6,980 sparsely-judged queries mixed with multiple domains. Each query has an average of 1.1 relevant passages with binary relevance label. The second test set is TREC 2019 DL (Craswell et al., 2020), which contains 43 densely-judged queries with fine-grained relevance labels, i.e., irrelevant, relevant, highly relevant and perfectly relevant. On average, a query has 95.4 relevant passages, and most queries have more than 10 relevant passages. With fine-grained labels and multiple relevant passages per query, TREC 2019 DL can be used to reflect the fine-grained ranking performance between relevant passages. To evaluate KERM on specific domains, we further introduce Ohsumed 111http://disi.unitn.it/moschitti/corpora.htm query set, which contains 63 queries on bio-medical domain. The collection of Ohsumed is constructed from the first 20,000 passages in Mesh categories of the year 1991. Following the previous work (Joachims, 1998), the test collection including 10,000 passages are utilized for performance comparison on Ohsumed query set. Each query has an average of 50.9 relevant passages with three graded relevance labels. In section 6.4, we demonstrate that the quality of external knowledge constructed by KERM in such domain could be more useful. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_41",
"text": " We use ConceptNet (Speer et al., 2017), a general knowledge graph as our external knowledge base 𝒢𝒢\\mathcal{G}. Following KagNet (Lin et al., 2019), we merge relation types to increase graph density and construct a multi-relational graph with 17 relation types, including atlocation𝑎𝑡𝑙𝑜𝑐𝑎𝑡𝑖𝑜𝑛atlocation, causes𝑐𝑎𝑢𝑠𝑒𝑠causes, createdby𝑐𝑟𝑒𝑎𝑡𝑒𝑑𝑏𝑦createdby, etc. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_42",
"text": " We include several PLMs based re-rankers in our evaluation, including the state-of-the-art: • monoBERT (Nogueira and Cho, 2019): The first study that re-purposes BERT as a re-ranker and achieves state-of-the-art results. • duoBERT (Nogueira et al., 2019a): This work proposes a pairwise classification approach using BERT, which obtains the ability to be more sensitive to semantics through greater computation. • UED (Yan et al., 2021): A unified pre-training framework that jointly refines re-ranker and query generator. For a fair comparison, we only use the re-ranker in UED without passage expansion. • LM Distill+Fine-Tuning (LDFT) (Gao et al., 2020): A variety of distillation methods are compared in this paper. The experimental results indicate that a proper distillation procedure (i.e. first distill the language model, and then fine-tune on the ranking task) could produce a faster re-ranker with better ranking performance. • CAKD (Hofstätter et al., 2020): This work proposes a cross-architecture knowledge distillation procedure with Margin-MSE loss, which can distill knowledge from multiple teachers. • RocketQAv1 (Qu et al., 2021): This work mainly focuses on the training of PLM based retriever, where the re-ranker is an intermediate product of its training process. • RocketQAv2 (Ren et al., 2021): Based on RocketQAv1, this work proposes a novel approach that jointly trains the PLM based retriever and re-ranker. To compare the performance of different methods, we resort to two ranking metrics. For MSMARCO-DEV, We adopt Mean Reciprocal Rank (i.e., MRR@10). For TREC 2019 DL, we use Mean Average Precision, i.e., MAP@10 and MAP@30. For Ohsumed, both Mean Reciprocal Rank and Mean Average Precision (i.e., MRR@10 and MAP@10) are employed for comprehensive performance analysis in queries requiring in-depth domain knowledge. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_43",
"text": " We use the traditional sparse retriever BM25 (Yang et al., 2017) as our first stage method. All experiments are conducted under the same BM25 setting with 1000 retrieved candidates. We conduct experiments with the deep learning framework PaddlePaddle (Ma et al., 2019) on up to 4 NVIDIA Tesla A100 GPUs (with 40G RAM). For the GMN module, we use Paddle Graph Learning (PGL) 222https://github.com/PaddlePaddle/PGL, an efficient and flexible graph learning framework based on PaddlePaddle. For training, we used the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-5 for text encoder and 1e-4 for knowledge injector. The model is trained up to 5 epochs with a batch size of 640 and 240 for base and large models respectively. In our experiments, the PLM small, base and large models have 6, 12 and 24 Transformer layers respectively. The text encoder has 9 layers and 21 layers for base and large model respectively, and the knowledge injector both has 3 layers in our experiment. The dropout rates are set to 0.1. The ratio of the positive to the hard negative is set to 1:19. All transformer layers in KERM’s backbone are initialized from ERNIE-2.0 base (Sun et al., 2020b), which is a BERT-like model pre-trained with a continual pre-training framework on multiple tasks. We perform Knowledge-enhanced pre-training on MARCO passage collection to warm up the parameters in knowledge injector, which has 60,000 iterations under the batch size of 256. For a fair comparison, the same pre-training without knowledge enhancement is also conducted on ERNIEbasesubscriptERNIEbase\\textrm{ERNIE}_{\\textrm{base}} re-ranker and all models in ablation studies. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_44",
"text": " Here we compare ranking performances of KERM and other PLMs based re-rankers on the first two widely used query sets. Moreover, ablation studies for each component of KERM are also explored. All experimental results were reported under the same BM25 setting. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_45",
"text": " Table 2 shows the ranking performance of KERM and baselines on MSMARCO-DEV and TREC 2019 DL. In the second column, model settings are displayed, including the PLMs used in models, whether distillation is enabled and computing resources required for model training. From Table 2, we observe the following phenomena. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_46",
"text": " (1) Compared with the best SOTA methods on the sparsely-judged MARCO-DEV query set, KERM outperforms all other baseline models except RocketQAv2. It utilizes a well-trained cross-encoder ERNIElargesubscriptERNIElarge\\textrm{ERNIE}_{\\textrm{large}} in RocketQAv1 to remove the predicted negatives with low confidence scores and include the predicted positives with high confidence scores. This can be regarded as a distillation. Meanwhile, RocketQAv2 achieves promising performance through a very large batch size on enormous computational resources, which is hardly comparable to our technique that only requires up to 4 GPUs. In addition to RocketQAv2, both KERMbasesubscriptKERMbase\\textrm{KERM}_{\\textrm{base}} and KERMlargesubscriptKERMlarge\\textrm{KERM}_{\\textrm{large}} exceed strong baseline models, including duoBERT with sophisticated multiple re-ranking stages and CAKD distilled from multiple large models. It demonstrates the effectiveness of external knowledge injection. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_47",
"text": " (2) Among both kinds of baselines, KERMlargesubscriptKERMlarge\\textrm{KERM}_{\\textrm{large}} achieves the best performance on the densely-judged TREC 2019 DL query set. MAP @10 and MAP@30 measure the quality of the ranking result over related passages. Baseline models with larger networks usually perform better in MAP, which indicates that complex structure helps model capture the fine-grained differences between related passages. With the well-designed GMN module and introduced reliable external knowledge, KERMbasesubscriptKERMbase\\textrm{KERM}_{\\textrm{base}} achieves the best performance on MAP@10 even compared to various large baseline models. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_48",
"text": " (3) Distilled models typically perform better at putting the relevant passage at top positions, but the subtle differences between relevant passages cannot be captured effectively through relatively small distilled models. On the MARCO-DEV query set, LDFT (Gao et al., 2020) performs better than duoBERT on MRR@10 and the former model size is much smaller than the latter. It shows that distillation plays a great role in performance improvement. Because LDFT (Gao et al., 2020) neither release the code nor report MAP in the original paper, we omit its result on TREC 2019 DL query set. Additionally, models that perform well on MAP do not lead in MRR and vice versa, demonstrating that two metrics are to measure different aspects of the ranking quality. KERM shows the most stable performance on both metrics among all baseline models. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_49",
"text": " (4) Compared with ERNIEbasesubscriptERNIEbase\\textrm{ERNIE}_{\\textrm{base}} we trained, KERMbasesubscriptKERMbase\\textrm{KERM}_{\\textrm{base}} shows a significant improvement on both two query sets. This indicates the explicit introduction of external knowledge can alleviate the semantic gap and heterogeneity between query and passage, and improve the semantic matching performance. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_50",
"text": " Knowledge injector module including knowledge injection and propagation process realized as Graph Meta Network (GMN), is mainly responsible for the interaction between text and knowledge graph. To explore their roles in the ranking performance, we remove the knowledge injection, aggregation process and the whole module separately and keep other units unchanged in KERM. Experimental results of three base models are shown in Table 3. KERM without knowledge injector module is degraded to vanilla ERNIE. KERM without knowledge propagation process is formally equivalent to ERNIE(THU) (Zhang et al., 2019). KERM without knowledge injection process takes the text of query-passage pair and meta graph as separate inputs, and then concatenate two parts of outputs to feed into a linear layer by redefining Eq.(19) and Eq.(14) respectively as (22) 𝐅l={σ(𝐇l^𝐖l1+bl1),forEq.(13)σ(𝐄l𝐖l3+bl3),forEq.(15)subscript𝐅𝑙cases𝜎^subscript𝐇𝑙superscriptsubscript𝐖𝑙1superscriptsubscript𝑏𝑙1forEq.13𝜎subscript𝐄𝑙superscriptsubscript𝐖𝑙3superscriptsubscript𝑏𝑙3forEq.15\\mathbf{F}_{l}=\\begin{cases}\\sigma\\left(\\hat{\\mathbf{H}_{l}}\\mathbf{W}_{l}^{1}+b_{l}^{1}\\right),&\\textrm{for}\\;\\textrm{Eq.}(\\ref{eq:textoutput})\\\\ \\sigma\\left(\\mathbf{E}_{l}\\mathbf{W}_{l}^{3}+b_{l}^{3}\\right),&\\textrm{for}\\;\\textrm{Eq.}(\\ref{eq:gmninput})\\end{cases} (23) f(𝐪,𝐩|𝒢)=σ((𝐎M(CLS)∥𝐄M(K))𝐖6+b6).𝑓𝐪conditional𝐩𝒢𝜎conditionalsuperscriptsubscript𝐎M(CLS)superscriptsubscript𝐄M𝐾superscript𝐖6superscript𝑏6f(\\mathbf{q},\\mathbf{p}\\leavevmode\\nobreak\\ |\\leavevmode\\nobreak\\ \\mathcal{G})=\\sigma\\left(\\left(\\mathbf{O}_{\\textrm{M}}^{\\textrm{(CLS)}}\\|\\mathbf{E}_{\\textrm{M}}^{(K)}\\right)\\mathbf{W}^{6}+b^{6}\\right). ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
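To make the ablated variant above concrete, here is a minimal numerical sketch (not the authors' code) of the "KERM w/o knowledge injection" scoring in Eq. (22)-(23): text and meta-graph are encoded separately, and only a final linear layer fuses the [CLS] output with the pooled graph embedding. The dimensions, pooling choice and function names are assumptions for illustration.

```python
# Illustrative sketch of the "without knowledge injection" ablation: the text
# [CLS] vector and a pooled meta-graph embedding are only fused by a final
# linear scoring layer (Eq. (23)). Shapes and values are toy assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_without_injection(o_cls, e_graph, W6, b6):
    """o_cls: (d_text,) text-encoder [CLS] output; e_graph: (d_graph,) pooled
    entity representation from the GMN; W6, b6: linear scoring layer."""
    fused = np.concatenate([o_cls, e_graph])     # (O_M^(CLS) || E_M^(K))
    return sigmoid(fused @ W6 + b6).item()       # relevance score in (0, 1)

rng = np.random.default_rng(0)
o_cls, e_graph = rng.normal(size=768), rng.normal(size=128)
W6, b6 = rng.normal(size=(896, 1)) * 0.01, 0.0
print(score_without_injection(o_cls, e_graph, W6, b6))
```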
{
"id": "2204.11673_all_51",
"text": " Table 3 shows the performance comparisons between different settings of knowledge injector, which is statistically significant. From this table, we can observe the following phenomena. (1) MRR@10 of KERM without interaction and propagation process decreases at least 1%percent11\\% respectively. This indicates both knowledge interaction and propagation processes play an indispensable role in ranking performance. (2) The performance of KERM without propagation is comparable to vanilla ERNIE. Not only query and passage entities, but also their multi-hop neighbors are essential for the ranking performance. (3) MRR@10 of KERM without knowledge interaction drops the most. It suggests the simple and straightforward way to aggregate knowledge graph with text does not work in the passage re-ranking scenario. The text and knowledge graph need to be refined with each other mutually in the interaction, which will be further analyzed in detail as follows. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_52",
"text": " To further explore the text-knowledge interaction influence on the ranking performance, we compare ranking performances from KERM with different numbers of knowledge injector layers. All experiments in Table 4 are conducted with the same experimental settings except the number of knowledge injector layers (denoted as M𝑀M). Note that in our setting, the number of text encoder layers N𝑁N plus M𝑀M is always 121212, i.e. the number of layers in ERNIEbasesubscriptERNIEbase\\textrm{ERNIE}_{\\textrm{base}}. No knowledge injector layer (M=0𝑀0M=0) represents the vanilla ERNIEbasesubscriptERNIEbase\\textrm{ERNIE}_{\\textrm{base}} re-ranker without explicit knowledge enhancement. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_53",
"text": " With the increase of M𝑀M in Table 4, the ranking performance is not improved linearly. Instead, the performance achieves the best at M=3𝑀3M=3 and then falls down (statistically significant). This performance variation trend is contradictory to our intuition that the more injector layers, the deeper interaction between text and knowledge, and the more performance improvement is expected. The possible reason lies in that the knowledge injector layer makes pretrained parameters from ERNIEbasesubscriptERNIEbase\\textrm{ERNIE}_{\\textrm{base}} not reusable, which means the implicit knowledge learned from large-scale is not applicable to these layers. Hence the number choice of the knowledge injector layer is somehow determined by the trade-off between implicit and explicit knowledge. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_54",
"text": " Knowledge graph distillation is performed in both global and local perspectives. To explore their roles in the ranking performance, we remove the graph pruning globally and sentence selection locally respectively, keep other settings unchanged, and derive KERM without graph pruning and sentence selection respectively. From results on TREC 2019 DL in Table 5, observations are listed as below. (1) Without global graph pruning, MRR@10 and the average edge score, calculated through Eq.(3), decrease the most, and the time efficiency drops slightly. This indicates the original knowledge graph exists noise data that affect performance. (2) Without sentence selection, the time efficiency drops the most and the average edge score decreases slightly, which proves that not every sentence in a passage has a positive effect on semantic matching. Overall, knowledge graph distillation is significant to KERM. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_55",
"text": " We further investigate the ranking effect of KERM on a specific domain. Specifically, we conduct experiments on OHSUMED from bio-medical field, and a bio-medical query subset of MSMARCO-DEV including 1,11011101,110 queries. This query subset is derived from the mixed domain query set of MSMARCO-DEV by k-means clustering method (Hartigan and Wong, 1979), while the remaining subset with 5,87058705,870 queires is denoted as the general domain subset. Performance comparisons between KERM and BM25, ERNIE are shown in Table 6. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_56",
"text": " Results are obtained from Table 6. (1) Poor ranking performances of all models on bio-medical domain indicates that it is more challenging in the data scarcity scenario, where textual data is not covered widely in the PLMs’ pretraining datasets. (2) Compared with ERNIE, KERM has a higher relative improvement in bio-medical domain than general domain. This demonstrates that the incorporation of knowledge graph is more useful for a data scarcity domain. To verify this idea, we compare the size of knowledge meta graph used for different domains as follows. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_57",
"text": " We quantify the knowledge desirability as the size of average knowledge meta graph used in one domain. Specifically, we use the average number of edges as the size and average edge score calculated through Eq.(3) as the reliability of the knowledge meta graph. From Table 7, we can see that the meta-graph constructed on Bio-Medical domains is better in terms of quantity and quality. It indicates that the external knowledge found on professional domains contains more information. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_58",
"text": " The main goal of this paper is to reasonably introduce external knowledge graph to PLMs for passage re-ranking. We first design a novel knowledge meta graph construction method to distill reliable and query related knowledge from a general and noisy knowledge graph. The knowledge meta graph bridges the semantic gap between each query and passage. Then we propose a knowledge injector layer for mutually updating text and knowledge representations, which transformers word to entity representations for graph meta network, vice versa. Knowledge Enhanced Ranking Model is pretrained with Masked Language Model (MLM) Sentence Relation Prediction (SRP) tasks, and fine-tuned with cross entropy loss function for passage re-ranking task. Experimental results on public benchmark datasets show the effectiveness of the proposed method compared with state-of-the-art baselines without external knowledge due to its first attempt. The role of each module in KERM is also comprehensively analyzed. Since this work was limited to the one-to-one meta-graph of a query-passage pair built online, continued efforts are needed to make knowledge enhancement more efficient for both retrieval and re-ranking stage. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
},
{
"id": "2204.11673_all_59",
"text": " Despite that the knowledge graph distillation in our method is empirically shown to be effective for the final performance, the implementation of graph pruning and meta-graph construction is still based on simple heuristics. A more promising way of formulating a useful meta-graph is to jointly learn a graph generator with the reranker in an end-to-end fashion, which enables more flexibility. Besides, it is currently infeasible to exploit the external knowledge in the retrieval stage, which needs to exhaustively build massive meta-graphs for a large scale of candidates. A further study could focus on how to use external knowledge in PLM based retriever. ",
"title": "Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking"
}
] |
How has the quality and diversity of generated 3D face images improved over time, and what advances have contributed to these improvements?
|
The paper only discusses the line of work on 3D face reconstruction, i.e., the methods and approaches used to reconstruct 3D face images [42].
|
[
42
] |
[
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early 1990s, the study of FR became popular following the introduction of the historical Eigenface approach . The milestones of feature-based FR over the past years are presented in Fig. 1, in which the times of four major technical streams are highlighted. The holistic approaches derive the low-dimensional representation through certain distribution assumptions, such as linear subspace , manifold , and sparse representation . This idea dominated the FR community in the 1990s and 2000s. However, a well-known problem is that these theoretically plausible holistic methods fail to address the uncontrolled facial changes that deviate from their prior assumptions. In the early 2000s, this problem gave rise to local-feature-based FR. Gabor and LBP , as well as their multilevel and high-dimensional extensions , achieved robust performance through some invariant properties of local filtering. Unfortunately, handcrafted features suffered from a lack of distinctiveness and compactness. In the early 2010s, learning-based local descriptors were introduced to the FR community , in which local filters are learned for better distinctiveness and the encoding codebook is learned for better compactness. However, these shallow representations still have an inevitable limitation on robustness against the complex nonlinear facial appearance variations. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_1",
"text": " In general, traditional methods attempted to recognize human face by one or two layer representations, such as filtering responses, histogram of the feature codes, or distribution of the dictionary atoms. The research community studied intensively to separately improve the preprocessing, local descriptors, and feature transformation, but these approaches improved FR accuracy slowly. What’s worse, most methods aimed to address one aspect of unconstrained facial changes only, such as lighting, pose, expression, or disguise. There was no any integrated technique to address these unconstrained challenges integrally. As a result, with continuous efforts of more than a decade, “shallow” methods only improved the accuracy of the LFW benchmark to about 95% , which indicates that “shallow” methods are insufficient to extract stable identity feature invariant to real-world changes. Due to the insufficiency of this technical, facial recognition systems were often reported with unstable performance or failures with countless false alarms in real-world applications. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_2",
"text": " But all that changed in 2012 when AlexNet won the ImageNet competition by a large margin using a technique called deep learning . Deep learning methods, such as convolutional neural networks, use a cascade of multiple layers of processing units for feature extraction and transformation. They learn multiple levels of representations that correspond to different levels of abstraction. The levels form a hierarchy of concepts, showing strong invariance to the face pose, lighting, and expression changes, as shown in Fig. 2. It can be seen from the figure that the first layer of the deep neural network is somewhat similar to the Gabor feature found by human scientists with years of experience. The second layer learns more complex texture features. The features of the third layer are more complex, and some simple structures have begun to appear, such as high-bridged nose and big eyes. In the fourth, the network output is enough to explain a certain facial attribute, which can make a special response to some clear abstract concepts such as smile, roar, and even blue eye. In conclusion, in deep convolutional neural networks (CNN), the lower layers automatically learn the features similar to Gabor and SIFT designed for years or even decades (such as initial layers in Fig. 2), and the higher layers further learn higher level abstraction. Finally, the combination of these higher level abstraction represents facial identity with unprecedented stability. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_3",
"text": " In 2014, DeepFace achieved the SOTA accuracy on the famous LFW benchmark , approaching human performance on the unconstrained condition for the first time (DeepFace: 97.35% vs. Human: 97.53%), by training a 9-layer model on 4 million facial images. Inspired by this work, research focus has shifted to deep-learning-based approaches, and the accuracy was dramatically boosted to above 99.80% in just three years. Deep learning technique has reshaped the research landscape of FR in almost all aspects such as algorithm designs, training/test datasets, application scenarios and even the evaluation protocols. Therefore, it is of great significance to review the breakthrough and rapid development process in recent years. There have been several surveys on FR (24, 25, 26, 27, 28) and its subdomains, and they mostly summarized and compared a diverse set of techniques related to a specific FR scene, such as illumination-invariant FR , 3D FR , pose-invariant FR . Unfortunately, due to their earlier publication dates, none of them covered the deep learning methodology that is most successful nowadays. This survey focuses only on recognition problem, and one can refer to Ranjan et al. for a brief review of a full deep FR pipeline with detection and alignment, or refer to Jin et al. for a survey of face alignment. Specifically, the major contributions of this survey are as follows: ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_4",
"text": " • A systematic review on the evolution of the network architectures and loss functions for deep FR is provided. Various loss functions are categorized into Euclidean-distance-based loss, angular/cosine-margin-based loss and softmax loss and its variations. Both the mainstream network architectures, such as Deepface , DeepID series (34, 35, 21, 36), VGGFace , FaceNet , and VGGFace2 , and other architectures designed for FR are covered. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_5",
"text": " • We categorize the new face processing methods based on deep learning, such as those used to handle recognition difficulty on pose changes, into two classes: “one-to-many augmentation” and “many-to-one normalization”, and discuss how emerging generative adversarial network (GAN) facilitates deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_6",
"text": " • We present a comparison and analysis on public available databases that are of vital importance for both model training and testing. Major FR benchmarks, such as LFW , IJB-A/B/C (41, 42, 43), Megaface , and MS-Celeb-1M , are reviewed and compared, in term of the four aspects: training methodology, evaluation tasks and metrics, and recognition scenes, which provides an useful reference for training and testing deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_7",
"text": " • Besides the general purpose tasks defined by the major databases, we summarize a dozen scenario-specific databases and solutions that are still challenging for deep learning, such as anti-attack, cross-pose FR, and cross-age FR. By reviewing specially designed methods for these unsolved problems, we attempt to reveal the important issues for future research on deep FR, such as adversarial samples, algorithm/data biases, and model interpretability. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_8",
"text": " The remainder of this survey is structured as follows. In Section II, we introduce some background concepts and terminologies, and then we briefly introduce each component of FR. In Section III, different network architectures and loss functions are presented. Then, we summarize the face processing algorithms and the datasets. In Section V, we briefly introduce several methods of deep FR used for different scenes. Finally, the conclusion of this paper and discussion of future works are presented in Section VI. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_9",
"text": " As mentioned in , there are three modules needed for FR system, as shown in Fig. 3. First, a face detector is used to localize faces in images or videos. Second, with the facial landmark detector, the faces are aligned to normalized canonical coordinates. Third, the FR module is implemented with these aligned face images. We only focus on the FR module throughout the remainder of this paper. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_10",
"text": " Before a face image is fed to an FR module, face anti-spoofing, which recognizes whether the face is live or spoofed, is applied to avoid different types of attacks. Then, recognition can be performed. As shown in Fig. 3(c), an FR module consists of face processing, deep feature extraction and face matching, and it can be described as follows: ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_11",
"text": " M(F(Pi(Ii)),F(Pj(Ij)))𝑀𝐹subscript𝑃𝑖subscript𝐼𝑖𝐹subscript𝑃𝑗subscript𝐼𝑗M(F(P_{i}(I_{i})),F(P_{j}(I_{j}))) (1) where Iisubscript𝐼𝑖I_{i} and Ijsubscript𝐼𝑗I_{j} are two face images, respectively. P𝑃P stands for face processing to handle intra-personal variations before training and testing, such as poses, illuminations, expressions and occlusions. F𝐹F denotes feature extraction, which encodes the identity information. The feature extractor is learned by loss functions when training, and is utilized to extract features of faces when testing. M𝑀M means a face matching algorithm used to compute similarity scores of features to determine the specific identity of faces. Different from object classification, the testing identities are usually disjoint from the training data in FR, which makes the learned classifier cannot be used to recognize testing faces. Therefore, face matching algorithm is an essential part in FR. ",
"title": "Deep Face Recognition"
},
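The following is a minimal sketch of the modular view in Eq. (1): the processing step P, the embedding step F and the matcher M are wired together, with cosine similarity standing in for M. The placeholder functions and the 112x112 input size are assumptions for illustration, not part of any specific system described above.

```python
# Minimal sketch of M(F(P_i(I_i)), F(P_j(I_j))) from Eq. (1). The processing
# and embedding functions are placeholders; a real system would plug in face
# alignment and a trained deep network here.
import numpy as np

def process(image):                      # P: face processing (identity map here)
    return image

def embed(image):                        # F: feature extraction (toy stand-in)
    return image.reshape(-1)[:128]       # pretend the first 128 values are features

def match(f1, f2):                       # M: cosine similarity of deep features
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

img_i = np.random.rand(112, 112)
img_j = np.random.rand(112, 112)
score = match(embed(process(img_i)), embed(process(img_j)))
print(f"similarity score: {score:.3f}")  # compared against a threshold for verification
```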
{
"id": "1804.06655_all_12",
"text": " Although deep-learning-based approaches have been widely used, Mehdipour et al. proved that various conditions, such as poses, illuminations, expressions and occlusions, still affect the performance of deep FR. Accordingly, face processing is introduced to address this problem. The face processing methods are categorized as “one-to-many augmentation” and “many-to-one normalization”, as shown in Table I. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_13",
"text": " • “One-to-many augmentation”. These methods generate many patches or images of the pose variability from a single image to enable deep networks to learn pose-invariant representations. • “Many-to-one normalization”. These methods recover the canonical view of face images from one or many images of a nonfrontal view; then, FR can be performed as if it were under controlled conditions. Note that we mainly focus on deep face processing method designed for pose variations in this paper, since pose is widely regarded as a major challenge in automatic FR applications and other variations can be solved by the similar methods. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_14",
"text": " Network Architecture. The architectures can be categorized as backbone and assembled networks, as shown in Table II. Inspired by the extraordinary success on the ImageNet challenge, the typical CNN architectures, e.g. AlexNet, VGGNet, GoogleNet, ResNet and SENet (22, 75, 76, 77, 78), are introduced and widely used as the baseline models in FR (directly or slightly modified). In addition to the mainstream, some assembled networks, e.g. multi-task networks and multi-input networks, are utilized in FR. Hu et al. shows that accumulating the results of assembled networks provides an increase in performance compared with an individual network. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_15",
"text": " Loss Function. The softmax loss is commonly used as the supervision signal in object recognition, and it encourages the separability of features. However, the softmax loss is not sufficiently effective for FR because intra-variations could be larger than inter-differences and more discriminative features are required when recognizing different people. Many works focus on creating novel loss functions to make features not only more separable but also discriminative, as shown in Table III. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_16",
"text": " FR can be categorized as face verification and face identification. In either scenario, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. After the deep networks are trained on massive data with the supervision of an appropriate loss function, each of the test images is passed through the networks to obtain a deep feature representation. Using cosine distance or L2 distance, face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. In addition to these, other methods are introduced to postprocess the deep features such that the face matching is performed efficiently and accurately, such as metric learning, sparse-representation-based classifier (SRC), and so forth. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_17",
"text": " To sum up, we present FR modules and their commonly-used methods in Fig. 4 to help readers to get a view of the whole FR. In deep FR, various training and testing face databases are constructed, and different architectures and losses of deep FR always follow those of deep object classification and are modified according to unique characteristics of FR. Moreover, in order to address unconstrained facial changes, face processing methods are further designed to handle poses, expressions and occlusions variations. Benefiting from these strategies, deep FR system significantly improves the SOTA and surpasses human performance. When the applications of FR becomes more and more mature in general scenario, recently, different solutions are driven for more difficult specific scenarios, such as cross-pose FR, cross-age FR, video FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_18",
"text": " For most applications, it is difficult to include the candidate faces during the training stage, which makes FR become a “zero-shot” learning task. Fortunately, since all human faces share a similar shape and texture, the representation learned from a small proportion of faces can generalize well to the rest. Based on this theory, a straightforward way to improve generalized performance is to include as many IDs as possible in the training set. For example, Internet giants such as Facebook and Google have reported their deep FR system trained by 106−107superscript106superscript10710^{6}-10^{7} IDs (38, 20). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_19",
"text": " Unfortunately, these personal datasets, as well as prerequisite GPU clusters for distributed model training, are not accessible for academic community. Currently, public available training databases for academic research consist of only 103−105superscript103superscript10510^{3}-10^{5} IDs. Instead, academic community makes effort to design effective loss functions and adopts efficient architectures to make deep features more discriminative using the relatively small training data sets. For instance, the accuracy of most popular LFW benchmark has been boosted from 97% to above 99.8% in the pasting four years, as enumerated in Table IV. In this section, we survey the research efforts on different loss functions and network architectures that have significantly improved deep FR methods. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_20",
"text": " Inheriting from the object classification network such as AlexNet, the initial Deepface and DeepID adopted cross-entropy based softmax loss for feature learning. After that, people realized that the softmax loss is not sufficient by itself to learn discriminative features, and more researchers began to explore novel loss functions for enhanced generalization ability. This becomes the hottest research topic in deep FR research, as illustrated in Fig. 5. Before 2017, Euclidean-distance-based loss played an important role; In 2017, angular/cosine-margin-based loss as well as feature and weight normalization became popular. It should be noted that, although some loss functions share the similar basic idea, the new one is usually designed to facilitate the training procedure by easier parameter or sample selection. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_21",
"text": " Euclidean-distance-based loss is a metric learning method (118, 119) that embeds images into Euclidean space in which intra-variance is reduced and inter-variance is enlarged. The contrastive loss and the triplet loss are the commonly used loss functions. The contrastive loss (35, 21, 36, 61, 120) requires face image pairs, and then pulls together positive pairs and pushes apart negative pairs. ℒ=yijmax(0,‖f(xi)−f(xj)‖2−ϵ+)+(1−yij)max(0,ϵ−−‖f(xi)−f(xj)‖2)ℒsubscript𝑦𝑖𝑗𝑚𝑎𝑥0subscriptdelimited-∥∥𝑓subscript𝑥𝑖𝑓subscript𝑥𝑗2superscriptitalic-ϵ1subscript𝑦𝑖𝑗𝑚𝑎𝑥0superscriptitalic-ϵsubscriptdelimited-∥∥𝑓subscript𝑥𝑖𝑓subscript𝑥𝑗2\\begin{split}\\mathcal{L}=&y_{ij}max\\left(0,\\left\\|f(x_{i})-f(x_{j})\\right\\|_{2}-\\epsilon^{+}\\right)\\\\ &+(1-y_{ij})max\\left(0,\\epsilon^{-}-\\left\\|f(x_{i})-f(x_{j})\\right\\|_{2}\\right)\\end{split} (2) where yij=1subscript𝑦𝑖𝑗1y_{ij}=1 means xisubscript𝑥𝑖x_{i} and xjsubscript𝑥𝑗x_{j} are matching samples and yij=0subscript𝑦𝑖𝑗0y_{ij}=0 means non-matching samples. f(⋅)𝑓⋅f(\\cdot) is the feature embedding, ϵ+superscriptitalic-ϵ\\epsilon^{+} and ϵ−superscriptitalic-ϵ\\epsilon^{-} control the margins of the matching and non-matching pairs respectively. DeepID2 combined the face identification (softmax) and verification (contrastive loss) supervisory signals to learn a discriminative representation, and joint Bayesian (JB) was applied to obtain a robust embedding space. Extending from DeepID2 , DeepID2+ increased the dimension of hidden representations and added supervision to early convolutional layers. DeepID3 further introduced VGGNet and GoogleNet to their work. However, the main problem with the contrastive loss is that the margin parameters are often difficult to choose. ",
"title": "Deep Face Recognition"
},
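A small numeric sketch of the contrastive loss in Eq. (2) follows; the margin values eps_pos and eps_neg and the toy embeddings are illustrative assumptions, not settings from any cited work.

```python
# Numeric sketch of the contrastive loss in Eq. (2): matching pairs are pulled
# inside eps_pos, non-matching pairs are pushed beyond eps_neg.
import numpy as np

def contrastive_loss(f_i, f_j, y_ij, eps_pos=0.5, eps_neg=1.5):
    d = np.linalg.norm(f_i - f_j)          # Euclidean distance between embeddings
    if y_ij == 1:                          # matching pair: pull together
        return max(0.0, d - eps_pos)
    return max(0.0, eps_neg - d)           # non-matching pair: push apart

a, b = np.array([1.0, 0.0]), np.array([0.8, 0.1])
print(contrastive_loss(a, b, y_ij=1), contrastive_loss(a, b, y_ij=0))
```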
{
"id": "1804.06655_all_22",
"text": " Contrary to contrastive loss that considers the absolute distances of the matching pairs and non-matching pairs, triplet loss considers the relative difference of the distances between them. Along with FaceNet proposed by Google, Triplet loss (38, 37, 81, 80, 58, 60) was introduced into FR. It requires the face triplets, and then it minimizes the distance between an anchor and a positive sample of the same identity and maximizes the distance between the anchor and a negative sample of a different identity. FaceNet made ‖f(xia)−f(xip)‖22+α<−‖f(xia)−f(xin)‖22superscriptsubscriptnorm𝑓superscriptsubscript𝑥𝑖𝑎𝑓superscriptsubscript𝑥𝑖𝑝22𝛼superscriptsubscriptnorm𝑓superscriptsubscript𝑥𝑖𝑎𝑓superscriptsubscript𝑥𝑖𝑛22\\left\\|f(x_{i}^{a})-f(x_{i}^{p})\\right\\|_{2}^{2}+\\alpha<-\\left\\|f(x_{i}^{a})-f(x_{i}^{n})\\right\\|_{2}^{2} using hard triplet face samples, where xiasuperscriptsubscript𝑥𝑖𝑎x_{i}^{a}, xipsuperscriptsubscript𝑥𝑖𝑝x_{i}^{p} and xinsuperscriptsubscript𝑥𝑖𝑛x_{i}^{n} are the anchor, positive and negative samples, respectively, α𝛼\\alpha is a margin and f(⋅)𝑓⋅f(\\cdot) represents a nonlinear transformation embedding an image into a feature space. Inspired by FaceNet , TPE and TSE learned a linear projection W𝑊W to construct triplet loss. Other methods optimize deep models using both triplet loss and softmax loss (59, 58, 60, 121). They first train networks with softmax and then fine-tune them with triplet loss. ",
"title": "Deep Face Recognition"
},
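The triplet constraint above can be turned into the usual hinge-style loss; the sketch below is illustrative only, with an assumed margin alpha and toy embeddings rather than values from FaceNet.

```python
# Numeric sketch of a triplet loss: it is zero once
# ||f_a - f_p||^2 + alpha <= ||f_a - f_n||^2 holds for the triplet.
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    d_ap = np.sum((f_a - f_p) ** 2)        # anchor-positive squared distance
    d_an = np.sum((f_a - f_n) ** 2)        # anchor-negative squared distance
    return max(0.0, d_ap - d_an + alpha)

anchor   = np.array([0.9, 0.1])
positive = np.array([0.85, 0.15])
negative = np.array([0.1, 0.9])
print(triplet_loss(anchor, positive, negative))
```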
{
"id": "1804.06655_all_23",
"text": " However, the contrastive loss and triplet loss occasionally encounter training instability due to the selection of effective training samples, some paper begun to explore simple alternatives. Center loss and its variants (82, 116, 102) are good choices for reducing intra-variance. The center loss learned a center for each class and penalized the distances between the deep features and their corresponding class centers. This loss can be defined as follows: ℒC=12∑i=1m‖xi−cyi‖22subscriptℒ𝐶12superscriptsubscript𝑖1𝑚superscriptsubscriptnormsubscript𝑥𝑖subscript𝑐subscript𝑦𝑖22\\mathcal{L}_{C}=\\frac{1}{2}\\sum_{i=1}^{m}\\left\\|x_{i}-c_{y_{i}}\\right\\|_{2}^{2} (3) where xisubscript𝑥𝑖x_{i} denotes the i𝑖i-th deep feature belonging to the yisubscript𝑦𝑖y_{i}-th class and cyisubscript𝑐subscript𝑦𝑖c_{y_{i}} denotes the yisubscript𝑦𝑖y_{i}-th class center of deep features. To handle the long-tailed data, a range loss , which is a variant of center loss, is used to minimize k greatest range’s harmonic mean values in one class and maximize the shortest inter-class distance within one batch. Wu et al. proposed a center-invariant loss that penalizes the difference between each center of classes. Deng et al. selected the farthest intra-class samples and the nearest inter-class samples to compute a margin loss. However, the center loss and its variants suffer from massive GPU memory consumption on the classification layer, and prefer balanced and sufficient training data for each identity. ",
"title": "Deep Face Recognition"
},
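A short numeric sketch of the center loss in Eq. (3); in real training the class centers are updated jointly with the network, whereas here they are fixed toy values for illustration.

```python
# Numeric sketch of the center loss in Eq. (3): half the summed squared
# distances between deep features and their corresponding class centers.
import numpy as np

def center_loss(features, labels, centers):
    """features: (m, d), labels: (m,) int class ids, centers: (num_classes, d)."""
    diffs = features - centers[labels]     # x_i - c_{y_i} for every sample
    return 0.5 * np.sum(diffs ** 2)

feats   = np.array([[1.0, 0.0], [0.9, 0.2], [0.0, 1.0]])
labels  = np.array([0, 0, 1])
centers = np.array([[0.95, 0.1], [0.0, 0.95]])
print(center_loss(feats, labels, centers))
```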
{
"id": "1804.06655_all_24",
"text": " In 2017, people had a deeper understanding of loss function in deep FR and thought that samples should be separated more strictly to avoid misclassifying the difficult samples. Angular/cosine-margin-based loss (104, 84, 105, 106, 108) is proposed to make learned features potentially separable with a larger angular/cosine distance. The decision boundary in softmax loss is (W1−W2)x+b1−b2=0subscript𝑊1subscript𝑊2𝑥subscript𝑏1subscript𝑏20\\left(W_{1}-W_{2}\\right)x+b_{1}-b_{2}=0, where x𝑥x is feature vector, Wisubscript𝑊𝑖W_{i} and bisubscript𝑏𝑖b_{i} are weights and bias in softmax loss, respectively. Liu et al. reformulated the original softmax loss into a large-margin softmax (L-Softmax) loss. They constrain b1=b2=0subscript𝑏1subscript𝑏20b_{1}=b_{2}=0, so the decision boundaries for class 1 and class 2 become ‖x‖(‖W1‖cos(mθ1)−‖W2‖cos(θ2))=0norm𝑥normsubscript𝑊1𝑐𝑜𝑠𝑚subscript𝜃1normsubscript𝑊2𝑐𝑜𝑠subscript𝜃20\\left\\|x\\right\\|\\left(\\left\\|W_{1}\\right\\|cos\\left(m\\theta_{1}\\right)-\\left\\|W_{2}\\right\\|cos\\left(\\theta_{2}\\right)\\right)=0 and ‖x‖(‖W1‖‖W2‖cos(θ1)−cos(mθ2))=0norm𝑥normsubscript𝑊1normsubscript𝑊2𝑐𝑜𝑠subscript𝜃1𝑐𝑜𝑠𝑚subscript𝜃20\\left\\|x\\right\\|\\left(\\left\\|W_{1}\\right\\|\\left\\|W_{2}\\right\\|cos\\left(\\theta_{1}\\right)-cos\\left(m\\theta_{2}\\right)\\right)=0, respectively, where m𝑚m is a positive integer introducing an angular margin, and θisubscript𝜃𝑖\\theta_{i} is the angle between Wisubscript𝑊𝑖W_{i} and x𝑥x. Due to the non-monotonicity of the cosine function, a piece-wise function is applied in L-softmax to guarantee the monotonicity. The loss function is defined as follows: ℒi=−log(e‖Wyi‖‖xi‖φ(θyi)e‖Wyi‖‖xi‖φ(θyi)+∑j≠yie‖Wyi‖‖xi‖cos(θj))subscriptℒ𝑖𝑙𝑜𝑔superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃𝑦𝑖superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃𝑦𝑖subscript𝑗subscript𝑦𝑖superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝑐𝑜𝑠subscript𝜃𝑗\\mathcal{L}_{i}=-log\\left(\\frac{e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{yi})}}{e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{yi})+\\sum_{j\\neq y_{i}}e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|cos(\\theta_{j})}}}\\right) (4) where φ(θ)=(−1)kcos(mθ)−2k,θ∈(kπm,(k+1)πm)formulae-sequence𝜑𝜃superscript1𝑘𝑐𝑜𝑠𝑚𝜃2𝑘𝜃𝑘𝜋𝑚𝑘1𝜋𝑚\\varphi(\\theta)=(-1)^{k}cos(m\\theta)-2k,\\theta\\in\\left(\\frac{k\\pi}{m},\\frac{(k+1)\\pi}{m}\\right) (5) Considering that L-Softmax is difficult to converge, it is always combined with softmax loss to facilitate and ensure the convergence. Therefore, the loss function is changed into: fyi=λ‖Wyi‖‖xi‖cos(θyi)+‖Wyi‖‖xi‖φ(θyi)1+λsubscript𝑓subscript𝑦𝑖𝜆normsubscript𝑊subscript𝑦𝑖normsubscript𝑥𝑖𝑐𝑜𝑠subscript𝜃subscript𝑦𝑖normsubscript𝑊subscript𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃subscript𝑦𝑖1𝜆f_{y_{i}}=\\frac{\\lambda\\left\\|W_{y_{i}}\\right\\|\\left\\|x_{i}\\right\\|cos(\\theta_{y_{i}})+\\left\\|W_{y_{i}}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{y_{i}})}{1+\\lambda}, where λ𝜆\\lambda is a dynamic hyper-parameter. Based on L-Softmax, A-Softmax loss further normalized the weight W𝑊W by L2 norm (‖W‖=1norm𝑊1\\left\\|W\\right\\|=1) such that the normalized vector will lie on a hypersphere, and then the discriminative face features can be learned on a hypersphere manifold with an angular margin (Fig. 6). Liu et al. introduced a deep hyperspherical convolution network (SphereNet) that adopts hyperspherical convolution as its basic convolution operator and is supervised by angular-margin-based loss. 
To overcome the optimization difficulty of L-Softmax and A-Softmax, which incorporate the angular margin in a multiplicative manner, ArcFace and CosFace , AMS loss respectively introduced an additive angular/cosine margin cos(θ+m)𝑐𝑜𝑠𝜃𝑚cos(\\theta+m) and cosθ−m𝑐𝑜𝑠𝜃𝑚cos\\theta-m. They are extremely easy to implement without tricky hyper-parameters λ𝜆\\lambda, and are more clear and able to converge without the softmax supervision. The decision boundaries under the binary classification case are given in Table V. Based on large margin, FairLoss and AdaptiveFace further proposed to adjust the margins for different classes adaptively to address the problem of unbalanced data. Compared to Euclidean-distance-based loss, angular/cosine-margin-based loss explicitly adds discriminative constraints on a hypershpere manifold, which intrinsically matches the prior that human face lies on a manifold. However, Wang et al. showed that angular/cosine-margin-based loss can achieve better results on a clean dataset, but is vulnerable to noise and becomes worse than center loss and softmax in the high-noise region as shown in Fig. 7. ",
"title": "Deep Face Recognition"
},
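To illustrate the additive margins discussed above, the sketch below builds ArcFace-style (cos(theta+m)) and CosFace/AMS-style (cos(theta)-m) logits from normalized features and weights and evaluates a cross-entropy on them. The scale s, margin m and random inputs are assumptions, not the papers' recommended settings.

```python
# Sketch of additive angular/cosine margin logits followed by cross-entropy.
import numpy as np

def margin_softmax_loss(x, W, y, s=30.0, m=0.35, kind="cos"):
    """x: (d,) feature, W: (d, C) class weight matrix, y: index of the true class."""
    x = x / np.linalg.norm(x)                          # feature normalization
    W = W / np.linalg.norm(W, axis=0, keepdims=True)   # weight normalization
    cos = x @ W                                        # cos(theta_j) for every class j
    logits = cos.copy()
    if kind == "cos":                                  # CosFace/AMS-style: cos(theta_y) - m
        logits[y] = cos[y] - m
    else:                                              # ArcFace-style: cos(theta_y + m)
        logits[y] = np.cos(np.arccos(np.clip(cos[y], -1.0, 1.0)) + m)
    logits = s * logits                                # rescale to radius s
    z = logits - logits.max()                          # numerically stable log-softmax
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[y]                               # cross-entropy on the margin logits

rng = np.random.default_rng(1)
x, W = rng.normal(size=64), rng.normal(size=(64, 10))
print(margin_softmax_loss(x, W, y=3, kind="cos"),
      margin_softmax_loss(x, W, y=3, kind="arc"))
```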
{
"id": "1804.06655_all_25",
"text": " In 2017, in addition to reformulating softmax loss into an angular/cosine-margin-based loss as mentioned above, some works tries to normalize the features and weights in loss functions to improve the model performance, which can be written as follows: W^=W‖W‖,x^=αx‖x‖formulae-sequence^𝑊𝑊norm𝑊^𝑥𝛼𝑥norm𝑥\\hat{W}=\\frac{W}{\\left\\|W\\right\\|},\\hat{x}=\\alpha\\frac{x}{\\left\\|x\\right\\|} (6) where α𝛼\\alpha is a scaling parameter, x𝑥x is the learned feature vector, W𝑊W is weight of last fully connected layer. Scaling x𝑥x to a fixed radius α𝛼\\alpha is important, as Wang et al. proved that normalizing both features and weights to 1 will make the softmax loss become trapped at a very high value on the training set. After that, the loss function, e.g. softmax, can be performed using the normalized features and weights. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_26",
"text": " Some papers (84, 108) first normalized the weights only and then added angular/cosine margin into loss functions to make the learned features be discriminative. In contrast, some works, such as (109, 111), adopted feature normalization only to overcome the bias to the sample distribution of the softmax. Based on the observation of that the L2-norm of features learned using the softmax loss is informative of the quality of the face, L2-softmax enforced all the features to have the same L2-norm by feature normalization such that similar attention is given to good quality frontal faces and blurry faces with extreme pose. Rather than scaling x𝑥x to the parameter α𝛼\\alpha, Hasnat et al. normalized features with x^=x−μσ2^𝑥𝑥𝜇superscript𝜎2\\hat{x}=\\frac{x-\\mu}{\\sqrt{\\sigma^{2}}}, where μ𝜇\\mu and σ2superscript𝜎2\\sigma^{2} are the mean and variance. Ring loss encouraged the norm of samples being value R𝑅R (a learned parameter) rather than explicit enforcing through a hard normalization operation. Moreover, normalizing both features and weights (110, 112, 115, 105, 106) has become a common strategy. Wang et al. explained the necessity of this normalization operation from both analytic and geometric perspectives. After normalizing features and weights, CoCo loss optimized the cosine distance among data features, and Hasnat et al. used the von Mises-Fisher (vMF) mixture model as the theoretical basis to develop a novel vMF mixture loss and its corresponding vMF deep features. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_27",
"text": " Mainstream architectures. The commonly used network architectures of deep FR have always followed those of deep object classification and evolved from AlexNet to SENet rapidly. We present the most influential architectures of deep object classification and deep face recognition in chronological order 111The time we present is when the paper was published. in Fig. 8. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_28",
"text": " In 2012, AlexNet was reported to achieve the SOTA recognition accuracy in the ImageNet large-scale visual recognition competition (ILSVRC) 2012, exceeding the previous best results by a large margin. AlexNet consists of five convolutional layers and three fully connected layers, and it also integrates various techniques, such as rectified linear unit (ReLU), dropout, data augmentation, and so forth. ReLU was widely regarded as the most essential component for making deep learning possible. Then, in 2014, VGGNet proposed a standard network architecture that used very small 3×3333\\times 3 convolutional filters throughout and doubled the number of feature maps after the 2×\\times2 pooling. It increased the depth of the network to 16-19 weight layers, which further enhanced the flexibility to learn progressive nonlinear mappings by deep architectures. In 2015, the 22-layer GoogleNet introduced an “inception module” with the concatenation of hybrid feature maps, as well as two additional intermediate softmax supervised signals. It performs several convolutions with different receptive fields (1×1111\\times 1, 3×3333\\times 3 and 5×5555\\times 5) in parallel, and concatenates all feature maps to merge the multi-resolution information. In 2016, ResNet proposed to make layers learn a residual mapping with reference to the layer inputs ℱ(x):=ℋ(x)−xassignℱ𝑥ℋ𝑥𝑥\\mathcal{F}(x):=\\mathcal{H}(x)-x rather than directly learning a desired underlying mapping ℋ(x)ℋ𝑥\\mathcal{H}(x) to ease the training of very deep networks (up to 152 layers). The original mapping is recast into ℱ(x)+xℱ𝑥𝑥\\mathcal{F}(x)+x and can be realized by “shortcut connections”. As the champion of ILSVRC 2017, SENet introduced a “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. These blocks can be integrated with modern architectures, such as ResNet, and improves their representational power. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_29",
"text": " With the evolved architectures and advanced training techniques, such as batch normalization (BN), the network becomes deeper and the training becomes more controllable. Following these architectures in object classification, the networks in deep FR are also developed step by step, and the performance of deep FR is continually improving. We present these mainstream architectures of deep FR in Fig. 9. In 2014, DeepFace was the first to use a nine-layer CNN with several locally connected layers. With 3D alignment for face processing, it reaches an accuracy of 97.35% on LFW. In 2015, FaceNet used a large private dataset to train a GoogleNet. It adopted a triplet loss function based on triplets of roughly aligned matching/nonmatching face patches generated by a novel online triplet mining method and achieved good performance of 99.63%. In the same year, VGGface designed a procedure to collect a large-scale dataset from the Internet. It trained the VGGNet on this dataset and then fine-tuned the networks via a triplet loss function similar to FaceNet. VGGface obtains an accuracy of 98.95%. In 2017, SphereFace used a 64-layer ResNet architecture and proposed the angular softmax (A-Softmax) loss to learn discriminative face features with angular margin. It boosts the achieves to 99.42% on LFW. In the end of 2017, a new large-scale face dataset, namely VGGface2 , was introduced, which consists of large variations in pose, age, illumination, ethnicity and profession. Cao et al. first trained a SENet with MS-celeb-1M dataset and then fine-tuned the model with VGGface2 , and achieved the SOTA performance on the IJB-A and IJB-B . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_30",
"text": " Light-weight networks. Using deeper neural network with hundreds of layers and millions of parameters to achieve higher accuracy comes at cost. Powerful GPUs with larger memory size are needed, which makes the applications on many mobiles and embedded devices impractical. To address this problem, light-weight networks are proposed. Light CNN (85, 86) proposed a max-feature-map (MFM) activation function that introduces the concept of maxout in the fully connected layer to CNN. The MFM obtains a compact representation and reduces the computational cost. Sun et al. proposed to sparsify deep networks iteratively from the previously learned denser models based on a weight selection criterion. MobiFace adopted fast downsampling and bottleneck residual block with the expansion layers and achieved high performance with 99.7% on LFW database. Although some other light-weight CNNs, such as SqueezeNet, MobileNet, ShuffleNet and Xception (126, 127, 128, 129), are still not widely used in FR, they deserve more attention. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_31",
"text": " Adaptive-architecture networks. Considering that designing architectures manually by human experts are time-consuming and error-prone processes, there is growing interest in adaptive-architecture networks which can find well-performing architectures, e.g. the type of operation every layer executes (pooling, convolution, etc) and hyper-parameters associated with the operation (number of filters, kernel size and strides for a convolutional layer, etc), according to the specific requirements of training and testing data. Currently, neural architecture search (NAS) is one of the promising methodologies, which has outperformed manually designed architectures on some tasks such as image classification or semantic segmentation . Zhu et al. integrated NAS technology into face recognition. They used reinforcement learning algorithm (policy gradient) to guide the controller network to train the optimal child architecture. Besides NAS, there are some other explorations to learn optimal architectures adaptively. For example, conditional convolutional neural network (c-CNN) dynamically activated sets of kernels according to modalities of samples; Han et al. proposed a novel contrastive convolution consisted of a trunk CNN and a kernel generator, which is beneficial owing to its dynamistic generation of contrastive kernels based on the pair of faces being compared. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_32",
"text": " Joint alignment-recognition networks. Recently, an end-to-end system (91, 92, 93, 94) was proposed to jointly train FR with several modules (face detection, alignment, and so forth) together. Compared to the existing methods in which each module is generally optimized separately according to different objectives, this end-to-end system optimizes each module according to the recognition objective, leading to more adequate and robust inputs for the recognition model. For example, inspired by spatial transformer , Hayat et al. proposed a CNN-based data-driven approach that learns to simultaneously register and represent faces (Fig. 10), while Wu et al. designed a novel recursive spatial transformer (ReST) module for CNN allowing face alignment and recognition to be jointly optimized. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_33",
"text": " Multi-input networks. In “one-to-many augmentation”, multiple images with variety are generated from one image in order to augment training data. Taken these multiple images as input, multiple networks are also assembled together to extract and combine features of different type of inputs, which can outperform an individual network. In (58, 59, 60, 99, 34, 21, 35), assembled networks are built after different face patches are cropped, and then different types of patches are fed into different sub-networks for representation extraction. By combining the results of sub-networks, the performance can be improved. Other papers (96, 95, 98) used assembled networks to recognize images with different poses. For example, Masi et al. adjusted the pose to frontal (0∘superscript00^{\\circ}), half-profile (40∘superscript4040^{\\circ}) and full-profile views (75∘superscript7575^{\\circ}) and then addressed pose variation by assembled pose networks. A multi-view deep network (MvDN) consists of view-specific subnetworks and common subnetworks; the former removes view-specific variations, and the latter obtains common representations. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_34",
"text": " Multi-task networks. FR is intertwined with various factors, such as pose, illumination, and age. To solve this problem, multitask learning is introduced to transfer knowledge from other relevant tasks and to disentangle nuisance factors. In multi-task networks, identity classification is the main task and the side tasks are pose, illumination, and expression estimations, among others. The lower layers are shared among all the tasks, and the higher layers are disentangled into different sub-networks to generate the task-specific outputs. In , the task-specific sub-networks are branched out to learn face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and FR. Yin et al. proposed to automatically assign the dynamic loss weights for each side task. Peng et al. used a feature reconstruction metric learning to disentangle a CNN into sub-networks for jointly learning the identity and non-identity features as shown in Fig. 11. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_35",
"text": " During testing, the cosine distance and L2 distance are generally employed to measure the similarity between the deep features x1subscript𝑥1x_{1} and x2subscript𝑥2x_{2}; then, threshold comparison and the nearest neighbor (NN) classifier are used to make decision for verification and identification. In addition to these common methods, there are some other explorations. ",
"title": "Deep Face Recognition"
},
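A compact sketch of the two test protocols described above: verification compares a probe against a single gallery feature with a threshold, while identification takes the nearest neighbor over the whole gallery. The threshold and features are toy values for illustration only.

```python
# Sketch of verification (1:1, threshold) and identification (1:N, nearest
# neighbor) on top of cosine similarity between deep features.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(probe, gallery_feat, threshold=0.6):
    return cosine(probe, gallery_feat) >= threshold     # same person or not

def identify(probe, gallery_feats):
    scores = [cosine(probe, g) for g in gallery_feats]
    return int(np.argmax(scores)), max(scores)          # best gallery index and score

gallery = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
probe = np.array([0.9, 0.1, 0.0])
print(verify(probe, gallery[0]), identify(probe, gallery))
```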
{
"id": "1804.06655_all_36",
"text": " Metric learning, which aims to find a new metric to make two classes more separable, can also be used for face matching based on extracted deep features. The JB model is a well-known metric learning method (35, 21, 36, 34, 120), and Hu et al. proved that it can improve the performance greatly. In the JB model, a face feature x𝑥x is modeled as x=μ+ε𝑥𝜇𝜀x=\\mu+\\varepsilon, where μ𝜇\\mu and ε𝜀\\varepsilon are identity and intra-personal variations, respectively. The similarity score r(x1,x2)𝑟subscript𝑥1subscript𝑥2r(x_{1},x_{2}) can be represented as follows: r(x1,x2)=logP(x1,x2|HI)P(x1,x2|HE)𝑟subscript𝑥1subscript𝑥2𝑙𝑜𝑔𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐼𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐸r(x_{1},x_{2})=log\\frac{P\\left(x_{1},x_{2}|H_{I}\\right)}{P\\left(x_{1},x_{2}|H_{E}\\right)} (7) where P(x1,x2|HI)𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐼P(x_{1},x_{2}|H_{I}) is the probability that two faces belong to the same identity and P(x1,x2|HE)𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐸P(x_{1},x_{2}|H_{E}) is the probability that two faces belong to different identities. ",
"title": "Deep Face Recognition"
},
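For intuition, the sketch below evaluates the JB log-likelihood ratio of Eq. (7) under the usual assumption that the identity and intra-personal components are zero-mean Gaussians with covariances S_mu and S_eps; the diagonal covariances here are illustrative stand-ins for learned ones.

```python
# Sketch of r(x1, x2) = log P(x1,x2|H_I) - log P(x1,x2|H_E) assuming zero-mean
# Gaussian identity (S_mu) and intra-personal (S_eps) components.
import numpy as np

def gaussian_logpdf(z, cov):
    d = len(z)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + z @ np.linalg.solve(cov, z))

def joint_bayesian_score(x1, x2, S_mu, S_eps):
    z = np.concatenate([x1, x2])
    cov_I = np.block([[S_mu + S_eps, S_mu], [S_mu, S_mu + S_eps]])      # same identity
    cov_E = np.block([[S_mu + S_eps, np.zeros_like(S_mu)],
                      [np.zeros_like(S_mu), S_mu + S_eps]])             # different identities
    return gaussian_logpdf(z, cov_I) - gaussian_logpdf(z, cov_E)

d = 4
S_mu, S_eps = 2.0 * np.eye(d), 0.5 * np.eye(d)
x = np.random.randn(d)
print(joint_bayesian_score(x, x + 0.1, S_mu, S_eps))   # high score: likely same person
print(joint_bayesian_score(x, -x, S_mu, S_eps))        # low score: likely different people
```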
{
"id": "1804.06655_all_37",
"text": " After cosine distance was computed, Cheng et al. proposed a heuristic voting strategy at the similarity score level to combine the results of multiple CNN models and won first place in Challenge 2 of MS-celeb-1M 2017. Yang et al. extracted the local adaptive convolution features from the local regions of the face image and used the extended SRC for FR with a single sample per person. Guo et al. combined deep features and the SVM classifier to perform recognition. Wang et al. first used product quantization (PQ) to directly retrieve the top-k most similar faces and re-ranked these faces by combining similarities from deep features and the COTS matcher . In addition, Softmax can be also used in face matching when the identities of training set and test set overlap. For example, in Challenge 2 of MS-celeb-1M, Ding et al. trained a 21,000-class softmax classifier to directly recognize faces of one-shot classes and normal classes after augmenting feature by a conditional GAN; Guo et al. trained the softmax classifier combined with underrepresented-classes promotion (UP) loss term to enhance the performance on one-shot classes. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_38",
"text": " When the distributions of training data and testing data are the same, the face matching methods mentioned above are effective. However, there is always a distribution change or domain shift between two data domains that can degrade the performance on test data. Transfer learning (144, 145) has recently been introduced into deep FR to address the problem of domain shift. It learns transferable features using a labeled source domain (training data) and an unlabeled target domain (testing data) such that domain discrepancy is reduced and models trained on source domain will also perform well on target domain. Sometimes, this technology is applied to face matching. For example, Crosswhite et al. and Xiong et al. adopted template adaptation to the set of media in a template by combining CNN features with template-specific linear SVMs. But most of the time, it is not enough to do transfer learning only at face matching stage. Transfer learning should be embedded in deep models to learn more transferable representations. Kan et al. proposed a bi-shifting autoencoder network (BAE) for domain adaptation across view angle, ethnicity, and imaging sensor; while Luo et al. utilized the multi-kernels maximum mean discrepancy (MMD) to reduce domain discrepancies. Sohn et al. used adversarial learning to transfer knowledge from still image FR to video FR. Moreover, fine-tuning the CNN parameters from a prelearned model using a target training dataset is a particular type of transfer learning, and is commonly employed by numerous methods (151, 152, 103). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_39",
"text": " We present the development of face processing methods in chronological order in Fig. 12. As we can see from the figure, most papers attempted to perform face processing by autoencoder model in 2014 and 2015; while 3D model played an important role in 2016. GAN has drawn substantial attention from the deep learning and computer vision community since it was first proposed by Goodfellow et al. It can be used in different fields and was also introduced into face processing in 2017. GAN can be used to perform “one-to-many augmentation” and “many-to-one normalization”, and it broke the limit that face synthesis should be done under supervised way. Although GAN has not been widely used in face processing for training and recognition, it has great latent capacity for preprocessing, for example, Dual-Agent GANs (DA-GAN) won the 1st places on verification and identification tracks in the NIST IJB-A 2017 FR competitions. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_40",
"text": " Collecting a large database is extremely expensive and time consuming. The methods of “one-to-many augmentation” can mitigate the challenges of data collection, and they can be used to augment not only training data but also the gallery of test data. we categorized them into four classes: data augmentation, 3D model, autoencoder model and GAN model. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_41",
"text": " Data augmentation. Common data augmentation methods consist of photometric transformations (75, 22) and geometric transformations, such as oversampling (multiple patches obtained by cropping at different scales) , mirroring , and rotating the images. Recently, data augmentation has been widely used in deep FR algorithms (58, 59, 60, 35, 21, 36, 61, 62). for example, Sun et al. cropped 400 face patches varying in positions, scales, and color channels and mirrored the images. Liu et al. generated seven overlapped image patches centered at different landmarks on the face region and trained them with seven CNNs with the same structure. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_42",
"text": " 3D model. 3D face reconstruction is also a way to enrich the diversity of training data. They utilize 3D structure information to model the transformation between poses. 3D models first use 3D face data to obtain morphable displacement fields and then apply them to obtain 2D face data in different pose angles. There is a large number of papers about this domain, but we only focus on the 3D face reconstruction using deep methods or used for deep FR. In , Masi et al. generated face images with new intra-class facial appearance variations, including pose, shape and expression, and then trained a 19-layer VGGNet with both real and augmented data. Masi et al. used generic 3D faces and rendered fixed views to reduce much of the computational effort. Richardson et al. employed an iterative 3D CNN by using a secondary input channel to represent the previous network’s output as an image for reconstructing a 3D face as shown in Fig. 13. Dou et al. used a multi-task CNN to divide 3D face reconstruction into neutral 3D reconstruction and expressive 3D reconstruction. Tran et al. directly regressed 3D morphable face model (3DMM) parameters from an input photo by a very deep CNN architecture. An et al. synthesized face images with various poses and expressions using the 3DMM method, then reduced the gap between synthesized data and real data with the help of MMD. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_43",
"text": " Autoencoder model. Rather than reconstructing 3D models from a 2D image and projecting it back into 2D images of different poses, autoencoder models can generate 2D target images directly. Taken a face image and a pose code encoding a target pose as input, an encoder first learns pose-invariant face representation, and then a decoder generates a face image with the same identity viewed at the target pose by using the pose-invariant representation and the pose code. For example, given the target pose codes, multi-view perceptron (MVP) trained some deterministic hidden neurons to learn pose-invariant face representations, and simultaneously trained some random hidden neurons to capture pose features, then a decoder generated the target images by combining pose-invariant representations with pose features. As shown in Fig. 14, Yim et al. and Qian et al. introduced an auxiliary CNN to generate better images viewed at the target poses. First, an autoencoder generated the desired pose image, then the auxiliary CNN reconstructed the original input image back from the generated target image, which guarantees that the generated image is identity-preserving. In , two groups of units are embedded between encoder and decoder. The identity units remain unchanged and the rotation of images is achieved by taking actions to pose units at each time step. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_44",
"text": " GAN model. In GAN models, a generator aims to fool a discriminator through generating images that resemble the real images, while the discriminator aims to discriminate the generated samples from the real ones. By this minimax game between generator and discriminator, GAN can successfully generate photo-realistic images with different poses. After using a 3D model to generate profile face images, DA-GAN refined the images by a GAN, which combines prior knowledge of the data distribution and knowledge of faces (pose and identity perception loss). CVAE-GAN combined a variational auto-encoder with a GAN for augmenting data, and took advantages of both statistic and pairwise feature matching to make the training process converge faster and more stably. In addition to synthesizing diverse faces from noise, some papers also explore to disentangle the identity and variation, and synthesize new faces by exchanging identity and variation from different people. In CG-GAN , a generator directly resolves each representation of input image into a variation code and an identity code and regroups these codes for cross-generating, simultaneously, a discriminator ensures the reality of generated images. Bao et al. extracted identity representation of one input image and attribute representation of any other input face image, then synthesized new faces by recombining these representations. This work shows superior performance in generating realistic and identity preserving face images, even for identities outside the training dataset. Unlike previous methods that treat classifier as a spectator, FaceID-GAN proposed a three-player GAN where the classifier cooperates together with the discriminator to compete with the generator from two different aspects, i.e. facial identity and image quality respectively. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_45",
"text": " In contrast to “one-to-many augmentation”, the methods of “many-to-one normalization” produce frontal faces and reduce appearance variability of test data to make faces align and compare easily. It can be categorized as autoencoder model, CNN model and GAN model. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_46",
"text": " Autoencoder model. Autoencoder can also be applied to “many-to-one normalization”. Different from the autoencoder model in “one-to-many augmentation” which generates the desired pose images with the help of pose codes, autoencoder model here learns pose-invariant face representation by an encoder and directly normalizes faces by a decoder without pose codes. Zhu et al. (66, 67) selected canonical-view images according to the face images’ symmetry and sharpness and then adopted an autoencoder to recover the frontal view images by minimizing the reconstruction loss error. The proposed stacked progressive autoencoders (SPAE) progressively map the nonfrontal face to the frontal face through a stack of several autoencoders. Each shallow autoencoders of SPAE is designed to convert the input face images at large poses to a virtual view at a smaller pose, so the pose variations are narrowed down gradually layer by layer along the pose manifold. Zhang et al. built a sparse many-to-one encoder to enhance the discriminant of the pose free feature by using multiple random faces as the target values for multiple encoders. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_47",
"text": " CNN model. CNN models usually directly learn the 2D mappings between non-frontal face images and frontal images, and utilize these mapping to normalize images in pixel space. The pixels in normalized images are either directly the pixels or the combinations of the pixels in non-frontal images. In LDF-Net , the displacement field network learns the shifting relationship of two pixels, and the translation layer transforms the input non-frontal face image into a frontal one with this displacement field. In GridFace shown in Fig. 15, first, the rectification network normalizes the images by warping pixels from the original image to the canonical one according to the computed homography matrix, then the normalized output is regularized by an implicit canonical view face prior, finally, with the normalized faces as input, the recognition network learns discriminative face representation via metric learning. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_48",
"text": " GAN model. Huang et al. proposed a two-pathway generative adversarial network (TP-GAN) that contains four landmark-located patch networks and a global encoder-decoder network. Through combining adversarial loss, symmetry loss and identity-preserving loss, TP-GAN generates a frontal view and simultaneously preserves global structures and local details as shown in Fig. 16. In a disentangled representation learning generative adversarial network (DR-GAN) , the generator serves as a face rotator, in which an encoder produces an identity representation, and a decoder synthesizes a face at the specified pose using this representation and a pose code. And the discriminator is trained to not only distinguish real vs. synthetic images, but also predict the identity and pose of a face. Yin et al. incorporated 3DMM into the GAN structure to provide shape and appearance priors to guide the generator to frontalization. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_49",
"text": " In the past three decades, many face databases have been constructed with a clear tendency from small-scale to large-scale, from single-source to diverse-sources, and from lab-controlled to real-world unconstrained condition, as shown in Fig. 17. As the performance of some simple databases become saturated, e.g. LFW , more and more complex databases were continually developed to facilitate the FR research. It can be said without exaggeration that the development process of the face databases largely leads the direction of FR research. In this section, we review the development of major training and testing academic databases for the deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_50",
"text": " The prerequisite of effective deep FR is a sufficiently large training dataset. Zhou et al. suggested that large amounts of data with deep learning improve the performance of FR. The results of Megaface Challenge also revealed that premier deep FR methods were typically trained on data larger than 0.5M images and 20K people. The early works of deep FR were usually trained on private training datasets. Facebook’s Deepface model was trained on 4M images of 4K people; Google’s FaceNet was trained on 200M images of 3M people; DeepID serial models (34, 35, 21, 36) were trained on 0.2M images of 10K people. Although they reported ground-breaking performance at this stage, researchers cannot accurately reproduce or compare their models without public training datasets. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_51",
"text": " To address this issue, CASIA-Webface provided the first widely-used public training dataset for the deep model training purpose, which consists of 0.5M images of 10K celebrities collected from the web. Given its moderate size and easy usage, it has become a great resource for fair comparisons for academic deep models. However, its relatively small data and ID size may not be sufficient to reflect the power of many advanced deep learning methods. Currently, there have been more databases providing public available large-scale training dataset (Table VI), especially three databases with over 1M images, namely MS-Celeb-1M , VGGface2 , and Megaface (44, 164), and we summary some interesting findings about these training sets, as shown in Fig. 18. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_52",
"text": " Depth v.s. breadth. These large training sets are expanded from depth or breadth. VGGface2 provides a large-scale training dataset of depth, which have limited number of subjects but many images for each subjects. The depth of dataset enforces the trained model to address a wide range intra-class variations, such as lighting, age, and pose. In contrast, MS-Celeb-1M and Mageface (Challenge 2) offers large-scale training datasets of breadth, which contains many subject but limited images for each subjects. The breadth of dataset ensures the trained model to cover the sufficiently variable appearance of various people. Cao et al. conducted a systematic studies on model training using VGGface2 and MS-Celeb-1M, and found an optimal model by first training on MS-Celeb-1M (breadth) and then fine-tuning on VGGface2 (depth). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_53",
"text": " Long tail distribution. The utilization of long tail distribution is different among datasets. For example, in Challenge 2 of MS-Celeb-1M, the novel set specially uses the tailed data to study low-shot learning; central part of the long tail distribution is used by the Challenge 1 of MS-Celeb-1M and images’ number is approximately limited to 100 for each celebrity; VGGface and VGGface2 only use the head part to construct deep databases; Megaface utilizes the whole distribution to contain as many images as possible, the minimal number of images is 3 per person and the maximum is 2469. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_54",
"text": " Data engineering. Several popular benchmarks, such as LFW unrestricted protocol, Megaface Challenge 1, MS-Celeb-1M Challenge 1&2, explicitly encourage researchers to collect and clean a large-scale data set for enhancing the capability of deep neural network. Although data engineering is a valuable problem to computer vision researchers, this protocol is more incline to the industry participants. As evidence, the leaderboards of these experiments are mostly occupied by the companies holding invincible hardwares and data scales. This phenomenon may not be beneficial for developments of new models in academic community. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_55",
"text": " Data noise. Owing to data source and collecting strategies, existing large-scale datasets invariably contain label noises. Wang et al. profiled the noise distribution in existing datasets in Fig. 19 and showed that the noise percentage increases dramatically along the scale of data. Moreover, they found that noise is more lethal on a 10,000-class problem of FR than on a 10-class problem of object classification and that label flip noise severely deteriorates the performance of a model, especially the model using A-softmax . Therefore, building a sufficiently large and clean dataset for academic research is very meaningful. Deng et al. found there are serious label noise in MS-Celeb-1M , and they cleaned the noise of MS-Celeb-1M, and made the refined dataset public available. Microsoft and Deepglint jointly released the largest public data set with cleaned labels, which includes 4M images cleaned from MS-Celeb-1M dataset and 2.8M aligned images of 100K Asian celebrities. Moreover, Zhan et al. shifted the focus from cleaning the datasets to leveraging more unlabeled data. Through automatically assigning pseudo labels to unlabeled data with the help of relational graphs, they obtained competitive or even better results over the fully-supervised counterpart. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_56",
"text": " Data bias. Large-scale training datasets, such as CASIA-WebFace , VGGFace2 and MS-Celeb-1M , are typically constructed by scraping websites like Google Images, and consist of celebrities on formal occasions: smiling, make-up, young, and beautiful. They are largely different from databases captured in the daily life (e.g. Megaface). The biases can be attributed to many exogenous factors in data collection, such as cameras, lightings, preferences over certain types of backgrounds, or annotator tendencies. Dataset biases adversely affect cross-dataset generalization; that is, the performance of the model trained on one dataset drops significantly when applied to another one. One persuasive evidence is presented by P.J. Phillips’ study which conducted a cross benchmark assessment of VGGFace model for face recognition. The VGGFace model achieves 98.95% on LFW and 97.30% on YTF , but only obtains 26%, 52% and 85% on Ugly, Bad and Good partition of GBU database . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_57",
"text": " Demographic bias (e.g., race/ethnicity, gender, age) in datasets is a universal but urgent issue to be solved in data bias field. In existing training and testing datasets, the male, White, and middle-aged cohorts always appear more frequently, as shown in Table VII, which inevitably causes deep learning models to replicate and even amplify these biases resulting in significantly different accuracies when deep models are applied to different demographic groups. Some researches (145, 171, 172) showed that the female, Black, and younger cohorts are usually more difficult to recognize in FR systems trained with commonly-used datasets. For example, Wang et al. proposed a Racial Faces in-the-Wild (RFW) database and proved that existing commercial APIs and the SOTA algorithms indeed work unequally for different races and the maximum difference in error rate between the best and worst groups is 12%, as shown in Table VIII. Hupont et al. showed that SphereFace has a TAR of 0.87 for White males which drops to 0.28 for Asian females, at a FAR of 1e−41𝑒41e-4. Such bias can result in mistreatment of certain demographic groups, by either exposing them to a higher risk of fraud, or by making access to services more difficult. Therefore, addressing data bias and enhancing fairness of FR systems in real life are urgent and necessary tasks. Collecting balanced data to train a fair model or designing some debiasing algorithms are effective way. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_58",
"text": " In terms of training protocol, FR can be categorized into subject-dependent and subject-independent settings, as illustrated in Fig. 20. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_59",
"text": " Subject-dependent protocol. For subject-dependent protocol, all testing identities are predefined in training set, it is natural to classify testing face images to the given identities. Therefore, subject-dependent FR can be well addressed as a classification problem, where features are expected to be separable. The protocol is mostly adopted by the early-stage (before 2010) FR studies on FERET , AR , and is suitable only for some small-scale applications. The Challenge 2 of MS-Celeb-1M is the only large-scale database using subject-dependent training protocol. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_60",
"text": " Subject-independent protocol. For subject-independent protocol, the testing identities are usually disjoint from the training set, which makes FR more challenging yet close to practice. Because it is impossible to classify faces to known identities in training set, generalized representation is essential. Due to the fact that human faces exhibit similar intra-subject variations, deep models can display transcendental generalization ability when training with a sufficiently large set of generic subjects, where the key is to learn discriminative large-margin deep features. This generalization ability makes subject-independent FR possible. Almost all major face-recognition benchmarks, such as LFW , PaSC , IJB-A/B/C (41, 42, 43) and Megaface (44, 164), require the tested models to be trained under subject-independent protocol. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_61",
"text": " In order to evaluate whether our deep models can solve the different problems of FR in real life, many testing datasets are designed to evaluate the models in different tasks, i.e. face verification, close-set face identification and open-set face identification. In either task, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. Face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. When the probe appears in the gallery identities, this is referred to as closed-set identification; when the probes include those who are not in the gallery, this is open-set identification. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_62",
"text": " Face verification is relevant to access control systems, re-identification, and application independent evaluations of FR algorithms. It is classically measured using the receiver operating characteristic (ROC) and estimated mean accuracy (Acc). At a given threshold (the independent variable), ROC analysis measures the true accept rate (TAR), which is the fraction of genuine comparisons that correctly exceed the threshold, and the false accept rate (FAR), which is the fraction of impostor comparisons that incorrectly exceed the threshold. And Acc is a simplified metric introduced by LFW , which represents the percentage of correct classifications. With the development of deep FR, more accurate recognitions are required. Customers concern more about the TAR when FAR is kept in a very low rate in most security certification scenario. PaSC reports TAR at a FAR of 10−2superscript10210^{-2}; IJB-A evaluates TAR at a FAR of 10−3superscript10310^{-3}; Megaface (44, 164) focuses on TAR@10−6superscript10610^{-6}FAR; especially, in MS-celeb-1M challenge 3 , TAR@10−9superscript10910^{-9}FAR is reported. ",
"title": "Deep Face Recognition"
},
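A minimal Python sketch of the TAR/FAR computation described in the passage above; variable names such as genuine_scores and impostor_scores are illustrative assumptions, not code from the cited survey:

    import numpy as np

    def tar_far(genuine_scores, impostor_scores, threshold):
        # TAR: fraction of genuine comparisons whose similarity exceeds the threshold.
        tar = float(np.mean(np.asarray(genuine_scores) >= threshold))
        # FAR: fraction of impostor comparisons whose similarity exceeds the threshold.
        far = float(np.mean(np.asarray(impostor_scores) >= threshold))
        return tar, far

    def tar_at_far(genuine_scores, impostor_scores, target_far=1e-3):
        # Pick the threshold that yields (approximately) the target FAR on the impostor
        # scores, then report the TAR at that threshold (e.g. TAR@1e-3 FAR as in IJB-A).
        imp = np.sort(np.asarray(impostor_scores))[::-1]
        k = max(int(target_far * len(imp)) - 1, 0)
        return tar_far(genuine_scores, impostor_scores, imp[k])[0]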
{
"id": "1804.06655_all_63",
"text": " Close-set face identification is relevant to user driven searches (e.g., forensic identification), rank-N and cumulative match characteristic (CMC) is commonly used metrics in this scenario. Rank-N is based on what percentage of probe searches return the probe’s gallery mate within the top k𝑘k rank-ordered results. The CMC curve reports the percentage of probes identified within a given rank (the independent variable). IJB-A/B/C (41, 42, 43) concern on the rank-1 and rank-5 recognition rate. The MegaFace challenge (44, 164) systematically evaluates rank-1 recognition rate function of increasing number of gallery distractors (going from 10 to 1 Million), the results of the SOTA evaluated on MegaFace challenge are listed in Table IX. Rather than rank-N and CMC, MS-Celeb-1M further applies a precision-coverage curve to measure identification performance under a variable threshold t𝑡t. The probe is rejected when its confidence score is lower than t𝑡t. The algorithms are compared in term of what fraction of passed probes, i.e. coverage, with a high recognition precision, e.g. 95% or 99%, the results of the SOTA evaluated on MS-Celeb-1M challenge are listed in Table X. ",
"title": "Deep Face Recognition"
},
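A minimal Python sketch of the rank-N metric defined above, assuming a precomputed probe-to-gallery similarity matrix; the function and variable names are illustrative assumptions, not from the cited survey:

    import numpy as np

    def rank_n_accuracy(similarity, probe_ids, gallery_ids, n=1):
        # similarity: (num_probes, num_gallery) matrix of match scores.
        # A probe search counts as correct if its gallery mate appears
        # among the top-n rank-ordered gallery candidates.
        order = np.argsort(-similarity, axis=1)             # best match first
        top_n_ids = np.asarray(gallery_ids)[order[:, :n]]   # identities of the top-n candidates
        hits = (top_n_ids == np.asarray(probe_ids)[:, None]).any(axis=1)
        return float(hits.mean())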
{
"id": "1804.06655_all_64",
"text": " Open-set face identification is relevant to high throughput face search systems (e.g., de-duplication, watch list identification), where the recognition system should reject unknown/unseen subjects (probes who do not present in gallery) at test time. At present, there are very few databases covering the task of open-set FR. IJB-A/B/C (41, 42, 43) benchmarks introduce a decision error tradeoff (DET) curve to characterize the the false negative identification rate (FNIR) as function of the false positive identification rate (FPIR). FPIR measures what fraction of comparisons between probe templates and non-mate gallery templates result in a match score exceeding T𝑇T. At the same time, FNIR measures what fraction of probe searches will fail to match a mated gallery template above a score of T𝑇T. The algorithms are compared in term of the FNIR at a low FPIR, e.g. 1% or 10%, the results of the SOTA evaluated on IJB-A dataset as listed in Table XI. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_65",
"text": " Public available training databases are mostly collected from the photos of celebrities due to privacy issue, it is far from images captured in the daily life with diverse scenes. In order to study different specific scenarios, more difficult and realistic datasets are constructed accordingly, as shown in Table XII. According to their characteristics, we divide these scenes into four categories: cross-factor FR, heterogenous FR, multiple (or single) media FR and FR in industry (Fig. 21). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_66",
"text": " • Cross-factor FR. Due to the complex nonlinear facial appearance, some variations will be caused by people themselves, such as cross-pose, cross-age, make-up, and disguise. For example, CALFW , MORPH , CACD and FG-NET are commonly used datasets with different age range; CFP only focuses on frontal and profile face, CPLFW is extended from LFW and contains different poses. Disguised faces in the wild (DFW) evaluates face recognition across disguise. • Heterogenous FR. It refers to the problem of matching faces across different visual domains. The domain gap is mainly caused by sensory devices and cameras settings, e.g. visual light vs. near-infrared and photo vs. sketch. For example, CUFSF and CUFS are commonly used photo-sketch datasets and CUFSF dataset is harder due to lighting variation and shape exaggeration. • Multiple (or single) media FR. Ideally, in FR, many images of each subject are provided in training datasets and image-to-image recognitions are performed when testing. But the situation will be different in reality. Sometimes, the number of images per person in training set could be very small, such as MS-Celeb-1M challenge 2 . This challenge is often called low- shot or few-shot FR. Moreover, each subject face in test set may be enrolled with a set of images and videos and set-to-set recognition should be performed, such as IJB-A and PaSC . • FR in industry. Although deep FR has achieved beyond human performance on some standard benchmarks, but some other factors should be given more attention rather than accuracy when deep FR is adopted in industry, e.g. anti-attack (CASIA-FASD ) and 3D FR (Bosphorus , BU-3DFE and FRGCv2 ). Compared to publicly available 2D face databases, 3D scans are hard to acquire, and the number of scans and subjects in public 3D face databases is still limited, which hinders the development of 3D deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_67",
"text": " Despite the high accuracy in the LFW and Megaface (44, 164) benchmarks, the performance of FR models still hardly meets the requirements in real-world application. A conjecture in industry is made that results of generic deep models can be improved simply by collecting big datasets of the target scene. However, this holds only to a certain degree. More and more concerns on privacy may make the collection and human-annotation of face data become illegal in the future. Therefore, significant efforts have been paid to design excellent algorithms to address the specific problems with limited data in these realistic scenes. In this section, we present several special algorithms of FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_68",
"text": " As shows that many existing algorithms suffer a decrease of over 10% from frontal-frontal to frontal-profile verification, cross-pose FR is still an extremely challenging scene. In addition to the aforementioned methods, including “one-to-many augmentation”, “many-to-one normalization” and assembled networks (Section 4 and 3.2.2), there are some other algorithms designed for cross-pose FR. Considering the extra burden of above methods, Cao et al. attempted to perform frontalization in the deep feature space rather than the image space. A deep residual equivariant mapping (DREAM) block dynamically added residuals to an input representation to transform a profile face to a frontal image. Chen et al. proposed to combine feature extraction with multi-view subspace learning to simultaneously make features be more pose-robust and discriminative. Pose Invariant Model (PIM) jointly performed face frontalization and learned pose invariant representations end-to-end to allow them to mutually boost each other, and further introduced unsupervised cross-domain adversarial training and a learning to learn strategy to provide high-fidelity frontal reference face images. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_69",
"text": " Cross-age FR is extremely challenging due to the changes in facial appearance by the aging process over time. One direct approach is to synthesize the desired image with target age such that the recognition can be performed in the same age group. A generative probabilistic model was used by to model the facial aging process at each short-term stage. The identity-preserved conditional generative adversarial networks (IPCGANs) framework utilized a conditional-GAN to generate a face in which an identity-preserved module preserved the identity information and an age classifier forced the generated face with the target age. Antipov et al. proposed to age faces by GAN, but the synthetic faces cannot be directly used for face verification due to its imperfect preservation of identities. Then, they used a local manifold adaptation (LMA) approach to solve the problem of . In , high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales to generate more lifelike facial details. An alternative to address the cross-age problem is to decompose aging and identity components separately and extract age-invariant representations. Wen et al. developed a latent identity analysis (LIA) layer to separate these two components, as shown in Fig. 22. In , age-invariant features were obtained by subtracting age-specific factors from the representations with the help of the age estimation task. In , face features are decomposed in the spherical coordinate system, in which the identity-related components are represented with angular coordinates and the age-related information is encoded with radial coordinate. Additionally, there are other methods designed for cross-age FR. For example, Bianco ett al. and El et al. fine-tuned the CNN to transfer knowledge across age. Wang et al. proposed a siamese deep network to perform multi-task learning of FR and age estimation. Li et al. integrated feature extraction and metric learning via a deep CNN. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_70",
"text": " Makeup is widely used by the public today, but it also brings challenges for FR due to significant facial appearance changes. The research on matching makeup and nonmakeup face images is receiving increasing attention. Li et al. generated nonmakeup images from makeup ones by a bi-level adversarial network (BLAN) and then used the synthesized nonmakeup images for verification as shown in Fig. 23. Sun et al. pretrained a triplet network on videos and fine-tuned it on a small makeup datasets. Specially, facial disguise (214, 228, 229) is a challenging research topic in makeup face recognition. By using disguise accessories such as wigs, beard, hats, mustache, and heavy makeup, disguise introduces two variations: (i) when a person wants to obfuscate his/her own identity, and (ii) another individual impersonates someone else’s identity. Obfuscation increases intra-class variations whereas impersonation reduces the inter-class dissimilarity, thereby affecting face recognition/verification task. To address this issue, a variety of methods are proposed. Zhang et al. first trained two DCNNs for generic face recognition and then used Principal Components Analysis (PCA) to find the transformation matrix for disguised face recognition adaptation. Kohli et al. finetuned models using disguised faces. Smirnov et al. proposed a hard example mining method benefitted from class-wise (Doppelganger Mining ) and example-wise mining to learn useful deep embeddings for disguised face recognition. Suri et al. learned the representations of images in terms of colors, shapes, and textures (COST) using an unsupervised dictionary learning method, and utilized the combination of COST features and CNN features to perform recognition. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_71",
"text": " Due to the excellent performance of the near-infrared spectrum (NIS) images under low-light scenarios, NIS images are widely applied in surveillance systems. Because most enrolled databases consist of visible light (VIS) spectrum images, how to recognize a NIR face from a gallery of VIS images has been a hot topic. Saxena et al. and Liu et al. transferred the VIS deep networks to the NIR domain by fine-tuning. Lezama et al. used a VIS CNN to recognize NIR faces by transforming NIR images to VIS faces through cross-spectral hallucination and restoring a low-rank structure for features through low-rank embedding. Reale et al. trained a VISNet (for visible images) and a NIRNet (for near-infrared images), and coupled their output features by creating a siamese network. He et al. (238, 239) divided the high layer of the network into a NIR layer, a VIS layer and a NIR-VIS shared layer, then, a modality-invariant feature can be learned by the NIR-VIS shared layer. Song et al. embedded cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In , the low-rank relevance and cross-modal ranking were used to alleviate the semantic gap. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_72",
"text": " Although deep networks are robust to low resolution to a great extent, there are still a few studies focused on promoting the performance of low-resolution FR. For example, Zangeneh et al. proposed a CNN with a two-branch architecture (a super-resolution network and a feature extraction network) to map the high- and low-resolution face images into a common space where the intra-person distance is smaller than the inter-person distance. Shen et al. exploited the face semantic information and local structural constraints to better restore the shape and detail of face images. In addition, they optimized the network with perceptual and adversarial losses to produce photo-realistic results. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_73",
"text": " The photo-sketch FR may help law enforcement to quickly identify suspects. The commonly used methods can be categorized as two classes. One is to utilize transfer learning to directly match photos to sketches. Deep networks are first trained using a large face database of photos and are then fine-tuned using small sketch database (243, 244). The other is to use the image-to-image translation, where the photo can be transformed to a sketch or the sketch to a photo; then, FR can be performed in one domain. Zhang et al. developed a fully convolutional network with generative loss and a discriminative regularizer to transform photos to sketches. Zhang et al. utilized a branched fully convolutional neural network (BFCN) to generate a structure-preserved sketch and a texture-preserved sketch, and then they fused them together via a probabilistic method. Recently, GANs have achieved impressive results in image generation. Yi et al. , Kim et al. and Zhu et al. used two generators, GAsubscript𝐺𝐴G_{A} and GBsubscript𝐺𝐵G_{B}, to generate sketches from photos and photos from sketches, respectively (Fig. 24). Based on , Wang et al. proposed a multi-adversarial network to avoid artifacts by leveraging the implicit presence of feature maps of different resolutions in the generator subnetwork. Similar to photo-sketch FR, photo-caricature FR is one kind of heterogenous FR scenes which is challenging and important to understanding of face perception. Huo et al. built a large dataset of caricatures and photos, and provided several evaluation protocols and their baseline performances for comparison. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_74",
"text": " For many practical applications, such as surveillance and security, the FR system should recognize persons with a very limited number of training samples or even with only one sample. The methods of low-shot learning can be categorized as 1) synthesizing training data and 2) learning more powerful features. Hong et al. generated images in various poses using a 3D face model and adopted deep domain adaptation to handle other variations, such as blur, occlusion, and expression (Fig. 25). Choe et al. used data augmentation methods and a GAN for pose transition and attribute boosting to increase the size of the training dataset. Wu et al. proposed a framework with hybrid classifiers using a CNN and a nearest neighbor (NN) model. Guo et al. made the norms of the weight vectors of the one-shot classes and the normal classes aligned to address the data imbalance problem. Cheng et al. proposed an enforced softmax that contains optimal dropout, selective attenuation, L2 normalization and model-level optimization. Yin et al. augmented feature space of low-shot classes by transferring the principal components from regular to low-shot classes to encourage the variance of low-shot classes to mimic that of regular classes. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_75",
"text": " Different from traditional image-to-image recognition, set-to-set recognition takes a set (heterogeneous contents containing both images and videos) as the smallest unit of representation. This kind of setting does reflect the real-world biometric scenarios, thereby attracting a lot of attention. After learning face representations of media in each set, two strategies are generally adopted to perform set-to-set matching. One is to use these representations to perform pair-wise similarity comparison of two sets and aggregate the results into a single and final score by max score pooling , average score pooling and its variations (253, 254). The other strategy is feature pooling (96, 103, 81) which first aggregates face representations into a single representation for each set and then performs a comparison between two sets. In addition to the commonly used strategies, there are also some novel methods proposed for set/template-based FR. For example, Hayat et al. proposed a deep heterogeneous feature fusion network to exploit the features’ complementary information generated by different CNNs. Liu et al. introduced the actor-critic reinforcement learning for set-based FR. They casted the inner-set dependency modeling to a Markov decision process in the latent space, and trained a dependency-aware attention control agent to make attention control for each image in each step. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_76",
"text": " There are two key issues in video FR: one is to integrate the information across different frames together to build a representation of the video face, and the other is to handle video frames with severe blur, pose variations, and occlusions. For frame aggregation, Yang et al. proposed a neural aggregation network (NAN) in which the aggregation module, consisting of two attention blocks driven by a memory, produces a 128-dimensional vector representation (Fig. 26). Rao et al. aggregated raw video frames directly by combining the idea of metric learning and adversarial learning. For dealing with bad frames, Rao et al. discarded the bad frames by treating this operation as a Markov decision process and trained the attention model through a deep reinforcement learning framework. Ding et al. artificially blurred clear images for training to learn blur-robust face representations. Parchami et al. used a CNN to reconstruct a lower-quality video into a high-quality face. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_77",
"text": " 3D FR has inherent advantages over 2D methods, but 3D deep FR is not well developed due to the lack of large annotated 3D data. To enlarge 3D training datasets, most works use the methods of “one-to-many augmentation” to synthesize 3D faces. However, the effective methods for extracting deep features of 3D faces remain to be explored. Kim et al. fine-tuned a 2D CNN with a small amount of 3D scans for 3D FR. Zulqarnain et al. used a three-channel (corresponding to depth, azimuth and elevation angles of the normal vector) image as input and minimized the average prediction log-loss. Zhang et al. first selected 30 feature points from the Candide-3 face model to characterize faces, then conducted the unsupervised pretraining of face depth data, and finally performed the supervised fine-tuning. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_78",
"text": " Partial FR, in which only arbitrary-size face patches are presented, has become an emerging problem with increasing requirements of identification from CCTV cameras and embedded vision systems in mobile devices, robots and smart home facilities. He et al. divided the aligned face image into several multi-scale patches, and the dissimilarity between two partial face images is calculated as the weighted L2 distance between corresponding patches. Dynamic feature matching (DFM) utilized a sliding window of the same size as the probe feature maps to decompose the gallery feature maps into several gallery sub-feature maps, and the similarity-guided constraint imposed on sparse representation classification (SRC) provides an alignment-free matching. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_79",
"text": " With the emergence of mobile phones, tablets and augmented reality, FR has been applied in mobile devices. Due to computational limitations, the recognition tasks in these devices need to be carried out in a light but timely fashion. MobiFace required efficient memory and low cost operators by adopting fast downsampling and bottleneck residual block, and achieves 99.7% on LFW database and 91.3% on Megaface database. Tadmor et al. proposed a multibatch method that first generates signatures for a minibatch of k𝑘k face images and then constructs an unbiased estimate of the full gradient by relying on all k2−ksuperscript𝑘2𝑘k^{2}-k pairs from the minibatch. As mentioned in Section 3.2.1, light-weight deep networks (126, 127, 128, 129) perform excellently in the fundamental tasks of image classification and deserve further attention in FR tasks. Moreover, some well-known compressed networks such as Pruning (264, 265, 266), BinaryNets (267, 268, 269, 270), Mimic Networks (271, 272), also have potential to be introduced into FR. ",
"title": "Deep Face Recognition"
},
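As a small illustration of the pair count used by the multibatch method mentioned above, a minibatch of k images yields k^2 - k ordered pairs (every image paired with every other image); a hypothetical Python sketch, not taken from the cited work:

    from itertools import permutations

    def ordered_pairs(k):
        # All ordered (anchor, other) pairs in a minibatch of k images,
        # excluding an image paired with itself: k*k - k pairs in total.
        pairs = list(permutations(range(k), 2))
        assert len(pairs) == k * k - k
        return pairs

    # For example, a minibatch of k = 16 images yields 16*16 - 16 = 240 pairs.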
{
"id": "1804.06655_all_80",
"text": " With the success of FR techniques, various types of attacks, such as face spoofing and adversarial perturbations, are becoming large threats. Face spoofing involves presenting a fake face to the biometric sensor using a printed photograph, worn mask, or even an image displayed on another electronic device. In order to defense this type of attack, several methods are proposed (211, 273, 274, 275, 276, 277, 278, 279). Atoum et al. proposed a novel two-stream CNN in which the local features discriminate the spoof patches that are independent of the spatial face areas, and holistic depth maps ensure that the input live sample has a face-like depth. Yang et al. trained a CNN using both a single frame and multiple frames with five scales as input, and using the live/spoof label as the output. Taken the sequence of video frames as input, Xu et al. applied LSTM units on top of CNN to obtain end-to-end features to recognize spoofing faces which leveraged the local and dense property from convolution operation and learned the temporal structure using LSTM units. Li et al. and Patel et al. fine-tuned their networks from a pretrained model by training sets of real and fake images. Jourabloo et al. proposed to inversely decompose a spoof face into the live face and the spoof noise pattern. Adversarial perturbation is the other type of attack which can be defined as the addition of a minimal vector r𝑟r such that with addition of this vector into the input image x𝑥x, i.e. (x+r)𝑥𝑟(x+r), the deep learning models misclassifies the input while people will not. Recently, more and more work has begun to focus on solving this perturbation of FR. Goswami et al. proposed to detect adversarial samples by characterizing abnormal filter response behavior in the hidden layers and increase the network’s robustness by removing the most problematic filters. Goel et al. provided an open source implementation of adversarial detection and mitigation algorithms. Despite of progresses of anti-attack algorithms, attack methods are updated as well and remind us the need to further increase security and robustness in FR systems, for example, Mai et al. proposed a neighborly de-convolutional neural network (NbNet) to reconstruct a fake face using the stolen deep templates. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_81",
"text": " As described in Section 5.1, existing datasets are highly biased in terms of the distribution of demographic cohorts, which may dramatically impact the fairness of deep models. To address this issue, there are some works that seek to introduce fairness into face recognition and mitigate demographic bias, e,g. unbalanced-training , attribute removal (284, 285, 286) and domain adaptation (173, 287, 147). 1) Unbalanced-training methods mitigate the bias via model regularization, taking into consideration of the fairness goal in the overall model objective function. For example, RL-RBN formulated the process of finding the optimal margins for non-Caucasians as a Markov decision process and employed deep Q-learning to learn policies based on large margin loss. 2) Attribute removal methods confound or remove demographic information of faces to learn attribute-invariant representations. For example, Alvi et al. applied a confusion loss to make a classifier fail to distinguish attributes of examples so that multiple spurious variations are removed from the feature representation. SensitiveNets proposed to introduce sensitive information into triplet loss. They minimized the sensitive information, while maintaining distances between positive and negative embeddings. 3) Domain adaptation methods propose to investigate data bias problem from a domain adaptation point of view and attempt to design domain-invariant feature representations to mitigate bias across domains. IMAN simultaneously aligned global distribution to decrease race gap at domain-level, and learned the discriminative target representations at cluster level. Kan directly converted the Caucasian data to non-Caucasian domain in the image space with the help of sparse reconstruction coefficients learnt in the common subspace. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_82",
"text": " In this paper, we provide a comprehensive survey of deep FR from both data and algorithm aspects. For algorithms, mainstream and special network architectures are presented. Meanwhile, we categorize loss functions into Euclidean-distance-based loss, angular/cosine-margin-based loss and variable softmax loss. For data, we summarize some commonly used datasets. Moreover, the methods of face processing are introduced and categorized as “one-to-many augmentation” and “many-to-one normalization”. Finally, the special scenes of deep FR, including video FR, 3D FR and cross-age FR, are briefly introduced. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_83",
"text": " Taking advantage of big annotated data and revolutionary deep learning techniques, deep FR has dramatically improved the SOTA performance and fostered successful real-world applications. With the practical and commercial use of this technology, many ideal assumptions of academic research were broken, and more and more real-world issues are emerging. To the best our knowledge, major technical challenges include the following aspects. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_84",
"text": " • Security issues. Presentation attack , adversarial attack (280, 281, 290), template attack and digital manipulation attack (292, 293) are developing to threaten the security of deep face recognition systems. 1) Presentation attack with 3D silicone mask, which exhibits skin-like appearance and facial motion, challenges current anti-sproofing methods . 2) Although adversarial perturbation detection and mitigation methods are recently proposed , the root cause of adversarial vulnerability is unclear and thus new types of adversarial attacks are still upgraded continuously (295, 296). 3) The stolen deep feature template can be used to recover its facial appearance, and how to generate cancelable template without loss of accuracy is another important issue. 4) Digital manipulation attack, made feasible by GANs, can generate entirely or partially modified photorealistic faces by expression swap, identity swap, attribute manipulation and entire face synthesis, which remains a main challenge for the security of deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_85",
"text": " • Privacy-preserving face recognition. With the leakage of biological data, privacy concerns are raising nowadays. Facial images can predict not only demographic information such as gender, age, or race, but even the genetic information . Recently, the pioneer works such as Semi-Adversarial Networks (298, 299, 285) have explored to generate a recognizable biometric templates that can hidden some of the private information presented in the facial images. Further research on the principles of visual cryptography, signal mixing and image perturbation to protect users’ privacy on stored face templates are essential for addressing public concern on privacy. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_86",
"text": " • Understanding deep face recognition. Deep face recognition systems are now believed to surpass human performance in most scenarios . There are also some interesting attempts to apply deep models to assist human operators for face verification . Despite this progress, many fundamental questions are still open, such as what is the “identity capacity” of a deep representation ? Why deep neural networks, rather than humans, are easily fooled by adversarial samples? While bigger and bigger training dataset by itself cannot solve this problem, deeper understanding on these questions may help us to build robust applications in real world. Recently, a new benchmark called TALFW has been proposed to explore this issue . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_87",
"text": " • Remaining challenges defined by non-saturated benchmark datasets. Three current major datasets, namely, MegaFace (44, 164) , MS-Celeb-1M and IJB-A/B/C (41, 42, 43), are corresponding to large-scale FR with a very large number of candidates, low/one-shot FR and large pose-variance FR which will be the focus of research in the future. Although the SOTA algorithms can be over 99.9 percent accurate on LFW and Megaface (44, 164) databases, fundamental challenges such as matching faces cross ages , poses , sensors, or styles still remain. For both datasets and algorithms, it is necessary to measure and address the racial/gender/age biases of deep FR in future research. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_88",
"text": " • Ubiquitous face recognition across applications and scenes. Deep face recognition has been successfully applied on many user-cooperated applications, but the ubiquitous recognition applications in everywhere are still an ambitious goal. In practice, it is difficult to collect and label sufficient samples for innumerable scenes in real world. One promising solution is to first learn a general model and then transfer it to an application-specific scene. While deep domain adaptation has recently been applied to reduce the algorithm bias on different scenes , different races , general solution to transfer face recognition is largely open. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_89",
"text": " • Pursuit of extreme accuracy and efficiency. Many killer-applications, such as watch-list surveillance or financial identity verification, require high matching accuracy at very low alarm rate, e.g. 10−9superscript10910^{-9}. It is still a big challenge even with deep learning on massive training data. Meanwhile, deploying deep face recognition on mobile devices pursues the minimum size of feature representation and compressed deep network. It is of great significance for both industry and academic to explore this extreme face-recognition performance beyond human imagination. It is also exciting to constantly push the performance limits of the algorithm after it has already surpassed human. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_90",
"text": " • Fusion issues. Face recognition by itself is far from sufficient to solve all biometric and forensic tasks, such as distinguishing identical twins and matching faces before and after surgery . A reliable solution is to consolidate multiple sources of biometric evidence . These sources of information may correspond to different biometric traits (e.g., face + hand ), sensors (e.g., 2D + 3D face cameras), feature extraction and matching techniques, or instances (e.g., a face sequence of various poses). It is beneficial for face biometric and forensic applications to perform information fusion at the data level, feature level, score level, rank level, and decision level . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_91",
"text": " This work was partially supported by National Key R&D Program of China (2019YFB1406504) and BUPT Excellent Ph.D. Students Foundation CX2020207. ",
"title": "Deep Face Recognition"
}
] |
What are the computer-aided detection problems studied in this paper?
|
The paper studies thoraco-abdominal lymph node detection and interstitial lung disease classification [5].
|
[
5
] |
[
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annotated datasets with representative data distribution characteristics are crucial to learning more accurate or generalizable models (5, 4). Unlike previous image datasets used in computer vision, ImageNet offers a very comprehensive database of more than 1.2 million categorized natural images of 1000+ classes. The CNN models trained upon this database serve as the backbone for significantly improving many object detection and image segmentation problems using other datasets (6, 7), e.g., PASCAL and medical image categorization (9, 10, 11, 12). However, there exists no large-scale annotated medical image dataset comparable to ImageNet, as data acquisition is difficult, and quality annotation is costly. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_1",
"text": " There are currently three major techniques that successfully employ CNNs to medical image classification: 1) training the “CNN from scratch” (13, 14, 15, 16, 17); 2) using “off-the-shelf CNN” features (without retraining the CNN) as complementary information channels to existing hand-crafted image features, for Chest X-rays and CT lung nodule identification (9, 12); and 3) performing unsupervised pre-training on natural or medical images and fine-tuning on medical target images using CNN or other types of deep learning models (18, 19, 20, 21). A decompositional 2.5D view resampling and an aggregation of random view classification scores are used to eliminate the “curse-of-dimensionality” issue in , in order to acquire a sufficient number of training image samples. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_2",
"text": " Previous studies have analyzed three-dimensional patch creation for LN detection (23, 24), atlas creation from chest CT and the extraction of multi-level image features (26, 27). At present, there are several extensions or variations of the decompositional view representation introduced in (22, 28), such as: using a novel vessel-aligned multi-planar image representation for pulmonary embolism detection , fusing unregistered multiview for mammogram analysis and classifying pulmonary peri-fissural nodules via an ensemble of 2D views . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_3",
"text": " Although natural images and medical images differ significantly, conventional image descriptors developed for object recognition in natural images, such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG) , have been widely used for object detection and segmentation in medical image analysis. Recently, ImageNet pre-trained CNNs have been used for chest pathology identification and detection in X-ray and CT modalities (10, 9, 12). They have yielded the best performance results by integrating low-level image features (e.g., GIST , bag of visual words (BoVW) and bag-of-frequency ). However, the fine-tuning of an ImageNet pre-trained CNN model on medical image datasets has not yet been exploited. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_4",
"text": " In this paper, we exploit three important, but previously under-studied factors of employing deep convolutional neural networks to computer-aided detection problems. Particularly, we explore and evaluate different CNN architectures varying in width (ranging from 5 thousand to 160 million parameters) and depth (various numbers of layers), describe the effects of varying dataset scale and spatial image context on performance, and discuss when and why transfer learning from pre-trained ImageNet CNN models can be valuable. We further verify our hypothesis by inheriting and adapting rich hierarchical image features (5, 33) from the large-scale ImageNet dataset for computer aided diagnosis (CAD). We also explore CNN architectures of the most studied seven-layered “AlexNet-CNN” , a shallower “Cifar-CNN” , and a much deeper version of “GoogLeNet-CNN” (with our modifications on CNN structures). This study is partially motivated by recent studies (34, 35) in computer vision. The thorough quantitative analysis and evaluation on deep CNN or sparsity image coding methods elucidate the emerging techniques of the time and provide useful suggestions for their future stages of development, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_5",
"text": " Two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied in this work. On mediastinal LN detection, we surpass all currently reported results. We obtain 86%percent8686\\% sensitivity on 3 false positives (FP) per patient, versus the prior state-of-art sensitivities of 78%percent7878\\% (stacked shallow learning) and 70%percent7070\\% (CNN), as prior state-of-the-art. For the first time, ILD classification results under the patient-level five-fold cross-validation protocol (CV5) are investigated and reported. The ILD dataset contains 905 annotated image slices with 120 patients and 6 ILD labels. Such sparsely annotated datasets are generally difficult for CNN learning, due to the paucity of labeled instances. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_6",
"text": " Evaluation protocols and details are critical to deriving significant empirical findings . Our experimental results suggest that different CNN architectures and dataset re-sampling protocols are critical for the LN detection tasks where the amount of labeled training data is sufficient and spatial contexts are local. Since LN images are more flexible than ILD images with respect to resampling and reformatting, LN datasets may be more readily augmented by such image transformations. As a result, LN datasets contain more training and testing data instances (due to data auugmentation) than ILD datasets. They nonetheless remain less comprehensive than natural image datasets, such as ImageNet. Fine-tuning ImageNet-trained models for ILD classification is clearly advantageous and yields early promising results, when the amount of labeled training data is highly insufficient and multi-class categorization is used, as opposed to the LN dataset’s binary class categorization. Another significant finding is that CNNs trained from scratch or fine-tuned from ImageNet models consistently outperform CNNs that merely use off-the-shelf CNN features, in both the LN and ILD classification problems. We further analyze, via CNN activation visualizations, when and why transfer learning from non-medical to medical images in CADe problems can be valuable. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_7",
"text": " We employ CNNs (with the characteristics defined above) to thoraco-abdominal lymph node (LN) detection (evaluated separately on the mediastinal and abdominal regions) and interstitial lung disease (ILD) detection. For LN detection, we use randomly sampled 2.5D views in CT . We use 2D CT slices (38, 39, 40) for ILD detection. We then evaluate and compare CNN performance results. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_8",
"text": " Until the detection aggregation approach (22, 41), thoracoabdominal lymph node (LN) detection via CADe mechanisms has yielded poor performance results. In , each 3D LN candidate produces up to 100 random 2.5D orthogonally sampled images or views which are then used to train an effective CNN model. The best performance on abdominal LN detection is achieved at 83%percent8383\\% recall on 3FP per patient , using a “Cifar-10” CNN. Using the thoracoabdominal LN detection datasets , we aim to surpass this CADe performance level, by testing different CNN architectures, exploring various dataset re-sampling protocols, and applying transfer learning from ImageNet pre-trained CNN models. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_9",
"text": " Interstitial lung disease (ILD) comprises more than 150 lung diseases affecting the interstitium, which can severely impair the patient’s ability to breathe. Gao et al. investigate the ILD classification problem in two scenarios: 1) slice-level classification: assigning a holistic two-dimensional axial CT slice image with its occurring ILD disease label(s); and 2) patch-level classification: a/ sampling patches within the 2D ROIs (Regions of Interest provided by ), then b/ classifying patches into seven category labels ( six disease labels and one “healthy” label). Song et al. (38, 39) only address the second sub-task of patch-level classification under the “leave-one-patient-out” (LOO) criterion. By training on the moderate-to-small scale ILD dataset , our main objective is to exploit and benchmark CNN based ILD classification performances under the CV5 metric (which is more realistic and unbiased than LOO (38, 39) and hard-split ), with and without transfer learning. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_10",
"text": " Thoracoabdominal Lymph Node Datasets. We use the publicly available dataset from (22, 41). There are 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans. To facilitate comparison, we adopt the data preparation protocol of , where positive and negative LN candidates are sampled with the fields-of-view (FOVs) of 30mm to 45mm, surrounding the annotated and detected LN centers (obtained by a candidate generation process). More precisely, (22, 41, 36) follow a coarse-to-fine CADe scheme, partially inspired by , which operates with ∼100%similar-toabsentpercent100\\sim 100\\% detection recalls at the cost of approximately 40 false or negative LN candidates per patient scan. In this work, positive and negative LN candidate are first sampled up to 200 times with translations and rotations. Afterwards, negative LN samples are randomly re-selected at a lower rate close to the total number of positives. LN candidates are randomly extracted from fields-of-view (FOVs) spanning 35mm to 128mm in soft-tissue window (-100, 200HU). This allows us to capture multiple spatial scales of image context (43, 44)). The samples are then rescaled to a 64×64646464\\times 64 pixel resolution via B-spline interpolation. A few examples of LNs with axial, coronal, and sagittal views encoded in RGB color images are shown in Figure 1. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_11",
"text": " Unlike the heart or the liver, lymph nodes have no pre-determined anatomic orientation. Hence, the purely random image resampling (with respect to scale, displacement and orientation) and reformatting (the axial, coronal, and sagittal views are in any system randomly resampled coordinates) is a natural choice, which also happens to yield high CNN performance. Although we integrate three channels of information from three orthogonal views for LN detection, the pixel-wise spatial correlations between or among channels are not necessary. The convolutional kernels in the lower level CNN architectures can learn the optimal weights to linearly combine the observations from the axial, coronal, and sagittal channels by computing their dot-products. Transforming axial, coronal, and sagittal representations to RGB also facilitates transfer learning from CNN models trained on ImageNet. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_12",
"text": " This learning representation (i.e., “built-in CNN”) is flexible, in that it naturally combines multiple sources or channels of information. In the recent literature , even heterogeneous class-conditional probability maps can be combined with raw images to improve performance. This set-up is similar to that of other works in computer vision, such as , where heterogeneous image information channels are jointly fed into the CNN convolutional layers for high-accuracy human parsing and segmentation. Finally, if there are correlations among CNN input channels, one may observe the corresponding correlated patterns in the learned filters. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_13",
"text": " In summary, the assumption that there are or must be pixel-wise spatial correlations among input channels does not apply to the CNN model representation. For other medical imaging problems, such as pulmonary embolism detection , in which orientation can be constrained along the attached vessel axis, vessel-aligned multi-planar image representation (MPR) is more effective than randomly aligned MPR. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_14",
"text": " Interstitial Lung Disease Dataset. We utilize the publicly available dataset of . It contains 905 image slices from 120 patients, with six lung tissue types annotations containing at least one of the following: healthy (NM), emphysema (EM), ground glass (GG), fibrosis (FB), micronodules (MN) and consolidation (CD) (Figure 3). At the slice level, the objective is to classify the status of “presence/absence” of any of the six ILD classes for an input axial CT slice . Characterizing an arbitrary CT slice against any possible ILD type, without any manual ROI (in contrast to (38, 39)), can be useful for large-scale patient screening. For slice-level ILD classification, we sampled the slices 12 times with random translations and rotations. After this, we balanced the numbers of CT slice samples for the six classes by randomly sampling several instances at various rates. For patch-based classification, we sampled up to 100 patches of size 64×64646464\\times 64 from each ROI. This dataset is divided into five folds with disjoint patient subsets. The average number of CT slices (training instances) per fold is small, as shown in Table I. Slice-level ILD classification is a very challenging task where CNN models need to learn from very small numbers of training examples and predict ILD labels on unseen patients. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_15",
"text": " In the publicly available ILD dataset, very few CT slices are labeled as normal or healthy. The remaining CT slices cannot be simply classified as normal, because many ILD disease regions or slices have not yet been labeled. ILD is a partially labeled database; this is one of its main limitations. Research is being conducted to address this issue. In particular, has proposed to fully label the ILD dataset pixel-wise via proposed segmentation label propagation. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_16",
"text": " To leverage the CNN architectures designed for color images and to transfer CNN parameters pre-trained on ImageNet, we transform all gray-scale axial CT slice images via three CT window ranges: lung window range (-1400, -200HU), high-attenuation range (-160, 240HU), and low-attenuation range (-1400; -950HU). We then encode the transformed images into RGB channels (to be aligned with the input channels of CNN models (4, 33) pre-trained from natural image datasets ). The low-attenuation CT window is useful for visualizing certain texture patterns of lung diseases (especially emphysema). The usage of different CT attenuation channels improves classification results over the usage of a single CT windowing channel, as demonstrated in . More importantly, these CT windowing processes do not depend on the lung segmentation, which instead is directly defined in the CT HU space. Figure 4 shows a representative example of lung, high-attenuation, and low-attenuation CT windowing for an axis lung CT slice. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_17",
"text": " As observed in , lung segmentation is crucial to holistic slice-level ILD classification. We empirically compare performance in two scenarios with a rough lung segmentation111This can be achieved by segmenting the lung using simple label-fusion methods . In the first case, we overlay the target image slice with the average lung mask among the training folds. In the second, we perform simple morphology operations to obtain the lung boundary. In order to retain information from the inside of the lung, we apply Gaussian smoothing to the regions outside of the lung boundary. There is no significant difference between two setups. Due to the high precision of CNN based image processing, highly accurate lung segmentation is not necessary . The localization of ILD regions within the lung is simultaneously learned through selectively weighted CNN reception fields in the deepest convolutional layers during the classification based CNN training (49, 50). Some areas outside of the lung appear in both healthy or diseased images. CNN training learns to ignore them by setting very small filter weights around the corresponding regions (Figure 13). This observation is validated by . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_18",
"text": " In this study, we explore, evaluate and analyze the influence of various CNN Architectures, dataset characteristics (when we need more training data or better models for object detection ) and CNN transfer learning from non-medical to medical image domains. These three key elements of building effective deep CNN models for CADe problems are described below. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_19",
"text": " We mainly explore three convolutional neural network architectures (CifarNet (5, 22), AlexNet and GoogLeNet ) with different model training parameter values. The current deep learning models (22, 52, 53) in medical image tasks are at least 2∼5similar-to252\\sim 5 orders of magnitude smaller than even AlexNet . More complex CNN models (22, 52) have only about 150K or 15K parameters. Roth et al. adopt the CNN architecture tailored to the Cifar-10 dataset and operate on image windows of 32×32×33232332\\times 32\\times 3 pixels for lymph node detection, while the simplest CNN in has only one convolutional, pooling, and FC layer, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_20",
"text": " We use CifarNet as used in as a baseline for the LN detection. AlexNet and GoogLeNet are also modified to evaluate these state-of-the-art CNN architecture from ImageNet classification task to our CADe problems and datasets. A simplified illustration of three CNN architectures exploited is shown in Figure 5. CifarNet always takes 32×32×33232332\\times 32\\times 3 image patches as input while AlexNet and GoogLeNet are originally designed for the fixed image dimension of 256×256×32562563256\\times 256\\times 3 pixels. We also reduced the filter size, stride and pooling parameters of AlexNet and GoogLeNet to accommodate a smaller input size of 64×64×36464364\\times 64\\times 3 pixels. We do so to produce and evaluate “simplified” AlexNet and GoogLeNet versions that are better suited to the smaller scale training datasets common in CADe problems. Throughout the paper, we refer to the models as CifarNet (32x32) or CifarNet (dropping 32x32); AlexNet (256x256) or AlexNet-H (high resolution); AlexNet (64x64) or AlexNet-L (low resolution); GoogLeNet (256x256) or GoogLeNet-H and GoogLeNet (64x64) or GoogLeNet-L (dropping 3 since all image inputs are three channels). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_21",
"text": " CifarNet, introduced in , was the state-of-the-art model for object recognition on the Cifar10 dataset, which consists of 32×32323232\\times 32 images of 10 object classes. The objects are normally centered in the images. Some example images and class categories from the Cifar10 dataset are shown in Figure 7. CifarNet has three convolution layers, three pooling layers, and one fully-connected layer. This CNN architecture, also used in has about 0.15 million free parameters. We adopt it as a baseline model for the LN detection. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_22",
"text": " The AlexNet architecture was published in , achieved significantly improved performance over the other non-deep learning methods for ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012. This success has revived the interest in CNNs in computer vision. ImageNet consists of 1.2 million 256×256256256256\\times 256 images belonging to 1000 categories. At times, the objects in the image are small and obscure, and thus pose more challenges for learning a successful classification model. More details about the ImageNet dataset will be discussed in Sec. III-B. AlexNet has five convolution layers, three pooling layers, and two fully-connected layers with approximately 60 million free parameters. AlexNet is our default CNN architecture for evaluation and analysis in the remainder of the paper. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_23",
"text": " The GoogLeNet model proposed in , is significantly more complex and deep than all previous CNN architectures. More importantly, it also introduces a new module called “Inception”, which concatenates filters of different sizes and dimensions into a single new filter (refer to Figure 6). Overall, GoogLeNet has two convolution layers, two pooling layers, and nine “Inception” layers. Each “Inception” layer consists of six convolution layers and one pooling layer. An illustration of an “Inception” layer (inception3a) from GoogLeNet is shown in Figure 6. GoogLeNet is the current state-of-the-art CNN architecture for the ILSVRC challenge, where it achieved 5.5% top-5 classification error on the ImageNet challenge, compared to AlexNet’s 15.3% top-5 classification error. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_24",
"text": " ImageNet has more than 1.2 million 256×256256256256\\times 256 images categorized under 1000 object class categories. There are more than 1000 training images per class. The database is organized according to the WordNet hierarchy, which currently contains only nouns in 1000 object categories. The image-object labels are obtained largely through crowd-sourcing, e.g., Amazon Mechanical Turk, and human inspection. Some examples of object categories in ImageNet are “sea snake”, “sandwich”, “vase”, “leopard”, etc. ImageNet is currently the largest image dataset among other standard datasets for visual recognition. Indeed, the Caltech101, Caltech256 and Cifar10 dataset merely contain 60000 32×32323232\\times 32 images and 10 object classes. Furthermore, due to the large number (1000+) of object classes, the objects belonging to each ImageNet class category can be occluded, partial and small, relative to those in the previous public image datasets. This significant intra-class variation poses greater challenges to any data-driven learning system that builds a classifier to fit given data and generalize to unseen data. For comparison, some example images of Cifar10 dataset and ImageNet images in the “tennis ball” class category are shown in Figure 7. The ImageNet dataset is publicly available, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has become the standard benchmark for large-scale object recognition. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_25",
"text": " When learned from scratch, all the parameters of CNN models are initialized with random Gaussian distributions and trained for 30 epochs with the mini-batch size of 50 image instances. Training convergence can be observed within 30 epochs. The other hyperparameters are momentum: 0.9; weight decay: 0.0005; (base) learning rate: 0.01, decreased by a factor of 10 at every 10 epochs. We use the Caffe framework and NVidia K40 GPUs to train the CNNs. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_26",
"text": " AlexNet and GoogLeNet CNN models can be either learned from scratch or fine-tuned from pre-trained models. Girshick et al. find that, by applying ImageNet pre-trained ALexNet to PASCAL dataset , performances of semantic 20-class object detection and segmentation tasks significantly improve over previous methods that use no deep CNNs. AlexNet can be fine-tuned on the PASCAL dataset to surpass the performance of the ImageNet pre-trained AlexNet, although the difference is not as significant as that between the CNN and non-CNN methods. Similarly, (57, 58) also demonstrate that better performing deep models are learned via CNN transfer learning from ImageNet to other datasets of limited scales. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_27",
"text": " Our hypothesis on CNN parameter transfer learning is the following: despite the disparity between natural images and natural images, CNNs comprehensively trained on the large scale well-annotated ImageNet may still be transferred to make medical image recognition tasks more effective. Collecting and annotating large numbers of medical images still poses significant challenges. On the other hand, the mainstream deep CNN architectures (e.g., AlexNet and GoogLeNet) contain tens of millions of free parameters to train, and thus require sufficiently large numbers of labeled medical images. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_28",
"text": " For transfer learning, we follow the approach of (57, 6) where all CNN layers except the last are fine-tuned at a learning rate 10 times smaller than the default learning rate. The last fully-connected layer is random initialized and freshly trained, in order to accommodate the new object categories in our CADe applications. Its learning rate is kept at the original 0.01. We denote the models with random initialization or transfer learning as AlexNet-RI and AlexNet-TL, and GoogLeNet-RI and GoogLeNet-TL. We found that the transfer learning strategy yields the best performance results. Determining the optimal learning rate for different layers is challenging, especially for very deep networks such as GoogLeNet. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_29",
"text": " We also perform experiments using “off-the-shelf” CNN features of AlexNet pre-trained on ImageNet and training only the final classifier layer to complete the new CADe classification tasks. Parameters in the convolutional and fully connected layers are fixed and are used as deep image extractors, as in (10, 9, 12). We refer to this model as AlexNet-ImNet in the remainder of the paper. Note that (10, 9, 12) train support vector machines and random forest classifiers using ImageNet pre-trained CNN features. Our simplified implementation is intended to determine whether fine-tuning the “end-to-end” CNN network is necessary to improve performance, as opposed to merely training the final classification layer. This is a slight modification from the method described in (10, 9, 12). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_30",
"text": " Finally, transfer learning in CNN representation, as empirically verified in previous literature (59, 60, 61, 11, 62), can be effective in various cross-modality imaging settings (RGB images to depth images (59, 60), natural images to general CT and MRI images , and natural images to neuroimaging or ultrasound data). More thorough theoretical studies on cross-modality imaging statistics and transferability will be needed for future studies. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_31",
"text": " In this section, we evaluate and compare the performances of nine CNN model configurations (CifarNet, AlexNet-ImNet, AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-L, GoogLeNet-RI-H, GoogLeNet-TL-H, GoogLeNet-RI-L and combined) on two important CADe problems using publicly available datasets (22, 41, 37). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_32",
"text": " We train and evaluate CNNs using three-fold cross-validation (folds are split into disjoint sets of patients), with the different CNN architectures described above. In testing, each LN candidate has multiple random 2.5D views tested by CNN classifiers to generate LN class probability scores. We follow the random view aggregation by averaging probabilities, as in . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_33",
"text": " We first sample the LN image patches at a 64×64646464\\times 64 pixel resolution. We then up-sample the 64×64646464\\times 64 pixel LN images via bi-linear interpolation to 256×256256256256\\times 256 pixels, in order to accommodate AlexNet-RI-L, AlexNet-TL-H, GoogLeNet-RI-H and GoogLeNet-TL-H. For the modified AlexNet-RI-L at (64×64646464\\times 64) pixel resolution, we reduce the number of first layer convolution filters from 96 to 64 and reduce the stride from 4 to 2. For the modified GoogLeNet-RI (64×64646464\\times 64), we decrease the number of first layer convolution filters from 64 to 32, the pad size from 3 to 2, the kernel size from 7 to 5, stride from 2 to 1 and the stride of the subsequent pooling layer from 2 to 1. We slightly reduce the number of convolutional filters in order to accommodate the smaller input image sizes of target medical image datasets (22, 37), while preventing over-fitting. This eventually improves performance on patch-based classification. CifarNet is used in to detect LN samples of 32×32×33232332\\times 32\\times 3 images. For consistency purposes, we down-sample 64×64×36464364\\times 64\\times 3 resolution LN sample images to the dimension of 32×32×33232332\\times 32\\times 3. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_34",
"text": " Results for lymph node detection in the mediastinum and abdomen are reported in Table II. FROC curves are illustrated in Figure 8. The area-under-the-FROC-curve (AUC) and true positive rate (TPR, recall or sensitivity) at three false positives per patient (TPR/3FP) are used as performance metrics. Of the nine investigated CNN models, CifarNet, AlexNet-ImNet and GoogLeNet-RI-H generally yielded the least competitive detection accuracy results. Our LN datasets are significantly more complex (i.e., display much larger within-class appearance variations), especially due to the extracted fields-of-view (FOVs) of (35mm-128mm) compared to (30mm-45mm) in , where CifarNet is also employed. In this experiment, CifarNet is under-trained with respect to our enhanced LN datasets, due to its limited input resolution and parameter complexity. The inferior performance of AlexNet-ImNet implies that using the pre-trained ImageNet CNNs alone as “off-the-shelf” deep image feature extractors may not be optimal or adequate for mediastinal and abdominal LN detection tasks. To complement “off-the-shelf” CNN features, (10, 9, 12) all add and integrate various other hand-crafted image features as hybrid inputs for the final CADe classification. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_35",
"text": " GoogLeNet-RI-H performs poorly, as it is susceptible to over-fitting. No sufficient data samples are available to train GoogLeNet-RI-H with random initialization. Indeed, due to GoogLeNet-RI-H’s complexity and 22-layer depth, million-image datasets may be required to properly train this model. However, GoogLeNet-TL-H significantly improves upon GoogLeNet-RI-H (0.81 versus 0.61 TPR/3FP in mediastinum; 0.70 versus 0.48 TPR/3FP in abdomen). This indicates that transfer learning offers a much better initialization of CNN parameters than random initialization. Likewise, AlexNet-TL-H consistently outperforms AlexNet-RI-H, though by smaller margins (0.81 versus 0.79 TPR/3FP in mediastinum; 0.69 versus 0.67 TPR/3FP in abdomen). This is also consistent with the findings reported for ILD detection in Table III and Figure 11. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_36",
"text": " GoogLeNet-TL-H yields results similar to AlexNet-TL-H’s for the mediastinal LN detection, and slightly outperforms Alex-Net-H for abdominal LN detection. AlexNet-RI-H exhibits less severe over-fitting than GoogLeNet-RI-H. We also evaluate a simple ensemble by averaging the probability scores from five CNNs: AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-H, GoogLeNet-TL-H and GoogLeNet-RI-L. This combined ensemble outputs the classification accuracies matching or slightly exceeding the best performing individual CNN models on the mediastinal or abdominal LN detection tasks, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_37",
"text": " Many of our CNN models achieve notably better (FROC-AUC and TPR/3FP) results than the previous state-of-the-art models for mediastinal LN detection: GoogLeNet-RI-L obtains an AUC=0.95 and 0.85 TPR/3FP, versus AUC=0.92 and 0.70 TPR/3FP and 0.78 TPR/3FP which uses stacked shallow learning. This difference lies in the fact that annotated lymph node segmentation masks are required to learn a mid-level semantic boundary detector , whereas CNN approaches only need LN locations for training . In abdominal LN detection, obtains the best trade-off between its CNN model complexity and sampled data configuration. Our best performing CNN model is GoogLeNet-TL (256x256) which obtains an AUC=0.92 and 0.70 TPR/3FP. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_38",
"text": " The main difference between our dataset preparation protocol and that from is a more aggressive extraction of random views within a much larger range of FOVs. The usage of larger FOVs to capture more image spatial context is inspired by deep zoom-out features that improve semantic segmentation. This image sampling scheme contributes to our best reported performance results in both mediastinal LN detection (in this paper) and automated pancreas segmentation . As shown in Figure 1, abdominal LNs are surrounded by many other similar looking objects. Meanwhile, mediastinal LNs are more easily distinguishable, due to the images’ larger spatial contexts. Finally, from the perspective of the data-model trade-off: “Do We Need More Training Data or Better Models?” , more abdomen CT scans from distinct patient populations need to be acquired and annotated, in order to take full advantage of deep CNN models of high capacity. Nevertheless, deeper and wider CNN models (e.g., GoogLeNet-RI-L and GoogLeNet-TL-H versus Cifar-10 ) have shown improved results in the mediastinal LN detection. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_39",
"text": " Figure 9 provides examples of misclassified lymph nodes (in axial view) (both false negatives (Left) and false positives(Right)), from the Abdomen and Mediastinum datasets. The overall reported LN detection results are clinically significant, as indicated in . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_40",
"text": " The CNN models evaluated in this experiment are 1) AlexNet-RI (training from scratch on the ILD dataset with random initialization); 2) AlexNet-TL (with transfer learning from ); 3) AlexNet-ImNet: pre-trained ImageNet-CNN model with only the last cost function layer retrained from random initialization, according to the six ILD classes (similar to but without using additional hand-crafted non-deep feature descriptors, such as GIST and BoVW); 4) GoogLeNet-RI (random initialization); 5) GoogLeNet-TL (GoogLeNet with transfer learning from ). All ILD images (patches of 64×64646464\\times 64 and CT axial slices of 512×512512512512\\times 512) are re-sampled to a fixed dimension of 256×256256256256\\times 256 pixels. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_41",
"text": " We evaluate the ILD classification task with five-fold CV on patient-level split, as it is more informative for real clinical performance than LOO. The classification accuracy rates for interstitial lung disease detection are shown in Table III. Two sub-tasks on ILD patch and slice classifications are conducted. In general, patch-level ILD classification is less challenging than slice-level classification, as far more data samples can be sampled from the manually annotated ROIs (up to 100 image patches per ROI), available from . From Table III, all five deep models evaluated obtain comparable results within the range of classification accuracy rates (0.74,0.76)0.740.76(0.74,0.76). Their averaged model achieves a slightly better accuracy of 0.79. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_42",
"text": " F1-scores (38, 39, 54) and the confusion matrix (Table V) for patch-level ILD classification using GoogLeNet-TL under five-fold cross-validation (we denote as Patch-CV5) are also computed. F1-scores are reported on patch classification only (32×32323232\\times 32 pixel patches extracted from manual ROIs) (38, 39, 54), as shown in Table IV. Both and use the evaluation protocol of “leave-one-patient-out” (LOO), which is arguably much easier and not directly comparable to 10-fold CV or our Patch-CV5. In this study, we classify six ILD classes by adding a consolidation (CD) class to five classes of healthy (normal - NM), emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) in (38, 39, 54). Patch-CV10 and Patch-CV5 report similar medium to high F-scores. This implies that the ILD dataset (although one of the mainstream public medical image datasets) may not adequately represent ILD disease CT lung imaging patterns, over a population of only 120 patients. Patch-CV5 yields higher F-scores than and classifies the extra consolidation (CD) class. At present, the most pressing task is to drastically expand the dataset or to explore across-dataset deep learning on the combined ILD and LTRC datasets . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_43",
"text": " Recently, Gao et al. have argued that a new CADe protocol on holistic classification of ILD diseases directly, using axial CT slice attenuation patterns and CNN, may be more realistic for clinical applications. We refer to this as slice-level classification, as image patch sampling from manual ROIs can be completely avoided (hence, no manual ROI inputs will be provided). The experimental results in are conducted with a patient-level hard split of 100 (training) and 20 (testing). The method’s testing F-scores (i.e., Slice-Test) are given in Table IV. Note that the F-scores in are not directly comparable to our results, due to different evaluation criteria. Only Slice-Test is evaluated and reported in , and we find that F-scores can change drastically from different rounds of the five-fold CV. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_44",
"text": " While it is a more practical CADe scheme, slice-level CNN learning is very challenging, as it is restricted to only 905 CT image slices with tagged ILD labels. We only benchmark the slice-level ILD classification results in this section. Even with the help of data augmentation (described in Sec. II), the classification accuracy of GoogLeNet-TL from Table III is only 0.57. However, transfer learning from ImageNet pre-trained model is consistently beneficial, as evidenced by AlexNet-TL (0.46) versus AlexNet-RI (0.44), and GoogLeNet-TL (0.57) versus GoogLeNet-RI (0.41). It especially prevents GoogLeNet from over-fitting on the limited CADe datasets. Finally, when the cross-validation is conducted by randomly splitting the set of all 905 CT axial slices into five folds, markedly higher F-scores are obtained (Slice-Random in Table IV). This further validates the claim that the dataset poorly generalizes ILDs for different patients. Figure 10 shows examples of misclassified ILD patches (in axial view), with their ground truth labels and inaccurately classified labels. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_45",
"text": " No existing work has reached the performance requirements for a realistic clinical setting , in which simple ROI-guided image patch extraction and classification (which requires manual ROI selection by clinicians) is implemented. The main goal of this paper is to investigate the three factors (CNN architectures, dataset characteristics and transfer learning) that affect performance on a specific medical image analysis problem and to ultimately deliver clinically relevant results. For ILD classification, the most critical performance bottlenecks are the challenge of cross-dataset learning and the limited patient population size. We attempt to overcome these obstacles by merging the ILD and LTRC datasets. Although the ILD and LTRC datasets (used in ) were generated and annotated separately, they contain many common disease labels. For instance, the ILD disease classes emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) belong to both datasets, and thus can be jointly trained/tested to form a larger and unified dataset. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_46",
"text": " Adapting fully convolutional CNN or FCNN to parse every pixel location in the ILD lung CT images or slices, or adapting other methods from CNN based semantic image segmentation using PASCAL or ImageNet, may improve accuracy and efficiency. However, current FCNN approaches (65, 66) lack adequate spatial resolution in their directly output label space. A segmentation label propagation method was recently proposed to provide full pixel-wise labeling of the ILD data images. In this work, we sample image patches from the slice using the ROIs for the ILD provided in the dataset, in order to be consistent with previous methods in patch-level (38, 39, 54) and slice-level classification . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_47",
"text": " In this work, we mainly focus on AlexNet and GoogLeNet. AlexNet is the first notably successful CNN architecture on the ImageNet challenge and has rekindled significant research interests on CNN. GoogLeNet is the state-of-the-art deep model, which has outperformed other notable models, such as AlexNet, OverFeat, and VGGNet (67, 68) in various computer vision benchmarks. Likewise, a reasonable assumption is that OverFeat and VGGNet may generate quantitative performance results ranked between AlexNet’s and GoogLeNet’s. For completeness, we include the Overfeat and VGGNet in the following evaluations, to bolster our hypothesis. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_48",
"text": " OverFeat is described in as an integrated framework for using CNN for classification, localization and detection. Its architecture is similar to that of AlexNet, but contains far more parameters (e.g., 1024 convolution filters in both “conv4” and “conv5” layers compared to 384 and 256 convolution kernels in the “conv4” and “conv5” layers of AlexNet), and operates more densely (e.g., smaller kernel size of 2 in “pool2” layer “pool5” compared to the kernel size 3 in “pool2” and “pool5” of AlexNet) on the input image. Overfeat is the winning model of the ILSVRC 2013 in detection and classification tasks. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_49",
"text": " The VGGNet architecture is introduced in , where it is designed to significantly increase the depth of the existing CNN architectures with 16 or 19 layers. Very small 3×3333\\times 3 size convolutional filters are used in all convolution layers with a convolutional stride of size 1, in order to reduce the number of parameters in deeper networks. Since VGGNet is substantially deeper than the other CNN models, VGGNet is more susceptible to the vanishing gradient problem (69, 70, 71). Hence, the network may be more difficult to train. Training the network requires far more memory and computation time than AlexNet. We use the 16 layer variant as our default VGGNet model in our study. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_50",
"text": " The classification accuracy results for ILD slice and patch level classification of five CNN architectures (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet) are shown in Table VI. Based on the analysis in Sec. IV-B, transfer learning is only used for the slice level classification task. From Table VI, quantitative classification accuracy rates increase as the CNN model becomes more complex (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet, in ascending order), for both ILD slice and patch level classification problems. The reported results validate our assumption that OverFeat’s and VGGNet’s performance levels fall between AlexNet’s and GoogLeNet‘s (this observation is consistent with the computer vision findings). CifarNet is designed for images with smaller dimensions (32×32323232\\times 32 images), and thus is not catered to classification tasks involving 256×256256256256\\times 256 images. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_51",
"text": " To investigate the performance difference between five-fold cross-validation (CV) in Sec. IV-B and leave-one-patient-out (LOO) validation, this experiment is performed under the LOO protocol. By comparing results in Table III (CV-5) to those in Table VI (LOO), one can see that LOO’s quantitative performances are remarkably better than CV-5’s. For example, in ILD slice-level classification, the accuracy level drastically increases from 0.46 to 0.867 using AlexNet-TL, and from 0.57 to 0.902 for GoogLeNet-TL. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_52",
"text": " CNN training is implemented with the Caffe deep learning framework, using a NVidia K40 GPU on Ubuntu 14.04 Linux OS. All models are trained for up to 90 epochs with early stopping criteria, where a model snapshot with low validation loss is taken for the final model. Other hyper-parameters are fixed as follows: momentum: 0.9; weight decay: 0.0005; and a step learning rate schedule with base learning rate of 0.01, decreased by a factor of 10 every 30 epochs. The image batch size is set to 128, except for GoogLeNet’s (64) and VGG-16’s (32), which are the maximum batch sizes that can fit in the NVidia K40 GPU with 12GB of memory capacity. Table VII illustrates the training time and memory requirements of the five CNN architectures on ILD patch-based classification up to 90 epochs. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_53",
"text": " Medical datasets are often “biased”, in that the number of healthy samples is much larger than the number of diseased instances, or that the numbers of images per class are uneven. In ILD dataset, the number of fibrosis samples is about 3.5 times greater than the number of emphysema samples. The number of non-LNs is 3∼4similar-to343\\sim 4 times greater than the number of LNs in lymph node detection. Different sampling or resampling rates are routinely applied to both ILD and LN detection to balance the data sample number or scale per class, as in. We refer this as “Equal Prior”. If we use the same sampling rate, that will lead to a “Biased Prior” across different classes. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_54",
"text": " Without loss of generality, after GoogLeNet is trained on the training sets under “Equal” or “Biased” priors, we compare its classification results on the balanced validation sets. Evaluating a classifier on a biased validation set will cause unfair assessment of its performance. For instance, a classifier that predicts every image patch as “non-LN” will still achieve a 70%percent7070\\% accuracy rate on a biased set with 3.53.53.5 times as many non-LN samples as LN samples. The classification accuracy results of GoogLeNet trained under two configurations are shown in Table VIII. Overall, it achieves lower accuracy results when trained with a “biased prior” in both tasks, and the accuracy difference for ILD patch-based classification is small. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_55",
"text": " In this section, we determine and analyze, via CNN visualization, the reasons for which transfer learning is beneficial to achieve better performance on CAD applications. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_56",
"text": " Thoracoabdominal LN Detection. In Figure 12, the first layer convolution filters from five different CNN architectures are visualized. We notice that without transfer learning (57, 6), somewhat blurry filters are learned (AlexNet-RI (256x256), AlexNet-RI (64x64), GoogLeNet-RI (256x256) and GoogLeNet-RI (64x64)). However, in AlexNet-TL (256x256), many higher orders of contrast- or edge-preserving patterns (that enable capturing image appearance details) are evidently learned through fine-tuning from ImageNet. With a smaller input resolution, AlexNet-RI (64x64) and GoogLeNet-RI (64x64) can learn image contrast filters to some degree; whereas, GoogLeNet-RI (256x256) and AlexNet-RI (256x256) have over-smooth low-level filters throughout. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_57",
"text": " ILD classification. We focus on analyzing visual CNN optimization traces and activations from the ILD dataset, as its slice-level setting is most similar to ImageNet’s. Indeed, both datasets use full-size images. The traces of the training loss, validation loss and validation accuracy of AlexNet-RI and AlexNet-TL, are shown in Figure 11. For AlexNet-RI in Figure 11 (a), the training loss significantly decreases as the number of training epochs increases, while the validation loss notably increases and the validation accuracy does not improve much before reaching a plateau. With transfer learning and fine-tuning, much better and consistent performances of training loss, validation loss and validation accuracy traces are obtained (see Figure 11 (b)). We begin the optimization problem – that of fine-tuning the ImageNet pre-trained CNN to classify a comprehensive set of images – by initializing the parameters close to an optimal solution. One could compare this process to making adults learn to classify ILDs, as opposed to babies. During the process, the validation loss, having remained at lower values throughout, achieves higher final accuracy levels than the validation loss on a similar problem with random initialization. Meanwhile, the training losses in both cases decrease to values near zero. This indicates that both AlexNet-RI and AlexNet-TL over-fit on the ILD dataset, due to its small instance size. The quantitative results in Table III indicate that AlexNet-TL and GoogLeNet-TL have consistently better classification accuracies than AlexNet-RI and GoogLeNet-RI, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_58",
"text": " The last pooling layer (pool-5) activation maps of the ImageNet pre-trained AlexNet (analogical to AlexNet-ImNet) and AlexNet-TL, obtained by processing two input images of Figure 2 (b,c), are shown in Figure 13 (a,b). The last pooling layer activation map summarizes the entire input image by highlighting which relative locations or neural reception fields relative to the image are activated. There are a total of 256 (6x6) reception fields in AlexNet . Pooling units where the relative image location of the disease region is present in the image are highlighted with green boxes. Next, we reconstruct the original ILD images using the process of de-convolution, back-propagating with convolution and un-pooling from the activation maps of the chosen pooling units . From the reconstructed images (Figure 13 bottom), we observe that with fine-tuning, AlexNet-TL detects and localizes objects of interest (ILD disease regions depicted in in Figure 2 (b) and (c)) better than AlexNet-ImNet. The filters shown in Figure 13 that better localize regions on the input images (Figure 2 (b) and (c)) respectively, produce relatively higher activations (in the top 5%) among all 512 reception field responses in the fine-tuned AlexNet-TL model. As observed in , the final CNN classification score can not be driven solely by a single strong activation in the receptions fields, but often by a sparse set of high activations (i.e., varying selective or sparse activations per input image). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_59",
"text": " We summarize our findings as follows. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_60",
"text": " • Deep CNN architectures with 8, even 22 layers (4, 33), can be useful even for CADe problems where the available training datasets are limited. Previously, CNN models used in medical image analysis applications have often been 2∼5similar-to252\\sim 5 orders of magnitude smaller. • The trade-off between using better learning models and using more training data should be carefully considered when searching for an optimal solution to any CADe problem (e.g., mediastinal and abdominal LN detection). • Limited datasets can be a bottleneck to further advancement of CADe. Building progressively growing (in scale), well annotated datasets is at least as crucial as developing new algorithms. This has been accomplished, for instance, in the field of computer vision. The well-known scene recognition problem has made tremendous progress, thanks to the steady and continuous development of Scene-15, MIT Indoor-67, SUN-397 and Place datasets . • Transfer learning from the large scale annotated natural image datasets (ImageNet) to CADe problems has been consistently beneficial in our experiments. This sheds some light on cross-dataset CNN learning in the medical image domain, e.g., the union of the ILD and LTRC datasets , as suggested in this paper. • Finally, applications of off-the-shelf deep CNN image features to CADe problems can be improved by either exploring the performance-complementary properties of hand-crafted features (10, 9, 12), or by training CNNs from scratch and better fine-tuning CNNs on the target medical image dataset, as evaluated in this paper. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_61",
"text": " In this paper, we exploit and extensively evaluate three important, previously under-studied factors on deep convolutional neural networks (CNN) architecture, dataset characteristics, and transfer learning. We evaluate CNN performance on two different computer-aided diagnosis applications: thoraco-abdominal lymph node detection and interstitial lung disease classification. The empirical evaluation, CNN model visualization, CNN performance analysis, and conclusive insights can be generalized to the design of high performance CAD systems for other medical imaging tasks. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
}
] |
What is instance segmentation?
|
Instance segmentation is the computer vision task of detecting all objects in an image while also precisely segmenting each individual instance; it combines elements of object detection (classifying and localizing each object with a bounding box) and semantic segmentation (classifying each pixel) [1].
|
[
1
] |
[
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation, respectively. These methods are conceptually intuitive and offer flexibility and robustness, together with fast training and inference time. Our goal in this work is to develop a comparably enabling framework for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_1",
"text": " Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance. It therefore combines elements from the classical computer vision tasks of object detection, where the goal is to classify individual objects and localize each using a bounding box, and semantic segmentation, where the goal is to classify each pixel into a fixed set of categories without differentiating object instances.111Following common terminology, we use object detection to denote detection via bounding boxes, not masks, and semantic segmentation to denote per-pixel classification without differentiating instances. Yet we note that instance segmentation is both semantic and a form of detection. Given this, one might expect a complex method is required to achieve good results. However, we show that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_2",
"text": " Our method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression (Figure 1). The mask branch is a small FCN applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to implement and train given the Faster R-CNN framework, which facilitates a wide range of flexible architecture designs. Additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_3",
"text": " In principle Mask R-CNN is an intuitive extension of Faster R-CNN, yet constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool (18, 12), the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, we propose a simple, quantization-free layer, called RoIAlign, that faithfully preserves exact spatial locations. Despite being a seemingly minor change, RoIAlign has a large impact: it improves mask accuracy by relative 10% to 50%, showing bigger gains under stricter localization metrics. Second, we found it essential to decouple mask and class prediction: we predict a binary mask for each class independently, without competition among classes, and rely on the network’s RoI classification branch to predict the category. In contrast, FCNs usually perform per-pixel multi-class categorization, which couples segmentation and classification, and based on our experiments works poorly for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_4",
"text": " Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO instance segmentation task , including the heavily-engineered entries from the 2016 competition winner. As a by-product, our method also excels on the COCO object detection task. In ablation experiments, we evaluate multiple basic instantiations, which allows us to demonstrate its robustness and analyze the effects of core factors. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_5",
"text": " Our models can run at about 200ms per frame on a GPU, and training on COCO takes one to two days on a single 8-GPU machine. We believe the fast train and test speeds, together with the framework’s flexibility and accuracy, will benefit and ease future research on instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_6",
"text": " Finally, we showcase the generality of our framework via the task of human pose estimation on the COCO keypoint dataset . By viewing each keypoint as a one-hot binary mask, with minimal modification Mask R-CNN can be applied to detect instance-specific poses. Mask R-CNN surpasses the winner of the 2016 COCO keypoint competition, and at the same time runs at 5 fps. Mask R-CNN, therefore, can be seen more broadly as a flexible framework for instance-level recognition and can be readily extended to more complex tasks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_7",
"text": " We have released code to facilitate future research. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_8",
"text": " The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions (42, 20) and evaluate convolutional networks (25, 24) independently on each RoI. R-CNN was extended (18, 12) to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements (e.g., (38, 27, 21)), and is the current leading framework in several benchmarks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_9",
"text": " Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods (13, 15, 16, 9) resorted to bottom-up segments (42, 2). DeepMask and following works (34, 8) learn to propose segment candidates, which are then classified by Fast R-CNN. In these methods, segmentation precedes recognition, which is slow and less accurate. Likewise, Dai et al. proposed a complex multiple-stage cascade that predicts segment proposals from bounding-box proposals, followed by classification. Instead, our method is based on parallel prediction of masks and class labels, which is simpler and more flexible. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_10",
"text": " Most recently, Li et al. combined the segment proposal system in and object detection system in for “fully convolutional instance segmentation” (FCIS). The common idea in (8, 11, 26) is to predict a set of position-sensitive output channels fully convolutionally. These channels simultaneously address object classes, boxes, and masks, making the system fast. But FCIS exhibits systematic errors on overlapping instances and creates spurious edges (Figure 6), showing that it is challenged by the fundamental difficulties of segmenting instances. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_11",
"text": " Another family of solutions (23, 4, 3, 29) to instance segmentation are driven by the success of semantic segmentation. Starting from per-pixel classification results (e.g., FCN outputs), these methods attempt to cut the pixels of the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. We expect a deeper incorporation of both strategies will be studied in the future. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_12",
"text": " Mask R-CNN is conceptually simple: Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset; to this we add a third branch that outputs the object mask. Mask R-CNN is thus a natural and intuitive idea. But the additional mask output is distinct from the class and box outputs, requiring extraction of much finer spatial layout of an object. Next, we introduce the key elements of Mask R-CNN, including pixel-to-pixel alignment, which is the main missing piece of Fast/Faster R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_13",
"text": " We begin by briefly reviewing the Faster R-CNN detector . Faster R-CNN consists of two stages. The first stage, called a Region Proposal Network (RPN), proposes candidate object bounding boxes. The second stage, which is in essence Fast R-CNN , extracts features using RoIPool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference. We refer readers to for latest, comprehensive comparisons between Faster R-CNN and other frameworks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_14",
"text": " Mask R-CNN adopts the same two-stage procedure, with an identical first stage (which is RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions (e.g. (33, 10, 26)). Our approach follows the spirit of Fast R-CNN that applies bounding-box classification and regression in parallel (which turned out to largely simplify the multi-stage pipeline of original R-CNN ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_15",
"text": " Formally, during training, we define a multi-task loss on each sampled RoI as L=Lcls+Lbox+Lmask𝐿subscript𝐿𝑐𝑙𝑠subscript𝐿𝑏𝑜𝑥subscript𝐿𝑚𝑎𝑠𝑘L=L_{cls}+L_{box}+L_{mask}. The classification loss Lclssubscript𝐿𝑐𝑙𝑠L_{cls} and bounding-box loss Lboxsubscript𝐿𝑏𝑜𝑥L_{box} are identical as those defined in . The mask branch has a Km2𝐾superscript𝑚2Km^{2}-dimensional output for each RoI, which encodes K𝐾K binary masks of resolution m×m𝑚𝑚m\\times m, one for each of the K𝐾K classes. To this we apply a per-pixel sigmoid, and define Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} as the average binary cross-entropy loss. For an RoI associated with ground-truth class k𝑘k, Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is only defined on the k𝑘k-th mask (other mask outputs do not contribute to the loss). ",
"title": "Mask R-CNN"
},
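The per-class sigmoid mask loss described in the two passages above is easy to sketch. The following is a minimal NumPy illustration, not the authors' implementation; the helper name mask_loss and the array shapes are assumptions. It only shows that L_mask is an average per-pixel binary cross-entropy evaluated on the mask of the ground-truth class, with the other K-1 masks ignored.

```python
import numpy as np

def mask_loss(mask_logits, gt_mask, gt_class):
    """Per-RoI mask loss in the spirit of L_mask (sketch, not the reference code).

    mask_logits: (K, m, m) raw outputs, one m x m map per class.
    gt_mask:     (m, m) binary ground-truth mask for this RoI.
    gt_class:    ground-truth class k; only the k-th map contributes to the loss.
    """
    logits = mask_logits[gt_class]            # select the k-th mask only
    probs = 1.0 / (1.0 + np.exp(-logits))     # per-pixel sigmoid
    eps = 1e-7
    bce = -(gt_mask * np.log(probs + eps)
            + (1.0 - gt_mask) * np.log(1.0 - probs + eps))
    return bce.mean()                         # average binary cross-entropy

# Example: K = 3 classes, 28x28 masks, ground-truth class 1.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 28, 28))
gt = (rng.random((28, 28)) > 0.5).astype(float)
print(mask_loss(logits, gt, gt_class=1))
```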
{
"id": "1703.06870_all_16",
"text": " Our definition of Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} allows the network to generate masks for every class without competition among classes; we rely on the dedicated classification branch to predict the class label used to select the output mask. This decouples mask and class prediction. This is different from common practice when applying FCNs to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss. In that case, masks across classes compete; in our case, with a per-pixel sigmoid and a binary loss, they do not. We show by experiments that this formulation is key for good instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_17",
"text": " A mask encodes an input object’s spatial layout. Thus, unlike class labels or box offsets that are inevitably collapsed into short output vectors by fully-connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_18",
"text": " Specifically, we predict an m×m𝑚𝑚m\\times m mask from each RoI using an FCN . This allows each layer in the mask branch to maintain the explicit m×m𝑚𝑚m\\times m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous methods that resort to fc layers for mask prediction (33, 34, 10), our fully convolutional representation requires fewer parameters, and is more accurate as demonstrated by experiments. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_19",
"text": " This pixel-to-pixel behavior requires our RoI features, which themselves are small feature maps, to be well aligned to faithfully preserve the explicit per-pixel spatial correspondence. This motivated us to develop the following RoIAlign layer that plays a key role in mask prediction. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_20",
"text": " RoIPool is a standard operation for extracting a small feature map (e.g., 7×\\times7) from each RoI. RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map, this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Quantization is performed, e.g., on a continuous coordinate x𝑥x by computing (x/16)delimited-()𝑥16(x/16), where 16 is a feature map stride and (⋅)delimited-()⋅(\\cdot) is rounding; likewise, quantization is performed when dividing into bins (e.g., 7×\\times7). These quantizations introduce misalignments between the RoI and the extracted features. While this may not impact classification, which is robust to small translations, it has a large negative effect on predicting pixel-accurate masks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_21",
"text": " To address this, we propose an RoIAlign layer that removes the harsh quantization of RoIPool, properly aligning the extracted features with the input. Our proposed change is simple: we avoid any quantization of the RoI boundaries or bins (i.e., we use x/16𝑥16x/16 instead of (x/16)delimited-()𝑥16(x/16)). We use bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and aggregate the result (using max or average), see Figure 3 for details. We note that the results are not sensitive to the exact sampling locations, or how many points are sampled, as long as no quantization is performed. ",
"title": "Mask R-CNN"
},
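The difference between RoIPool's rounded coordinates and RoIAlign's continuous sampling comes down to bilinear interpolation at un-quantized locations. Below is a minimal sketch of that interpolation step, assuming a single-channel feature map; the function name bilinear_sample and the example values are illustrative, not taken from the paper's code.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate a (H, W) feature map at a continuous (y, x).

    RoIAlign divides an RoI coordinate by the feature stride (e.g. x / 16) and
    samples here without rounding, unlike the RoIPool-style round(x / 16).
    """
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feat.shape[0] - 1), min(x0 + 1, feat.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

# Each bin of a 7x7 RoIAlign output would average a few such samples
# (e.g. 4 regularly spaced points); the paper notes the exact sampling
# scheme matters little as long as nothing is quantized.
feat = np.arange(16.0).reshape(4, 4)
print(bilinear_sample(feat, 1.3, 2.7))
```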
{
"id": "1703.06870_all_22",
"text": " RoIAlign leads to large improvements as we show in §4.2. We also compare to the RoIWarp operation proposed in . Unlike RoIAlign, RoIWarp overlooked the alignment issue and was implemented in as quantizing RoI just like RoIPool. So even though RoIWarp also adopts bilinear resampling motivated by , it performs on par with RoIPool as shown by experiments (more details in Table 2c), demonstrating the crucial role of alignment. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_23",
"text": " To demonstrate the generality of our approach, we instantiate Mask R-CNN with multiple architectures. For clarity, we differentiate between: (i) the convolutional backbone architecture used for feature extraction over an entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each RoI. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_24",
"text": " We denote the backbone architecture using the nomenclature network-depth-features. We evaluate ResNet and ResNeXt networks of depth 50 or 101 layers. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the 4-th stage, which we call C4. This backbone with ResNet-50, for example, is denoted by ResNet-50-C4. This is a common choice used in (19, 10, 21, 39). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_25",
"text": " We also explore another more effective backbone recently proposed by Lin et al. , called a Feature Pyramid Network (FPN). FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts RoI features from different levels of the feature pyramid according to their scale, but otherwise the rest of the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. For further details on FPN, we refer readers to . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_26",
"text": " For the network head we closely follow architectures presented in previous work to which we add a fully convolutional mask prediction branch. Specifically, we extend the Faster R-CNN box heads from the ResNet and FPN papers. Details are shown in Figure 4. The head on the ResNet-C4 backbone includes the 5-th stage of ResNet (namely, the 9-layer ‘res5’ ), which is compute-intensive. For FPN, the backbone already includes res5 and thus allows for a more efficient head that uses fewer filters. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_27",
"text": " We note that our mask branches have a straightforward structure. More complex designs have the potential to improve performance but are not the focus of this work. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_28",
"text": " We set hyper-parameters following existing Fast/Faster R-CNN work (12, 36, 27). Although these decisions were made for object detection in original papers (12, 36, 27), we found our instance segmentation system is robust to them. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_29",
"text": " As in Fast R-CNN, an RoI is considered positive if it has IoU with a ground-truth box of at least 0.5 and negative otherwise. The mask loss Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is defined only on positive RoIs. The mask target is the intersection between an RoI and its associated ground-truth mask. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_30",
"text": " We adopt image-centric training . Images are resized such that their scale (shorter edge) is 800 pixels . Each mini-batch has 2 images per GPU and each image has N𝑁N sampled RoIs, with a ratio of 1:3 of positive to negatives . N𝑁N is 64 for the C4 backbone (as in (12, 36)) and 512 for FPN (as in ). We train on 8 GPUs (so effective mini-batch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt , we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_31",
"text": " The RPN anchors span 5 scales and 3 aspect ratios, following . For convenient ablation, RPN is trained separately and does not share features with Mask R-CNN, unless specified. For every entry in this paper, RPN and Mask R-CNN have the same backbones and so they are shareable. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_32",
"text": " At test time, the proposal number is 300 for the C4 backbone (as in ) and 1000 for FPN (as in ). We run the box prediction branch on these proposals, followed by non-maximum suppression . The mask branch is then applied to the highest scoring 100 detection boxes. Although this differs from the parallel computation used in training, it speeds up inference and improves accuracy (due to the use of fewer, more accurate RoIs). The mask branch can predict K𝐾K masks per RoI, but we only use the k𝑘k-th mask, where k𝑘k is the predicted class by the classification branch. The m𝑚m×\\timesm𝑚m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_33",
"text": " Note that since we only compute masks on the top 100 detection boxes, Mask R-CNN adds a small overhead to its Faster R-CNN counterpart (e.g., ∼similar-to\\scriptstyle\\sim20% on typical models). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_34",
"text": " We perform a thorough comparison of Mask R-CNN to the state of the art along with comprehensive ablations on the COCO dataset . We report the standard COCO metrics including AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales). Unless noted, AP is evaluating using mask IoU. As in previous work (5, 27), we train using the union of 80k train images and a 35k subset of val images (trainval35k), and report ablations on the remaining 5k val images (minival). We also report results on test-dev . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_35",
"text": " We compare Mask R-CNN to the state-of-the-art methods in instance segmentation in Table 1. All instantiations of our model outperform baseline variants of previous state-of-the-art models. This includes MNC and FCIS , the winners of the COCO 2015 and 2016 segmentation challenges, respectively. Without bells and whistles, Mask R-CNN with ResNet-101-FPN backbone outperforms FCIS+++ , which includes multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) . While outside the scope of this work, we expect many such improvements to be applicable to ours. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_36",
"text": " Mask R-CNN outputs are visualized in Figures 2 and 5. Mask R-CNN achieves good results even under challenging conditions. In Figure 6 we compare our Mask R-CNN baseline and FCIS+++ . FCIS+++ exhibits systematic artifacts on overlapping instances, suggesting that it is challenged by the fundamental difficulty of instance segmentation. Mask R-CNN shows no such artifacts. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_37",
"text": " We run a number of ablations to analyze Mask R-CNN. Results are shown in Table 2 and discussed in detail next. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_38",
"text": " Table 2a shows Mask R-CNN with various backbones. It benefits from deeper networks (50 vs. 101) and advanced designs including FPN and ResNeXt. We note that not all frameworks automatically benefit from deeper or advanced networks (see benchmarking in ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_39",
"text": " Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN ). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points). This suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_40",
"text": " Our default instantiation predicts class-specific masks, i.e., one m𝑚m×\\timesm𝑚m mask per class. Interestingly, Mask R-CNN with class-agnostic masks (i.e., predicting a single m𝑚m×\\timesm𝑚m output regardless of class) is nearly as effective: it has 29.7 mask AP vs. 30.3 for the class-specific counterpart on ResNet-50-C4. This further highlights the division of labor in our approach which largely decouples classification and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_41",
"text": " An evaluation of our proposed RoIAlign layer is shown in Table 2c. For this experiment we use the ResNet-50-C4 backbone, which has stride 16. RoIAlign improves AP by about 3 points over RoIPool, with much of the gain coming at high IoU (AP75). RoIAlign is insensitive to max/average pool; we use average in the rest of the paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_42",
"text": " Additionally, we compare with RoIWarp proposed in MNC that also adopt bilinear sampling. As discussed in §3, RoIWarp still quantizes the RoI, losing alignment with the input. As can be seen in Table 2c, RoIWarp performs on par with RoIPool and much worse than RoIAlign. This highlights that proper alignment is key. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_43",
"text": " We also evaluate RoIAlign with a ResNet-50-C5 backbone, which has an even larger stride of 32 pixels. We use the same head as in Figure 4 (right), as the res5 head is not applicable. Table 2d shows that RoIAlign improves mask AP by a massive 7.3 points, and mask AP75 by 10.5 points (50% relative improvement). Moreover, we note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_44",
"text": " Finally, RoIAlign shows a gain of 1.5 mask AP and 0.5 box AP when used with FPN, which has finer multi-level strides. For keypoint detection that requires finer alignment, RoIAlign shows large gains even with FPN (Table 6). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_45",
"text": " Segmentation is a pixel-to-pixel task and we exploit the spatial layout of masks by using an FCN. In Table 2e, we compare multi-layer perceptrons (MLP) and FCNs, using a ResNet-50-FPN backbone. Using FCNs gives a 2.1 mask AP gain over MLPs. We note that we choose this backbone so that the conv layers of the FCN head are not pre-trained, for a fair comparison with MLP. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_46",
"text": " We compare Mask R-CNN to the state-of-the-art COCO bounding-box object detection in Table 3. For this result, even though the full Mask R-CNN model is trained, only the classification and box outputs are used at inference (the mask output is ignored). Mask R-CNN using ResNet-101-FPN outperforms the base variants of all previous state-of-the-art models, including the single-model variant of G-RMI , the winner of the COCO 2016 Detection Challenge. Using ResNeXt-101-FPN, Mask R-CNN further improves results, with a margin of 3.0 points box AP over the best previous single model entry from (which used Inception-ResNet-v2-TDM). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_47",
"text": " As a further comparison, we trained a version of Mask R-CNN but without the mask branch, denoted by “Faster R-CNN, RoIAlign” in Table 3. This model performs better than the model presented in due to RoIAlign. On the other hand, it is 0.9 points box AP lower than Mask R-CNN. This gap of Mask R-CNN on box detection is therefore due solely to the benefits of multi-task training. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_48",
"text": " Lastly, we note that Mask R-CNN attains a small gap between its mask and box AP: e.g., 2.7 points between 37.1 (mask, Table 1) and 39.8 (box, Table 3). This indicates that our approach largely closes the gap between object detection and the more challenging instance segmentation task. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_49",
"text": " We train a ResNet-101-FPN model that shares features between the RPN and Mask R-CNN stages, following the 4-step training of Faster R-CNN . This model runs at 195ms per image on an Nvidia Tesla M40 GPU (plus 15ms CPU time resizing the outputs to the original resolution), and achieves statistically the same mask AP as the unshared one. We also report that the ResNet-101-C4 variant takes ∼similar-to\\scriptstyle\\sim400ms as it has a heavier box head (Figure 4), so we do not recommend using the C4 variant in practice. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_50",
"text": " Although Mask R-CNN is fast, we note that our design is not optimized for speed, and better speed/accuracy trade-offs could be achieved , e.g., by varying image sizes and proposal numbers, which is beyond the scope of this paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_51",
"text": " Mask R-CNN is also fast to train. Training with ResNet-50-FPN on COCO trainval35k takes 32 hours in our synchronized 8-GPU implementation (0.72s per 16-image mini-batch), and 44 hours with ResNet-101-FPN. In fact, fast prototyping can be completed in less than one day when training on the train set. We hope such rapid training will remove a major hurdle in this area and encourage more people to perform research on this challenging topic. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_52",
"text": " Our framework can easily be extended to human pose estimation. We model a keypoint’s location as a one-hot mask, and adopt Mask R-CNN to predict K𝐾K masks, one for each of K𝐾K keypoint types (e.g., left shoulder, right elbow). This task helps demonstrate the flexibility of Mask R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_53",
"text": " We note that minimal domain knowledge for human pose is exploited by our system, as the experiments are mainly to demonstrate the generality of the Mask R-CNN framework. We expect that domain knowledge (e.g., modeling structures ) will be complementary to our simple approach. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_54",
"text": " We make minor modifications to the segmentation system when adapting it for keypoints. For each of the K𝐾K keypoints of an instance, the training target is a one-hot m×m𝑚𝑚m\\times m binary mask where only a single pixel is labeled as foreground. During training, for each visible ground-truth keypoint, we minimize the cross-entropy loss over an m2superscript𝑚2m^{2}-way softmax output (which encourages a single point to be detected). We note that as in instance segmentation, the K𝐾K keypoints are still treated independently. ",
"title": "Mask R-CNN"
},
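As a rough illustration of the keypoint formulation above (one-hot m×m target, cross-entropy over an m^2-way softmax), here is a small NumPy sketch; the name keypoint_loss and the 56×56 resolution used in the example are assumptions for illustration, not code from the paper.

```python
import numpy as np

def keypoint_loss(logits, gt_y, gt_x):
    """Cross-entropy over an m^2-way softmax for one visible keypoint (sketch).

    logits: (m, m) raw scores for a single keypoint type.
    (gt_y, gt_x): the one pixel labeled as foreground in the one-hot target.
    """
    flat = logits.reshape(-1)
    flat = flat - flat.max()                       # numerical stability
    log_probs = flat - np.log(np.exp(flat).sum())  # log-softmax over m*m pixels
    return -log_probs[gt_y * logits.shape[1] + gt_x]

rng = np.random.default_rng(0)
print(keypoint_loss(rng.normal(size=(56, 56)), gt_y=20, gt_x=33))
```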
{
"id": "1703.06870_all_55",
"text": " We adopt the ResNet-FPN variant, and the keypoint head architecture is similar to that in Figure 4 (right). The keypoint head consists of a stack of eight 3×\\times3 512-d conv layers, followed by a deconv layer and 2×\\times bilinear upscaling, producing an output resolution of 56×\\times56. We found that a relatively high resolution output (compared to masks) is required for keypoint-level localization accuracy. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_56",
"text": " Models are trained on all COCO trainval35k images that contain annotated keypoints. To reduce overfitting, as this training set is smaller, we train using image scales randomly sampled from (640, 800) pixels; inference is on a single scale of 800 pixels. We train for 90k iterations, starting from a learning rate of 0.02 and reducing it by 10 at 60k and 80k iterations. We use bounding-box NMS with a threshold of 0.5. Other details are identical as in §3.1. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_57",
"text": " We evaluate the person keypoint AP (APkpkp{}^{\\text{kp}}) and experiment with a ResNet-50-FPN backbone; more backbones will be studied in the appendix. Table 4 shows that our result (62.7 APkpkp{}^{\\text{kp}}) is 0.9 points higher than the COCO 2016 keypoint detection winner that uses a multi-stage processing pipeline (see caption of Table 4). Our method is considerably simpler and faster. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_58",
"text": " More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps. Adding a segment branch (for the person category) improves the APkpkp{}^{\\text{kp}} to 63.1 (Table 4) on test-dev. More ablations of multi-task learning on minival are in Table 5. Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks. However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multitask training, it does not in turn help the other tasks. Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 7). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_59",
"text": " We also investigate the effect of RoIAlign on keypoint detection (Table 6). Though this ResNet-50-FPN backbone has finer strides (e.g., 4 pixels on the finest level), RoIAlign still shows significant improvement over RoIPool and increases APkpkp{}^{\\text{kp}} by 4.4 points. This is because keypoint detections are more sensitive to localization accuracy. This again indicates that alignment is essential for pixel-level localization, including masks and keypoints. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_60",
"text": " Given the effectiveness of Mask R-CNN for extracting object bounding boxes, masks, and keypoints, we expect it be an effective framework for other instance-level tasks. ",
"title": "Mask R-CNN"
}
] |
What is distillation and why is it used?
|
Distillation is a knowledge transfer technique for deep networks, used to compress large models into smaller, compute-efficient ones [47].
|
[
47
] |
[
{
"id": "1704.04861_all_0",
"text": " Convolutional neural networks have become ubiquitous in computer vision ever since AlexNet popularized deep convolutional neural networks by winning the ImageNet Challenge: ILSVRC 2012 . The general trend has been to make deeper and more complicated networks in order to achieve higher accuracy (27, 31, 29, 8). However, these advances to improve accuracy are not necessarily making networks more efficient with respect to size and speed. In many real world applications such as robotics, self-driving car and augmented reality, the recognition tasks need to be carried out in a timely fashion on a computationally limited platform. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_1",
"text": " This paper describes an efficient network architecture and a set of two hyper-parameters in order to build very small, low latency models that can be easily matched to the design requirements for mobile and embedded vision applications. Section 2 reviews prior work in building small models. Section 3 describes the MobileNet architecture and two hyper-parameters width multiplier and resolution multiplier to define smaller and more efficient MobileNets. Section 4 describes experiments on ImageNet as well a variety of different applications and use cases. Section 5 closes with a summary and conclusion. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_2",
"text": " There has been rising interest in building small and efficient neural networks in the recent literature, e.g. (16, 34, 12, 36, 22). Many different approaches can be generally categorized into either compressing pretrained networks or training small networks directly. This paper proposes a class of network architectures that allows a model developer to specifically choose a small network that matches the resource restrictions (latency, size) for their application. MobileNets primarily focus on optimizing for latency but also yield small networks. Many papers on small networks focus only on size but do not consider speed. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_3",
"text": " MobileNets are built primarily from depthwise separable convolutions initially introduced in and subsequently used in Inception models to reduce the computation in the first few layers. Flattened networks build a network out of fully factorized convolutions and showed the potential of extremely factorized networks. Independent of this current paper, Factorized Networks introduces a similar factorized convolution as well as the use of topological connections. Subsequently, the Xception network demonstrated how to scale up depthwise separable filters to out perform Inception V3 networks. Another small network is Squeezenet which uses a bottleneck approach to design a very small network. Other reduced computation networks include structured transform networks and deep fried convnets . ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_4",
"text": " A different approach for obtaining small networks is shrinking, factorizing or compressing pretrained networks. Compression based on product quantization , hashing , and pruning, vector quantization and Huffman coding have been proposed in the literature. Additionally various factorizations have been proposed to speed up pretrained networks (14, 20). Another method for training small networks is distillation which uses a larger network to teach a smaller network. It is complementary to our approach and is covered in some of our use cases in section 4. Another emerging approach is low bit networks (4, 22, 11). ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_5",
"text": " In this section we first describe the core layers that MobileNet is built on which are depthwise separable filters. We then describe the MobileNet network structure and conclude with descriptions of the two model shrinking hyper-parameters width multiplier and resolution multiplier. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_6",
"text": " The MobileNet model is based on depthwise separable convolutions which is a form of factorized convolutions which factorize a standard convolution into a depthwise convolution and a 1×1111\\times 1 convolution called a pointwise convolution. For MobileNets the depthwise convolution applies a single filter to each input channel. The pointwise convolution then applies a 1×1111\\times 1 convolution to combine the outputs the depthwise convolution. A standard convolution both filters and combines inputs into a new set of outputs in one step. The depthwise separable convolution splits this into two layers, a separate layer for filtering and a separate layer for combining. This factorization has the effect of drastically reducing computation and model size. Figure 2 shows how a standard convolution 2(a) is factorized into a depthwise convolution 2(b) and a 1×1111\\times 1 pointwise convolution 2(c). ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_7",
"text": " A standard convolutional layer takes as input a DF×DF×Msubscript𝐷𝐹subscript𝐷𝐹𝑀D_{F}\\times D_{F}\\times M feature map 𝐅𝐅\\mathbf{F} and produces a DF×DF×Nsubscript𝐷𝐹subscript𝐷𝐹𝑁D_{F}\\times D_{F}\\times N feature map 𝐆𝐆\\mathbf{G} where DFsubscript𝐷𝐹D_{F} is the spatial width and height of a square input feature map111We assume that the output feature map has the same spatial dimensions as the input and both feature maps are square. Our model shrinking results generalize to feature maps with arbitrary sizes and aspect ratios., M𝑀M is the number of input channels (input depth), DGsubscript𝐷𝐺D_{G} is the spatial width and height of a square output feature map and N𝑁N is the number of output channel (output depth). ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_8",
"text": " The standard convolutional layer is parameterized by convolution kernel 𝐊𝐊\\mathbf{K} of size DK×DK×M×Nsubscript𝐷𝐾subscript𝐷𝐾𝑀𝑁D_{K}\\times D_{K}\\times M\\times N where DKsubscript𝐷𝐾D_{K} is the spatial dimension of the kernel assumed to be square and M𝑀M is number of input channels and N𝑁N is the number of output channels as defined previously. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_9",
"text": " The output feature map for standard convolution assuming stride one and padding is computed as: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_10",
"text": " 𝐆k,l,n=∑i,j,m𝐊i,j,m,n⋅𝐅k+i−1,l+j−1,msubscript𝐆𝑘𝑙𝑛subscript𝑖𝑗𝑚⋅subscript𝐊𝑖𝑗𝑚𝑛subscript𝐅𝑘𝑖1𝑙𝑗1𝑚\\mathbf{G}_{k,l,n}=\\sum_{i,j,m}\\mathbf{K}_{i,j,m,n}\\cdot\\mathbf{F}_{k+i-1,l+j-1,m} (1) ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
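Eq. (1) can be read directly as a loop over output positions and output channels. The deliberately naive sketch below (stride 1, no padding, assumed function name standard_conv) is only meant to make the indexing concrete; it is not an efficient or official implementation.

```python
import numpy as np

def standard_conv(F, K):
    """Naive standard convolution in the spirit of Eq. (1): stride 1, no padding.

    F: (D_F, D_F, M) input feature map.
    K: (D_K, D_K, M, N) kernel.
    Returns G of shape (D_F - D_K + 1, D_F - D_K + 1, N).
    """
    D_F, _, M = F.shape
    D_K, _, _, N = K.shape
    out = D_F - D_K + 1
    G = np.zeros((out, out, N))
    for k in range(out):
        for l in range(out):
            # G[k, l, n] = sum_{i, j, m} K[i, j, m, n] * F[k + i, l + j, m]
            patch = F[k:k + D_K, l:l + D_K, :]
            G[k, l, :] = np.tensordot(patch, K, axes=([0, 1, 2], [0, 1, 2]))
    return G

F = np.random.rand(8, 8, 3)
K = np.random.rand(3, 3, 3, 4)
print(standard_conv(F, K).shape)  # (6, 6, 4)
```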
{
"id": "1704.04861_all_11",
"text": " Standard convolutions have the computational cost of: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_12",
"text": " DK⋅DK⋅M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot N\\cdot D_{F}\\cdot D_{F} (2) where the computational cost depends multiplicatively on the number of input channels M𝑀M, the number of output channels N𝑁N the kernel size Dk×Dksubscript𝐷𝑘subscript𝐷𝑘D_{k}\\times D_{k} and the feature map size DF×DFsubscript𝐷𝐹subscript𝐷𝐹D_{F}\\times D_{F}. MobileNet models address each of these terms and their interactions. First it uses depthwise separable convolutions to break the interaction between the number of output channels and the size of the kernel. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_13",
"text": " The standard convolution operation has the effect of filtering features based on the convolutional kernels and combining features in order to produce a new representation. The filtering and combination steps can be split into two steps via the use of factorized convolutions called depthwise separable convolutions for substantial reduction in computational cost. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_14",
"text": " Depthwise separable convolution are made up of two layers: depthwise convolutions and pointwise convolutions. We use depthwise convolutions to apply a single filter per each input channel (input depth). Pointwise convolution, a simple 1×1111\\times 1 convolution, is then used to create a linear combination of the output of the depthwise layer. MobileNets use both batchnorm and ReLU nonlinearities for both layers. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_15",
"text": " Depthwise convolution with one filter per input channel (input depth) can be written as: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_16",
"text": " 𝐆^k,l,m=∑i,j𝐊^i,j,m⋅𝐅k+i−1,l+j−1,msubscript^𝐆𝑘𝑙𝑚subscript𝑖𝑗⋅subscript^𝐊𝑖𝑗𝑚subscript𝐅𝑘𝑖1𝑙𝑗1𝑚\\hat{\\mathbf{G}}_{k,l,m}=\\sum_{i,j}\\hat{\\mathbf{K}}_{i,j,m}\\cdot\\mathbf{F}_{k+i-1,l+j-1,m} (3) where 𝐊^^𝐊\\hat{\\mathbf{K}} is the depthwise convolutional kernel of size DK×DK×Msubscript𝐷𝐾subscript𝐷𝐾𝑀D_{K}\\times D_{K}\\times M where the mthsubscript𝑚𝑡ℎm_{th} filter in 𝐊^^𝐊\\hat{\\mathbf{K}} is applied to the mthsubscript𝑚𝑡ℎm_{th} channel in 𝐅𝐅\\mathbf{F} to produce the mthsubscript𝑚𝑡ℎm_{th} channel of the filtered output feature map 𝐆^^𝐆\\hat{\\mathbf{G}}. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_17",
"text": " Depthwise convolution has a computational cost of: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_18",
"text": " DK⋅DK⋅M⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F} (4) ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_19",
"text": " Depthwise convolution is extremely efficient relative to standard convolution. However it only filters input channels, it does not combine them to create new features. So an additional layer that computes a linear combination of the output of depthwise convolution via 1×1111\\times 1 convolution is needed in order to generate these new features. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_20",
"text": " The combination of depthwise convolution and 1×1111\\times 1 (pointwise) convolution is called depthwise separable convolution which was originally introduced in . ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_21",
"text": " Depthwise separable convolutions cost: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_22",
"text": " DK⋅DK⋅M⋅DF⋅DF+M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹⋅𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F}+M\\cdot N\\cdot D_{F}\\cdot D_{F} (5) which is the sum of the depthwise and 1×1111\\times 1 pointwise convolutions. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_23",
"text": " By expressing convolution as a two step process of filtering and combining we get a reduction in computation of: ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_24",
"text": " DK⋅DK⋅M⋅DF⋅DF+M⋅N⋅DF⋅DFDK⋅DK⋅M⋅N⋅DF⋅DF⋅subscript𝐷𝐾subscript𝐷𝐾𝑀subscript𝐷𝐹subscript𝐷𝐹⋅𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹⋅subscript𝐷𝐾subscript𝐷𝐾𝑀𝑁subscript𝐷𝐹subscript𝐷𝐹\\displaystyle\\frac{D_{K}\\cdot D_{K}\\cdot M\\cdot D_{F}\\cdot D_{F}+M\\cdot N\\cdot D_{F}\\cdot D_{F}}{D_{K}\\cdot D_{K}\\cdot M\\cdot N\\cdot D_{F}\\cdot D_{F}} =\\displaystyle= 1N+1DK21𝑁1superscriptsubscript𝐷𝐾2\\displaystyle\\frac{1}{N}+\\frac{1}{D_{K}^{2}} ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
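The cost comparison in Eqs. (2) and (5) and the ratio 1/N + 1/D_K^2 derived above can be checked with a few lines of arithmetic. The sketch below uses an assumed helper name conv_costs and, as an example, 3×3 kernels with 512 input and 512 output channels on a 14×14 feature map.

```python
def conv_costs(d_k, m, n, d_f):
    """Mult-adds of a standard vs. a depthwise separable convolution (Eqs. 2 and 5)."""
    standard = d_k * d_k * m * n * d_f * d_f
    separable = d_k * d_k * m * d_f * d_f + m * n * d_f * d_f
    return standard, separable

# 3x3 kernels, 512 input and 512 output channels on a 14x14 feature map.
std, sep = conv_costs(d_k=3, m=512, n=512, d_f=14)
print(sep / std)               # ~0.113: roughly 8-9x fewer mult-adds
print(1 / 512 + 1 / 3 ** 2)    # 1/N + 1/D_K^2, the ratio derived above
```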
{
"id": "1704.04861_all_25",
"text": " MobileNet uses 3×3333\\times 3 depthwise separable convolutions which uses between 8 to 9 times less computation than standard convolutions at only a small reduction in accuracy as seen in Section 4. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_26",
"text": " Additional factorization in spatial dimension such as in (16, 31) does not save much additional computation as very little computation is spent in depthwise convolutions. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_27",
"text": " The MobileNet structure is built on depthwise separable convolutions as mentioned in the previous section except for the first layer which is a full convolution. By defining the network in such simple terms we are able to easily explore network topologies to find a good network. The MobileNet architecture is defined in Table 1. All layers are followed by a batchnorm and ReLU nonlinearity with the exception of the final fully connected layer which has no nonlinearity and feeds into a softmax layer for classification. Figure 3 contrasts a layer with regular convolutions, batchnorm and ReLU nonlinearity to the factorized layer with depthwise convolution, 1×1111\\times 1 pointwise convolution as well as batchnorm and ReLU after each convolutional layer. Down sampling is handled with strided convolution in the depthwise convolutions as well as in the first layer. A final average pooling reduces the spatial resolution to 1 before the fully connected layer. Counting depthwise and pointwise convolutions as separate layers, MobileNet has 28 layers. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_28",
"text": " It is not enough to simply define networks in terms of a small number of Mult-Adds. It is also important to make sure these operations can be efficiently implementable. For instance unstructured sparse matrix operations are not typically faster than dense matrix operations until a very high level of sparsity. Our model structure puts nearly all of the computation into dense 1×1111\\times 1 convolutions. This can be implemented with highly optimized general matrix multiply (GEMM) functions. Often convolutions are implemented by a GEMM but require an initial reordering in memory called im2col in order to map it to a GEMM. For instance, this approach is used in the popular Caffe package . 1×1111\\times 1 convolutions do not require this reordering in memory and can be implemented directly with GEMM which is one of the most optimized numerical linear algebra algorithms. MobileNet spends 95%percent9595\\% of it’s computation time in 1×1111\\times 1 convolutions which also has 75%percent7575\\% of the parameters as can be seen in Table 2. Nearly all of the additional parameters are in the fully connected layer. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_29",
"text": " MobileNet models were trained in TensorFlow using RMSprop with asynchronous gradient descent similar to Inception V3 . However, contrary to training large models we use less regularization and data augmentation techniques because small models have less trouble with overfitting. When training MobileNets we do not use side heads or label smoothing and additionally reduce the amount image of distortions by limiting the size of small crops that are used in large Inception training . Additionally, we found that it was important to put very little or no weight decay (l2 regularization) on the depthwise filters since their are so few parameters in them. For the ImageNet benchmarks in the next section all models were trained with same training parameters regardless of the size of the model. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_30",
"text": " Although the base MobileNet architecture is already small and low latency, many times a specific use case or application may require the model to be smaller and faster. In order to construct these smaller and less computationally expensive models we introduce a very simple parameter α𝛼\\alpha called width multiplier. The role of the width multiplier α𝛼\\alpha is to thin a network uniformly at each layer. For a given layer and width multiplier α𝛼\\alpha, the number of input channels M𝑀M becomes αM𝛼𝑀\\alpha M and the number of output channels N𝑁N becomes αN𝛼𝑁\\alpha N. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_31",
"text": " The computational cost of a depthwise separable convolution with width multiplier α𝛼\\alpha is: DK⋅DK⋅αM⋅DF⋅DF+αM⋅αN⋅DF⋅DF⋅⋅subscript𝐷𝐾subscript𝐷𝐾𝛼𝑀subscript𝐷𝐹subscript𝐷𝐹⋅⋅𝛼𝑀𝛼𝑁subscript𝐷𝐹subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot\\alpha M\\cdot D_{F}\\cdot D_{F}+\\alpha M\\cdot\\alpha N\\cdot D_{F}\\cdot D_{F} (6) where α∈(0,1)𝛼01\\alpha\\in(0,1) with typical settings of 1, 0.75, 0.5 and 0.25. α=1𝛼1\\alpha=1 is the baseline MobileNet and α<1𝛼1\\alpha<1 are reduced MobileNets. Width multiplier has the effect of reducing computational cost and the number of parameters quadratically by roughly α2superscript𝛼2\\alpha^{2}. Width multiplier can be applied to any model structure to define a new smaller model with a reasonable accuracy, latency and size trade off. It is used to define a new reduced structure that needs to be trained from scratch. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_32",
"text": " The second hyper-parameter to reduce the computational cost of a neural network is a resolution multiplier ρ𝜌\\rho. We apply this to the input image and the internal representation of every layer is subsequently reduced by the same multiplier. In practice we implicitly set ρ𝜌\\rho by setting the input resolution. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_33",
"text": " We can now express the computational cost for the core layers of our network as depthwise separable convolutions with width multiplier α𝛼\\alpha and resolution multiplier ρ𝜌\\rho: DK⋅DK⋅αM⋅ρDF⋅ρDF+αM⋅αN⋅ρDF⋅ρDF⋅⋅⋅subscript𝐷𝐾subscript𝐷𝐾𝛼𝑀𝜌subscript𝐷𝐹𝜌subscript𝐷𝐹⋅⋅⋅𝛼𝑀𝛼𝑁𝜌subscript𝐷𝐹𝜌subscript𝐷𝐹D_{K}\\cdot D_{K}\\cdot\\alpha M\\cdot\\rho D_{F}\\cdot\\rho D_{F}+\\alpha M\\cdot\\alpha N\\cdot\\rho D_{F}\\cdot\\rho D_{F} (7) where ρ∈(0,1)𝜌01\\rho\\in(0,1) which is typically set implicitly so that the input resolution of the network is 224, 192, 160 or 128. ρ=1𝜌1\\rho=1 is the baseline MobileNet and ρ<1𝜌1\\rho<1 are reduced computation MobileNets. Resolution multiplier has the effect of reducing computational cost by ρ2superscript𝜌2\\rho^{2}. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
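Eq. (7) folds the width multiplier α and resolution multiplier ρ into the per-layer cost. The sketch below, with an assumed helper name mobilenet_layer_cost, simply evaluates that formula for the baseline (α = ρ = 1) and one reduced setting; it is illustrative arithmetic, not the authors' code.

```python
def mobilenet_layer_cost(d_k, m, n, d_f, alpha=1.0, rho=1.0):
    """Mult-adds of one depthwise separable layer under width multiplier alpha
    and resolution multiplier rho, following Eq. (7); alpha = rho = 1 is baseline."""
    m_a, n_a, d_r = alpha * m, alpha * n, rho * d_f
    depthwise = d_k * d_k * m_a * d_r * d_r
    pointwise = m_a * n_a * d_r * d_r
    return depthwise + pointwise

base = mobilenet_layer_cost(3, 512, 512, 14)
shrunk = mobilenet_layer_cost(3, 512, 512, 14, alpha=0.75, rho=160 / 224)
print(shrunk / base)  # close to alpha^2 * rho^2, since the pointwise term dominates
```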
{
"id": "1704.04861_all_34",
"text": " As an example we can look at a typical layer in MobileNet and see how depthwise separable convolutions, width multiplier and resolution multiplier reduce the cost and parameters. Table 3 shows the computation and number of parameters for a layer as architecture shrinking methods are sequentially applied to the layer. The first row shows the Mult-Adds and parameters for a full convolutional layer with an input feature map of size 14×14×512141451214\\times 14\\times 512 with a kernel K𝐾K of size 3×3×512×512335125123\\times 3\\times 512\\times 512. We will look in detail in the next section at the trade offs between resources and accuracy. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_35",
"text": " In this section we first investigate the effects of depthwise convolutions as well as the choice of shrinking by reducing the width of the network rather than the number of layers. We then show the trade offs of reducing the network based on the two hyper-parameters: width multiplier and resolution multiplier and compare results to a number of popular models. We then investigate MobileNets applied to a number of different applications. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_36",
"text": " First we show results for MobileNet with depthwise separable convolutions compared to a model built with full convolutions. In Table 4 we see that using depthwise separable convolutions compared to full convolutions only reduces accuracy by 1%percent11\\% on ImageNet was saving tremendously on mult-adds and parameters. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_37",
"text": " We next show results comparing thinner models with width multiplier to shallower models using less layers. To make MobileNet shallower, the 555 layers of separable filters with feature size 14×14×512141451214\\times 14\\times 512 in Table 1 are removed. Table 5 shows that at similar computation and number of parameters, that making MobileNets thinner is 3%percent33\\% better than making them shallower. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_38",
"text": " Table 6 shows the accuracy, computation and size trade offs of shrinking the MobileNet architecture with the width multiplier α𝛼\\alpha. Accuracy drops off smoothly until the architecture is made too small at α=0.25𝛼0.25\\alpha=0.25. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_39",
"text": " Table 7 shows the accuracy, computation and size trade offs for different resolution multipliers by training MobileNets with reduced input resolutions. Accuracy drops off smoothly across resolution. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_40",
"text": " Figure 4 shows the trade off between ImageNet Accuracy and computation for the 16 models made from the cross product of width multiplier α∈{1,0.75,0.5,0.25}𝛼10.750.50.25\\alpha\\in\\{1,0.75,0.5,0.25\\} and resolutions {224,192,160,128}224192160128\\{224,192,160,128\\}. Results are log linear with a jump when models get very small at α=0.25𝛼0.25\\alpha=0.25. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_41",
"text": " Figure 5 shows the trade off between ImageNet Accuracy and number of parameters for the 16 models made from the cross product of width multiplier α∈{1,0.75,0.5,0.25}𝛼10.750.50.25\\alpha\\in\\{1,0.75,0.5,0.25\\} and resolutions {224,192,160,128}224192160128\\{224,192,160,128\\}. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_42",
"text": " Table 8 compares full MobileNet to the original GoogleNet and VGG16 . MobileNet is nearly as accurate as VGG16 while being 32 times smaller and 27 times less compute intensive. It is more accurate than GoogleNet while being smaller and more than 2.5 times less computation. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_43",
"text": " Table 9 compares a reduced MobileNet with width multiplier α=0.5𝛼0.5\\alpha=0.5 and reduced resolution 160×160160160160\\times 160. Reduced MobileNet is 4%percent44\\% better than AlexNet while being 45×45\\times smaller and 9.4×9.4\\times less compute than AlexNet. It is also 4%percent44\\% better than Squeezenet at about the same size and 22×22\\times less computation. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_44",
"text": " We train MobileNet for fine grained recognition on the Stanford Dogs dataset . We extend the approach of and collect an even larger but noisy training set than from the web. We use the noisy web data to pretrain a fine grained dog recognition model and then fine tune the model on the Stanford Dogs training set. Results on Stanford Dogs test set are in Table 10. MobileNet can almost achieve the state of the art results from at greatly reduced computation and size. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_45",
"text": " PlaNet casts the task of determining where on earth a photo was taken as a classification problem. The approach divides the earth into a grid of geographic cells that serve as the target classes and trains a convolutional neural network on millions of geo-tagged photos. PlaNet has been shown to successfully localize a large variety of photos and to outperform Im2GPS (6, 7) that addresses the same task. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_46",
"text": " We re-train PlaNet using the MobileNet architecture on the same data. While the full PlaNet model based on the Inception V3 architecture has 52 million parameters and 5.74 billion mult-adds. The MobileNet model has only 13 million parameters with the usual 3 million for the body and 10 million for the final layer and 0.58 Million mult-adds. As shown in Tab. 11, the MobileNet version delivers only slightly decreased performance compared to PlaNet despite being much more compact. Moreover, it still outperforms Im2GPS by a large margin. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_47",
"text": " Another use-case for MobileNet is compressing large systems with unknown or esoteric training procedures. In a face attribute classification task, we demonstrate a synergistic relationship between MobileNet and distillation , a knowledge transfer technique for deep networks. We seek to reduce a large face attribute classifier with 757575 million parameters and 160016001600 million Mult-Adds. The classifier is trained on a multi-attribute dataset similar to YFCC100M . ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_48",
"text": " We distill a face attribute classifier using the MobileNet architecture. Distillation works by training the classifier to emulate the outputs of a larger model222The emulation quality is measured by averaging the per-attribute cross-entropy over all attributes. instead of the ground-truth labels, hence enabling training from large (and potentially infinite) unlabeled datasets. Marrying the scalability of distillation training and the parsimonious parameterization of MobileNet, the end system not only requires no regularization (e.g. weight-decay and early-stopping), but also demonstrates enhanced performances. It is evident from Tab. 12 that the MobileNet-based classifier is resilient to aggressive model shrinking: it achieves a similar mean average precision across attributes (mean AP) as the in-house while consuming only 1%percent11\\% the Multi-Adds. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_49",
"text": " MobileNet can also be deployed as an effective base network in modern object detection systems. We report results for MobileNet trained for object detection on COCO data based on the recent work that won the 2016 COCO challenge . In table 13, MobileNet is compared to VGG and Inception V2 under both Faster-RCNN and SSD framework. In our experiments, SSD is evaluated with 300 input resolution (SSD 300) and Faster-RCNN is compared with both 300 and 600 input resolution (Faster-RCNN 300, Faster-RCNN 600). The Faster-RCNN model evaluates 300 RPN proposal boxes per image. The models are trained on COCO train+val excluding 8k minival images and evaluated on minival. For both frameworks, MobileNet achieves comparable results to other networks with only a fraction of computational complexity and model size. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_50",
"text": " The FaceNet model is a state of the art face recognition model . It builds face embeddings based on the triplet loss. To build a mobile FaceNet model we use distillation to train by minimizing the squared differences of the output of FaceNet and MobileNet on the training data. Results for very small MobileNet models can be found in table 14. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
},
{
"id": "1704.04861_all_51",
"text": " We proposed a new model architecture called MobileNets based on depthwise separable convolutions. We investigated some of the important design decisions leading to an efficient model. We then demonstrated how to build smaller and faster MobileNets using width multiplier and resolution multiplier by trading off a reasonable amount of accuracy to reduce size and latency. We then compared different MobileNets to popular models demonstrating superior size, speed and accuracy characteristics. We concluded by demonstrating MobileNet’s effectiveness when applied to a wide variety of tasks. As a next step to help adoption and exploration of MobileNets, we plan on releasing models in Tensor Flow. ",
"title": "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications"
}
] |
Is Data Augmentation always sufficient to support performance in the segmentation task?
|
Performance on the microscopy image segmentation task can be improved by using elastic deformation based data augmentation [15].
|
[
15
] |
[
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al. was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_1",
"text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al. trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_2",
"text": " Obviously, the strategy in Ciresan et al. has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_3",
"text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_4",
"text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_5",
"text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. in the scope of unsupervised feature learning. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_6",
"text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_7",
"text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_8",
"text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
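The block structure described above (two unpadded 3×3 convolutions with ReLU, 2×2 max pooling with stride 2, a 2×2 up-convolution, and crop-and-concatenate skip connections) can be illustrated with a minimal PyTorch sketch. This is not the authors' Caffe implementation; the channel counts, input size, and function names are illustrative only.

```python
# Minimal PyTorch sketch of the repeated U-Net units described above (illustrative only).
# Unpadded 3x3 convolutions shrink the feature map, so the skip connection from the
# contracting path must be center-cropped before concatenation.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3),  # "valid" (unpadded) convolution
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3),
        nn.ReLU(inplace=True),
    )

def center_crop(skip, target):
    _, _, h, w = target.shape
    _, _, H, W = skip.shape
    top, left = (H - h) // 2, (W - w) // 2
    return skip[:, :, top:top + h, left:left + w]

pool = nn.MaxPool2d(2, stride=2)                           # contracting-path downsampling
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)  # "up-convolution"

enc = double_conv(1, 64)
bottom = double_conv(64, 128)
dec = double_conv(128, 64)                                 # 64 (skip) + 64 (upsampled) channels in

x = torch.randn(1, 1, 572, 572)
f1 = enc(x)                                                # 568x568
f2 = bottom(pool(f1))                                      # 280x280
u = up(f2)                                                 # 560x560, 64 channels
out = dec(torch.cat([center_crop(f1, u), u], dim=1))       # crop skip, concatenate, convolve
print(out.shape)
```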
{
"id": "1505.04597_all_9",
"text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_10",
"text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_11",
"text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk(𝐱)=exp(ak(𝐱))/(∑k′=1Kexp(ak′(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ(𝐱)(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw(𝐱)log(pℓ(𝐱)(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
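A NumPy sketch of Eq. (1): a pixel-wise softmax over the channel dimension followed by a weighted cross-entropy. The sign convention (minimizing the negative weighted log-likelihood) and the numerical-stability shift are implementation choices of this sketch, not details given in the excerpt.

```python
# NumPy sketch of the weighted pixel-wise soft-max / cross-entropy in Eq. (1).
import numpy as np

def weighted_cross_entropy(activations, labels, weight_map):
    # activations: (K, H, W) feature map a_k(x); labels: (H, W) int class map l(x);
    # weight_map: (H, W) per-pixel weights w(x)
    a = activations - activations.max(axis=0, keepdims=True)   # numerical stability
    p = np.exp(a) / np.exp(a).sum(axis=0, keepdims=True)       # p_k(x): softmax over channels
    H, W = labels.shape
    p_true = p[labels, np.arange(H)[:, None], np.arange(W)[None, :]]  # p_{l(x)}(x)
    return -(weight_map * np.log(p_true + 1e-12)).sum()

K, H, W = 2, 4, 4
loss = weighted_cross_entropy(np.random.randn(K, H, W),
                              np.random.randint(0, K, (H, W)),
                              np.ones((H, W)))
print(loss)
```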
{
"id": "1505.04597_all_12",
"text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_13",
"text": " The separation border is computed using morphological operations. The weight map is then computed as w(𝐱)=wc(𝐱)+w0⋅exp(−(d1(𝐱)+d2(𝐱))22σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
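A sketch of Eq. (2) using per-instance Euclidean distance transforms to obtain d1 and d2, with w0 = 10 and σ ≈ 5 as in the experiments. The instance-label format and the restriction of the border term to background pixels are assumptions of this sketch, not prescriptions from the excerpt.

```python
# Sketch of the weight map in Eq. (2), assuming an instance-labelled ground truth where
# 0 is background and 1..C are individual cells. d1/d2 are distances to the nearest and
# second-nearest cell, obtained from per-instance distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(instances, wc, w0=10.0, sigma=5.0):
    cell_ids = [i for i in np.unique(instances) if i != 0]
    if len(cell_ids) < 2:
        return wc.copy()
    # distance of every pixel to each cell (0 inside that cell)
    dists = np.stack([distance_transform_edt(instances != i) for i in cell_ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]
    border_term = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return wc + border_term * (instances == 0)   # extra weight only on background pixels

labels = np.zeros((64, 64), dtype=int)
labels[10:30, 10:30] = 1
labels[10:30, 33:55] = 2
w = unet_weight_map(labels, wc=np.ones_like(labels, dtype=float))
print(w.max())
```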
{
"id": "1505.04597_all_14",
"text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
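The described initialization is straightforward to write down. A minimal sketch, assuming convolution weights stored as (out_channels, in_channels, k_h, k_w):

```python
# Sketch of the described Gaussian initialization: std = sqrt(2/N), with N the number of
# incoming nodes of one neuron (k_h * k_w * in_channels for a convolutional layer).
import numpy as np

def he_normal(shape, rng=np.random.default_rng(0)):
    # shape = (out_channels, in_channels, k_h, k_w)
    fan_in = shape[1] * shape[2] * shape[3]
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

w = he_normal((128, 64, 3, 3))          # N = 3*3*64 = 576, as in the example above
print(w.std(), np.sqrt(2.0 / 576))
```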
{
"id": "1505.04597_all_15",
"text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
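A sketch of the described elastic deformation: displacement vectors on a coarse 3×3 grid drawn from a Gaussian with 10-pixel standard deviation, upsampled to per-pixel displacements (a cubic spline here, standing in for the bicubic interpolation mentioned above), and applied identically to the image and its label map. Nearest-neighbour sampling for the labels is an assumption of this sketch.

```python
# Sketch of elastic deformation: coarse 3x3 grid of Gaussian displacements (sigma ~ 10 px),
# upsampled to a dense per-pixel displacement field, then used to warp image and label map.
import numpy as np
from scipy.ndimage import zoom, map_coordinates

def elastic_deform(image, labels, sigma=10.0, grid=3, rng=np.random.default_rng(0)):
    h, w = image.shape
    coarse = rng.normal(0.0, sigma, size=(2, grid, grid))        # row/col displacement fields
    dense = np.stack([zoom(c, (h / grid, w / grid), order=3) for c in coarse])
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + dense[0], cols + dense[1]])
    warped_img = map_coordinates(image, coords, order=3, mode="reflect")
    warped_lbl = map_coordinates(labels, coords, order=0, mode="reflect")  # nearest for labels
    return warped_img, warped_lbl

img = np.random.rand(128, 128)
lbl = (img > 0.5).astype(np.int32)
wi, wl = elastic_deform(img, lbl)
print(wi.shape, wl.shape)
```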
{
"id": "1505.04597_all_16",
"text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_17",
"text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_18",
"text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_19",
"text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_20",
"text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_21",
"text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_22",
"text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
}
] |
What are the hyper-parameters used to design the neural architecture search network?
|
The number of cell repeats and the number of filters in the initial convolutional cell are the hyper-parameters used to design the Neural Architecture Search network [21].
|
[
21
] |
[
{
"id": "1707.07012_all_0",
"text": " Developing neural network image classification models often requires significant architecture engineering. Starting from the seminal work of on using convolutional architectures (17, 34) for ImageNet classification, successive advancements through architecture engineering have achieved impressive results (53, 59, 20, 60, 58, 68). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_1",
"text": " In this paper, we study a new paradigm of designing convolutional architectures and describe a scalable method to optimize convolutional architectures on a dataset of interest, for instance the ImageNet classification dataset. Our approach is inspired by the recently proposed Neural Architecture Search (NAS) framework , which uses a reinforcement learning search method to optimize architecture configurations. Applying NAS, or any other search methods, directly to a large dataset, such as the ImageNet dataset, is however computationally expensive. We therefore propose to search for a good architecture on a proxy dataset, for example the smaller CIFAR-10 dataset, and then transfer the learned architecture to ImageNet. We achieve this transferrability by designing a search space (which we call “the NASNet search space”) so that the complexity of the architecture is independent of the depth of the network and the size of input images. More concretely, all convolutional networks in our search space are composed of convolutional layers (or “cells”) with identical structure but different weights. Searching for the best convolutional architectures is therefore reduced to searching for the best cell structure. Searching for the best cell structure has two main benefits: it is much faster than searching for an entire network architecture and the cell itself is more likely to generalize to other problems. In our experiments, this approach significantly accelerates the search for the best architectures using CIFAR-10 by a factor of 7×7\\times and learns architectures that successfully transfer to ImageNet. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_2",
"text": " Our main result is that the best architecture found on CIFAR-10, called NASNet, achieves state-of-the-art accuracy when transferred to ImageNet classification without much modification. On ImageNet, NASNet achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5. This result amounts to a 1.2% improvement in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS. On CIFAR-10 itself, NASNet achieves 2.4% error rate, which is also state-of-the-art. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_3",
"text": " Additionally, by simply varying the number of the convolutional cells and number of filters in the convolutional cells, we can create different versions of NASNets with different computational demands. Thanks to this property of the cells, we can generate a family of models that achieve accuracies superior to all human-invented models at equivalent or smaller computational budgets (60, 29). Notably, the smallest version of NASNet achieves 74.0% top-1 accuracy on ImageNet, which is 3.1% better than previously engineered architectures targeted towards mobile and embedded vision tasks (24, 70). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_4",
"text": " Finally, we show that the image features learned by NASNets are generically useful and transfer to other computer vision problems. In our experiments, the features learned by NASNets from ImageNet classification can be combined with the Faster-RCNN framework to achieve state-of-the-art on COCO object detection task for both the largest as well as mobile-optimized models. Our largest NASNet model achieves 43.1% mAP, which is 4% better than previous state-of-the-art. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_5",
"text": " The proposed method is related to previous work in hyperparameter optimization (44, 4, 5, 54, 55, 6, 40) – especially recent approaches in designing architectures such as Neural Fabrics , DiffRNN , MetaQNN and DeepArchitect . A more flexible class of methods for designing architecture is evolutionary algorithms (65, 16, 57, 30, 46, 42, 67), yet they have not had as much success at large scale. Xie and Yuille also transferred learned architectures from CIFAR-10 to ImageNet but performance of these models (top-1 accuracy 72.1%) are notably below previous state-of-the-art (Table 2). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_6",
"text": " The concept of having one neural network interact with a second neural network to aid the learning process, or learning to learn or meta-learning (23, 49) has attracted much attention in recent years (1, 62, 14, 19, 35, 45, 15). Most of these approaches have not been scaled to large problems like ImageNet. An exception is the recent work focused on learning an optimizer for ImageNet classification that achieved notable improvements . ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_7",
"text": " The design of our search space took much inspiration from LSTMs , and Neural Architecture Search Cell . The modular structure of the convolutional cell is also related to previous methods on ImageNet such as VGG , Inception (59, 60, 58), ResNet/ResNext (20, 68), and Xception/MobileNet (9, 24). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_8",
"text": " Our work makes use of search methods to find good convolutional architectures on a dataset of interest. The main search method we use in this work is the Neural Architecture Search (NAS) framework proposed by . In NAS, a controller recurrent neural network (RNN) samples child networks with different architectures. The child networks are trained to convergence to obtain some accuracy on a held-out validation set. The resulting accuracies are used to update the controller so that the controller will generate better architectures over time. The controller weights are updated with policy gradient (see Figure 1). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_9",
"text": " The main contribution of this work is the design of a novel search space, such that the best architecture found on the CIFAR-10 dataset would scale to larger, higher-resolution image datasets across a range of computational settings. We name this search space the NASNet search space as it gives rise to NASNet, the best architecture found in our experiments. One inspiration for the NASNet search space is the realization that architecture engineering with CNNs often identifies repeated motifs consisting of combinations of convolutional filter banks, nonlinearities and a prudent selection of connections to achieve state-of-the-art results (such as the repeated modules present in the Inception and ResNet models (59, 20, 60, 58)). These observations suggest that it may be possible for the controller RNN to predict a generic convolutional cell expressed in terms of these motifs. This cell can then be stacked in series to handle inputs of arbitrary spatial dimensions and filter depth. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_10",
"text": " In our approach, the overall architectures of the convolutional nets are manually predetermined. They are composed of convolutional cells repeated many times where each convolutional cell has the same architecture, but different weights. To easily build scalable architectures for images of any size, we need two types of convolutional cells to serve two main functions when taking in a feature map as input: (1) convolutional cells that return a feature map of the same dimension, and (2) convolutional cells that return a feature map where the feature map height and width is reduced by a factor of two. We name the first type and second type of convolutional cells Normal Cell and Reduction Cell respectively. For the Reduction Cell, we make the initial operation applied to the cell’s inputs have a stride of two to reduce the height and width. All of our operations that we consider for building our convolutional cells have an option of striding. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_11",
"text": " Figure 2 shows our placement of Normal and Reduction Cells for CIFAR-10 and ImageNet. Note on ImageNet we have more Reduction Cells, since the incoming image size is 299x299 compared to 32x32 for CIFAR. The Reduction and Normal Cell could have the same architecture, but we empirically found it beneficial to learn two separate architectures. We use a common heuristic to double the number of filters in the output whenever the spatial activation size is reduced in order to maintain roughly constant hidden state dimension (32, 53). Importantly, much like Inception and ResNet models (59, 20, 60, 58), we consider the number of motif repetitions N𝑁N and the number of initial convolutional filters as free parameters that we tailor to the scale of an image classification problem. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_12",
"text": " What varies in the convolutional nets is the structures of the Normal and Reduction Cells, which are searched by the controller RNN. The structures of the cells can be searched within a search space defined as follows (see Appendix, Figure 7 for schematic). In our search space, each cell receives as input two initial hidden states hisubscriptℎ𝑖h_{i} and hi−1subscriptℎ𝑖1h_{i-1} which are the outputs of two cells in previous two lower layers or the input image. The controller RNN recursively predicts the rest of the structure of the convolutional cell, given these two initial hidden states (Figure 3). The predictions of the controller for each cell are grouped into B𝐵B blocks, where each block has 5 prediction steps made by 5 distinct softmax classifiers corresponding to discrete choices of the elements of a block: ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_13",
"text": " Step 1. Select a hidden state from hi,hi−1subscriptℎ𝑖subscriptℎ𝑖1h_{i},h_{i-1} or from the set of hidden states created in previous blocks. Step 2. Select a second hidden state from the same options as in Step 1. Step 3. Select an operation to apply to the hidden state selected in Step 1. Step 4. Select an operation to apply to the hidden state selected in Step 2. Step 5. Select a method to combine the outputs of Step 3 and 4 to create a new hidden state. The algorithm appends the newly-created hidden state to the set of existing hidden states as a potential input in subsequent blocks. The controller RNN repeats the above 5 prediction steps B𝐵B times corresponding to the B𝐵B blocks in a convolutional cell. In our experiments, selecting B=5𝐵5B=5 provides good results, although we have not exhaustively searched this space due to computational limitations. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
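The five prediction steps above can be illustrated by sampling a cell with uniform random choices (the random-search variant discussed later in the paper) instead of the controller RNN. The operation names below are illustrative placeholders, not the paper's exact operation set.

```python
# Sketch of one cell in the NASNet-style search space, sampled with uniform random choices.
import random

OPS = ["identity", "3x3 sep conv", "5x5 sep conv", "3x3 avg pool", "3x3 max pool"]
COMBINE = ["add", "concat"]

def sample_cell(B=5, seed=0):
    random.seed(seed)
    hidden = ["h_{i}", "h_{i-1}"]          # the two initial hidden states
    blocks = []
    for _ in range(B):
        in1 = random.choice(hidden)        # step 1: first input
        in2 = random.choice(hidden)        # step 2: second input
        op1 = random.choice(OPS)           # step 3: operation on first input
        op2 = random.choice(OPS)           # step 4: operation on second input
        comb = random.choice(COMBINE)      # step 5: combination method
        new_state = f"{comb}({op1}({in1}), {op2}({in2}))"
        hidden.append(new_state)           # available as an input to later blocks
        blocks.append(new_state)
    return blocks

for b in sample_cell():
    print(b)
```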
{
"id": "1707.07012_all_14",
"text": " In steps 3 and 4, the controller RNN selects an operation to apply to the hidden states. We collected the following set of operations based on their prevalence in the CNN literature: ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_15",
"text": " In step 5 the controller RNN selects a method to combine the two hidden states, either (1) element-wise addition between two hidden states or (2) concatenation between two hidden states along the filter dimension. Finally, all of the unused hidden states generated in the convolutional cell are concatenated together in depth to provide the final cell output. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_16",
"text": " To allow the controller RNN to predict both Normal Cell and Reduction Cell, we simply make the controller have 2×5B25𝐵2\\times 5B predictions in total, where the first 5B5𝐵5B predictions are for the Normal Cell and the second 5B5𝐵5B predictions are for the Reduction Cell. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_17",
"text": " Finally, our work makes use of the reinforcement learning proposal in NAS ; however, it is also possible to use random search to search for architectures in the NASNet search space. In random search, instead of sampling the decisions from the softmax classifiers in the controller RNN, we can sample the decisions from the uniform distribution. In our experiments, we find that random search is slightly worse than reinforcement learning on the CIFAR-10 dataset. Although there is value in using reinforcement learning, the gap is smaller than what is found in the original work of . This result suggests that 1) the NASNet search space is well-constructed such that random search can perform reasonably well and 2) random search is a difficult baseline to beat. We will compare reinforcement learning against random search in Section 4.4. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_18",
"text": " In this section, we describe our experiments with the method described above to learn convolutional cells. In summary, all architecture searches are performed using the CIFAR-10 classification task . The controller RNN was trained using Proximal Policy Optimization (PPO) by employing a global workqueue system for generating a pool of child networks controlled by the RNN. In our experiments, the pool of workers in the workqueue consisted of 500 GPUs. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_19",
"text": " The result of this search process over 4 days yields several candidate convolutional cells. We note that this search procedure is almost 7×7\\times faster than previous approaches that took 28 days.111In particular, we note that previous architecture search used 800 GPUs for 28 days resulting in 22,400 GPU-hours. The method in this paper uses 500 GPUs across 4 days resulting in 2,000 GPU-hours. The former effort used Nvidia K40 GPUs, whereas the current efforts used faster NVidia P100s. Discounting the fact that the we use faster hardware, we estimate that the current procedure is roughly about 7×7\\times more efficient. Additionally, we demonstrate below that the resulting architecture is superior in accuracy. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_20",
"text": " Figure 4 shows a diagram of the top performing Normal Cell and Reduction Cell. Note the prevalence of separable convolutions and the number of branches compared with competing architectures (53, 59, 20, 60, 58). Subsequent experiments focus on this convolutional cell architecture, although we examine the efficacy of other, top-ranked convolutional cells in ImageNet experiments (described in Appendix B) and report their results as well. We call the three networks constructed from the best three searches NASNet-A, NASNet-B and NASNet-C. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_21",
"text": " We demonstrate the utility of the convolutional cells by employing this learned architecture on CIFAR-10 and a family of ImageNet classification tasks. The latter family of tasks is explored across a few orders of magnitude in computational budget. After having learned the convolutional cells, several hyper-parameters may be explored to build a final network for a given task: (1) the number of cell repeats N𝑁N and (2) the number of filters in the initial convolutional cell. After selecting the number of initial filters, we use a common heuristic to double the number of filters whenever the stride is 2. Finally, we define a simple notation, e.g., 444 @ 646464, to indicate these two parameters in all networks, where 444 and 646464 indicate the number of cell repeats and the number of filters in the penultimate layer of the network, respectively. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_22",
"text": " For complete details of of the architecture learning algorithm and the controller system, please refer to Appendix A. Importantly, when training NASNets, we discovered ScheduledDropPath, a modified version of DropPath , to be an effective regularization method for NASNet. In DropPath , each path in the cell is stochastically dropped with some fixed probability during training. In our modified version, ScheduledDropPath, each path in the cell is dropped out with a probability that is linearly increased over the course of training. We find that DropPath does not work well for NASNets, while ScheduledDropPath significantly improves the final performance of NASNets in both CIFAR and ImageNet experiments. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
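A minimal sketch of ScheduledDropPath as described above: each path in a cell is dropped with a probability that grows linearly over training. The final drop probability, per-example masking, and rescaling by the keep probability are assumptions of this sketch, not details given in the excerpt.

```python
# Sketch of ScheduledDropPath: the drop probability increases linearly with training progress.
import torch

def scheduled_drop_path(x, step, total_steps, final_drop_prob=0.3, training=True):
    if not training or final_drop_prob == 0.0:
        return x
    drop_prob = final_drop_prob * min(step / total_steps, 1.0)   # linear schedule
    keep_prob = 1.0 - drop_prob
    # one Bernoulli mask per example; rescale so expected activations stay unchanged
    mask = torch.bernoulli(torch.full((x.shape[0], 1, 1, 1), keep_prob, device=x.device))
    return x / keep_prob * mask

x = torch.randn(8, 64, 32, 32)
print(scheduled_drop_path(x, step=5000, total_steps=100000).shape)
```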
{
"id": "1707.07012_all_23",
"text": " For the task of image classification with CIFAR-10, we set N=4𝑁4N=4 or 6 (Figure 2). The test accuracies of the best architectures are reported in Table 1 along with other state-of-the-art models. As can be seen from the Table, a large NASNet-A model with cutout data augmentation achieves a state-of-the-art error rate of 2.40% (averaged across 5 runs), which is slightly better than the previous best record of 2.56% by . The best single run from our model achieves 2.19% error rate. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_24",
"text": " We performed several sets of experiments on ImageNet with the best convolutional cells learned from CIFAR-10. We emphasize that we merely transfer the architectures from CIFAR-10 but train all ImageNet models weights from scratch. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_25",
"text": " Results are summarized in Table 2 and 3 and Figure 5. In the first set of experiments, we train several image classification systems operating on 299x299 or 331x331 resolution images with different experiments scaled in computational demand to create models that are roughly on par in computational cost with Inception-v2 , Inception-v3 and PolyNet . We show that this family of models achieve state-of-the-art performance with fewer floating point operations and parameters than comparable architectures. Second, we demonstrate that by adjusting the scale of the model we can achieve state-of-the-art performance at smaller computational budgets, exceeding streamlined CNNs hand-designed for this operating regime (24, 70). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_26",
"text": " Note we do not have residual connections between convolutional cells as the models learn skip connections on their own. We empirically found manually inserting residual connections between cells to not help performance. Our training setup on ImageNet is similar to , but please see Appendix A for details. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_27",
"text": " Table 2 shows that the convolutional cells discovered with CIFAR-10 generalize well to ImageNet problems. In particular, each model based on the convolutional cells exceeds the predictive performance of the corresponding hand-designed model. Importantly, the largest model achieves a new state-of-the-art performance for ImageNet (82.7%) based on single, non-ensembled predictions, surpassing previous best published result by ∼similar-to\\sim1.2% . Among the unpublished works, our model is on par with the best reported result of 82.7% , while having significantly fewer floating point operations. Figure 5 shows a complete summary of our results in comparison with other published results. Note the family of models based on convolutional cells provides an envelope over a broad class of human-invented architectures. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_28",
"text": " Finally, we test how well the best convolutional cells may perform in a resource-constrained setting, e.g., mobile devices (Table 3). In these settings, the number of floating point operations is severely constrained and predictive performance must be weighed against latency requirements on a device with limited computational resources. MobileNet and ShuffleNet provide state-of-the-art results obtaining 70.6% and 70.9%percent\\% accuracy, respectively on 224x224 images using ∼similar-to\\sim550M multliply-add operations. An architecture constructed from the best convolutional cells achieves superior predictive performance (74.0% accuracy) surpassing previous models but with comparable computational demand. In summary, we find that the learned convolutional cells are flexible across model scales achieving state-of-the-art performance across almost 2 orders of magnitude in computational budget. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_29",
"text": " Image classification networks provide generic image features that may be transferred to other computer vision problems . One of the most important problems is the spatial localization of objects within an image. To further validate the performance of the family of NASNet-A networks, we test whether object detection systems derived from NASNet-A lead to improvements in object detection . ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_30",
"text": " To address this question, we plug in the family of NASNet-A networks pretrained on ImageNet into the Faster-RCNN object detection pipeline using an open-source software platform . We retrain the resulting object detection pipeline on the combined COCO training plus validation dataset excluding 8,000 mini-validation images. We perform single model evaluation using 300-500 RPN proposals per image. In other words, we only pass a single image through a single network. We evaluate the model on the COCO mini-val and test-dev dataset and report the mean average precision (mAP) as computed with the standard COCO metric library . We perform a simple search over learning rate schedules to identify the best possible model. Finally, we examine the behavior of two object detection systems employing the best performing NASNet-A image featurization (NASNet-A, 666 @ 403240324032) as well as the image featurization geared towards mobile platforms (NASNet-A, 444 @ 105610561056). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_31",
"text": " For the mobile-optimized network, our resulting system achieves a mAP of 29.6% – exceeding previous mobile-optimized networks that employ Faster-RCNN by over 5.0% (Table 4). For the best NASNet network, our resulting network operating on images of the same spatial resolution (800 ×\\times 800) achieves mAP = 40.7%, exceeding equivalent object detection systems based off lesser performing image featurization (i.e. Inception-ResNet-v2) by 4.0% (28, 52) (see Appendix for example detections on images and side-by-side comparisons). Finally, increasing the spatial resolution of the input image results in the best reported, single model result for object detection of 43.1%, surpassing the best previous best by over 4.0% .222A primary advance in the best reported object detection system is the introduction of a novel loss . Pairing this loss with NASNet-A image featurization may lead to even further performance gains. Additionally, performance gains are achievable through ensembling multiple inferences across multiple model instances and image crops (e.g., ). These results provide further evidence that NASNet provides superior, generic image features that may be transferred across other computer vision tasks. Figure 10 and Figure 11 in Appendix C show four examples of object detection results produced by NASNet-A with the Faster-RCNN framework. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_32",
"text": " Though what search method to use is not the focus of the paper, an open question is how effective is the reinforcement learning search method. In this section, we study the effectiveness of reinforcement learning for architecture search on the CIFAR-10 image classification problem and compare it to brute-force random search (considered to be a very strong baseline for black-box optimization ) given an equivalent amount of computational resources. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_33",
"text": " Figure 6 shows the performance of reinforcement learning (RL) and random search (RS) as more model architectures are sampled. Note that the best model identified with RL is significantly better than the best model found by RS by over 1% as measured by on CIFAR-10. Additionally, RL finds an entire range of models that are of superior quality to random search. We observe this in the mean performance of the top-5 and top-25 models identified in RL versus RS. We take these results to indicate that although RS may provide a viable search strategy, RL finds better architectures in the NASNet search space. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_34",
"text": " In this work, we demonstrate how to learn scalable, convolutional cells from data that transfer to multiple image classification tasks. The learned architecture is quite flexible as it may be scaled in terms of computational cost and parameters to easily address a variety of problems. In all cases, the accuracy of the resulting model exceeds all human-designed models – ranging from models designed for mobile applications to computationally-heavy models designed to achieve the most accurate results. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_35",
"text": " The key insight in our approach is to design a search space that decouples the complexity of an architecture from the depth of a network. This resulting search space permits identifying good architectures on a small dataset (i.e., CIFAR-10) and transferring the learned architecture to image classifications across a range of data and computational scales. ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
},
{
"id": "1707.07012_all_36",
"text": " The resulting architectures approach or exceed state-of-the-art performance in both CIFAR-10 and ImageNet datasets with less computational demand than human-designed architectures (60, 29, 69). The ImageNet results are particularly important because many state-of-the-art computer vision problems (e.g., object detection , face detection , image localization ) derive image features or architectures from ImageNet classification models. For instance, we find that image features obtained from ImageNet used in combination with the Faster-RCNN framework achieves state-of-the-art object detection results. Finally, we demonstrate that we can use the resulting learned architecture to perform ImageNet classification with reduced computational budgets that outperform streamlined architectures targeted to mobile and embedded platforms (24, 70). ",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
}
] |
The authors proposed approach only works for classification models, and not for models that have other types of outputs. True or False?
|
In this work, the approach assumes that there are classes that the models should be able to predict [23]. The work focuses on classification models [27]. Thus, whether the approach can work on models with other types of outputs cannot be answered from this paper [28].
|
[
23,
27,
28
] |
[
{
"id": "1503.02531_all_0",
"text": " Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typically use very similar models for the training stage and the deployment stage despite their very different requirements: For tasks like speech and object recognition, training must extract structure from very large, highly redundant datasets but it does not need to operate in real time and it can use a huge amount of computation. Deployment to a large number of users, however, has much more stringent requirements on latency and computational resources. The analogy with insects suggests that we should be willing to train very cumbersome models if that makes it easier to extract structure from the data. The cumbersome model could be an ensemble of separately trained models or a single very large model trained with a very strong regularizer such as dropout . Once the cumbersome model has been trained, we can then use a different kind of training, which we call “distillation” to transfer the knowledge from the cumbersome model to a small model that is more suitable for deployment. A version of this strategy has already been pioneered by Rich Caruana and his collaborators . In their important paper they demonstrate convincingly that the knowledge acquired by a large ensemble of models can be transferred to a single small model. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_1",
"text": " A conceptual block that may have prevented more investigation of this very promising approach is that we tend to identify the knowledge in a trained model with the learned parameter values and this makes it hard to see how we can change the form of the model but keep the same knowledge. A more abstract view of the knowledge, that frees it from any particular instantiation, is that it is a learned mapping from input vectors to output vectors. For cumbersome models that learn to discriminate between a large number of classes, the normal training objective is to maximize the average log probability of the correct answer, but a side-effect of the learning is that the trained model assigns probabilities to all of the incorrect answers and even when these probabilities are very small, some of them are much larger than others. The relative probabilities of incorrect answers tell us a lot about how the cumbersome model tends to generalize. An image of a BMW, for example, may only have a very small chance of being mistaken for a garbage truck, but that mistake is still many times more probable than mistaking it for a carrot. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_2",
"text": " It is generally accepted that the objective function used for training should reflect the true objective of the user as closely as possible. Despite this, models are usually trained to optimize performance on the training data when the real objective is to generalize well to new data. It would clearly be better to train models to generalize well, but this requires information about the correct way to generalize and this information is not normally available. When we are distilling the knowledge from a large model into a small one, however, we can train the small model to generalize in the same way as the large model. If the cumbersome model generalizes well because, for example, it is the average of a large ensemble of different models, a small model trained to generalize in the same way will typically do much better on test data than a small model that is trained in the normal way on the same training set as was used to train the ensemble. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_3",
"text": " An obvious way to transfer the generalization ability of the cumbersome model to a small model is to use the class probabilities produced by the cumbersome model as “soft targets” for training the small model. For this transfer stage, we could use the same training set or a separate “transfer” set. When the cumbersome model is a large ensemble of simpler models, we can use an arithmetic or geometric mean of their individual predictive distributions as the soft targets. When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases, so the small model can often be trained on much less data than the original cumbersome model and using a much higher learning rate. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_4",
"text": " For tasks like MNIST in which the cumbersome model almost always produces the correct answer with very high confidence, much of the information about the learned function resides in the ratios of very small probabilities in the soft targets. For example, one version of a 2 may be given a probability of 10−6superscript10610^{-6} of being a 3 and 10−9superscript10910^{-9} of being a 7 whereas for another version it may be the other way around. This is valuable information that defines a rich similarity structure over the data (i. e. it says which 2’s look like 3’s and which look like 7’s) but it has very little influence on the cross-entropy cost function during the transfer stage because the probabilities are so close to zero. Caruana and his collaborators circumvent this problem by using the logits (the inputs to the final softmax) rather than the probabilities produced by the softmax as the targets for learning the small model and they minimize the squared difference between the logits produced by the cumbersome model and the logits produced by the small model. Our more general solution, called “distillation”, is to raise the temperature of the final softmax until the cumbersome model produces a suitably soft set of targets. We then use the same high temperature when training the small model to match these soft targets. We show later that matching the logits of the cumbersome model is actually a special case of distillation. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_5",
"text": " The transfer set that is used to train the small model could consist entirely of unlabeled data or we could use the original training set. We have found that using the original training set works well, especially if we add a small term to the objective function that encourages the small model to predict the true targets as well as matching the soft targets provided by the cumbersome model. Typically, the small model cannot exactly match the soft targets and erring in the direction of the correct answer turns out to be helpful. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_6",
"text": " Neural networks typically produce class probabilities by using a “softmax” output layer that converts the logit, zisubscript𝑧𝑖z_{i}, computed for each class into a probability, qisubscript𝑞𝑖q_{i}, by comparing zisubscript𝑧𝑖z_{i} with the other logits. qi=exp(zi/T)∑jexp(zj/T)subscript𝑞𝑖𝑒𝑥𝑝subscript𝑧𝑖𝑇subscript𝑗𝑒𝑥𝑝subscript𝑧𝑗𝑇q_{i}=\\frac{exp(z_{i}/T)}{\\sum_{j}exp(z_{j}/T)} (1) where T𝑇T is a temperature that is normally set to 111. Using a higher value for T𝑇T produces a softer probability distribution over classes. ",
"title": "Distilling the Knowledge in a Neural Network"
},
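A minimal sketch of the temperature-scaled softmax in Eq. 1, written in plain NumPy; the function name and the example logits are illustrative, not taken from the paper.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Temperature-scaled softmax: q_i = exp(z_i/T) / sum_j exp(z_j/T)."""
    z = np.asarray(logits, dtype=np.float64) / T
    z -= z.max()                      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Raising T softens the distribution over classes.
logits = np.array([5.0, 2.0, -1.0])
print(softmax_with_temperature(logits, T=1.0))   # sharply peaked
print(softmax_with_temperature(logits, T=5.0))   # much softer
```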
{
"id": "1503.02531_all_7",
"text": " In the simplest form of distillation, knowledge is transferred to the distilled model by training it on a transfer set and using a soft target distribution for each case in the transfer set that is produced by using the cumbersome model with a high temperature in its softmax. The same high temperature is used when training the distilled model, but after it has been trained it uses a temperature of 1. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_8",
"text": " When the correct labels are known for all or some of the transfer set, this method can be significantly improved by also training the distilled model to produce the correct labels. One way to do this is to use the correct labels to modify the soft targets, but we found that a better way is to simply use a weighted average of two different objective functions. The first objective function is the cross entropy with the soft targets and this cross entropy is computed using the same high temperature in the softmax of the distilled model as was used for generating the soft targets from the cumbersome model. The second objective function is the cross entropy with the correct labels. This is computed using exactly the same logits in softmax of the distilled model but at a temperature of 1. We found that the best results were generally obtained by using a condiderably lower weight on the second objective function. Since the magnitudes of the gradients produced by the soft targets scale as 1/T21superscript𝑇21/T^{2} it is important to multiply them by T2superscript𝑇2T^{2} when using both hard and soft targets. This ensures that the relative contributions of the hard and soft targets remain roughly unchanged if the temperature used for distillation is changed while experimenting with meta-parameters. ",
"title": "Distilling the Knowledge in a Neural Network"
},
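A hedged PyTorch-style sketch of the weighted objective just described: cross entropy against the teacher's soft targets at temperature T (scaled by T^2), plus cross entropy against the hard labels at temperature 1. The function name and the 0.9/0.1 weighting are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, soft_weight=0.9):
    """Weighted sum of soft-target and hard-target cross entropies.

    The soft term is multiplied by T^2 so its gradient magnitude stays
    comparable to the hard term when the temperature is changed.
    """
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_q = F.log_softmax(student_logits / T, dim=1)
    soft_loss = -(soft_targets * log_q).sum(dim=1).mean() * (T ** 2)
    hard_loss = F.cross_entropy(student_logits, labels)   # temperature 1
    return soft_weight * soft_loss + (1.0 - soft_weight) * hard_loss
```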
{
"id": "1503.02531_all_9",
"text": " Each case in the transfer set contributes a cross-entropy gradient, dC/dzi𝑑𝐶𝑑subscript𝑧𝑖dC/dz_{i}, with respect to each logit, zisubscript𝑧𝑖z_{i} of the distilled model. If the cumbersome model has logits visubscript𝑣𝑖v_{i} which produce soft target probabilities pisubscript𝑝𝑖p_{i} and the transfer training is done at a temperature of T𝑇T, this gradient is given by: ∂C∂zi=1T(qi−pi)=1T(ezi/T∑jezj/T−evi/T∑jevj/T)𝐶subscript𝑧𝑖1𝑇subscript𝑞𝑖subscript𝑝𝑖1𝑇superscript𝑒subscript𝑧𝑖𝑇subscript𝑗superscript𝑒subscript𝑧𝑗𝑇superscript𝑒subscript𝑣𝑖𝑇subscript𝑗superscript𝑒subscript𝑣𝑗𝑇\\frac{\\partial C}{\\partial z_{i}}=\\frac{1}{T}\\left(q_{i}-p_{i}\\right)=\\frac{1}{T}\\left(\\frac{e^{z_{i}/T}}{\\sum_{j}e^{z_{j}/T}}-\\frac{e^{v_{i}/T}}{\\sum_{j}e^{v_{j}/T}}\\right) (2) ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_10",
"text": " If the temperature is high compared with the magnitude of the logits, we can approximate: ∂C∂zi≈1T(1+zi/TN+∑jzj/T−1+vi/TN+∑jvj/T)𝐶subscript𝑧𝑖1𝑇1subscript𝑧𝑖𝑇𝑁subscript𝑗subscript𝑧𝑗𝑇1subscript𝑣𝑖𝑇𝑁subscript𝑗subscript𝑣𝑗𝑇\\frac{\\partial C}{\\partial z_{i}}\\approx\\frac{1}{T}\\left(\\frac{1+z_{i}/T}{N+\\sum_{j}z_{j}/T}-\\frac{1+v_{i}/T}{N+\\sum_{j}v_{j}/T}\\right) (3) ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_11",
"text": " If we now assume that the logits have been zero-meaned separately for each transfer case so that ∑jzj=∑jvj=0subscript𝑗subscript𝑧𝑗subscript𝑗subscript𝑣𝑗0\\sum_{j}z_{j}=\\sum_{j}v_{j}=0 Eq. 3 simplifies to: ∂C∂zi≈1NT2(zi−vi)𝐶subscript𝑧𝑖1𝑁superscript𝑇2subscript𝑧𝑖subscript𝑣𝑖\\frac{\\partial C}{\\partial z_{i}}\\approx\\frac{1}{NT^{2}}\\left(z_{i}-v_{i}\\right) (4) So in the high temperature limit, distillation is equivalent to minimizing 1/2(zi−vi)212superscriptsubscript𝑧𝑖subscript𝑣𝑖2{1/2}(z_{i}-v_{i})^{2}, provided the logits are zero-meaned separately for each transfer case. At lower temperatures, distillation pays much less attention to matching logits that are much more negative than the average. This is potentially advantageous because these logits are almost completely unconstrained by the cost function used for training the cumbersome model so they could be very noisy. On the other hand, the very negative logits may convey useful information about the knowledge acquired by the cumbersome model. Which of these effects dominates is an empirical question. We show that when the distilled model is much too small to capture all of the knowledege in the cumbersome model, intermediate temperatures work best which strongly suggests that ignoring the large negative logits can be helpful. ",
"title": "Distilling the Knowledge in a Neural Network"
},
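A small numeric check (with assumed example logits) that the exact gradient in Eq. 2 approaches the high-temperature approximation of Eq. 4, (z_i - v_i)/(N T^2), for zero-meaned logits as T grows.

```python
import numpy as np

def exact_grad(z, v, T):
    q = np.exp(z / T) / np.exp(z / T).sum()
    p = np.exp(v / T) / np.exp(v / T).sum()
    return (q - p) / T                      # Eq. 2

def approx_grad(z, v, T):
    return (z - v) / (len(z) * T ** 2)      # Eq. 4

z = np.array([1.0, -0.5, -0.5])             # zero-meaned student logits (illustrative)
v = np.array([2.0, -1.0, -1.0])             # zero-meaned teacher logits (illustrative)
for T in (1.0, 10.0, 100.0):
    print(T, exact_grad(z, v, T), approx_grad(z, v, T))
# The two gradients agree increasingly well as T increases.
```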
{
"id": "1503.02531_all_12",
"text": " To see how well distillation works, we trained a single large neural net with two hidden layers of 1200 rectified linear hidden units on all 60,000 training cases. The net was strongly regularized using dropout and weight-constraints as described in . Dropout can be viewed as a way of training an exponentially large ensemble of models that share weights. In addition, the input images were jittered by up to two pixels in any direction. This net achieved 67 test errors whereas a smaller net with two hidden layers of 800 rectified linear hidden units and no regularization achieved 146 errors. But if the smaller net was regularized solely by adding the additional task of matching the soft targets produced by the large net at a temperature of 20, it achieved 74 test errors. This shows that soft targets can transfer a great deal of knowledge to the distilled model, including the knowledge about how to generalize that is learned from translated training data even though the transfer set does not contain any translations. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_13",
"text": " When the distilled net had 300 or more units in each of its two hidden layers, all temperatures above 8 gave fairly similar results. But when this was radically reduced to 30 units per layer, temperatures in the range 2.5 to 4 worked significantly better than higher or lower temperatures. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_14",
"text": " We then tried omitting all examples of the digit 3 from the transfer set. So from the perspective of the distilled model, 3 is a mythical digit that it has never seen. Despite this, the distilled model only makes 206 test errors of which 133 are on the 1010 threes in the test set. Most of the errors are caused by the fact that the learned bias for the 3 class is much too low. If this bias is increased by 3.5 (which optimizes overall performance on the test set), the distilled model makes 109 errors of which 14 are on 3s. So with the right bias, the distilled model gets 98.6% of the test 3s correct despite never having seen a 3 during training. If the transfer set contains only the 7s and 8s from the training set, the distilled model makes 47.3% test errors, but when the biases for 7 and 8 are reduced by 7.6 to optimize test performance, this falls to 13.2% test errors. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_15",
"text": " In this section, we investigate the effects of ensembling Deep Neural Network (DNN) acoustic models that are used in Automatic Speech Recognition (ASR). We show that the distillation strategy that we propose in this paper achieves the desired effect of distilling an ensemble of models into a single model that works significantly better than a model of the same size that is learned directly from the same training data. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_16",
"text": " State-of-the-art ASR systems currently use DNNs to map a (short) temporal context of features derived from the waveform to a probability distribution over the discrete states of a Hidden Markov Model (HMM) . More specifically, the DNN produces a probability distribution over clusters of tri-phone states at each time and a decoder then finds a path through the HMM states that is the best compromise between using high probability states and producing a transcription that is probable under the language model. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_17",
"text": " Although it is possible (and desirable) to train the DNN in such a way that the decoder (and, thus, the language model) is taken into account by marginalizing over all possible paths, it is common to train the DNN to perform frame-by-frame classification by (locally) minimizing the cross entropy between the predictions made by the net and the labels given by a forced alignment with the ground truth sequence of states for each observation: 𝜽=argmax𝜽′P(ht|𝐬t;𝜽′)𝜽subscriptsuperscript𝜽′𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡superscript𝜽′\\boldsymbol{\\theta}=\\arg\\max_{\\boldsymbol{\\theta}^{\\prime}}P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}^{\\prime}) where 𝜽𝜽\\boldsymbol{\\theta} are the parameters of our acoustic model P𝑃P which maps acoustic observations at time t𝑡t, 𝐬tsubscript𝐬𝑡\\mathbf{s}_{t}, to a probability, P(ht|𝐬t;𝜽′)𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡superscript𝜽′P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}^{\\prime}) , of the “correct” HMM state htsubscriptℎ𝑡h_{t}, which is determined by a forced alignment with the correct sequence of words. The model is trained with a distributed stochastic gradient descent approach. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_18",
"text": " We use an architecture with 8 hidden layers each containing 2560 rectified linear units and a final softmax layer with 14,000 labels (HMM targets htsubscriptℎ𝑡h_{t}). The input is 26 frames of 40 Mel-scaled filterbank coefficients with a 10ms advance per frame and we predict the HMM state of 21st frame. The total number of parameters is about 85M. This is a slightly outdated version of the acoustic model used by Android voice search, and should be considered as a very strong baseline. To train the DNN acoustic model we use about 2000 hours of spoken English data, which yields about 700M training examples. This system achieves a frame accuracy of 58.9%, and a Word Error Rate (WER) of 10.9% on our development set. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_19",
"text": " We trained 10 separate models to predict P(ht|𝐬t;𝜽)𝑃conditionalsubscriptℎ𝑡subscript𝐬𝑡𝜽P(h_{t}|\\mathbf{s}_{t};\\boldsymbol{\\theta}), using exactly the same architecture and training procedure as the baseline. The models are randomly initialized with different initial parameter values and we find that this creates sufficient diversity in the trained models to allow the averaged predictions of the ensemble to significantly outperform the individual models. We have explored adding diversity to the models by varying the sets of data that each model sees, but we found this to not significantly change our results, so we opted for the simpler approach. For the distillation we tried temperatures of (1,𝟐,5,10)12510(1,{\\bf 2},5,10) and used a relative weight of 0.5 on the cross-entropy for the hard targets, where bold font indicates the best value that was used for table 1 . ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_20",
"text": " Table 1 shows that, indeed, our distillation approach is able to extract more useful information from the training set than simply using the hard labels to train a single model. More than 80% of the improvement in frame classification accuracy achieved by using an ensemble of 10 models is transferred to the distilled model which is similar to the improvement we observed in our preliminary experiments on MNIST. The ensemble gives a smaller improvement on the ultimate objective of WER (on a 23K-word test set) due to the mismatch in the objective function, but again, the improvement in WER achieved by the ensemble is transferred to the distilled model. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_21",
"text": " We have recently become aware of related work on learning a small acoustic model by matching the class probabilities of an already trained larger model . However, they do the distillation at a temperature of 1 using a large unlabeled dataset and their best distilled model only reduces the error rate of the small model by 28% of the gap between the error rates of the large and small models when they are both trained with hard labels. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_22",
"text": " Training an ensemble of models is a very simple way to take advantage of parallel computation and the usual objection that an ensemble requires too much computation at test time can be dealt with by using distillation. There is, however, another important objection to ensembles: If the individual models are large neural networks and the dataset is very large, the amount of computation required at training time is excessive, even though it is easy to parallelize. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_23",
"text": " In this section we give an example of such a dataset and we show how learning specialist models that each focus on a different confusable subset of the classes can reduce the total amount of computation required to learn an ensemble. The main problem with specialists that focus on making fine-grained distinctions is that they overfit very easily and we describe how this overfitting may be prevented by using soft targets. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_24",
"text": " JFT is an internal Google dataset that has 100 million labeled images with 15,000 labels. When we did this work, Google’s baseline model for JFT was a deep convolutional neural network that had been trained for about six months using asynchronous stochastic gradient descent on a large number of cores. This training used two types of parallelism . First, there were many replicas of the neural net running on different sets of cores and processing different mini-batches from the training set. Each replica computes the average gradient on its current mini-batch and sends this gradient to a sharded parameter server which sends back new values for the parameters. These new values reflect all of the gradients received by the parameter server since the last time it sent parameters to the replica. Second, each replica is spread over multiple cores by putting different subsets of the neurons on each core. Ensemble training is yet a third type of parallelism that can be wrapped around the other two types, but only if a lot more cores are available. Waiting for several years to train an ensemble of models was not an option, so we needed a much faster way to improve the baseline model. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_25",
"text": " When the number of classes is very large, it makes sense for the cumbersome model to be an ensemble that contains one generalist model trained on all the data and many “specialist” models, each of which is trained on data that is highly enriched in examples from a very confusable subset of the classes (like different types of mushroom). The softmax of this type of specialist can be made much smaller by combining all of the classes it does not care about into a single dustbin class. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_26",
"text": " To reduce overfitting and share the work of learning lower level feature detectors, each specialist model is initialized with the weights of the generalist model. These weights are then slightly modified by training the specialist with half its examples coming from its special subset and half sampled at random from the remainder of the training set. After training, we can correct for the biased training set by incrementing the logit of the dustbin class by the log of the proportion by which the specialist class is oversampled. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_27",
"text": " In order to derive groupings of object categories for the specialists, we decided to focus on categories that our full network often confuses. Even though we could have computed the confusion matrix and used it as a way to find such clusters, we opted for a simpler approach that does not require the true labels to construct the clusters. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_28",
"text": " In particular, we apply a clustering algorithm to the covariance matrix of the predictions of our generalist model, so that a set of classes Smsuperscript𝑆𝑚S^{m} that are often predicted together will be used as targets for one of our specialist models, m𝑚m. We applied an on-line version of the K-means algorithm to the columns of the covariance matrix, and obtained reasonable clusters (shown in Table 2). We tried several clustering algorithms which produced similar results. ",
"title": "Distilling the Knowledge in a Neural Network"
},
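A hedged sketch of this clustering step. It uses scikit-learn's MiniBatchKMeans as a convenient stand-in for the on-line K-means the authors mention (an assumption, not their implementation), applied to the columns of the covariance matrix of the generalist's predictions; the toy data is random and purely illustrative.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_confusable_classes(generalist_probs, n_specialists):
    """Group classes whose generalist predictions co-vary.

    generalist_probs: (num_examples, num_classes) array of softmax outputs.
    Returns a cluster index per class; each cluster S^m defines one specialist.
    """
    cov = np.cov(generalist_probs, rowvar=False)      # (num_classes, num_classes)
    km = MiniBatchKMeans(n_clusters=n_specialists, random_state=0)
    return km.fit_predict(cov.T)                       # cluster the columns

# Toy usage with random predictions (illustrative only).
probs = np.random.dirichlet(np.ones(20), size=500)
print(cluster_confusable_classes(probs, n_specialists=4))
```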
{
"id": "1503.02531_all_29",
"text": " Before investigating what happens when specialist models are distilled, we wanted to see how well ensembles containing specialists performed. In addition to the specialist models, we always have a generalist model so that we can deal with classes for which we have no specialists and so that we can decide which specialists to use. Given an input image 𝐱𝐱\\mathbf{x}, we do top-one classification in two steps: ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_30",
"text": " Step 1: For each test case, we find the n𝑛n most probable classes according to the generalist model. Call this set of classes k𝑘k. In our experiments, we used n=1𝑛1n=1. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_31",
"text": " Step 2: We then take all the specialist models, m𝑚m, whose special subset of confusable classes, Smsuperscript𝑆𝑚S^{m}, has a non-empty intersection with k𝑘k and call this the active set of specialists Aksubscript𝐴𝑘A_{k} (note that this set may be empty). We then find the full probability distribution 𝐪𝐪\\mathbf{q} over all the classes that minimizes: KL(𝐩g,𝐪)+∑m∈AkKL(𝐩m,𝐪)𝐾𝐿superscript𝐩𝑔𝐪subscript𝑚subscript𝐴𝑘𝐾𝐿superscript𝐩𝑚𝐪KL(\\mathbf{p}^{g},\\mathbf{q})+\\sum_{m\\in A_{k}}KL(\\mathbf{p}^{m},\\mathbf{q}) (5) where KL𝐾𝐿KL denotes the KL divergence, and 𝐩msuperscript𝐩𝑚\\mathbf{p}^{m} 𝐩gsuperscript𝐩𝑔\\mathbf{p}^{g} denote the probability distribution of a specialist model or the generalist full model. The distribution 𝐩msuperscript𝐩𝑚\\mathbf{p}^{m} is a distribution over all the specialist classes of m𝑚m plus a single dustbin class, so when computing its KL divergence from the full 𝐪𝐪\\mathbf{q} distribution we sum all of the probabilities that the full 𝐪𝐪\\mathbf{q} distribution assigns to all the classes in m𝑚m’s dustbin. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_32",
"text": " Eq. 5 does not have a general closed form solution, though when all the models produce a single probability for each class the solution is either the arithmetic or geometric mean, depending on whether we use KL(𝐩,𝐪)𝐾𝐿𝐩𝐪KL(\\mathbf{p},\\mathbf{q}) or KL(𝐪,𝐩)𝐾𝐿𝐪𝐩KL(\\mathbf{q},\\mathbf{p})). We parameterize 𝐪=softmax(𝐳)𝐪𝑠𝑜𝑓𝑡𝑚𝑎𝑥𝐳\\mathbf{q}=softmax(\\mathbf{z}) (with T=1𝑇1T=1) and we use gradient descent to optimize the logits 𝐳𝐳\\mathbf{z} w.r.t. eq. 5. Note that this optimization must be carried out for each image. ",
"title": "Distilling the Knowledge in a Neural Network"
},
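A hedged PyTorch sketch of this per-image optimization: parameterize q = softmax(z) and run gradient descent on the Eq. 5 objective, folding q's mass on non-specialist classes into each specialist's dustbin entry. Function name, step count, learning rate, and the toy distributions are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def combine_predictions(p_g, specialists, steps=200, lr=0.5, eps=1e-12):
    """Minimize KL(p_g, q) + sum_m KL(p_m, q) over q = softmax(z) (Eq. 5).

    p_g: (K,) generalist distribution over all K classes.
    specialists: list of (class_indices, p_m) where p_m covers those
        classes plus one trailing dustbin entry.
    """
    z = torch.zeros_like(p_g, requires_grad=True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        q = F.softmax(z, dim=0)
        loss = (p_g * ((p_g + eps).log() - (q + eps).log())).sum()      # KL(p_g, q)
        for idx, p_m in specialists:
            dustbin = (1.0 - q[idx].sum()).clamp_min(eps).reshape(1)    # q mass outside S^m
            q_m = torch.cat([q[idx], dustbin])
            loss = loss + (p_m * ((p_m + eps).log() - (q_m + eps).log())).sum()
        loss.backward()
        opt.step()
    return F.softmax(z.detach(), dim=0)

# Toy usage: one specialist covering classes {0, 1} plus a dustbin.
p_g = torch.tensor([0.6, 0.2, 0.1, 0.1])
spec = [([0, 1], torch.tensor([0.7, 0.25, 0.05]))]
print(combine_predictions(p_g, spec))
```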
{
"id": "1503.02531_all_33",
"text": " Starting from the trained baseline full network, the specialists train extremely fast (a few days instead of many weeks for JFT). Also, all the specialists are trained completely independently. Table 3 shows the absolute test accuracy for the baseline system and the baseline system combined with the specialist models. With 61 specialist models, there is a 4.4% relative improvement in test accuracy overall. We also report conditional test accuracy, which is the accuracy by only considering examples belonging to the specialist classes, and restricting our predictions to that subset of classes. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_34",
"text": " For our JFT specialist experiments, we trained 61 specialist models, each with 300 classes (plus the dustbin class). Because the sets of classes for the specialists are not disjoint, we often had multiple specialists covering a particular image class. Table 4 shows the number of test set examples, the change in the number of examples correct at position 1 when using the specialist(s), and the relative percentage improvement in top1 accuracy for the JFT dataset broken down by the number of specialists covering the class. We are encouraged by the general trend that accuracy improvements are larger when we have more specialists covering a particular class, since training independent specialist models is very easy to parallelize. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_35",
"text": " One of our main claims about using soft targets instead of hard targets is that a lot of helpful information can be carried in soft targets that could not possibly be encoded with a single hard target. In this section we demonstrate that this is a very large effect by using far less data to fit the 85M parameters of the baseline speech model described earlier. Table 5 shows that with only 3% of the data (about 20M examples), training the baseline model with hard targets leads to severe overfitting (we did early stopping, as the accuracy drops sharply after reaching 44.5%), whereas the same model trained with soft targets is able to recover almost all the information in the full training set (about 2% shy). It is even more remarkable to note that we did not have to do early stopping: the system with soft targets simply “converged” to 57%. This shows that soft targets are a very effective way of communicating the regularities discovered by a model trained on all of the data to another model. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_36",
"text": " The specialists that we used in our experiments on the JFT dataset collapsed all of their non-specialist classes into a single dustbin class. If we allow specialists to have a full softmax over all classes, there may be a much better way to prevent them overfitting than using early stopping. A specialist is trained on data that is highly enriched in its special classes. This means that the effective size of its training set is much smaller and it has a strong tendency to overfit on its special classes. This problem cannot be solved by making the specialist a lot smaller because then we lose the very helpful transfer effects we get from modeling all of the non-specialist classes. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_37",
"text": " Our experiment using 3% of the speech data strongly suggests that if a specialist is initialized with the weights of the generalist, we can make it retain nearly all of its knowledge about the non-special classes by training it with soft targets for the non-special classes in addition to training it with hard targets. The soft targets can be provided by the generalist. We are currently exploring this approach. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_38",
"text": " The use of specialists that are trained on subsets of the data has some resemblance to mixtures of experts which use a gating network to compute the probability of assigning each example to each expert. At the same time as the experts are learning to deal with the examples assigned to them, the gating network is learning to choose which experts to assign each example to based on the relative discriminative performance of the experts for that example. Using the discriminative performance of the experts to determine the learned assignments is much better than simply clustering the input vectors and assigning an expert to each cluster, but it makes the training hard to parallelize: First, the weighted training set for each expert keeps changing in a way that depends on all the other experts and second, the gating network needs to compare the performance of different experts on the same example to know how to revise its assignment probabilities. These difficulties have meant that mixtures of experts are rarely used in the regime where they might be most beneficial: tasks with huge datasets that contain distinctly different subsets. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_39",
"text": " It is much easier to parallelize the training of multiple specialists. We first train a generalist model and then use the confusion matrix to define the subsets that the specialists are trained on. Once these subsets have been defined the specialists can be trained entirely independently. At test time we can use the predictions from the generalist model to decide which specialists are relevant and only these specialists need to be run. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_40",
"text": " We have shown that distilling works very well for transferring knowledge from an ensemble or from a large highly regularized model into a smaller, distilled model. On MNIST distillation works remarkably well even when the transfer set that is used to train the distilled model lacks any examples of one or more of the classes. For a deep acoustic model that is version of the one used by Android voice search, we have shown that nearly all of the improvement that is achieved by training an ensemble of deep neural nets can be distilled into a single neural net of the same size which is far easier to deploy. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_41",
"text": " For really big neural networks, it can be infeasible even to train a full ensemble, but we have shown that the performance of a single really big net that has been trained for a very long time can be significantly improved by learning a large number of specialist nets, each of which learns to discriminate between the classes in a highly confusable cluster. We have not yet shown that we can distill the knowledge in the specialists back into the single large net. ",
"title": "Distilling the Knowledge in a Neural Network"
},
{
"id": "1503.02531_all_42",
"text": " We thank Yangqing Jia for assistance with training models on ImageNet and Ilya Sutskever and Yoram Singer for helpful discussions. ",
"title": "Distilling the Knowledge in a Neural Network"
}
] |
What was used as the backbone network for RetinaNet?
|
For RetinaNet, Feature Pyramid Network (FPN) was used as a backbone [25].
|
[
25
] |
[
{
"id": "1708.02002_all_0",
"text": " Current state-of-the-art object detectors are based on a two-stage, proposal-driven mechanism. As popularized in the R-CNN framework , the first stage generates a sparse set of candidate object locations and the second stage classifies each candidate location as one of the foreground classes or as background using a convolutional neural network. Through a sequence of advances (10, 28, 20, 14), this two-stage framework consistently achieves top accuracy on the challenging COCO benchmark . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_1",
"text": " Despite the success of two-stage detectors, a natural question to ask is: could a simple one-stage detector achieve similar accuracy? One stage detectors are applied over a regular, dense sampling of object locations, scales, and aspect ratios. Recent work on one-stage detectors, such as YOLO (26, 27) and SSD (22, 9), demonstrates promising results, yielding faster detectors with accuracy within 10-40% relative to state-of-the-art two-stage methods. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_2",
"text": " This paper pushes the envelop further: we present a one-stage object detector that, for the first time, matches the state-of-the-art COCO AP of more complex two-stage detectors, such as the Feature Pyramid Network (FPN) or Mask R-CNN variants of Faster R-CNN . To achieve this result, we identify class imbalance during training as the main obstacle impeding one-stage detector from achieving state-of-the-art accuracy and propose a new loss function that eliminates this barrier. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_3",
"text": " Class imbalance is addressed in R-CNN-like detectors by a two-stage cascade and sampling heuristics. The proposal stage (e.g., Selective Search , EdgeBoxes , DeepMask (24, 25), RPN ) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio (1:3), or online hard example mining (OHEM) , are performed to maintain a manageable balance between foreground and background. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_4",
"text": " In contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. In practice this often amounts to enumerating ∼similar-to\\scriptstyle\\sim100k locations that densely cover spatial positions, scales, and aspect ratios. While similar sampling heuristics may also be applied, they are inefficient as the training procedure is still dominated by easily classified background examples. This inefficiency is a classic problem in object detection that is typically addressed via techniques such as bootstrapping (33, 29) or hard example mining (37, 8, 31). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_5",
"text": " In this paper, we propose a new loss function that acts as a more effective alternative to previous approaches for dealing with class imbalance. The loss function is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases, see Figure 1. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. Experiments show that our proposed Focal Loss enables us to train a high-accuracy, one-stage detector that significantly outperforms the alternatives of training with the sampling heuristics or hard example mining, the previous state-of-the-art techniques for training one-stage detectors. Finally, we note that the exact form of the focal loss is not crucial, and we show other instantiations can achieve similar results. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_6",
"text": " To demonstrate the effectiveness of the proposed focal loss, we design a simple one-stage object detector called RetinaNet, named for its dense sampling of object locations in an input image. Its design features an efficient in-network feature pyramid and use of anchor boxes. It draws on a variety of recent ideas from (22, 6, 28, 20). RetinaNet is efficient and accurate; our best model, based on a ResNet-101-FPN backbone, achieves a COCO test-dev AP of 39.1 while running at 5 fps, surpassing the previously best published single-model results from both one and two-stage detectors, see Figure 2. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_7",
"text": " The sliding-window paradigm, in which a classifier is applied on a dense image grid, has a long and rich history. One of the earliest successes is the classic work of LeCun et al. who applied convolutional neural networks to handwritten digit recognition (19, 36). Viola and Jones used boosted object detectors for face detection, leading to widespread adoption of such models. The introduction of HOG and integral channel features gave rise to effective methods for pedestrian detection. DPMs helped extend dense detectors to more general object categories and had top results on PASCAL for many years. While the sliding-window approach was the leading detection paradigm in classic computer vision, with the resurgence of deep learning , two-stage detectors, described next, quickly came to dominate object detection. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_8",
"text": " The dominant paradigm in modern object detection is based on a two-stage approach. As pioneered in the Selective Search work , the first stage generates a sparse set of candidate proposals that should contain all objects while filtering out the majority of negative locations, and the second stage classifies the proposals into foreground classes / background. R-CNN upgraded the second-stage classifier to a convolutional network yielding large gains in accuracy and ushering in the modern era of object detection. R-CNN was improved over the years, both in terms of speed (15, 10) and by using learned object proposals (6, 24, 28). Region Proposal Networks (RPN) integrated proposal generation with the second-stage classifier into a single convolution network, forming the Faster R-CNN framework . Numerous extensions to this framework have been proposed, e.g. (20, 31, 32, 16, 14). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_9",
"text": " OverFeat was one of the first modern one-stage object detector based on deep networks. More recently SSD (22, 9) and YOLO (26, 27) have renewed interest in one-stage methods. These detectors have been tuned for speed but their accuracy trails that of two-stage methods. SSD has a 10-20% lower AP, while YOLO focuses on an even more extreme speed/accuracy trade-off. See Figure 2. Recent work showed that two-stage detectors can be made fast simply by reducing input image resolution and the number of proposals, but one-stage methods trailed in accuracy even with a larger compute budget . In contrast, the aim of this work is to understand if one-stage detectors can match or surpass the accuracy of two-stage detectors while running at similar or faster speeds. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_10",
"text": " The design of our RetinaNet detector shares many similarities with previous dense detectors, in particular the concept of ‘anchors’ introduced by RPN and use of features pyramids as in SSD and FPN . We emphasize that our simple detector achieves top results not based on innovations in network design but due to our novel loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_11",
"text": " Both classic one-stage object detection methods, like boosted detectors (37, 5) and DPMs , and more recent methods, like SSD , face a large class imbalance during training. These detectors evaluate 104superscript10410^{4}-105superscript10510^{5} candidate locations per image but only a few locations contain objects. This imbalance causes two problems: (1) training is inefficient as most locations are easy negatives that contribute no useful learning signal; (2) en masse, the easy negatives can overwhelm training and lead to degenerate models. A common solution is to perform some form of hard negative mining (33, 37, 8, 31, 22) that samples hard examples during training or more complex sampling/reweighing schemes . In contrast, we show that our proposed focal loss naturally handles the class imbalance faced by a one-stage detector and allows us to efficiently train on all examples without sampling and without easy negatives overwhelming the loss and computed gradients. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_12",
"text": " There has been much interest in designing robust loss functions (e.g., Huber loss ) that reduce the contribution of outliers by down-weighting the loss of examples with large errors (hard examples). In contrast, rather than addressing outliers, our focal loss is designed to address class imbalance by down-weighting inliers (easy examples) such that their contribution to the total loss is small even if their number is large. In other words, the focal loss performs the opposite role of a robust loss: it focuses training on a sparse set of hard examples. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_13",
"text": " The Focal Loss is designed to address the one-stage object detection scenario in which there is an extreme imbalance between foreground and background classes during training (e.g., 1:1000). We introduce the focal loss starting from the cross entropy (CE) loss for binary classification111Extending the focal loss to the multi-class case is straightforward and works well; for simplicity we focus on the binary loss in this work.: CE(p,y)={−log(p)if y=1−log(1−p)otherwise.CE𝑝𝑦cases𝑝if y=11𝑝otherwise.\\textrm{CE}(p,y)=\\begin{cases}-\\log(p)&\\text{if $y=1$}\\\\ -\\log(1-p)&\\text{otherwise.}\\end{cases} (1) In the above y∈{±1}𝑦plus-or-minus1y\\in\\{\\pm 1\\} specifies the ground-truth class and p∈(0,1)𝑝01p\\in(0,1) is the model’s estimated probability for the class with label y=1𝑦1y=1. For notational convenience, we define ptsubscript𝑝tp_{\\textrm{t}}: pt={pif y=11−potherwise,subscript𝑝tcases𝑝if y=11𝑝otherwise,p_{\\textrm{t}}=\\begin{cases}p&\\text{if $y=1$}\\\\ 1-p&\\text{otherwise,}\\end{cases} (2) and rewrite CE(p,y)=CE(pt)=−log(pt)CE𝑝𝑦CEsubscript𝑝tsubscript𝑝t\\textrm{CE}(p,y)=\\textrm{CE}(p_{\\textrm{t}})=-\\log(p_{\\textrm{t}}). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_14",
"text": " The CE loss can be seen as the blue (top) curve in Figure 1. One notable property of this loss, which can be easily seen in its plot, is that even examples that are easily classified (pt≫.5much-greater-thansubscript𝑝t.5p_{\\textrm{t}}\\gg.5) incur a loss with non-trivial magnitude. When summed over a large number of easy examples, these small loss values can overwhelm the rare class. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_15",
"text": " A common method for addressing class imbalance is to introduce a weighting factor α∈(0,1)𝛼01\\alpha\\in(0,1) for class 111 and 1−α1𝛼1-\\alpha for class −11-1. In practice α𝛼\\alpha may be set by inverse class frequency or treated as a hyperparameter to set by cross validation. For notational convenience, we define αtsubscript𝛼t\\alpha_{\\textrm{t}} analogously to how we defined ptsubscript𝑝tp_{\\textrm{t}}. We write the α𝛼\\alpha-balanced CE loss as: CE(pt)=−αtlog(pt).CEsubscript𝑝tsubscript𝛼tsubscript𝑝t\\textrm{CE}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}\\log(p_{\\textrm{t}}). (3) This loss is a simple extension to CE that we consider as an experimental baseline for our proposed focal loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_16",
"text": " As our experiments will show, the large class imbalance encountered during training of dense detectors overwhelms the cross entropy loss. Easily classified negatives comprise the majority of the loss and dominate the gradient. While α𝛼\\alpha balances the importance of positive/negative examples, it does not differentiate between easy/hard examples. Instead, we propose to reshape the loss function to down-weight easy examples and thus focus training on hard negatives. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_17",
"text": " More formally, we propose to add a modulating factor (1−pt)γsuperscript1subscript𝑝t𝛾(1-p_{\\textrm{t}})^{\\gamma} to the cross entropy loss, with tunable focusing parameter γ≥0𝛾0\\gamma\\geq 0. We define the focal loss as: FL(pt)=−(1−pt)γlog(pt).FLsubscript𝑝tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (4) ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_18",
"text": " The focal loss is visualized for several values of γ∈(0,5)𝛾05\\gamma\\in(0,5) in Figure 1. We note two properties of the focal loss. (1) When an example is misclassified and ptsubscript𝑝tp_{\\textrm{t}} is small, the modulating factor is near 111 and the loss is unaffected. As pt→1→subscript𝑝t1p_{\\textrm{t}}\\rightarrow 1, the factor goes to 0 and the loss for well-classified examples is down-weighted. (2) The focusing parameter γ𝛾\\gamma smoothly adjusts the rate at which easy examples are down-weighted. When γ=0𝛾0\\gamma=0, FL is equivalent to CE, and as γ𝛾\\gamma is increased the effect of the modulating factor is likewise increased (we found γ=2𝛾2\\gamma=2 to work best in our experiments). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_19",
"text": " Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss. For instance, with γ=2𝛾2\\gamma=2, an example classified with pt=0.9subscript𝑝t0.9p_{\\textrm{t}}=0.9 would have 100×100\\times lower loss compared with CE and with pt≈0.968subscript𝑝t0.968p_{\\textrm{t}}\\approx 0.968 it would have 1000×1000\\times lower loss. This in turn increases the importance of correcting misclassified examples (whose loss is scaled down by at most 4×4\\times for pt≤.5subscript𝑝t.5p_{\\textrm{t}}\\leq.5 and γ=2𝛾2\\gamma=2). ",
"title": "Focal Loss for Dense Object Detection"
},
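A quick arithmetic check of the down-weighting ratios quoted above: the focal loss differs from CE only by the factor (1 - p_t)^γ, so the CE/FL ratio at p_t = 0.9 and γ = 2 is 100×. The script below is illustrative, not from the paper.

```python
import math

def ce(pt):
    return -math.log(pt)

def fl(pt, gamma=2.0):
    return -((1.0 - pt) ** gamma) * math.log(pt)

for pt in (0.5, 0.9, 0.968):
    print(f"p_t={pt}: CE/FL ratio = {ce(pt) / fl(pt):.0f}x")
# roughly 4x at p_t=0.5, 100x at p_t=0.9, 1000x at p_t=0.968
```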
{
"id": "1708.02002_all_20",
"text": " In practice we use an α𝛼\\alpha-balanced variant of the focal loss: FL(pt)=−αt(1−pt)γlog(pt).FLsubscript𝑝tsubscript𝛼tsuperscript1subscript𝑝t𝛾subscript𝑝t\\textrm{FL}(p_{\\textrm{t}})=-\\alpha_{\\textrm{t}}(1-p_{\\textrm{t}})^{\\gamma}\\log(p_{\\textrm{t}}). (5) We adopt this form in our experiments as it yields slightly improved accuracy over the non-α𝛼\\alpha-balanced form. Finally, we note that the implementation of the loss layer combines the sigmoid operation for computing p𝑝p with the loss computation, resulting in greater numerical stability. ",
"title": "Focal Loss for Dense Object Detection"
},
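A hedged PyTorch sketch of the α-balanced focal loss in Eq. 5, computed from raw logits so that the sigmoid and the loss are fused (via binary_cross_entropy_with_logits) for numerical stability. The defaults mirror the values reported later in the paper (γ=2, α=0.25); the function name and the {0, 1} target convention (rather than the paper's ±1) are choices made here, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def sigmoid_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), with targets in {0, 1}."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1.0 - p) * (1.0 - targets)
    alpha_t = alpha * targets + (1.0 - alpha) * (1.0 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).sum()
```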
{
"id": "1708.02002_all_21",
"text": " While in our main experimental results we use the focal loss definition above, its precise form is not crucial. In the appendix we consider other instantiations of the focal loss and demonstrate that these can be equally effective. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_22",
"text": " Binary classification models are by default initialized to have equal probability of outputting either y=−1𝑦1y=-1 or 111. Under such an initialization, in the presence of class imbalance, the loss due to the frequent class can dominate total loss and cause instability in early training. To counter this, we introduce the concept of a ‘prior’ for the value of p𝑝p estimated by the model for the rare class (foreground) at the start of training. We denote the prior by π𝜋\\pi and set it so that the model’s estimated p𝑝p for examples of the rare class is low, e.g. 0.010.010.01. We note that this is a change in model initialization (see §4.1) and not of the loss function. We found this to improve training stability for both the cross entropy and focal loss in the case of heavy class imbalance. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_23",
"text": " Two-stage detectors are often trained with the cross entropy loss without use of α𝛼\\alpha-balancing or our proposed loss. Instead, they address class imbalance through two mechanisms: (1) a two-stage cascade and (2) biased minibatch sampling. The first cascade stage is an object proposal mechanism (35, 24, 28) that reduces the nearly infinite set of possible object locations down to one or two thousand. Importantly, the selected proposals are not random, but are likely to correspond to true object locations, which removes the vast majority of easy negatives. When training the second stage, biased sampling is typically used to construct minibatches that contain, for instance, a 1:3 ratio of positive to negative examples. This ratio is like an implicit α𝛼\\alpha-balancing factor that is implemented via sampling. Our proposed focal loss is designed to address these mechanisms in a one-stage detection system directly via the loss function. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_24",
"text": " RetinaNet is a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-self convolutional network. The first subnet performs convolutional object classification on the backbone’s output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that we propose specifically for one-stage, dense detection, see Figure 3. While there are many possible choices for the details of these components, most design parameters are not particularly sensitive to exact values as shown in the experiments. We describe each component of RetinaNet next. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_25",
"text": " We adopt the Feature Pyramid Network (FPN) from as the backbone network for RetinaNet. In brief, FPN augments a standard convolutional network with a top-down pathway and lateral connections so the network efficiently constructs a rich, multi-scale feature pyramid from a single resolution input image, see Figure 3(a)-(b). Each level of the pyramid can be used for detecting objects at a different scale. FPN improves multi-scale predictions from fully convolutional networks (FCN) , as shown by its gains for RPN and DeepMask-style proposals , as well at two-stage detectors such as Fast R-CNN or Mask R-CNN . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_26",
"text": " Following , we build FPN on top of the ResNet architecture . We construct a pyramid with levels P3subscript𝑃3P_{3} through P7subscript𝑃7P_{7}, where l𝑙l indicates pyramid level (Plsubscript𝑃𝑙P_{l} has resolution 2lsuperscript2𝑙2^{l} lower than the input). As in all pyramid levels have C=256𝐶256C=256 channels. Details of the pyramid generally follow with a few modest differences.222RetinaNet uses feature pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, where P3subscript𝑃3P_{3} to P5subscript𝑃5P_{5} are computed from the output of the corresponding ResNet residual stage (C3subscript𝐶3C_{3} through C5subscript𝐶5C_{5}) using top-down and lateral connections just as in , P6subscript𝑃6P_{6} is obtained via a 3×\\times3 stride-2 conv on C5subscript𝐶5C_{5}, and P7subscript𝑃7P_{7} is computed by applying ReLU followed by a 3×\\times3 stride-2 conv on P6subscript𝑃6P_{6}. This differs slightly from : (1) we don’t use the high-resolution pyramid level P2subscript𝑃2P_{2} for computational reasons, (2) P6subscript𝑃6P_{6} is computed by strided convolution instead of downsampling, and (3) we include P7subscript𝑃7P_{7} to improve large object detection. These minor modifications improve speed while maintaining accuracy. While many design choices are not crucial, we emphasize the use of the FPN backbone is; preliminary experiments using features from only the final ResNet layer yielded low AP. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_27",
"text": " We use translation-invariant anchor boxes similar to those in the RPN variant in . The anchors have areas of 322superscript32232^{2} to 5122superscript5122512^{2} on pyramid levels P3subscript𝑃3P_{3} to P7subscript𝑃7P_{7}, respectively. As in , at each pyramid level we use anchors at three aspect ratios {1\\{1:2,22, 111:111, 222:1}1\\}. For denser scale coverage than in , at each level we add anchors of sizes {20superscript202^{0}, 21/3superscript2132^{1/3}, 22/3superscript2232^{2/3}} of the original set of 3 aspect ratio anchors. This improve AP in our setting. In total there are A=9𝐴9A=9 anchors per level and across levels they cover the scale range 32 - 813 pixels with respect to the network’s input image. ",
"title": "Focal Loss for Dense Object Detection"
},
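A small sketch (an assumed helper, not the authors' code) that enumerates the 9 anchor shapes per pyramid level described above: 3 octave scales × 3 aspect ratios, with base sizes 32 through 512 on P3 to P7.

```python
import math

def anchor_shapes(base_size):
    """Return the (width, height) of the 9 anchors for one pyramid level."""
    shapes = []
    for octave in (2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3)):   # octave scales
        area = (base_size * octave) ** 2
        for ratio in (0.5, 1.0, 2.0):                      # aspect ratios 1:2, 1:1, 2:1 (h/w)
            w = math.sqrt(area / ratio)
            h = w * ratio
            shapes.append((w, h))
    return shapes

# Base anchor sizes 32..512 on levels P3..P7.
for level, base in zip(range(3, 8), (32, 64, 128, 256, 512)):
    print(f"P{level}:", [(round(w), round(h)) for w, h in anchor_shapes(base)])
```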
{
"id": "1708.02002_all_28",
"text": " Each anchor is assigned a length K𝐾K one-hot vector of classification targets, where K𝐾K is the number of object classes, and a 4-vector of box regression targets. We use the assignment rule from RPN but modified for multi-class detection and with adjusted thresholds. Specifically, anchors are assigned to ground-truth object boxes using an intersection-over-union (IoU) threshold of 0.5; and to background if their IoU is in (0, 0.4). As each anchor is assigned to at most one object box, we set the corresponding entry in its length K𝐾K label vector to 111 and all other entries to 00. If an anchor is unassigned, which may happen with overlap in (0.4, 0.5), it is ignored during training. Box regression targets are computed as the offset between each anchor and its assigned object box, or omitted if there is no assignment. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_29",
"text": " The classification subnet predicts the probability of object presence at each spatial position for each of the A𝐴A anchors and K𝐾K object classes. This subnet is a small FCN attached to each FPN level; parameters of this subnet are shared across all pyramid levels. Its design is simple. Taking an input feature map with C𝐶C channels from a given pyramid level, the subnet applies four 3×\\times3 conv layers, each with C𝐶C filters and each followed by ReLU activations, followed by a 3×\\times3 conv layer with KA𝐾𝐴KA filters. Finally sigmoid activations are attached to output the KA𝐾𝐴KA binary predictions per spatial location, see Figure 3 (c). We use C=256𝐶256C=256 and A=9𝐴9A=9 in most experiments. ",
"title": "Focal Loss for Dense Object Detection"
},
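A hedged PyTorch sketch of the classification head just described: four 3×3 convs with C filters and ReLU, then a 3×3 conv with K·A filters (sigmoid applied where the outputs are consumed). The hyperparameters mirror the text; the module itself is illustrative, not the reference implementation.

```python
import torch
from torch import nn

class ClassificationSubnet(nn.Module):
    def __init__(self, in_channels=256, num_anchors=9, num_classes=80):
        super().__init__()
        layers = []
        for _ in range(4):                                   # four 3x3 conv + ReLU blocks
            layers += [nn.Conv2d(in_channels, in_channels, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, feature_map):
        # K*A logits per spatial location; a sigmoid turns them into per-class probabilities.
        return self.net(feature_map)

# The same module (shared weights) is applied to every FPN level.
head = ClassificationSubnet()
print(head(torch.zeros(1, 256, 32, 32)).shape)   # torch.Size([1, 720, 32, 32])
```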
{
"id": "1708.02002_all_30",
"text": " In contrast to RPN , our object classification subnet is deeper, uses only 3×\\times3 convs, and does not share parameters with the box regression subnet (described next). We found these higher-level design decisions to be more important than specific values of hyperparameters. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_31",
"text": " In parallel with the object classification subnet, we attach another small FCN to each pyramid level for the purpose of regressing the offset from each anchor box to a nearby ground-truth object, if one exists. The design of the box regression subnet is identical to the classification subnet except that it terminates in 4A4𝐴4A linear outputs per spatial location, see Figure 3 (d). For each of the A𝐴A anchors per spatial location, these 444 outputs predict the relative offset between the anchor and the ground-truth box (we use the standard box parameterization from R-CNN ). We note that unlike most recent work, we use a class-agnostic bounding box regressor which uses fewer parameters and we found to be equally effective. The object classification subnet and the box regression subnet, though sharing a common structure, use separate parameters. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_32",
"text": " RetinaNet forms a single FCN comprised of a ResNet-FPN backbone, a classification subnet, and a box regression subnet, see Figure 3. As such, inference involves simply forwarding an image through the network. To improve speed, we only decode box predictions from at most 1k top-scoring predictions per FPN level, after thresholding detector confidence at 0.05. The top predictions from all levels are merged and non-maximum suppression with a threshold of 0.5 is applied to yield the final detections. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_33",
"text": " We use the focal loss introduced in this work as the loss on the output of the classification subnet. As we will show in §5, we find that γ=2𝛾2\\gamma=2 works well in practice and the RetinaNet is relatively robust to γ∈(0.5,5)𝛾0.55\\gamma\\in(0.5,5). We emphasize that when training RetinaNet, the focal loss is applied to all ∼similar-to\\scriptstyle\\sim100k anchors in each sampled image. This stands in contrast to common practice of using heuristic sampling (RPN) or hard example mining (OHEM, SSD) to select a small set of anchors (e.g., 256) for each minibatch. The total focal loss of an image is computed as the sum of the focal loss over all ∼similar-to\\scriptstyle\\sim100k anchors, normalized by the number of anchors assigned to a ground-truth box. We perform the normalization by the number of assigned anchors, not total anchors, since the vast majority of anchors are easy negatives and receive negligible loss values under the focal loss. Finally we note that α𝛼\\alpha, the weight assigned to the rare class, also has a stable range, but it interacts with γ𝛾\\gamma making it necessary to select the two together (see Tables 1a and 1b). In general α𝛼\\alpha should be decreased slightly as γ𝛾\\gamma is increased (for γ=2𝛾2\\gamma=2, α=0.25𝛼0.25\\alpha=0.25 works best). ",
"title": "Focal Loss for Dense Object Detection"
},
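A compact sketch of the focal loss as used above, assuming binary (sigmoid) targets of shape (num_anchors, K) and normalization by the number of anchors assigned to a ground-truth box; gamma=2 and alpha=0.25 are the defaults reported in the passage.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, num_assigned, gamma=2.0, alpha=0.25):
    """logits, targets: (num_anchors, K); targets are 0/1; num_assigned: positive anchor count."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # weight for the rare (positive) class
    loss = alpha_t * (1 - p_t) ** gamma * ce                 # modulating term discounts easy examples
    return loss.sum() / max(num_assigned, 1)                 # normalize by assigned anchors, not total
```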
{
"id": "1708.02002_all_34",
"text": " We experiment with ResNet-50-FPN and ResNet-101-FPN backbones . The base ResNet-50 and ResNet-101 models are pre-trained on ImageNet1k; we use the models released by . New layers added for FPN are initialized as in . All new conv layers except the final one in the RetinaNet subnets are initialized with bias b=0𝑏0b=0 and a Gaussian weight fill with σ=0.01𝜎0.01\\sigma=0.01. For the final conv layer of the classification subnet, we set the bias initialization to b=−log((1−π)/π)𝑏1𝜋𝜋b=-\\log((1-\\pi)/\\pi), where π𝜋\\pi specifies that at the start of training every anchor should be labeled as foreground with confidence of ∼similar-to\\scriptstyle\\simπ𝜋\\pi. We use π=.01𝜋.01\\pi=.01 in all experiments, although results are robust to the exact value. As explained in §3.3, this initialization prevents the large number of background anchors from generating a large, destabilizing loss value in the first iteration of training. ",
"title": "Focal Loss for Dense Object Detection"
},
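The initialization scheme above maps onto a few lines of PyTorch; this sketch assumes the classification subnet's final K*A conv layer is passed in separately, with pi=0.01 and sigma=0.01 as in the text.

```python
import math
import torch.nn as nn

def init_subnet(subnet, final_conv, prior=0.01, sigma=0.01):
    for m in subnet.modules():
        if isinstance(m, nn.Conv2d):
            nn.init.normal_(m.weight, mean=0.0, std=sigma)  # Gaussian weight fill, sigma = 0.01
            nn.init.constant_(m.bias, 0.0)                  # bias b = 0 for all new conv layers
    # final conv of the classification subnet: b = -log((1 - pi) / pi)
    nn.init.constant_(final_conv.bias, -math.log((1.0 - prior) / prior))
```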
{
"id": "1708.02002_all_35",
"text": " RetinaNet is trained with stochastic gradient descent (SGD). We use synchronized SGD over 8 GPUs with a total of 16 images per minibatch (2 images per GPU). Unless otherwise specified, all models are trained for 90k iterations with an initial learning rate of 0.01, which is then divided by 10 at 60k and again at 80k iterations. We use horizontal image flipping as the only form of data augmentation unless otherwise noted. Weight decay of 0.0001 and momentum of 0.9 are used. The training loss is the sum the focal loss and the standard smooth L1subscript𝐿1L_{1} loss used for box regression . Training time ranges between 10 and 35 hours for the models in Table 1e. ",
"title": "Focal Loss for Dense Object Detection"
},
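The optimization recipe corresponds to standard SGD components; a rough sketch (PyTorch), with the model, data loading, and the smooth L1 box loss left abstract, and the scheduler stepped once per iteration.

```python
import torch

def build_optimizer(model):
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
    # 90k iterations total; divide the learning rate by 10 at 60k and again at 80k
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[60_000, 80_000], gamma=0.1)
    return opt, sched
```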
{
"id": "1708.02002_all_36",
"text": " We present experimental results on the bounding box detection track of the challenging COCO benchmark . For training, we follow common practice (1, 20) and use the COCO trainval35k split (union of 80k images from train and a random 35k subset of images from the 40k image val split). We report lesion and sensitivity studies by evaluating on the minival split (the remaining 5k images from val). For our main results, we report COCO AP on the test-dev split, which has no public labels and requires use of the evaluation server. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_37",
"text": " We run numerous experiments to analyze the behavior of the loss function for dense detection along with various optimization strategies. For all experiments we use depth 50 or 101 ResNets with a Feature Pyramid Network (FPN) constructed on top. For all ablation studies we use an image scale of 600 pixels for training and testing. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_38",
"text": " Our first attempt to train RetinaNet uses standard cross entropy (CE) loss without any modifications to the initialization or learning strategy. This fails quickly, with the network diverging during training. However, simply initializing the last layer of our model such that the prior probability of detecting an object is π=.01𝜋.01\\pi=.01 (see §4.1) enables effective learning. Training RetinaNet with ResNet-50 and this initialization already yields a respectable AP of 30.2 on COCO. Results are insensitive to the exact value of π𝜋\\pi so we use π=.01𝜋.01\\pi=.01 for all experiments. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_39",
"text": " Our next attempt to improve learning involved using the α𝛼\\alpha-balanced CE loss described in §3.1. Results for various α𝛼\\alpha are shown in Table 1a. Setting α=.75𝛼.75\\alpha=.75 gives a gain of 0.9 points AP. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_40",
"text": " Results using our proposed focal loss are shown in Table 1b. The focal loss introduces one new hyperparameter, the focusing parameter γ𝛾\\gamma, that controls the strength of the modulating term. When γ=0𝛾0\\gamma=0, our loss is equivalent to the CE loss. As γ𝛾\\gamma increases, the shape of the loss changes so that “easy” examples with low loss get further discounted, see Figure 1. FL shows large gains over CE as γ𝛾\\gamma is increased. With γ=2𝛾2\\gamma=2, FL yields a 2.9 AP improvement over the α𝛼\\alpha-balanced CE loss. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_41",
"text": " For the experiments in Table 1b, for a fair comparison we find the best α𝛼\\alpha for each γ𝛾\\gamma. We observe that lower α𝛼\\alpha’s are selected for higher γ𝛾\\gamma’s (as easy negatives are down-weighted, less emphasis needs to be placed on the positives). Overall, however, the benefit of changing γ𝛾\\gamma is much larger, and indeed the best α𝛼\\alpha’s ranged in just (.25,.75) (we tested α∈(.01,.999)𝛼.01.999\\alpha\\in(.01,.999)). We use γ=2.0𝛾2.0\\gamma=2.0 with α=.25𝛼.25\\alpha=.25 for all experiments but α=.5𝛼.5\\alpha=.5 works nearly as well (.4 AP lower). ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_42",
"text": " To understand the focal loss better, we analyze the empirical distribution of the loss of a converged model. For this, we take take our default ResNet-101 600-pixel model trained with γ=2𝛾2\\gamma=2 (which has 36.0 AP). We apply this model to a large number of random images and sample the predicted probability for ∼similar-to\\scriptstyle\\sim107superscript10710^{7} negative windows and ∼similar-to\\scriptstyle\\sim105superscript10510^{5} positive windows. Next, separately for positives and negatives, we compute FL for these samples, and normalize the loss such that it sums to one. Given the normalized loss, we can sort the loss from lowest to highest and plot its cumulative distribution function (CDF) for both positive and negative samples and for different settings for γ𝛾\\gamma (even though model was trained with γ=2𝛾2\\gamma=2). ",
"title": "Focal Loss for Dense Object Detection"
},
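The cumulative-distribution analysis can be reproduced in a few lines of NumPy; `losses` stands for the per-sample focal-loss values of either the positive or the negative windows, which we assume have already been computed.

```python
import numpy as np

def loss_cdf(losses):
    """Sort per-sample losses from lowest to highest, normalize to sum to one, return the CDF."""
    losses = np.sort(np.asarray(losses, dtype=np.float64))
    losses /= losses.sum()
    return np.cumsum(losses)

# e.g. fraction of the total negative loss carried by the easiest 80% of negative windows:
# cdf = loss_cdf(neg_losses); print(cdf[int(0.8 * len(cdf)) - 1])
```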
{
"id": "1708.02002_all_43",
"text": " Cumulative distribution functions for positive and negative samples are shown in Figure 4. If we observe the positive samples, we see that the CDF looks fairly similar for different values of γ𝛾\\gamma. For example, approximately 20% of the hardest positive samples account for roughly half of the positive loss, as γ𝛾\\gamma increases more of the loss gets concentrated in the top 20% of examples, but the effect is minor. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_44",
"text": " The effect of γ𝛾\\gamma on negative samples is dramatically different. For γ=0𝛾0\\gamma=0, the positive and negative CDFs are quite similar. However, as γ𝛾\\gamma increases, substantially more weight becomes concentrated on the hard negative examples. In fact, with γ=2𝛾2\\gamma=2 (our default setting), the vast majority of the loss comes from a small fraction of samples. As can be seen, FL can effectively discount the effect of easy negatives, focusing all attention on the hard negative examples. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_45",
"text": " proposed to improve training of two-stage detectors by constructing minibatches using high-loss examples. Specifically, in OHEM each example is scored by its loss, non-maximum suppression (nms) is then applied, and a minibatch is constructed with the highest-loss examples. The nms threshold and batch size are tunable parameters. Like the focal loss, OHEM puts more emphasis on misclassified examples, but unlike FL, OHEM completely discards easy examples. We also implement a variant of OHEM used in SSD : after applying nms to all examples, the minibatch is constructed to enforce a 1:3 ratio between positives and negatives to help ensure each minibatch has enough positives. ",
"title": "Focal Loss for Dense Object Detection"
},
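A rough sketch of the 'OHEM 1:3' minibatch construction, assuming the per-example losses and positive/negative flags are given; the NMS step applied to candidates in the original procedure is omitted here for brevity.

```python
import torch

def ohem_1to3(losses, is_positive, batch_size=128):
    """Keep the highest-loss examples while enforcing a 1:3 positive:negative ratio."""
    pos_idx = torch.nonzero(is_positive, as_tuple=True)[0]
    neg_idx = torch.nonzero(~is_positive, as_tuple=True)[0]
    num_pos = min(len(pos_idx), batch_size // 4)          # one part positives ...
    num_neg = min(len(neg_idx), batch_size - num_pos)     # ... three parts negatives
    hard_pos = pos_idx[losses[pos_idx].topk(num_pos).indices]
    hard_neg = neg_idx[losses[neg_idx].topk(num_neg).indices]
    return torch.cat([hard_pos, hard_neg])                # indices contributing to the loss
```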
{
"id": "1708.02002_all_46",
"text": " We test both OHEM variants in our setting of one-stage detection which has large class imbalance. Results for the original OHEM strategy and the ‘OHEM 1:3’ strategy for selected batch sizes and nms thresholds are shown in Table 1d. These results use ResNet-101, our baseline trained with FL achieves 36.0 AP for this setting. In contrast, the best setting for OHEM (no 1:3 ratio, batch size 128, nms of .5) achieves 32.8 AP. This is a gap of 3.2 AP, showing FL is more effective than OHEM for training dense detectors. We note that we tried other parameter setting and variants for OHEM but did not achieve better results. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_47",
"text": " Finally, in early experiments, we attempted to train with the hinge loss on ptsubscript𝑝tp_{\\textrm{t}}, which sets loss to 0 above a certain value of ptsubscript𝑝tp_{\\textrm{t}}. However, this was unstable and we did not manage to obtain meaningful results. Results exploring alternate loss functions are in the appendix. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_48",
"text": " One of the most important design factors in a one-stage detection system is how densely it covers the space of possible image boxes. Two-stage detectors can classify boxes at any position, scale, and aspect ratio using a region pooling operation . In contrast, as one-stage detectors use a fixed sampling grid, a popular approach for achieving high coverage of boxes in these approaches is to use multiple ‘anchors’ at each spatial position to cover boxes of various scales and aspect ratios. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_49",
"text": " We sweep over the number of scale and aspect ratio anchors used at each spatial position and each pyramid level in FPN. We consider cases from a single square anchor at each location to 12 anchors per location spanning 4 sub-octave scales (2k/4superscript2𝑘42^{k/4}, for k≤3𝑘3k\\leq 3) and 3 aspect ratios (0.5, 1, 2). Results using ResNet-50 are shown in Table 1c. A surprisingly good AP (30.3) is achieved using just one square anchor. However, the AP can be improved by nearly 4 points (to 34.0) when using 3 scales and 3 aspect ratios per location. We used this setting for all other experiments in this work. ",
"title": "Focal Loss for Dense Object Detection"
},
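The 3-scale × 3-aspect-ratio configuration (A = 9) can be generated as in the sketch below; the base anchor size and the sub-octave scale offsets {2^0, 2^(1/3), 2^(2/3)} are assumptions for illustration rather than values fixed by the excerpt.

```python
import itertools
import math

def anchor_shapes(base_size=32.0, scales=(2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3)),
                  ratios=(0.5, 1.0, 2.0)):
    """Return (width, height) pairs for the A = len(scales) * len(ratios) anchors at one location."""
    shapes = []
    for scale, ratio in itertools.product(scales, ratios):
        area = (base_size * scale) ** 2
        w = math.sqrt(area / ratio)   # ratio is interpreted as h / w
        shapes.append((w, w * ratio))
    return shapes

print(len(anchor_shapes()))  # -> 9 anchors per spatial location
```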
{
"id": "1708.02002_all_50",
"text": " Finally, we note that increasing beyond 6-9 anchors did not shown further gains. Thus while two-stage systems can classify arbitrary boxes in an image, the saturation of performance w.r.t. density implies the higher potential density of two-stage systems may not offer an advantage. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_51",
"text": " Larger backbone networks yield higher accuracy, but also slower inference speeds. Likewise for input image scale (defined by the shorter image side). We show the impact of these two factors in Table 1e. In Figure 2 we plot the speed/accuracy trade-off curve for RetinaNet and compare it to recent methods using public numbers on COCO test-dev. The plot reveals that RetinaNet, enabled by our focal loss, forms an upper envelope over all existing methods, discounting the low-accuracy regime. RetinaNet with ResNet-101-FPN and a 600 pixel image scale (which we denote by RetinaNet-101-600 for simplicity) matches the accuracy of the recently published ResNet-101-FPN Faster R-CNN , while running in 122 ms per image compared to 172 ms (both measured on an Nvidia M40 GPU). Using larger scales allows RetinaNet to surpass the accuracy of all two-stage approaches, while still being faster. For faster runtimes, there is only one operating point (500 pixel input) at which using ResNet-50-FPN improves over ResNet-101-FPN. Addressing the high frame rate regime will likely require special network design, as in , and is beyond the scope of this work. We note that after publication, faster and more accurate results can now be obtained by a variant of Faster R-CNN from . ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_52",
"text": " We evaluate RetinaNet on the challenging COCO dataset and compare test-dev results to recent state-of-the-art methods including both one-stage and two-stage models. Results are presented in Table 2 for our RetinaNet-101-800 model trained using scale jitter and for 1.5×\\times longer than the models in Table 1e (giving a 1.3 AP gain). Compared to existing one-stage methods, our approach achieves a healthy 5.9 point AP gap (39.1 vs. 33.2) with the closest competitor, DSSD , while also being faster, see Figure 2. Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM . Plugging in ResNeXt-32x8d-101-FPN as the RetinaNet backbone further improves results another 1.7 AP, surpassing 40 AP on COCO. ",
"title": "Focal Loss for Dense Object Detection"
},
{
"id": "1708.02002_all_53",
"text": " In this work, we identify class imbalance as the primary obstacle preventing one-stage object detectors from surpassing top-performing, two-stage methods. To address this, we propose the focal loss which applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. Our approach is simple and highly effective. We demonstrate its efficacy by designing a fully convolutional one-stage detector and report extensive experimental analysis showing that it achieves state-of-the-art accuracy and speed. Source code is available at https://github.com/facebookresearch/Detectron . ",
"title": "Focal Loss for Dense Object Detection"
}
] |
What is the advantage of stacking encoders and decoders for semantic segmentation?
|
A stacked encoder-decoder architecture produces smooth segment labels [0].
|
[
0
] |
[
{
"id": "1505.07293_all_0",
"text": " Semantic segmentation is an important step towards understanding and inferring different objects and their arrangements observed in a scene. This has wide array of applications ranging from estimating scene geometry, inferring support-relationships among objects to autonomous vehicle driving. Early methods that relied on low-level vision cues have fast been superseded by popular machine learning algorithms. In particular, deep learning has seen huge success lately in handwritten digit recognition, speech, categorising whole images and detecting objects in images (37, 34) also seen growing interest in semantic pixel-wise labelling problems (7, 14, 35). However, these recent approaches have tried to directly adopt deep architectures designed for category prediction to pixel-wise labelling. The results, although very encouraging, have not been quite satisfactory. Primarily, the deepest layer representations/feature maps are of a small resolution as compared to input image dimensions due to several pooling layers e.g. if 2×2222\\times 2 non-overlapping max-pooling-subsampling layers are used three times, the resulting feature map is 1/8th1superscript8𝑡ℎ1/8^{th} of the input dimension. Therefore, an ad hoc technique is used to upsample the deepest layer feature map to match the input image dimensions by replicating features within a block i.e. all pixels within a block (8×8888\\times 8 in our example) have the same features. This often results in predictions that appear blocky222see http://david.grangier.info/scene_parsing/. This is exactly what we improve using our proposed SegNet architecture, wherein the decoders learn to map the deepest layer features to full image dimensions. Learning to decode has two other advantages. First, deeper layers each with pooling-subsampling can be introduced which increases the spatial context for pixel labelling. This results in smooth predictions unlike patch based classifiers (36, 2). Second, ablation studies to understand the effects of features such as in can be performed using the decoder stack. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_1",
"text": " We draw inspiration of our encoder-decoder type architectures from probabilistic auto-encoders used to build generative models and unsupervised learning of feature hierarchies . Our main contribution is to learn an encoder-decoder stack trained in a modular and fully supervised manner for pixel-wise labelling. The addition of each deeper encoder-decoder pair results in an increased spatial context i.e., a 444 layer SegNet with 7×7777\\times 7 kernels and 2×2222\\times 2 non-overlapping max pooling in each layer has a spatial context of 106×106106106106\\times 106 pixels when a feature-map is backtracked to the input image. The SegNet predictions get smoother as more layers are added and demonstrate high accuracy, comparable to or even exceeding methods which use CRFs . SegNet maintains a constant number of features per layer which is typically set to 646464. This has a practical advantage that the computational cost successively decreases for each additional/deeper encoder-decoder pair. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_2",
"text": " In Sec. 2 we review related recent literature. We describe in detail the SegNet architecture in Sec. 3 along with its qualitative analysis. Our quantitative experiments with SegNet on several well known benchmark datasets are described in Sec. 4. We also discuss the advantages and drawbacks of our approach including computational times. We conclude with pointers to future work in Sec. 5. For most of our experiments, we use outdoor RGB road scene analysis (1, 9) and indoor RGBD scene analysis datasets to measure the quantitative performance. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_3",
"text": " Semantic pixel-wise segmentation is an ongoing topic of research, fuelled by challenging datasets (1, 33, 9). Current best performing methods all mostly rely on hand engineered features generally used for per-pixel independent classification. Typically, a patch is fed into a classifier e.g. Random Forest (32, 2) or Boosting (36, 20) to predict the class probabilities of the center pixel. Features based on appearance , SfM and appearance (2, 36, 20) have been explored for the CamVid test. These per-pixel noisy predictions (often called unary terms) from the classifiers are then smoothed by using a pair-wise or higher order CRF (36, 20) to improve the accuracy. More recent approaches have aimed to produce high quality unaries by trying to predict the labels for all the pixels in a patch as opposed to only the center pixel. This improves the results of Random Forest based unaries but thin structured classes are classfied poorly. Dense depth maps computed from the CamVid video have also been used as input for classification using Random Forests . Another approach argues for the use of a combination of popular hand designed features and spatio temporal super-pixelization to obtain higher accuracy . Recent top performing technique on the CamVid test addresses the imbalance among label frequencies by using additional training data from the PASCAL VOC dataset to learn object detectors. The result of all these techniques indicates the need for improved classification as increases in accuracy have mostly come from adding new features or modalities to the classifier. Post-processing using CRF models of various orders has mainly resulted in improving the accuracy of dominant classes such as sky, road, buildings with little effect on the accuracy of thin structured but equally important classes such as signs, poles, pedestrians. This highlights the need for better pixel-wise classification when imbalanced label frequencies exist. Meanwhile, indoor RGBD pixel-wise semantic segmentation has also gained popularity since the release of the NYU dataset which showed the usefulness of the depth channel to improve segmentation. Their approach used features such as RGB-SIFT, depth-SIFT, location as input to a neural network classifier to predict pixel unaries. The noisy unaries are then smoothed using a CRF. Improvements were made using a richer feature set including LBP and region segmentation to obtain higher accuracy followed by a CRF. In more recent work , both class segmentation and support relationships are inferred together using a combination of RGB and depth based cues. Another approach focusses on real-time joint reconstruction and semantic segmentation, where Random Forests are used as the classifier . Gupta et al. use boundary detection and hierarchical grouping before performing category segmentation. The common attribute along all these approaches is the use of hand engineered features for pixel-wise classifiction of either RGB or RGBD images. The application of deep learning for scene segmentation has only just begun. There have also been a few attempts to apply networks designed for categorization to segmentation, particularly by replicating the deepest layer features in blocks to match image dimensions (7, 6, 11, 8). However, the resulting classification is blocky . Another approach using recurrent neural networks merges several low resolution predictions to create input image resolution predictions. 
On the whole, although some of these techniques already present improvements over hand engineered features . ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_4",
"text": " Our work is inspired by the unsupervised feature learning architecture proposed by Ranzato et. al . The key learning module is an encoder-decoder network where the encoder consists of a filter bank convolution, tanh squashing function, max pooling followed by sub-sampling to obtain the feature maps. For each sample, the indices of the max locations computed during pooling are stored and passed to the decoder. The decoder upsamples the feature maps by using the already stored pooled indices, also called switches, and learns a decoder filter bank to reconstruct the input image. This architecture was used for unsupervised pre-training of feature hierarchies. A similar decoding technique is used for visualizing trained convolutional networks for object classification; the transposed encoder kernels are set as the decoder kernels which are followed by a non-linearity and the pooling indices are used for upsampling. The architecture of Ranzato mainly concentrated on layer wise feature learning using small input patches although during test time a full sized image was the input. This discrepancy was corrected for by Kavukcuoglu et. al. by using test size images/feature maps to learn hierarchical encoders. Both these approaches however did not attempt to use deep encoder-decoder networks for unsupervised feature training as they discarded the decoders after each encoder training. Here, the SegNet architecture differs from these approaches as the objective used for training all the encoder-decoder pairs is the same, i.e., to minimise the cross-entropy label loss. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_5",
"text": " Other applications where pixel wise predictions are made using deep networks are image super-resolution and depth map prediction from a single image . The authors in discuss the need for learning to upsample from low resolution feature maps which is the central topic of this paper. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_6",
"text": " A four layer SegNet architecture used in our experiments is illustrated in Fig. 1. Each encoder performs dense convolutions, ReLU non-linearity, a non-overlapping max pooling with a 2×2222\\times 2 window and finally down-sampling. Each decoder upsamples its input using the memorized pooled indices and convolves it with a trainable filter bank. No ReLU non-linearity is used in the decoder unlike the deconvolution network (41, 42). This makes it easier to optimize the filters in each pair. The encoder and decoder filters are also untied to provide additional degrees of freedom to minimize the objective. The final layer is a soft-max classifier (with no bias term) which classifies each pixel independently. The output of the soft-max is a K channel image where K is the number of classes. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
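One encoder-decoder pair with memorized pooling indices maps directly onto PyTorch's max_pool2d/max_unpool2d; this is only a sketch of a single pair (7×7 convs, 64 features, ReLU in the encoder only), not the full four-layer SegNet with its soft-max classifier.

```python
import torch.nn as nn
import torch.nn.functional as F

class EncoderDecoderPair(nn.Module):
    def __init__(self, in_channels=3, features=64, kernel=7):
        super().__init__()
        self.enc = nn.Conv2d(in_channels, features, kernel, padding=kernel // 2)
        self.dec = nn.Conv2d(features, features, kernel, padding=kernel // 2)  # untied decoder filters

    def forward(self, x):
        f = F.relu(self.enc(x))                                          # encoder: conv + ReLU
        pooled, idx = F.max_pool2d(f, 2, stride=2, return_indices=True)  # memorize pooled indices
        up = F.max_unpool2d(pooled, idx, 2, stride=2)                    # upsample with the indices
        return self.dec(up)                                              # decoder: conv, no ReLU
```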
{
"id": "1505.07293_all_7",
"text": " SegNet uses a “flat” architecture, i.e, the number of features in each layer remains the same (646464 in our case) but with full connectivity. This choice is motivated by two reasons. First, it avoids parameter explosion, unlike an expanding deep encoder network with full feature connectivity (same for decoder). Second, the training time remains the same (in our experiments it slightly decreases) for each additional/deeper encoder-decoder pair as the feature map resolution is smaller which makes convolutions faster. Note that the decoder corresponding to the first encoder (closest to the input image) produces a multi-channel feature map although the encoder input is either 3 or 4 channels (RGB or RGBD) (see Fig. 1). This high dimensional feature representation is fed to the soft-max classifier. This is unlike the other decoders which produce feature maps the same size as their encoder inputs. A fixed pooling window of 2×2222\\times 2 with a stride of non-overlapping 222 pixels is used. This small size preserves thin structures in the scene. Further, a constant kernel size of 7×7777\\times 7 over all the layers was chosen to provide a wide context for smooth labelling i.e. a pixel in the deepest layer feature map can be traced back to a context window in the input image of 106×106106106106\\times 106 pixels. The trade-off here is between the size of the context window and retaining thin structures. Smaller kernels decrease context and larger ones potentially destroy thin structures. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_8",
"text": " The input to the SegNet can be any arbitrary multi-channel image or feature map(s), e.g., RGB, RGBD, map of normals, depth etc. We perform local contrast normalization (LCN) as a pre-processing step to the input (23, 15). The advantage of this step are many, (i) to correct for non-uniform scene illumination thus reducing the dynamic range (increases contrast in shadowed parts). (ii) highlighting edges which leads the network to learn category shape, (iii) improves convergence as it decorrelates the input dimensions . LCN is performed independently for each modality, i.e., RGB is contrast normalized as a three channel input and depth as a single channel for RGBD inputs. This avoids highlighting pseudo depth edges due to RGB edges and vice-versa. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
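The excerpt does not spell out the exact LCN formulation, so the sketch below is one common variant (subtract a local mean, divide by a local standard deviation over a small window, applied per modality); the window size and epsilon are assumptions.

```python
import torch
import torch.nn.functional as F

def local_contrast_normalize(x, window=9, eps=1e-5):
    """x: (N, C, H, W) for one modality (e.g. RGB as 3 channels, depth as 1 channel)."""
    c = x.shape[1]
    pad = window // 2
    kernel = torch.ones(1, c, window, window, device=x.device) / (c * window * window)
    local_mean = F.conv2d(x, kernel, padding=pad)           # (N, 1, H, W) mean over the window
    centered = x - local_mean
    local_var = F.conv2d(centered ** 2, kernel, padding=pad)
    return centered / torch.sqrt(local_var + eps)           # divide by the local standard deviation
```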
{
"id": "1505.07293_all_9",
"text": " Most deep learning methods use stochastic gradient descent (SGD) for training . SGD needs sufficient expertise to initialize weights with appropriate magnitudes, adapting appropriately learning rates and momentum parameters which both control the step sizes. Therefore, we adopt L-BFGS based on the comparative study by Ngiam et. al who advocate the use of L-BFGS particularly for auto-encoders. L-BFGS has faster and more stable convergence than SGD. It also works well in large batches which is useful to maximize the throughput of powerful GPUs. We initialize the weights in all the layers and the soft-max weights from a zero mean unit variance Gaussian 𝒩(0,1)𝒩01\\mathcal{N}(0,1) and normalized the kernels to unit L2 norm. We obtained good predictive performance from the network without the need for special layer-wise weight initialization or any learning rate tuning. We also use inverse frequency weighting for the classes to correct for any label imbalances in the training set . ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
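Inverse frequency class weighting can be computed from the training labels as in this short sketch; `labels` is assumed to be an integer array of ground-truth class indices, and handling of ignore/void labels is omitted.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Weight each class by 1 / frequency, normalized so the weights average to one."""
    counts = np.bincount(np.asarray(labels).ravel(), minlength=num_classes).astype(np.float64)
    freq = counts / counts.sum()
    weights = 1.0 / np.maximum(freq, 1e-12)   # guard against classes absent from the training set
    return weights / weights.mean()
```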
{
"id": "1505.07293_all_10",
"text": " We use mini-batches that maximize GPU usage and avoid GPU-CPU memory transfers. Typically, 25−50255025-50 randomly chosen images (with replacement) per mini-batch. The optimizer is run for 202020 iterations per mini-batch and 101010 epochs for each layer. We empirically observe that the objective plateaus after 5−6565-6 epochs and so we run another 444 epochs as a margin. Note that, after 101010 epochs, each input sample approximately “influences” the optimizer 200200200 times. We train the encoder-decoder pair weights closest to the input layer. The soft-max layer can be trained first or randomly initialised. It then remains fixed throughout the experiment. Next, we introduce a deeper layer of encoder-decoder (see Fig. 2) and train their weights while holding the shallower layer encoder-decoder weights fixed. Note that the objective remains the same, i.e., to minimize label cross-entropy loss over the mini-batch. This is unlike unsupervised feature learning approaches which reconstruct the input of the layer in question (27, 16), thus varying the objective with each layer. The deconvolution network on the other hand optimizes the same reconstruction objective with each deeper layer. The difference to our approach is (i) the objective is unsupervised, (ii) there is no encoder to learn a feed-forward representation thus requiring an optimisation step during test time to produce features for recognition. We successively add deeper encoder-decoder pairs and train them while holding the preceeding pair’s weights fixed. In total, we use 4 layer networks, i.e., 4 encoders and 4 decoders in our experiments. Once the encoder-decoder stack is trained, we find that there is no advantage to training the soft-max layer as it only relies on a linear discriminant function. We wrote our own Matlab GPU compatible implementation of SegNet that uses the minFunc optimization library . Our code has been tested on NVIDIA Tesla K40, GTX GeForce 880M and GTXGeForce780 GPUs. We will make our light-weight Matlab code available publicly soon. With the current state of code optimisation, training a 4 layer deep SegNet on the CamVid dataset (367 training images of 360×480360480360\\times 480) takes about a week. The unoptimized test time is in the order of 222secs/frame: bulk of the computation time is spent performing tensor convolutions in the feedforward path and FFT based convolutions during backpropagation 333more speedup can be gained https://developer.nvidia.com/cuDNN. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_11",
"text": " We perform an ablation study to gain some insight into about the SegNet features. The work of Zeiler et al. study the effects of feature activations in each layer of a trained network . The feature activations are mapped back to image pixel space using a deconvolutional network. The SegNet architecture by construction is trained to decode the encoder activations and we use this to visualize the effect of feature activations (which layer) in the pixel label space. A recent study has shown that in each layer of a deep network it is the “direction” or “space” (ensemble of feature activations) which encodes useful class information rather than individual units (feature activations). We therefore focus our study on the predictive effect of a subset of feature activations at each layer. For a given layer, we compute the feature activations/maps for each sample in the training set. We then compute the root mean square value of each map i.e. ∀j∈{1..64}for-all𝑗1..64\\forall j\\in\\{1..64\\} 1N∑i∈ℐ(fji)21𝑁subscript𝑖ℐsuperscriptsuperscriptsubscript𝑓𝑗𝑖2\\sqrt{\\frac{1}{N}\\sum_{i\\in\\mathcal{I}}(f_{j}^{i})^{2}} where fjisuperscriptsubscript𝑓𝑗𝑖f_{j}^{i} is jthsuperscript𝑗𝑡ℎj^{th} feature map value at pixel i𝑖i at a given layer. This assigns each map a single value e.g., the CamVid training set would have a 646464 dimensional vector for each training sample for layer 4 of the SegNet. We now compute a histogram of the top ‘N’ elements of each such vector over all the samples. This histogram shows the most activated features in that layer over the training set. For any ‘N’, we set the remainder of feature maps to zero (ablation) and decode the pixel-wise labelling for a given input sample. Note that since our training is modular, this can be done after each deeper layer has been added. Some results of the top ’N’ feature activations based labelling across all the layers are shown in Fig. 3. We observe firstly that the predictions get smoother as depth is increased which is a consequence of larger spatial context in the input space. More interestingly, the top-1 4th layer features predict almost entirely the static scene classes and “fill in” the missing cars e.g. with sidewalk. Given the feature(s) which get activated for cars are zeroed out, this prediction is reasonable and indicates the network is able to learn spatial context/class location information. Similarly, trees are filled in with buildings and bollards are extended to poles. In contrast, this effect is less clear and gets worse for shallower layers. This suggests subsets of features in the deeper layers are more “tuned” to certain scene categories in agreement with earlier work . We would like to add here that our efforts to perform an ablation study by choosing each feature map in turn and setting the remaining to zero produced results which were not clearly interpretable. It is also interesting to note that for shallower layers to produce qualitatively better predictions ’N’ has to be set to about 5 or 10. The corresponding histogram has atleast 50%percent5050\\% of the features activated as opposed to about 15%percent1515\\% for the top-1 in layer 4, indicating deeper features are tuned to groups of related categories. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
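The RMS-based ranking behind the ablation can be written in a few lines; `feats` is assumed to hold the (N, 64, H, W) activations of one layer over (a subset of) the training set, and the histogram identifies the most activated maps before the rest are zeroed out.

```python
import torch

def most_activated_maps(feats, top_n=1):
    """Histogram, over samples, of the per-sample top-N feature maps ranked by RMS value."""
    rms = torch.sqrt((feats ** 2).mean(dim=(2, 3)))   # (N, F): one RMS value per map per sample
    top = rms.topk(top_n, dim=1).indices              # per-sample top-N map indices
    return torch.bincount(top.reshape(-1), minlength=feats.shape[1])

def ablate_all_but(feats, keep_indices):
    """Zero every feature map except `keep_indices` before decoding to pixel-wise labels."""
    mask = torch.zeros(feats.shape[1], device=feats.device)
    mask[keep_indices] = 1.0
    return feats * mask[None, :, None, None]
```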
{
"id": "1505.07293_all_12",
"text": " A number of outdoor scene datasets are available for semantic parsing (10, 30, 1, 9). Out of these, we chose the CamVid and KITTI datasets which contains 11 semantic classes such as road, building, cars, pedestrians etc.. There is a large imbalance in their frequencies . Road, Sky, Building pixels are approximately 40−50405040-50 times more than pedestrian, poles, sign-symbols, cars, bicyclists in the dataset making it very challenging to label smaller categories. This dataset contains video sequences, thus we are able to benchmark our approach with those which use motion and structure (20, 36, 2) and video segments . Other datasets have more balanced label frequencies and are still image datasets. Another reason for choosing CamVid as compared to SIFT-flow, LabelMe is that the size of the training set is small (367367367) making it feasible to train the SegNet given a standard GPU in reasonable time. The CamVid dataset also contains train and test images (233233233) in day and dusk (poor lighting) conditions. The qualitative comparisons of SegNet predictions with several well known algorithms (unaries, unaries+CRF) are shown in Fig. 4. The qualitative results show the ability of the SegNet to segment small (cars, pedestrians, bicyclist) classes while producing a smooth segmentation of the overall scene. The other methods shown in Fig. 4 use structure from motion based cues. Lacking this cue, the SegNet misses some labels (cars) but fills it in with other reasonable context related classes. The CRF based results are smooth but do not retain small classes. More dense models can be better but with additional cost of inference. Table 1 compares the algorithms numerically and demonstrates its superiority over recent competing methods. The KITTI dataset is the largest publicly available road scene dataset. Recently, some images from this dataset have been hand-labelled (888 classes) for inferring dense 3D semantic maps . Note that the image sizes are approximately, 376×12413761241376\\times 1241, and so we cropped the centre 360×480360480360\\times 480 to make it compatible with the CamVid dataset. We use this dataset to analyse the effect of supervised pre-training using the CamVid data on the KITTI test set. First, we add here that testing on the KITTI samples with only the pre-trained SegNet (using CamVid data) resulted in poor performance. This is because of illumination related differences between the datasets. Therefore, we experimented with three other training variants for the KITTI dataset; (i) training all the layers of the SegNet from a random initialization, denoted SegNet(R), (ii) initializing the parameters with CamVid trained values and training only a soft-max classifier with a hidden layer, denoted SegNet(SM), and (iii) initializing the parameters with CamVid trained values and training only the 4th layer of the SegNet for just 222 epochs, denoted SegNet(L4). High quality predictions are obtained in scenario SegNet(R) as expected (Fig. 5). The good performance with CamVid pre-training and layer 4 training shows that, (i) useful semantic cues can be transferred across datasets using the shallower layers, and (ii) it is beneficial to train the deepest layer of the SegNet first given a small computational budget. Table 3 shows the SegNet(R) is competitive even when temporal cues are not used. 
For indoor RGBD scenes, the NYU dataset (version 2) is the largest benchmark dataset containing 795795795 training and 654654654 testing images with 141414 class (objects, furniture, wall, ceiling etc.) labelling comparison. The NYU dataset has been used to benchmark Farabet et. al’s multi-scale deep learning approach to scene parsing. This benchmark is therefore useful to compare their method, which uses ad hoc feature upsampling, with our learning to upsample based approach. We also note that they learn approximately 1.2M1.2𝑀1.2M parameters as compared to SegNet’s 1.4M1.4𝑀1.4M parameters. Other methods either use the smaller NYU dataset , different performance measures or test on a small set of classes citeraey. The quantitative analysis shown in Table 2 show that the SegNet predictions are better the multi-scale convnet (2 pooling layers only) in 9 out of 13 classes. This suggests the SegNet can deal with scale changes by increasing context using deeper layers. The overall results are still far from satisfactory and the lack of cues such as height from ground, depth normalization (used in ) are needed to achieve better performance. The qualitative results in Fig. 6 show that the predictions are largely correct but lack sharp edges. This is due to low input resolution of 320×240320240320\\times 240, lack of ground truth around class edges,and errors in depth interpolation. Another reason is that over the different datasets we tested on, the parameters of the SegNet remained the same. We plan to study the NYU dataset in more detail in the future. Additional results can be viewed in the supplementary material. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
},
{
"id": "1505.07293_all_13",
"text": " We presented SegNet, a fully trainable deep architecture for joint feature learning and mapping an input image in a feed-forward manner to its pixel-wise semantic labels. A highlight of the proposed architecture is its ability to produce smooth segment labels when compared with local patch based classifiers. This is due to deep layers of feature encoding that employ a large spatial context for pixel-wise labelling. To the best of our knowledge this is the first deep learning method to learn to map low resolution encoder feature maps to semantic labels. Both qualitative and numerical accuracy of the SegNet for outdoor and indoor scenes is very competitive, even without use of any CRF post-processing. We have also demonstrated the use of pre-trained SegNet for obtaining good performance on other datasets with a small extra computational effort. The encoder-decoder architecture of the SegNet can also be trained unsupervised and to handle missing data in the input during test time. ",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation"
}
] |
Why is it adequate to say this problem is strictly convex?
|
We prove that the Hessian is a positive definite matrix [22].
|
[
22
] |
[
{
"id": "2211.02284_all_0",
"text": " There has been a growing interest in using a large-scale dataset to build powerful machine learning models . Self-supervised learning (SSL), which aims to learn a useful representation without labels, is suitable for this trend; it is actively studied in the fields of natural language processing (19, 20) and computer vision (10, 30). In the vision domain, recent SSL methods commonly use data augmentations and induce their visual representation to be augmentation-invariant. They have achieved state-of-the-art performance surpassing supervised representation in a variety of visual tasks, including semi-supervised learning (8, 53), transfer learning , and object detection . ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_1",
"text": " Meanwhile, a line of work uses clustering for un-/self-supervised representation learning. They explicitly assign pseudo-labels to embedded representation via clustering, and the model is thereby trained to predict such labels. These clustering-based methods can account for inter-data similarity; representations are encouraged to encode the semantic structure of data. Prior works (51, 49, 4, 32) have shown encouraging results in small-scaled settings; Caron et al. show that it can also be applied to the large-scaled dataset or even to a non-curated dataset . Recently, several works (2, 8, 39) have adopted the philosophy of augmentation invariance and achieved strong empirical results. They typically assign pseudo-labels using augmented views while predicting the labels by looking at other differently augmented views. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_2",
"text": " Despite its conceptual simplicity, a naive application of clustering to representation learning is hard to achieve, especially when training with large-scale datasets. This is because clustering-based methods are prone to collapse, i.e., all samples are assigned to a single cluster; hence, recent methods heavily rely on extra training techniques or artificial constraints, such as pre-training , sampling strategy , equipartition constraints (2, 8), to avoid collapsing. However, it is unclear if these additions are appropriate or how such components will affect the representation quality. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_3",
"text": " In this paper, we propose Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm that enables clustering-based SSL without any artificial constraints or extra training techniques. MIRA is designed to follow the infomax principle and the intuition that good labels are something that can reduce most of the uncertainty about the data. Our method assigns a pseudo-label in a principled way by constructing an optimization problem. For a given training model that predicts pseudo-labels, the optimization problem finds a solution that maximizes the mutual information (MI) between the pseudo-labels and data while considering the model probability. We formulate the problem as a convex optimization problem and derive the necessary and sufficient condition of solution with the Karush-Kuhn-Tucker (KKT) condition. This solution can be achieved by fixed-point iteration that we prove the convergence. We remark that MIRA does not require any form of extra training techniques or artificial constraints, e.g., equipartition constraints. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_4",
"text": " We apply MIRA to clustering-based representation learning and verify the representation quality on several standard self-supervised learning benchmarks. We demonstrate its state-of-the-art performance on linear/k-NN evaluation, semi-supervised learning, and transfer learning benchmark. We further experiment with convergence speed, scalability, and different components of our method. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_5",
"text": " Our contributions are summarized as follows: • We propose MIRA, a simple and principled pseudo-label assignment algorithm based on mutual information. Our method does not require extra training techniques or artificial constraints. • We apply MIRA to clustering-based representation learning, showing comparable performance against the state-of-the-art methods with half of the training epochs. Specifically, MIRA achieves 75.6% top-1 accuracy on ImageNet linear evaluation with only 400 epochs of training and the best performance in 9 out of 11 datasets in transfer learning. • Representation by MIRA also consistently improves over other information-based SSL methods. Especially our method without multi-crop augmentation achieves 74.1% top-1 accuracy and outperforms BarlowTwins , a baseline information maximization-based self-supervised method. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_6",
"text": " SSL methods are designed to learn the representation by solving pretext tasks, and recent state-of-the-art methods encourage their learned representations to be augmentation invariant. They are based on various pretext tasks: instance discrimination (10, 11, 13, 14), metric learning (28, 12), self-training (54, 9), and clustering (2, 6, 8); only a few account for encoding the semantic structure of data. While some works (47, 21, 35) consider the nearest neighbors in the latent space, our method belongs to the clustering-based SSL method that flexibly accounts for inter-data similarity. Meanwhile, many SSL methods are prone to collapsing into a trivial solution where every representation is mapped into a constant vector. Various schemes and mechanisms are suggested to address this, e.g., the asymmetric structure, redundancy reduction, etc. We will review more relevant works in detail below. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_7",
"text": " Many SSL approaches rely on extra training techniques and artificial assumptions to prevent collapsing. In clustering-based methods, DeepCluster adapts a sampling strategy to sample elements uniformly across pseudo-labels to deal with empty clusters; SeLa and SwAV impose equipartition constraints to balance the cluster distribution. Similarly, SelfClassifier uses a uniform pseudo-label prior, and PCL employs concentration scaling. DINO and ReSSL address collapsing by specific combinations of implementation details, i.e., centering and scaling with an exponential moving average network; their mechanism for preventing collapse is unclear. In this work, we show our method can naturally avoid collapsing without any of these assumptions or training techniques. We achieve results better than baselines with a simple but novel information regularization algorithm. We take a more detailed comparison with SeLa and SwAV after explaining our method in Sec. 3.3. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_8",
"text": " Information maximization is a principal approach to learn representation and to avoid collapse. DeepInfoMax propose the MI maximization between the local and global views for representation learning; the existence of negative pairs prevents training toward the trivial solution. BarlowTwins and W-MSE address the collapsing with redundancy reduction that indirectly maximizes the content information of embedding vectors. Among clustering-based approaches, IIC maximizes the MI between the embedding codes to enable representation learning; similar to ours, TWIST proposes combining the MI between the data and class prediction as a negative loss term with an augmentation invariance consistency loss. Both IIC and TWIST use the MI as a loss function and directly optimize their model parameters with gradient descent of the loss. However, the direct optimization of MI terms by updating model parameters often leads to a sub-optimal solution ; TWIST copes with this issue by appending the normalization layer before softmax and introducing an additional self-labeling stage. In contrast, MIRA addresses the difficulty of MI maximization in a principled way via explicit optimization. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_9",
"text": " In this section, we explain our pseudo-labeling algorithm–MIRA. When applying MIRA to representation learning, we follow the basic framework of clustering-based representation learning that alternates between pseudo-labeling, i.e., cluster assignments, and model training to predict such labels. Figure 1 illustrates our representation training cycle. We will first explain our main contribution, MIRA (pseudo-labeling) and then explain how it applies to model training. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_10",
"text": " Our idea is to employ the information maximization principle into pseudo-labeling. We formulate an optimization problem for online clustering that assigns soft pseudo-labels to mini-batch samples (Sec. 3.1). The problem minimizes the KL divergence between the model prediction (probability) and pseudo-label while maximizing the mutual information between the data and pseudo-label. We propose an iterative fixed point method to solve the optimization problem (Sec. 3.2). For the model training, we use the swapped prediction loss (Sec. 3.3). ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_11",
"text": " We have a classification model111In our setting, the model consists of an encoder, projection head, and classification (prototype) head as in Caron et al. (8, 9); the encoder output will be used as a representation. fθsubscript𝑓𝜃f_{\\theta} parametrized by θ𝜃\\theta that outputs K𝐾K-dimensional logit fθ(𝒙)∈ℝKsubscript𝑓𝜃𝒙superscriptℝ𝐾f_{\\theta}(\\bm{x})\\in\\mathbb{R}^{K} for an image 𝒙𝒙\\bm{x}, where K𝐾K is a predefined number of clusters. The model probability 𝒑𝒑\\bm{p} of an image 𝒙𝒙\\bm{x} is then given by the temperature τtsubscript𝜏𝑡\\tau_{t} scaled output of the model—𝒑≔softmax(fθ(𝒙)/τt)≔𝒑softmaxsubscript𝑓𝜃𝒙subscript𝜏𝑡\\bm{p}\\coloneqq\\text{softmax}(f_{\\theta}(\\bm{x})/\\tau_{t})—as in Caron et al. (8, 9). For a mini-batch of input images 𝑿={𝒙i}i=1B𝑿superscriptsubscriptsubscript𝒙𝑖𝑖1𝐵\\bm{X}=\\{\\bm{x}_{i}\\}_{i=1}^{B}, we denote the set of model probabilities 𝑷={𝒑i}i=1B⊂ℝK𝑷superscriptsubscriptsubscript𝒑𝑖𝑖1𝐵superscriptℝ𝐾\\bm{P}=\\{\\bm{p}_{i}\\}_{i=1}^{B}\\subset\\mathbb{R}^{K}. In our pseudo-labeling, for the given model probabilities 𝑷𝑷\\bm{P}, we want to assign pseudo-labels 𝑾∗={𝒘∗i}i=1Bsuperscript𝑾superscriptsubscriptsubscriptsuperscript𝒘𝑖𝑖1𝐵\\bm{W^{*}}=\\{\\bm{w^{*}}_{i}\\}_{i=1}^{B} that will be used for training the model by predicting them. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_12",
"text": " We argue that such pseudo-labels should maximize the mutual information (MI) between themselves and data while accounting for the model probabilities 𝑷𝑷\\bm{P}. Let ℬ∈{1,…,B}ℬ1…𝐵\\mathcal{B}\\in\\{1,...,B\\} and 𝒴𝑾∈{1,…,K}subscript𝒴𝑾1…𝐾\\mathcal{Y}_{\\bm{W}}\\in\\{1,...,K\\} be the random variables associated with the data index in mini-batch and labels by probability distributions 𝑾={𝒘i}i=1B𝑾superscriptsubscriptsubscript𝒘𝑖𝑖1𝐵\\bm{W}=\\{\\bm{w}_{i}\\}_{i=1}^{B}, respectively. Our online pseudo-label (cluster) assignment is determined by solving the following optimization problem: 𝑾∗superscript𝑾\\displaystyle\\bm{W^{*}} =argmin𝑾⊂ΔK1B∑i=1BDKL(𝒘i,𝒑i)−βI^(𝒴𝑾;ℬ),absentsubscriptargmin𝑾subscriptΔ𝐾1𝐵superscriptsubscript𝑖1𝐵subscript𝐷KLsubscript𝒘𝑖subscript𝒑𝑖𝛽^𝐼subscript𝒴𝑾ℬ\\displaystyle=\\operatorname*{arg\\,min}_{\\bm{W}\\subset\\Delta_{K}}\\frac{1}{B}\\sum_{i=1}^{B}D_{\\text{KL}}(\\bm{w}_{i},\\bm{p}_{i})-\\beta\\hat{I}(\\mathcal{Y}_{\\bm{W}};\\mathcal{B}), (1) where ΔK≔{𝒘∈ℝ+K∣𝒘⊺𝟏K=1}≔subscriptΔ𝐾conditional-set𝒘subscriptsuperscriptℝ𝐾superscript𝒘⊺subscript1𝐾1\\Delta_{K}\\coloneqq\\{\\bm{w}\\in\\mathbb{R}^{K}_{+}\\mid\\bm{w}^{\\intercal}\\bm{1}_{K}=1\\}, I^^𝐼\\hat{I} indicates an empirical (Monte Carlo) estimates of MI, and β𝛽\\beta is a trade-off parameter. The problem consists of the (1) KL divergence term that makes pseudo-labels to be based on the model probability 𝒑𝒑\\bm{p} and (2) MI term between the pseudo-labels and data to induce more information about data into the pseudo-labels. By combining these two terms, we provide a refined pseudo-label that take account of both the model probability and MI. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_13",
"text": " To make the optimization problem tractable, we substitute the MI term I^^𝐼\\hat{I} with the mini-batch estimates of the entropy H^(𝒴𝑾|ℬ)^𝐻conditionalsubscript𝒴𝑾ℬ\\hat{H}(\\mathcal{Y}_{\\bm{W}}|\\mathcal{B}) and marginal entropy H^(𝒴𝑾)^𝐻subscript𝒴𝑾\\hat{H}(\\mathcal{Y}_{\\bm{W}}) in Eq. 2. We get: I^(𝒴𝑾;ℬ)=H^(𝒴𝑾)−H^(𝒴𝑾|ℬ)=−∑j=1Kw¯jlogw¯j+1B∑i=1B∑j=1Kwijlogwij,^𝐼subscript𝒴𝑾ℬ^𝐻subscript𝒴𝑾^𝐻conditionalsubscript𝒴𝑾ℬsuperscriptsubscript𝑗1𝐾subscript¯𝑤𝑗subscript¯𝑤𝑗1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscript𝑤𝑖𝑗subscript𝑤𝑖𝑗\\displaystyle\\hat{I}(\\mathcal{Y}_{\\bm{W}};\\mathcal{B})=\\hat{H}(\\mathcal{Y}_{\\bm{W}})-\\hat{H}(\\mathcal{Y}_{\\bm{W}}|\\mathcal{B})=-\\sum_{j=1}^{K}\\bar{w}_{j}\\log{\\bar{w}_{j}}+\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}w_{ij}\\log{w_{ij}}, (2) 1B∑i=1BDKL(𝒘i,𝒑i)=−1B∑i=1B∑j=1Kwijlogpij+1B∑i=1B∑j=1Kwijlogwij,1𝐵superscriptsubscript𝑖1𝐵subscript𝐷KLsubscript𝒘𝑖subscript𝒑𝑖1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscript𝑤𝑖𝑗subscript𝑝𝑖𝑗1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscript𝑤𝑖𝑗subscript𝑤𝑖𝑗\\displaystyle\\frac{1}{B}\\sum_{i=1}^{B}D_{\\text{KL}}(\\bm{w}_{i},\\bm{p}_{i})=-\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}w_{ij}\\log{p_{ij}}+\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}w_{ij}\\log{w_{ij}}, (3) 𝑾∗=argmin𝑾⊂ΔK−1B∑i=1B∑j=1Kwijlogpij+1−βB∑i=1B∑j=1Kwijlogwij+β∑j=1Kw¯jlogw¯j,superscript𝑾subscriptargmin𝑾subscriptΔ𝐾1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscript𝑤𝑖𝑗subscript𝑝𝑖𝑗1𝛽𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscript𝑤𝑖𝑗subscript𝑤𝑖𝑗𝛽superscriptsubscript𝑗1𝐾subscript¯𝑤𝑗subscript¯𝑤𝑗\\displaystyle\\bm{W^{*}}=\\operatorname*{arg\\,min}_{\\bm{W}\\subset\\Delta_{K}}-\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}w_{ij}\\log{p_{ij}}+\\frac{1-\\beta}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}w_{ij}\\log{w_{ij}}+\\beta\\sum_{j=1}^{K}\\overline{w}_{j}\\log{\\overline{w}_{j}}, (4) where w¯j=1B∑i=1Bwijsubscript¯𝑤𝑗1𝐵superscriptsubscript𝑖1𝐵subscript𝑤𝑖𝑗\\overline{w}_{j}=\\frac{1}{B}\\sum_{i=1}^{B}w_{ij} is the marginal probability of a cluster j𝑗j with 𝑾𝑾\\bm{W}. In practice, we find the optimal point 𝑾∗superscript𝑾\\bm{W}^{*} of the optimization problem Eq. 4 for pseudo-labeling. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
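The mini-batch MI estimate of Eq. 2 and the regularized objective of Eq. 4 are easy to evaluate for a candidate soft assignment W; this sketch (PyTorch) only checks the objective value, it does not solve the problem.

```python
import torch

def mi_estimate(W, eps=1e-12):
    """W: (B, K) soft assignments. Empirical I(Y_W; B) = H(mean_i w_i) - (1/B) sum_i H(w_i)."""
    w_bar = W.mean(dim=0)
    marginal_entropy = -(w_bar * (w_bar + eps).log()).sum()
    conditional_entropy = -(W * (W + eps).log()).sum(dim=1).mean()
    return marginal_entropy - conditional_entropy

def mira_objective(W, P, beta, eps=1e-12):
    """Average KL(w_i || p_i) minus beta times the MI estimate, as in Eq. 1 / Eq. 4."""
    kl = (W * ((W + eps).log() - (P + eps).log())).sum(dim=1).mean()
    return kl - beta * mi_estimate(W, eps)
```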
{
"id": "2211.02284_all_14",
"text": " To solve efficiently, we propose a fixed point iteration that guarantees convergence to the unique optimal solution 𝑾∗superscript𝑾\\bm{W}^{*} of our optimization problem. The method is based on the following proposition. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_15",
"text": " The proposition 1 is driven by applying the Karush-Kuhn-Tucker (KKT) conditions to the optimization problem Eq. 4. By substituting the necessary and sufficient condition (Eq. 5) into the marginal probability w¯j=1B∑i=1Bwijsubscript¯𝑤𝑗1𝐵superscriptsubscript𝑖1𝐵subscript𝑤𝑖𝑗\\overline{w}_{j}=\\frac{1}{B}\\sum_{i=1}^{B}w_{ij}, we get the necessary and sufficient condition for w∗¯¯superscript𝑤\\overline{w^{*}}: w∗¯j=w∗¯j−β1−β1B∑i=1Bpij11−β∑k=1Kw∗¯k−β1−βpik11−β⇔w∗¯j=(1B∑i=1Bpij11−β∑k=1Kw∗¯k−β1−βpik11−β)1−β.⇔subscript¯superscript𝑤𝑗superscriptsubscript¯superscript𝑤𝑗𝛽1𝛽1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑝𝑖𝑗11𝛽superscriptsubscript𝑘1𝐾superscriptsubscript¯superscript𝑤𝑘𝛽1𝛽superscriptsubscript𝑝𝑖𝑘11𝛽subscript¯superscript𝑤𝑗superscriptdelimited-()1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑝𝑖𝑗11𝛽superscriptsubscript𝑘1𝐾superscriptsubscript¯superscript𝑤𝑘𝛽1𝛽superscriptsubscript𝑝𝑖𝑘11𝛽1𝛽\\displaystyle\\overline{w^{*}}_{j}=\\overline{w^{*}}_{j}^{-\\frac{\\beta}{1-\\beta}}\\frac{1}{B}\\sum_{i=1}^{B}\\frac{p_{ij}^{\\frac{1}{1-\\beta}}}{\\sum_{k=1}^{K}\\overline{w^{*}}_{k}^{-\\frac{\\beta}{1-\\beta}}p_{ik}^{\\frac{1}{1-\\beta}}}\\Leftrightarrow\\overline{w^{*}}_{j}=\\Bigg{(}\\frac{1}{B}\\sum_{i=1}^{B}\\frac{p_{ij}^{\\frac{1}{1-\\beta}}}{\\sum_{k=1}^{K}\\overline{w^{*}}_{k}^{-\\frac{\\beta}{1-\\beta}}p_{ik}^{\\frac{1}{1-\\beta}}}\\Bigg{)}^{1-\\beta}. (6) Based on Eq. 6, we propose the update rule for {uj(n)}j=1K⊂ℝ+superscriptsubscriptsuperscriptsubscript𝑢𝑗𝑛𝑗1𝐾subscriptℝ\\{u_{j}^{(n)}\\}_{j=1}^{K}\\subset\\mathbb{R}_{+} using the fixed point iteration as follows: ∀j∈{1,…,K},uj(n+1)=(1B∑i=1Bpij11−β∑k=1K(uk(n))−β1−βpik11−β)1−β,formulae-sequencefor-all𝑗1…𝐾subscriptsuperscript𝑢𝑛1𝑗superscriptdelimited-()1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑝𝑖𝑗11𝛽superscriptsubscript𝑘1𝐾superscriptsubscriptsuperscript𝑢𝑛𝑘𝛽1𝛽superscriptsubscript𝑝𝑖𝑘11𝛽1𝛽\\displaystyle\\forall j\\in\\{1,...,K\\},\\quad u^{(n+1)}_{j}=\\bigg{(}\\frac{1}{B}\\sum_{i=1}^{B}\\frac{p_{ij}^{\\frac{1}{1-\\beta}}}{\\sum_{k=1}^{K}(u^{(n)}_{k})^{-\\frac{\\beta}{1-\\beta}}p_{ik}^{\\frac{1}{1-\\beta}}}\\bigg{)}^{1-\\beta}, (7) where uj(n)superscriptsubscript𝑢𝑗𝑛u_{j}^{(n)} converges to w∗¯jsubscript¯superscript𝑤𝑗\\overline{w^{*}}_{j} as n→∞→𝑛n\\rightarrow\\infty since the necessary and sufficient condition (Eq. 6) is satisfied at the convergence. We can easily get w∗ijsubscriptsuperscript𝑤𝑖𝑗{w^{*}}_{ij} by Eq. 5 when the optimal marginal probability w∗¯jsubscript¯superscript𝑤𝑗\\overline{w^{*}}_{j} is given. The proof of the proposition and convergence is in the Appendix. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_16",
"text": " By using the iterative updates of Eq. 7, we get our desirable pseudo-labels 𝑾∗superscript𝑾\\bm{W^{*}}. This requires a few lines of code that are simple to implement. We observe that a few steps of iteration are enough for training. This is supported by the convergence analysis in Sec. 4.3. We use this fixed point iteration for pseudo-labeling and name the method–Mutual Information Regularized Assignment (MIRA) since it finds the pseudo-labels that are regularized by the mutual information. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
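The "few lines of code" mentioned above can look roughly as follows. This is a hedged NumPy sketch of the fixed point iteration of Eq. 7, not the authors' released code: `P` is the B×K matrix of model predictions, `beta` the trade-off coefficient, the uniform initialization of `u` is an assumption, and the final recovery of the soft assignments follows the normalized expression appearing inside Eq. 6 (the Eq. 5 condition referenced above, which is not reproduced in this excerpt).

```python
# Hedged sketch of MIRA's fixed point iteration (Eq. 7) and pseudo-label recovery.
import numpy as np

def mira_pseudo_labels(P, beta, n_steps=30, eps=1e-12):
    B, K = P.shape
    p_pow = P ** (1.0 / (1.0 - beta))                 # p_ij^{1/(1-beta)}
    u = np.full(K, 1.0 / K)                           # u^{(0)}: uniform initialization (assumption)
    for _ in range(n_steps):                          # Eq. 7 updates
        denom = (u ** (-beta / (1.0 - beta)) * p_pow).sum(axis=1, keepdims=True)
        u = ((p_pow / (denom + eps)).mean(axis=0)) ** (1.0 - beta)
    # Recover w*_ij from the converged marginal u (consistent with the Eq. 6 derivation).
    scores = u ** (-beta / (1.0 - beta)) * p_pow
    return scores / scores.sum(axis=1, keepdims=True)
```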
{
"id": "2211.02284_all_17",
"text": " We explain how our pseudo-labeling method is applied to representation learning. We integrate the computed pseudo-labels with swapped prediction loss to train the model; the model is trained to predict the pseudo-labels from the augmented views using the other views. Specifically, given two mini-batches of differently augmented views 𝑿(m)={𝒙i(m)}i=1B,m∈{1,2}formulae-sequencesuperscript𝑿𝑚superscriptsubscriptsuperscriptsubscript𝒙𝑖𝑚𝑖1𝐵𝑚12\\bm{X}^{(m)}=\\{\\bm{x}_{i}^{(m)}\\}_{i=1}^{B},m\\in\\{1,2\\}, MIRA independently assigns pseudo-labels 𝑼(m)={𝒖i(m)}i=1Bsuperscript𝑼𝑚superscriptsubscriptsuperscriptsubscript𝒖𝑖𝑚𝑖1𝐵\\bm{U}^{(m)}=\\{\\bm{u}_{i}^{(m)}\\}_{i=1}^{B} for each mini-batch. In parallel, model fθsubscript𝑓𝜃f_{\\theta} provides the temperature τssubscript𝜏𝑠\\tau_{s} scaled softmax predictions222That is 𝒒i(m)≔softmax(fθ(𝒙i(m))/τs)≔superscriptsubscript𝒒𝑖𝑚softmaxsubscript𝑓𝜃superscriptsubscript𝒙𝑖𝑚subscript𝜏𝑠\\bm{q}_{i}^{(m)}\\coloneqq\\text{softmax}(f_{\\theta}(\\bm{x}_{i}^{(m)})/\\tau_{s}). 𝑸(m)={𝒒i(m)}i=1Bsuperscript𝑸𝑚superscriptsubscriptsuperscriptsubscript𝒒𝑖𝑚𝑖1𝐵\\bm{Q}^{(m)}=\\{\\bm{q}_{i}^{(m)}\\}_{i=1}^{B} of each mini-batch. The swapped prediction loss is given as follows: L(𝑿(1),𝑿(2))𝐿superscript𝑿1superscript𝑿2\\displaystyle L(\\bm{X}^{(1)},\\bm{X}^{(2)}) =ℓ(𝑼(1),𝑸(2))+ℓ(𝑼(2),𝑸(1))absentℓsuperscript𝑼1superscript𝑸2ℓsuperscript𝑼2superscript𝑸1\\displaystyle=\\ell(\\bm{U}^{(1)},\\bm{Q}^{(2)})+\\ell(\\bm{U}^{(2)},\\bm{Q}^{(1)}) =−1B∑i=1B∑j=1Kuij(1)logqij(2)−1B∑i=1B∑j=1Kuij(2)logqij(1).absent1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscriptsuperscript𝑢1𝑖𝑗subscriptsuperscript𝑞2𝑖𝑗1𝐵superscriptsubscript𝑖1𝐵superscriptsubscript𝑗1𝐾subscriptsuperscript𝑢2𝑖𝑗subscriptsuperscript𝑞1𝑖𝑗\\displaystyle=-\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}u^{(1)}_{ij}\\log{q^{(2)}_{ij}}-\\frac{1}{B}\\sum_{i=1}^{B}\\sum_{j=1}^{K}u^{(2)}_{ij}\\log{q^{(1)}_{ij}}. (8) This loss function (Eq. 8) is minimized with respect to the parameters θ𝜃\\theta of the model fθsubscript𝑓𝜃f_{\\theta} used to produce the predictions 𝑸(1),𝑸(2)superscript𝑸1superscript𝑸2\\bm{Q}^{(1)},\\bm{Q}^{(2)}. For more detailed information about swapped prediction loss, please refer to Caron et al. . ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
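For completeness, the swapped prediction loss of Eq. 8 is a pair of cross-entropy terms with pseudo-labels and predictions swapped across views. A minimal NumPy sketch follows (illustrative shapes only, not the training code); `U1`, `U2`, `Q1`, `Q2` are B×K arrays.

```python
# Hedged sketch of the swapped prediction loss (Eq. 8).
import numpy as np

def cross_entropy(U, Q, eps=1e-12):
    return -np.mean(np.sum(U * np.log(Q + eps), axis=1))

def swapped_prediction_loss(U1, Q1, U2, Q2):
    # Each view is trained to predict the pseudo-labels assigned to the other view.
    return cross_entropy(U1, Q2) + cross_entropy(U2, Q1)
```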
{
"id": "2211.02284_all_18",
"text": " The pseudo-code of MIRA for representation learning with Eq. 8 is provided in the Appendix. In the following experiments, we verify the effectiveness of MIRA for a representation learning purpose. We note that MIRA can integrate recently suggested self-supervised learning components, such as exponential moving average (EMA) or multi-crop (MC) augmentation strategy following the baselines (14, 8, 9). For convenience, in the rest of this paper, we call the representation learning with MIRA also as MIRA. We discuss some further details as follows: ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_19",
"text": " The MI term in Eq. 4 takes a minimum value when collapsing happens. MIRA naturally avoids collapsed solution via penalizing assignment that exhibits low MI. Specifically, unless starting from the collapsed state, MIRA finds MI-maximizing points around the model prediction; it will not choose collapsed pseudo-labels. Hence, the iterative training to predict such labels will not collapse whenever the prediction of pseudo-labels is achievable. Our empirical results verify that MIRA does not require extra training techniques or artificial constraints to address collapsing. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_20",
"text": " SeLa and SwAV formulate their pseudo-labeling process into optimization problems, i.e., optimal transport (OT) problem, and solve it iteratively with Sinkhorn-Knopp (SK) algorithm . To avoid collapse and apply the SK algorithm, they assume the equipartition of data into clusters. Mathematically, the difference to MIRA is in how to deal with the marginal entropy. SeLa and SwAV constrain the marginal entropy to maximum value–equipartition while MIRA decides it by MI regularization333Adding the equipartition constraint into Eq. 4, our problem converts to the OT problem of SwAV .. Asano et al. argue that their pseudo-labels with the OT problem maximize the MI between labels and data indices under the equipartition constraint. However, it more resembles assuming MI maximization and then finding the cluster assignments that are optimal transport to the model prediction. In contrast, MIRA directly maximizes the MI by regularization without artificial constraints. While SwAV performs better than SeLa in most self-supervised benchmarks, we verify that MIRA improves over SwAV in various downstream tasks. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_21",
"text": " In this section, we evaluate the representation quality learned via MIRA. We first provide the implementation details of our representation learning with MIRA (Sec. 4.1). We present our main results on linear, k-NN, semi-supervised learning, and transfer learning benchmarks in comparison to other self-supervised baselines (Sec. 4.2). Finally, we conduct an analysis of MIRA (Sec. 4.3). ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_22",
"text": " We mostly follow the implementation details of representation learning from our baselines (8, 9). More training details about evaluation procedures and analysis are described in the Appendix. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_23",
"text": " The training model (network) consists of an encoder, projection head, and classification head as in Caron et al. . We employ a widely used ResNet50 as our base encoder and use the output of average-pooled 204820482048d embedding for representation training and downstream evaluations. The projection head is a 333-layer fully connected MLP of sizes (2048,2048,d)20482048𝑑(2048,2048,d); each hidden layer is followed by batch normalization and ReLU. The classification head is used to predict the pseudo-labels; it is composed of an L2-normalization layer and a weight-normalization layer of the size d×K𝑑𝐾d\\times K. We use d=256𝑑256d=256 and K=3000𝐾3000K=3000. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_24",
"text": " We train our model on the training set of the ILSVRC-2012 ImageNet-1k dataset without using class labels. We use the same data augmentation scheme (color jittering, Gaussian blur, and solarization) and multi-crop strategy (two 224×224224224224\\times 224 and six 96×96969696\\times 96) used in Caron et al. . We use a batch size of 4096 and employ the LARS optimizer with a weight decay of 10−6superscript10610^{-6}. We use linearly scaled learning rate of lr×batch size/256𝑙𝑟batch size256lr\\times\\text{batch size}/256 with a base learning rate of 0.30.30.3.444Otherwise stated, we also use a linearly scaled learning rate for evaluation training. We adjust the learning rate with 10 epochs of a linear warmup followed by cosine scheduling. We also use an exponential moving average (EMA) network by default. When using EMA, we set the momentum update parameter to start from 0.99 and increase it to 1 by cosine scheduling. We use temperature scales of τs=0.1,τt=0.225formulae-sequencesubscript𝜏𝑠0.1subscript𝜏𝑡0.225\\tau_{s}=0.1,\\tau_{t}=0.225 with a trade-off coefficient β=2/3𝛽23\\beta=2/3. We obtain soft pseudo-labels by 30 steps of the fixed point iteration. We further discuss this choice in Sec. 4.3. Otherwise stated, we use the encoder model trained for 400 epochs with multi-crop augmentations for the downstream task evaluations in this section. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_25",
"text": " Tables 4 and 4 report linear evaluation results. We follow the linear evaluation settings in (28, 10). We train a linear classifier on the top of the frozen trained backbone with the labeled training set of ImageNet. We train for 100 epochs using an SGD optimizer with a batch size of 1024. We choose a learning rate555We use the learning rate of 0.0750.0750.075 for multi-crop 400 epochs training. with a local validation set in the ImageNet train dataset and adjust the learning rate by cosine annealing schedule. We apply random-resized-crop and horizontal flip augmentations for training. We evaluate the representation quality by the linear classifier’s performance on the validation set of ImageNet. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_26",
"text": " Table 4 shows linear evaluation performance in top-1 accuracy for different training epochs. We train and report the performance of MIRA in two settings, with and without multi-crop augmentations. With multi-crop augmentations, MIRA consistently outperforms baselines while achieving 75.6% top-1 accuracy with only 400 epochs of training. We also report that 200 epochs of training with MIRA can outperform the 800 epochs results of other baselines that do not use multi-crops. Without multi-crop augmentations, MIRA performs slightly worse than BYOL . However, MIRA performs the best among the clustering-based (6, 8) and information-driven (53, 26) methods. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_27",
"text": " In Table 4, we compare MIRA to other self-supervised methods with the final performance. MIRA achieves state-of-the-art performance on linear evaluation of ImageNet with only 400 epochs of training. While TWIST can achieve similar performance to MIRA within 450 epochs, they require an extra training stage with self-labeling; without it, they achieve 74.1% accuracy with 800 epochs of training. In contrast, MIRA does not require additional training. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_28",
"text": " In Table 4, we evaluate the trained model on the semi-supervised learning benchmark of ImageNet. Following the evaluation protocol in (28, 10), we add a linear classifier on top of the trained backbone and fine-tune the model with ImageNet 1% and 10% subsets. We report top-1 and top-5 accuracies on the validation set of ImageNet. For the 1% subset, MIRA outperforms the baselines; both the top-1 and top-5 accuracies achieve the best. For the 10% subset, MIRA is comparable to SwAV . ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_29",
"text": " We evaluate the quality of learned representation via the nearest neighbor classifier. We follow the evaluation procedure of Caron et al. . We first store the representations of labeled training data; then, we predict the test data by the majority vote of the k-nearest stored representations. We use 1/10/100% subsets of ImageNet training dataset to produce labeled representations. For ImageNet 1% and 10% subsets, we use the same subsets of semi-supervised learning evaluation. We use the same hyperparameter settings in Caron et al. with k=20𝑘20k=20 nearest neighbors, temperature scaling 666The temperature scaling τ𝜏\\tau is used to calculate contributions αi∼exp(distancei/τ)similar-tosubscript𝛼𝑖subscriptdistance𝑖𝜏\\alpha_{i}\\sim\\exp(\\text{distance}_{i}/\\tau) and voting is weighted by the contributions of the nearest neighbors. of 0.070.070.07, and cosine distance metric. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
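The weighted k-NN protocol described above can be sketched as follows. This is a hedged illustration of the DINO-style evaluation with hypothetical variable names; it assumes L2-normalized features and contributions proportional to exp(similarity / tau).

```python
# Hedged sketch of weighted k-NN evaluation (k=20, tau=0.07, cosine similarity).
import numpy as np

def knn_predict(query, bank_feats, bank_labels, num_classes, k=20, tau=0.07):
    q = query / np.linalg.norm(query)
    bank = bank_feats / np.linalg.norm(bank_feats, axis=1, keepdims=True)
    sims = bank @ q                              # cosine similarities to all stored features
    top = np.argsort(-sims)[:k]                  # indices of the k nearest neighbors
    weights = np.exp(sims[top] / tau)            # temperature-scaled contributions
    votes = np.zeros(num_classes)
    for label, w in zip(bank_labels[top], weights):
        votes[label] += w                        # weighted majority vote
    return int(np.argmax(votes))
```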
{
"id": "2211.02284_all_30",
"text": " Table 4 shows the classification accuracies on the validation set of ImageNet. The results show that our method achieves the best evaluation performance. Specifically, our method outperforms the previous state-of-the-art DINO on 100% and 10% subset evaluation by 1.2 ∼similar-to\\sim1.4%. We note that BarlowTwins , a method also motivated by information maximization, shows a strong performance of 47.7% in the 1% subset evaluation. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_31",
"text": " We evaluate the representation learned by MIRA on the transfer learning benchmark with FGVC aircraft , Caltech-101 , Standford Cars , CIFAR-10/100 , DTD , Oxford 102 Flowers , Food-101 , Oxford-IIIT Pets , SUN397 , and Pascal VOC2007 datasets. We follow the linear evaluation procedure from Ericsson et al. that fits a multinomial logistic regression classifier on the extracted representations of 2048d from the trained encoder. We perform a hyperparameter search on the L2-normalization coefficient of the logistic regression model. The final performance is evaluated on the model that is retrained on all training and validation sets with the found coefficient. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_32",
"text": " Table 5 shows the performance of our algorithm compared to other baselines in 11 datasets. MIRA outperforms supervised representation on 10 out of 11 datasets. Compared to other self-supervised methods, representations learned from MIRA achieved the best performance in 9 out of 11 datasets, with an average improvement of 0.9% over the second-best baseline method. The results confirm that the representation trained with MIRA has a strong generalization ability for classification. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_33",
"text": " We study the convergence speed of the proposed fixed point iteration in MIRA. We also experiment with the Sinkhorn-Knopp (SK) algorithm used in SwAV as a baseline. Both methods have experimented with a batch size of 512. We observe the converging behavior with the pre-trained models from MIRA and SwAV. Results are averaged over 1000 randomly sampled batches. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_34",
"text": " Figure 2 shows the result of the converging behavior of our method (blue) and SK algorithm (yellow) on trained models of MIRA (left) and SwAV (right). Our fixed-point iteration converges faster than the SK algorithm in both pre-trained models, and our default setting of 30 steps of updates is sufficient for convergence. While we use 30 steps of updates to follow the theoretical motivation in our main experiments, we observe that choosing a small number of iterations is possible in practice.777The result with a small number of iterations is in the Appendix. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_35",
"text": " Table 6 reports an ablation study on how EMA and multi-crop augmentations affect our representation quality. We train a model for 200 epochs in the settings with and without EMA or Multi-crop augmentations. EMA and Multi-crop augmentations greatly improve the linear evaluation performance as in Caron et al. (8, 9). We take a further comparison with baselines that are in the same setups. While the only difference in the pseudo-labeling, our method outperforms SwAV by 1.3% in top-1 accuracy. DINO uses both multi-crop and EMA, our method outperforms DINO with fewer training epochs. The results validate the effectiveness of MIRA. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_36",
"text": " We further validate MIRA’s scalability on the small-and medium-scaled datasets. ResNet-18 is used as a base encoder throughout the experiments. While changing the base encoder, other architectural details remain the same as in ImageNet-1k. We do not apply multi-crop augmentations while using the EMA. We use image sizes of 32×\\times32 and 256×\\times256 for small and medium datasets, respectively. Following the procedures in da Costa et al. , we report the linear evaluation performance on the validation set. More experimental details about the optimizer, batch size, augmentations, etc., are provided in the Appendix. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_37",
"text": " The results are in Table 7. In CIFAR-10 and ImageNet-100, our method outperforms other self-supervised baselines by 0.4% and 0.7% in top-1 accuracy, respectively. For CIFAR-100, our method is comparable to the best performing baseline–BarlowTwins; MIRA performs better in top-5 accuracy. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_38",
"text": " Throughout the experiments in Sec. 4.2, we use a batch size of 4096. While such batch size is commonly used in self-supervised learning, a large amount of GPU memory is required, limiting the accessibility. In Table 8, we test our method with a smaller batch size of 512 that can be used in an 8 GPU machine with 96GB memory. In this setting, we use the SGD optimizer with a weight decay of 10−4superscript10410^{-4}. We also test the robustness of pseudo-labeling with the Sinkhorn-Knopp algorithm in SwAV and compare the results. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_39",
"text": " We report a top-1 linear evaluation performance of both methods after 100 epochs of training. In the result, the performance gap between our method and SwAV is amplified from 2.9% to 6% in the reduced batch size of 512. One possible explanation is that since SwAV is based on the equipartition constraint, the performance of SwAV harshly degrades when the batch size is not enough to match the number of clusters. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_40",
"text": " This paper proposes the mutual information maximization perspective pseudo-labeling algorithm MIRA. We formulate online pseudo-labeling into a convex optimization problem with mutual information regularization and solve it in a principled way. We apply MIRA for representation learning and demonstrate the effectiveness in various self-supervised learning benchmarks. We hope our simple yet theoretically guaranteed approach to mutual information maximization will guide many future applications. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
},
{
"id": "2211.02284_all_41",
"text": " Our mutual information regularization with optimization strategy seems applicable to various tasks and domains, e.g., semi-supervised training . However, we validate the effectiveness only in self-supervised visual representation learning. We note that the performance of MIRA for non-classification downstream tasks888We list the experimental results in the Appendix., e.g., detection, is not as dominating as in the classification tasks. In these tasks, methods that consider local or pixel-wise information achieve superior performance; incorporating such formulation into clustering-based methods seems to be an important future direction. Furthermore, despite our improved training efficiency, the self-supervised learning methods still require massive computation compared to supervised approaches. Such computational requirements may accelerate the environmental problems of global warming. ",
"title": "Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment"
}
] |
I believe T2I models can do this using latent exploration. What is the difference between them? What is novel about T2V’s interpolation?
|
A frame interpolation network for high frame rate generation can make a semantically similar video by taking the average CLIP embedding of all frames from a video as the condition [10].
|
[
10
] |
[
{
"id": "2209.14792_all_0",
"text": " The Internet has fueled collecting billions of (alt-text, image) pairs from HTML pages (Schuhmann et al., 2022), enabling the recent breakthroughs in Text-to-Image (T2I) modeling. However, replicating this success for videos is limited since a similarly sized (text, video) dataset cannot be easily collected. It would be wasteful to train Text-to-Video (T2V) models from scratch when there already exist models that can generate images. Moreover, unsupervised learning enables networks to learn from orders of magnitude more data. This large quantity of data is important to learn representations of more subtle, less common concepts in the world. Unsupervised learning has long had great success in advancing the field of natural language processing (NLP) (Liu et al., 2019a; Brown et al., 2020). Models pre-trained this way yield considerably higher performance than when solely trained in a supervised manner. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_1",
"text": " Inspired by these motivations, we propose Make-A-Video. Make-A-Video leverages T2I models to learn the correspondence between text and the visual world, and uses unsupervised learning on unlabeled (unpaired) video data, to learn realistic motion. Together, Make-A-Video generates videos from text without leveraging paired text-video data. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_2",
"text": " Clearly, text describing images does not capture the entirety of phenomena observed in videos. That said, one can often infer actions and events from static images (e.g. a woman drinking coffee, or an elephant kicking a football) as done in image-based action recognition systems (Girish et al., 2020). Moreover, even without text descriptions, unsupervised videos are sufficient to learn how different entities in the world move and interact (e.g. the motion of waves at the beach, or of an elephant’s trunk). As a result, a model that has only seen text describing images is surprisingly effective at generating short videos, as demonstrated by our temporal diffusion-based method. Make-A-Video sets the new state-of-the-art in T2V generation. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_3",
"text": " Using function-preserving transformations, we extend the spatial layers at the model initialization stage, to include temporal information. The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos. This procedure significantly accelerates the T2V training process by instantaneously transferring the knowledge from a previously trained T2I network to a new T2V one. To enhance the visual quality, we train spatial super-resolution models as well as frame interpolation models. This increases the resolution of the generated videos, as well as enables a higher (controllable) frame rate. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_4",
"text": " Our main contributions are: • We present Make-A-Video – an effective method that extends a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model. • We leverage joint text-image priors to bypass the need for paired text-video data, which in turn allows us to potentially scale to larger quantities of video data. • We present super-resolution strategies in space and time that, for the first time, generate high-definition, high frame-rate videos given a user-provided textual input. • We evaluate Make-A-Video against existing T2V systems and present: (a) State-of-the-art results in quantitative as well as qualitative measures, and (b) A more thorough evaluation than existing literature in T2V. We also collect a test set of 300 prompts for zero-shot T2V human evaluation which we plan to release. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_5",
"text": " Text-to-Image Generation. (Reed et al., 2016) is among the first methods to extend unconditional Generative Adversairal Network (GAN) (Goodfellow et al., 2014) to T2I generation. Later GAN variants have focused on progressive generation (Zhang et al., 2017; Hong et al., 2018), or better text-image alignment (Xu et al., 2018; Zhang et al., 2021). The pioneering work of DALL-E (Ramesh et al., 2021) considers T2I generation as a sequence-to-sequence translation problem using a discrete variational auto-encoder (VQVAE) and Transformer (Vaswani et al., 2017). Additional variants (Ding et al., 2022) have been proposed since then. For example, Make-A-Scene (Gafni et al., 2022) explores controllable T2I generation using semantic maps. Parti (Yu et al., 2022a) aims for more diverse content generation through an encoder-decoder architecture and an improved image tokenizer (Yu et al., 2021). On the other hand, Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) are successfully leveraged for T2I generation. GLIDE (Nichol et al., 2021) trained a T2I and an upsampling diffusion model for cascade generation. GLIDE’s proposed classifier-free guidance has been widely adopted in T2I generation to improve image quality and text faithfulness. DALLE-2 (Ramesh et al., 2022) leverages the CLIP (Radford et al., 2021) latent space and a prior model. VQ-diffusion (Gu et al., 2022) and stable diffusion (Rombach et al., 2022) performs T2I generation in the latent space instead of pixel space to improve efficiency. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_6",
"text": " Text-to-Video Generation. While there is remarkable progress in T2I generation, the progress of T2V generation lags behind largely due to two main reasons: the lack of large-scale datasets with high-quality text-video pairs, and the complexity of modeling higher-dimensional video data. Early works (Mittal et al., 2017; Pan et al., 2017; Marwah et al., 2017; Li et al., 2018; Gupta et al., 2018; Liu et al., 2019b) are mainly focused on video generation in simple domains, such as moving digits or specific human actions. To our knowledge, Sync-DRAW (Mittal et al., 2017) is the first T2V generation approach that leverages a VAE with recurrent attention. (Pan et al., 2017) and (Li et al., 2018) extend GANs from image generation to T2V generation. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_7",
"text": " More recently, GODIVA (Wu et al., 2021a) is the first to use 2D VQVAE and sparse attention for T2V generation supporting more realistic scenes. NÜWA (Wu et al., 2021b) extends GODIVA, and presents a unified representation for various generation tasks in a multitask learning scheme. To further improve the performance of T2V generation, CogVideo (Hong et al., 2022) is built on top of a frozen CogView-2 (Ding et al., 2022) T2I model by adding additional temporal attention modules. Video Diffusion Models (VDM) (Ho et al., 2022) uses a space-time factorized U-Net with joint image and video data training. While both CogVideo and VDM collected 10M private text-video pairs for training, our work uses solely open-source datasets, making it easier to reproduce. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_8",
"text": " Leveraging Image Priors for Video Generation. Due to the complexity of modeling videos and the challenges in high-quality video data collection, it is natural to consider leveraging image priors for videos to simplifying the learning process. After all, an image is a video with a single frame (Bain et al., 2021). In unconditional video generation, MoCoGAN-HD (Tian et al., 2021) formulates video generation as the task of finding a trajectory in the latent space of a pre-trained and fixed image generation model. In T2V generation, NÜWA (Wu et al., 2021b) combines image and video datasets in a multitask pre-training stage to improve model generalization for fine-tuning. CogVideo (Hong et al., 2022) uses a pre-trained and fixed T2I model for T2V generation with only a small number of trainable parameters to reduce memory usage during training. But the fixed autoencoder and T2I models can be restrictive for T2V generation. The architecture of VDM (Ho et al., 2022) can enable joint image and video generation. However, they sample random independent images from random videos as their source of images, and do not leverage the massive text-image datasets. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_9",
"text": " Make-A-Video differs from previous works in several aspects. First, our architecture breaks the dependency on text-video pairs for T2V generation. This is a significant advantage compared to prior work, that has to be restricted to narrow domains (Mittal et al., 2017; Gupta et al., 2018; Ge et al., 2022; Hayes et al., 2022), or require large-scale paired text-video data (Hong et al., 2022; Ho et al., 2022). Second, we fine-tune the T2I model for video generation, gaining the advantage of adapting the model weights effectively, compared to freezing the weights as in CogVideo (Hong et al., 2022). Third, motivated from prior work on efficient architectures for video and 3D vision tasks (Ye et al., 2019; Qiu et al., 2017; Xie et al., 2018), our use of pseudo-3D convolution (Qiu et al., 2017) and temporal attention layers not only better leverage a T2I architecture, it also allows for better temporal information fusion compared to VDM (Ho et al., 2022). ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_10",
"text": " Make-A-Video consists of three main components: (i) A base T2I model trained on text-image pairs (Sec. 3.1), (ii) spatiotemporal convolution and attention layers that extend the networks’ building blocks to the temporal dimension (Sec. 3.2), and (iii) spatiotemporal networks that consist of both spatiotemporal layers, as well as another crucial element needed for T2V generation - a frame interpolation network for high frame rate generation (Sec. 3.3). ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_11",
"text": " Make-A-Video’s final T2V inference scheme (depicted in Fig. 2) can be formulated as: yt^=SRh∘SRlt∘↑F∘Dt∘P∘(x^,Cx(x)),\\hat{y_{t}}=\\operatorname{SR}_{h}\\circ\\operatorname{SR}_{l}^{t}\\circ\\uparrow_{F}\\circ\\operatorname{D}^{t}\\circ\\operatorname{P}\\circ(\\hat{x},\\operatorname{C}_{x}(x)), (1) where yt^^subscript𝑦𝑡\\hat{y_{t}} is the generated video, SRh,SRlsubscriptSRℎsubscriptSR𝑙\\operatorname{SR}_{h},\\operatorname{SR}_{l} are the spatial and spatiotemporal super-resolution networks (Sec. 3.2), ↑Fsubscript↑𝐹\\uparrow_{F} is a frame interpolation network (Sec. 3.3), DtsuperscriptD𝑡\\operatorname{D}^{t} is the spatiotemporal decoder (Sec. 3.2), PP\\operatorname{P} is the prior (Sec. 3.1), x^^𝑥\\hat{x} is the BPE-encoded text, CxsubscriptC𝑥\\operatorname{C}_{x} is the CLIP text encoder (Radford et al., 2021), and x𝑥x is the input text. The three main components are described in detail in the following sections. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
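Eq. 1 is a composition of the trained networks. The schematic sketch below spells out that ordering; every callable is a placeholder standing in for the corresponding component (BPE tokenizer, CLIP text encoder, prior, decoder, frame interpolation, super-resolution), not an actual API.

```python
# Schematic (placeholder) sketch of the inference chain in Eq. 1.
def make_a_video(text, bpe_encode, clip_text_encoder, prior, decoder_t, interp_f, sr_l_t, sr_h):
    x_hat = bpe_encode(text)            # BPE-encoded text tokens x_hat
    x_e = clip_text_encoder(text)       # CLIP text embedding C_x(x)
    y_e = prior(x_hat, x_e)             # P: text -> image embedding
    frames = decoder_t(y_e)             # D^t: 16 frames at 64x64
    frames = interp_f(frames)           # up_F: interpolate to a higher frame rate
    frames = sr_l_t(frames)             # SR_l^t: spatiotemporal super-resolution
    frames = sr_h(frames)               # SR_h: per-frame spatial super-resolution
    return frames
```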
{
"id": "2209.14792_all_12",
"text": " Prior to the addition of the temporal components, we train the backbone of our method: a T2I model trained on text-image pairs, sharing the core components with the work of (Ramesh et al., 2022). ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_13",
"text": " We use the following networks to produce high-resolution images from text: (i) A prior network PP\\operatorname{\\textbf{P}}, that during inference generates image embeddings yesubscript𝑦𝑒y_{e} given text embeddings xesubscript𝑥𝑒x_{e} and BPE encoded text tokens x^^𝑥\\hat{x}, (ii) a decoder network DD\\operatorname{\\textbf{D}} that generates a low-resolution 64×64646464\\times 64 RGB image y^lsubscript^𝑦𝑙\\hat{y}_{l}, conditioned on the image embeddings yesubscript𝑦𝑒y_{e}, and (iii) two super-resolution networks SRlsubscriptSRl\\operatorname{\\textbf{SR}}_{\\textbf{l}},SRhsubscriptSRh\\operatorname{\\textbf{SR}}_{\\textbf{h}} that increase the generated image y^lsubscript^𝑦𝑙\\hat{y}_{l} resolution to 256×256256256256\\times 256 and 768×768768768768\\times 768 pixels respectively, resulting in the final222We then downsample to 512 using bicubic interpolation for a cleaner aesthetic. Maintaining a clean aesthetic for high definition videos is part of future work. generated image y^^𝑦\\hat{y}. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_14",
"text": " In order to expand the two-dimensional (2D) conditional network into the temporal dimension, we modify the two key building blocks that now require not just spatial but also temporal dimensions in order to generate videos: (i) Convolutional layers (Sec. 3.2.1), and (ii) attention layers (Sec. 3.2.2), discussed in the following two subsections. Other layers, such as fully-connected layers, do not require specific handling when adding an additional dimension, as they are agnostic to structured spatial and temporal information. Temporal modifications are made in most U-Net-based diffusion networks: the spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} now generating 161616 RGB frames, each of size 64×64646464\\times 64, the newly added frame interpolation network ↑Fsubscript↑𝐹\\uparrow_{F}, increasing the effective frame rate by interpolating between the 161616 generated frames (as depicted in Fig. 2), and the super-resolution networks SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t}. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_15",
"text": " Note that super resolution involves hallucinating information. In order to not have flickering artifacts, the hallucination must be consistent across frames. As a result, our SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} module operates across spatial and temporal dimensions. In qualitative inspection we found this to significantly outperform per-frame super resolution. It is challenging to extend SRhsubscriptSRℎ\\operatorname{SR}_{h} to the temporal dimension due to memory and compute constraints, as well as a scarcity of high resolution video data. So SRhsubscriptSRℎ\\operatorname{SR}_{h} operates only along the spatial dimensions. But to encourage consistent detail hallucination across frames, we use the same noise initialization for each frame. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_16",
"text": " Motivated by separable convolutions (Chollet, 2017), we stack a 1D convolution following each 2D convolutional (conv) layer, as shown in Fig. 3. This facilitates information sharing between the spatial and temporal axes, without succumbing to the heavy computational load of 3D conv layers. In addition, it creates a concrete partition between the pre-trained 2D conv layers and the newly initialized 1D conv layers, allowing us to train the temporal convolutions from scratch, while retaining the previously learned spatial knowledge in the spatial convolutions’ weights. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_17",
"text": " Given an input tensor h∈ℝB×C×F×H×Wℎsuperscriptℝ𝐵𝐶𝐹𝐻𝑊h\\in\\mathbb{R}^{B\\times C\\times F\\times H\\times W}, where B𝐵B, C𝐶C, F𝐹F, H𝐻H, W𝑊W are the batch, channels, frames, height, and width dimensions respectively, the Pseudo-3D convolutional layer is defined as: ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_18",
"text": " ConvP3D(h):=Conv1D(Conv2D(h)∘T)∘T,assign𝐶𝑜𝑛subscript𝑣𝑃3𝐷ℎ𝐶𝑜𝑛subscript𝑣1𝐷𝐶𝑜𝑛subscript𝑣2𝐷ℎ𝑇𝑇Conv_{P3D}(h):=Conv_{1D}(Conv_{2D}(h)\\circ T)\\circ T, (2) ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_19",
"text": " where the transpose operator ∘Tabsent𝑇\\circ T swaps between the spatial and temporal dimensions. For smooth initialization, while the Conv2D𝐶𝑜𝑛subscript𝑣2𝐷Conv_{2D} layer is initialized from the pre-trained T2I model, the Conv1D𝐶𝑜𝑛subscript𝑣1𝐷Conv_{1D} layer is initialized as the identity function, enabling a seamless transition from training spatial-only layers, to spatiotemporal layers. Note that at initialization, the network will generate K different images (due to random noise), each faithful to the input text but lacking temporal coherence. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
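A possible PyTorch realization of the Pseudo-3D convolution in Eq. 2 is sketched below. This is an assumption-laden illustration, not the paper's code: the spatial 2D convolution is applied per frame, and the temporal 1D convolution is applied per spatial location and initialized as the identity via a Dirac kernel, matching the initialization described above; kernel sizes and the channel-preserving layout are assumptions.

```python
# Hedged PyTorch sketch of the Pseudo-3D convolution Conv_P3D (Eq. 2).
import torch
import torch.nn as nn

class PseudoConv3d(nn.Module):
    def __init__(self, channels, spatial_kernel=3, temporal_kernel=3):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, spatial_kernel, padding=spatial_kernel // 2)
        self.temporal = nn.Conv1d(channels, channels, temporal_kernel, padding=temporal_kernel // 2)
        nn.init.dirac_(self.temporal.weight)   # identity initialization of the temporal (1D) conv
        nn.init.zeros_(self.temporal.bias)

    def forward(self, h):                       # h: (B, C, F, H, W)
        b, c, f, hh, ww = h.shape
        x = h.permute(0, 2, 1, 3, 4).reshape(b * f, c, hh, ww)
        x = self.spatial(x)                     # 2D conv applied to each frame
        x = x.reshape(b, f, c, hh, ww).permute(0, 2, 1, 3, 4)
        x = x.permute(0, 3, 4, 1, 2).reshape(b * hh * ww, c, f)
        x = self.temporal(x)                    # 1D conv along time at each spatial location
        return x.reshape(b, hh, ww, c, f).permute(0, 3, 4, 1, 2)
```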
{
"id": "2209.14792_all_20",
"text": " A crucial component of T2I networks is the attention layer, where in addition to self-attending to extracted features, text information is injected to several network hierarchies, alongside other relevant information, such as the diffusion time-step. While using 3D convolutional layers is computationally heavy, adding the temporal dimension to attention layers is outright infeasible in terms of memory consumption. Inspired by the work of (Ho et al., 2022), we extend our dimension decomposition strategy to attention layers as well. Following each (pre-trained) spatial attention layer, we stack a temporal attention layer, which as with the convolutional layers, approximates a full spatiotemporal attention layer. Specifically, given an input tensor hℎh, we define flatten𝑓𝑙𝑎𝑡𝑡𝑒𝑛flatten as a matrix operator that flattens the spatial dimension into h′∈RB×C×F×HWsuperscriptℎ′superscript𝑅𝐵𝐶𝐹𝐻𝑊h^{\\prime}\\in R^{B\\times C\\times F\\times HW}. unflatten𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛unflatten is defined as the inverse matrix operator. The Pseudo-3D attention layer therefore is therefore defined as: ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_21",
"text": " ATTNP3D(h)=unflatten(ATTN1D(ATTN2D(flatten(h))∘T)∘T).𝐴𝑇𝑇subscript𝑁𝑃3𝐷ℎ𝑢𝑛𝑓𝑙𝑎𝑡𝑡𝑒𝑛𝐴𝑇𝑇subscript𝑁1𝐷𝐴𝑇𝑇subscript𝑁2𝐷𝑓𝑙𝑎𝑡𝑡𝑒𝑛ℎ𝑇𝑇ATTN_{P3D}(h)=unflatten(ATTN_{1D}(ATTN_{2D}(flatten(h))\\circ T)\\circ T). (3) ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_22",
"text": " Similarly to ConvP3D𝐶𝑜𝑛subscript𝑣𝑃3𝐷Conv_{P3D}, to allow for smooth spatiotemporal initialization, the ATTN2D𝐴𝑇𝑇subscript𝑁2𝐷ATTN_{2D} layer is initialized from the pre-trained T2I model and the ATTN1D𝐴𝑇𝑇subscript𝑁1𝐷ATTN_{1D} layer is initialized as the identity function. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
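Similarly, Eq. 3 factorizes attention into a spatial pass over the H·W positions of each frame and a temporal pass over the F frames at each position. The PyTorch sketch below is a hedged illustration only: text and diffusion-step conditioning, residual connections, and the identity initialization of the temporal attention described above are omitted, and the head count is an assumption.

```python
# Hedged PyTorch sketch of the factorized Pseudo-3D attention ATTN_P3D (Eq. 3).
import torch
import torch.nn as nn

class PseudoAttention3d(nn.Module):
    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, h):                        # h: (B, C, F, H, W)
        b, c, f, hh, ww = h.shape
        # Spatial attention: tokens are the flattened H*W positions of each frame.
        x = h.permute(0, 2, 3, 4, 1).reshape(b * f, hh * ww, c)
        x, _ = self.spatial_attn(x, x, x)
        # Temporal attention: tokens are the F frames at each spatial position.
        x = x.reshape(b, f, hh * ww, c).permute(0, 2, 1, 3).reshape(b * hh * ww, f, c)
        x, _ = self.temporal_attn(x, x, x)
        return x.reshape(b, hh * ww, f, c).permute(0, 3, 2, 1).reshape(b, c, f, hh, ww)
```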
{
"id": "2209.14792_all_23",
"text": " Factorized space-time attention layers have also been used in VDM (Ho et al., 2022) and CogVideo (Hong et al., 2022). CogVideo has added temporal layers to each (frozen) spatial layers whereas we train them jointly. In order to force their network to train for images and videos interchangeably, VDM has extended their 2D U-Net to 3D through unflattened 1x3x3 convolution filters, such that the subsequent spatial attention remains 2D, and added 1D temporal attention through relative position embeddings. In contrast, we apply an additional 3x1x1 convolution projection (after each 1x3x3) such that the temporal information will also be passed through each convolution layer. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_24",
"text": " Frame rate conditioning. In addition to the T2I conditionings, similar to CogVideo (Hong et al., 2022), we add an additional conditioning parameter fps𝑓𝑝𝑠fps, representing the number of frames-per-second in a generated video. Conditioning on a varying number of frames-per-second, enables an additional augmentation method to tackle the limited volume of available videos at training time, and provides additional control on the generated video at inference time. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_25",
"text": " In addition to the spatiotemporal modifications discussed in Sec. 3.2, we train a new masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F}, capable of increasing the number of frames of the generated video either by frame interpolation for a smoother generated video, or by pre/post frame extrapolation for extending the video length. In order to increase the frame rate within memory and compute constraints, we fine-tune a spatiotemporal decoder DtsuperscriptDt\\operatorname{D^{t}} on the task of masked frame interpolation, by zero-padding the masked input frames, enabling video upsampling. When fine-tuning on masked frame interpolation, we add an additional 4 channels to the input of the U-Net: 3 channels for the RGB masked video input and an additional binary channel indicating which frames are masked. We fine-tune with variable frame-skips and fps𝑓𝑝𝑠fps conditioning to enable multiple temporal upsample rates at inference time. We denote ↑Fsubscript↑𝐹\\uparrow_{F} as the operator that expands the given video tensor through masked frame interpolation. For all of our experiments we applied ↑Fsubscript↑𝐹\\uparrow_{F} with frame skip 5 to upsample a 16 frame video to 76 frames ((16-1)×\\times5+1). Note that we can use the same architecture for video extrapolation or image animation by masking frames at the beginning or end of a video. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
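The masked-frame-interpolation input described above (zero-padded RGB frames plus a binary mask channel, with frame skip 5 turning 16 frames into 76 slots) could be assembled roughly as in the NumPy sketch below; the array layout and function name are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of building the 4-channel conditioning input for masked frame interpolation.
import numpy as np

def build_interpolation_input(frames, skip=5):
    """frames: (F, 3, H, W) generated RGB frames; returns (total, 4, H, W)."""
    f, c, h, w = frames.shape
    total = (f - 1) * skip + 1                              # e.g. 16 frames -> 76 slots
    rgb = np.zeros((total, c, h, w), dtype=frames.dtype)    # zero-padded masked video
    mask = np.ones((total, 1, h, w), dtype=frames.dtype)    # 1 = masked slot to be predicted
    rgb[::skip] = frames                                    # place the known frames every `skip`
    mask[::skip] = 0.0                                      # known frames are unmasked
    return np.concatenate([rgb, mask], axis=1)              # RGB + binary mask channel
```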
{
"id": "2209.14792_all_26",
"text": " The different components of Make-A-Video described above are trained independently. The only component that receives text as input is the prior PP\\operatorname{P}. We train it on paired text-image data and do not fine-tune it on videos. The decoder, prior, and two super-resolution components are first trained on images alone (no aligned text). Recall that the decoder receives CLIP image embedding as input, and the super-resolution components receive downsampled images as input during training. After training on images, we add and initialize the new temporal layers and fine-tune them over unlabeled video data. 16 frames are sampled from the original video with random fps𝑓𝑝𝑠fps ranging from 111 to 303030. We use the beta function for sampling and while training the decoder, start from higher FPS ranges (less motion) and then transition to lower FPS ranges (more motion). The masked-frame-interpolation component is fine-tuned from the temporal decoder. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_27",
"text": " Datasets. To train the image models, we use a 2.32.32.3B subset of the dataset from (Schuhmann et al., ) where the text is English. We filter out sample pairs with NSFW images 333We used this model: https://github.com/GantMan/nsfw_model, toxic words in the text, or images with a watermark probability larger than 0.50.50.5. We use WebVid-10M (Bain et al., 2021) and a 101010M subset from HD-VILA-100M (Xue et al., 2022) 444These 100100100M clips are sourced from 3.13.13.1M videos. We randomly downloaded 333 clips per video to form our HD-VILA-10M subset. to train our video generation models. Note that only the videos (no aligned text) are used. The decoder DtsuperscriptD𝑡\\operatorname{D}^{t} and the interpolation model is trained on WebVid-10M. SRltsuperscriptsubscriptSR𝑙𝑡\\operatorname{SR}_{l}^{t} is trained on both WebVid-10M and HD-VILA-10M. While prior work (Hong et al., 2022; Ho et al., 2022) have collected private text-video pairs for T2V generation, we use only public datasets (and no paired text for videos). We conduct automatic evaluation on UCF-101 (Soomro et al., 2012) and MSR-VTT (Xu et al., 2016) in a zero-shot setting. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_28",
"text": " Automatic Metrics. For UCF-101, we write one template sentence for each class (without generating any video) and fix it for evaluation. We report Frechet Video Distance (FVD) and Inception Score (IS) on 101010K samples following (Ho et al., 2022). We generate samples that follow the same class distribution as the training set. For MSR-VTT, we report Frechet Inception Distance (FID) (Parmar et al., 2022) and CLIPSIM (average CLIP similarity between video frames and text) (Wu et al., 2021a), where all 59,7945979459,794 captions from the test set are used, following (Wu et al., 2021b). ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_29",
"text": " Human Evaluation Set and Metrics. We collect an evaluation set from Amazon Mechanical Turk (AMT) that consists of 300300300 prompts. We asked annotators what they would be interested in generating if there were a T2V system. We filtered out prompts that were incomplete (e.g., “jump into water”), too abstract (e.g., “climate change”), or offensive. We then identified 555 categories (animals, fantasy, people, nature and scenes, food and beverage) and selected prompts for these categories. These prompts were selected without generating any videos for them, and were kept fixed. In addition, we also used the DrawBench prompts from Imagen (Saharia et al., 2022) for human evaluation. We evaluate video quality and text-video faithfulness. For video quality, we show two videos in random order and ask annotators which one is of higher quality. For faithfulness, we additionally show the text and ask annotators which video has a better correspondence with the text (we suggest them to ignore quality issues). In addition, we also conducted human evaluation to compare video motion realism of our interpolation model and FILM (Reda et al., 2022). For each comparison, we use the majority vote from 555 different annotators as the final result. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_30",
"text": " Automatic Evaluation on MSR-VTT. In addition to GODIVA and NÜWA that report on MSR-VTT, we also perform inference on the officially released CogVideo model with both Chinese and English inputs for comparison. For CogVideo and Make-A-Video, we only generate one sample for each prompt in a zero-shot setting. We only generate videos that are at 16×256×2561625625616\\times 256\\times 256 as the evaluation models do not expect higher resolutions and frame rate. The results are shown in Table 1. Make-A-Video’s zero-shot performance is much better than GODIVA and NÜWA which are trained on MSR-VTT. We also outperform CogVideo in both Chinese and English settings. Thus, Make-A-Video has significantly better generalization capabilities than prior work. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_31",
"text": " Automatic Evaluation on UCF-101. UCF-101 is a popular benchmark to evaluate video generation and has been recently used in T2V models. CogVideo performed finetuning of their pretrained model for class-conditional video generation. VDM (Ho et al., 2022) performed unconditional video generation and trained from scratch on UCF-101. We argue that both settings are not ideal and is not a direct evaluation of the T2V generation capabilities. Moreover, the FVD evaluation model expects the videos to be 0.50.50.5 second (161616 frames), which is too short to be used for video generation in practice. Nevertheless, in order to compare to prior work, we conducted evaluation on UCF-101 in both zero-shot and finetuning settings. As shown in Table 2, Make-A-Video’s zero-shot performance is already competitive than other approaches that are trained on UCF-101, and is much better than CogVideo, which indicates that Make-A-Video can generalize better even to such a specific domain. Our finetuning setting achieves state-of-the-art results with a significant reduction in FVD, which suggests that Make-A-Video can generate more coherent videos than prior work. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_32",
"text": " Human Evaluation. We compare to CogVideo (the only public zero-shot T2V generation model) on DrawBench and our test set. We also evaluate on the 282828 videos shown on the webpage of VDM (Ho et al., 2022) (which may be biased towards showcasing the model’s strengths). Since this is a very small test set, we randomly generate 888 videos for each input and perform evaluation 888 times and report the average results. We generate videos at 76×256×2567625625676\\times 256\\times 256 resolution for human evaluation. The results are shown in Table 3. Make-A-Video achieves much better performance in both video quality and text-video faithfulness in all benchmarks and comparisons. For CogVideo, the results are similar on DrawBench and our evaluation set. For VDM, it is worth noting that we have achieved significantly better results without any cherry-picking. We also evaluate our frame interpolation network in comparison to FILM (Reda et al., 2022). We first generate low frame rate videos (1 FPS) from text prompts in DrawBench and our evaluation set, then use each method to upsample to 4 FPS. Raters choose our method for more realistic motion 62% of the time on our evaluation set and 54% of the time on DrawBench. We observe that our method excels when there are large differences between frames where having real-world knowledge of how objects move is crucial. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_33",
"text": " Examples of Make-A-Video’s generations are shown in Figure 1. In this section, we will show T2V generation comparison to CogVideo (Hong et al., 2022) and VDM (Ho et al., 2022), and video interpolation comparison to FILM (Reda et al., 2022). In addition, our models can be used for a variety of other tasks such as image animation, video variation, etc. Due to space constraint, we only show a single example of each. Figure 4 (a) shows the comparison of Make-A-Video to CogVideo and VDM. Make-A-Video can generate richer content with motion consistency and text correspondence. Figure 4 (b) shows an example of image animation where we condition the masked frame interpolation and extrapolation network ↑Fsubscript↑𝐹\\uparrow_{F} on the image and CLIP image embedding to extrapolate the rest of the video. This allows a user to generate a video using their own image – giving them the opportunity to personalize and directly control the generated video. Figure 4 (c) shows a comparison of our approach to FILM (Reda et al., 2022) on the task of interpolation between two images. We achieve this by using the interpolation model that takes the two images as the beginning and end frames and masks 141414 frames in between for generation. Our model generates more semantically meaningful interpolation while FILM seems to primarily smoothly transition between frames without semantic real-world understanding of what is moving. Figure 4 (d) shows an example for video variation. We take the average CLIP embedding of all frames from a video as the condition to generate a semantically similar video. More video generation examples and applications can be found here: make-a-video.github.io. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_34",
"text": " Learning from the world around us is one of the greatest strengths of human intelligence. Just as we quickly learn to recognize people, places, things, and actions through observation, generative systems will be more creative and useful if they can mimic the way humans learn. Learning world dynamics from orders of magnitude more videos using unsupervised learning helps researchers break away from the reliance on labeled data. The presented work has shown how labeled images combined effectively with unlabeled video footage can achieve that. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_35",
"text": " As a next step we plan to address several of the technical limitations. As discussed earlier, our approach can not learn associations between text and phenomenon that can only be inferred in videos. How to incorporate these (e.g., generating a video of a person waving their hand left-to-right or right-to-left), along with generating longer videos, with multiple scenes and events, depicting more detailed stories, is left for future work. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_36",
"text": " As with all large-scale models trained on data from the web, our models have learnt and likely exaggerated social biases, including harmful ones. Our T2I generation model was trained on data that removed NSFW content and toxic words. All our data (image as well as videos) is publicly available, adding a layer of transparency to our models, and making it possible for the community to reproduce our work. ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
},
{
"id": "2209.14792_all_37",
"text": " Mustafa Said Mehmetoglu, Jacob Xu, Katayoun Zand, Jia-Bin-Huang, Jiebo Luo, Shelly Sheynin, Angela Fan, Kelly Freed. Thank you for your contributions! ",
"title": "Make-A-Video: Text-to-Video Generation without Text-Video Data-VIDEO DATA"
}
] |
Why does the approach need a gating mechanism when a good retrieval should be able to correctly filter out irrelevant feedback from the memory?
|
A gating function is needed precisely because the retrieval function might not be able to filter out irrelevant feedback from memory [16]. This is a challenging thing to implement since syntactically or lexically similar things might or might not refer to similar concepts [36]. Another challenge is with adversarial feedback, made by users intending to mess with the system [62].
|
[
16,
36,
62
] |
[
{
"id": "2201.06009_all_0",
"text": " Language models are now better than ever before at generating realistic content, but still lack commonsense Bender and Koller (2020); Marcus (2021). One failure mode due to a lack of commonsense is in misunderstanding a user’s intent. The typical remedy of retraining with more data is prohibitive due to the cost and infrastructure requirements. In such cases, even if users repeatedly observe the model making a mistake, there are no avenues to provide feedback to the model to make it more accurate and personalized over time. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_1",
"text": " Our goal is to allow users to correct such errors directly through interaction, and without retraining by injecting the knowledge required to correct the model’s misunderstanding. Building upon the recent success of injecting commonsense in the input (Lewis et al., 2020; Talmor et al., 2020), we propose a novel approach of injecting knowledge in the input via interactive feedback from an end-user. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_2",
"text": " Our approach, \\ours, pairs gpt-3 with a growing memory of cases where the model misunderstood user’s intent and was provided with corrective feedback. This feedback is question dependent, and thus the prompt for each sample is edited to adapt to the input. In this sense, our work can be seen as an instance of prompt engineering Liu et al. (2021b) which involves editing the prompts. Our work adds interactivity to prompt engineering as it involves dynamically updating the prompt for every instance. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_3",
"text": " Figure 1 presents a sample interaction between a user and gpt-3 that our setup enables. The model was asked for a similar word. However, the model’s (incorrect) task understanding 𝐮𝐮\\mathbf{u} was “The homophone of good is”. The user can detect such discrepancy between the intended and interpreted task instruction, and can provide feedback 𝐟𝐛𝐟𝐛\\mathbf{fb} as \"similar to means with a similar meaning\", clarifying that they actually wanted a synonym. Crucially, note that such instructional correction is feasible even if the user does not know the correct answer to their question, as they are critiquing the model’s understanding of their intent, rather than the answers themselves. Thus, our setup does not require the users to be experts at tasks being solved, another advantage of our approach. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_4",
"text": " Further, it is desirable to have a system that can leverage past feedback on new, unseen examples for prompt-editing. We maintain a memory ℳℳ\\mathcal{M} of such feedback as a set of key-value pairs, where the key is a misunderstood question, and the value is the user’s feedback to correct that misunderstanding. Given a new question, we check if the model has made a mistake on a similar question earlier, by querying the memory for a similar question. If found, append the corresponding feedback to the question prompt. This mechanism aims to prevent the model from making the same type of mistake twice. This failure-driven reminding mechanism draws inspiration from the theory of recursive reminding in psychology Jacoby and Wahlheim (2013), which suggests humans index error corrections in the context in which those errors occurred. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
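The failure-driven memory described in the passage above can be pictured as a small key-value store over (misunderstood question, feedback) pairs. The snippet below is only an illustrative Python sketch, not the authors' implementation; the FeedbackMemory class name, the similarity callable, and the threshold value are assumptions made for clarity.

```python
# Illustrative sketch of a failure-driven feedback memory (not the authors' code).
# Keys are previously misunderstood questions, values are the user's corrective feedback.

class FeedbackMemory:
    def __init__(self, similarity, threshold=0.8):
        self.entries = []             # list of (question, feedback) pairs
        self.similarity = similarity  # any text-similarity function returning a score in [0, 1]
        self.threshold = threshold

    def write(self, question, feedback):
        """Store feedback given on a misunderstood question."""
        self.entries.append((question, feedback))

    def lookup(self, question):
        """Return feedback of the most similar stored question, if it is similar enough."""
        if not self.entries:
            return None
        best_q, best_fb = max(self.entries, key=lambda e: self.similarity(question, e[0]))
        return best_fb if self.similarity(question, best_q) >= self.threshold else None
```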
{
"id": "2201.06009_all_5",
"text": " This paper presents the general architecture for the system and provides representative implementations for each component. We then demonstrate the system on four tasks, using simulated user feedback: (1) lexical relations (e.g., antonyms, Figure 1), (2) word scrambling (e.g., anagrams), (3) ethical reasoning with user feedback being the appropriate class of ethical consideration, e.g., “it is about cheating”, using a small set of categories, and (4) ethics reasoning with user feedback being natural language. We find that in all cases, gpt-3’s accuracy significantly increases with time, without retraining, as our approach enables it to use corrective feedback from earlier examples to avoid similar misunderstandings on future examples. In summary, our contributions are: ∙∙\\bullet We show that a large model like gpt-3 can be improved after deployment, without retraining, through a memory-assisted architecture. ∙∙\\bullet Our implementation, \\ours, is the first demonstration that this is possible - this is an important step forward for real use of LMs, and the paper sets out a general architecture that others can build on, a specific implementation, and detailed evaluation on multiple tasks. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_6",
"text": " In Tandon et al. (2022), we show that using a memory of user feedback can be used to repair erroneous model in a supervised setting. In this work, we build upon the recent advances in few-shot prompting to modify gpt-3’s behavior by adding user feedback to the query (prompt). Like others, we use gpt-3 with few-shot prompting, where the prompt consists of a prefix prefix𝑝𝑟𝑒𝑓𝑖𝑥prefix containing a few input-output “training” examples of the task, followed by the input x𝑥x, e.g., a question, to operate on. However, while prior work has focused on constructing better prefixes, e.g., dynamically selecting good “training” examples based on the question Le Scao and Rush (2021); Liu et al. (2021a), or even representing the prefix latently Li and Liang (2021), our work elaborates the input x𝑥x itself to clarify the intended task, by adding user feedback fb𝑓𝑏fb from previous misunderstandings. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_7",
"text": " Similarly, our work can be seen as a form of retrieval-augmented QA. Extensive prior work has used retrievals from a text corpus to aid QA, e.g., Pan et al. (2019); Guu et al. (2020), or retrievals of prior QA pairs for nearest-neighbor QA (Khandelwal et al., 2020). In contrast, we retrieve from a dynamic memory of user feedback. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_8",
"text": " The idea of failure-driven reminding and dynamic memory date back several decades, e.g., Schank (1983); Riesbeck (1981). Our work resurrects these ideas in a modern context. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_9",
"text": " Learning from instruction has become important for large LMs that can perform a task based on direct instruction rather than examples Wei et al. (2021); Mishra et al. (2021). Our work extends this by adding an adaptive component when those instructions are misinterpreted. While it may not be possible for a user to provide meaningful feedback on the output itself, giving feedback on the understanding of the instruction is more feasible. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_10",
"text": " Our approach aims to modify the model’s behavior through prompting, given a wrong answer. An alternative, recently explored approach is “model editing” - updating the model itself by modifying its parameters to fix incorrect answers (Mitchell et al., 2021; De Cao et al., 2021; Hase et al., 2021). Model editing approaches have to date been limited due to uncontrollable out-of-scope changes Mitchell et al. (2021). In contrast, our goal is not just to correct a prediction, but to generalize that correction for new problems by collecting feedback to clarify the misunderstanding without damaging the model’s basic problem-solving acumen. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_11",
"text": " Finally, our work is a simple example of debugging and learning via dialog. While system debugging through dialogue has been explored in many contexts (Hixon et al., 2015; Wang et al., 2016; Davis, 1977), our contribution is a dialogue about the model’s understanding of the user’s intent. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_12",
"text": " In our setup, given an input 𝐱𝐱\\mathbf{x}, a model generates an output 𝐲𝐲\\mathbf{y} and a sentence 𝐮𝐮\\mathbf{u} expressing its understanding of the task, a skill learned through few-shot examples in the prompt (Appendix D). The user can then critique 𝐮𝐮\\mathbf{u} by providing natural language feedback 𝐟𝐛𝐟𝐛\\mathbf{fb}. This is feasible even if the user does not know the correctness of 𝐲𝐲\\mathbf{y} because they are critiquing the model’s understanding of their intent rather the answers themselves. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_13",
"text": " Given a new query, \\oursuses 𝐟𝐛𝐟𝐛\\mathbf{fb} from similar, prior queries to enrich the (few-shot) prompt 𝐩𝐩\\mathbf{p}. We use the principle that if two inputs xisubscript𝑥𝑖{x}_{i} and xjsubscript𝑥𝑗{x}_{j} are similar (i.e., xi∼xjsimilar-tosubscript𝑥𝑖subscript𝑥𝑗{x}_{i}\\sim{x}_{j}), then their feedback 𝐟𝐛isubscript𝐟𝐛𝑖\\mathbf{fb}_{i} and 𝐟𝐛jsubscript𝐟𝐛𝑗\\mathbf{fb}_{j} should be exchangeable (xi∼xj⇔fbi∼fbj)⇔similar-tosubscript𝑥𝑖subscript𝑥𝑗similar-to𝑓subscript𝑏𝑖𝑓subscript𝑏𝑗(x_{i}\\sim x_{j}\\Leftrightarrow fb_{i}\\sim fb_{j}). The underlying assumption here is that for a fixed model, similar inputs will incur similar errors, and thus can use the same feedback for correction. Fig. 2 gives an overview of \\ours, with the following components: ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_14",
"text": " : ℳℳ\\mathcal{M} is a growing table of key (𝐱isubscript𝐱𝑖\\mathbf{x}_{i}) - value (𝐟𝐛isubscript𝐟𝐛𝑖\\mathbf{fb}_{i}) pairs that supports read, write, and lookup operations. The write operation is used whenever a user gives new feedback. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_15",
"text": " : The memory allows lookup operations, denoted as ℳ(𝐱)ℳ𝐱\\mathcal{M}(\\mathbf{x}), that matches the query=𝐱𝐱\\mathbf{x} against all the keys of ℳℳ\\mathcal{M}. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_16",
"text": " : A gating function allowing irrelevant, retrieved feedback to be ignored. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_17",
"text": " Let us briefly recap few-shot prompting with gpt-3. Consider a general setup where given an input 𝐱𝐱\\mathbf{x}, a model is expected to generate an output 𝐲𝐲\\mathbf{y}. In a few-shot prompting mode (Brown et al., 2020), a prompt 𝐩𝐩\\mathbf{p} consists of k𝑘k (𝐱,𝐲)𝐱𝐲(\\mathbf{x},\\mathbf{y}) “in-context” examples, i.e., 𝐩=𝐱1.𝐲1#𝐱2.𝐲2…#𝐱k.𝐲kformulae-sequence𝐩subscript𝐱1subscript𝐲1#subscript𝐱2subscript𝐲2…#subscript𝐱𝑘subscript𝐲𝑘\\mathbf{p}=\\mathbf{x}_{1}.\\mathbf{y}_{1}\\#\\mathbf{x}_{2}.\\mathbf{y}_{2}\\ldots\\#\\mathbf{x}_{k}.\\mathbf{y}_{k}, where ##\\# is a token separating examples and . indicates concatenation. During inference, the user inputs a question 𝐱isubscript𝐱𝑖\\mathbf{x}_{i}, and the model is fed 𝐩#𝐱i𝐩#subscript𝐱𝑖\\mathbf{p}\\ \\#\\ \\mathbf{x}_{i} (i.e., the question suffixed to the prompt) and is expected to generate the answer 𝐲isubscript𝐲𝑖\\mathbf{y}_{i} as a continuation. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
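For concreteness, here is a minimal, hedged sketch of how such a few-shot prompt string could be assembled. The separator token and the way each x.y pair is rendered (plain whitespace joining) are illustrative choices, and the example question-answer pairs are hypothetical, not taken from the paper's prompts.

```python
# Sketch: assembling a few-shot prompt p = x1.y1 # x2.y2 ... # xk.yk # x_query,
# following the notation in the passage above. Example pairs are hypothetical.

def build_prompt(examples, query, sep=" # "):
    """examples: list of (x, y) in-context pairs; query: the new input x_i."""
    shots = sep.join(f"{x} {y}" for x, y in examples)
    return f"{shots}{sep}{query}"

prompt = build_prompt(
    [("What is the opposite of hot?", "cold"),
     ("What is the opposite of tall?", "short")],
    "What is the opposite of fast?",
)
print(prompt)  # the model would be asked to continue this string with the answer
```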
{
"id": "2201.06009_all_18",
"text": " As mentioned, given an input 𝐱𝐱\\mathbf{x}, we prompt the model to generate an output 𝐲𝐲\\mathbf{y} and a sentence 𝐮𝐮\\mathbf{u} expressing its understanding of the task. Thus, the in-context examples for \\oursare of the form 𝐱→𝐮,𝐲→𝐱𝐮𝐲\\mathbf{x}\\rightarrow\\mathbf{u},\\mathbf{y}. In addition to the input 𝐱𝐱\\mathbf{x}, \\oursretrieves a 𝐟𝐛𝐟𝐛\\mathbf{fb} if a question similar to 𝐱𝐱\\mathbf{x} has been asked before. To enable the model to react to such feedback, we also include examples of the form (𝐱,𝐟𝐛→𝐮,𝐲)formulae-sequence→𝐱𝐟𝐛𝐮𝐲(\\mathbf{x},\\mathbf{fb}\\rightarrow\\mathbf{u},\\mathbf{y}) in the prompt, which are aimed to teach the model to react to 𝐟𝐛𝐟𝐛\\mathbf{fb} (Appendix D). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_19",
"text": " Existing methods for receiving user feedback typically assume the user knows the correct answer 𝐲𝐲\\mathbf{y} Elgohary et al. (2021). This assumption is paradoxical: if the user knew the answer, why would they be using the model? Further, allowing only “oracle” users (who know correct 𝐲𝐲\\mathbf{y}) might lead to sampling biases. In real-world settings, it is common for users to not have the exact answer, but rather, a general understanding of what they are searching for. Thus, we propose eliciting a verbalization of task understanding 𝐮𝐮\\mathbf{u} from the model in addition to the answer. End users can thus critique 𝐮𝐮\\mathbf{u}. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_20",
"text": " We operationalize this idea by including task verbalization in the prompt (Fig. 3). Given a question What sounds like < sighted > ?, a vanilla prompting approach will generate the answer cited. In contrast, we include a 𝐮𝐮\\mathbf{u} the homophone for in the prompt. Large-scale language models, such as gpt-3, have been shown to excel at reasoning with a limited number of examples, making them well-suited to mimic the prompt and generate not only the answer, but also an understanding of the task at hand. Given a test question What sounds similar to < sighted > ?, if the model generates the word that has the same meaning as 𝐮𝐮\\mathbf{u}, the user has a reason to believe that the answer is wrong. Our experiments demonstrate that gpt-3 models are able to generate this additional information in all tasks presented. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_21",
"text": " Our approach is not foolproof— the model may spell out a wrong 𝐮𝐮\\mathbf{u} while giving out the correct answer, misleading the user into believing that there is an error (or vice-versa). Hallucinating remains a critical limitation of generative models Cao et al. (2022), therefore additional heuristics and model calibration might be necessary to make our approach foolproof. In practice, however, we found such cases to be rare for the tasks in this paper. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_22",
"text": " Once the feedback is received from the user, can the model successfully utilize it? By adding a few examples of the form 𝐱,𝐟𝐛→𝐮,𝐲formulae-sequence→𝐱𝐟𝐛𝐮𝐲\\mathbf{x},\\mathbf{fb}\\rightarrow\\mathbf{u},\\mathbf{y} in the prompt and setting 𝐟𝐛=𝐮𝐟𝐛𝐮\\mathbf{fb}=\\mathbf{u}, we force the model to use the task understanding present in the input when generating the output (Figure 4). Recently, it has been shown that such repetition plays a crucial role in the success of few-shot prompting models (Madaan and Yazdanbakhsh, 2022). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_23",
"text": " Within the setup 𝐱→𝐮,𝐲→𝐱𝐮𝐲\\mathbf{x}\\rightarrow\\mathbf{u},\\mathbf{y}, we focus on following two modes of failure: ∙∙\\bullet Task instruction understanding: this is especially concerning in a multi-tasking setup, where the model may consider the question to be about a different task than the one user intended. ∙∙\\bullet Task nuanced understanding: when the model understands the task type, but misunderstands the subtle intent in a question. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_24",
"text": " Our primary goal is to elicit feedback on the model’s understanding of the task, however, we also explore settings where an Oracle is available to provide feedback on the labels (as detailed in Section §4.3). Finally, we note again that the model reacts to the feedback because some in-context samples are of the form: (𝐱,𝐟𝐛→𝐮,𝐲)formulae-sequence→𝐱𝐟𝐛𝐮𝐲(\\mathbf{x},\\mathbf{fb}\\rightarrow\\mathbf{u},\\mathbf{y}). We consider a diverse set of tasks (𝐱→𝐲→𝐱𝐲\\mathbf{x}\\rightarrow\\mathbf{y}), 𝐟𝐛𝐟𝐛\\mathbf{fb} and 𝐮𝐮\\mathbf{u}, as summarized in Table 1. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_25",
"text": " We apply our approach to four tasks: (1) lexical relations (e.g., antonyms, Figure 1), (2) word scrambling (e.g., anagrams), (3) ethics (with user feedback being the appropriate class of ethical consideration), and (4) ethics (with user feedback being natural language). For all five tasks, the dataset consists of (𝐱,𝐟𝐛→𝐮,𝐲)formulae-sequence→𝐱𝐟𝐛𝐮𝐲(\\mathbf{x},\\mathbf{fb}\\rightarrow\\mathbf{u},\\mathbf{y}) tuples, where 𝐟𝐛𝐟𝐛\\mathbf{fb} clarifies the task in 𝐱𝐱\\mathbf{x}. We have a simulated conversational setting, in which a user can ask the model 𝐱𝐱\\mathbf{x} (covering any of these five tasks). If the model gives a wrong answer to query 𝐱𝐱\\mathbf{x}, then 𝐟𝐛𝐟𝐛\\mathbf{fb} is used as the simulated corrective feedback. The sources for these datasets are listed in Appendix §E. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_26",
"text": " The lexical relation task is to predict a word with a given lexical relationship to an input word. We use five relationships: synonym (syn), antonym (ant), homophone (hom), definition (defn), and sentence usage generation (sent). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_27",
"text": " For this task, given a word with its characters transformed, the model is expected to recover the original characters. There are four transformation operations the user can request: reversal of words (rev, yppup →→\\rightarrow puppy), cycle letters in word (cyc, atc →→\\rightarrow cat), random insertions (rand, c!r ic/ke!t→→\\rightarrow cricket), and anagrams by changing all but the first and last (anag1, eelhpnat →→\\rightarrow elephant) or all but the first and last 2 characters (anag2, elapehnt →→\\rightarrow elephant). We use the original dataset by Brown et al. (2020).222word scrambling dataset https://github.com/openai/gpt-3/tree/master/data ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_28",
"text": " For both these tasks, each question can be asked in multiple ways (e.g., for synonym generation, the users might ask questions of the form what is like, what has a similar sense, what is akin to, what is something like, etc.) Similarly for the lexical relations task, we specify the task description x𝑥x using different phrasings, e.g., “rearrange the letters” (which the system sometimes misunderstands), and the (simulated) user feedback fb𝑓𝑏fb is a clearer task description, e.g., “The anagram is”. The system thus accumulates a set of (x𝑥x, fb𝑓𝑏fb) pairs in memory after each failure, helping it avoid future misunderstandings of x𝑥x through feedback retrieval. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_29",
"text": " For ethical reasoning, we consider a setup where given a situation (e.g., cheating on your partner), the model is expected to provide a judgment on whether the situation is ethical or not (e.g., it’s not okay). In addition to providing a judgment on the ethics of the situation, the model also elucidates its understanding of what the question is about (e.g., being loyal). While the user may not know the answer, we posit that they would be able to provide feedback on the broader context. For example, if the model generates being financially savvy instead of being loyal for the situation cheating on your partner, a user can still point out this problem and provide feedback. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_30",
"text": " We use a subset 333social norms dataset (social-chemistry-101, Forbes et al. (2020)) https://github.com/mbforbes/social-chemistry-101 of the dataset provided by delphi (Jiang et al., 2021). We simulate two different kinds of user feedback, using two of the annotations attached to each example in the Delphi dataset: ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_31",
"text": " ∙∙\\bullet Categorical feedback (ert-cat): In this setting, the model generates its understanding u𝑢u of the situation by selecting one of 10 different possible categories of morality to which the situation might belong: care, loyalty, authority, fairness, sanctity, degradation, cheating, subversion, betrayal, and harm. These categories are explicitly provided for each example in the Delphi dataset. ∙∙\\bullet Natural language feedback (ert-nl): For this, we use the associated “rule of thumb” (RoT) annotation —a general moral principle — attached to each example in the Delphi dataset. To compile a challenging subset of the data for ert-nl, we sample by input length, preferring long 𝐱𝐱\\mathbf{x}, with a short feedback 𝐟𝐛𝐟𝐛\\mathbf{fb}. Specifically, we use the top 1% of the inputs by length to create a challenging set of input situations (𝐱𝐱\\mathbf{x}). User feedback 𝐟𝐛𝐟𝐛\\mathbf{fb} is a natural language feedback on the understanding 𝐮𝐮\\mathbf{u}. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_32",
"text": " In both the cases, the model is “taught” to generate a category 𝐮𝐮\\mathbf{u} (as well as the okay/not-okay answer 𝐲𝐲\\mathbf{y} to the ethical question) by being given a few examples in the prompt prefix, thus articulating which moral category (for ert-cat) or rule-of-thumb (for ert-nl) it thinks is applicable. The simulated feedback 𝐟𝐛𝐟𝐛\\mathbf{fb} is the gold category associated with the example in the question, if gpt-3 gets the answer wrong. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_33",
"text": " We selected these tasks because situations that involve reasoning about similar ethical principles can utilize similar past feedback. For example, sharing an extra umbrella with your friend if they don’t have one, and donating surplus food to the homeless both involve compassion. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_34",
"text": " ℳℳ\\mathcal{M} uses the user input 𝐱𝐱\\mathbf{x} as the key and the corresponding feedback 𝐟𝐛𝐟𝐛\\mathbf{fb} as value. Given a question 𝐱isubscript𝐱𝑖\\mathbf{x}_{i}, if the user detects that the model has misunderstood the question, they may provide a 𝐟𝐛isubscript𝐟𝐛𝑖\\mathbf{fb}_{i} with clarification probability Pr(𝐟𝐛i)𝑃𝑟subscript𝐟𝐛𝑖Pr(\\mathbf{fb}_{i}). The (𝐱isubscript𝐱𝑖\\mathbf{x}_{i}, 𝐟𝐛isubscript𝐟𝐛𝑖\\mathbf{fb}_{i}) pair is stored in a memory ℳℳ\\mathcal{M}, with 𝐱isubscript𝐱𝑖\\mathbf{x}_{i} as the key and 𝐟𝐛isubscript𝐟𝐛𝑖\\mathbf{fb}_{i} as the value. For a subsequent question 𝐱jsubscript𝐱𝑗\\mathbf{x}_{j}, the retriever ℳ(𝐱)ℳ𝐱\\mathcal{M}(\\mathbf{x}) checks if a similar question appears in memory. If yes, then the corresponding feedback is attached with the question and fed to the model for generation. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
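The simulated interaction loop with clarification probability described above could look like the sketch below. This is an assumption-laden illustration: `model` is any callable returning an (understanding, answer) pair, `memory` is a store like the one sketched earlier, and `stream` stands in for a simulated user whose intended understanding is known; none of these names come from the paper's code.

```python
import random

def simulate(model, memory, stream, pr_clarify=0.5):
    """Illustrative interaction loop (not the authors' code). `stream` yields
    (question, intended_understanding) pairs; pr_clarify plays the role of Pr(fb)."""
    for question, intended in stream:
        fb = memory.lookup(question)           # reuse feedback from similar past questions
        understanding, answer = model(question, fb)
        if understanding != intended and random.random() < pr_clarify:
            memory.write(question, intended)   # user clarifies the misunderstood intent
```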
{
"id": "2201.06009_all_35",
"text": " For example, a question asking for a synonym, such as what is akin to fast? might be misinterpreted as a request for antonyms. As mentioned, in our setup, the model generates its understanding of the task 𝐮𝐮\\mathbf{u}, and not just the answer to the question. The user, by inspecting 𝐮𝐮\\mathbf{u} = The opposite of fast is: might determine that the model has misunderstood them, and give feedback i wanted a synonym, which gets stored in ℳℳ\\mathcal{M}. If a similar question (e.g., what is akin to pretty ?) is asked later by the same or a different user, the corresponding feedback (i wanted a synonym) is attached with the question to generate the answer. Figure 5 illustrates a sample memory for this task. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_36",
"text": " A retrieved past feedback that is incorrect might cause the model to make a mistake, thus necessitating a good retrieval function. We propose a two-stage method for effective retrieval involving: transforming 𝐱𝐱\\mathbf{x}, followed by a similarity lookup of the transformed 𝐱𝐱\\mathbf{x} in ℳℳ\\mathcal{M}. When the task involves high surface-level similarity among past feedback, such as in lexical word tasks, then a simple heuristic-based transformation is sufficient. However, such simple transformations are insufficient for tasks that involves more complex retrieval e.g., when two lexically dissimilar situations can share the same understanding. For example, consider two situations from ert-nl: Filling a false time sheet at work and Being at a party, and telling parents I am studying. These situations look lexically dissimilar but correspond to the same underlying social principle lying to authority. In our experiments, off-the-shelf methods failed to address these challenges (see §4 later). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_37",
"text": " To address these challenges with transformation in complex tasks, we have designed a novel seq2seq based transformation called gud-ir. Given 𝐱𝐱\\mathbf{x}, gud-ir generates a transformed feedback 𝐟𝐛^^𝐟𝐛\\hat{\\mathbf{fb}} for 𝐱𝐱\\mathbf{x} using a generative seq2seq model. Our approach is inspired and supported by the recent success of generate and retrieve Mao et al. (2021) methods. However, despite the similarity, the methods have different goals: Mao et al. (2021) leverage generative models for query expansion, whereas our goal is explainable input understanding. See Appendix B for more details on gud-ir. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_38",
"text": " After the transformation stage, the closest matching entry is then used as the corresponding 𝐟𝐛𝐟𝐛\\mathbf{fb}. Transformation reduces ℳ(𝐱)ℳ𝐱\\mathcal{M}(\\mathbf{x}) to a search over 𝐟𝐛1,𝐟𝐛2,…,𝐟𝐛|ℳ|subscript𝐟𝐛1subscript𝐟𝐛2…subscript𝐟𝐛ℳ\\mathbf{fb}_{1},\\mathbf{fb}_{2},\\ldots,\\mathbf{fb}_{|\\mathcal{M}|} with 𝐟𝐛^^𝐟𝐛\\hat{\\mathbf{fb}} as the search query. We compute similarity based on a fine-tuned Sentence transformers (Reimers and Gurevych, 2019). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
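The second retrieval stage could be approximated with off-the-shelf sentence embeddings, as in the sketch below. It uses the sentence-transformers package with a stock pretrained encoder; the model name and the exact scoring are illustrative assumptions, whereas the paper uses a fine-tuned model.

```python
# Sketch of the similarity search over stored feedback using sentence embeddings.
# Assumes the sentence-transformers package is installed; model choice is a placeholder.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_feedback(fb_hat, memory_feedback):
    """fb_hat: transformed query; memory_feedback: list of stored feedback strings."""
    query_emb = encoder.encode(fb_hat, convert_to_tensor=True)
    corpus_emb = encoder.encode(memory_feedback, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]   # cosine similarity to each entry
    best = int(scores.argmax())
    return memory_feedback[best], float(scores[best])
```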
{
"id": "2201.06009_all_39",
"text": " 𝒞𝒞\\mathcal{C} concatenates 𝐱𝐱\\mathbf{x} with relevant 𝐟𝐛𝐟𝐛\\mathbf{fb} retrieved by ℳ(𝐱)ℳ𝐱\\mathcal{M}(\\mathbf{x}). To ensure that the 𝐱𝐱\\mathbf{x} is appended with 𝐟𝐛𝐟𝐛\\mathbf{fb} only if it is relevant, our current implementation of combiner uses a threshold on the similarity score between the 𝐱𝐱\\mathbf{x} and the closest feedback 𝐟𝐛𝐟𝐛\\mathbf{fb} retrieved by ℳ(𝐱)ℳ𝐱\\mathcal{M}(\\mathbf{x}). We rely on the model (gpt-3) to pay attention to the relevant parts of the input. Exploring more complex gating mechanisms remains an important future work. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
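The threshold-based gating in the combiner is the part that drops irrelevant retrievals; a minimal sketch is shown below. The threshold value and the bracketed clarification format are assumptions for illustration, not the prompt format used in the paper.

```python
# Sketch of the combiner's gate: attach retrieved feedback to the input only when
# it is similar enough to the question, otherwise ignore it.

def combine(x, retrieved_fb, score, threshold=0.75):
    """x: user question; retrieved_fb: closest feedback from memory; score: its similarity."""
    if retrieved_fb is None or score < threshold:
        return x                                       # gate closed: feedback is dropped
    return f"{x} [clarification: {retrieved_fb}]"      # gate open: feedback is appended
```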
{
"id": "2201.06009_all_40",
"text": " We compare \\ours(memory-assisted prompt editing) with two baselines: ∙∙\\bullet no-mem This is the standard gpt-3 444We use gpt-3-175b (davinci) for all experiments. in few-shot prompting mode (hyper-parameters listed in Appendix §C). Input is 𝐩#𝐱i𝐩#subscript𝐱𝑖\\mathbf{p}\\ \\#\\ \\mathbf{x}_{i} (i.e., question 𝐱isubscript𝐱𝑖\\mathbf{x}_{i} appended to prompt 𝐩𝐩\\mathbf{p}). It generates answer 𝐲isubscript𝐲𝑖\\mathbf{y}_{i} and its understanding of the user’s intent 𝐮isubscript𝐮𝑖\\mathbf{u}_{i}. ∙∙\\bullet grow-prompt: Similar to no-mem, but the 𝐩𝐩\\mathbf{p} is continuously grown with a subset of memory ℳℳ\\mathcal{M} that can fit within the prompt (max. 2048 tokens). The most recent subset of ℳℳ\\mathcal{M} of memory inserted is inserted in the prompt. The ethical reasoning tasks (ert) involve long examples, and the initial prompt itself takes close to the max allowed tokens. Thus, the grow-prompt setup is only provided for the lexical relations and word scrambling tasks. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_41",
"text": " We use two different metrics: ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_42",
"text": " ∙∙\\bullet Acc(𝐲)𝐴𝑐𝑐𝐲Acc(\\mathbf{y}): % of cases where answer matched the ground truth. ∙∙\\bullet Acc(𝐮)𝐴𝑐𝑐𝐮Acc(\\mathbf{u}): % of cases where the model’s understanding of user’s intent is correct. Acc(𝐮)𝐴𝑐𝑐𝐮Acc(\\mathbf{u}) is also referred to as instruction accuracy. As discussed in §3.4, depending on the task, the model generates its understanding on either the instruction or semantics of the question. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_43",
"text": " In real-world cases, we cannot expect a user to provide feedback for all the examples (e.g., the user might not know that the understanding of the model is wrong). To simulate this realistic setting, we experiment with various values of clarification probabilities Pr𝑃𝑟Pr. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_44",
"text": " Does pairing gpt-3 with \\ourshelp? §4.1.1 empirically validates this on ethical reasoning tasks and §4.1.2 on word reasoning tasks. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_45",
"text": " Table 2 presents results on the delphi dataset (1,000 points in the test set). Recall from §3.5 that there are two kinds of feedback on delphi questions: cat and nl feedback. \\oursgets over 25% relative improvement for both ert-nl and ert-cat. We found that having an efficient retriever was critical for ert-nl: sentence transformer based retriever scored 38.5, vs. 45.2 using gud-ir, a 17% improvement. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_46",
"text": " Figure 7 demonstrates that the instruction accuracy increases over time for different values of clarification probability. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_47",
"text": " Fig. 6 shows that label accuracy improves over time. Baseline (no-mem) saturates after 200 time steps; \\ourscontinues to improve. Continuous improvement is one of our key advantages. These charts show that instruction accuracy and label accuracy are correlated (corr. coeff = 0.36). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_48",
"text": " We observe that using a higher clarification probability leads to a sharp increase in instruction and label accuracy early on in the training for both ert-cat and ert-nl. This is because a higher clarification probability causes the feedback memory to fill up more quickly, providing more feedback for new questions. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_49",
"text": " In ert nl and cat tasks, a primary source of label errors is confusion between labels such as okay and good due to the nuanced differences e.g., input = teaching your child a musical instrument. \\ourspredicts good, but the expected answer is okay. Jiang et al. (2021) make similar observations. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_50",
"text": " We randomly sampled examples from the ert-nl dev set where the model generates an incorrect understanding (i.e., Acc(𝐮)=0𝐴𝑐𝑐𝐮0Acc(\\mathbf{u})=0 based on exact match). Our goal is to understand the typical errors made by the model and use the analysis to calibrate the findings in Table 2. We select ert-nl for the analysis because it involves free-form natural language which is difficult to study quantitatively. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_51",
"text": " ∙∙\\bullet Correct, lexically variant understanding (30%): Exact match underestimates model performance (as the task involves generation). ∼similar-to\\sim 30% 𝐮𝐮\\mathbf{u} is a lexical variation of the reference gold understanding. E.g., telling a spouse your true feeling vs. loving your partner. The generated label in these 30% cases is still correct. (Table 3, row 1) ∙∙\\bullet Distracted understanding (50%): A major source of instruction and label errors is the model getting distracted by an unimportant context. Bad retrieval accounts for 30% errors within this category, e.g., matching a situation in the memory where the expected understanding is only partially applicable to the query. (Table 3, row 2) ∙∙\\bullet Retrieval failures (18%): These errors are caused by an irrelevant retrieved understanding from the memory , when using a state-of-the-art retrieval method (Table 3, row 3). gud-ir helps to reduce these retrieval failures. See Appendix §B. Table 3 presents canonical examples of these error categories. We also find that over time, more relevant past examples are fetched (see Table 7). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_52",
"text": " For these tasks, we compare gold 𝐮∗superscript𝐮\\mathbf{u}^{*} and generated 𝐮𝐮\\mathbf{u} based on hard-coded linguistic variations (e.g., the antonym is matches the opposite is). While we do not explicitly evaluate task accuracy, we observe a near-perfect correlation between the accuracy of 𝐲𝐲\\mathbf{y} and 𝐮𝐮\\mathbf{u} (i.e., if the gpt-3 understands the task correctly, the output was almost always correct). This shows improving model’s understanding of a task might lead to an improved performance. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_53",
"text": " Figure 8 reports the overall performance on the word reasoning tasks. The accuracy improves substantially within 300 examples when using memory (in yellow) vs. no memory (in blue). Note that our approach operates in a few-shot learning regime, where there is no pre-existing training data available. The only examples provided to the model are through the prompt. The performance of grow-prompt (red) lies in between, showing that non-selective memory is partially helpful, although not as effective as failure-driven retrieval (our model). However, grow-prompt is ∼similar-to\\sim 3x more expensive (larger prompts) and cannot scale beyond the 2048 tokens limit. We also found that the retrieved feedback from memory was effective 97% of the time; only in ≈\\approx 3% of cases feedback had no positive effect. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_54",
"text": " When the memory is used for every example (green line, Fig 8, top), the performance improves quickly vs. the yellow line (Pr(𝐟𝐛i)𝑃𝑟subscript𝐟𝐛𝑖Pr(\\mathbf{fb}_{i}) = 0.5). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_55",
"text": " Recent work such as Liu et al. (2021a) investigate using dynamic prompts for better generation. For a given input 𝐱𝐱\\mathbf{x}, their method( kate) relies on retrieving examples from the training set that are similar to 𝐱𝐱\\mathbf{x} for dynamically creating the prompt 𝐩𝐩\\mathbf{p}. Note that our method edits 𝐱𝐱\\mathbf{x} with a feedback 𝐟𝐛𝐟𝐛\\mathbf{fb}, and is thus complementary to kate. To demonstrate this, we conduct experiments on ert-cat and ert-nl tasks, where dynamic prompts were created using kate, and \\ourswas used to attach feedback to the question. Our results show a consistent 10% improvement when using both kate and \\ours, indicating that the improvements are complementary. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_56",
"text": " \\ours requires the model to verbalize its understanding of the question, on which a user provides feedback. To investigate the efficacy of \\oursin settings where generating an understanding is not easy, we experiment with factual question answering on the webqa dataset (Berant et al., 2013), and find that \\oursis effective even with label feedback (Appendix §F). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_57",
"text": " We demonstrate an application of \\oursfor personalization with a use-case where user language preferences can be folded in the memory. We simulate a user who does not speak fluent English and uses code-mixed language. The queries posed by the user contain words from two Indian languages: Hindi and Punjabi. gpt-3 predictably misunderstands the task. The user clarifies the meanings of their dialect/language phrases. While initial queries fail, subsequent queries that reuse similar words succeed because their clarifications are present in the memory (details in Appendix §G). ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_58",
"text": " We present \\ours, a novel, memory-enhanced gpt-3 that allows users to interact and improve the model without retraining. A key insight is to have the model articulate not just its answer but also its understanding of the user’s intent, providing an avenue for feedback. We show that deployed systems with fixed large-language models can still be improved by interacting with end-users, potentially improving their performance and broadening their utility. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_59",
"text": " We have shown how to improve very large models through interaction. Our memory-based enhancement is a low-cost utility enhancement eventually geared towards personalized, correctable models, which is currently an open question in NLP with unresolved issues. While our method is a step toward a promising open direction, it comes with limitations and opportunities when deploying to the real world. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_60",
"text": " In practical deployments of the \\oursmethod, the memory can grow to orders of magnitude, introducing scaling challenges. We anticipate using memory as a buffer between cycles of re-training, and these cycles could range from a week to several months. Between cycles of re-training, \\ourscan serve as a way to avoid repeating mistakes and collect feedback which can be used to fine-tune and improve the next version of the model. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_61",
"text": " Currently, we operate with a single user at a time, but a real-world deployment could encounter multiple users. These users could exhibit characteristics of a user community where some feedback could apply to multiple users in a community cluster, while others differ in interpretation and style. In such a multi-user environment, managing the memory effectively when dealing with incompatible entries would be important. Existing initial ideas towards managing a bank of beliefs could be extended to address these problems, e.g., Kassner et al. (2021). In addition, when looking up such a rich and potentially noisy feedback collection, rather than retrieving a single feedback item, it would help to have an adapter over the memory that generates feedback by adapting the existing, diverse, and related past feedback to the current scenario. This increases the diversity of the generated knowledge and reduces the impact of erroneous feedback and noise. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_62",
"text": " Extending the discussion on noise in feedback, our setting assumes that users will not provide any adversarial feedback. However, in real-world environments, this assumption is unlikely to hold. Additionally, there is a risk in the real-world deployment of our system, wherein an adversarial user might provide harmful feedback, thus maliciously controlling the systems (potentially a home-based robot) where our method is deployed. Thus, robust mechanisms such as gud-ir and memory adapters will be critical for successful real-world deployments. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_63",
"text": " Privacy is another ethical concern, as the deployed system collects and records feedback from a user, some of which could contain personal information (when I look for an interesting movie, I mean something that contains romance). Therefore, the system needs to win the trust of the users so they would be encouraged to interact closely, and to win this trust, the system needs to demonstrate smartness, receptivity to user feedback, and the ability to maintain the memory without leaking any personal information safely. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
},
{
"id": "2201.06009_all_64",
"text": " Finally, large-language models generate text that might be biased and insensitive to a user’s socio-cultural context (Bordia and Bowman, 2019; Sharma et al., 2021; Hovy and Prabhumoye, 2021). In a multi-user deployment of our system, the memory could contain feedback from user communities of diverse beliefs, gender identities, and cultural backgrounds could lead to conflicts. Thus the system will need checks and balances to ensure that the content produced by the system as a result of the feedback is not harmful. ",
"title": "MemPrompt: Memory-assisted Prompt Editing with User Feedback"
}
] |
Which specific metrics are improved when increasing attention modules?
|
The Top-1 and Top-5 error metrics are improved when increasing attention modules [27].
|
[
27
] |
[
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representations of objects at that location. Previous works formulate attention drift as a sequential process to capture different attended aspects. However, as far as we know, no attention mechanism has been applied to feedforward network structure to achieve state-of-art results in image classification task. Recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_1",
"text": " Inspired by the attention mechanism and recent advances in the deep neural network, we propose Residual Attention Network, a convolutional network that adopts mixed attention mechanism in “very deep” structure. The Residual Attention Network is composed of multiple Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_2",
"text": " Apart from more discriminative feature representation brought by the attention mechanism, our model also exhibits following appealing properties: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_3",
"text": " (1) Increasing Attention Modules lead to consistent performance improvement, as different types of attention are captured extensively. Fig.1 shows an example of different types of attentions for a hot air balloon image. The sky attention mask diminishes background responses while the balloon instance mask highlighting the bottom part of the balloon. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_4",
"text": " (2) It is able to incorporate with state-of-the-art deep network structures in an end-to-end training fashion. Specifically, the depth of our network can be easily extended to hundreds of layers. Our Residual Attention Network outperforms state-of-the-art residual networks on CIFAR-10, CIFAR-100 and challenging ImageNet image classification dataset with significant reduction of computation (69% forward FLOPs). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_5",
"text": " All of the aforementioned properties, which are challenging to achieve with previous approaches, are made possible with following contributions: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_6",
"text": " (1) Stacked network structure: Our Residual Attention Network is constructed by stacking multiple Attention Modules. The stacked structure is the basic application of mixed attention mechanism. Thus, different types of attention are able to be captured in different Attention Modules. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_7",
"text": " (2) Attention Residual Learning: Stacking Attention Modules directly would lead to the obvious performance drop. Therefore, we propose attention residual learning mechanism to optimize very deep Residual Attention Network with hundreds of layers. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_8",
"text": " (3) Bottom-up top-down feedforward attention: Bottom-up top-down feedforward structure has been successfully applied to human pose estimation and image segmentation (22, 25, 1). We use such structure as part of Attention Module to add soft weights on features. This structure can mimic bottom-up fast feedforward process and top-down attention feedback in a single feedforward process which allows us to develop an end-to-end trainable network with top-down attention. The bottom-up top-down structure in our work differs from stacked hourglass network in its intention of guiding feature learning. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_9",
"text": " Evidence from human perception process shows the importance of attention mechanism, which uses top information to guide bottom-up feedforward process. Recently, tentative efforts have been made towards applying attention into deep neural network. Deep Boltzmann Machine (DBM) contains top-down attention by its reconstruction process in the training stage. Attention mechanism has also been widely applied to recurrent neural networks (RNN) and long short term memory (LSTM) to tackle sequential decision tasks (25, 29, 21, 18). Top information is gathered sequentially and decides where to attend for the next feature learning steps. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_10",
"text": " Residual learning is proposed to learn residual of identity mapping. This technique greatly increases the depth of feedforward neuron network. Similar to our work, (25, 29, 21, 18) use residual learning with attention mechanism to benefit from residual learning. Two information sources (query and query context) are captured using attention mechanism to assist each other in their work. While in our work, a single information source (image) is split into two different ones and combined repeatedly. And residual learning is applied to alleviate the problem brought by repeated splitting and combining. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_11",
"text": " In image classification, top-down attention mechanism has been applied using different methods: sequential process, region proposal and control gates. Sequential process (23, 12, 37, 7) models image classification as a sequential decision. Thus attention can be applied similarly with above. This formulation allows end-to-end optimization using RNN and LSTM and can capture different kinds of attention in a goal-driven way. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_12",
"text": " Region proposal (26, 4, 8, 38) has been successfully adopted in image detection task. In image classification, an additional region proposal stage is added before feedforward classification. The proposed regions contain top information and are used for feature learning in the second stage. Unlike image detection whose region proposals rely on large amount of supervision, e.g. the ground truth bounding boxes or detailed segmentation masks , unsupervised learning is usually used to generate region proposals for image classification. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_13",
"text": " Control gates have been extensively used in LSTM. In image classification with attention, control gates for neurones are updated with top information and have influence on the feedforward process during training (2, 30). However, a new process, reinforcement learning or optimization is involved during the training step. Highway Network extends control gate to solve gradient degradation problem for deep convolutional neural network. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_14",
"text": " However, recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). The feedforward convolutional network mimics the bottom-up paths of human cortex. Various approaches have been proposed to further improve the discriminative ability of deep convolutional neural network. VGG , Inception and residual learning are proposed to train very deep neural networks. Stochastic depth , Batch Normalization and Dropout exploit regularization for convergence and avoiding overfitting and degradation. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_15",
"text": " Soft attention developed in recent work (3, 17) can be trained end-to-end for convolutional network. Our Residual Attention Network incorporates the soft attention in fast developing feedforward network structure in an innovative way. Recent proposed spatial transformer module achieves state-of-the-art results on house number recognition task. A deep network module capturing top information is used to generate affine transformation. The affine transformation is applied to the input image to get attended region and then feed to another deep network module. The whole process can be trained end-to-end by using differentiable network layer which performs spatial transformation. Attention to scale uses soft attention as a scale selection mechanism and gets state-of-the-art results in image segmentation task. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_16",
"text": " The design of soft attention structure in our Residual Attention Network is inspired by recent development of localization oriented task, i.e. segmentation (22, 25, 1) and human pose estimation . These tasks motivate researchers to explore structure with fined-grained feature maps. The frameworks tend to cascade a bottom-up and a top-down structure. The bottom-up feedforward structure produces low resolution feature maps with strong semantic information. After that, a top-down network produces dense features to inference on each pixel. Skip connection is employed between bottom and top feature maps and achieved state-of-the-art result on image segmentation. The recent stacked hourglass network fuses information from multiple scales to predict human pose, and benefits from encoding both global and local information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_17",
"text": " Our Residual Attention Network is constructed by stacking multiple Attention Modules. Each Attention Module is divided into two branches: mask branch and trunk branch. The trunk branch performs feature processing and can be adapted to any state-of-the-art network structures. In this work, we use pre-activation Residual Unit , ResNeXt and Inception as our Residual Attention Networks basic unit to construct Attention Module. Given trunk branch output T(x)𝑇𝑥T(x) with input x𝑥x, the mask branch uses bottom-up top-down structure (22, 25, 1, 24) to learn same size mask M(x)𝑀𝑥M(x) that softly weight output features T(x)𝑇𝑥T(x). The bottom-up top-down structure mimics the fast feedforward and feedback attention process. The output mask is used as control gates for neurons of trunk branch similar to Highway Network . The output of Attention Module H𝐻H is: Hi,c(x)=Mi,c(x)∗Ti,c(x)subscript𝐻𝑖𝑐𝑥subscript𝑀𝑖𝑐𝑥subscript𝑇𝑖𝑐𝑥H_{i,c}(x)=M_{i,c}(x)*T_{i,c}(x) (1) where i ranges over all spatial positions and c∈{1,…,C}𝑐1…𝐶c\\in\\{1,...,C\\} is the index of the channel. The whole structure can be trained end-to-end. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_18",
"text": " In Attention Modules, the attention mask can not only serve as a feature selector during forward inference, but also as a gradient update filter during back propagation. In the soft mask branch, the gradient of mask for input feature is: ∂M(x,θ)T(x,ϕ)∂ϕ=M(x,θ)∂T(x,ϕ)∂ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ\\frac{\\partial M(x,\\theta)T(x,\\phi)}{\\partial\\phi}=M(x,\\theta)\\frac{\\partial T(x,\\phi)}{\\partial\\phi} (2) where the θ𝜃\\theta are the mask branch parameters and the ϕitalic-ϕ\\phi are the trunk branch parameters. This property makes Attention Modules robust to noisy labels. Mask branches can prevent wrong gradients (from noisy labels) to update trunk parameters. Experiment in Sec.4.1 shows the robustness of our Residual Attention Network against noisy labels. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_19",
"text": " Instead of stacking Attention Modules in our design, a simple approach would be using a single network branch to generate soft weight mask, similar to spatial transformer layer . However, these methods have several drawbacks on challenging datasets such as ImageNet. First, images with clutter background, complex scenes, and large appearance variations need to be modeled by different types of attentions. In this case, features from different layers need to be modeled by different attention masks. Using a single mask branch would require exponential number of channels to capture all combinations of different factors. Second, a single Attention Module only modify the features once. If the modification fails on some parts of the image, the following network modules do not get a second chance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_20",
"text": " The Residual Attention Network alleviates above problems. In Attention Module, each trunk branch has its own mask branch to learn attention that is specialized for its features. As shown in Fig.1, in hot air balloon images, blue color features from bottom layer have corresponding sky mask to eliminate background, while part features from top layer are refined by balloon instance mask. Besides, the incremental nature of stacked network structure can gradually refine attention for complex images. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_21",
"text": " However, naive stacking Attention Modules leads to the obvious performance drop. First, dot production with mask range from zero to one repeatedly will degrade the value of features in deep layers. Second, soft mask can potentially break good property of trunk branch, for example, the identical mapping of Residual Unit. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_22",
"text": " We propose attention residual learning to ease the above problems. Similar to ideas in residual learning, if soft mask unit can be constructed as identical mapping, the performances should be no worse than its counterpart without attention. Thus we modify output H𝐻H of Attention Module as Hi,c(x)=(1+Mi,c(x))∗Fi,c(x)subscript𝐻𝑖𝑐𝑥1subscript𝑀𝑖𝑐𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=(1+M_{i,c}(x))*F_{i,c}(x) (3) M(x)𝑀𝑥M(x) ranges from (0,1)01(0,1), with M(x)𝑀𝑥M(x) approximating 0, H(x)𝐻𝑥H(x) will approximate original features F(x)𝐹𝑥F(x). We call this method attention residual learning. Our stacked attention residual learning is different from residual learning. In the origin ResNet, residual learning is formulated as Hi,c(x)=x+Fi,c(x)subscript𝐻𝑖𝑐𝑥𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=x+F_{i,c}(x), where Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) approximates the residual function. In our formulation, Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) indicates the features generated by deep convolutional networks. The key lies on our mask branches M(x)𝑀𝑥M(x). They work as feature selectors which enhance good features and suppress noises from trunk features. In addition, stacking Attention Modules backs up attention residual learning by its incremental nature. Attention residual learning can keep good properties of original features, but also gives them the ability to bypass soft mask branch and forward to top layers to weaken mask branch’s feature selection ability. Stacked Attention Modules can gradually refine the feature maps. As show in Fig.1, features become much clearer as depth going deeper. By using attention residual learning, increasing depth of the proposed Residual Attention Network can improve performance consistently. As shown in the experiment section, the depth of Residual Attention Network is increased up to 452 whose performance surpasses ResNet-1001 by a large margin on CIFAR dataset. ",
"title": "Residual Attention Network for Image Classification"
},
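The attention residual learning rule above (Eq. 3) can be illustrated with a minimal PyTorch sketch; the trunk and mask branches below are deliberately simplified stand-ins (the paper uses Residual Units and a bottom-up top-down mask branch), so the layer choices are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class AttentionResidualModule(nn.Module):
    """Sketch of H(x) = (1 + M(x)) * F(x) with toy trunk/mask branches."""
    def __init__(self, channels):
        super().__init__()
        # Trunk branch F(x): ordinary feature processing.
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Soft mask branch M(x): values in (0, 1) via a sigmoid.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        F_x = self.trunk(x)
        M_x = self.mask(x)
        # A mask near 0 leaves the trunk features approximately unchanged,
        # so stacking modules no longer attenuates the signal.
        return (1.0 + M_x) * F_x

# The "naive attention learning" baseline would instead return M_x * F_x.
```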
{
"id": "1704.06904_all_23",
"text": " Following previous attention mechanism idea in DBN , our mask branch contains fast feed-forward sweep and top-down feedback steps. The former operation quickly collects global information of the whole image, the latter operation combines global information with original feature maps. In convolutional neural network, the two steps unfold into bottom-up top-down fully convolutional structure. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_24",
"text": " From input, max pooling are performed several times to increase the receptive field rapidly after a small number of Residual Units. After reaching the lowest resolution, the global information is then expanded by a symmetrical top-down architecture to guide input features in each position. Linear interpolation up sample the output after some Residual Units. The number of bilinear interpolation is the same as max pooling to keep the output size the same as the input feature map. Then a sigmoid layer normalizes the output range to (0,1)01(0,1) after two consecutive 1×1111\\times 1 convolution layers. We also added skip connections between bottom-up and top-down parts to capture information from different scales. The full module is illustrated in Fig.2. ",
"title": "Residual Attention Network for Image Classification"
},
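A rough sketch of the bottom-up top-down soft mask branch described above, assuming two down-sampling steps, plain convolution blocks in place of the paper's Residual Units, and standard PyTorch ops; the exact number of scales and units is a simplifying assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(channels):
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1),
        nn.ReLU(inplace=True),
    )

class SoftMaskBranch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.down1 = conv_block(channels)
        self.down2 = conv_block(channels)
        self.up1 = conv_block(channels)
        self.up2 = conv_block(channels)
        self.skip = conv_block(channels)
        self.out = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),  # normalises the mask to (0, 1)
        )

    def forward(self, x):
        size1 = x.shape[-2:]
        d1 = self.down1(F.max_pool2d(x, 2))       # first down-sampling
        size2 = d1.shape[-2:]
        d2 = self.down2(F.max_pool2d(d1, 2))      # lowest resolution
        u1 = F.interpolate(self.up1(d2), size=size2,
                           mode="bilinear", align_corners=False)
        u1 = u1 + self.skip(d1)                   # skip connection across scales
        u2 = F.interpolate(self.up2(u1), size=size1,
                           mode="bilinear", align_corners=False)
        return self.out(u2)                       # soft mask M(x) in (0, 1)
```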
{
"id": "1704.06904_all_25",
"text": " The bottom-up top-down structure has been applied to image segmentation and human pose estimation. However, the difference between our structure and the previous one lies in its intention. Our mask branch aims at improving trunk branch features rather than solving a complex problem directly. Experiment in Sec.4.1 is conducted to verify above arguments. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_26",
"text": " In our work, attention provided by mask branch changes adaptably with trunk branch features. However, constrains to attention can still be added to mask branch by changing normalization step in activation function before soft mask output. We use three types of activation functions corresponding to mixed attention, channel attention and spatial attention. Mixed attention f1subscript𝑓1f_{1} without additional restriction use simple sigmoid for each channel and spatial position. Channel attention f2subscript𝑓2f_{2} performs L2𝐿2L2 normalization within all channels for each spatial position to remove spatial information. Spatial attention f3subscript𝑓3f_{3} performs normalization within feature map from each channel and then sigmoid to get soft mask related to spatial information only. f1(xi,c)=11+exp(−xi,c)subscript𝑓1subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐\\displaystyle f_{1}(x_{i,c})=\\frac{1}{1+exp(-x_{i,c})} (4) f2(xi,c)=xi,c‖xi‖subscript𝑓2subscript𝑥𝑖𝑐subscript𝑥𝑖𝑐normsubscript𝑥𝑖\\displaystyle f_{2}(x_{i,c})=\\frac{x_{i,c}}{\\|x_{i}\\|} (5) f3(xi,c)=11+exp(−(xi,c−meanc)/stdc)subscript𝑓3subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐subscriptmean𝑐subscriptstd𝑐\\displaystyle f_{3}(x_{i,c})=\\frac{1}{1+exp(-(x_{i,c}-\\text{mean}_{c})/\\text{std}_{c})} (6) Where i𝑖i ranges over all spatial positions and c𝑐c ranges over all channels. meancsubscriptmean𝑐\\text{mean}_{c} and stdcsubscriptstd𝑐\\text{std}_{c} denotes the mean value and standard deviation of feature map from c𝑐c-th channel. xisubscript𝑥𝑖x_{i} denotes the feature vector at the i𝑖ith spatial position. ",
"title": "Residual Attention Network for Image Classification"
},
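The three normalisation choices in Eqs. 4-6 translate directly into tensor operations; the sketch below assumes inputs of shape (N, C, H, W) and adds small epsilon terms (not part of the paper) to guard against division by zero.

```python
import torch

def mixed_attention(x):
    # f1: plain sigmoid per channel and spatial position.
    return torch.sigmoid(x)

def channel_attention(x, eps=1e-12):
    # f2: L2 normalisation across channels at each spatial position.
    norm = x.norm(p=2, dim=1, keepdim=True).clamp_min(eps)
    return x / norm

def spatial_attention(x, eps=1e-12):
    # f3: standardise each channel's feature map, then apply a sigmoid.
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True).clamp_min(eps)
    return torch.sigmoid((x - mean) / std)
```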
{
"id": "1704.06904_all_27",
"text": " The experiment results are shown in Table 1, the mixed attention has the best performance. Previous works normally focus on only one type of attention, for example scale attention or spatial attention , which puts additional constrain on soft mask by weight sharing or normalization. However, as supported by our experiments, making attention change adaptively with features without additional constraint leads to the best performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_28",
"text": " In this section, we evaluate the performance of proposed Residual Attention Network on a series of benchmark datasets including CIFAR-10, CIFAR-100 , and ImageNet . Our experiments contain two parts. In the first part, we analyze the effectiveness of each component in the Residual Attention Network including attention residual learning mechanism and different architectures of soft mask branch in the Attention Module. After that, we explore the noise resistance property. Given limited computation resources, we choose CIFAR-10 and CIFAR-100 dataset to conduct these experiments. Finally, we compare our network with state-of-the-art results in CIFAR dataset. In the second part, we replace the Residual Unit with Inception Module and ResNeXt to demonstrate our Residual Attention Network surpasses origin networks both in parameter efficiency and final performance. We also compare image classification performance with state-of-the-art ResNet and Inception on ImageNet dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_29",
"text": " The CIFAR-10 and CIFAR-100 datasets consist of 60,0006000060,000 32×32323232\\times 32 color images of 101010 and 100100100 classes respectively, with 50,0005000050,000 training images and 10,0001000010,000 test images. The broadly applied state-of-the-art network structure ResNet is used as baseline method. To conduct fair comparison, we keep most of the settings same as ResNet paper . The image is padded by 4 pixels on each side, filled with 00 value resulting in 40×40404040\\times 40 image. A 32×32323232\\times 32 crop is randomly sampled from an image or its horizontal flip, with the per-pixel RGB mean value subtracted. We adopt the same weight initialization method following previous study and train Residual Attention Network using nesterov SGD with a mini-batch size of 64. We use a weight decay of 0.00010.00010.0001 with a momentum of 0.90.90.9 and set the initial learning rate to 0.1. The learning rate is divided by 10 at 646464k and 969696k iterations. We terminate training at 160160160k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
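For reference, a minimal sketch of the optimisation schedule described above, assuming a PyTorch model; the model definition itself is omitted.

```python
import torch

def make_cifar_optimizer(model):
    # Nesterov SGD, weight decay 1e-4, momentum 0.9, initial lr 0.1.
    return torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                           weight_decay=1e-4, nesterov=True)

def cifar_learning_rate(iteration):
    # Step schedule keyed on iteration count: /10 at 64k and 96k, stop at 160k.
    if iteration < 64_000:
        return 0.1
    if iteration < 96_000:
        return 0.01
    return 0.001
```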
{
"id": "1704.06904_all_30",
"text": " The overall network architecture and the hyper parameters setting are described in Fig.2. The network consists of 3 stages and similar to ResNet , equal number of Attention Modules are stacked in each stage. Additionally, we add two Residual Units at each stage. The number of weighted layers in trunk branch is 36m𝑚m+20 where m𝑚m is the number of Attention Module in one stage. We use original 32×32323232\\times 32 image for testing. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_31",
"text": " In this experiment, we evaluate the effectiveness of attention residual learning mechanism. Since the notion of attention residual learning (ARL) is new, no suitable previous methods are comparable therefore we use “naive attention learning” (NAL) as baseline. Specifically, “naive attention learning” uses Attention Module where features are directly dot product by soft mask without attention residual learning. We set the number of Attention Module in each stage m𝑚m = {1, 2, 3, 4}. For Attention Module, this leads to Attention-56 (named by trunk layer depth), Attention-92, Attention-128 and Attention-164 respectively. ",
"title": "Residual Attention Network for Image Classification"
},
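A quick check of the trunk-depth formula 36m+20 reproduces the names used above for m = 1 to 4; assuming the same convention, m = 6 would give the Attention-236 variant mentioned later.

```python
# Trunk depth = 36*m + 20, where m is the number of Attention Modules per stage.
for m in (1, 2, 3, 4, 6):
    print(f"m={m}: Attention-{36 * m + 20}")
# -> Attention-56, Attention-92, Attention-128, Attention-164, Attention-236
```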
{
"id": "1704.06904_all_32",
"text": " We train these networks using different mechanisms and summarize the results in the Table 3. As shown in Table 3, the networks trained using attention residual learning technique consistently outperform the networks trained with baseline method which proves the effectiveness of our method. The performance increases with the number of Attention Module when applying attention residual learning. In contrast, the performance of networks trained with “naive attention learning” method suffers obvious degradation with increased number of Attention Module. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_33",
"text": " To understand the benefit of attention residual learning, we calculate mean absolute response value of output layers for each stage. We use Attention-164 to conduct this experiment. As shown in the Fig. 4, the response generated by the network trained using naive attention learning quickly vanishes in the stage 2 after four Attention Modules compared with network trained using attention residual learning. The Attention Module is designed to suppress noise while keeping useful information by applying dot product between feature and soft mask. However, repeated dot product will lead to severe degradation of both useful and useless information in this process. The attention residual learning can relieve signal attenuation using identical mapping, which enhances the feature contrast. Therefore, it gains benefits from noise reduction without significant information loss, which makes optimization much easier while improving the discrimination of represented features. In the rest of the experiments, we apply this technique to train our networks. ",
"title": "Residual Attention Network for Image Classification"
},
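One way to reproduce this kind of diagnostic is with forward hooks; the sketch below assumes a model that exposes its stages under hypothetical attribute names stage1/stage2/stage3, which is not how the paper names them.

```python
import torch

def attach_response_probes(model, stage_names=("stage1", "stage2", "stage3")):
    """Record the mean absolute response of each stage's output."""
    responses = {}

    def make_hook(name):
        def hook(module, inputs, output):
            responses[name] = output.detach().abs().mean().item()
        return hook

    handles = [getattr(model, name).register_forward_hook(make_hook(name))
               for name in stage_names]
    return responses, handles  # call h.remove() on each handle when done
```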
{
"id": "1704.06904_all_34",
"text": " We conduct experiments to validate the effectiveness of encoder-decoder structure by comparing with local convolutions without any down sampling or up sampling. The local convolutions soft mask consists of three Residual Units using the same number of FLOPs. The Attention-56 is used to construct Attention-Encoder-Decoder-56 and Attention-Local-Conv-56 respectively. Results are shown in Table 4. The Attention-Encoder-Decoder-56 network achieves lower test error 5.52%percent5.525.52\\% compared with Attention-Local-Conv-56 network 6.48%percent6.486.48\\% with a considerable margin 0.94%percent0.940.94\\%. The result suggests that the soft attention optimization process will benefit from multi-scale information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_35",
"text": " In this experiment, we show our Residual Attention Network enjoys noise resistant property on CIFAR-10 dataset following the setting of paper . The confusion matrix Q𝑄Q in our experiment is set as follows: Q=(r1−r9⋯1−r91−r9r⋯1−r9⋮⋮⋱⋮1−r91−r9⋯r)10×10𝑄subscriptmatrix𝑟1𝑟9⋯1𝑟91𝑟9𝑟⋯1𝑟9⋮⋮⋱⋮1𝑟91𝑟9⋯𝑟1010Q=\\left(\\begin{matrix}r&\\frac{1-r}{9}&\\cdots&\\frac{1-r}{9}\\\\ \\frac{1-r}{9}&r&\\cdots&\\frac{1-r}{9}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\frac{1-r}{9}&\\frac{1-r}{9}&\\cdots&r\\\\ \\end{matrix}\\right)_{10\\times 10} (7) ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_36",
"text": " where r𝑟r denotes the clean label ratio for the whole dataset. ",
"title": "Residual Attention Network for Image Classification"
},
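A small sketch of how the confusion matrix Q of Eq. 7 and the corresponding label corruption could be generated; the helper names are ours, not the paper's.

```python
import numpy as np

def confusion_matrix(r, num_classes=10):
    # Keep the true class with probability r, flip uniformly to one of the
    # other (num_classes - 1) classes otherwise.
    q = np.full((num_classes, num_classes), (1.0 - r) / (num_classes - 1))
    np.fill_diagonal(q, r)
    return q

def corrupt_label(label, q, rng=None):
    # Sample a (possibly noisy) label for one training example.
    rng = rng or np.random.default_rng()
    return rng.choice(len(q), p=q[label])
```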
{
"id": "1704.06904_all_37",
"text": " We compare ResNet-164 network with Attention-92 network under different noise levels. The Table 5 shows the results. The test error of Attention-92 network is significantly lower than ResNet-164 network with the same noise level. In addition, when we increase the ratio of noise, test error of Attenion-92 declines slowly compared with ResNet-164 network. These results suggest that our Residual Attention Network can perform well even trained with high level noise data. When the label is noisy, the corresponding mask can prevent gradient caused by label error to update trunk branch parameters in the network. In this way, only the trunk branch is learning the wrong supervision information and soft mask branch masks the wrong label. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_38",
"text": " We compare our Residual Attention Network with state-of-the-art methods including ResNet and Wide ResNet on CIFAR-10 and CIFAR-100 datasets. The results are shown in Table 6. Our Attention-452 outperforms all the baseline methods on CIFAR-10 and CIFAR-100 datasets. Note that Attention-92 network achieves 4.99%percent4.994.99\\% test error on CIFAR-10 and 21.71%percent21.7121.71\\% test error on CIFAR-100 compared with 5.46%percent5.465.46\\% and 24.33%percent24.3324.33\\% test error on CIFAR-10 and CIFAR-100 for ResNet-164 network under similar parameter size. In addition, Attention-236 outperforms ResNet-1001 using only half of the parameters. It suggests that our Attention Module and attention residual learning scheme can effectively reduce the number of parameters in the network while improving the classification performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_39",
"text": " In this section, we conduct experiments using ImageNet LSVRC 201220122012 dataset , which contains 1,00010001,000 classes with 1.21.21.2 million training images, 50,0005000050,000 validation images, and 100,000100000100,000 test images. The evaluation is measured on the non-blacklist images of the ImageNet LSVRC 201220122012 validation set. We use Attention-56 and Attention-92 to conduct the experiments. The network structures and hyper parameters can be found in the Table 2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_40",
"text": " Our implementation generally follows the practice in the previous study . We apply scale and aspect ratio augmentation to the original image. A 224×224224224224\\times 224 crop is randomly sampled from an augment image or its horizontal flip, with the per-pixel RGB scale to (0,1)01(0,1) and mean value subtracted and standard variance divided. We adopt standard color augmentation . The network is trained using SGD with a momentum of 0.90.90.9. We set initial learning rate to 0.1. The learning rate is divided by 10 at 200200200k, 400400400k, 500500500k iterations. We terminate training at 530530530k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_41",
"text": " In this experiment, we explore the efficiency of proposed Residual Attention Network. We compare Attention-56 with ResNet-152 . The ResNet-152 has 50 trunk Residual Units and 60.2×106absentsuperscript106\\times 10^{6} parameters compared with 18 trunk Residual Units and 31.9×106absentsuperscript106\\times 10^{6} parameters in Attention-56. We evaluate our model using single crop scheme on the ImageNet validation set and show results in Table 7. The Attention-56 network outperforms ResNet-152 by a large margin with a 0.4%percent0.40.4\\% reduction on top-1 error and a 0.26%percent0.260.26\\% reduction on top-5 error. More importantly, Attention-56 network achieves better performance with only 52% parameters and 56% FLOPs compared with ResNet-152, which suggests that the proposed attention mechanism can significantly improve network performance while reducing the model complexity. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_42",
"text": " In this experiment, we show Residual Attention Network can generalize well using different basic unit. We apply three popular basic units: Residual Unit, ResNeXt , and Inception to construct our Residual Attention Networks. To keep the number of parameters and FLOPs in the same scale, we simplify the Inception. Results are shown in Table 7. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_43",
"text": " When the basic unit is ResNeXt, the AttentionNeXt-56 network performance is the same as ResNeXt-101 while the parameters and FLOPs are significantly fewer than ResNeXt-101. For Inception, The AttentionIncepiton-56 outperforms Inception-ResNet-v1 by a margin with a 0.94% reduction on top-1 error and a 0.21% reduction on top-5 error. The results show that our method can be applied on different network structures. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_44",
"text": " We compare our Attention-92 evaluated using single crop on the ILSVRC 2012 validation set with state-of-the-art algorithms. Table 7 shows the results. Our Attention-92 outperforms ResNet-200 with a large margin. The reduction on top-1 error is 0.6%percent0.60.6\\%. Note that the ResNet-200 network contains 32%percent3232\\% more parameters than Attention-92. The computational complexity of Attention-92 shown in the Table 7 suggests that our network reduces nearly half training time comparing with ResNet-200 by adding attention mechanism and reducing trunk depth. Above results suggest that our model enjoys high efficiency and good performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_45",
"text": " We propose a Residual Attention Network which stacks multiple Attention Modules. The benefits of our network are in two folds: it can capture mixed attention and is an extensible convolutional neural network. The first benefit lies in that different Attention Modules capture different types of attention to guide feature learning. Our experiments on the forms of activation function also validate this point: free form mixed attention will have better performance than constrained (including single) attention. The second benefit comes from encoding top-down attention mechanism into bottom-up top-down feedforward convolutional structure in each Attention Module. Thus, the basic Attention Modules can be combined to form larger network structure. Moreover, residual attention learning allows training very deep Residual Attention Network. The performance of our model surpasses state-of-the-art image classification methods, i.e. ResNet on CIFAR-10 (3.90% error), CIFAR-100 (20.67% error), and challenging ImageNet dataset (0.6% top-1 accuracy improvement) with only 46%percent4646\\% trunk depth and 69%percent6969\\% forward FLOPs (comparing with ResNet-200). In the future, we will exploit different applications of deep Residual Attention Network such as detection and segmentation to better explore mixed attention mechanism for specific tasks. ",
"title": "Residual Attention Network for Image Classification"
}
] |
Do deconvolution and unpooling serve the same goal in the network?
|
Yes, the purpose of the de-convolution layer is to increase the size of the feature maps, similar to an un-pooling operation [10].
|
[
10
] |
[
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This success has been attributed to the ability of CNNs to learn a hierarchical representation of raw input data, without relying on handcrafted features. As the inputs are processed through the network layers, the level of abstraction of the resulting features increases. Shallower layers grasp local information while deeper layers use filters whose receptive fields are much broader that therefore capture global information . ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_1",
"text": " Segmentation is a highly relevant task in medical image analysis. Automatic delineation of organs and structures of interest is often necessary to perform tasks such as visual augmentation , computer assisted diagnosis , interventions and extraction of quantitative indices from images . In particular, since diagnostic and interventional imagery often consists of 3D images, being able to perform volumetric segmentations by taking into account the whole volume content at once, has a particular relevance. In this work, we aim to segment prostate MRI volumes. This is a challenging task due to the wide range of appearance the prostate can assume in different scans due to deformations and variations of the intensity distribution. Moreover, MRI volumes are often affected by artefacts and distortions due to field inhomogeneity. Prostate segmentation is nevertheless an important task having clinical relevance both during diagnosis, where the volume of the prostate needs to be assessed , and during treatment planning, where the estimate of the anatomical boundary needs to be accurate (4, 20). ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_2",
"text": " CNNs have been recently used for medical image segmentation. Early approaches obtain anatomy delineation in images or volumes by performing patch-wise image classification. Such segmentations are obtained by only considering local context and therefore are prone to failure, especially in challenging modalities such as ultrasound, where a high number of mis-classified voxel are to be expected. Post-processing approaches such as connected components analysis normally yield no improvement and therefore, more recent works, propose to use the network predictions in combination with Markov random fields , voting strategies or more traditional approaches such as level-sets . Patch-wise approaches also suffer from efficiency issues. When densely extracted patches are processed in a CNN, a high number of computations is redundant and therefore the total algorithm runtime is high. In this case, more efficient computational schemes can be adopted. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_3",
"text": " Fully convolutional network trained end-to-end were so far applied only to 2D images both in computer vision (11, 8) and microscopy image analysis . These models, which served as an inspiration for our work, employed different network architectures and were trained to predict a segmentation mask, delineating the structures of interest, for the whole image. In a pre-trained VGG network architecture was used in conjunction with its mirrored, de-convolutional, equivalent to segment RGB images by leveraging the descriptive power of the features extracted by the innermost layer. In three fully convolutional deep neural networks, pre-trained on a classification task, were refined to produce segmentations while in a brand new CNN model, especially tailored to tackle biomedical image analysis problems in 2D, was proposed. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_4",
"text": " In this work we present our approach to medical image segmentation that leverages the power of a fully convolutional neural networks, trained end-to-end, to process MRI volumes. Differently from other recent approaches we refrain from processing the input volumes slice-wise and we propose to use volumetric convolutions instead. We propose a novel objective function based on Dice coefficient maximisation, that we optimise during training. We demonstrate fast and accurate results on prostate MRI test volumes and we provide direct comparison with other methods which were evaluated on the same test data 111Detailed results available on http://promise12.grand-challenge.org/results/. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_5",
"text": " In Figure 2 we provide a schematic representation of our convolutional neural network. We perform convolutions aiming to both extract features from the data and, at the end of each stage, to reduce its resolution by using appropriate stride. The left part of the network consists of a compression path, while the right part decompresses the signal until its original size is reached. Convolutions are all applied with appropriate padding. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_6",
"text": " The left side of the network is divided in different stages that operate at different resolutions. Each stage comprises one to three convolutional layers. Similarly to the approach presented in , we formulate each stage such that it learns a residual function: the input of each stage is (a) used in the convolutional layers and processed through the non-linearities and (b) added to the output of the last convolutional layer of that stage in order to enable learning a residual function. As confirmed by our empirical observations, this architecture ensures convergence in a fraction of the time required by a similar network that does not learn residual functions. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_7",
"text": " The convolutions performed in each stage use volumetric kernels having size 5×5×55555\\times 5\\times 5 voxels. As the data proceeds through different stages along the compression path, its resolution is reduced. This is performed through convolution with 2×2×22222\\times 2\\times 2 voxels wide kernels applied with stride 222 (Figure 3). Since the second operation extracts features by considering only non overlapping 2×2×22222\\times 2\\times 2 volume patches, the size of the resulting feature maps is halved. This strategy serves a similar purpose as pooling layers that, motivated by and other works discouraging the use of max-pooling operations in CNNs, have been replaced in our approach by convolutional ones. Moreover, since the number of feature channels doubles at each stage of the compression path of the V-Net, and due to the formulation of the model as a residual network, we resort to these convolution operations to double the number of feature maps as we reduce their resolution. PReLu non linearities are applied throughout the network. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
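A sketch of one compression stage under the description above: 5×5×5 convolutions with a residual connection and PReLU, followed by a 2×2×2 stride-2 convolution that halves resolution and doubles channels in place of pooling. The number of convolutions per stage is a simplifying assumption (the paper uses one to three).

```python
import torch
import torch.nn as nn

class CompressionStage(nn.Module):
    def __init__(self, channels, num_convs=2):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            layers += [nn.Conv3d(channels, channels, 5, padding=2),
                       nn.PReLU(channels)]
        self.convs = nn.Sequential(*layers)
        # Down-convolution replacing pooling: halves D/H/W, doubles channels.
        self.down = nn.Sequential(
            nn.Conv3d(channels, 2 * channels, kernel_size=2, stride=2),
            nn.PReLU(2 * channels),
        )

    def forward(self, x):
        features = self.convs(x) + x   # residual function learned by the stage
        return self.down(features)
```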
{
"id": "1606.04797_all_8",
"text": " Replacing pooling operations with convolutional ones results also to networks that, depending on the specific implementation, can have a smaller memory footprint during training, due to the fact that no switches mapping the output of pooling layers back to their inputs are needed for back-propagation, and that can be better understood and analysed by applying only de-convolutions instead of un-pooling operations. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_9",
"text": " Downsampling allows us to reduce the size of the signal presented as input and to increase the receptive field of the features being computed in subsequent network layers. Each of the stages of the left part of the network, computes a number of features which is two times higher than the one of the previous layer. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_10",
"text": " The right portion of the network extracts features and expands the spatial support of the lower resolution feature maps in order to gather and assemble the necessary information to output a two channel volumetric segmentation. The two features maps computed by the very last convolutional layer, having 1×1×11111\\times 1\\times 1 kernel size and producing outputs of the same size as the input volume, are converted to probabilistic segmentations of the foreground and background regions by applying soft-max voxelwise. After each stage of the right portion of the CNN, a de-convolution operation is employed in order increase the size of the inputs (Figure 3) followed by one to three convolutional layers involving half the number of 5×5×55555\\times 5\\times 5 kernels employed in the previous layer. Similar to the left part of the network, also in this case we resort to learn residual functions in the convolutional stages. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
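The de-convolution step can be sketched as a transposed convolution with kernel 2 and stride 2: it restores spatial size much as un-pooling would, while halving the channel count. Kernel size and channel handling here are assumptions consistent with the text rather than the exact published configuration.

```python
import torch
import torch.nn as nn

class UpStage(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.up = nn.Sequential(
            # Transposed ("de-")convolution doubles each spatial dimension.
            nn.ConvTranspose3d(channels, channels // 2, kernel_size=2, stride=2),
            nn.PReLU(channels // 2),
        )

    def forward(self, x):
        return self.up(x)  # (N, C, D, H, W) -> (N, C//2, 2D, 2H, 2W)
```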
{
"id": "1606.04797_all_11",
"text": " Similarly to , we forward the features extracted from early stages of the left part of the CNN to the right part. This is schematically represented in Figure 2 by horizontal connections. In this way we gather fine grained detail that would be otherwise lost in the compression path and we improve the quality of the final contour prediction. We also observed that when these connections improve the convergence time of the model. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_12",
"text": " We report in Table 1 the receptive fields of each network layer, showing the fact that the innermost portion of our CNN already captures the content of the whole input volume. We believe that this characteristic is important during segmentation of poorly visible anatomy: the features computed in the deepest layer perceive the whole anatomy of interest at once, since they are computed from data having a spatial support much larger than the typical size of the anatomy we seek to delineate, and therefore impose global constraints. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_13",
"text": " The network predictions, which consist of two volumes having the same resolution as the original input data, are processed through a soft-max layer which outputs the probability of each voxel to belong to foreground and to background. In medical volumes such as the ones we are processing in this work, it is not uncommon that the anatomy of interest occupies only a very small region of the scan. This often causes the learning process to get trapped in local minima of the loss function yielding a network whose predictions are strongly biased towards background. As a result the foreground region is often missing or only partially detected. Several previous approaches resorted to loss functions based on sample re-weighting where foreground regions are given more importance than background ones during learning. In this work we propose a novel objective function based on dice coefficient, which is a quantity ranging between 00 and 111 which we aim to maximise. The dice coefficient D𝐷D between two binary volumes can be written as D=2∑iNpigi∑iNpi2+∑iNgi2𝐷2superscriptsubscript𝑖𝑁subscript𝑝𝑖subscript𝑔𝑖superscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖2D=\\frac{2\\sum_{i}^{N}p_{i}g_{i}}{\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}} ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_14",
"text": " where the sums run over the N𝑁N voxels, of the predicted binary segmentation volume pi∈Psubscript𝑝𝑖𝑃p_{i}\\in{P} and the ground truth binary volume gi∈Gsubscript𝑔𝑖𝐺g_{i}\\in{G}. This formulation of Dice can be differentiated yielding the gradient ∂D∂pj=2(gj(∑iNpi2+∑iNgi2)−2pj(∑iNpigi)(∑iNpi2+∑iNgi2)2)𝐷subscript𝑝𝑗2delimited-()subscript𝑔𝑗superscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖22subscript𝑝𝑗superscriptsubscript𝑖𝑁subscript𝑝𝑖subscript𝑔𝑖superscriptsuperscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖22\\frac{\\partial D}{\\partial p_{j}}=2\\left(\\frac{g_{j}\\left(\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}\\right)-2p_{j}\\left(\\sum_{i}^{N}p_{i}g_{i}\\right)}{\\left(\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}\\right)^{2}}\\right) computed with respect to the j𝑗j-th voxel of the prediction. Using this formulation we do not need to assign weights to samples of different classes to establish the right balance between foreground and background voxels, and we obtain results that we experimentally observed are much better than the ones computed through the same network trained optimising a multinomial logistic loss with sample re-weighting (Fig. 6). ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
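A minimal sketch of the Dice objective as a loss to minimise; the epsilon smoothing term is an addition of ours (not in the paper), and the hand-derived gradient above is exactly what autograd produces for this expression, so it does not need to be coded explicitly.

```python
import torch

def dice_loss(probs, target, eps=1e-6):
    """probs: foreground probabilities after soft-max; target: binary volume."""
    p = probs.reshape(probs.shape[0], -1)
    g = target.reshape(target.shape[0], -1).float()
    intersection = (p * g).sum(dim=1)
    denominator = (p * p).sum(dim=1) + (g * g).sum(dim=1)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()  # minimise 1 - D to maximise the Dice coefficient
```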
{
"id": "1606.04797_all_15",
"text": " Our CNN is trained end-to-end on a dataset of prostate scans in MRI. An example of the typical content of such volumes is shown in Figure 1. All the volumes processed by the network have fixed size of 128×128×6412812864128\\times 128\\times 64 voxels and a spatial resolution of 1×1×1.5111.51\\times 1\\times 1.5 millimeters. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_16",
"text": " Annotated medical volumes are not easy to obtain due to the fact that one or more experts are required to manually trace a reliable ground truth annotation and that there is a cost associated with their acquisition. In this work we found necessary to augment the original training dataset in order to obtain robustness and increased precision on the test dataset. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_17",
"text": " During every training iteration, we fed as input to the network randomly deformed versions of the training images by using a dense deformation field obtained through a 2×2×22222\\times 2\\times 2 grid of control-points and B-spline interpolation. This augmentation has been performed ”on-the-fly”, prior to each optimisation iteration, in order to alleviate the otherwise excessive storage requirements. Additionally we vary the intensity distribution of the data by adapting, using histogram matching, the intensity distributions of the training volumes used in each iteration, to the ones of other randomly chosen scans belonging to the dataset. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_18",
"text": " A Previously unseen MRI volume can be segmented by processing it in a feed-forward manner through the network. The output of the last convolutional layer, after soft-max, consists of a probability map for background and foreground. The voxels having higher probability (>0.5absent0.5>0.5) to belong to the foreground than to the background are considered part of the anatomy. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_19",
"text": " We trained our method on 505050 MRI volumes, and the relative manual ground truth annotation, obtained from the ”PROMISE2012” challenge dataset . This dataset contains medical data acquired in different hospitals, using different equipment and different acquisition protocols. The data in this dataset is representative of the clinical variability and challenges encountered in clinical settings. As previously stated we massively augmented this dataset through random transformation performed in each training iteration, for each mini-batch fed to the network. The mini-batches used in our implementation contained two volumes each, mainly due to the high memory requirement of the model during training. We used a momentum of 0.990.990.99 and a initial learning rate of 0.00010.00010.0001 which decreases by one order of magnitude every 252525K iterations. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
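A compact sketch of the optimisation settings quoted above (momentum 0.99, initial learning rate 1e-4, divided by 10 every 25K iterations), assuming a PyTorch model; the original implementation used Caffe, so this is only an approximation of the schedule.

```python
import torch

def make_vnet_optimizer(model):
    return torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.99)

def vnet_learning_rate(iteration, base_lr=1e-4):
    # One order of magnitude decrease every 25k iterations.
    return base_lr * (0.1 ** (iteration // 25_000))
```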
{
"id": "1606.04797_all_20",
"text": " We tested V-Net on 303030 MRI volumes depicting prostate whose ground truth annotation was secret. All the results reported in this section of the paper were obtained directly from the organisers of the challenge after submitting the segmentation obtained through our approach. The test set was representative of the clinical variability encountered in prostate scans in real clinical settings . ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_21",
"text": " We evaluated the approach performance in terms of Dice coefficient, Hausdorff distance of the predicted delineation to the ground truth annotation and in terms of score obtained on the challenge data as computed by the organisers of ”PROMISE 2012” . The results are shown in Table 2 and Fig. 5. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_22",
"text": " Our implementation222Implementation available at https://github.com/faustomilletari/VNet was realised in python, using a custom version of the Caffe333Implementation available at https://github.com/faustomilletari/3D-Caffe framework which was enabled to perform volumetric convolutions via CuDNN v3. All the trainings and experiments were ran on a standard workstation equipped with 646464 GB of memory, an Intel(R) Core(TM) i7-5820K CPU working at 3.30GHz, and a NVidia GTX 1080 with 888 GB of video memory. We let our model train for 484848 hours, or 303030K iterations circa, and we were able to segment a previously unseen volume in circa 111 second. The datasets were first normalised using the N4 bias filed correction function of the ANTs framework and then resampled to a common resolution of 1×1×1.5111.51\\times 1\\times 1.5 mm. We applied random deformations to the scans used for training by varying the position of the control points with random quantities obtained from gaussian distribution with zero mean and 151515 voxels standard deviation. Qualitative results can be seen in Fig. 4. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_23",
"text": " We presented and approach based on a volumetric convolutional neural network that performs segmentation of MRI prostate volumes in a fast and accurate manner. We introduced a novel objective function that we optimise during training based on the Dice overlap coefficient between the predicted segmentation and the ground truth annotation. Our Dice loss layer does not need sample re-weighting when the amount of background and foreground pixels is strongly unbalanced and is indicated for binary segmentation tasks. Although we inspired our architecture to the one proposed in , we divided it into stages that learn residuals and, as empirically observed, improve both results and convergence time. Future works will aim at segmenting volumes containing multiple regions in other modalities such as ultrasound and at higher resolutions by splitting the network over multiple GPUs. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_24",
"text": " We would like to acknowledge NVidia corporation, that donated a Tesla K40 GPU to our group enabling this research, Dr. Geert Litjens who dedicated some of his time to evaluate our results against the ground truth of the PROMISE 2012 dataset and Ms. Iro Laina for her support to this project. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
}
] |
Why did the authors choose to do experiments on different basic units to prove the generalization of the residual attention network?
|
Proving generalization shows that the proposed method can be applied to multiple structures without a significant loss in performance [43].
|
[
43
] |
[
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representations of objects at that location. Previous works formulate attention drift as a sequential process to capture different attended aspects. However, as far as we know, no attention mechanism has been applied to feedforward network structure to achieve state-of-art results in image classification task. Recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_1",
"text": " Inspired by the attention mechanism and recent advances in the deep neural network, we propose Residual Attention Network, a convolutional network that adopts mixed attention mechanism in “very deep” structure. The Residual Attention Network is composed of multiple Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_2",
"text": " Apart from more discriminative feature representation brought by the attention mechanism, our model also exhibits following appealing properties: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_3",
"text": " (1) Increasing Attention Modules lead to consistent performance improvement, as different types of attention are captured extensively. Fig.1 shows an example of different types of attentions for a hot air balloon image. The sky attention mask diminishes background responses while the balloon instance mask highlighting the bottom part of the balloon. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_4",
"text": " (2) It is able to incorporate with state-of-the-art deep network structures in an end-to-end training fashion. Specifically, the depth of our network can be easily extended to hundreds of layers. Our Residual Attention Network outperforms state-of-the-art residual networks on CIFAR-10, CIFAR-100 and challenging ImageNet image classification dataset with significant reduction of computation (69% forward FLOPs). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_5",
"text": " All of the aforementioned properties, which are challenging to achieve with previous approaches, are made possible with following contributions: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_6",
"text": " (1) Stacked network structure: Our Residual Attention Network is constructed by stacking multiple Attention Modules. The stacked structure is the basic application of mixed attention mechanism. Thus, different types of attention are able to be captured in different Attention Modules. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_7",
"text": " (2) Attention Residual Learning: Stacking Attention Modules directly would lead to the obvious performance drop. Therefore, we propose attention residual learning mechanism to optimize very deep Residual Attention Network with hundreds of layers. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_8",
"text": " (3) Bottom-up top-down feedforward attention: Bottom-up top-down feedforward structure has been successfully applied to human pose estimation and image segmentation (22, 25, 1). We use such structure as part of Attention Module to add soft weights on features. This structure can mimic bottom-up fast feedforward process and top-down attention feedback in a single feedforward process which allows us to develop an end-to-end trainable network with top-down attention. The bottom-up top-down structure in our work differs from stacked hourglass network in its intention of guiding feature learning. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_9",
"text": " Evidence from human perception process shows the importance of attention mechanism, which uses top information to guide bottom-up feedforward process. Recently, tentative efforts have been made towards applying attention into deep neural network. Deep Boltzmann Machine (DBM) contains top-down attention by its reconstruction process in the training stage. Attention mechanism has also been widely applied to recurrent neural networks (RNN) and long short term memory (LSTM) to tackle sequential decision tasks (25, 29, 21, 18). Top information is gathered sequentially and decides where to attend for the next feature learning steps. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_10",
"text": " Residual learning is proposed to learn residual of identity mapping. This technique greatly increases the depth of feedforward neuron network. Similar to our work, (25, 29, 21, 18) use residual learning with attention mechanism to benefit from residual learning. Two information sources (query and query context) are captured using attention mechanism to assist each other in their work. While in our work, a single information source (image) is split into two different ones and combined repeatedly. And residual learning is applied to alleviate the problem brought by repeated splitting and combining. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_11",
"text": " In image classification, top-down attention mechanism has been applied using different methods: sequential process, region proposal and control gates. Sequential process (23, 12, 37, 7) models image classification as a sequential decision. Thus attention can be applied similarly with above. This formulation allows end-to-end optimization using RNN and LSTM and can capture different kinds of attention in a goal-driven way. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_12",
"text": " Region proposal (26, 4, 8, 38) has been successfully adopted in image detection task. In image classification, an additional region proposal stage is added before feedforward classification. The proposed regions contain top information and are used for feature learning in the second stage. Unlike image detection whose region proposals rely on large amount of supervision, e.g. the ground truth bounding boxes or detailed segmentation masks , unsupervised learning is usually used to generate region proposals for image classification. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_13",
"text": " Control gates have been extensively used in LSTM. In image classification with attention, control gates for neurones are updated with top information and have influence on the feedforward process during training (2, 30). However, a new process, reinforcement learning or optimization is involved during the training step. Highway Network extends control gate to solve gradient degradation problem for deep convolutional neural network. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_14",
"text": " However, recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). The feedforward convolutional network mimics the bottom-up paths of human cortex. Various approaches have been proposed to further improve the discriminative ability of deep convolutional neural network. VGG , Inception and residual learning are proposed to train very deep neural networks. Stochastic depth , Batch Normalization and Dropout exploit regularization for convergence and avoiding overfitting and degradation. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_15",
"text": " Soft attention developed in recent work (3, 17) can be trained end-to-end for convolutional network. Our Residual Attention Network incorporates the soft attention in fast developing feedforward network structure in an innovative way. Recent proposed spatial transformer module achieves state-of-the-art results on house number recognition task. A deep network module capturing top information is used to generate affine transformation. The affine transformation is applied to the input image to get attended region and then feed to another deep network module. The whole process can be trained end-to-end by using differentiable network layer which performs spatial transformation. Attention to scale uses soft attention as a scale selection mechanism and gets state-of-the-art results in image segmentation task. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_16",
"text": " The design of soft attention structure in our Residual Attention Network is inspired by recent development of localization oriented task, i.e. segmentation (22, 25, 1) and human pose estimation . These tasks motivate researchers to explore structure with fined-grained feature maps. The frameworks tend to cascade a bottom-up and a top-down structure. The bottom-up feedforward structure produces low resolution feature maps with strong semantic information. After that, a top-down network produces dense features to inference on each pixel. Skip connection is employed between bottom and top feature maps and achieved state-of-the-art result on image segmentation. The recent stacked hourglass network fuses information from multiple scales to predict human pose, and benefits from encoding both global and local information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_17",
"text": " Our Residual Attention Network is constructed by stacking multiple Attention Modules. Each Attention Module is divided into two branches: mask branch and trunk branch. The trunk branch performs feature processing and can be adapted to any state-of-the-art network structures. In this work, we use pre-activation Residual Unit , ResNeXt and Inception as our Residual Attention Networks basic unit to construct Attention Module. Given trunk branch output T(x)𝑇𝑥T(x) with input x𝑥x, the mask branch uses bottom-up top-down structure (22, 25, 1, 24) to learn same size mask M(x)𝑀𝑥M(x) that softly weight output features T(x)𝑇𝑥T(x). The bottom-up top-down structure mimics the fast feedforward and feedback attention process. The output mask is used as control gates for neurons of trunk branch similar to Highway Network . The output of Attention Module H𝐻H is: Hi,c(x)=Mi,c(x)∗Ti,c(x)subscript𝐻𝑖𝑐𝑥subscript𝑀𝑖𝑐𝑥subscript𝑇𝑖𝑐𝑥H_{i,c}(x)=M_{i,c}(x)*T_{i,c}(x) (1) where i ranges over all spatial positions and c∈{1,…,C}𝑐1…𝐶c\\in\\{1,...,C\\} is the index of the channel. The whole structure can be trained end-to-end. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_18",
"text": " In Attention Modules, the attention mask can not only serve as a feature selector during forward inference, but also as a gradient update filter during back propagation. In the soft mask branch, the gradient of mask for input feature is: ∂M(x,θ)T(x,ϕ)∂ϕ=M(x,θ)∂T(x,ϕ)∂ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ\\frac{\\partial M(x,\\theta)T(x,\\phi)}{\\partial\\phi}=M(x,\\theta)\\frac{\\partial T(x,\\phi)}{\\partial\\phi} (2) where the θ𝜃\\theta are the mask branch parameters and the ϕitalic-ϕ\\phi are the trunk branch parameters. This property makes Attention Modules robust to noisy labels. Mask branches can prevent wrong gradients (from noisy labels) to update trunk parameters. Experiment in Sec.4.1 shows the robustness of our Residual Attention Network against noisy labels. ",
"title": "Residual Attention Network for Image Classification"
},
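The mask-as-gradient-filter property of Eq. 2 can be verified numerically with a short autograd check; the tensor sizes and the single 1×1 trunk convolution are arbitrary choices for illustration.

```python
import torch

x = torch.randn(1, 4, 8, 8)
trunk_weight = torch.randn(4, 4, 1, 1, requires_grad=True)   # phi
mask = torch.rand(1, 4, 8, 8)                                 # stands in for M(x)

trunk_out = torch.nn.functional.conv2d(x, trunk_weight)       # T(x, phi)
loss = (mask * trunk_out).sum()                               # M(x) * T(x, phi)
loss.backward()

# The gradient reaching the trunk weights is gated element-wise by the mask:
# positions where the mask is near zero pass almost no gradient back, which is
# how wrong gradients from noisy labels are suppressed.
print(trunk_weight.grad.shape)  # torch.Size([4, 4, 1, 1])
```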
{
"id": "1704.06904_all_19",
"text": " Instead of stacking Attention Modules in our design, a simple approach would be using a single network branch to generate soft weight mask, similar to spatial transformer layer . However, these methods have several drawbacks on challenging datasets such as ImageNet. First, images with clutter background, complex scenes, and large appearance variations need to be modeled by different types of attentions. In this case, features from different layers need to be modeled by different attention masks. Using a single mask branch would require exponential number of channels to capture all combinations of different factors. Second, a single Attention Module only modify the features once. If the modification fails on some parts of the image, the following network modules do not get a second chance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_20",
"text": " The Residual Attention Network alleviates above problems. In Attention Module, each trunk branch has its own mask branch to learn attention that is specialized for its features. As shown in Fig.1, in hot air balloon images, blue color features from bottom layer have corresponding sky mask to eliminate background, while part features from top layer are refined by balloon instance mask. Besides, the incremental nature of stacked network structure can gradually refine attention for complex images. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_21",
"text": " However, naive stacking Attention Modules leads to the obvious performance drop. First, dot production with mask range from zero to one repeatedly will degrade the value of features in deep layers. Second, soft mask can potentially break good property of trunk branch, for example, the identical mapping of Residual Unit. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_22",
"text": " We propose attention residual learning to ease the above problems. Similar to ideas in residual learning, if soft mask unit can be constructed as identical mapping, the performances should be no worse than its counterpart without attention. Thus we modify output H𝐻H of Attention Module as Hi,c(x)=(1+Mi,c(x))∗Fi,c(x)subscript𝐻𝑖𝑐𝑥1subscript𝑀𝑖𝑐𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=(1+M_{i,c}(x))*F_{i,c}(x) (3) M(x)𝑀𝑥M(x) ranges from (0,1)01(0,1), with M(x)𝑀𝑥M(x) approximating 0, H(x)𝐻𝑥H(x) will approximate original features F(x)𝐹𝑥F(x). We call this method attention residual learning. Our stacked attention residual learning is different from residual learning. In the origin ResNet, residual learning is formulated as Hi,c(x)=x+Fi,c(x)subscript𝐻𝑖𝑐𝑥𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=x+F_{i,c}(x), where Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) approximates the residual function. In our formulation, Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) indicates the features generated by deep convolutional networks. The key lies on our mask branches M(x)𝑀𝑥M(x). They work as feature selectors which enhance good features and suppress noises from trunk features. In addition, stacking Attention Modules backs up attention residual learning by its incremental nature. Attention residual learning can keep good properties of original features, but also gives them the ability to bypass soft mask branch and forward to top layers to weaken mask branch’s feature selection ability. Stacked Attention Modules can gradually refine the feature maps. As show in Fig.1, features become much clearer as depth going deeper. By using attention residual learning, increasing depth of the proposed Residual Attention Network can improve performance consistently. As shown in the experiment section, the depth of Residual Attention Network is increased up to 452 whose performance surpasses ResNet-1001 by a large margin on CIFAR dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_23",
"text": " Following previous attention mechanism idea in DBN , our mask branch contains fast feed-forward sweep and top-down feedback steps. The former operation quickly collects global information of the whole image, the latter operation combines global information with original feature maps. In convolutional neural network, the two steps unfold into bottom-up top-down fully convolutional structure. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_24",
"text": " From input, max pooling are performed several times to increase the receptive field rapidly after a small number of Residual Units. After reaching the lowest resolution, the global information is then expanded by a symmetrical top-down architecture to guide input features in each position. Linear interpolation up sample the output after some Residual Units. The number of bilinear interpolation is the same as max pooling to keep the output size the same as the input feature map. Then a sigmoid layer normalizes the output range to (0,1)01(0,1) after two consecutive 1×1111\\times 1 convolution layers. We also added skip connections between bottom-up and top-down parts to capture information from different scales. The full module is illustrated in Fig.2. ",
"title": "Residual Attention Network for Image Classification"
},
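The encoder-decoder mask branch described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: single convolution layers stand in for Residual Units, the number of scales is fixed to two, and the class name is made up for this sketch.

```python
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskBranch(nn.Module):
    """Sketch of the bottom-up top-down mask branch: repeated max pooling,
    symmetric bilinear upsampling with skip connections, then two 1x1
    convolutions and a sigmoid (cf. Fig. 2)."""
    def __init__(self, channels: int, num_scales: int = 2):
        super().__init__()
        self.num_scales = num_scales
        self.down = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_scales)])
        self.up = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_scales)])
        self.out = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        skips = []
        h = x
        # bottom-up: max pooling rapidly enlarges the receptive field
        for i, conv in enumerate(self.down):
            h = F.relu(conv(F.max_pool2d(h, 2)))  # assumes spatial dims divisible by 2**num_scales
            if i < self.num_scales - 1:
                skips.append(h)
        # top-down: bilinear interpolation restores the input resolution
        for conv in self.up:
            h = F.interpolate(F.relu(conv(h)), scale_factor=2,
                              mode="bilinear", align_corners=False)
            if skips:
                h = h + skips.pop()               # skip connection across scales
        # two 1x1 convolutions followed by sigmoid normalize the mask to (0, 1)
        return self.out(h)
```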
{
"id": "1704.06904_all_25",
"text": " The bottom-up top-down structure has been applied to image segmentation and human pose estimation. However, the difference between our structure and the previous one lies in its intention. Our mask branch aims at improving trunk branch features rather than solving a complex problem directly. Experiment in Sec.4.1 is conducted to verify above arguments. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_26",
"text": " In our work, attention provided by mask branch changes adaptably with trunk branch features. However, constrains to attention can still be added to mask branch by changing normalization step in activation function before soft mask output. We use three types of activation functions corresponding to mixed attention, channel attention and spatial attention. Mixed attention f1subscript𝑓1f_{1} without additional restriction use simple sigmoid for each channel and spatial position. Channel attention f2subscript𝑓2f_{2} performs L2𝐿2L2 normalization within all channels for each spatial position to remove spatial information. Spatial attention f3subscript𝑓3f_{3} performs normalization within feature map from each channel and then sigmoid to get soft mask related to spatial information only. f1(xi,c)=11+exp(−xi,c)subscript𝑓1subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐\\displaystyle f_{1}(x_{i,c})=\\frac{1}{1+exp(-x_{i,c})} (4) f2(xi,c)=xi,c‖xi‖subscript𝑓2subscript𝑥𝑖𝑐subscript𝑥𝑖𝑐normsubscript𝑥𝑖\\displaystyle f_{2}(x_{i,c})=\\frac{x_{i,c}}{\\|x_{i}\\|} (5) f3(xi,c)=11+exp(−(xi,c−meanc)/stdc)subscript𝑓3subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐subscriptmean𝑐subscriptstd𝑐\\displaystyle f_{3}(x_{i,c})=\\frac{1}{1+exp(-(x_{i,c}-\\text{mean}_{c})/\\text{std}_{c})} (6) Where i𝑖i ranges over all spatial positions and c𝑐c ranges over all channels. meancsubscriptmean𝑐\\text{mean}_{c} and stdcsubscriptstd𝑐\\text{std}_{c} denotes the mean value and standard deviation of feature map from c𝑐c-th channel. xisubscript𝑥𝑖x_{i} denotes the feature vector at the i𝑖ith spatial position. ",
"title": "Residual Attention Network for Image Classification"
},
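A direct reading of Eqs. (4)-(6) in tensor form might look like the snippet below. The (batch, channels, height, width) layout and the small `eps` added for numerical stability are assumptions of this sketch, not part of the paper's formulas.

```python
import torch

def f1_mixed(x):
    """Eq. (4): sigmoid over every channel and spatial position."""
    return torch.sigmoid(x)

def f2_channel(x, eps=1e-6):
    """Eq. (5): L2-normalize across channels at each spatial position.
    x has shape (batch, channels, height, width)."""
    return x / (x.norm(dim=1, keepdim=True) + eps)

def f3_spatial(x, eps=1e-6):
    """Eq. (6): standardize each channel's feature map, then apply sigmoid."""
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True)
    return torch.sigmoid((x - mean) / (std + eps))
```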
{
"id": "1704.06904_all_27",
"text": " The experiment results are shown in Table 1, the mixed attention has the best performance. Previous works normally focus on only one type of attention, for example scale attention or spatial attention , which puts additional constrain on soft mask by weight sharing or normalization. However, as supported by our experiments, making attention change adaptively with features without additional constraint leads to the best performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_28",
"text": " In this section, we evaluate the performance of proposed Residual Attention Network on a series of benchmark datasets including CIFAR-10, CIFAR-100 , and ImageNet . Our experiments contain two parts. In the first part, we analyze the effectiveness of each component in the Residual Attention Network including attention residual learning mechanism and different architectures of soft mask branch in the Attention Module. After that, we explore the noise resistance property. Given limited computation resources, we choose CIFAR-10 and CIFAR-100 dataset to conduct these experiments. Finally, we compare our network with state-of-the-art results in CIFAR dataset. In the second part, we replace the Residual Unit with Inception Module and ResNeXt to demonstrate our Residual Attention Network surpasses origin networks both in parameter efficiency and final performance. We also compare image classification performance with state-of-the-art ResNet and Inception on ImageNet dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_29",
"text": " The CIFAR-10 and CIFAR-100 datasets consist of 60,0006000060,000 32×32323232\\times 32 color images of 101010 and 100100100 classes respectively, with 50,0005000050,000 training images and 10,0001000010,000 test images. The broadly applied state-of-the-art network structure ResNet is used as baseline method. To conduct fair comparison, we keep most of the settings same as ResNet paper . The image is padded by 4 pixels on each side, filled with 00 value resulting in 40×40404040\\times 40 image. A 32×32323232\\times 32 crop is randomly sampled from an image or its horizontal flip, with the per-pixel RGB mean value subtracted. We adopt the same weight initialization method following previous study and train Residual Attention Network using nesterov SGD with a mini-batch size of 64. We use a weight decay of 0.00010.00010.0001 with a momentum of 0.90.90.9 and set the initial learning rate to 0.1. The learning rate is divided by 10 at 646464k and 969696k iterations. We terminate training at 160160160k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_30",
"text": " The overall network architecture and the hyper parameters setting are described in Fig.2. The network consists of 3 stages and similar to ResNet , equal number of Attention Modules are stacked in each stage. Additionally, we add two Residual Units at each stage. The number of weighted layers in trunk branch is 36m𝑚m+20 where m𝑚m is the number of Attention Module in one stage. We use original 32×32323232\\times 32 image for testing. ",
"title": "Residual Attention Network for Image Classification"
},
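As a quick sanity check of the 36m+20 trunk-depth formula just quoted, the model names used later in this section follow directly from it:

```python
# Trunk layer depth for m Attention Modules per stage: 36*m + 20
for m in (1, 2, 3, 4):
    print(f"m = {m}: Attention-{36 * m + 20}")
# -> Attention-56, Attention-92, Attention-128, Attention-164
```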
{
"id": "1704.06904_all_31",
"text": " In this experiment, we evaluate the effectiveness of attention residual learning mechanism. Since the notion of attention residual learning (ARL) is new, no suitable previous methods are comparable therefore we use “naive attention learning” (NAL) as baseline. Specifically, “naive attention learning” uses Attention Module where features are directly dot product by soft mask without attention residual learning. We set the number of Attention Module in each stage m𝑚m = {1, 2, 3, 4}. For Attention Module, this leads to Attention-56 (named by trunk layer depth), Attention-92, Attention-128 and Attention-164 respectively. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_32",
"text": " We train these networks using different mechanisms and summarize the results in the Table 3. As shown in Table 3, the networks trained using attention residual learning technique consistently outperform the networks trained with baseline method which proves the effectiveness of our method. The performance increases with the number of Attention Module when applying attention residual learning. In contrast, the performance of networks trained with “naive attention learning” method suffers obvious degradation with increased number of Attention Module. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_33",
"text": " To understand the benefit of attention residual learning, we calculate mean absolute response value of output layers for each stage. We use Attention-164 to conduct this experiment. As shown in the Fig. 4, the response generated by the network trained using naive attention learning quickly vanishes in the stage 2 after four Attention Modules compared with network trained using attention residual learning. The Attention Module is designed to suppress noise while keeping useful information by applying dot product between feature and soft mask. However, repeated dot product will lead to severe degradation of both useful and useless information in this process. The attention residual learning can relieve signal attenuation using identical mapping, which enhances the feature contrast. Therefore, it gains benefits from noise reduction without significant information loss, which makes optimization much easier while improving the discrimination of represented features. In the rest of the experiments, we apply this technique to train our networks. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_34",
"text": " We conduct experiments to validate the effectiveness of encoder-decoder structure by comparing with local convolutions without any down sampling or up sampling. The local convolutions soft mask consists of three Residual Units using the same number of FLOPs. The Attention-56 is used to construct Attention-Encoder-Decoder-56 and Attention-Local-Conv-56 respectively. Results are shown in Table 4. The Attention-Encoder-Decoder-56 network achieves lower test error 5.52%percent5.525.52\\% compared with Attention-Local-Conv-56 network 6.48%percent6.486.48\\% with a considerable margin 0.94%percent0.940.94\\%. The result suggests that the soft attention optimization process will benefit from multi-scale information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_35",
"text": " In this experiment, we show our Residual Attention Network enjoys noise resistant property on CIFAR-10 dataset following the setting of paper . The confusion matrix Q𝑄Q in our experiment is set as follows: Q=(r1−r9⋯1−r91−r9r⋯1−r9⋮⋮⋱⋮1−r91−r9⋯r)10×10𝑄subscriptmatrix𝑟1𝑟9⋯1𝑟91𝑟9𝑟⋯1𝑟9⋮⋮⋱⋮1𝑟91𝑟9⋯𝑟1010Q=\\left(\\begin{matrix}r&\\frac{1-r}{9}&\\cdots&\\frac{1-r}{9}\\\\ \\frac{1-r}{9}&r&\\cdots&\\frac{1-r}{9}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\frac{1-r}{9}&\\frac{1-r}{9}&\\cdots&r\\\\ \\end{matrix}\\right)_{10\\times 10} (7) ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_36",
"text": " where r𝑟r denotes the clean label ratio for the whole dataset. ",
"title": "Residual Attention Network for Image Classification"
},
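The confusion matrix of Eq. (7) can be built in a few lines. This sketch assumes the CIFAR-10 setting with 10 classes and is only meant to make the label-noise setup concrete; the function name is made up for illustration.

```python
import torch

def label_confusion_matrix(r: float, num_classes: int = 10) -> torch.Tensor:
    """Eq. (7): a label stays clean with probability r and flips to each of
    the other classes with probability (1 - r) / (num_classes - 1)."""
    off_diag = (1.0 - r) / (num_classes - 1)
    q = torch.full((num_classes, num_classes), off_diag)
    q.fill_diagonal_(r)
    return q

# e.g. label_confusion_matrix(0.7) keeps 70% of labels clean; each row sums to 1.
```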
{
"id": "1704.06904_all_37",
"text": " We compare ResNet-164 network with Attention-92 network under different noise levels. The Table 5 shows the results. The test error of Attention-92 network is significantly lower than ResNet-164 network with the same noise level. In addition, when we increase the ratio of noise, test error of Attenion-92 declines slowly compared with ResNet-164 network. These results suggest that our Residual Attention Network can perform well even trained with high level noise data. When the label is noisy, the corresponding mask can prevent gradient caused by label error to update trunk branch parameters in the network. In this way, only the trunk branch is learning the wrong supervision information and soft mask branch masks the wrong label. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_38",
"text": " We compare our Residual Attention Network with state-of-the-art methods including ResNet and Wide ResNet on CIFAR-10 and CIFAR-100 datasets. The results are shown in Table 6. Our Attention-452 outperforms all the baseline methods on CIFAR-10 and CIFAR-100 datasets. Note that Attention-92 network achieves 4.99%percent4.994.99\\% test error on CIFAR-10 and 21.71%percent21.7121.71\\% test error on CIFAR-100 compared with 5.46%percent5.465.46\\% and 24.33%percent24.3324.33\\% test error on CIFAR-10 and CIFAR-100 for ResNet-164 network under similar parameter size. In addition, Attention-236 outperforms ResNet-1001 using only half of the parameters. It suggests that our Attention Module and attention residual learning scheme can effectively reduce the number of parameters in the network while improving the classification performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_39",
"text": " In this section, we conduct experiments using ImageNet LSVRC 201220122012 dataset , which contains 1,00010001,000 classes with 1.21.21.2 million training images, 50,0005000050,000 validation images, and 100,000100000100,000 test images. The evaluation is measured on the non-blacklist images of the ImageNet LSVRC 201220122012 validation set. We use Attention-56 and Attention-92 to conduct the experiments. The network structures and hyper parameters can be found in the Table 2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_40",
"text": " Our implementation generally follows the practice in the previous study . We apply scale and aspect ratio augmentation to the original image. A 224×224224224224\\times 224 crop is randomly sampled from an augment image or its horizontal flip, with the per-pixel RGB scale to (0,1)01(0,1) and mean value subtracted and standard variance divided. We adopt standard color augmentation . The network is trained using SGD with a momentum of 0.90.90.9. We set initial learning rate to 0.1. The learning rate is divided by 10 at 200200200k, 400400400k, 500500500k iterations. We terminate training at 530530530k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_41",
"text": " In this experiment, we explore the efficiency of proposed Residual Attention Network. We compare Attention-56 with ResNet-152 . The ResNet-152 has 50 trunk Residual Units and 60.2×106absentsuperscript106\\times 10^{6} parameters compared with 18 trunk Residual Units and 31.9×106absentsuperscript106\\times 10^{6} parameters in Attention-56. We evaluate our model using single crop scheme on the ImageNet validation set and show results in Table 7. The Attention-56 network outperforms ResNet-152 by a large margin with a 0.4%percent0.40.4\\% reduction on top-1 error and a 0.26%percent0.260.26\\% reduction on top-5 error. More importantly, Attention-56 network achieves better performance with only 52% parameters and 56% FLOPs compared with ResNet-152, which suggests that the proposed attention mechanism can significantly improve network performance while reducing the model complexity. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_42",
"text": " In this experiment, we show Residual Attention Network can generalize well using different basic unit. We apply three popular basic units: Residual Unit, ResNeXt , and Inception to construct our Residual Attention Networks. To keep the number of parameters and FLOPs in the same scale, we simplify the Inception. Results are shown in Table 7. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_43",
"text": " When the basic unit is ResNeXt, the AttentionNeXt-56 network performance is the same as ResNeXt-101 while the parameters and FLOPs are significantly fewer than ResNeXt-101. For Inception, The AttentionIncepiton-56 outperforms Inception-ResNet-v1 by a margin with a 0.94% reduction on top-1 error and a 0.21% reduction on top-5 error. The results show that our method can be applied on different network structures. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_44",
"text": " We compare our Attention-92 evaluated using single crop on the ILSVRC 2012 validation set with state-of-the-art algorithms. Table 7 shows the results. Our Attention-92 outperforms ResNet-200 with a large margin. The reduction on top-1 error is 0.6%percent0.60.6\\%. Note that the ResNet-200 network contains 32%percent3232\\% more parameters than Attention-92. The computational complexity of Attention-92 shown in the Table 7 suggests that our network reduces nearly half training time comparing with ResNet-200 by adding attention mechanism and reducing trunk depth. Above results suggest that our model enjoys high efficiency and good performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_45",
"text": " We propose a Residual Attention Network which stacks multiple Attention Modules. The benefits of our network are in two folds: it can capture mixed attention and is an extensible convolutional neural network. The first benefit lies in that different Attention Modules capture different types of attention to guide feature learning. Our experiments on the forms of activation function also validate this point: free form mixed attention will have better performance than constrained (including single) attention. The second benefit comes from encoding top-down attention mechanism into bottom-up top-down feedforward convolutional structure in each Attention Module. Thus, the basic Attention Modules can be combined to form larger network structure. Moreover, residual attention learning allows training very deep Residual Attention Network. The performance of our model surpasses state-of-the-art image classification methods, i.e. ResNet on CIFAR-10 (3.90% error), CIFAR-100 (20.67% error), and challenging ImageNet dataset (0.6% top-1 accuracy improvement) with only 46%percent4646\\% trunk depth and 69%percent6969\\% forward FLOPs (comparing with ResNet-200). In the future, we will exploit different applications of deep Residual Attention Network such as detection and segmentation to better explore mixed attention mechanism for specific tasks. ",
"title": "Residual Attention Network for Image Classification"
}
] |
How is the "average attention sparsity" measured in the experiments?
|
The average attention sparsity is measured by the densities of masks sampled in SBM-Transformer averaged across all attention heads [32].
|
[
32
] |
[
{
"id": "2210.15541_all_0",
"text": " The Transformer architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation , image classification , and protein language modeling . Its key strength stems from the multi-head attention module, where a so-called attention score matrix computes how contextually important one token is to another for all possible token pairs. Each Transformer layer simultaneously pools the token representations based on the attention scores, eventually returning contextualized features without sequentially traversing through the input sequence as its recurrent neural network-based predecessors . ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_1",
"text": " A well-known drawback of the original Transformer is its high computational cost in time and memory that increases quadratically with sequence length. This is due to the full pairwise computation of attention scores, which prohibits applying it in tasks involving long-range dependencies such as document summarization or high-resolution image processing . Many works have thus focused on developing more efficient alternatives by exploiting fixed or learnable attention sparsity patterns (9, 51, 22, 13), low-rank approximations (45, 48), or kernelized attention modules (21, 10). ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_2",
"text": " Even though the efficient alternatives hold theoretical expressibility guarantees , they are far from sufficient, still failing to convince practitioners to replace the original Transformer. We believe this is mostly due to their lack of adaptability. They apply the same modifications to unanimously sparsify all the attention modules across layers, without considering the tasks at hand. Such strategy imposes inductive bias too strongly and often leads to sub-optimal cost vs. performance trade-offs in downstream tasks . In this work, we argue that to retain the utmost potential of Transformers, each attention module should have the ability to flexibly choose between sparse and full attention. This is especially evident when considering many state-of-the-art systems suggest the need for a mixture of dense and sparse attention layers. For example, a qualitative analysis on pretrained BERT showed that lower layers exhibit broad dense attention while upper layers perform focused sparse attention . In the case of GPT-3 , the Transformer blocks are manually arranged to alternate between dense and sparse attention. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_3",
"text": " To contribute to the efficient Transformers lineage, we propose SBM-Transformer, capable of adjusting its attention sparsity data-adaptively based without fully computing the attention score matrix (Figure 1). Leveraging a mixed-membership Stochastic Block Model (SBM) , each attention head samples a bipartite graph connecting queries to keys. Then, the adjacency of the sampled graph is used as an attention mask so that only attention scores corresponding to sampled edges are computed. The overall computational cost is linear in the number of edges, which can range from linear to quadratic in sequence length depending on the data and task under concern. Each attention head is equipped with its own underlying SBM, enabling the model to diversify the attention sparsity across heads and layers. By incorporating a straight-through estimator in the discrete graph-sampling step, SBM-Transformer enjoys end-to-end differentiability and can find the proper attention sparsity based solely upon minimizing the predictive loss. The model can also easily be further regularized by penalizing the number of sampled edges, which results in a lighter model using less computational resources during inference. To the best of our knowledge, our method is the first Transformer architecture that can data-adaptively choose between linear to full attention with respective computational costs. To summarize, our main contributions are as follows: ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_4",
"text": " • We present SBM-Transformer, a novel Transformer of which each attention head can adaptively adjust its attention sparsity as well as computational cost based on the input data. • To demonstrate the benefit of this flexibility, we theoretically prove that SBM-Transformer retains universal approximability, and also stress-test the model under a synthetic task where full attention is required to achieve 100% accuracy. • Evaluations on LRA and GLUE benchmarks show that SBM-Transformer outperforms previous efficient Transformer models as well as the vanilla Transformer with dense attention. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_5",
"text": " In this section we discuss previous efficient Transformer variants and several works similar to ours with respect to adaptively learning sparse attention patterns. We also review several works on SBMs. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_6",
"text": " Many efficient Transformers tackle to reduce the quadratic cost of multi-head attention with different approaches. While we discuss only a handful of representative approaches, a much more comprehensive survey can be found in . The Linear Transformer achieves linear complexity by replacing the softmax with a low-rank kernelized function. Linformer and Nyströmformer use a similar approach by low-rank approximating the attention score matrix. Performer uses positive orthogonal random features to approximate the softmax kernel. Reformer gathers similar tokens together through locality-sensitive hashing (LSH) and performs attention amongst tokens within the same bucket. Of all methods above, our method is most similar to Reformer, in the sense that we adaptively assign queries and keys into clusters and form a low-rank sparse attention pattern. However, our method performs soft-clustering with much less structural constraints, allowing each attention head to represent a wider variety of dependency structure and to adjust its sparsity towards full attention if needed. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_7",
"text": " With respect to flexible training between sparse and dense attention, there exist some works that parameterize how sparse the attention pattern should be based on the input. The Adaptive Sparse Transformer proposed replacing the usual softmax activation with α𝛼\\alpha-entmax, in which the α𝛼\\alpha parameter can be differentiably trained to adjust the activation between softmax and sparsemax activation . SparseBERT uses a differentiable masking technique where each attention mask is sampled from a Gumbel-sigmoid distribution using data-independent mask probability parameters. While these methods possess the flexibility to adjust between sparse and full attention based on data, they still require full computation of the attention score matrix before sparsification, and hence are unable to leverage the learned sparsity towards better model efficiency. To the best of our knowledge, ours is the first work to be able to adaptively tune its attention sparsity between sparse to full attention without requiring the explicit computation of the attention score matrix, thereby avoiding quadratic cost when possible. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_8",
"text": " The Stochastic Block Model (SBM) is a generative model that encodes the latent structure of graphs by grouping nodes into clusters. By modeling the cluster-membership of each node as well as inter-cluster relationships, SBMs can represent a wide variety of graph structures, which is a feature especially useful for generating new graphs or predicting missing edges in noisy data . The standard SBM assigns each node to a single cluster, and the probability of an edge between two nodes strictly depends on the corresponding clusters. Several structural extensions include overlapping SBM and mixed-membership SBM , which allow each node to be assigned to multiple clusters. The underlying SBM used by our framework mostly resembles these two variants, while the edge probability is modeled by a nonlinear function of two node embeddings rather than a bilinear one. There exist many other extensions including degree-corrected SBM for multi-graphs and hierarchical SBM for multiplex-graphs. Further details can be found in a recent survey . ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_9",
"text": " We first introduce the full attention mechanism used in the original Transformer as well as masked attention which will serve as a backbone of our approach. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_10",
"text": " In vanilla Transformer , each attention head takes a sequence of token features as input 𝑿∈ℝn×d𝑿superscriptℝ𝑛𝑑\\bm{X}\\in\\mathbb{R}^{n\\times d} where n𝑛n is the sequence length and d𝑑d the embedding dimension. Weight parameters 𝑾Q,𝑾K∈ℝd×dhsuperscript𝑾𝑄superscript𝑾𝐾superscriptℝ𝑑subscript𝑑ℎ\\bm{W}^{Q},\\bm{W}^{K}\\in\\mathbb{R}^{d\\times d_{h}} and 𝑾V∈ℝd×dhsuperscript𝑾𝑉superscriptℝ𝑑subscript𝑑ℎ\\bm{W}^{V}\\in\\mathbb{R}^{d\\times d_{h}} with head-dimension dhsubscript𝑑ℎd_{h} first maps the input features 𝑿𝑿\\bm{X} into query 𝑸𝑸\\bm{Q}, key 𝑲𝑲\\bm{K}, and value 𝑽𝑽\\bm{V}, respectively. Then, the attention score matrix is computed with scaled dot-product of queries and keys followed by row-wise softmax activation σ(⋅)𝜎⋅\\sigma(\\cdot). Note that explicit computation of this matrix is the main bottleneck of full attention, incurring 𝒪(n2)𝒪superscript𝑛2\\mathcal{O}(n^{2}) asymptotic cost in both time and memory. The value features 𝑽𝑽\\bm{V} are then pooled based on the attention scores, returning the output token representations. Altogether, the operation performed by each attention head can be written as 𝑸=𝑿𝑾Q,𝑲=𝑿𝑾K,𝑽=𝑿𝑾Vformulae-sequence𝑸𝑿superscript𝑾𝑄formulae-sequence𝑲𝑿superscript𝑾𝐾𝑽𝑿superscript𝑾𝑉\\displaystyle\\bm{Q}=\\bm{X}\\bm{W}^{Q},\\;\\;\\bm{K}=\\bm{X}\\bm{W}^{K},\\;\\;\\bm{V}=\\bm{X}\\bm{W}^{V} (1) Attn(𝑿)=σ(𝑸𝑲Tdh)𝑽.Attn𝑿𝜎𝑸superscript𝑲𝑇subscript𝑑ℎ𝑽\\displaystyle\\texttt{Attn}(\\bm{X})=\\sigma\\left(\\dfrac{\\bm{Q}\\bm{K}^{T}}{\\sqrt{d_{h}}}\\right)\\bm{V}. (2) ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
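For reference, Eqs. (1)-(2) for a single head amount to a few tensor operations. The function below is a generic sketch with assumed (n, d) inputs and explicit weight matrices rather than any particular library's attention API.

```python
import math
import torch

def full_attention(x, wq, wk, wv):
    """Eqs. (1)-(2): single-head scaled dot-product attention.
    x: (n, d) token features; wq, wk, wv: (d, d_h) projection weights."""
    q, k, v = x @ wq, x @ wk, x @ wv
    # The (n, n) score matrix below is the quadratic bottleneck discussed above.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return torch.softmax(scores, dim=-1) @ v  # (n, d_h) output representations
```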
{
"id": "2210.15541_all_11",
"text": " One way to remove the quadratic bottleneck from the attention score matrix is to apply a binary mask 𝑴∈{0,1}n×n𝑴superscript01𝑛𝑛\\bm{M}\\in\\{0,1\\}^{n\\times n} and compute the scaled dot-products 𝑸i𝑲jT/dhsubscript𝑸𝑖superscriptsubscript𝑲𝑗𝑇subscript𝑑ℎ\\bm{Q}_{i}\\bm{K}_{j}^{T}/\\sqrt{d_{h}} only if 𝑴ij=1subscript𝑴𝑖𝑗1\\bm{M}_{ij}=1. In presence of an attention mask, the operation is modified to Attnmask(𝑿,𝑴)=σ𝑴(𝑴⊙𝑸𝑲Tdh)𝑽subscriptAttnmask𝑿𝑴subscript𝜎𝑴direct-product𝑴𝑸superscript𝑲𝑇subscript𝑑ℎ𝑽\\displaystyle\\texttt{Attn}_{\\text{mask}}(\\bm{X},\\bm{M})=\\sigma_{\\bm{M}}\\left(\\bm{M}\\odot\\dfrac{\\bm{Q}\\bm{K}^{T}}{\\sqrt{d_{h}}}\\right)\\bm{V} (3) σ𝑴(𝑨)ij≔{exp(𝑨ij)∑k∈{k′|𝑴ik′=1}exp(𝑨ik)if𝑴ij=10otherwise≔subscript𝜎𝑴subscript𝑨𝑖𝑗casessubscript𝑨𝑖𝑗subscript𝑘conditional-setsuperscript𝑘′subscript𝑴𝑖superscript𝑘′1subscript𝑨𝑖𝑘ifsubscript𝑴𝑖𝑗10otherwise\\displaystyle\\sigma_{\\bm{M}}(\\bm{A})_{ij}\\coloneqq\\begin{cases}\\dfrac{\\exp(\\bm{A}_{ij})}{\\sum_{k\\in\\{k^{\\prime}|\\bm{M}_{ik^{\\prime}}=1\\}}\\exp(\\bm{A}_{ik})}&\\text{if}\\;\\;\\bm{M}_{ij}=1\\\\ \\hfil 0&\\text{otherwise}\\end{cases} (4) where ⊙direct-product\\odot indicates entry-wise multiplication. Note that the masked-softmax σ𝑴(⋅)subscript𝜎𝑴⋅\\sigma_{\\bm{M}}(\\cdot) operator only computes unmasked terms, ensuring that each (i,j)𝑖𝑗(i,j)-th attention score survives as nonzero if and only if 𝑴ij=1subscript𝑴𝑖𝑗1\\bm{M}_{ij}=1. This is thus equivalent to filling in the (i,j)𝑖𝑗(i,j)-th attention score with −∞-\\infty if 𝑴ij=0subscript𝑴𝑖𝑗0\\bm{M}_{ij}=0, then applying the standard softmax operator. Most sparsity-based efficient Transformers fall under this formulation, while using different methods to either manually fix or learn the mask 𝑴𝑴\\bm{M}. For instance, local attention (9, 3, 51) with a sliding window sets 𝑴ij=1subscript𝑴𝑖𝑗1\\bm{M}_{ij}=1 if |i−j|<c𝑖𝑗𝑐|i-j|<c for some context window size c𝑐c while Reformer sets 𝑴ij=1subscript𝑴𝑖𝑗1\\bm{M}_{ij}=1 if 𝑸isubscript𝑸𝑖\\bm{Q}_{i} and 𝑲jsubscript𝑲𝑗\\bm{K}_{j} are hashed into the same bucket. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
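A dense illustration of the masked attention in Eqs. (3)-(4) is given below. It is only a sketch: `mask` is assumed to be a boolean (n, n) tensor, and setting masked-out scores to -inf reproduces the masked softmax, whereas the paper's efficiency gain comes from never materializing masked entries at all.

```python
import math
import torch

def masked_attention(x, wq, wk, wv, mask):
    """Eqs. (3)-(4) with a dense boolean attention mask, for illustration."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    scores = scores.masked_fill(~mask, float("-inf"))  # drop masked query-key pairs
    attn = torch.softmax(scores, dim=-1)
    attn = torch.nan_to_num(attn)  # a fully-masked row becomes all zeros, matching Eq. (4)
    return attn @ v
```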
{
"id": "2210.15541_all_12",
"text": " Here we discuss the details of SBM-Transformer (Figure 2). We first illustrate the forward step of our attention module and how the underlying SBM of each head, from which we sample our attention masks, is parameterized by the input tensors. We then discuss how the model enables end-to-end differentiability despite the discrete graph sampling step. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_13",
"text": " In our framework, we view the attention mask 𝑴𝑴\\bm{M} as an adjacency matrix of a bipartite graph that connects queries to keys, and let each attention head sample an adjacency matrix that best represents the contextual dependencies amongst input tokens. In order to efficiently sample adjacency matrices while avoiding the quadratic cost, the distribution of graphs must first be parameterized with a sub-quadratic number of latent variables. Stochastic Block Models fit perfectly for our purpose as it models graphs that are low-rank structured with k𝑘k latent clusters, allowing full parameterization using 𝒪(nk)𝒪𝑛𝑘\\mathcal{O}(nk) memory. More concretely, the SBM distribution is defined by two nonnegative node-to-cluster memberships 𝒀,𝒁∈ℝ+n×k𝒀𝒁superscriptsubscriptℝ𝑛𝑘\\bm{Y},\\bm{Z}\\in\\mathbb{R}_{+}^{n\\times k} and a so-called block matrix 𝑩∈ℝ+k×k𝑩superscriptsubscriptℝ𝑘𝑘\\bm{B}\\in\\mathbb{R}_{+}^{k\\times k} that stores the inter-cluster connection probabilities. The probability of node i𝑖i being connected to node j𝑗j is computed as p(i,j)=𝒀i𝑩𝒁jT𝑝𝑖𝑗subscript𝒀𝑖𝑩superscriptsubscript𝒁𝑗𝑇p(i,j)=\\bm{Y}_{i}\\bm{B}\\bm{Z}_{j}^{T}. Equivalently, the expectation of the adjacency matrix sampled from 𝑨∼SBM(𝒀,𝑩,𝒁)similar-to𝑨𝑆𝐵𝑀𝒀𝑩𝒁\\bm{A}\\sim SBM(\\bm{Y},\\bm{B},\\bm{Z}) can be written as 𝔼(𝑨)=𝒀𝑩𝒁T𝔼delimited-()𝑨𝒀𝑩superscript𝒁𝑇\\mathbb{E}(\\bm{A})=\\bm{Y}\\bm{B}\\bm{Z}^{T}. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_14",
"text": " For proper parameterization of the SBM, we must infer the nonnegative node-memberships and block matrix from the queries and keys. To do so, we equip each attention head a 2-layer MLPdh→dhsubscriptMLP→subscript𝑑ℎsubscript𝑑ℎ\\text{MLP}_{d_{h}\\to d_{h}} with ReLU activation, and a set of k𝑘k trainable cluster-embeddings 𝑪∈ℝk×dh𝑪superscriptℝ𝑘subscript𝑑ℎ\\bm{C}\\in\\mathbb{R}^{k\\times d_{h}}. First, our model computes the block matrix 𝑺^∈ℝ+k×k^𝑺superscriptsubscriptℝ𝑘𝑘\\smash{\\hat{\\bm{S}}}\\in\\mathbb{R}_{+}^{k\\times k} by taking dot products amongst cluster-embeddings 𝑪𝑪\\bm{C} followed by a 2-dimensional softmax activation. The node embeddings are obtained by processing each query and key through the MLPdh→dhsubscriptMLP→subscript𝑑ℎsubscript𝑑ℎ\\text{MLP}_{d_{h}\\to d_{h}}, mapping token representations into the node representation space. The memberships of query and key nodes, which we denote by 𝑸^^𝑸\\smash{\\hat{\\bm{Q}}} and 𝑲^^𝑲\\smash{\\hat{\\bm{K}}}, are then inferred by taking dot products of node and cluster embeddings, followed by a sigmoid function. The block matrix 𝑺^^𝑺\\smash{\\hat{\\bm{S}}}, query node-memberships 𝑸^^𝑸\\smash{\\hat{\\bm{Q}}}, and key node-memberships 𝑲^^𝑲\\smash{\\hat{\\bm{K}}} altogether provide a well-defined parameterization for the SBM. Thus, a bipartite graph adjacency 𝑴∈{0,1}n×m𝑴superscript01𝑛𝑚\\bm{M}\\in\\{0,1\\}^{n\\times m} can be sampled from 𝑴∼SBM(𝑸^,𝑺^,𝑲^)similar-to𝑴𝑆𝐵𝑀^𝑸^𝑺^𝑲\\bm{M}\\sim SBM(\\smash{\\hat{\\bm{Q}}},\\smash{\\hat{\\bm{S}}},\\smash{\\hat{\\bm{K}}}) with expectation 𝔼(𝑴)=𝑸^𝑺^𝑲^T𝔼delimited-()𝑴^𝑸^𝑺superscript^𝑲𝑇\\mathbb{E}(\\bm{M})=\\smash{\\hat{\\bm{Q}}}\\smash{\\hat{\\bm{S}}}\\smash{\\hat{\\bm{K}}}^{T}: the probability of connecting query 𝑸isubscript𝑸𝑖\\bm{Q}_{i} to key 𝑲jsubscript𝑲𝑗\\bm{K}_{j} equals p(i,j)=𝑸^i𝑺^𝑲^jT𝑝𝑖𝑗subscript^𝑸𝑖^𝑺superscriptsubscript^𝑲𝑗𝑇p(i,j)=\\smash{\\hat{\\bm{Q}}}_{i}\\smash{\\hat{\\bm{S}}}\\smash{\\hat{\\bm{K}}}_{j}^{T}. Formally, the sampling procedure can be written as ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_15",
"text": " 𝑺^^𝑺\\displaystyle\\smash{\\hat{\\bm{S}}} =softmax(𝑪𝑪T)absentsoftmax𝑪superscript𝑪𝑇\\displaystyle=\\texttt{softmax}(\\bm{C}\\bm{C}^{T}) (5) 𝑸^^𝑸\\displaystyle\\smash{\\hat{\\bm{Q}}} =sigmoid(MLPdh→dh(𝑸)𝑪T)absentsigmoidsubscriptMLP→subscript𝑑ℎsubscript𝑑ℎ𝑸superscript𝑪𝑇\\displaystyle=\\texttt{sigmoid}(\\text{MLP}_{d_{h}\\to d_{h}}(\\bm{Q})\\bm{C}^{T}) (6) 𝑲^^𝑲\\displaystyle\\smash{\\hat{\\bm{K}}} =sigmoid(MLPdh→dh(𝑲)𝑪T)absentsigmoidsubscriptMLP→subscript𝑑ℎsubscript𝑑ℎ𝑲superscript𝑪𝑇\\displaystyle=\\texttt{sigmoid}(\\text{MLP}_{d_{h}\\to d_{h}}(\\bm{K})\\bm{C}^{T}) (7) 𝑴𝑴\\displaystyle\\bm{M} ∼SBM(𝑸^,𝑺^,𝑲^)similar-toabsent𝑆𝐵𝑀^𝑸^𝑺^𝑲\\displaystyle\\sim SBM(\\smash{\\hat{\\bm{Q}}},\\smash{\\hat{\\bm{S}}},\\smash{\\hat{\\bm{K}}}) (8) ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
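Eqs. (5)-(8) can be traced with a few tensor operations. The sketch below is not the authors' implementation: the "2-dimensional softmax" is read here as a softmax over all k×k entries of CC^T, `mlp` is the head's 2-layer MLP passed in as a callable, and the Bernoulli sampling at the end is a naive O(nm) stand-in for the fastRG sampler the paper actually uses.

```python
import torch

def sbm_parameters(q, k, c, mlp):
    """Eqs. (5)-(7): block matrix and node-to-cluster memberships.
    q: (n, d_h) queries, k: (m, d_h) keys, c: (num_clusters, d_h) cluster embeddings."""
    num_clusters = c.shape[0]
    s_hat = torch.softmax((c @ c.T).reshape(-1), dim=0).reshape(num_clusters, num_clusters)
    q_hat = torch.sigmoid(mlp(q) @ c.T)   # query memberships, shape (n, num_clusters)
    k_hat = torch.sigmoid(mlp(k) @ c.T)   # key memberships, shape (m, num_clusters)
    return q_hat, s_hat, k_hat

def sample_mask(q_hat, s_hat, k_hat):
    """Eq. (8), naive version: keep edge (i, j) with probability
    p(i, j) = q_hat_i @ s_hat @ k_hat_j^T, i.e. E[M]_ij."""
    prob = q_hat @ s_hat @ k_hat.T        # E[M], materialized densely only for illustration
    return torch.bernoulli(prob), prob
```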
{
"id": "2210.15541_all_16",
"text": " For the last sampling step, we incorporate a fast random graph sampling algorithm fastRG (Alg. 1, ) that can sample graphs from a SBM in time and memory asymptotically linear in the number of edges. One advantage of fastRG is that each edge can be sampled in parallel, allowing high efficiency with the help of multiprocessing. A more significant feature of the method is that the number of edges, which determines the overall cost, is sampled from a Poisson distribution with input-dependent mean (Line 4). Thus, the model can dynamically adjust its computational cost between linear and quadratic in sequence length based on the data. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_17",
"text": " Figure 3 shows example placements of nodes and clusters on the dhsubscript𝑑ℎd_{h}-dimensional space to show how the sparse structure is determined. If all nodes and clusters are gathered closely, then all entries in 𝑸^^𝑸\\smash{\\hat{\\bm{Q}}} and 𝑲^^𝑲\\smash{\\hat{\\bm{K}}} become close to 1, resulting in p(i,j)≈1𝑝𝑖𝑗1p(i,j)\\approx 1 for all i,j𝑖𝑗i,j and hence a dense 𝑴𝑴\\bm{M}. If clusters are well-separated but each surrounded by some set of nodes, 𝑺^^𝑺\\smash{\\hat{\\bm{S}}} becomes close to diagonal while each row in 𝑸^^𝑸\\smash{\\hat{\\bm{Q}}} and 𝑲^^𝑲\\smash{\\hat{\\bm{K}}} is close to a one-hot vector indicating the cluster nearby. Such setting leads to a block diagonal mask similar to LSH bucketing of Reformer . Lastly, if all clusters are far apart from the nodes, both 𝑸^^𝑸\\smash{\\hat{\\bm{Q}}} and 𝑲^^𝑲\\smash{\\hat{\\bm{K}}} approximately equal zero, zeroing out all the edge probabilities. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_18",
"text": " The graph sampling procedure is naturally a discrete operation. Thus, naive backpropagation cannot learn the proper parameterization for the SBM that minimizes the predictive loss. To cope with this non-differentiability, we incorporate a Straight-Through Estimator (STE) to pass the gradient beyond the discrete sampling step. The STE enables providing the gradient ∂ℒ/∂𝑴ijℒsubscript𝑴𝑖𝑗\\partial\\mathcal{L}/\\partial\\bm{M}_{ij} to the probability for each sampled edge (i,j)𝑖𝑗(i,j) (Eqn. 9). It works as if we had used a continuous mask 𝑴⊙𝔼(𝑴)direct-product𝑴𝔼delimited-()𝑴\\bm{M}\\odot\\mathbb{E}(\\bm{M}) that stores the probability of each sampled edge instead of the binary mask 𝑴𝑴\\bm{M} during forward propagation. This way, the probabilities of sampled edges can be learned end-to-end: the gradients provide information on whether each sampled edge was useful or not for prediction. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_19",
"text": " ∂ℒ∂pij≔∂ℒ∂𝑴ij={∂ℒ∂𝑨ij⋅𝑸i𝑲jTdhif 𝑴ij=10otherwise where 𝑨≔𝑴⊙𝑸𝑲Tdh≔ℒsubscript𝑝𝑖𝑗ℒsubscript𝑴𝑖𝑗cases⋅ℒsubscript𝑨𝑖𝑗subscript𝑸𝑖superscriptsubscript𝑲𝑗𝑇subscript𝑑ℎif subscript𝑴𝑖𝑗10otherwise where 𝑨≔direct-product𝑴𝑸superscript𝑲𝑇subscript𝑑ℎ\\displaystyle\\dfrac{\\partial\\mathcal{L}}{\\partial p_{ij}}\\coloneqq\\dfrac{\\partial\\mathcal{L}}{\\partial\\bm{M}_{ij}}=\\begin{cases}\\dfrac{\\partial\\mathcal{L}}{\\partial\\bm{A}_{ij}}\\cdot\\dfrac{\\bm{Q}_{i}\\bm{K}_{j}^{T}}{\\sqrt{d_{h}}}&\\text{if }\\bm{M}_{ij}=1\\\\ \\hfil 0&\\text{otherwise}\\end{cases}\\text{ where }\\bm{A}\\coloneqq\\bm{M}\\odot\\dfrac{\\bm{Q}\\bm{K}^{T}}{\\sqrt{d_{h}}} (9) ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
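The straight-through behavior described above and formalized in Eq. (9) is often written as a small detach trick. The function below is a dense illustration only, with a hypothetical name: the paper applies the estimator to the sparse set of sampled edges, whereas this sketch materializes everything densely for clarity.

```python
import math
import torch

def ste_masked_scores(q, k, mask, prob):
    """Forward pass uses the sampled binary mask M; backward pass behaves as if
    the continuous mask M * E[M] had been used, so gradients reach the edge
    probabilities exactly as in Eq. (9)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    soft = mask * prob                         # M ⊙ E[M], differentiable w.r.t. prob
    ste_mask = soft + (mask - soft).detach()   # numerically equals M in the forward pass
    return ste_mask * scores                   # masked scores A of Eq. (9)
```

Since the gradient of `ste_mask` with respect to `prob` is the binary mask itself, only sampled edges receive gradient, matching the case split in Eq. (9).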
{
"id": "2210.15541_all_20",
"text": " While this approach enables backpropagation in the same 𝒪(m)𝒪𝑚\\mathcal{O}(m) cost as in the forward step, this comes at the expense of not being able to propagate information through edges that were not sampled. This can be problematic when an edge probability accidentally collapses to zero, after which the edge becomes unlikely to ever be sampled even when it may be useful for the prediction task at hand. Therefore, we add a small perturbation δ>0𝛿0\\delta>0 to each edge probability pijsubscript𝑝𝑖𝑗p_{ij}, allowing the model to explore new edges and resuscitate their sampling probabilities if necessary. We find that a δ𝛿\\delta as small as 0.010.010.01 significantly helps in practice, and thus use this edge exploration scheme during training for our experiments. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_21",
"text": " Note that the gradient ∂ℒ/∂pijℒsubscript𝑝𝑖𝑗\\partial\\mathcal{L}/\\partial p_{ij} can be positive, which suppresses the probability of edge (i,j)𝑖𝑗(i,j). At first, it may seem counter-intuitive why the model would ever limit itself to using fewer edges during training without any sparsity-based regularizations. One explanation is that masked attention provides an easy way to reduce attention scores under finite head dimensions. Under full attention, it is known that the representational space of attention score matrices is limited by the head dimension and softmax activation . This limitation inevitably introduces unwanted noise in the attention scores especially when working with long sequences. In SBM-Transformer, however, the structural sparsity in masked attention introduces another dimension that induces a larger space of row-stochastic matrices (full attention is a special case of masked attention where 𝑴ij=1subscript𝑴𝑖𝑗1\\bm{M}_{ij}=1 for all i,j𝑖𝑗i,j). Therefore, it is reasonable that the model may encourage sparsity to leverage the additional expressiveness assuming the loss landscape has local optima within the sparse attention regime. Our experiments on the LRA benchmark show that this is indeed the case, as our SBM-Transformer converges to an average attention sparsity of 20% to 30% while outperforming Transformer with full attention. We also show in the experiment that we can easily incorporate additional regularization that further encourages sparse attention masks. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_22",
"text": " Leveraging previous work on the theoretical expressiveness of sparse attention (50, 51), we show that SBM-Transformer with a small modification111Here we consider a variant of SBM-Transformer where self-loops are added manually (i.e. 𝑴ii=1subscript𝑴𝑖𝑖1\\bm{M}_{ii}=1 for all i𝑖i). While this is useful in theoretical analysis, we find that not having self-loops slightly helps in empirical performance and hence omit self-loops for the main experiments. retains the same level of expressibility as full attention. Specifically, we show that the low-rank structure of the underlying SBMs does not degrade the expressive power of Transformer, and that SBM-Transformer can universally approximate arbitrary functions with 𝒪(n)𝒪𝑛\\mathcal{O}(n) connections. For brevity, we provide a rough overview of the proof and defer further details to Appendix A. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_23",
"text": " According to the main theorem of Yun et al. (2020) , SBM-Transformer achieves universal approximability if 1) each node attends to itself, 2) the aggregation of all attention patterns contains a Hamiltonian path, and 3) there exists a path between all node pairs. While the first condition is trivially true due to our modification, the other two conditions require careful choice of three SBMs. Here we first parameterize one SBM to hard-assign tokens into k𝑘k equally-sized clusters, inducing a block-diagonal attention pattern. The other two SBMs are parameterized such that the two graphs together form a star graph with k𝑘k global relay tokens. Combining the three attention patterns lead to a parameterization of SBM-Transformer that satisfies all three conditions, hence proving the theorem. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_24",
"text": " For empirical evaluations, we first use a synthetic task to show that our model is flexible enough to learn towards full attention when needed in contrast to previous works. We then experiment on Long Range Arena (LRA) , a benchmark widely used to assess the capacity of efficient Transformers in learning long-range contexts across different modalities. Lastly, we show results on the GLUE benchmark to assess the performance of SBM-Transformer in a downstream NLP setting. All experiments were run on a remote GCP server equipped with 16 NVIDIA A100 Tensor Core GPUs. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_25",
"text": " We formulate a token-level binary classification task as follows: each input sequence consists of N𝑁N integers, each of which is uniformly sampled from {1,2,…,N}12…𝑁\\{1,2,\\dots,N\\}. We use N=256𝑁256N=256 in our setup. The prediction target is a sequence of equal length, where each token is labeled 1 if there exists a duplicate somewhere within the sequence, and 0 otherwise. Below is a simple example with N=8𝑁8N=8 that illustrates the task. We measure the performance of models via binary cross-entropy loss. Input: 1 4 3 7 3 2 3 1 ⇒⇒\\Rightarrow Target: 1 0 1 0 1 0 1 1 ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
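The labeling rule above can be reproduced with a short helper. The function name and batch layout are choices of this sketch rather than the authors' code, and the O(n²) pairwise comparison is used only for clarity.

```python
import torch

def make_duplicate_batch(batch_size: int, n: int = 256):
    """One batch of the synthetic task: tokens are uniform over {1, ..., N};
    a position's target is 1 iff its value occurs more than once in the sequence."""
    x = torch.randint(1, n + 1, (batch_size, n))
    counts = (x.unsqueeze(2) == x.unsqueeze(1)).sum(dim=2)  # per-position occurrence counts
    y = (counts > 1).long()
    return x, y

# The paper's example with N = 8: [1, 4, 3, 7, 3, 2, 3, 1] -> [1, 0, 1, 0, 1, 0, 1, 1]
```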
{
"id": "2210.15541_all_26",
"text": " For this task, we compare SBM-Transformer with k=128𝑘128k=128 clusters against various efficient Transformers: Linear Transformer , Linformer , Reformer , Performer , and Nyströmformer . Across all methods, we use a single-layer and single-head architecture with 32 hidden dimensions. Note that due to this constrained setting, the sole head must perform full attention to compare each token to all the others in order to attain 100% accuracy. All models are trained for 2000 epochs where a new batch of sequences is sampled on-the-fly at each epoch. We use a batch size of 256 and learning rate of 1e-3. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_27",
"text": " Figure 4 shows the training loss curves of each baseline method as well as SBM-Transformer. Full attention quickly converges to 100% accuracy, which is expected as it computes all possible pairwise interactions by default. Other models that apply low-rank or kernelized attention fail to achieve the same level of accuracy, due to limited expressibility under the constrained setting. Though SBM-Transformer converges more slowly compared to full-attention, it demonstrates the ability to drive itself towards full-attention, eventually attaining zero loss. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_28",
"text": " To demonstrate that the flexible inductive bias of SBM-Transformer is effective for modeling long-range dependencies, we test SBM-Transformer against previous work on the LRA benchmark. We also test how the performance is affected with respect to applying a sparsity-based regularizer. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_29",
"text": " LRA consists of five different testbeds with varying modalities: ListOps is a 10-way classification task to map a sequence of single-digit numbers and 4 different set operations, to its corresponding solution. Text is a binary classification task where byte-level IMDB movie reviews must be classified into one of positive or negative sentiments. Retrieval is also a char-level binary classification task, where two sequences from ACL Anthology papers are given as input, and the model must predict whether there exists a citation link between them. Image is a 10-way classification task mapping flattened pixel-sequences from CIFAR-10 to its class. Pathfinder provides flattened pixel-sequences from an image and the model must decide whether two circles in the image are connected by a dashed line. For this benchmark, we use the PyTorch implementation of LRA provided by the authors of Nyströmformer and adhere to the same train-test splits. Performance in all five tasks is measured using classification accuracy. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_30",
"text": " We compare SBM-Transformer against the same baselines as with the synthetic task above. For fair comparison, we set all Transformer models to use the default setting used in , which fixes 2 layers, 2 attention heads, and 64 embedding dimensions. For SBM-Transformer, we use k=128𝑘128k=128 clusters. The output token representations are mean-pooled to obtain the sequence representation for all tasks. More details on the architecture setups can be found in Appendix C. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_31",
"text": " Table 8 shows the test accuracies of each method. Our SBM-Transformer achieves the best overall performance, ranking first in two tasks, and second in one other. SBM-Transformer also outperforms full attention in all five tasks while computing 30% or less attention scores on average, which supports our claim that masked attention with partial attention score computations can be preferred over full attention depending on the task. With respect to the attention mask structure, we find that flexibility of SBM is indeed beneficial, as Reformer struggles in ListOps, most likely due to the inability of block-diagonal masks to model hierarchical contexts. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_32",
"text": " To test if the model can effectively learn under a constraint on the computational cost, we also test the model under a sparsity-based regularizer that discourages excessive use of query-key edges. We penalize each sampled edge by adding to the predictive loss a weighted regularization term λℒs𝜆subscriptℒ𝑠\\lambda\\mathcal{L}_{s}, where ℒssubscriptℒ𝑠\\mathcal{L}_{s} denotes the average mask density across all attention heads. Table 9 shows the performance of SBM-Transformer across varying regularization weights. Under strong regularization, the model surprisingly retains competitive performance while significantly reducing the average mask density. This indicates that similar local optima are shared across regimes with varying attention density in the loss landscape, and the regularization term is able to drive the model towards finding optimal attention scores with smaller density. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_33",
"text": " Furthermore, we compare computational costs during inference by measuring FLOP count and peak memory usage. For SBM-Transformer, we test the model trained under λ=10−1𝜆superscript101\\lambda=10^{-1}. Due to lack of support for sparse tensor operations in existing FLOP-counters, we measure FLOP counts by manually enumerating through each tensor operation. Table 3 shows that SBM-Transformer is comparably efficient across all tasks except for Text, where SBM-Transformer showed the largest average mask density. Note that while the cost of other baselines are fixed after initialization, the cost of SBM-Transformer is data-adaptive and can vary input-by-input. Further analysis and qualitative examples demonstrating the input-dependent attention mask densities can be found in Appendix C. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_34",
"text": " We also compare the densities of masks sampled at each layer of SBM-Transformer during test time to examine whether our model is capable of diversifying sparsity across layers for better performance. Recall that this allows models to gather information in different levels, as seen in pretrained BERT where lower layers focus on the overall content via dense attention while upper layers gather syntactic information with tree-like patterns . For each of the five tasks, we pick two highest-performing models (one for unregularized and another for regularized) for measurement. Figure 5 shows the average layer-wise mask densities of unregularized and regularized SBM-Transformers across different tasks. We find that under no regularization, the two layers can differ by more than 10% in tasks such as ListOps and Image. This may be due to the hierarchical and compositional structure of the two tasks. We also find that the variation is relatively low in Text with densities around 25%, indicating that the task requires broad attention overall. Lastly, the standard deviation is extremely large in upper layers for Pathfinder, showing that it samples a wide variety of masks depending on the input. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_35",
"text": " To check whether its strong performance demonstrated in LRA extends to the downstream NLP setting as well, we evaluate SBM-Transformer against baselines on the GLUE benchmark . ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_36",
"text": " We consider four NLP tasks in GLUE . SST-2 consists of movie reviews the model must predict their positive or negative sentiments. For QQP , the task is to determine whether one question is a paraphrase of the other given a pair of questions. MNLI consists of sentence pairs, each with a target label indicating whether the two sentences are connected through entailment, contradiction, or neither. QNLI consists of sentence-question pairs and the task is to determine whether the sentence contains an answer to the question. Each task is formulated as sequence classification, and we measure performance by F1 score on the respective validation sets. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_37",
"text": " Following previous work , we arrange a small variant of BERT with 4 layers, 8 attention heads, and 512 embedding dimensions. We replace full attention with each attention module used in previous experiments. For SBM-Transformer, we use k=128𝑘128k=128 clusters without sparsity regularization (i.e. λ=0𝜆0\\lambda=0). Here, we find that adding local attention significantly boosts performance, and thus fix a sliding window of size 64 to SBM-Transformer. We first pretrain each model under the masked language modeling objective for 50 epochs on a corpus with text from English Wikipedia, BookCorpus , and RealNews . We then finetune each pretrained model for 5 epochs on the GLUE training sets. More details on the architecture and training setup can be found in Appendix C. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_38",
"text": " Table 4 reports the F1 scores of each method on different NLP tasks. SBM-Transformer performs competitively against full attention overall, and outperforms all baselines in SST-2 and QQP. We also find that the fine-tuned SBM-Transformer models use 13.5% dense attention masks on average across all tasks, showing that the model can encode useful information from input sentences effectively under highly sparse attention. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_39",
"text": " We propose SBM-Transformer, an efficient Transformer that can data-adaptively choose its attention sparsity between sparse and full attention without the need to explicitly compute the full attention score matrix. Theoretically, we show that our model enjoys the same expressibility as the original Transformer due to the flexibility of the latent SBM. Empirical experiments on LRA and GLUE show that our model performs competitively against previous state-of-the-art efficient Transformers. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_40",
"text": " Nonetheless, there are limitations due to sparse tensor operations being less optimized on GPU kernels. In the LRA experiments, we found that SBM-Transformer can result in longer runtimes compared to dense counterparts while its memory usage is much lower. While previous sparsity-based attention mechanisms with block-sparse attention are much more amenable for GPU computation (51, 9, 3), our work requires an architecture with better workload balancing and acceleration under unstructured sparsity, for which there is ongoing work (46, 54). ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
},
{
"id": "2210.15541_all_41",
"text": " We still believe this work is valuable as it is the first approach to induce per-example attention sparsity, allowing the model to adjust its computational cost based on the input. The cost being dependent on the number of edges also allows practitioners to easily impose constraints based on the available computational resources. We hope to see more GPU-friendly tensor operations optimized for fine-grained sparsity in the future, at which point the value of this work will increase even further. As we propose a foundational replacement for the scaled dot-product attention module in the Transformer architecture, we do not expect any immediate negative societal impact due to this work. ",
"title": "Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost"
}
] |
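Note: the SBM-Transformer passages above describe attention whose cost scales with the number of sampled (query, key) edges rather than with the full n×n score matrix, and point out that such unstructured sparsity is less GPU-friendly than dense or block-sparse kernels. The sketch below is a minimal, hypothetical PyTorch illustration of edge-restricted attention; it is not the authors' implementation, and the function name, edge list, and tensor shapes are assumptions for illustration only.

```python
import torch

def edge_sparse_attention(q, k, v, edges):
    """Scaled dot-product attention evaluated only at sampled (query, key) edges.
    q, k, v: (n, d) tensors; edges: (2, m) long tensor of (query_idx, key_idx) pairs."""
    n, d = q.shape
    qi, kj = edges                                       # (m,), (m,)
    scores = (q[qi] * k[kj]).sum(-1) / d ** 0.5          # m dot products instead of n*n
    # Softmax over each query's own edge set, done with scatter ops for stability.
    max_per_q = torch.full((n,), float("-inf")).scatter_reduce(0, qi, scores, reduce="amax")
    exp = torch.exp(scores - max_per_q[qi])
    denom = torch.zeros(n).scatter_add(0, qi, exp)
    attn = exp / denom[qi].clamp_min(1e-12)              # (m,) normalized edge weights
    out = torch.zeros(n, v.shape[-1]).index_add(0, qi, attn[:, None] * v[kj])
    return out

q, k, v = (torch.randn(6, 8) for _ in range(3))
edges = torch.tensor([[0, 0, 1, 2, 3, 4, 5],             # query indices
                      [0, 3, 1, 2, 3, 4, 5]])            # key indices
print(edge_sparse_attention(q, k, v, edges).shape)       # torch.Size([6, 8])
```

The gather/scatter pattern is what keeps memory proportional to the number of edges, and also what tends to make such kernels slower than dense matrix multiplies on current GPUs, consistent with the limitation noted in the passages above.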
Is choosing NAL as a baseline a good choice, knowing that it always results in a performance drop?
|
As there was no other available comparison, NAL seems to be the only choice for the baseline [31].
|
[
31
] |
[
{
"id": "1704.06904_all_0",
"text": " Not only a friendly face but also red color will draw our attention. The mixed nature of attention has been studied extensively in the previous literatures (34, 16, 23, 40). Attention not only serves to select a focused location but also enhances different representations of objects at that location. Previous works formulate attention drift as a sequential process to capture different attended aspects. However, as far as we know, no attention mechanism has been applied to feedforward network structure to achieve state-of-art results in image classification task. Recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_1",
"text": " Inspired by the attention mechanism and recent advances in the deep neural network, we propose Residual Attention Network, a convolutional network that adopts mixed attention mechanism in “very deep” structure. The Residual Attention Network is composed of multiple Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_2",
"text": " Apart from more discriminative feature representation brought by the attention mechanism, our model also exhibits following appealing properties: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_3",
"text": " (1) Increasing Attention Modules lead to consistent performance improvement, as different types of attention are captured extensively. Fig.1 shows an example of different types of attentions for a hot air balloon image. The sky attention mask diminishes background responses while the balloon instance mask highlighting the bottom part of the balloon. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_4",
"text": " (2) It is able to incorporate with state-of-the-art deep network structures in an end-to-end training fashion. Specifically, the depth of our network can be easily extended to hundreds of layers. Our Residual Attention Network outperforms state-of-the-art residual networks on CIFAR-10, CIFAR-100 and challenging ImageNet image classification dataset with significant reduction of computation (69% forward FLOPs). ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_5",
"text": " All of the aforementioned properties, which are challenging to achieve with previous approaches, are made possible with following contributions: ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_6",
"text": " (1) Stacked network structure: Our Residual Attention Network is constructed by stacking multiple Attention Modules. The stacked structure is the basic application of mixed attention mechanism. Thus, different types of attention are able to be captured in different Attention Modules. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_7",
"text": " (2) Attention Residual Learning: Stacking Attention Modules directly would lead to the obvious performance drop. Therefore, we propose attention residual learning mechanism to optimize very deep Residual Attention Network with hundreds of layers. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_8",
"text": " (3) Bottom-up top-down feedforward attention: Bottom-up top-down feedforward structure has been successfully applied to human pose estimation and image segmentation (22, 25, 1). We use such structure as part of Attention Module to add soft weights on features. This structure can mimic bottom-up fast feedforward process and top-down attention feedback in a single feedforward process which allows us to develop an end-to-end trainable network with top-down attention. The bottom-up top-down structure in our work differs from stacked hourglass network in its intention of guiding feature learning. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_9",
"text": " Evidence from human perception process shows the importance of attention mechanism, which uses top information to guide bottom-up feedforward process. Recently, tentative efforts have been made towards applying attention into deep neural network. Deep Boltzmann Machine (DBM) contains top-down attention by its reconstruction process in the training stage. Attention mechanism has also been widely applied to recurrent neural networks (RNN) and long short term memory (LSTM) to tackle sequential decision tasks (25, 29, 21, 18). Top information is gathered sequentially and decides where to attend for the next feature learning steps. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_10",
"text": " Residual learning is proposed to learn residual of identity mapping. This technique greatly increases the depth of feedforward neuron network. Similar to our work, (25, 29, 21, 18) use residual learning with attention mechanism to benefit from residual learning. Two information sources (query and query context) are captured using attention mechanism to assist each other in their work. While in our work, a single information source (image) is split into two different ones and combined repeatedly. And residual learning is applied to alleviate the problem brought by repeated splitting and combining. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_11",
"text": " In image classification, top-down attention mechanism has been applied using different methods: sequential process, region proposal and control gates. Sequential process (23, 12, 37, 7) models image classification as a sequential decision. Thus attention can be applied similarly with above. This formulation allows end-to-end optimization using RNN and LSTM and can capture different kinds of attention in a goal-driven way. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_12",
"text": " Region proposal (26, 4, 8, 38) has been successfully adopted in image detection task. In image classification, an additional region proposal stage is added before feedforward classification. The proposed regions contain top information and are used for feature learning in the second stage. Unlike image detection whose region proposals rely on large amount of supervision, e.g. the ground truth bounding boxes or detailed segmentation masks , unsupervised learning is usually used to generate region proposals for image classification. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_13",
"text": " Control gates have been extensively used in LSTM. In image classification with attention, control gates for neurones are updated with top information and have influence on the feedforward process during training (2, 30). However, a new process, reinforcement learning or optimization is involved during the training step. Highway Network extends control gate to solve gradient degradation problem for deep convolutional neural network. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_14",
"text": " However, recent advances of image classification focus on training feedforward convolutional neural networks using “very deep” structure (27, 33, 10). The feedforward convolutional network mimics the bottom-up paths of human cortex. Various approaches have been proposed to further improve the discriminative ability of deep convolutional neural network. VGG , Inception and residual learning are proposed to train very deep neural networks. Stochastic depth , Batch Normalization and Dropout exploit regularization for convergence and avoiding overfitting and degradation. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_15",
"text": " Soft attention developed in recent work (3, 17) can be trained end-to-end for convolutional network. Our Residual Attention Network incorporates the soft attention in fast developing feedforward network structure in an innovative way. Recent proposed spatial transformer module achieves state-of-the-art results on house number recognition task. A deep network module capturing top information is used to generate affine transformation. The affine transformation is applied to the input image to get attended region and then feed to another deep network module. The whole process can be trained end-to-end by using differentiable network layer which performs spatial transformation. Attention to scale uses soft attention as a scale selection mechanism and gets state-of-the-art results in image segmentation task. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_16",
"text": " The design of soft attention structure in our Residual Attention Network is inspired by recent development of localization oriented task, i.e. segmentation (22, 25, 1) and human pose estimation . These tasks motivate researchers to explore structure with fined-grained feature maps. The frameworks tend to cascade a bottom-up and a top-down structure. The bottom-up feedforward structure produces low resolution feature maps with strong semantic information. After that, a top-down network produces dense features to inference on each pixel. Skip connection is employed between bottom and top feature maps and achieved state-of-the-art result on image segmentation. The recent stacked hourglass network fuses information from multiple scales to predict human pose, and benefits from encoding both global and local information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_17",
"text": " Our Residual Attention Network is constructed by stacking multiple Attention Modules. Each Attention Module is divided into two branches: mask branch and trunk branch. The trunk branch performs feature processing and can be adapted to any state-of-the-art network structures. In this work, we use pre-activation Residual Unit , ResNeXt and Inception as our Residual Attention Networks basic unit to construct Attention Module. Given trunk branch output T(x)𝑇𝑥T(x) with input x𝑥x, the mask branch uses bottom-up top-down structure (22, 25, 1, 24) to learn same size mask M(x)𝑀𝑥M(x) that softly weight output features T(x)𝑇𝑥T(x). The bottom-up top-down structure mimics the fast feedforward and feedback attention process. The output mask is used as control gates for neurons of trunk branch similar to Highway Network . The output of Attention Module H𝐻H is: Hi,c(x)=Mi,c(x)∗Ti,c(x)subscript𝐻𝑖𝑐𝑥subscript𝑀𝑖𝑐𝑥subscript𝑇𝑖𝑐𝑥H_{i,c}(x)=M_{i,c}(x)*T_{i,c}(x) (1) where i ranges over all spatial positions and c∈{1,…,C}𝑐1…𝐶c\\in\\{1,...,C\\} is the index of the channel. The whole structure can be trained end-to-end. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_18",
"text": " In Attention Modules, the attention mask can not only serve as a feature selector during forward inference, but also as a gradient update filter during back propagation. In the soft mask branch, the gradient of mask for input feature is: ∂M(x,θ)T(x,ϕ)∂ϕ=M(x,θ)∂T(x,ϕ)∂ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ𝑀𝑥𝜃𝑇𝑥italic-ϕitalic-ϕ\\frac{\\partial M(x,\\theta)T(x,\\phi)}{\\partial\\phi}=M(x,\\theta)\\frac{\\partial T(x,\\phi)}{\\partial\\phi} (2) where the θ𝜃\\theta are the mask branch parameters and the ϕitalic-ϕ\\phi are the trunk branch parameters. This property makes Attention Modules robust to noisy labels. Mask branches can prevent wrong gradients (from noisy labels) to update trunk parameters. Experiment in Sec.4.1 shows the robustness of our Residual Attention Network against noisy labels. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_19",
"text": " Instead of stacking Attention Modules in our design, a simple approach would be using a single network branch to generate soft weight mask, similar to spatial transformer layer . However, these methods have several drawbacks on challenging datasets such as ImageNet. First, images with clutter background, complex scenes, and large appearance variations need to be modeled by different types of attentions. In this case, features from different layers need to be modeled by different attention masks. Using a single mask branch would require exponential number of channels to capture all combinations of different factors. Second, a single Attention Module only modify the features once. If the modification fails on some parts of the image, the following network modules do not get a second chance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_20",
"text": " The Residual Attention Network alleviates above problems. In Attention Module, each trunk branch has its own mask branch to learn attention that is specialized for its features. As shown in Fig.1, in hot air balloon images, blue color features from bottom layer have corresponding sky mask to eliminate background, while part features from top layer are refined by balloon instance mask. Besides, the incremental nature of stacked network structure can gradually refine attention for complex images. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_21",
"text": " However, naive stacking Attention Modules leads to the obvious performance drop. First, dot production with mask range from zero to one repeatedly will degrade the value of features in deep layers. Second, soft mask can potentially break good property of trunk branch, for example, the identical mapping of Residual Unit. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_22",
"text": " We propose attention residual learning to ease the above problems. Similar to ideas in residual learning, if soft mask unit can be constructed as identical mapping, the performances should be no worse than its counterpart without attention. Thus we modify output H𝐻H of Attention Module as Hi,c(x)=(1+Mi,c(x))∗Fi,c(x)subscript𝐻𝑖𝑐𝑥1subscript𝑀𝑖𝑐𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=(1+M_{i,c}(x))*F_{i,c}(x) (3) M(x)𝑀𝑥M(x) ranges from (0,1)01(0,1), with M(x)𝑀𝑥M(x) approximating 0, H(x)𝐻𝑥H(x) will approximate original features F(x)𝐹𝑥F(x). We call this method attention residual learning. Our stacked attention residual learning is different from residual learning. In the origin ResNet, residual learning is formulated as Hi,c(x)=x+Fi,c(x)subscript𝐻𝑖𝑐𝑥𝑥subscript𝐹𝑖𝑐𝑥H_{i,c}(x)=x+F_{i,c}(x), where Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) approximates the residual function. In our formulation, Fi,c(x)subscript𝐹𝑖𝑐𝑥F_{i,c}(x) indicates the features generated by deep convolutional networks. The key lies on our mask branches M(x)𝑀𝑥M(x). They work as feature selectors which enhance good features and suppress noises from trunk features. In addition, stacking Attention Modules backs up attention residual learning by its incremental nature. Attention residual learning can keep good properties of original features, but also gives them the ability to bypass soft mask branch and forward to top layers to weaken mask branch’s feature selection ability. Stacked Attention Modules can gradually refine the feature maps. As show in Fig.1, features become much clearer as depth going deeper. By using attention residual learning, increasing depth of the proposed Residual Attention Network can improve performance consistently. As shown in the experiment section, the depth of Residual Attention Network is increased up to 452 whose performance surpasses ResNet-1001 by a large margin on CIFAR dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_23",
"text": " Following previous attention mechanism idea in DBN , our mask branch contains fast feed-forward sweep and top-down feedback steps. The former operation quickly collects global information of the whole image, the latter operation combines global information with original feature maps. In convolutional neural network, the two steps unfold into bottom-up top-down fully convolutional structure. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_24",
"text": " From input, max pooling are performed several times to increase the receptive field rapidly after a small number of Residual Units. After reaching the lowest resolution, the global information is then expanded by a symmetrical top-down architecture to guide input features in each position. Linear interpolation up sample the output after some Residual Units. The number of bilinear interpolation is the same as max pooling to keep the output size the same as the input feature map. Then a sigmoid layer normalizes the output range to (0,1)01(0,1) after two consecutive 1×1111\\times 1 convolution layers. We also added skip connections between bottom-up and top-down parts to capture information from different scales. The full module is illustrated in Fig.2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_25",
"text": " The bottom-up top-down structure has been applied to image segmentation and human pose estimation. However, the difference between our structure and the previous one lies in its intention. Our mask branch aims at improving trunk branch features rather than solving a complex problem directly. Experiment in Sec.4.1 is conducted to verify above arguments. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_26",
"text": " In our work, attention provided by mask branch changes adaptably with trunk branch features. However, constrains to attention can still be added to mask branch by changing normalization step in activation function before soft mask output. We use three types of activation functions corresponding to mixed attention, channel attention and spatial attention. Mixed attention f1subscript𝑓1f_{1} without additional restriction use simple sigmoid for each channel and spatial position. Channel attention f2subscript𝑓2f_{2} performs L2𝐿2L2 normalization within all channels for each spatial position to remove spatial information. Spatial attention f3subscript𝑓3f_{3} performs normalization within feature map from each channel and then sigmoid to get soft mask related to spatial information only. f1(xi,c)=11+exp(−xi,c)subscript𝑓1subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐\\displaystyle f_{1}(x_{i,c})=\\frac{1}{1+exp(-x_{i,c})} (4) f2(xi,c)=xi,c‖xi‖subscript𝑓2subscript𝑥𝑖𝑐subscript𝑥𝑖𝑐normsubscript𝑥𝑖\\displaystyle f_{2}(x_{i,c})=\\frac{x_{i,c}}{\\|x_{i}\\|} (5) f3(xi,c)=11+exp(−(xi,c−meanc)/stdc)subscript𝑓3subscript𝑥𝑖𝑐11𝑒𝑥𝑝subscript𝑥𝑖𝑐subscriptmean𝑐subscriptstd𝑐\\displaystyle f_{3}(x_{i,c})=\\frac{1}{1+exp(-(x_{i,c}-\\text{mean}_{c})/\\text{std}_{c})} (6) Where i𝑖i ranges over all spatial positions and c𝑐c ranges over all channels. meancsubscriptmean𝑐\\text{mean}_{c} and stdcsubscriptstd𝑐\\text{std}_{c} denotes the mean value and standard deviation of feature map from c𝑐c-th channel. xisubscript𝑥𝑖x_{i} denotes the feature vector at the i𝑖ith spatial position. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_27",
"text": " The experiment results are shown in Table 1, the mixed attention has the best performance. Previous works normally focus on only one type of attention, for example scale attention or spatial attention , which puts additional constrain on soft mask by weight sharing or normalization. However, as supported by our experiments, making attention change adaptively with features without additional constraint leads to the best performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_28",
"text": " In this section, we evaluate the performance of proposed Residual Attention Network on a series of benchmark datasets including CIFAR-10, CIFAR-100 , and ImageNet . Our experiments contain two parts. In the first part, we analyze the effectiveness of each component in the Residual Attention Network including attention residual learning mechanism and different architectures of soft mask branch in the Attention Module. After that, we explore the noise resistance property. Given limited computation resources, we choose CIFAR-10 and CIFAR-100 dataset to conduct these experiments. Finally, we compare our network with state-of-the-art results in CIFAR dataset. In the second part, we replace the Residual Unit with Inception Module and ResNeXt to demonstrate our Residual Attention Network surpasses origin networks both in parameter efficiency and final performance. We also compare image classification performance with state-of-the-art ResNet and Inception on ImageNet dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_29",
"text": " The CIFAR-10 and CIFAR-100 datasets consist of 60,0006000060,000 32×32323232\\times 32 color images of 101010 and 100100100 classes respectively, with 50,0005000050,000 training images and 10,0001000010,000 test images. The broadly applied state-of-the-art network structure ResNet is used as baseline method. To conduct fair comparison, we keep most of the settings same as ResNet paper . The image is padded by 4 pixels on each side, filled with 00 value resulting in 40×40404040\\times 40 image. A 32×32323232\\times 32 crop is randomly sampled from an image or its horizontal flip, with the per-pixel RGB mean value subtracted. We adopt the same weight initialization method following previous study and train Residual Attention Network using nesterov SGD with a mini-batch size of 64. We use a weight decay of 0.00010.00010.0001 with a momentum of 0.90.90.9 and set the initial learning rate to 0.1. The learning rate is divided by 10 at 646464k and 969696k iterations. We terminate training at 160160160k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_30",
"text": " The overall network architecture and the hyper parameters setting are described in Fig.2. The network consists of 3 stages and similar to ResNet , equal number of Attention Modules are stacked in each stage. Additionally, we add two Residual Units at each stage. The number of weighted layers in trunk branch is 36m𝑚m+20 where m𝑚m is the number of Attention Module in one stage. We use original 32×32323232\\times 32 image for testing. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_31",
"text": " In this experiment, we evaluate the effectiveness of attention residual learning mechanism. Since the notion of attention residual learning (ARL) is new, no suitable previous methods are comparable therefore we use “naive attention learning” (NAL) as baseline. Specifically, “naive attention learning” uses Attention Module where features are directly dot product by soft mask without attention residual learning. We set the number of Attention Module in each stage m𝑚m = {1, 2, 3, 4}. For Attention Module, this leads to Attention-56 (named by trunk layer depth), Attention-92, Attention-128 and Attention-164 respectively. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_32",
"text": " We train these networks using different mechanisms and summarize the results in the Table 3. As shown in Table 3, the networks trained using attention residual learning technique consistently outperform the networks trained with baseline method which proves the effectiveness of our method. The performance increases with the number of Attention Module when applying attention residual learning. In contrast, the performance of networks trained with “naive attention learning” method suffers obvious degradation with increased number of Attention Module. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_33",
"text": " To understand the benefit of attention residual learning, we calculate mean absolute response value of output layers for each stage. We use Attention-164 to conduct this experiment. As shown in the Fig. 4, the response generated by the network trained using naive attention learning quickly vanishes in the stage 2 after four Attention Modules compared with network trained using attention residual learning. The Attention Module is designed to suppress noise while keeping useful information by applying dot product between feature and soft mask. However, repeated dot product will lead to severe degradation of both useful and useless information in this process. The attention residual learning can relieve signal attenuation using identical mapping, which enhances the feature contrast. Therefore, it gains benefits from noise reduction without significant information loss, which makes optimization much easier while improving the discrimination of represented features. In the rest of the experiments, we apply this technique to train our networks. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_34",
"text": " We conduct experiments to validate the effectiveness of encoder-decoder structure by comparing with local convolutions without any down sampling or up sampling. The local convolutions soft mask consists of three Residual Units using the same number of FLOPs. The Attention-56 is used to construct Attention-Encoder-Decoder-56 and Attention-Local-Conv-56 respectively. Results are shown in Table 4. The Attention-Encoder-Decoder-56 network achieves lower test error 5.52%percent5.525.52\\% compared with Attention-Local-Conv-56 network 6.48%percent6.486.48\\% with a considerable margin 0.94%percent0.940.94\\%. The result suggests that the soft attention optimization process will benefit from multi-scale information. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_35",
"text": " In this experiment, we show our Residual Attention Network enjoys noise resistant property on CIFAR-10 dataset following the setting of paper . The confusion matrix Q𝑄Q in our experiment is set as follows: Q=(r1−r9⋯1−r91−r9r⋯1−r9⋮⋮⋱⋮1−r91−r9⋯r)10×10𝑄subscriptmatrix𝑟1𝑟9⋯1𝑟91𝑟9𝑟⋯1𝑟9⋮⋮⋱⋮1𝑟91𝑟9⋯𝑟1010Q=\\left(\\begin{matrix}r&\\frac{1-r}{9}&\\cdots&\\frac{1-r}{9}\\\\ \\frac{1-r}{9}&r&\\cdots&\\frac{1-r}{9}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\frac{1-r}{9}&\\frac{1-r}{9}&\\cdots&r\\\\ \\end{matrix}\\right)_{10\\times 10} (7) ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_36",
"text": " where r𝑟r denotes the clean label ratio for the whole dataset. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_37",
"text": " We compare ResNet-164 network with Attention-92 network under different noise levels. The Table 5 shows the results. The test error of Attention-92 network is significantly lower than ResNet-164 network with the same noise level. In addition, when we increase the ratio of noise, test error of Attenion-92 declines slowly compared with ResNet-164 network. These results suggest that our Residual Attention Network can perform well even trained with high level noise data. When the label is noisy, the corresponding mask can prevent gradient caused by label error to update trunk branch parameters in the network. In this way, only the trunk branch is learning the wrong supervision information and soft mask branch masks the wrong label. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_38",
"text": " We compare our Residual Attention Network with state-of-the-art methods including ResNet and Wide ResNet on CIFAR-10 and CIFAR-100 datasets. The results are shown in Table 6. Our Attention-452 outperforms all the baseline methods on CIFAR-10 and CIFAR-100 datasets. Note that Attention-92 network achieves 4.99%percent4.994.99\\% test error on CIFAR-10 and 21.71%percent21.7121.71\\% test error on CIFAR-100 compared with 5.46%percent5.465.46\\% and 24.33%percent24.3324.33\\% test error on CIFAR-10 and CIFAR-100 for ResNet-164 network under similar parameter size. In addition, Attention-236 outperforms ResNet-1001 using only half of the parameters. It suggests that our Attention Module and attention residual learning scheme can effectively reduce the number of parameters in the network while improving the classification performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_39",
"text": " In this section, we conduct experiments using ImageNet LSVRC 201220122012 dataset , which contains 1,00010001,000 classes with 1.21.21.2 million training images, 50,0005000050,000 validation images, and 100,000100000100,000 test images. The evaluation is measured on the non-blacklist images of the ImageNet LSVRC 201220122012 validation set. We use Attention-56 and Attention-92 to conduct the experiments. The network structures and hyper parameters can be found in the Table 2. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_40",
"text": " Our implementation generally follows the practice in the previous study . We apply scale and aspect ratio augmentation to the original image. A 224×224224224224\\times 224 crop is randomly sampled from an augment image or its horizontal flip, with the per-pixel RGB scale to (0,1)01(0,1) and mean value subtracted and standard variance divided. We adopt standard color augmentation . The network is trained using SGD with a momentum of 0.90.90.9. We set initial learning rate to 0.1. The learning rate is divided by 10 at 200200200k, 400400400k, 500500500k iterations. We terminate training at 530530530k iterations. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_41",
"text": " In this experiment, we explore the efficiency of proposed Residual Attention Network. We compare Attention-56 with ResNet-152 . The ResNet-152 has 50 trunk Residual Units and 60.2×106absentsuperscript106\\times 10^{6} parameters compared with 18 trunk Residual Units and 31.9×106absentsuperscript106\\times 10^{6} parameters in Attention-56. We evaluate our model using single crop scheme on the ImageNet validation set and show results in Table 7. The Attention-56 network outperforms ResNet-152 by a large margin with a 0.4%percent0.40.4\\% reduction on top-1 error and a 0.26%percent0.260.26\\% reduction on top-5 error. More importantly, Attention-56 network achieves better performance with only 52% parameters and 56% FLOPs compared with ResNet-152, which suggests that the proposed attention mechanism can significantly improve network performance while reducing the model complexity. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_42",
"text": " In this experiment, we show Residual Attention Network can generalize well using different basic unit. We apply three popular basic units: Residual Unit, ResNeXt , and Inception to construct our Residual Attention Networks. To keep the number of parameters and FLOPs in the same scale, we simplify the Inception. Results are shown in Table 7. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_43",
"text": " When the basic unit is ResNeXt, the AttentionNeXt-56 network performance is the same as ResNeXt-101 while the parameters and FLOPs are significantly fewer than ResNeXt-101. For Inception, The AttentionIncepiton-56 outperforms Inception-ResNet-v1 by a margin with a 0.94% reduction on top-1 error and a 0.21% reduction on top-5 error. The results show that our method can be applied on different network structures. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_44",
"text": " We compare our Attention-92 evaluated using single crop on the ILSVRC 2012 validation set with state-of-the-art algorithms. Table 7 shows the results. Our Attention-92 outperforms ResNet-200 with a large margin. The reduction on top-1 error is 0.6%percent0.60.6\\%. Note that the ResNet-200 network contains 32%percent3232\\% more parameters than Attention-92. The computational complexity of Attention-92 shown in the Table 7 suggests that our network reduces nearly half training time comparing with ResNet-200 by adding attention mechanism and reducing trunk depth. Above results suggest that our model enjoys high efficiency and good performance. ",
"title": "Residual Attention Network for Image Classification"
},
{
"id": "1704.06904_all_45",
"text": " We propose a Residual Attention Network which stacks multiple Attention Modules. The benefits of our network are in two folds: it can capture mixed attention and is an extensible convolutional neural network. The first benefit lies in that different Attention Modules capture different types of attention to guide feature learning. Our experiments on the forms of activation function also validate this point: free form mixed attention will have better performance than constrained (including single) attention. The second benefit comes from encoding top-down attention mechanism into bottom-up top-down feedforward convolutional structure in each Attention Module. Thus, the basic Attention Modules can be combined to form larger network structure. Moreover, residual attention learning allows training very deep Residual Attention Network. The performance of our model surpasses state-of-the-art image classification methods, i.e. ResNet on CIFAR-10 (3.90% error), CIFAR-100 (20.67% error), and challenging ImageNet dataset (0.6% top-1 accuracy improvement) with only 46%percent4646\\% trunk depth and 69%percent6969\\% forward FLOPs (comparing with ResNet-200). In the future, we will exploit different applications of deep Residual Attention Network such as detection and segmentation to better explore mixed attention mechanism for specific tasks. ",
"title": "Residual Attention Network for Image Classification"
}
] |
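Note: the Attention Module passages above define the module output H_{i,c}(x) = M_{i,c}(x) * T_{i,c}(x) (Eq. 1) and its attention-residual-learning variant H_{i,c}(x) = (1 + M_{i,c}(x)) * F_{i,c}(x) (Eq. 3), with a bottom-up top-down mask branch ending in a sigmoid. The following is a minimal, simplified PyTorch sketch of one such module; it is not the authors' released model, and the layer counts, channel sizes, and single pool/upsample stage are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleResidualUnit(nn.Module):
    """Pre-activation residual unit (simplified stand-in for the paper's basic unit)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class AttentionModule(nn.Module):
    """One Attention Module with attention residual learning: H = (1 + M(x)) * T(x)."""
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(SimpleResidualUnit(channels),
                                   SimpleResidualUnit(channels))
        # Mask branch: one bottom-up (max-pool) / top-down (bilinear upsample) stage,
        # then two 1x1 convolutions and a sigmoid so that M(x) lies in (0, 1).
        self.mask_down = nn.Sequential(nn.MaxPool2d(2), SimpleResidualUnit(channels))
        self.mask_out = nn.Sequential(nn.Conv2d(channels, channels, 1),
                                      nn.Conv2d(channels, channels, 1),
                                      nn.Sigmoid())

    def forward(self, x):
        t = self.trunk(x)                                    # T(x)
        m = self.mask_down(x)                                # bottom-up
        m = F.interpolate(m, size=t.shape[-2:],              # top-down
                          mode="bilinear", align_corners=False)
        m = self.mask_out(m)                                 # M(x) in (0, 1)
        return (1.0 + m) * t                                 # Eq. (3); the naive variant is m * t

x = torch.randn(2, 64, 32, 32)
print(AttentionModule(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```

Because M(x) lies in (0, 1), the (1 + M) * T form reduces toward identity when M ≈ 0, which is the property the naive-attention-learning baseline (plain M * T) lacks and which the passages above identify as the cause of its performance drop.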
What are the two place recognition benchmarks used by the authors?
|
Pittsburgh (Pitts250k) and Tokyo 24/7 benchmarks [39].
|
[
39
] |
[
{
"id": "1511.07247_all_0",
"text": " Visual place recognition has received a significant amount of attention in the past years both in computer vision (66, 35, 10, 64, 65, 24, 9, 81, 4, 80, 63) and robotics communities (15, 16, 46, 44, 75) motivated by, e.g., applications in autonomous driving , augmented reality or geo-localizing archival imagery . ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_1",
"text": " The place recognition problem, however, still remains extremely challenging. How can we recognize the same street-corner in the entire city or on the scale of the entire country despite the fact it can be captured in different illuminations or change its appearance over time? The fundamental scientific question is what is the appropriate representation of a place that is rich enough to distinguish similarly looking places yet compact to represent entire cities or countries. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_2",
"text": " The place recognition problem has been traditionally cast as an instance retrieval task, where the query image location is estimated using the locations of the most visually similar images obtained by querying a large geotagged database (66, 10, 35, 81, 80, 4). Each database image is represented using local invariant features such as SIFT that are aggregated into a single vector representation for the entire image such as bag-of-visual-words (74, 53), VLAD (3, 29) or Fisher vector (52, 31). The resulting representation is then usually compressed and efficiently indexed (74, 28). The image database can be further augmented by 3D structure that enables recovery of accurate camera pose (40, 63, 64). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_3",
"text": " In the last few years convolutional neural networks (CNNs) (38, 39) have emerged as powerful image representations for various category-level recognition tasks such as object classification (37, 49, 73, 77), scene recognition or object detection . The basic principles of CNNs are known from 80’s (38, 39) and the recent successes are a combination of advances in GPU-based computation power together with large labelled image datasets . While it has been shown that the trained representations are, to some extent, transferable between recognition tasks (19, 21, 49, 69, 89), a direct application of CNN representations trained for object classification as black-box descriptor extractors has so far yielded limited improvements in performance on instance-level recognition tasks (6, 7, 22, 60, 62). In this work we investigate whether this gap in performance can be bridged by CNN representations developed and trained directly for place recognition. This requires addressing the following three main challenges. First, what is a good CNN architecture for place recognition? Second, how to gather sufficient amount of annotated data for the training? Third, how can we train the developed architecture in an end-to-end manner tailored for the place recognition task? To address these challenges we bring the following three innovations. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_4",
"text": " First, building on the lessons learnt from the current well performing hand-engineered object retrieval and place recognition pipelines (2, 3, 25, 80) we develop a convolutional neural network architecture for place recognition that aggregates mid-level (conv5) convolutional features extracted from the entire image into a compact single vector representation amenable to efficient indexing. To achieve this, we design a new trainable generalized VLAD layer, NetVLAD, inspired by the Vector of Locally Aggregated Descriptors (VLAD) representation that has shown excellent performance in image retrieval and place recognition. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. The resulting aggregated representation is then compressed using Principal Component Analysis (PCA) to obtain the final compact descriptor of the image. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_5",
"text": " Second, to train the architecture for place recognition, we gather a large dataset of multiple panoramic images depicting the same place from different viewpoints over time from the Google Street View Time Machine. Such data is available for vast areas of the world, but provides only weak form of supervision: we know the two panoramas are captured at approximately similar positions based on their (noisy) GPS but we don’t know which parts of the panoramas depict the same parts of the scene. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_6",
"text": " Third, we develop a learning procedure for place recognition that learns parameters of the architecture in an end-to-end manner tailored for the place recognition task from the weakly labelled Time Machine imagery. The resulting representation is robust to changes in viewpoint and lighting conditions, while simultaneously learns to focus on the relevant parts of the image such as the building façades and the skyline, while ignoring confusing elements such as cars and people that may occur at many different places. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_7",
"text": " We show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_8",
"text": " While there have been many improvements in designing better image retrieval (2, 3, 12, 11, 17, 26, 27, 29, 25, 32, 48, 51, 52, 53, 54, 71, 78, 79, 82) and place recognition (4, 10, 15, 16, 24, 9, 35, 46, 44, 64, 65, 63, 75, 81, 80) systems, not many works have performed learning for these tasks. All relevant learning-based approaches fall into one or both of the following two categories: (i) learning for an auxiliary task (e.g. some form of distinctiveness of local features (4, 15, 30, 35, 58, 59, 90)), and (ii) learning on top of shallow hand-engineered descriptors that cannot be fine-tuned for the target task (2, 24, 9, 35, 57). Both of these are in spirit opposite to the core idea behind deep learning that has provided a major boost in performance in various recognition tasks: end-to-end learning. We will indeed show in section 5.2 that training representations directly for the end-task, place recognition, is crucial for obtaining good performance. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_9",
"text": " Numerous works concentrate on learning better local descriptors or metrics to compare them (88, 55, 45, 48, 71, 56, 70, 50), but even though some of them show results on image retrieval, the descriptors are learnt on the task of matching local image patches, and not directly with image retrieval in mind. Some of them also make use of hand-engineered features to bootstrap the learning, i.e. to provide noisy training data (55, 45, 48, 71, 50). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_10",
"text": " Several works have investigated using CNN-based features for image retrieval. These include treating activations from certain layers directly as descriptors by concatenating them (8, 60), or by pooling (6, 22, 7). However, none of these works actually train the CNNs for the task at hand, but use CNNs as black-box descriptor extractors. One exception is the work of Babenko et al. in which the network is fine-tuned on an auxiliary task of classifying 700 landmarks. However, again the network is not trained directly on the target retrieval task. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_11",
"text": " Finally, recently and performed end-to-end learning for different but related tasks of ground-to-aerial matching and camera pose estimation . ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_12",
"text": " Building on the success of current place recognition systems (e.g. (66, 35, 10, 64, 65, 81, 4, 80, 63)), we cast place recognition as image retrieval. The query image with unknown location is used to visually search a large geotagged image database, and the locations of top ranked images are used as suggestions for the location of the query. This is generally done by designing a function f𝑓f which acts as the “image representation extractor”, such that given an image Iisubscript𝐼𝑖I_{i} it produces a fixed size vector f(Ii)𝑓subscript𝐼𝑖f(I_{i}). The function is used to extract the representations for the entire database {Ii}subscript𝐼𝑖\\{I_{i}\\}, which can be done offline, and to extract the query image representation f(q)𝑓𝑞f(q), done online. At test time, the visual search is performed by finding the nearest database image to the query, either exactly or through fast approximate nearest neighbour search, by sorting images based on the Euclidean distance d(q,Ii)𝑑𝑞subscript𝐼𝑖d(q,I_{i}) between f(q)𝑓𝑞f(q) and f(Ii)𝑓subscript𝐼𝑖f(I_{i}). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_13",
"text": " While previous works have mainly used hand-engineered image representations (e.g. f(I)𝑓𝐼f(I) corresponds to extracting SIFT descriptors , followed by pooling into a bag-of-words vector or a VLAD vector ), here we propose to learn the representation f(I)𝑓𝐼f(I) in an end-to-end manner, directly optimized for the task of place recognition. The representation is parametrized with a set of parameters θ𝜃\\theta and we emphasize this fact by referring to it as fθ(I)subscript𝑓𝜃𝐼f_{\\theta}(I). It follows that the Euclidean distance dθ(Ii,Ij)=∥fθ(Ii)−fθ(Ij)∥subscript𝑑𝜃subscript𝐼𝑖subscript𝐼𝑗delimited-∥∥subscript𝑓𝜃subscript𝐼𝑖subscript𝑓𝜃subscript𝐼𝑗d_{\\theta}(I_{i},I_{j})=\\lVert f_{\\theta}(I_{i})-f_{\\theta}(I_{j})\\rVert also depends on the same parameters. An alternative setup would be to learn the distance function itself, but here we choose to fix the distance function to be Euclidean distance, and to pose our problem as the search for the explicit feature map fθsubscript𝑓𝜃f_{\\theta} which works well under the Euclidean distance. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_14",
"text": " In section 3 we describe the proposed representation fθsubscript𝑓𝜃f_{\\theta} based on a new deep convolutional neural network architecture inspired by the compact aggregated image descriptors for instance retrieval. In section 4 we describe a method to learn the parameters θ𝜃\\theta of the network in an end-to-end manner using weakly supervised training data from the Google Street View Time Machine. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_15",
"text": " This section describes the proposed CNN architecture fθsubscript𝑓𝜃f_{\\theta}, guided by the best practices from the image retrieval community. Most image retrieval pipelines are based on (i) extracting local descriptors, which are then (ii) pooled in an orderless manner. The motivation behind this choice is that the procedure provides significant robustness to translation and partial occlusion. Robustness to lighting and viewpoint changes is provided by the descriptors themselves, and scale invariance is ensured through extracting descriptors at multiple scales. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_16",
"text": " In order to learn the representation end-to-end, we design a CNN architecture that mimics this standard retrieval pipeline in an unified and principled manner with differentiable modules. For step (i), we crop the CNN at the last convolutional layer and view it as a dense descriptor extractor. This has been observed to work well for instance retrieval (6, 7, 62) and texture recognition . Namely, the output of the last convolutional layer is a H×W×D𝐻𝑊𝐷H\\times W\\times D map which can be considered as a set of D-dimensional descriptors extracted at H×W𝐻𝑊H\\times W spatial locations. For step (ii) we design a new pooling layer inspired by the Vector of Locally Aggregated Descriptors (VLAD) that pools extracted descriptors into a fixed image representation and its parameters are learnable via back-propagation. We call this new pooling layer “NetVLAD” layer and describe it in the next section. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_17",
"text": " Vector of Locally Aggregated Descriptors (VLAD) is a popular descriptor pooling method for both instance level retrieval and image classification . It captures information about the statistics of local descriptors aggregated over the image. Whereas bag-of-visual-words (14, 74) aggregation keeps counts of visual words, VLAD stores the sum of residuals (difference vector between the descriptor and its corresponding cluster centre) for each visual word. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_18",
"text": " Formally, given N𝑁N D-dimensional local image descriptors {𝐱i}subscript𝐱𝑖\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}\\} as input, and K𝐾K cluster centres (“visual words”) {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\} as VLAD parameters, the output VLAD image representation V𝑉V is K×D𝐾𝐷K\\times D-dimensional. For convenience we will write V𝑉V as a K×D𝐾𝐷K\\times D matrix, but this matrix is converted into a vector and, after normalization, used as the image representation. The (j,k)𝑗𝑘(j,k) element of V𝑉V is computed as follows: V(j,k)=∑i=1Nak(𝐱i)(xi(j)−ck(j)),𝑉𝑗𝑘superscriptsubscript𝑖1𝑁subscript𝑎𝑘subscript𝐱𝑖subscript𝑥𝑖𝑗subscript𝑐𝑘𝑗V(j,k)=\\sum_{i=1}^{N}a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})\\left(x_{i}(j)-c_{k}(j)\\right), (1) where xi(j)subscript𝑥𝑖𝑗x_{i}(j) and ck(j)subscript𝑐𝑘𝑗c_{k}(j) are the j𝑗j-th dimensions of the i𝑖i-th descriptor and k𝑘k-th cluster centre, respectively. ak(𝐱i)subscript𝑎𝑘subscript𝐱𝑖a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) denotes the membership of the descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to k𝑘k-th visual word, i.e. it is 111 if cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k} is the closest cluster to descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} and 00 otherwise. Intuitively, each D-dimensional column k𝑘k of V𝑉V records the sum of residuals (𝐱i−𝐜k)subscript𝐱𝑖subscript𝐜𝑘(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}) of descriptors which are assigned to cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}. The matrix V𝑉V is then L2-normalized column-wise (intra-normalization ), converted into a vector, and finally L2-normalized in its entirety . ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_19",
"text": " In order to profit from years of wisdom produced in image retrieval, we propose to mimic VLAD in a CNN framework and design a trainable generalized VLAD layer, NetVLAD. The result is a powerful image representation trainable end-to-end on the target task (in our case place recognition). To construct a layer amenable to training via backpropagation, it is required that the layer’s operation is differentiable with respect to all its parameters and the input. Hence, the key challenge is to make the VLAD pooling differentiable, which we describe next. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_20",
"text": " The source of discontinuities in VLAD is the hard assignment ak(𝐱i)subscript𝑎𝑘subscript𝐱𝑖a_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) of descriptors 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to clusters centres 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}. To make this operation differentiable, we replace it with soft assignment of descriptors to multiple clusters a¯k(𝐱i)=e−α∥𝐱i−𝐜k∥2∑k′e−α∥𝐱i−𝐜k′∥2,subscript¯𝑎𝑘subscript𝐱𝑖superscript𝑒𝛼superscriptdelimited-∥∥subscript𝐱𝑖subscript𝐜𝑘2subscriptsuperscript𝑘′superscript𝑒𝛼superscriptdelimited-∥∥subscript𝐱𝑖subscript𝐜superscript𝑘′2\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})=\\frac{e^{-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\rVert^{2}}}{\\sum_{k^{\\prime}}{e^{-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k^{\\prime}}\\rVert^{2}}}}, (2) which assigns the weight of descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to cluster 𝐜ksubscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k} proportional to their proximity, but relative to proximities to other cluster centres. a¯k(𝐱i)subscript¯𝑎𝑘subscript𝐱𝑖\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) ranges between 0 and 1, with the highest weight assigned to the closest cluster centre. α𝛼\\alpha is a parameter (positive constant) that controls the decay of the response with the magnitude of the distance. Note that for α→+∞→𝛼\\alpha\\to+\\infty this setup replicates the original VLAD exactly as a¯k(𝐱i)subscript¯𝑎𝑘subscript𝐱𝑖\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) for the closest cluster would be 111 and 00 otherwise. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_21",
"text": " By expanding the squares in (2), it is easy to see that the term e−α∥𝐱i∥2superscript𝑒𝛼superscriptdelimited-∥∥subscript𝐱𝑖2e^{-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}\\rVert^{2}} cancels between the numerator and the denominator resulting in a soft-assignment of the following form a¯k(𝐱i)=e𝐰kT𝐱i+bk∑k′e𝐰k′T𝐱i+bk′,subscript¯𝑎𝑘subscript𝐱𝑖superscript𝑒superscriptsubscript𝐰𝑘𝑇subscript𝐱𝑖subscript𝑏𝑘subscriptsuperscript𝑘′superscript𝑒superscriptsubscript𝐰superscript𝑘′𝑇subscript𝐱𝑖subscript𝑏superscript𝑘′\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})=\\frac{e^{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}^{T}\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}+b_{k}}}{\\sum_{k^{\\prime}}{e^{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k^{\\prime}}^{T}\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}+b_{k^{\\prime}}}}}, (3) where vector 𝐰k=2α𝐜ksubscript𝐰𝑘2𝛼subscript𝐜𝑘\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}=2\\alpha\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k} and scalar bk=−α∥𝐜k∥2subscript𝑏𝑘𝛼superscriptdelimited-∥∥subscript𝐜𝑘2b_{k}=-\\alpha\\lVert\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\rVert^{2}. 
The final form of the NetVLAD layer is obtained by plugging the soft-assignment (3) into the VLAD descriptor (1) resulting in V(j,k)=∑i=1Ne𝐰kT𝐱i+bk∑k′e𝐰k′T𝐱i+bk′(xi(j)−ck(j)),𝑉𝑗𝑘superscriptsubscript𝑖1𝑁superscript𝑒superscriptsubscript𝐰𝑘𝑇subscript𝐱𝑖subscript𝑏𝑘subscriptsuperscript𝑘′superscript𝑒superscriptsubscript𝐰superscript𝑘′𝑇subscript𝐱𝑖subscript𝑏superscript𝑘′subscript𝑥𝑖𝑗subscript𝑐𝑘𝑗V(j,k)=\\sum_{i=1}^{N}\\frac{e^{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}^{T}\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}+b_{k}}}{\\sum_{k^{\\prime}}{e^{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k^{\\prime}}^{T}\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}+b_{k^{\\prime}}}}}\\left(x_{i}(j)-c_{k}(j)\\right), (4) where {𝐰k}subscript𝐰𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}\\}, {bk}subscript𝑏𝑘\\{b_{k}\\} and {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\} are sets of trainable parameters for each cluster k𝑘k. Similarly to the original VLAD descriptor, the NetVLAD layer aggregates the first order statistics of residuals (𝐱i−𝐜k)subscript𝐱𝑖subscript𝐜𝑘(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}-\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}) in different parts of the descriptor space weighted by the soft-assignment a¯k(𝐱i)subscript¯𝑎𝑘subscript𝐱𝑖\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) of descriptor 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} to cluster k𝑘k. 
Note however, that the NetVLAD layer has three independent sets of parameters {𝐰k}subscript𝐰𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}\\}, {bk}subscript𝑏𝑘\\{b_{k}\\} and {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\}, compared to just {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\} of the original VLAD. This enables greater flexibility than the original VLAD, as explained in figure 3. Decoupling {𝐰k,bk}subscript𝐰𝑘subscript𝑏𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k},b_{k}\\} from {𝐜k}subscript𝐜𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf c$}}{\\mbox{\\boldmath$\\textstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptstyle\\bf c$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf c$}}_{k}\\} has been proposed in as a means to adapt the VLAD to a new dataset. All parameters of NetVLAD are learnt for the specific task in an end-to-end manner. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_22",
"text": " As illustrated in figure 2 the NetVLAD layer can be visualized as a meta-layer that is further decomposed into basic CNN layers connected up in a directed acyclic graph. First, note that the first term in eq. (4) is a soft-max function σk(𝐳)=exp(zk)∑k′exp(zk′)subscript𝜎𝑘𝐳subscript𝑧𝑘subscriptsuperscript𝑘′subscript𝑧superscript𝑘′\\sigma_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf z$}}{\\mbox{\\boldmath$\\textstyle\\bf z$}}{\\mbox{\\boldmath$\\scriptstyle\\bf z$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf z$}})=\\frac{\\exp(z_{k})}{\\sum_{k^{\\prime}}{\\exp(z_{k^{\\prime}})}}. Therefore, the soft-assignment of the input array of descriptors 𝐱isubscript𝐱𝑖\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i} into K𝐾K clusters can be seen as a two step process: (i) a convolution with a set of K𝐾K filters {𝐰k}subscript𝐰𝑘\\{\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}\\} that have spatial support 1×1111\\times 1 and biases {bk}subscript𝑏𝑘\\{b_{k}\\}, producing the output sk(𝐱i)=𝐰kT𝐱i+bksubscript𝑠𝑘subscript𝐱𝑖superscriptsubscript𝐰𝑘𝑇subscript𝐱𝑖subscript𝑏𝑘s_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i})=\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf w$}}{\\mbox{\\boldmath$\\textstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptstyle\\bf w$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf w$}}_{k}^{T}\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}+b_{k}; (ii) the convolution output is then passed through the soft-max function σksubscript𝜎𝑘\\sigma_{k} to obtain the final soft-assignment a¯k(𝐱i)subscript¯𝑎𝑘subscript𝐱𝑖\\bar{a}_{k}(\\mathchoice{\\mbox{\\boldmath$\\displaystyle\\bf x$}}{\\mbox{\\boldmath$\\textstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptstyle\\bf x$}}{\\mbox{\\boldmath$\\scriptscriptstyle\\bf x$}}_{i}) that weights the different terms in the aggregation layer that implements eq. (4). The output after normalization is a (K×D)×1𝐾𝐷1(K\\times D)\\times 1 descriptor. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
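The two NetVLAD passages above describe the soft-assignment (eq. 2-3) and the weighted aggregation of residuals (eq. 4) followed by normalisation. As a minimal illustration, the numpy sketch below implements those equations directly; the toy shapes (N, D, K), the value of alpha, and the intra-normalisation before the final L2 step are assumptions made for this example, not details taken verbatim from the excerpt, and this is not the authors' released implementation.

```python
import numpy as np

def netvlad_numpy(X, w, b, c):
    """Minimal sketch of the NetVLAD aggregation of eq. (4).

    X: (N, D) local descriptors x_i.
    w: (K, D) per-cluster weights w_k (equals 2*alpha*c_k when tied to VLAD).
    b: (K,)   per-cluster biases  b_k (equals -alpha*||c_k||^2 when tied).
    c: (K, D) cluster centres c_k.
    In NetVLAD these three sets are decoupled and trained end-to-end.
    """
    # soft-assignment (eq. 3): softmax over clusters of w_k^T x_i + b_k
    s = X @ w.T + b                       # (N, K)
    s = s - s.max(axis=1, keepdims=True)  # numerical stability
    a = np.exp(s)
    a = a / a.sum(axis=1, keepdims=True)  # rows sum to 1

    # aggregate residuals (x_i - c_k), weighted by the soft-assignment (eq. 4)
    residuals = X[:, None, :] - c[None, :, :]    # (N, K, D)
    V = (a[:, :, None] * residuals).sum(axis=0)  # (K, D)

    # per-cluster then global L2 normalisation (assumed normalisation scheme)
    V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
    V = V.flatten()
    return V / (np.linalg.norm(V) + 1e-12)

# toy usage: N=100 descriptors of dimension D=512, K=64 clusters
rng = np.random.default_rng(0)
N, D, K = 100, 512, 64
X, c = rng.normal(size=(N, D)), rng.normal(size=(K, D))
alpha = 1.0
v = netvlad_numpy(X, 2 * alpha * c, -alpha * (c ** 2).sum(1), c)
print(v.shape)  # (32768,) = K*D
```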
{
"id": "1511.07247_all_23",
"text": " Other works have proposed to pool CNN activations using VLAD or Fisher Vectors (FV) (22, 13), but do not learn the VLAD/FV parameters nor the input descriptors. The most related method to ours is the one of Sydorov et al. , which proposes to learn FV parameters jointly with an SVM for the end classification objective. However, in their work it is not possible to learn the input descriptors as they are hand-engineered (SIFT), while our VLAD layer is easily pluggable into any CNN architecture as it is amenable to backpropagation. “Fisher Networks” stack Fisher Vector layers on top of each other, but the system is not trained end-to-end, only hand-crafted features are used, and the layers are trained greedily in a bottom-up fashion. Finally, our architecture is also related to bilinear networks , recently developed for a different task of fine-grained category-level recognition. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_24",
"text": " We also experiment with Max-pooling of the D-dimensional features across the H×W𝐻𝑊H\\times W spatial locations, thus producing a D-dimensional output vector, which is then L2-normalized. Both of these operations can be implemented using standard layers in public CNN packages. This setup mirrors the method of (6, 62), but a crucial difference is that we will learn the representation (section 4) while (60, 6, 62) only use pretrained networks. Results will show (section 5.2) that simply using CNNs off-the-shelf results in poor performance, and that training for the end-task is crucial. Additionally, VLAD will prove itself to be superior to the Max-pooling baseline. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_25",
"text": " In the previous section we have designed a new CNN architecture as an image representation for place recognition. Here we describe how to learn its parameters in an end-to-end manner for the place recognition task. The two main challenges are: (i) how to gather enough annotated training data and (ii) what is the appropriate loss for the place recognition task. To address theses issues, we will first show that it is possible to obtain large amounts of weakly labelled imagery depicting the same places over time from the Google Street View Time Machine. Second, we will design a new weakly supervised triplet ranking loss that can deal with the incomplete and noisy position annotations of the Street View Time Machine imagery. The details are below. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_26",
"text": " We propose to exploit a new source of data – Google Street View Time Machine – which provides multiple street-level panoramic images taken at different times at close-by spatial locations on the map. As will be seen in section 5.2, this novel data source is precious for learning an image representation for place recognition. As shown in figure 4, the same locations are depicted at different times and seasons, providing the learning algorithm with crucial information it can use to discover which features are useful or distracting, and what changes should the image representation be invariant to, in order to achieve good place recognition performance. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_27",
"text": " The downside of the Time Machine imagery is that it provides only incomplete and noisy supervision. Each Time Machine panorama comes with a GPS tag giving only its approximate location on the map, which can be used to identify close-by panoramas but does not provide correspondences between parts of the depicted scenes. In detail, as the test queries are perspective images from camera phones, each panorama is represented by a set of perspective images sampled evenly in different orientations and two elevation angles (35, 10, 24, 81). Each perspective image is labelled with the GPS position of the source panorama. As a result, two geographically close perspective images do not necessarily depict the same objects since they could be facing different directions or occlusions could take place (e.g. the two images are around a corner from each other), etc. Therefore, for a given training query q𝑞q, the GPS information can only be used as a source of (i) potential positives {piq}subscriptsuperscript𝑝𝑞𝑖\\{p^{q}_{i}\\}, i.e. images that are geographically close to the query, and (ii) definite negatives {njq}subscriptsuperscript𝑛𝑞𝑗\\{n^{q}_{j}\\}, i.e. images that are geographically far from the query.111Note that even faraway images can depict the same object. For example, the Eiffel Tower can be visible from two faraway locations in Paris. But, for the purpose of localization we consider in this paper such image pairs as negative examples because they are not taken from the same place. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_28",
"text": " We wish to learn a representation fθsubscript𝑓𝜃f_{\\theta} that will optimize place recognition performance. That is, for a given test query image q𝑞q, the goal is to rank a database image Ii∗subscript𝐼𝑖I_{i*} from a close-by location higher than all other far away images Iisubscript𝐼𝑖I_{i} in the database. In other words, we wish the Euclidean distance dθ(q,I)subscript𝑑𝜃𝑞𝐼d_{\\theta}(q,I) between the query q𝑞q and a close-by image Ii∗subscript𝐼𝑖I_{i*} to be smaller than the distance to far away images in the database Iisubscript𝐼𝑖I_{i}, i.e. dθ(q,Ii∗)<dθ(q,Ii)subscript𝑑𝜃𝑞subscript𝐼𝑖subscript𝑑𝜃𝑞subscript𝐼𝑖d_{\\theta}(q,I_{i*})<d_{\\theta}(q,I_{i}), for all images Iisubscript𝐼𝑖I_{i} further than a certain distance from the query on the map. Next we show how this requirement can be translated into a ranking loss between training triplets {q,Ii∗,Ii}𝑞subscript𝐼𝑖subscript𝐼𝑖\\{q,I_{i*},I_{i}\\}. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_29",
"text": " From the Google Street View Time Machine data, we obtain a training dataset of tuples (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}), where for each training query image q𝑞q we have a set of potential positives {piq}subscriptsuperscript𝑝𝑞𝑖\\{p^{q}_{i}\\} and the set of definite negatives {njq}subscriptsuperscript𝑛𝑞𝑗\\{n^{q}_{j}\\}. The set of potential positives contains at least one positive image that should match the query, but we do not know which one. To address this ambiguity, we propose to identify the best matching potential positive image pi∗qsubscriptsuperscript𝑝𝑞𝑖p^{q}_{i*} pi∗q=argminpiqdθ(q,piq)subscriptsuperscript𝑝𝑞𝑖subscriptsubscriptsuperscript𝑝𝑞𝑖subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖p^{q}_{i*}=\\operatorname*{\\arg\\!\\min}_{p^{q}_{i}}d_{\\theta}(q,p^{q}_{i}) (5) for each training tuple (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}). The goal then becomes to learn an image representation fθsubscript𝑓𝜃f_{\\theta} so that distance dθ(q,pi∗q)subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖d_{\\theta}(q,p^{q}_{i*}) between the training query q𝑞q and the best matching potential positive pi∗qsubscriptsuperscript𝑝𝑞𝑖p^{q}_{i*} is smaller than the distance dθ(q,njq)subscript𝑑𝜃𝑞subscriptsuperscript𝑛𝑞𝑗d_{\\theta}(q,n^{q}_{j}) between the query q𝑞q and all negative images qjsubscript𝑞𝑗q_{j}: dθ(q,pi∗q)<dθ(q,njq),∀j.subscript𝑑𝜃𝑞subscriptsuperscript𝑝𝑞𝑖subscript𝑑𝜃𝑞subscriptsuperscript𝑛𝑞𝑗for-all𝑗d_{\\theta}(q,p^{q}_{i*})<d_{\\theta}(q,n^{q}_{j}),~{}~{}~{}\\forall j. (6) Based on this intuition we define a weakly supervised ranking loss Lθsubscript𝐿𝜃L_{\\theta} for a training tuple (q,{piq},{njq})𝑞subscriptsuperscript𝑝𝑞𝑖subscriptsuperscript𝑛𝑞𝑗(q,\\{p^{q}_{i}\\},\\{n^{q}_{j}\\}) as Lθ=∑jl(minidθ2(q,piq)+m−dθ2(q,njq)),subscript𝐿𝜃subscript𝑗𝑙subscript𝑖subscriptsuperscript𝑑2𝜃𝑞subscriptsuperscript𝑝𝑞𝑖𝑚subscriptsuperscript𝑑2𝜃𝑞subscriptsuperscript𝑛𝑞𝑗L_{\\theta}=\\sum_{j}l\\left(\\min_{i}d^{2}_{\\theta}(q,p^{q}_{i})+m-d^{2}_{\\theta}(q,n^{q}_{j})\\right), (7) where l𝑙l is the hinge loss l(x)=max(x,0)𝑙𝑥𝑥0l(x)=\\max(x,0), and m𝑚m is a constant parameter giving the margin. Note that equation (7) is a sum of individual losses for negative images njqsubscriptsuperscript𝑛𝑞𝑗n^{q}_{j}. For each negative, the loss l𝑙l is zero if the distance between the query and the negative is greater by a margin than the distance between the query and the best matching positive. Conversely, if the margin between the distance to the negative image and to the best matching positive is violated, the loss is proportional to the amount of violation. Note that the above loss is related to the commonly used triplet loss (68, 87, 86, 67), but adapted to our weakly supervised scenario using a formulation (given by equation (5)) similar to multiple instance learning (20, 36, 85). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
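The passage above defines the weakly supervised ranking loss of eq. (5)-(7). A minimal sketch follows, assuming the Euclidean distances from the query to its potential positives and definite negatives have already been computed by the embedding network; the margin value in the toy call is illustrative, not the value used in the paper.

```python
import numpy as np

def weak_triplet_loss(d_q_pos, d_q_neg, margin=0.1):
    """Sketch of the weakly supervised ranking loss of eq. (7).

    d_q_pos: (P,) distances d_theta(q, p_i^q) to the potential positives.
    d_q_neg: (M,) distances d_theta(q, n_j^q) to the definite negatives.
    margin : the constant m in eq. (7) (illustrative value here).
    """
    best_pos = np.min(d_q_pos) ** 2            # eq. (5): best matching positive
    hinge = best_pos + margin - d_q_neg ** 2   # one term per negative j
    return np.maximum(hinge, 0.0).sum()        # l(x) = max(x, 0), summed over j

# toy tuple: 3 potential positives, 5 definite negatives
print(weak_triplet_loss(np.array([0.8, 0.5, 0.9]),
                        np.array([0.6, 1.2, 0.4, 1.5, 0.7])))
```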
{
"id": "1511.07247_all_30",
"text": " We train the parameters θ𝜃\\theta of the representation fθsubscript𝑓𝜃f_{\\theta} using Stochastic Gradient Descent (SGD) on a large set of training tuples from Time Machine data. Details of the training procedure are given in appendix A. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_31",
"text": " In this section we describe the used datasets and evaluation methodology (section 5.1), and give quantitative (section 5.2) and qualitative (section 5.3) results to validate our approach. Finally, we also test the method on the standard image retrieval benchmarks (section 5.4). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_32",
"text": " We report results on two publicly available datasets. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_33",
"text": " contains 250k database images downloaded from Google Street View and 24k test queries generated from Street View but taken at different times, years apart. We divide this dataset into three roughly equal parts for training, validation and testing, each containing around 83k database images and 8k queries, where the division was done geographically to ensure the sets contain independent images. To facilitate faster training, for some experiments, a smaller subset (Pitts30k) is used, containing 10k database images in each of the train/val(idation)/test sets, which are also geographically disjoint. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_34",
"text": " contains 76k database images and 315 query images taken using mobile phone cameras. This is an extremely challenging dataset where the queries were taken at daytime, sunset and night, while the database images were only taken at daytime as they originate from Google Street View as described above. To form the train/val sets we collected additional Google Street View panoramas of Tokyo using the Time Machine feature, and name this set TokyoTM; Tokyo 24/7 (=test) and TokyoTM train/val are all geographically disjoint. Further details on the splits are given in appendix B. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_35",
"text": " We follow the standard place recognition evaluation procedure (4, 24, 65, 81, 80). The query image is deemed correctly localized if at least one of the top N𝑁N retrieved database images is within d=25𝑑25d=25 meters from the ground truth position of the query. The percentage of correctly recognized queries (Recall) is then plotted for different values of N𝑁N. For Tokyo 24/7 we follow and perform spatial non-maximal suppression on ranked database images before evaluation. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
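The evaluation criterion above (a query counts as correct if any of its top-N retrieved database images lies within d = 25 m of the ground truth) can be written compactly. The sketch below assumes the query and database positions have already been projected into a local metric coordinate frame so that Euclidean distance is in metres; that projection step is an assumption of this example and is not described in the excerpt.

```python
import numpy as np

def recall_at_n(ranked_db_positions, query_position, n_values=(1, 5, 10), d=25.0):
    """Whether a single query is correctly localized at each cutoff N.

    ranked_db_positions: (M, 2) metric positions of the retrieved images,
    already sorted by descending retrieval score.
    query_position: (2,) ground-truth metric position of the query.
    """
    dists = np.linalg.norm(ranked_db_positions - query_position, axis=1)
    return {n: bool((dists[:n] <= d).any()) for n in n_values}

# toy query: the 3rd ranked result is the first one within 25 m
ranked = np.array([[100.0, 0.0], [60.0, 40.0], [10.0, 5.0]])
print(recall_at_n(ranked, np.array([0.0, 0.0])))  # {1: False, 5: True, 10: True}
```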
{
"id": "1511.07247_all_36",
"text": " We use two base architectures which are extended with Max pooling (fmaxsubscript𝑓𝑚𝑎𝑥f_{max}) and our NetVLAD (fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD}) layers: AlexNet and VGG-16 ; both are cropped at the last convolutional layer (conv5), before ReLU. For NetVLAD we use K=64𝐾64K=64 resulting in 16k and 32k-D image representations for the two base architectures, respectively. The initialization procedure, parameters used for training, procedure for sampling training tuples and other implementation details are given in appendix A. All training and evaluation code, as well as our trained networks, are online at . ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_37",
"text": " To assess benefits of our approach we compare our representations trained for place recognition against “off-the-shelf” networks pretrained on other tasks. Namely, given a base network cropped at conv5, the baselines either use Max pooling (fmaxsubscript𝑓𝑚𝑎𝑥f_{max}), or aggregate the descriptors into VLAD (fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD}), but perform no further task-specific training. The three base networks are: AlexNet , VGG-16 , both are pretrained for ImageNet classification , and Places205 , reusing the same architecture as AlexNet but pretrained for scene classification . Pretrained networks have been recently used as off-the-shelf dense descriptor extractors for instance retrieval (6, 7, 22, 60, 62) and the untrained fmaxsubscript𝑓𝑚𝑎𝑥f_{max} network corresponds to the method of (6, 62). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_38",
"text": " Furthermore we compare our CNN representations trained for place recognition against the state-of-the-art local feature based compact descriptor, which consists of VLAD pooling with intra-normalization on top of densely extracted RootSIFTs (43, 2). The descriptor is optionally reduced to 4096 dimensions using PCA (learnt on the training set) combined with whitening and L2-normalization ; this setup together with view synthesis yields the state-of-the-art results on the challenging Tokyo 24/7 dataset (c.f. ). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_39",
"text": " In the following we discuss figure 5, which compares place recognition performance of our method to the baselines outlined above on the Pittsburgh and Tokyo 24/7 benchmarks. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_40",
"text": " We follow the standard state-of-the-art procedure to perform dimensionality reduction of VLAD, as described earlier, i.e. the reduction into 4096-D is performed using PCA with whitening followed by L2-normalization (25, 80). Figure 5 shows that the lower dimensional fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} (-∗∗\\ast-) performs similarly to the full size vector (-o-). ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_41",
"text": " Representations trained on the end-task of place recognition consistently outperform by a large margin off-the-shelf CNNs on both benchmarks. For example, on the Pitts250k-test our trained AlexNet with (trained) NetVLAD aggregation layer achieves recall@1 of 81.0% compared to only 55.0% obtained by off-the-shelf AlexNet with standard VLAD aggregation, i.e. a relative improvement in recall of 47%. Similar improvements can be observed on all three datasets. This confirms two important premises of this work: (i) our approach can learn rich yet compact image representations for place recognition, and (ii) the popular idea of using pretrained networks “off-the-shelf” (60, 6, 22, 7, 62) is sub-optimal as the networks trained for object or scene classification are not necessary suitable for the end-task of place recognition. We believe this could be attributed to the fact that “off-the-shelf ” conv5 activations are not trained to be comparable using Euclidean distance. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_42",
"text": " Figure 5 also shows that our trained fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} representation with whitening based on VGG-16 ( magenta -∗∗\\ast-) convincingly outperforms RootSIFT+VLAD+whitening, as well as the method of Torii et al. , and therefore sets the state-of-the-art for compact descriptors on all benchmarks. Note that these are strong baselines that outperform most off-the-shelf CNN descriptors on the place recognition task. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_43",
"text": " By comparing fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} (-o-) methods with their corresponding fmaxsubscript𝑓𝑚𝑎𝑥f_{max} (-x-) counterparts it is clear that VLAD pooling is much better than Max pooling for both off-the-shelf and trained representations. NetVLAD performance decreases gracefully with dimensionality: 128-D NetVLAD performs similarly to 512-D Max (42.9% vs 38.4% recall@1 on Tokyo 24/7), resulting in four times more compact representation for the same performance. Furthermore, NetVLAD+whitening outperforms Max pooling convincingly when reduced to the same dimensionality (60%). See appendix C for more details. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_44",
"text": " In Table 1 we study the benefits of training different layers for the end-task of place recognition. The largest improvements are thanks to training the NetVLAD layer, but training other layers results in further improvements, with some overfitting occurring below conv2. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_45",
"text": " Here we examine whether the network can be trained without the Time Machine (TM) data. In detail, we have modified the training query set for Pitts30k-train to be sampled from the same set as the training database images, i.e. the tuples of query and database images used in training were captured at the same time. Recall@1 with fmaxsubscript𝑓𝑚𝑎𝑥f_{max} on Pitts30k-val for the off-the-shelf AlexNet is 33.5%, and training without TM improves this to 38.7%. However, training with TM obtains 68.5% showing that Time Machine data is crucial for good place recognition accuracy as without it the network does not generalize well. The network learns, for example, that recognizing cars is important for place recognition, as the same parked cars appear in all images of a place. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_46",
"text": " To visualize what is being learnt by our place recognition architectures, we adapt the method of Zeiler and Fergus for examining occlusion sensitivity of classification networks. It can be seen in figure 6 that off-the-shelf AlexNet (pretrained on ImageNet) focuses very much on categories it has been trained to recognize (e.g. cars) and certain shapes, such as circular blobs useful for distinguishing 12 different ball types in the ImageNet categories. The Place205 network is fairly unresponsive to all occlusions as it does not aim to recognize specific places but scene-level categories, so even if an important part of the image is occluded, such as a characteristic part of a building façade, it still provides a similar output feature which corresponds to an uninformative “a building façade” image descriptor. In contrast to these two, our network trained for specific place recognition automatically learns to ignore confusing features, such as cars and people, which are not discriminative for specific locations, and instead focuses on describing building façades and skylines. More qualitative examples are provided in appendix C. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_47",
"text": " We use our best performing network (VGG-16, fVLADsubscript𝑓𝑉𝐿𝐴𝐷f_{VLAD} with whitening down to 256-D) trained completely on Pittsburgh, to extract image representations for standard object and image retrieval benchmarks. Our representation sets the state-of-the-art for compact image representations (256-D) by a large margin on all three datasets, obtaining an mAP of 63.5%, 73.5% and 79.9% on Oxford 5k , Paris 6k , Holidays , respectively; for example, this is a +20% relative improvement on Oxford 5k. Appendix C contains more detailed results. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_48",
"text": " We have designed a new convolutional neural network architecture that is trained for place recognition in an end-to-end manner from weakly supervised Street View Time Machine data. Our trained representation significantly outperforms off-the-shelf CNN models and significantly improves over the state-of-the-art on the challenging 24/7 Tokyo dataset, as well as on the Oxford and Paris image retrieval benchmarks. The two main components of our architecture – (i) the NetVLAD pooling layer and (ii) weakly supervised ranking loss – are generic CNN building blocks applicable beyond the place recognition task. The NetVLAD layer offers a powerful pooling mechanism with learnable parameters that can be easily plugged into any other CNN architecture. The weakly supervised ranking loss opens up the possibility of end-to-end learning for other ranking tasks where large amounts of weakly labelled data are available, for example, images described with natural language . ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
},
{
"id": "1511.07247_all_49",
"text": " This work was partly supported by RVO13000 - Conceptual development of research organization, the ERC grant LEAP (no. 336845), ANR project Semapolis (ANR-13-CORD-0003), JSPS KAKENHI Grant Number 15H05313, the Inria CityLab IPL, and the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory, contract FA8650-12-C-7212. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. Government. ",
"title": "NetVLAD: CNN Architecture for Weakly Supervised Place Recognition"
}
] |
Increasing the input size improved the detection for small objects. Is this true?
|
According to the above evidence, the answer is True [19].
|
[
19
] |
[
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work through the current leading results on PASCAL VOC, COCO, and ILSVRC detection all based on Faster R-CNN albeit with deeper features such as . While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_1",
"text": " This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf (4, 5)), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications—especially using multiple layers for prediction at different scales—we can achieve high-accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks . Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_2",
"text": " We summarize our contributions as follows: • We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN). • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. • To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio. • These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off. • Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_3",
"text": " This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_4",
"text": " The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features: ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_5",
"text": " Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat and YOLO that operate on a single scale feature map). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_6",
"text": " Convolutional predictors for detection Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m×n𝑚𝑛m\\times n with p𝑝p channels, the basic element for predicting parameters of a potential detection is a 3×3×p33𝑝3\\times 3\\times p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n𝑚𝑛m\\times n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location (cf the architecture of YOLO that uses an intermediate fully connected layer instead of a convolutional filter for this step). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_7",
"text": " Default boxes and aspect ratios We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k𝑘k at a given location, we compute c𝑐c class scores and the 444 offsets relative to the original default box shape. This results in a total of (c+4)k𝑐4𝑘(c+4)k filters that are applied around each location in the feature map, yielding (c+4)kmn𝑐4𝑘𝑚𝑛(c+4)kmn outputs for a m×n𝑚𝑛m\\times n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN , however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps let us efficiently discretize the space of possible output box shapes. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_8",
"text": " The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO and for the region proposal stage of Faster R-CNN and MultiBox. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_9",
"text": " During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox ). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_10",
"text": " The SSD training objective is derived from the MultiBox objective (7, 8) but is extended to handle multiple object categories. Let xijp={1,0}superscriptsubscript𝑥𝑖𝑗𝑝10x_{ij}^{p}=\\{1,0\\} be an indicator for matching the i𝑖i-th default box to the j𝑗j-th ground truth box of category p𝑝p. In the matching strategy above, we can have ∑ixijp≥1subscript𝑖superscriptsubscript𝑥𝑖𝑗𝑝1\\sum_{i}x_{ij}^{p}\\geq 1. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf): L(x,c,l,g)=1N(Lconf(x,c)+αLloc(x,l,g))𝐿𝑥𝑐𝑙𝑔1𝑁subscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐𝛼subscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔L(x,c,l,g)=\\frac{1}{N}(L_{conf}(x,c)+\\alpha L_{loc}(x,l,g)) (1) where N is the number of matched default boxes. If N=0𝑁0N=0, wet set the loss to 0. The localization loss is a Smooth L1 loss between the predicted box (l𝑙l) and the ground truth box (g𝑔g) parameters. Similar to Faster R-CNN , we regress to offsets for the center (cx,cy𝑐𝑥𝑐𝑦cx,cy) of the default bounding box (d𝑑d) and for its width (w𝑤w) and height (hℎh). Lloc(x,l,g)=∑i∈PosN∑m∈{cx,cy,w,h}xijksmoothL1(lim−g^jm)g^jcx=(gjcx−dicx)/diwg^jcy=(gjcy−dicy)/dihg^jw=log(gjwdiw)g^jh=log(gjhdih)formulae-sequencesubscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔superscriptsubscript𝑖𝑃𝑜𝑠𝑁subscript𝑚𝑐𝑥𝑐𝑦𝑤ℎsuperscriptsubscript𝑥𝑖𝑗𝑘subscriptsmoothL1superscriptsubscript𝑙𝑖𝑚superscriptsubscript^𝑔𝑗𝑚superscriptsubscript^𝑔𝑗𝑐𝑥superscriptsubscript𝑔𝑗𝑐𝑥superscriptsubscript𝑑𝑖𝑐𝑥superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗𝑐𝑦superscriptsubscript𝑔𝑗𝑐𝑦superscriptsubscript𝑑𝑖𝑐𝑦superscriptsubscript𝑑𝑖ℎsuperscriptsubscript^𝑔𝑗𝑤superscriptsubscript𝑔𝑗𝑤superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗ℎsuperscriptsubscript𝑔𝑗ℎsuperscriptsubscript𝑑𝑖ℎ\\begin{split}L_{loc}(x,l,g)=\\sum_{i\\in Pos}^{N}\\sum_{m\\in\\{cx,cy,w,h\\}}&x_{ij}^{k}\\text{smooth}_{\\text{L1}}(l_{i}^{m}-\\hat{g}_{j}^{m})\\\\ \\hat{g}_{j}^{cx}=(g_{j}^{cx}-d_{i}^{cx})/d_{i}^{w}\\quad\\quad&\\hat{g}_{j}^{cy}=(g_{j}^{cy}-d_{i}^{cy})/d_{i}^{h}\\\\ \\hat{g}_{j}^{w}=\\log\\Big{(}\\frac{g_{j}^{w}}{d_{i}^{w}}\\Big{)}\\quad\\quad&\\hat{g}_{j}^{h}=\\log\\Big{(}\\frac{g_{j}^{h}}{d_{i}^{h}}\\Big{)}\\end{split} (2) The confidence loss is the softmax loss over multiple classes confidences (c𝑐c). Lconf(x,c)=−∑i∈PosNxijplog(c^ip)−∑i∈Neglog(c^i0)wherec^ip=exp(cip)∑pexp(cip)formulae-sequencesubscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐superscriptsubscript𝑖𝑃𝑜𝑠𝑁superscriptsubscript𝑥𝑖𝑗𝑝𝑙𝑜𝑔superscriptsubscript^𝑐𝑖𝑝subscript𝑖𝑁𝑒𝑔𝑙𝑜𝑔superscriptsubscript^𝑐𝑖0wheresuperscriptsubscript^𝑐𝑖𝑝superscriptsubscript𝑐𝑖𝑝subscript𝑝superscriptsubscript𝑐𝑖𝑝L_{conf}(x,c)=-\\sum_{i\\in Pos}^{N}x_{ij}^{p}log(\\hat{c}_{i}^{p})-\\sum_{i\\in Neg}log(\\hat{c}_{i}^{0})\\quad\\text{where}\\quad\\hat{c}_{i}^{p}=\\frac{\\exp(c_{i}^{p})}{\\sum_{p}\\exp(c_{i}^{p})} (3) and the weight term α𝛼\\alpha is set to 1 by cross validation. ",
"title": "SSD: Single Shot MultiBox Detector"
},
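Equation (2) above encodes each matched ground-truth box as offsets relative to its default box; those offsets are what the Smooth L1 localization loss regresses. A minimal sketch of that encoding, with a hypothetical toy box pair for illustration:

```python
import numpy as np

def encode_box(g, d):
    """Sketch of the ground-truth encoding g_hat of eq. (2).

    g, d: boxes given as (cx, cy, w, h); g is the matched ground truth,
    d is the default box. Returns the 4-D regression target.
    """
    gcx, gcy, gw, gh = g
    dcx, dcy, dw, dh = d
    return np.array([(gcx - dcx) / dw,     # centre-x offset, scaled by width
                     (gcy - dcy) / dh,     # centre-y offset, scaled by height
                     np.log(gw / dw),      # log width ratio
                     np.log(gh / dh)])     # log height ratio

# toy example: ground truth slightly shifted and larger than the default box
print(encode_box((0.52, 0.48, 0.30, 0.40), (0.50, 0.50, 0.25, 0.35)))
```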
{
"id": "1512.02325_all_11",
"text": " To handle different object scales, some methods (4, 9) suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works (10, 11) have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8888\\times 8 and 4×4444\\times 4) which are used in the framework. In practice, we can use many more with small computational overhead. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_12",
"text": " Feature maps from different levels within a network are known to have different (empirical) receptive field sizes . Fortunately, within the SSD framework, the default boxes do not necessary need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m𝑚m feature maps for prediction. The scale of the default boxes for each feature map is computed as: sk=smin+smax−sminm−1(k−1),k∈(1,m)formulae-sequencesubscript𝑠𝑘subscript𝑠minsubscript𝑠maxsubscript𝑠min𝑚1𝑘1𝑘1𝑚s_{k}=s_{\\text{min}}+\\frac{s_{\\text{max}}-s_{\\text{min}}}{m-1}(k-1),\\quad k\\in(1,m) (4) where sminsubscript𝑠mins_{\\text{min}} is 0.2 and smaxsubscript𝑠maxs_{\\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as ar∈{1,2,3,12,13}subscript𝑎𝑟1231213a_{r}\\in\\{1,2,3,\\frac{1}{2},\\frac{1}{3}\\}. We can compute the width (wka=skarsuperscriptsubscript𝑤𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟w_{k}^{a}=s_{k}\\sqrt{a_{r}}) and height (hka=sk/arsuperscriptsubscriptℎ𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟h_{k}^{a}=s_{k}/\\sqrt{a_{r}}) for each default box. For the aspect ratio of 1, we also add a default box whose scale is sk′=sksk+1subscriptsuperscript𝑠′𝑘subscript𝑠𝑘subscript𝑠𝑘1s^{\\prime}_{k}=\\sqrt{s_{k}s_{k+1}}, resulting in 6 default boxes per feature map location. We set the center of each default box to (i+0.5|fk|,j+0.5|fk|)𝑖0.5subscript𝑓𝑘𝑗0.5subscript𝑓𝑘(\\frac{i+0.5}{|f_{k}|},\\frac{j+0.5}{|f_{k}|}), where |fk|subscript𝑓𝑘|f_{k}| is the size of the k𝑘k-th square feature map, i,j∈(0,|fk|)𝑖𝑗0subscript𝑓𝑘i,j\\in(0,|f_{k}|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
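Equation (4) above fixes one scale per feature map, and the aspect ratios then determine each default box's width and height. The sketch below reproduces that computation; how $s_{k+1}$ is chosen for the last feature map is an assumption made for this example, not a detail given in the excerpt.

```python
import numpy as np

def default_box_sizes(m, s_min=0.2, s_max=0.9,
                      aspect_ratios=(1.0, 2.0, 3.0, 1 / 2, 1 / 3)):
    """Scales s_k of eq. (4) and the per-location default-box (width, height)
    pairs w_k^a = s_k*sqrt(a_r), h_k^a = s_k/sqrt(a_r), plus the extra box of
    scale sqrt(s_k * s_{k+1}) for aspect ratio 1."""
    s = [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]
    boxes_per_map = []
    for k in range(m):
        boxes = [(s[k] * np.sqrt(a), s[k] / np.sqrt(a)) for a in aspect_ratios]
        s_next = s[k + 1] if k + 1 < m else 1.0   # assumption for the last map
        s_prime = np.sqrt(s[k] * s_next)
        boxes.append((s_prime, s_prime))          # extra box for a_r = 1
        boxes_per_map.append(boxes)
    return s, boxes_per_map

scales, boxes = default_box_sizes(m=6)
print([round(x, 2) for x in scales])   # [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]
print(len(boxes[0]))                   # 6 default boxes per location
```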
{
"id": "1512.02325_all_13",
"text": " By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4444\\times 4 feature map, but not to any default boxes in the 8×8888\\times 8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_14",
"text": " After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
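The hard negative mining described above keeps all positives and only the highest-loss negatives, capped at three times the number of positives. A minimal sketch under the assumption that per-box confidence losses and a positive/negative assignment are already available:

```python
import numpy as np

def hard_negative_mining(conf_loss, is_positive, neg_pos_ratio=3):
    """Select boxes to include in the loss: all positives plus the negatives
    with the highest confidence loss, at most neg_pos_ratio per positive."""
    num_pos = int(is_positive.sum())
    num_neg = min(neg_pos_ratio * num_pos, int((~is_positive).sum()))
    neg_loss = np.where(is_positive, -np.inf, conf_loss)  # mask out positives
    keep_neg = np.argsort(-neg_loss)[:num_neg]            # hardest negatives
    mask = is_positive.copy()
    mask[keep_neg] = True
    return mask  # boolean mask over default boxes used in the loss

# toy: 10 default boxes, 2 of them positive
loss = np.array([0.1, 2.0, 0.3, 1.5, 0.2, 0.9, 0.05, 1.1, 0.4, 0.6])
pos = np.array([True, False, False, False, True,
                False, False, False, False, False])
print(hard_negative_mining(loss, pos).sum())  # 2 positives + 6 negatives = 8
```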
{
"id": "1512.02325_all_15",
"text": " To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options: • Use the entire original input image. • Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9. • Randomly sample a patch. The size of each sampled patch is (0.1, 1) of the original image size, and the aspect ratio is between 1212\\frac{1}{2} and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_16",
"text": " Our experiments are all based on VGG16 , which is pre-trained on the ILSVRC CLS-LOC dataset . Similar to DeepLab-LargeFOV , we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s222𝑠22\\times 2-s2 to 3×3−s133𝑠13\\times 3-s1, and use the à trous algorithm to fill the ”holes”. We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10−3superscript10310^{-3}, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe and is open source at: https://github.com/weiliu89/caffe/tree/ssd . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_17",
"text": " On this dataset, we compare against Fast R-CNN and Faster R-CNN on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_18",
"text": " Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3333For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.15, and 0.07 on conv4_3.. We initialize the parameters for all the newly added convolutional layers with the ”xavier” method . For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1313\\frac{1}{3} and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2.3. Since, as pointed out in , conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10−3superscript10310^{-3} learning rate for 40k iterations, then continue training for 10k iterations with 10−4superscript10410^{-4} and 10−5superscript10510^{-5}. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512512512512\\times 512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tuning them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_19",
"text": " To understand the performance of our two SSD models in more details, we used the detection analysis tool from . Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with “weak” (0.1 jaccard overlap) criteria. Compared to R-CNN , SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300300300300\\times 300 to 512×512512512512\\times 512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_20",
"text": " To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300300300300\\times 300), except for specified changes to the settings or component(s). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_21",
"text": " Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO . Table 2 shows that we can improve 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_22",
"text": " More default box shapes is better. As described in Sec. 2.2.3, by default we use 6 default boxes per location. If we remove the boxes with 1313\\frac{1}{3} and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with 1212\\frac{1}{2} and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_23",
"text": " Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV . If we use the full VGG16, keeping pool5 with 2×2−s222𝑠22\\times 2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_24",
"text": " We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with 10−3superscript10310^{-3} learning rate for 60k iterations, then 10−4superscript10410^{-4} for 20k iterations. Table 4 shows the results of our SSD300 and SSD512444\\ssmallhttp://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=4 model. We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512512512512\\times 512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_25",
"text": " To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2.3, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300×300300300300\\times 300 image)555For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.1, and 0.04 on conv4_3.. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_26",
"text": " We use the trainval35k for training. We first train the model with 10−3superscript10310^{-3} learning rate for 160k iterations, and then continue training for 40k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both [email protected] and mAP@(0.5:0.95). SSD300 has a similar [email protected] as ION and Faster R-CNN , but is worse in [email protected]. By increasing the image size to 512×512512512512\\times 512, our SSD512 is better than Faster R-CNN in both criteria. Interestingly, we observe that SSD512 is 5.3% better in [email protected], but is only 1.2% better in [email protected]. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects with SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 3.2, we show some detection examples on COCO test-dev with the SSD512 model. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_27",
"text": " We applied the same network architecture we used for COCO to the ILSVRC DET dataset . We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in . We first train the model with 10−3superscript10310^{-3} learning rate for 320k iterations, and then continue training for 80k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. We can achieve 43.4 mAP on the val2 set . Again, it validates that SSD is a general framework for high quality real-time detection. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_28",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_29",
"text": " Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a ”zoom in” operation and can generate many larger training examples. To implement a ”zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16×16\\times of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new ”expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 3.2 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_30",
"text": " An alternative way of improving SSD is to design a better tiling of default boxes so that its position and scale are better aligned with the receptive field of each position on a feature map. We leave this for future work. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_31",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_32",
"text": " Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon [email protected]. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_33",
"text": " Table 7 shows the comparison between SSD, Faster R-CNN, and YOLO. Both our SSD300 and SSD512 method outperforms Faster R-CNN in both speed and accuracy. Although Fast YOLO can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_34",
"text": " There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches – Deformable Part Model (DPM) and Selective Search – had comparable performance. However, after the dramatic improvement brought on by R-CNN , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_35",
"text": " The original R-CNN approach has been improved in a variety of ways. The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox for learning objectness. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_36",
"text": " The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox (7, 8), the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate in other tasks. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_37",
"text": " Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat , a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories). Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have similar architecture to OverFeat ; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_38",
"text": " This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods (5, 7). We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN in terms of accuracy on PASCAL VOC and COCO, while being 3×3\\times faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO alternative, while producing markedly superior detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_39",
"text": " Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_40",
"text": " This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771. ",
"title": "SSD: Single Shot MultiBox Detector"
}
] |
What is a calibrated stereo twin?
|
A calibrated stereo twin is the supervision method used by Garg et al. [5].
|
[
5
] |
[
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision has failed to recreate similar modeling capabilities for real-world scenes (e.g., where non-rigidity, occlusion and lack of texture are present). So why do humans excel at this task? One hypothesis is that we develop a rich, structural understanding of the world through our past visual experience that has largely consisted of moving around and observing vast numbers of scenes and developing consistent modeling of our observations. From millions of such observations, we have learned about the regularities of the world—roads are flat, buildings are straight, cars are supported by roads etc., and we can apply this knowledge when perceiving a new scene, even from a single monocular image. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_1",
"text": " In this work, we mimic this approach by training a model that observes sequences of images and aims to explain its observations by predicting likely camera motion and the scene structure (as shown in Fig. 1). We take an end-to-end approach in allowing the model to map directly from input pixels to an estimate of ego-motion (parameterized as 6-DoF transformation matrices) and the underlying scene structure (parameterized as per-pixel depth maps under a reference view). We are particularly inspired by prior work that has suggested view synthesis as a metric and recent work that tackles the calibrated, multi-view 3D case in an end-to-end framework . Our method is unsupervised, and can be trained simply using sequences of images with no manual labeling or even camera motion information. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_2",
"text": " Our approach builds upon the insight that a geometric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground-truth. While imperfect geometry and/or pose estimation can cheat with reasonable synthesized views for certain types of scenes (e.g., textureless), the same model would fail miserably when presented with another set of scenes with more diverse layout and appearance structures. Thus, our goal is to formulate the entire view synthesis pipeline as the inference procedure of a convolutional neural network, so that by training the network on large-scale video data for the ‘meta’-task of view synthesis the network is forced to learn about intermediate tasks of depth and camera pose estimation in order to come up with a consistent explanation of the visual world. Empirical evaluation on the KITTI benchmark demonstrates the effectiveness of our approach on both single-view depth and camera pose estimation. Our code will be made available at https://github.com/tinghuiz/SfMLearner. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_3",
"text": " The simultaneous estimation of structure and motion is a well studied problem with an established toolchain of techniques (12, 50, 38). Whilst the traditional toolchain is effective and efficient in many cases, its reliance on accurate image correspondence can cause problems in areas of low texture, complex geometry/photometry, thin structures, and occlusions. To address these issues, several of the pipeline stages have been recently tackled using deep learning, e.g., feature matching , pose estimation , and stereo (10, 27, 53). These learning-based techniques are attractive in that they are able to leverage external supervision during training, and potentially overcome the above issues when applied to test data. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_4",
"text": " One important application of geometric scene understanding is the task of novel view synthesis, where the goal is to synthesize the appearance of the scene seen from novel camera viewpoints. A classic paradigm for view synthesis is to first either estimate the underlying 3D geometry explicitly or establish pixel correspondence among input views, and then synthesize the novel views by compositing image patches from the input views (e.g., (4, 55, 43, 6, 9)). Recently, end-to-end learning has been applied to reconstruct novel views by transforming the input based on depth or flow, e.g., DeepStereo , Deep3D and Appearance Flows . In these methods, the underlying geometry is represented by quantized depth planes (DeepStereo), probabilistic disparity maps (Deep3D) and view-dependent flow fields (Appearance Flows), respectively. Unlike methods that directly map from input views to the target view (e.g., ), warping-based methods are forced to learn intermediate predictions of geometry and/or correspondence. In this work, we aim to distill such geometric reasoning capability from CNNs trained to perform warping-based view synthesis. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_5",
"text": " Our work is closely related to a line of recent research on learning single-view 3D inference from registered 2D observations. Garg et al. propose to learn a single-view depth estimation CNN using projection errors to a calibrated stereo twin for supervision. Concurrently, Deep3D predicts a second stereo viewpoint from an input image using stereoscopic film footage as training data. A similar approach was taken by Godard et al. , with the addition of a left-right consistency constraint, and a better architecture design that led to impressive performance. Like our approach, these techniques only learn from image observations of the world, unlike methods that require explicit depth for training, e.g., (20, 42, 7, 27, 30). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_6",
"text": " These techniques bear some resemblance to direct methods for structure and motion estimation , where the camera parameters and scene depth are adjusted to minimize a pixel-based error function. However, rather than directly minimizing the error to obtain the estimation, the CNN-based methods only take a gradient step for each batch of input instances, which allows the network to learn an implicit prior from a large corpus of related imagery. Several authors have explored building differentiable rendering operations into their models that are trained in this way, e.g., (19, 29, 34). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_7",
"text": " While most of the above techniques (including ours) are mainly focused on inferring depth maps as the scene geometry output, recent work (e.g., (13, 41, 46, 52)) has also shown success in learning 3D volumetric representations from 2D observations based on similar principles of projective geometry. Fouhey et al. further show that it is even possible to learn 3D inference without 3D labels (or registered 2D views) by utilizing scene regularity. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_8",
"text": " Another line of related work to ours is visual representation learning from video, where the general goal is to design pretext tasks for learning generic visual features from video data that can later be re-purposed for other vision tasks such as object detection and semantic segmentation. Such pretext tasks include ego-motion estimation (2, 24), tracking , temporal coherence , temporal order verification , and object motion mask prediction . While we focus on inferring the explicit scene geometry and ego-motion in this work, intuitively, the internal representation learned by the deep network (especially the single-view depth CNN) should capture some level of semantics that could generalize to other tasks as well. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_9",
"text": " Concurrent to our work, Vijayanarasimhan et al. independently propose a framework for joint training of depth, camera motion and scene motion from videos. While both methods are conceptually similar, ours is focused on the unsupervised aspect, whereas their framework adds the capability to incorporate supervision (e.g., depth, camera motion or scene motion). There are significant differences in how scene dynamics are modeled during training, in which they explicitly solve for object motion whereas our explainability mask discounts regions undergoing motion, occlusion and other factors. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_10",
"text": " Here we propose a framework for jointly training a single-view depth CNN and a camera pose estimation CNN from unlabeled video sequences. Despite being jointly trained, the depth model and the pose estimation model can be used independently during test-time inference. Training examples to our model consist of short image sequences of scenes captured by a moving camera. While our training procedure is robust to some degree of scene motion, we assume that the scenes we are interested in are mostly rigid, i.e., the scene appearance change across different frames is dominated by the camera motion. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_11",
"text": " The key supervision signal for our depth and pose prediction CNNs comes from the task of novel view synthesis: given one input view of a scene, synthesize a new image of the scene seen from a different camera pose. We can synthesize a target view given a per-pixel depth in that image, plus the pose and visibility in a nearby view. As we will show next, this synthesis process can be implemented in a fully differentiable manner with CNNs as the geometry and pose estimation modules. Visibility can be handled, along with non-rigidity and other non-modeled factors, using an “explanability” mask, which we discuss later (Sec. 3.3). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_12",
"text": " Let us denote <I1,…,IN><I_{1},\\ldots,I_{N}> as a training image sequence with one of the frames Itsubscript𝐼𝑡I_{t} being the target view and the rest being the source views Is(1≤s≤N,s≠t)I_{s}(1\\leq s\\leq N,s\\neq t). The view synthesis objective can be formulated as ℒvs=∑s∑p|It(p)−I^s(p)|,subscriptℒ𝑣𝑠subscript𝑠subscript𝑝subscript𝐼𝑡𝑝subscript^𝐼𝑠𝑝\\mathcal{L}_{vs}=\\sum_{s}\\sum_{p}|I_{t}(p)-\\hat{I}_{s}(p)|~{}, (1) where p𝑝p indexes over pixel coordinates, and I^ssubscript^𝐼𝑠\\hat{I}_{s} is the source view Issubscript𝐼𝑠I_{s} warped to the target coordinate frame based on a depth image-based rendering module (described in Sec. 3.2), taking the predicted depth D^tsubscript^𝐷𝑡\\hat{D}_{t}, the predicted 4×4444\\times 4 camera transformation matrix111In practice, the CNN estimates the Euler angles and the 3D translation vector, which are then converted to the transformation matrix. T^t→ssubscript^𝑇→𝑡𝑠\\hat{T}_{t\\rightarrow s} and the source view Issubscript𝐼𝑠I_{s} as input. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_13",
"text": " Note that the idea of view synthesis as supervision has also been recently explored for learning single-view depth estimation (14, 16) and multi-view stereo . However, to the best of our knowledge, all previous work requires posed image sets during training (and testing too in the case of DeepStereo), while our framework can be applied to standard videos without pose information. Furthermore, it predicts the poses as part of the learning framework. See Figure 2 for an illustration of our learning pipeline for depth and pose estimation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_14",
"text": " As indicated in Eq. 1, a key component of our learning framework is a differentiable depth image-based renderer that reconstructs the target view Itsubscript𝐼𝑡I_{t} by sampling pixels from a source view Issubscript𝐼𝑠I_{s} based on the predicted depth map D^tsubscript^𝐷𝑡\\hat{D}_{t} and the relative pose T^t→ssubscript^𝑇→𝑡𝑠\\hat{T}_{t\\rightarrow s}. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_15",
"text": " Let ptsubscript𝑝𝑡p_{t} denote the homogeneous coordinates of a pixel in the target view, and K𝐾K denote the camera intrinsics matrix. We can obtain ptsubscript𝑝𝑡p_{t}’s projected coordinates onto the source view pssubscript𝑝𝑠p_{s} by222For notation simplicity, we omit showing the necessary conversion to homogeneous coordinates along the steps of matrix multiplication. ps∼KT^t→sD^t(pt)K−1ptsimilar-tosubscript𝑝𝑠𝐾subscript^𝑇→𝑡𝑠subscript^𝐷𝑡subscript𝑝𝑡superscript𝐾1subscript𝑝𝑡p_{s}\\sim K\\hat{T}_{t\\rightarrow s}\\hat{D}_{t}(p_{t})K^{-1}p_{t} (2) Notice that the projected coordinates pssubscript𝑝𝑠p_{s} are continuous values. To obtain Is(ps)subscript𝐼𝑠subscript𝑝𝑠I_{s}(p_{s}) for populating the value of I^s(pt)subscript^𝐼𝑠subscript𝑝𝑡\\hat{I}_{s}(p_{t}) (see Figure 3), we then use the differentiable bilinear sampling mechanism proposed in the spatial transformer networks that linearly interpolates the values of the 444-pixel neighbors (top-left, top-right, bottom-left, and bottom-right) of pssubscript𝑝𝑠p_{s} to approximate Is(ps)subscript𝐼𝑠subscript𝑝𝑠I_{s}(p_{s}), i.e. I^s(pt)=Is(ps)=∑i∈{t,b},j∈{l,r}wijIs(psij),subscript^𝐼𝑠subscript𝑝𝑡subscript𝐼𝑠subscript𝑝𝑠subscriptformulae-sequence𝑖𝑡𝑏𝑗𝑙𝑟superscript𝑤𝑖𝑗subscript𝐼𝑠superscriptsubscript𝑝𝑠𝑖𝑗\\hat{I}_{s}(p_{t})=I_{s}(p_{s})=\\sum_{i\\in\\{t,b\\},j\\in\\{l,r\\}}w^{ij}I_{s}(p_{s}^{ij}), where wijsuperscript𝑤𝑖𝑗w^{ij} is linearly proportional to the spatial proximity between pssubscript𝑝𝑠p_{s} and psijsuperscriptsubscript𝑝𝑠𝑖𝑗p_{s}^{ij} , and ∑i,jwij=1subscript𝑖𝑗superscript𝑤𝑖𝑗1\\sum_{i,j}w^{ij}=1. A similar strategy is used in for learning to directly warp between different views, while here the coordinates for pixel warping are obtained through projective geometry that enables the factorization of depth and camera pose. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_16",
"text": " Note that when applied to monocular videos the above view synthesis formulation implicitly assumes 1) the scene is static without moving objects; 2) there is no occlusion/disocclusion between the target view and the source views; 3) the surface is Lambertian so that the photo-consistency error is meaningful. If any of these assumptions are violated in a training sequence, the gradients could be corrupted and potentially inhibit training. To improve the robustness of our learning pipeline to these factors, we additionally train a explainability prediction network (jointly and simultaneously with the depth and pose networks) that outputs a per-pixel soft mask E^ssubscript^𝐸𝑠\\hat{E}_{s} for each target-source pair, indicating the network’s belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted E^ssubscript^𝐸𝑠\\hat{E}_{s}, the view synthesis objective is weighted correspondingly by ℒvs=∑<I1,…,IN>∈𝒮∑pE^s(p)|It(p)−I^s(p)|.subscriptℒ𝑣𝑠subscriptabsentsubscript𝐼1…subscript𝐼𝑁absent𝒮subscript𝑝subscript^𝐸𝑠𝑝subscript𝐼𝑡𝑝subscript^𝐼𝑠𝑝\\mathcal{L}_{vs}=\\sum_{<I_{1},\\ldots,I_{N}>\\in\\mathcal{S}}\\sum_{p}\\hat{E}_{s}(p)|I_{t}(p)-\\hat{I}_{s}(p)|~{}. (3) Since we do not have direct supervision for E^ssubscript^𝐸𝑠\\hat{E}_{s}, training with the above loss would result in a trivial solution of the network always predicting E^ssubscript^𝐸𝑠\\hat{E}_{s} to be zero, which perfectly minimizes the loss. To resolve this, we add a regularization term ℒreg(E^s)subscriptℒ𝑟𝑒𝑔subscript^𝐸𝑠\\mathcal{L}_{reg}(\\hat{E}_{s}) that encourages nonzero predictions by minimizing the cross-entropy loss with constant label 111 at each pixel location. In other words, the network is encouraged to minimize the view synthesis objective, but allowed a certain amount of slack for discounting the factors not considered by the model. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_17",
"text": " One remaining issue with the above learning pipeline is that the gradients are mainly derived from the pixel intensity difference between I(pt)𝐼subscript𝑝𝑡I(p_{t}) and the four neighbors of I(ps)𝐼subscript𝑝𝑠I(p_{s}), which would inhibit training if the correct pssubscript𝑝𝑠p_{s} (projected using the ground-truth depth and pose) is located in a low-texture region or far from the current estimation. This is a well known issue in motion estimation . Empirically, we found two strategies to be effective for overcoming this issue: 1) using a convolutional encoder-decoder architecture with a small bottleneck for the depth network that implicitly constrains the output to be globally smooth and facilitates gradients to propagate from meaningful regions to nearby regions; 2) explicit multi-scale and smoothness loss (e.g., as in (14, 16)) that allows gradients to be derived from larger spatial regions directly. We adopt the second strategy in this work as it is less sensitive to architectural choices. For smoothness, we minimize the L1subscript𝐿1L_{1} norm of the second-order gradients for the predicted depth maps (similar to ). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_18",
"text": " Our final objective becomes ℒfinal=∑lℒvsl+λsℒsmoothl+λe∑sℒreg(E^sl),subscriptℒ𝑓𝑖𝑛𝑎𝑙subscript𝑙superscriptsubscriptℒ𝑣𝑠𝑙subscript𝜆𝑠subscriptsuperscriptℒ𝑙𝑠𝑚𝑜𝑜𝑡ℎsubscript𝜆𝑒subscript𝑠subscriptℒ𝑟𝑒𝑔superscriptsubscript^𝐸𝑠𝑙\\mathcal{L}_{final}=\\sum_{l}\\mathcal{L}_{vs}^{l}+\\lambda_{s}\\mathcal{L}^{l}_{smooth}+\\lambda_{e}\\sum_{s}\\mathcal{L}_{reg}(\\hat{E}_{s}^{l})~{}, (4) where l𝑙l indexes over different image scales, s𝑠s indexes over source images, and λssubscript𝜆𝑠\\lambda_{s} and λesubscript𝜆𝑒\\lambda_{e} are the weighting for the depth smoothness loss and the explainability regularization, respectively. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_19",
"text": " For single-view depth prediction, we adopt the DispNet architecture proposed in that is mainly based on an encoder-decoder design with skip connections and multi-scale side predictions (see Figure 4). All conv layers are followed by ReLU activation except for the prediction layers, where we use 1/(α∗sigmoid(x)+β)1𝛼𝑠𝑖𝑔𝑚𝑜𝑖𝑑𝑥𝛽1/(\\alpha*sigmoid(x)+\\beta) with α=10𝛼10\\alpha=10 and β=0.01𝛽0.01\\beta=0.01 to constrain the predicted depth to be always positive within a reasonable range. We also experimented with using multiple views as input to the depth network, but did not find this to improve the results. This is in line with the observations in , where optical flow constraints need to be enforced to utilize multiple views effectively. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_20",
"text": " The input to the pose estimation network is the target view concatenated with all the source views (along the color channels), and the outputs are the relative poses between the target view and each of the source views. The network consists of 777 stride-2 convolutions followed by a 1×1111\\times 1 convolution with 6∗(N−1)6𝑁16*(N-1) output channels (corresponding to 333 Euler angles and 333-D translation for each source view). Finally, global average pooling is applied to aggregate predictions at all spatial locations. All conv layers are followed by ReLU except for the last layer where no nonlinear activation is applied. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_21",
"text": " The explainability prediction network shares the first five feature encoding layers with the pose network, followed by 555 deconvolution layers with multi-scale side predictions. All conv/deconv layers are followed by ReLU except for the prediction layers with no nonlinear activation. The number of output channels for each prediction layer is 2∗(N−1)2𝑁12*(N-1), with every two channels normalized by softmax to obtain the explainability prediction for the corresponding source-target pair (the second channel after normalization is E^ssubscript^𝐸𝑠\\hat{E}_{s} and used in computing the loss in Eq. 3). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_22",
"text": " Here we evaluate the performance of our system, and compare with prior approaches on single-view depth as well as ego-motion estimation. We mainly use the KITTI dataset for benchmarking, but also use the Make3D dataset for evaluating cross-dataset generalization ability. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_23",
"text": " We implemented the system using the publicly available TensorFlow framework. For all the experiments, we set λs=0.5/lsubscript𝜆𝑠0.5𝑙\\lambda_{s}=0.5/l (l𝑙l is the downscaling factor for the corresponding scale) and λe=0.2subscript𝜆𝑒0.2\\lambda_{e}=0.2. During training, we used batch normalization for all the layers except for the output layers, and the Adam optimizer with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.999subscript𝛽20.999\\beta_{2}=0.999, learning rate of 0.00020.00020.0002 and mini-batch size of 444. The training typically converges after about 150K150𝐾150K iterations. All the experiments are performed with image sequences captured with a monocular camera. We resize the images to 128×416128416128\\times 416 during training, but both the depth and pose networks can be run fully-convolutionally for images of arbitrary size at test time. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_24",
"text": " We train our system on the split provided by , and exclude all the frames from the testing scenes as well as static sequences with mean optical flow magnitude less than 111 pixel for training. We fix the length of image sequences to be 333 frames, and treat the central frame as the target view and the ±1plus-or-minus1\\pm 1 frames as the source views. We use images captured by both color cameras, but treated them independently when forming training sequences. This results in a total of 44,5404454044,540 sequences, out of which we use 40,1094010940,109 for training and 4,43144314,431 for validation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_25",
"text": " To the best of our knowledge, no previous systems exist that learn single-view depth estimation in an unsupervised manner from monocular videos. Nonetheless, here we provide comparison with prior methods with depth supervision and recent methods that use calibrated stereo images (i.e. with pose supervision) for training (14, 16). Since the depth predicted by our method is defined up to a scale factor, for evaluation we multiply the predicted depth maps by a scalar s^^𝑠\\hat{s} that matches the median with the ground-truth, i.e. s^=median(Dgt)/median(Dpred)^𝑠𝑚𝑒𝑑𝑖𝑎𝑛subscript𝐷𝑔𝑡𝑚𝑒𝑑𝑖𝑎𝑛subscript𝐷𝑝𝑟𝑒𝑑\\hat{s}=median(D_{gt})/median(D_{pred}). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_26",
"text": " Similar to , we also experimented with first pre-training the system on the larger Cityscapes dataset (sample predictions are shown in Figure 5), and then fine-tune on KITTI, which results in slight performance improvement. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_27",
"text": " Here we evaluate the single-view depth performance on the 697697697 images from the test split of . As shown in Table 1, our unsupervised method performs comparably with several supervised methods (e.g. Eigen et al. and Garg et al. ), but falls short of concurrent work by Godard et al. that uses calibrated stereo images (i.e. with pose supervision) with left-right cycle consistency loss for training. For future work, it would be interesting to see if incorporating the similar cycle consistency loss into our framework could further improve the results. Figure 6 provides examples of visual comparison between our results and some supervised baselines over a variety of examples. One can see that although trained in an unsupervised manner, our results are comparable to that of the supervised baselines, and sometimes preserve the depth boundaries and thin structures such as trees and street lights better. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_28",
"text": " We show sample predictions made by our initial Cityscapes model and the final model (pre-trained on Cityscapes and then fine-tuned on KITTI) in Figure 7. Due to the domain gap between the two datasets, our Cityscapes model sometimes has difficulty in recovering the complete shape of the car/bushes, and mistakes them with distant objects. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_29",
"text": " We also performed an ablation study of the explainability modeling (see Table 1), which turns out only offering a modest performance boost. This is likely because 1) most of the KITTI scenes are static without significant scene motions, and 2) the occlusion/visibility effects only occur in small regions in sequences across a short time span (333-frames), which make the explainability modeling less essential to the success of training. Nonetheless, our explainability prediction network does seem to capture the factors like scene motion and visibility well (see Sec. 4.3), and could potentially be more important for other more challenging datasets. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_30",
"text": " To evaluate the generalization ability of our single-view depth model, we directly apply our model trained on Cityscapes + KITTI to the Make3D dataset unseen during training. While there still remains a significant performance gap between our method and others supervised using Make3D ground-truth depth (see Table 2), our predictions are able to capture the global scene layout reasonably well without any training on the Make3D images (see Figure 8). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_31",
"text": " To evaluate the performance of our pose estimation network, we applied our system to the official KITTI odometry split (containing 111111 driving sequences with ground truth odometry obtained through the IMU/GPS readings, which we use for evaluation purpose only), and used sequences 000000-080808 for training and 090909-101010 for testing. In this experiment, we fix the length of input image sequences to our system to 555 frames. We compare our ego-motion estimation with two variants of monocular ORB-SLAM (a well-established SLAM system): 1) ORB-SLAM (full), which recovers odometry using all frames of the driving sequence (i.e. allowing loop closure and re-localization), and 2) ORB-SLAM (short), which runs on 555-frame snippets (same as our input setting). Another baseline we compare with is the dataset mean of car motion (using ground-truth odometry) for 555-frame snippets. To resolve scale ambiguity during evaluation, we first optimize the scaling factor for the predictions made by each method to best align with the ground truth, and then measure the Absolute Trajectory Error (ATE) as the metric. ATE is computed on 555-frame snippets and averaged over the full sequence.333For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 555-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet. As shown in Table 3 and Fig. 9, our method outperforms both baselines (mean odometry and ORB-SLAM (short)) that share the same input setting as ours, but falls short of ORB-SLAM (full), which leverages whole sequences (159115911591 for seq. 090909 and 120112011201 for seq. 101010) for loop closure and re-localization. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_32",
"text": " For better understanding of our pose estimation results, we show in Figure 9 the ATE curve with varying amount of side-rotation by the car between the beginning and the end of a sequence. Figure 9 suggests that our method is significantly better than ORB-SLAM (short) when the side-rotation is small (i.e. car mostly driving forward), and comparable to ORB-SLAM (full) across the entire spectrum. The large performance gap between ours and ORB-SLAM (short) suggests that our learned ego-motion could potentially be used as an alternative to the local estimation modules in monocular SLAM systems. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_33",
"text": " We visualize example explainability masks predicted by our network in Figure 10. The first three rows suggest that the network has learned to identify dynamic objects in the scene as unexplainable by our model, and similarly, rows 4–5 are examples of objects that disappear from the frame in subsequent views. The last two rows demonstrate the potential downside of explainability-weighted loss: the depth CNN has low confidence in predicting thin structures well, and tends to mask them as unexplainable. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_34",
"text": " We have presented an end-to-end learning pipeline that utilizes the task of view synthesis for supervision of single-view depth and camera pose estimation. The system is trained on unlabeled videos, and yet performs comparably with approaches that require ground-truth depth or pose for training. Despite good performance on the benchmark evaluation, our method is by no means close to solving the general problem of unsupervised learning of 3D scene structure inference. A number of major challenges are yet to be addressed: 1) our current framework does not explicitly estimate scene dynamics and occlusions (although they are implicitly taken into account by the explainability masks), both of which are critical factors in 3D scene understanding. Direct modeling of scene dynamics through motion segmentation (e.g. (48, 40)) could be a potential solution; 2) our framework assumes the camera intrinsics are given, which forbids the use of random Internet videos with unknown camera types/calibration – we plan to address this in future work; 3) depth maps are a simplified representation of the underlying 3D scene. It would be interesting to extend our framework to learn full 3D volumetric representations (e.g. ). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_35",
"text": " Another interesting area for future work would be to investigate in more detail the representation learned by our system. In particular, the pose network likely uses some form of image correspondence in estimating the camera motion, whereas the depth estimation network likely recognizes common structural features of scenes and objects. It would be interesting to probe these, and investigate the extent to which our network already performs, or could be re-purposed to perform, tasks such as object detection and semantic segmentation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_36",
"text": " We thank our colleagues, Sudheendra Vijayanarasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Katerina Fragkiadaki for their help. We also thank the anonymous reviewers for their valuable comments. TZ would like to thank Shubham Tulsiani for helpful discussions, and Clement Godard for sharing the evaluation code. This work is also partially funded by Intel/NSF VEC award IIS-1539099. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
}
] |
Could the authors have used a BiLSTM instead of an LSTM to improve the performance of their proposed model further?
|
While the paper notes that LSTM has shown good performance on some sequence tasks, there is no information about BiLSTM in this paper, so this question cannot be answered from the paper alone [13]. To answer the question, external knowledge about BiLSTM is required to compare how it would work against the existing LSTM model [17].
|
[
13,
17
] |
[
{
"id": "1411.4555_all_0",
"text": " Being able to automatically describe the content of an image using properly formed English sentences is a very challenging task, but it could have great impact, for instance by helping visually impaired people better understand the content of images on the web. This task is significantly harder, for example, than the well-studied image classification or object recognition tasks, which have been a main focus in the computer vision community . Indeed, a description must capture not only the objects contained in an image, but it also must express how these objects relate to each other as well as their attributes and the activities they are involved in. Moreover, the above semantic knowledge has to be expressed in a natural language like English, which means that a language model is needed in addition to visual understanding. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_1",
"text": " Most previous attempts have proposed to stitch together existing solutions of the above sub-problems, in order to go from an image to its description (6, 16). In contrast, we would like to present in this work a single joint model that takes an image I𝐼I as input, and is trained to maximize the likelihood p(S|I)𝑝conditional𝑆𝐼p(S|I) of producing a target sequence of words S={S1,S2,…}𝑆subscript𝑆1subscript𝑆2…S=\\{S_{1},S_{2},\\ldots\\} where each word Stsubscript𝑆𝑡S_{t} comes from a given dictionary, that describes the image adequately. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_2",
"text": " The main inspiration of our work comes from recent advances in machine translation, where the task is to transform a sentence S𝑆S written in a source language, into its translation T𝑇T in the target language, by maximizing p(T|S)𝑝conditional𝑇𝑆p(T|S). For many years, machine translation was also achieved by a series of separate tasks (translating words individually, aligning words, reordering, etc), but recent work has shown that translation can be done in a much simpler way using Recurrent Neural Networks (RNNs) (3, 2, 30) and still reach state-of-the-art performance. An “encoder” RNN reads the source sentence and transforms it into a rich fixed-length vector representation, which in turn in used as the initial hidden state of a “decoder” RNN that generates the target sentence. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_3",
"text": " Here, we propose to follow this elegant recipe, replacing the encoder RNN by a deep convolution neural network (CNN). Over the last few years it has been convincingly shown that CNNs can produce a rich representation of the input image by embedding it to a fixed-length vector, such that this representation can be used for a variety of vision tasks . Hence, it is natural to use a CNN as an image “encoder”, by first pre-training it for an image classification task and using the last hidden layer as an input to the RNN decoder that generates sentences (see Fig. 1). We call this model the Neural Image Caption, or NIC. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_4",
"text": " Our contributions are as follows. First, we present an end-to-end system for the problem. It is a neural net which is fully trainable using stochastic gradient descent. Second, our model combines state-of-art sub-networks for vision and language models. These can be pre-trained on larger corpora and thus can take advantage of additional data. Finally, it yields significantly better performance compared to state-of-the-art approaches; for instance, on the Pascal dataset, NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69. On Flickr30k, we improve from 56 to 66, and on SBU, from 19 to 28. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_5",
"text": " The problem of generating natural language descriptions from visual data has long been studied in computer vision, but mainly for video (7, 32). This has led to complex systems composed of visual primitive recognizers combined with a structured formal language, e.g. And-Or Graphs or logic systems, which are further converted to natural language via rule-based systems. Such systems are heavily hand-designed, relatively brittle and have been demonstrated only on limited domains, e.g. traffic scenes or sports. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_6",
"text": " The problem of still image description with natural text has gained interest more recently. Leveraging recent advances in recognition of objects, their attributes and locations, allows us to drive natural language generation systems, though these are limited in their expressivity. Farhadi et al. use detections to infer a triplet of scene elements which is converted to text using templates. Similarly, Li et al. start off with detections and piece together a final description using phrases containing detected objects and relationships. A more complex graph of detections beyond triplets is used by Kulkani et al. , but with template-based text generation. More powerful language models based on language parsing have been used as well (23, 1, 17, 18, 5). The above approaches have been able to describe images “in the wild”, but they are heavily hand-designed and rigid when it comes to text generation. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_7",
"text": " A large body of work has addressed the problem of ranking descriptions for a given image (11, 8, 24). Such approaches are based on the idea of co-embedding of images and text in the same vector space. For an image query, descriptions are retrieved which lie close to the image in the embedding space. Most closely, neural networks are used to co-embed images and sentences together or even image crops and subsentences but do not attempt to generate novel descriptions. In general, the above approaches cannot describe previously unseen compositions of objects, even though the individual objects might have been observed in the training data. Moreover, they avoid addressing the problem of evaluating how good a generated description is. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_8",
"text": " In this work we combine deep convolutional nets for image classification with recurrent networks for sequence modeling , to create a single network that generates descriptions of images. The RNN is trained in the context of this single “end-to-end” network. The model is inspired by recent successes of sequence generation in machine translation (3, 2, 30), with the difference that instead of starting with a sentence, we provide an image processed by a convolutional net. The closest works are by Kiros et al. who use a neural net, but a feedforward one, to predict the next word given the image and previous words. A recent work by Mao et al. uses a recurrent NN for the same prediction task. This is very similar to the present proposal but there are a number of important differences: we use a more powerful RNN model, and provide the visual input to the RNN model directly, which makes it possible for the RNN to keep track of the objects that have been explained by the text. As a result of these seemingly insignificant differences, our system achieves substantially better results on the established benchmarks. Lastly, Kiros et al. propose to construct a joint multimodal embedding space by using a powerful computer vision model and an LSTM that encodes text. In contrast to our approach, they use two separate pathways (one for images, one for text) to define a joint embedding, and, even though they can generate text, their approach is highly tuned for ranking. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_9",
"text": " In this paper, we propose a neural and probabilistic framework to generate descriptions from images. Recent advances in statistical machine translation have shown that, given a powerful sequence model, it is possible to achieve state-of-the-art results by directly maximizing the probability of the correct translation given an input sentence in an “end-to-end” fashion – both for training and inference. These models make use of a recurrent neural network which encodes the variable length input into a fixed dimensional vector, and uses this representation to “decode” it to the desired output sentence. Thus, it is natural to use the same approach where, given an image (instead of an input sentence in the source language), one applies the same principle of “translating” it into its description. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_10",
"text": " Thus, we propose to directly maximize the probability of the correct description given the image by using the following formulation: ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_11",
"text": " θ⋆=argmaxθ∑(I,S)logp(S|I;θ)superscript𝜃⋆subscript𝜃subscript𝐼𝑆𝑝conditional𝑆𝐼𝜃\\theta^{\\star}=\\arg\\max_{\\theta}\\sum_{(I,S)}\\log p(S|I;\\theta) (1) where θ𝜃\\theta are the parameters of our model, I𝐼I is an image, and S𝑆S its correct transcription. Since S𝑆S represents any sentence, its length is unbounded. Thus, it is common to apply the chain rule to model the joint probability over S0,…,SNsubscript𝑆0…subscript𝑆𝑁S_{0},\\ldots,S_{N}, where N𝑁N is the length of this particular example as ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_12",
"text": " logp(S|I)=∑t=0Nlogp(St|I,S0,…,St−1)𝑝conditional𝑆𝐼superscriptsubscript𝑡0𝑁𝑝conditionalsubscript𝑆𝑡𝐼subscript𝑆0…subscript𝑆𝑡1\\log p(S|I)=\\sum_{t=0}^{N}\\log p(S_{t}|I,S_{0},\\ldots,S_{t-1}) (2) where we dropped the dependency on θ𝜃\\theta for convenience. At training time, (S,I)𝑆𝐼(S,I) is a training example pair, and we optimize the sum of the log probabilities as described in (2) over the whole training set using stochastic gradient descent (further training details are given in Section 4). ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_13",
"text": " It is natural to model p(St|I,S0,…,St−1)𝑝conditionalsubscript𝑆𝑡𝐼subscript𝑆0…subscript𝑆𝑡1p(S_{t}|I,S_{0},\\ldots,S_{t-1}) with a Recurrent Neural Network (RNN), where the variable number of words we condition upon up to t−1𝑡1t-1 is expressed by a fixed length hidden state or memory htsubscriptℎ𝑡h_{t}. This memory is updated after seeing a new input xtsubscript𝑥𝑡x_{t} by using a non-linear function f𝑓f: ht+1=f(ht,xt).subscriptℎ𝑡1𝑓subscriptℎ𝑡subscript𝑥𝑡h_{t+1}=f(h_{t},x_{t})\\;. (3) To make the above RNN more concrete two crucial design choices are to be made: what is the exact form of f𝑓f and how are the images and words fed as inputs xtsubscript𝑥𝑡x_{t}. For f𝑓f we use a Long-Short Term Memory (LSTM) net, which has shown state-of-the art performance on sequence tasks such as translation. This model is outlined in the next section. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_14",
"text": " For the representation of images, we use a Convolutional Neural Network (CNN). They have been widely used and studied for image tasks, and are currently state-of-the art for object recognition and detection. Our particular choice of CNN uses a novel approach to batch normalization and yields the current best performance on the ILSVRC 2014 classification competition . Furthermore, they have been shown to generalize to other tasks such as scene classification by means of transfer learning . The words are represented with an embedding model. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_15",
"text": " The choice of f𝑓f in (3) is governed by its ability to deal with vanishing and exploding gradients , the most common challenge in designing and training RNNs. To address this challenge, a particular form of recurrent nets, called LSTM, was introduced and applied with great success to translation (3, 30) and sequence generation . ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_16",
"text": " The core of the LSTM model is a memory cell c𝑐c encoding knowledge at every time step of what inputs have been observed up to this step (see Figure 2) . The behavior of the cell is controlled by “gates” – layers which are applied multiplicatively and thus can either keep a value from the gated layer if the gate is 111 or zero this value if the gate is 00. In particular, three gates are being used which control whether to forget the current cell value (forget gate f𝑓f), if it should read its input (input gate i𝑖i) and whether to output the new cell value (output gate o𝑜o). The definition of the gates and cell update and output are as follows: itsubscript𝑖𝑡\\displaystyle i_{t} =\\displaystyle= σ(Wixxt+Wimmt−1)𝜎subscript𝑊𝑖𝑥subscript𝑥𝑡subscript𝑊𝑖𝑚subscript𝑚𝑡1\\displaystyle\\sigma(W_{ix}x_{t}+W_{im}m_{t-1}) (4) ftsubscript𝑓𝑡\\displaystyle f_{t} =\\displaystyle= σ(Wfxxt+Wfmmt−1)𝜎subscript𝑊𝑓𝑥subscript𝑥𝑡subscript𝑊𝑓𝑚subscript𝑚𝑡1\\displaystyle\\sigma(W_{fx}x_{t}+W_{fm}m_{t-1}) (5) otsubscript𝑜𝑡\\displaystyle o_{t} =\\displaystyle= σ(Woxxt+Wommt−1)𝜎subscript𝑊𝑜𝑥subscript𝑥𝑡subscript𝑊𝑜𝑚subscript𝑚𝑡1\\displaystyle\\sigma(W_{ox}x_{t}+W_{om}m_{t-1}) (6) ctsubscript𝑐𝑡\\displaystyle c_{t} =\\displaystyle= ft⊙ct−1+it⊙h(Wcxxt+Wcmmt−1)direct-productsubscript𝑓𝑡subscript𝑐𝑡1direct-productsubscript𝑖𝑡ℎsubscript𝑊𝑐𝑥subscript𝑥𝑡subscript𝑊𝑐𝑚subscript𝑚𝑡1\\displaystyle f_{t}\\odot c_{t-1}+i_{t}\\odot h(W_{cx}x_{t}+W_{cm}m_{t-1}) (7) mtsubscript𝑚𝑡\\displaystyle m_{t} =\\displaystyle= ot⊙ctdirect-productsubscript𝑜𝑡subscript𝑐𝑡\\displaystyle o_{t}\\odot c_{t} (8) pt+1subscript𝑝𝑡1\\displaystyle p_{t+1} =\\displaystyle= Softmax(mt)Softmaxsubscript𝑚𝑡\\displaystyle\\textrm{Softmax}(m_{t}) (9) where ⊙direct-product\\odot represents the product with a gate value, and the various W𝑊W matrices are trained parameters. Such multiplicative gates make it possible to train the LSTM robustly as these gates deal well with exploding and vanishing gradients . The nonlinearities are sigmoid σ(⋅)𝜎⋅\\sigma(\\cdot) and hyperbolic tangent h(⋅)ℎ⋅h(\\cdot). The last equation mtsubscript𝑚𝑡m_{t} is what is used to feed to a Softmax, which will produce a probability distribution ptsubscript𝑝𝑡p_{t} over all words. ",
"title": "Show and tell: A neural image caption generator"
},
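As an illustration of Eqs. (4)-(9) above, the following NumPy sketch spells out one LSTM step with the same gate structure. It is not the authors' implementation: weights are random, there is no batching, and the projection from the memory m_t to vocabulary logits before the Softmax is an assumed detail (Eq. (9) writes Softmax(m_t) directly).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lstm_step(x_t, m_prev, c_prev, W):
    i = sigmoid(W["ix"] @ x_t + W["im"] @ m_prev)                    # input gate,  Eq. (4)
    f = sigmoid(W["fx"] @ x_t + W["fm"] @ m_prev)                    # forget gate, Eq. (5)
    o = sigmoid(W["ox"] @ x_t + W["om"] @ m_prev)                    # output gate, Eq. (6)
    c = f * c_prev + i * np.tanh(W["cx"] @ x_t + W["cm"] @ m_prev)   # cell update, Eq. (7)
    m = o * c                                                        # memory,      Eq. (8)
    p = softmax(W["out"] @ m)                                        # word distribution, Eq. (9)
    return m, c, p

dim, vocab = 16, 100
rng = np.random.default_rng(0)
W = {k: rng.normal(scale=0.1, size=(dim, dim))
     for k in ("ix", "im", "fx", "fm", "ox", "om", "cx", "cm")}
W["out"] = rng.normal(scale=0.1, size=(vocab, dim))                  # assumed output projection

m, c = np.zeros(dim), np.zeros(dim)
m, c, p = lstm_step(rng.normal(size=dim), m, c, W)                   # p sums to 1 over the vocabulary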
{
"id": "1411.4555_all_17",
"text": " The LSTM model is trained to predict each word of the sentence after it has seen the image as well as all preceding words as defined by p(St|I,S0,…,St−1)𝑝conditionalsubscript𝑆𝑡𝐼subscript𝑆0…subscript𝑆𝑡1p(S_{t}|I,S_{0},\\ldots,S_{t-1}). For this purpose, it is instructive to think of the LSTM in unrolled form – a copy of the LSTM memory is created for the image and each sentence word such that all LSTMs share the same parameters and the output mt−1subscript𝑚𝑡1m_{t-1} of the LSTM at time t−1𝑡1t-1 is fed to the LSTM at time t𝑡t (see Figure 3). All recurrent connections are transformed to feed-forward connections in the unrolled version. In more detail, if we denote by I𝐼I the input image and by S=(S0,…,SN)𝑆subscript𝑆0…subscript𝑆𝑁S=(S_{0},\\ldots,S_{N}) a true sentence describing this image, the unrolling procedure reads: x−1subscript𝑥1\\displaystyle x_{-1} =\\displaystyle= CNN(I)CNN𝐼\\displaystyle\\textrm{CNN}(I) (10) xtsubscript𝑥𝑡\\displaystyle x_{t} =\\displaystyle= WeSt,t∈{0…N−1}subscript𝑊𝑒subscript𝑆𝑡𝑡0…𝑁1\\displaystyle W_{e}S_{t},\\quad t\\in\\{0\\ldots N-1\\}\\quad (11) pt+1subscript𝑝𝑡1\\displaystyle p_{t+1} =\\displaystyle= LSTM(xt),t∈{0…N−1}LSTMsubscript𝑥𝑡𝑡0…𝑁1\\displaystyle\\textrm{LSTM}(x_{t}),\\quad t\\in\\{0\\ldots N-1\\}\\quad (12) where we represent each word as a one-hot vector Stsubscript𝑆𝑡S_{t} of dimension equal to the size of the dictionary. Note that we denote by S0subscript𝑆0S_{0} a special start word and by SNsubscript𝑆𝑁S_{N} a special stop word which designates the start and end of the sentence. In particular by emitting the stop word the LSTM signals that a complete sentence has been generated. Both the image and the words are mapped to the same space, the image by using a vision CNN, the words by using word embedding Wesubscript𝑊𝑒W_{e}. The image I𝐼I is only input once, at t=−1𝑡1t=-1, to inform the LSTM about the image contents. We empirically verified that feeding the image at each time step as an extra input yields inferior results, as the network can explicitly exploit noise in the image and overfits more easily. ",
"title": "Show and tell: A neural image caption generator"
},
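The unrolling in Eqs. (10)-(12) above amounts to a simple loop: feed CNN(I) once at t = -1, then the embedding of each ground-truth word. The sketch below is illustrative only; cnn, lstm and the weight shapes are toy stand-ins, not the paper's components.

import numpy as np

def unrolled_forward(image, word_ids, cnn, W_e, lstm, init_state):
    """Return the word distributions p_1 ... p_N produced by the unrolled model."""
    state, _ = lstm(cnn(image), init_state)       # x_{-1} = CNN(I), Eq. (10)
    probs = []
    for w in word_ids[:-1]:                       # S_0 ... S_{N-1}
        x_t = W_e[:, w]                           # x_t = W_e S_t,   Eq. (11)
        state, p = lstm(x_t, state)               # p_{t+1} = LSTM(x_t), Eq. (12)
        probs.append(p)
    return probs

# Toy stand-ins so the sketch runs end to end.
dim, vocab = 8, 20
rng = np.random.default_rng(0)
W_e = rng.normal(scale=0.1, size=(dim, vocab))    # word embedding matrix
W_out = rng.normal(scale=0.1, size=(dim, vocab))
cnn = lambda img: rng.normal(size=dim)            # pretend image encoder

def lstm(x, h):                                   # pretend recurrent cell
    h = np.tanh(h + x)
    e = np.exp(h @ W_out)
    return h, e / e.sum()

sentence = [0, 5, 7, 19]                          # S_0 (start) ... S_N (stop)
probs = unrolled_forward(np.zeros((64, 64, 3)), sentence, cnn, W_e, lstm, np.zeros(dim))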
{
"id": "1411.4555_all_18",
"text": " Our loss is the sum of the negative log likelihood of the correct word at each step as follows: L(I,S)=−∑t=1Nlogpt(St).𝐿𝐼𝑆superscriptsubscript𝑡1𝑁subscript𝑝𝑡subscript𝑆𝑡L(I,S)=-\\sum_{t=1}^{N}\\log p_{t}(S_{t})\\;. (13) The above loss is minimized w.r.t. all the parameters of the LSTM, the top layer of the image embedder CNN and word embeddings Wesubscript𝑊𝑒W_{e}. ",
"title": "Show and tell: A neural image caption generator"
},
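Eq. (13) above is just a per-step cross-entropy summed over the sentence; a tiny self-contained sketch follows (the probabilities below are made-up numbers):

import numpy as np

def caption_loss(step_probs, target_ids):
    """Sum of negative log likelihoods of the correct word at each step, Eq. (13).
    step_probs[t] is the predicted distribution at step t+1, target_ids[t] is S_{t+1}."""
    return -sum(np.log(p[w]) for p, w in zip(step_probs, target_ids))

# Three steps over a four-word vocabulary.
probs = [np.array([0.7, 0.1, 0.1, 0.1]),
         np.array([0.2, 0.6, 0.1, 0.1]),
         np.array([0.1, 0.1, 0.1, 0.7])]
loss = caption_loss(probs, [0, 1, 3])             # ≈ 1.22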
{
"id": "1411.4555_all_19",
"text": " There are multiple approaches that can be used to generate a sentence given an image, with NIC. The first one is Sampling where we just sample the first word according to p1subscript𝑝1p_{1}, then provide the corresponding embedding as input and sample p2subscript𝑝2p_{2}, continuing like this until we sample the special end-of-sentence token or some maximum length. The second one is BeamSearch: iteratively consider the set of the k𝑘k best sentences up to time t𝑡t as candidates to generate sentences of size t+1𝑡1t+1, and keep only the resulting best k𝑘k of them. This better approximates S=argmaxS′p(S′|I)𝑆subscriptsuperscript𝑆′𝑝conditionalsuperscript𝑆′𝐼S=\\arg\\max_{S^{\\prime}}p(S^{\\prime}|I). We used the BeamSearch approach in the following experiments, with a beam of size 20. Using a beam size of 1 (i.e., greedy search) did degrade our results by 2 BLEU points on average. ",
"title": "Show and tell: A neural image caption generator"
},
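A compact sketch of the BeamSearch procedure described above: keep the k best partial sentences by accumulated log probability, expand each, and keep the best k of the expansions, retiring hypotheses that emit the end token. The step_fn interface, the toy model and all names are assumptions for illustration, not the authors' decoder.

import math

def beam_search(step_fn, start_id, end_id, k=20, max_len=20):
    """step_fn(prefix) -> list of (word_id, prob) candidates for the next word."""
    beams = [((start_id,), 0.0)]                              # (prefix, log prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for word, prob in step_fn(prefix):
                cand = (prefix + (word,), score + math.log(prob))
                (finished if word == end_id else candidates).append(cand)
        if not candidates:
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]  # keep best k
    return max(finished + beams, key=lambda c: c[1])          # best completed (or partial) sentence

# Toy next-word model over a four-word vocabulary; id 3 is the end-of-sentence token.
def toy_step(prefix):
    return [(0, 0.5), (1, 0.3), (3, 0.2)]

best_sentence, best_logprob = beam_search(toy_step, start_id=2, end_id=3, k=3, max_len=5)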
{
"id": "1411.4555_all_20",
"text": " We performed an extensive set of experiments to assess the effectiveness of our model using several metrics, data sources, and model architectures, in order to compare to prior art. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_21",
"text": " Although it is sometimes not clear whether a description should be deemed successful or not given an image, prior art has proposed several evaluation metrics. The most reliable (but time consuming) is to ask for raters to give a subjective score on the usefulness of each description given the image. In this paper, we used this to reinforce that some of the automatic metrics indeed correlate with this subjective score, following the guidelines proposed in , which asks the graders to evaluate each generated sentence with a scale from 1 to 4111 The raters are asked whether the image is described without any errors, described with minor errors, with a somewhat related description, or with an unrelated description, with a score of 4 being the best and 1 being the worst.. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_22",
"text": " For this metric, we set up an Amazon Mechanical Turk experiment. Each image was rated by 2 workers. The typical level of agreement between workers is 65%percent6565\\%. In case of disagreement we simply average the scores and record the average as the score. For variance analysis, we perform bootstrapping (re-sampling the results with replacement and computing means/standard deviation over the resampled results). Like we report the fraction of scores which are larger or equal than a set of predefined thresholds. ",
"title": "Show and tell: A neural image caption generator"
},
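The bootstrapping mentioned above (re-sampling the per-image scores with replacement and summarizing the resampled means) fits in a few lines; this is a generic sketch with made-up scores, not the authors' analysis code.

import numpy as np

def bootstrap_mean(scores, n_resamples=1000, seed=0):
    """Mean and standard deviation of resampled means (sampling with replacement)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_resamples)]
    return float(np.mean(means)), float(np.std(means))

# Example: per-image scores after averaging the two workers on disagreement.
mean, std = bootstrap_mean([4.0, 3.5, 2.0, 4.0, 3.0, 1.5, 4.0, 2.5])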
{
"id": "1411.4555_all_23",
"text": " The rest of the metrics can be computed automatically assuming one has access to groundtruth, i.e. human generated descriptions. The most commonly used metric so far in the image description literature has been the BLEU score , which is a form of precision of word n-grams between generated and reference sentences 222In this literature, most previous work report BLEU-1, i.e., they only compute precision at the unigram level, whereas BLEU-n is a geometric average of precision over 1- to n-grams.. Even though this metric has some obvious drawbacks, it has been shown to correlate well with human evaluations. In this work, we corroborate this as well, as we show in Section 4.3. An extensive evaluation protocol, as well as the generated outputs of our system, can be found at \\urlhttp://nic.droppages.com/. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_24",
"text": " Besides BLEU, one can use the perplexity of the model for a given transcription (which is closely related to our objective function in (1)). The perplexity is the geometric mean of the inverse probability for each predicted word. We used this metric to perform choices regarding model selection and hyperparameter tuning in our held-out set, but we do not report it since BLEU is always preferred 333Even though it would be more desirable, optimizing for BLEU score yields a discrete optimization problem. In general, perplexity and BLEU scores are fairly correlated.. A much more detailed discussion regarding metrics can be found in , and research groups working on this topic have been reporting other metrics which are deemed more appropriate for evaluating caption. We report two such metrics - METEOR and Cider - hoping for much more discussion and research to arise regarding the choice of metric. ",
"title": "Show and tell: A neural image caption generator"
},
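Perplexity as defined above, the geometric mean of the inverse probability of each predicted word, is equivalent to the exponential of the average negative log probability; a small self-contained sketch:

import math

def perplexity(word_probs):
    """Geometric mean of 1/p_t(S_t), i.e. exp(-(1/N) * sum(log p_t(S_t)))."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

# A model that assigns probability 0.25 to every word has perplexity 4.
assert abs(perplexity([0.25, 0.25, 0.25, 0.25]) - 4.0) < 1e-9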
{
"id": "1411.4555_all_25",
"text": " Lastly, the current literature on image description has also been using the proxy task of ranking a set of available descriptions with respect to a given image (see for instance ). Doing so has the advantage that one can use known ranking metrics like recall@k. On the other hand, transforming the description generation task into a ranking task is unsatisfactory: as the complexity of images to describe grows, together with its dictionary, the number of possible sentences grows exponentially with the size of the dictionary, and the likelihood that a predefined sentence will fit a new image will go down unless the number of such sentences also grows exponentially, which is not realistic; not to mention the underlying computational complexity of evaluating efficiently such a large corpus of stored sentences for each image. The same argument has been used in speech recognition, where one has to produce the sentence corresponding to a given acoustic sequence; while early attempts concentrated on classification of isolated phonemes or words, state-of-the-art approaches for this task are now generative and can produce sentences from a large dictionary. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_26",
"text": " Now that our models can generate descriptions of reasonable quality, and despite the ambiguities of evaluating an image description (where there could be multiple valid descriptions not in the groundtruth) we believe we should concentrate on evaluation metrics for the generation task rather than for ranking. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_27",
"text": " For evaluation we use a number of datasets which consist of images and sentences in English describing these images. The statistics of the datasets are as follows: Dataset name size train valid. test Pascal VOC 2008 - - 1000 Flickr8k 6000 1000 1000 Flickr30k 28000 1000 1000 MSCOCO 82783 40504 40775 SBU 1M - - With the exception of SBU, each image has been annotated by labelers with 5 sentences that are relatively visual and unbiased. SBU consists of descriptions given by image owners when they uploaded them to Flickr. As such they are not guaranteed to be visual or unbiased and thus this dataset has more noise. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_28",
"text": " The Pascal dataset is customary used for testing only after a system has been trained on different data such as any of the other four dataset. In the case of SBU, we hold out 1000 images for testing and train on the rest as used by . Similarly, we reserve 4K random images from the MSCOCO validation set as test, called COCO-4k, and use it to report results in the following section. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_29",
"text": " Since our model is data driven and trained end-to-end, and given the abundance of datasets, we wanted to answer questions such as “how dataset size affects generalization”, “what kinds of transfer learning it would be able to achieve”, and “how it would deal with weakly labeled examples”. As a result, we performed experiments on five different datasets, explained in Section 4.2, which enabled us to understand our model in depth. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_30",
"text": " Many of the challenges that we faced when training our models had to do with overfitting. Indeed, purely supervised approaches require large amounts of data, but the datasets that are of high quality have less than 100000 images. The task of assigning a description is strictly harder than object classification and data driven approaches have only recently become dominant thanks to datasets as large as ImageNet (with ten times more data than the datasets we described in this paper, with the exception of SBU). As a result, we believe that, even with the results we obtained which are quite good, the advantage of our method versus most current human-engineered approaches will only increase in the next few years as training set sizes will grow. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_31",
"text": " Nonetheless, we explored several techniques to deal with overfitting. The most obvious way to not overfit is to initialize the weights of the CNN component of our system to a pretrained model (e.g., on ImageNet). We did this in all the experiments (similar to ), and it did help quite a lot in terms of generalization. Another set of weights that could be sensibly initialized are Wesubscript𝑊𝑒W_{e}, the word embeddings. We tried initializing them from a large news corpus , but no significant gains were observed, and we decided to just leave them uninitialized for simplicity. Lastly, we did some model level overfitting-avoiding techniques. We tried dropout and ensembling models, as well as exploring the size (i.e., capacity) of the model by trading off number of hidden units versus depth. Dropout and ensembling gave a few BLEU points improvement, and that is what we report throughout the paper. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_32",
"text": " We trained all sets of weights using stochastic gradient descent with fixed learning rate and no momentum. All weights were randomly initialized except for the CNN weights, which we left unchanged because changing them had a negative impact. We used 512 dimensions for the embeddings and the size of the LSTM memory. ",
"title": "Show and tell: A neural image caption generator"
},
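The optimizer described above is plain SGD with a fixed learning rate and no momentum; for concreteness, the update rule looks like the sketch below. The learning-rate value, vocabulary size and parameter names are placeholders, since the passage does not report them.

import numpy as np

def sgd_update(params, grads, lr=0.01):
    """w <- w - lr * grad, with a fixed lr and no momentum term."""
    return {name: w - lr * grads[name] for name, w in params.items()}

# 512-dimensional embeddings and LSTM memory, as stated in the passage;
# the vocabulary size and lr below are arbitrary placeholders.
params = {"W_e": np.zeros((512, 10000)), "W_lstm": np.zeros((512, 512))}
grads = {name: np.ones_like(w) for name, w in params.items()}
params = sgd_update(params, grads, lr=0.01)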
{
"id": "1411.4555_all_33",
"text": " Descriptions were preprocessed with basic tokenization, keeping all words that appeared at least 5 times in the training set. ",
"title": "Show and tell: A neural image caption generator"
},
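A sketch of the vocabulary construction implied above: basic tokenization of the training captions and keeping only words that appear at least 5 times. The exact tokenizer is not specified in the passage, so lower-casing and whitespace splitting here are assumptions.

from collections import Counter

def build_vocab(captions, min_count=5):
    """Keep every word that appears at least `min_count` times in the training captions."""
    counts = Counter(word for caption in captions
                     for word in caption.lower().split())     # basic tokenization
    return {word for word, count in counts.items() if count >= min_count}

captions = ["a dog runs on the beach"] * 5 + ["a cat sits on a mat"]
vocab = build_vocab(captions)       # 'cat', 'sits' and 'mat' appear once and are dropped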
{
"id": "1411.4555_all_34",
"text": " We report our main results on all the relevant datasets in Tables 1 and 2. Since PASCAL does not have a training set, we used the system trained using MSCOCO (arguably the largest and highest quality dataset for this task). The state-of-the-art results for PASCAL and SBU did not use image features based on deep learning, so arguably a big improvement on those scores comes from that change alone. The Flickr datasets have been used recently (11, 21, 14), but mostly evaluated in a retrieval framework. A notable exception is , where they did both retrieval and generation, and which yields the best performance on the Flickr datasets up to now. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_35",
"text": " Human scores in Table 2 were computed by comparing one of the human captions against the other four. We do this for each of the five raters, and average their BLEU scores. Since this gives a slight advantage to our system, given the BLEU score is computed against five reference sentences and not four, we add back to the human scores the average difference of having five references instead of four. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_36",
"text": " Given that the field has seen significant advances in the last years, we do think it is more meaningful to report BLEU-4, which is the standard in machine translation moving forward. Additionally, we report metrics shown to correlate better with human evaluations in Table 1444We used the implementation of these metrics kindly provided in \\urlhttp://www.mscoco.org.. Despite recent efforts on better evaluation metrics , our model fares strongly versus human raters. However, when evaluating our captions using human raters (see Section 4.3.6), our model fares much more poorly, suggesting more work is needed towards better metrics. On the official test set for which labels are only available through the official website, our model had a 27.2 BLEU-4. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_37",
"text": " Since we have trained many models and we have several testing sets, we wanted to study whether we could transfer a model to a different dataset, and how much the mismatch in domain would be compensated with e.g. higher quality labels or more training data. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_38",
"text": " The most obvious case for transfer learning and data size is between Flickr30k and Flickr8k. The two datasets are similarly labeled as they were created by the same group. Indeed, when training on Flickr30k (with about 4 times more training data), the results obtained are 4 BLEU points better. It is clear that in this case, we see gains by adding more training data since the whole process is data-driven and overfitting prone. MSCOCO is even bigger (5 times more training data than Flickr30k), but since the collection process was done differently, there are likely more differences in vocabulary and a larger mismatch. Indeed, all the BLEU scores degrade by 10 points. Nonetheless, the descriptions are still reasonable. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_39",
"text": " Since PASCAL has no official training set and was collected independently of Flickr and MSCOCO, we report transfer learning from MSCOCO (in Table 2). Doing transfer learning from Flickr30k yielded worse results with BLEU-1 at 53 (cf. 59). ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_40",
"text": " Lastly, even though SBU has weak labeling (i.e., the labels were captions and not human generated descriptions), the task is much harder with a much larger and noisier vocabulary. However, much more data is available for training. When running the MSCOCO model on SBU, our performance degrades from 28 down to 16. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_41",
"text": " Having trained a generative model that gives p(S|I)𝑝conditional𝑆𝐼p(S|I), an obvious question is whether the model generates novel captions, and whether the generated captions are both diverse and high quality. Table 3 shows some samples when returning the N-best list from our beam search decoder instead of the best hypothesis. Notice how the samples are diverse and may show different aspects from the same image. The agreement in BLEU score between the top 15 generated sentences is 58, which is similar to that of humans among them. This indicates the amount of diversity our model generates. In bold are the sentences that are not present in the training set. If we take the best candidate, the sentence is present in the training set 80% of the times. This is not too surprising given that the amount of training data is quite small, so it is relatively easy for the model to pick “exemplar” sentences and use them to generate descriptions. If we instead analyze the top 15 generated sentences, about half of the times we see a completely novel description, but still with a similar BLEU score, indicating that they are of enough quality, yet they provide a healthy diversity. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_42",
"text": " While we think ranking is an unsatisfactory way to evaluate description generation from images, many papers report ranking scores, using the set of testing captions as candidates to rank given a test image. The approach that works best on these metrics (MNLM), specifically implemented a ranking-aware loss. Nevertheless, NIC is doing surprisingly well on both ranking tasks (ranking descriptions given images, and ranking images given descriptions), as can be seen in Tables 4 and 5. Note that for the Image Annotation task, we normalized our scores similar to what used. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_43",
"text": " Figure 4 shows the result of the human evaluations of the descriptions provided by NIC, as well as a reference system and groundtruth on various datasets. We can see that NIC is better than the reference system, but clearly worse than the groundtruth, as expected. This shows that BLEU is not a perfect metric, as it does not capture well the difference between NIC and human descriptions assessed by raters. Examples of rated images can be seen in Figure 5. It is interesting to see, for instance in the second image of the first column, how the model was able to notice the frisbee given its size. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_44",
"text": " In order to represent the previous word St−1subscript𝑆𝑡1S_{t-1} as input to the decoding LSTM producing Stsubscript𝑆𝑡S_{t}, we use word embedding vectors , which have the advantage of being independent of the size of the dictionary (contrary to a simpler one-hot-encoding approach). Furthermore, these word embeddings can be jointly trained with the rest of the model. It is remarkable to see how the learned representations have captured some semantic from the statistics of the language. Table 6 shows, for a few example words, the nearest other words found in the learned embedding space. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_45",
"text": " Note how some of the relationships learned by the model will help the vision component. Indeed, having “horse”, “pony”, and “donkey” close to each other will encourage the CNN to extract features that are relevant to horse-looking animals. We hypothesize that, in the extreme case where we see very few examples of a class (e.g., “unicorn”), its proximity to other word embeddings (e.g., “horse”) should provide a lot more information that would be completely lost with more traditional bag-of-words based approaches. ",
"title": "Show and tell: A neural image caption generator"
},
{
"id": "1411.4555_all_46",
"text": " We have presented NIC, an end-to-end neural network system that can automatically view an image and generate a reasonable description in plain English. NIC is based on a convolution neural network that encodes an image into a compact representation, followed by a recurrent neural network that generates a corresponding sentence. The model is trained to maximize the likelihood of the sentence given the image. Experiments on several datasets show the robustness of NIC in terms of qualitative results (the generated sentences are very reasonable) and quantitative evaluations, using either ranking metrics or BLEU, a metric used in machine translation to evaluate the quality of generated sentences. It is clear from these experiments that, as the size of the available datasets for image description increases, so will the performance of approaches like NIC. Furthermore, it will be interesting to see how one can use unsupervised data, both from images alone and text alone, to improve image description approaches. ",
"title": "Show and tell: A neural image caption generator"
}
] |
Are the three stages sequentially conducted in the model?
|
No [18]. It is hard to see that the three stages are conducted sequentially [19].
|
[
18,
19
] |
[
{
"id": "1904.09223_all_0",
"text": " Language representation pre-training Mikolov et al. (2013); Devlin et al. (2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word representation, neural language models are designed to learn word co-occurrence and then obtain word embedding with unsupervised learning. The methods in Word2Vec Mikolov et al. (2013) and Glove Pennington et al. (2014) represent words as vectors, where similar words have similar word representations. These word representations provide an initialization for the word vectors in other deep learning models. Recently, lots of works such as Cove McCann et al. (2017), Elmo Peters et al. (2018), GPT Radford et al. (2018) and BERT Devlin et al. (2018) improved word representation via different strategies, which has been shown to be more effective for down-stream natural language processing tasks. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_1",
"text": " The vast majority of these studies model the representations by predicting the missing word only through the contexts. These works do not consider the prior knowledge in the sentence. For example, In the sentence ” Harry Potter is a series of fantasy novels written by J. K. Rowling”. Harry Potter is a novel name and J. K. Rowling is the writer. It is easy for the model to predict the missing word of the entity Harry Potter by word collocations inside this entity without the help of long contexts. The model cannot predict Harry Potter according to the relationship between Harry Potter and J. K. Rowling. It is intuitive that if the model learns more about prior knowledge, the model can obtain more reliable language representation. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_2",
"text": " In this paper, we propose a model called ERNIE (enhanced representation through knowledge integration) by using knowledge masking strategies. In addition to basic masking strategy, we use two kinds of knowledge strategies: phrase-level strategy and entity-level strategy. We take a phrase or a entity as one unit, which is usually composed of several words. All of the words in the same unit are masked during word representation training, instead of only one word or character being masked. In this way, the prior knowledge of phrases and entities are implicitly learned during the training procedure. Instead of adding the knowledge embedding directly, ERNIE implicitly learned the information about knowledge and longer semantic dependency, such as the relationship between entities, the property of a entity and the type of a event, to guide word embedding learning. This can make the model have better generalization and adaptability. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_3",
"text": " In order to reduce the training cost of the model, ERNIE is pre-trained on heterogeneous Chinese data, and then applied to 5 Chinese NLP tasks. ERNIE advances the state-of-the-art results on all of these tasks. An additional experiment on the cloze test shows that ERNIE has better knowledge inference capacity over other strong baseline methods. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_4",
"text": " Our Contribution are as follows: ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_5",
"text": " (1) We introduce a new learning processing of language model which masking the units such as phrases and entities in order to implicitly learn both syntactic and semantic information from these units. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_6",
"text": " (2) ERNIE significantly outperforms the previous state-of-the art methods on various Chinese natural language processing tasks. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_7",
"text": " (3) We released the codes of ERNIE and pre-trained models, which are available in https://github.com/PaddlePaddle/LARK/tree/develop/ERNIE . ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_8",
"text": " Representation of words as continuous vectors has a long history. A very popular model architecture for estimating neural network language model (NNLM) was proposed in Bengio et al. (2003), where a feed forward neural network with a linear projection layer and a non-linear hidden layer was used to learn the word vector representation. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_9",
"text": " It is effective to learn general language representation by using a large number of unlabeled data to pretrain a language model. Traditional methods focused on context-independent word embedding. Methods such as Word2Vec Mikolov et al. (2013) and Glove Pennington et al. (2014) take a large corpus of text as inputs and produces a word vectors, typically in several hundred dimensions. They generate a single word embedding representation for each word in the vocabulary. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_10",
"text": " However, a word can have completely different senses or meanings in the contexts. Skip-thought Kiros et al. (2015) proposed a approach for unsupervised learning of a generic, distributed sentence encoder. Cove McCann et al. (2017) show that adding these context vectors improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks. ULMFit Howard and Ruder (2018) proposed an effective transfer learning method that can be applied to any task in NLP. ELMo Peters et al. (2018) generalizes traditional word embedding research along a different dimension. They propose to extract context-sensitive features from a language model. The GPT Radford et al. (2018) enhanced the context-sensitive embedding by adapting the Transformer. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_11",
"text": " BERT Devlin et al. (2018) uses two different pretraining tasks for language modeling. BERT randomly masks a certain percentage of words in the sentences and learn to predict those masked words. Moreover, BERT learn to predict whether two sentences are adjacent. This task tries to model the relationship between two sentences which is not captured by traditional language models. Consequently, this particular pretraining scheme helps BERT to outperform state-of-the-art techniques by a large margin on various key NLP datasets such as GLUE Wang et al. (2018) and SQUAD Rajpurkar et al. (2016) and so on. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_12",
"text": " Some other researchers try to add more information based on these models. MT-DNN Liu et al. (2019) combine pre-training learning and multi-task learning to improve the performances over several different tasks in GLUE Wang et al. (2018). GPT-2 Radford et al. (2019) adds task information into the pre-training process and adapt their model to zero-shot tasks. XLM Lample and Conneau (2019) adds language embedding to the pre-training process which achieved better results in cross-lingual tasks. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_13",
"text": " Semantic encoder pre-trained on heterogeneous unsupervised data can improve the transfer learning performance. Universal sentence encoder Cer et al. (2018) adopts heterogeneous training data drawn from Wikipedia, web news, web QA pages and discussion forum. Sentence encoder Yang et al. (2018) based on response prediction benefits from query-response pair data drawn from Reddit conversation. XLM Lample and Conneau (2019) introduce parallel corpus to BERT, which is trained jointly with masked language model task. With transformer model pre-trained on heterogeneous data, XLM shows great performance gain on supervise/unsupervised MT task and classification task. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_14",
"text": " We introduce ERNIE and its detailed implementation in this section. We first describe the model’s transformer encoder,and then introduce the knowledge integration method in Section 3.2. The comparisons between BERT and ERNIE are shown visually in Figure 1. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_15",
"text": " ERNIE use multi-layer Transformer Vaswani et al. (2017) as basic encoder like previous pre-traning model such as GPT, BERT and XLM. The Transformer can capture the contextual information for each token in the sentence via self-attention, and generates a sequence of contextual embeddings. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_16",
"text": " For Chinese corpus, we add spaces around every character in the CJK Unicode range and use the WordPiece Wu et al. (2016) to tokenize Chinese sentences. For a given token, its input representation is constructed by summing the corresponding token, segment and position embeddings. The first token of every sequence is the special classification embedding((CLS)). ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_17",
"text": " we use prior knowledge to enhance our pretrained language model. Instead of adding the knowledge embedding directly, we proposed a multi-stage knowledge masking strategy to integrate phrase and entity level knowledge into the Language representation. The different masking level of a sentence is described in Figure 2. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_18",
"text": " The first learning stage is to use basic level masking, It treat a sentence as a sequence of basic Language unit, for English, the basic language unit is word, and for Chinese, the basic language unit is Chinese Character. In the training process, We randomly mask 15 percents of basic language units, and using other basic units in the sentence as inputs, and train a transformer to predict the mask units. Based on basic level mask, we can obtain a basic word representation. Because it is trained on a random mask of basic semantic units, high level semantic knowledge is hard to be fully modeled. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_19",
"text": " The second stage is to employ phrase-level masking. Phrase is a small group of words or characters together acting as a conceptual unit. For English, we use lexical analysis and chunking tools to get the boundary of phrases in the sentences, and use some language dependent segmentation tools to get the word/phrase information in other language such as Chinese. In phrase-level mask stage, we also use basic language units as training input, unlike random basic units mask, this time we randomly select a few phrases in the sentence, mask and predict all the basic units in the same phrase. At this stage, phrase information is encoded into the word embedding. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_20",
"text": " The third stage is entity-level masking. Name entities contain persons, locations, organizations, products, etc., which can be denoted with a proper name. It can be abstract or have a physical existence. Usually entities contain important information in the sentences. As in the phrase masking stage, we first analyze the named entities in a sentence, and then mask and predict all slots in the entities. After three stage learning,a word representation enhanced by richer semantic information is obtained. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
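To make the three masking levels above concrete, here is a toy sketch of span-level masking: at the basic level every token is its own span, while at the phrase or entity level all tokens inside a selected span are replaced by a mask symbol and become prediction targets together. This is an illustrative simplification with made-up spans and rates, not the released ERNIE code.

import random

def mask_spans(tokens, spans, mask_ratio=0.15, mask_token="[MASK]", seed=1):
    """spans: (start, end) index pairs; a chosen span is masked as a whole unit."""
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for start, end in spans:
        if rng.random() < mask_ratio:
            for i in range(start, end):
                targets[i] = tokens[i]           # the model must predict these positions
                masked[i] = mask_token
    return masked, targets

tokens = ["harry", "potter", "is", "written", "by", "j", "k", "rowling"]
basic_spans = [(i, i + 1) for i in range(len(tokens))]    # character/word-level masking
entity_spans = [(0, 2), (5, 8)]                           # "harry potter", "j k rowling"
masked, targets = mask_spans(tokens, entity_spans, mask_ratio=0.5)
# With this seed, the whole "harry potter" span is masked and must be predicted jointly.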
{
"id": "1904.09223_all_21",
"text": " ERNIE was chosen to have the same model size as BERT-base for comparison purposes. ERNIE uses 12 encoder layers, 768 hidden units and 12 attention heads. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_22",
"text": " ERNIE adopts Heterogeneous corpus for pre-training. Following Cer et al. (2018), we draw the mixed corpus Chinese Wikepedia, Baidu Baike, Baidu news and Baidu Tieba. The number of sentences are 21M, 51M, 47M, 54M. respectively. Baidu Baike contains encyclopedia articles written in formal languages, which is used as a strong basis for language modeling. Baidu news provides the latest information about movie names, actor names, football team names, etc. Baidu Tieba is an open discussion forum like Reddits, where each post can be regarded as a dialogue thread. Tieba corpus is used in our DLM task, which will be discussed in the next section. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_23",
"text": " We perform traditional-to-simplified conversion on the Chinese characters, and upper-to-lower conversion on English letters. We use a shared vocabulary of 17,964 unicode characters for our model. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_24",
"text": " Dialogue data is important for semantic representation, since the corresponding query semantics of the same replies are often similar. ERNIE models the Query-Response dialogue structure on the DLM (Dialogue Language Model) task. As shown in figure 3, our method introduces dialogue embedding to identify the roles in the dialogue, which is different from that of universal sentence encoder Cer et al. (2018). ERNIE’s Dialogue embedding plays the same roles as token type embedding in BERT, except that ERNIE can also represent multi-turn conversations (e.g. QRQ, QRR, QQR, where Q and R stands for ”Query” and ”Response” respectively). Like MLM in BERT, masks are applied to enforce the model to predict missing words conditioned on both query and response. What’s more, we generate fake samples by replacing the query or the response with a randomly selected sentence. The model is designed to judge whether the multi-turn conversation is real or fake. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
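A toy sketch of how DLM-style training pairs could be assembled from the description above: real Query/Response threads are positives, and negatives are produced by replacing either the query or the response with a randomly selected sentence, with the model asked to judge real vs. fake. The data layout, role tags and function names are assumptions for illustration, not Baidu's pipeline.

import random

def make_dlm_examples(threads, corpus_sentences, seed=0):
    """threads: list of (query, response) pairs; returns ((turn, role), ...) with a real/fake label."""
    rng = random.Random(seed)
    examples = []
    for query, response in threads:
        examples.append((((query, "Q"), (response, "R")), 1))         # real thread
        fake = rng.choice(corpus_sentences)
        if rng.random() < 0.5:
            examples.append((((fake, "Q"), (response, "R")), 0))      # replaced query
        else:
            examples.append((((query, "Q"), (fake, "R")), 0))         # replaced response
    return examples

threads = [("how old is this movie", "it came out years ago")]
examples = make_dlm_examples(threads, ["the weather is nice today", "i like this hotel"])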
{
"id": "1904.09223_all_25",
"text": " The DLM task helps ERNIE to learn the implicit relationship in dialogues, which also enhances the model’s ability to learn semantic representation. The model architecture of DLM task is compatible with that of the MLM task, thus it is pre-trained alternatively with the MLM task. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_26",
"text": " ERNIE is applied to 5 Chinese NLP tasks, including natural language inference, semantic similarity, named entity recognition, sentiment analysis, and question answering. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_27",
"text": " The Cross-lingual Natural Language Inference (XNLI) corpus Liu et al. (2019) is a crowd-sourced collection for the MultiNLI corpus. The pairs are annotated with textual entailment and translated into 14 languages including Chinese. The labels contains contradiction, neutral and entailment. We follow the Chinese experiments in BERTDevlin et al. (2018). ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_28",
"text": " The Large-scale Chinese Question Matching Corpus (LCQMC) Liu et al. (2018) aims at identifying whether two sentences have the same intention. Each pair of sentences in the dataset is associated with a binary label indicating whether the two sentences share the same intention, and the task can be formalized as predicting a binary label. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_29",
"text": " The MSRA-NER dataset is designed for named entity recognition, which is published by Microsoft Research Asia. The entities contains several types including person name, place name, organization name and so on. This task can be seen as a sequence labeling task. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_30",
"text": " ChnSentiCorp Song-bo is a dataset which aims at judging the sentiment of a sentence. It includes comments in several domains such as hotels, books and electronic computers. the goal of this task is to judge whether the sentence is positive or negative. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_31",
"text": " The goal of NLPCC-DBQA dataset ( http://tcci.ccf.org.cn/conference/2016/dldoc/evagline2.pdf) is to select answers of the corresponding questions. The evaluation methods on this dataset include MRR Voorhees (2001) and F1 score. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_32",
"text": " The test results on 5 Chinese NLP tasks are presented in Table 1. It can be seen that ERNIE outperforms BERT on all tasks, creating new state-of-the-art results on these Chinese NLP tasks. For the XNLI, MSRA-NER, ChnSentiCorp and nlpcc-dbqa tasks, ERNIE obtains more than 1% absolute accuracy improvement over BERT. The gain of ERNIE is attributed to its knowledge integration strategy. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_33",
"text": " To better understand ERNIE, we perform ablation experiments over every strategy of ERNIE in this section. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_34",
"text": " We sample 10% training data from the whole corpus to verify the effectiveness of the knowledge masking strategy. Results are presented in Table 2. We can see that adding phrase-level mask to the baseline word-level mask can improve the performance of the model. Based on this, we add the entity-level masking strategy,the performance of the model is further improved. In addition. The results also show that with 10 times larger size of the pre-training dataset, 0.8% performance gain is achieved on XNLI test set. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_35",
"text": " Ablation study is also performed on the DLM task. we use 10% of all training corpus with different proportions to illustrate the contributions of DLM task on XNLI develop set. we pre-train ERNIE from scratch on these datasets, and report average result on XNLI task from 5 random restart of fine-tuning. Detail experiment setting and develop set result is presented in Table 3, We can see that 0.7%/1.0% of improvement in develop/test accuracy is achieved on this DLM task. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_36",
"text": " To verify ERNIE’s knowledge learning ability, We use several Cloze test samples Taylor (1953) to examine the model. In the experiment, the name entity is removed from the paragraphs and the model need to infer what it is. Some cases are show in Figure 4. We compared the predictions of BERT and ERNIE. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_37",
"text": " In case 1, BERT try to copy the name appeared in the context while ERNIE remembers the knowledge about relationship mentioned in the article. In cases 2 and Case 5, BERT can successfully learn the patterns according to the contexts, therefore correctly predicting the named entity type but failing to fill in the slot with the correct entity. on the contrary, ERNIE can fill in the slots with the correct entities. In cases 3, 4, 6, BERT fills in the slots with several characters related to sentences, but it is hard to predict the semantic concept. ERNIE predicts correct entities except case 4. Although ERNIE predicts the wrong entity in Case 4, it can correctly predict the semantic type and fills in the slot with one of an Australian city. In summary, these cases show that ERNIE performs better in context-based knowledge reasoning. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_38",
"text": " In this paper, we presents a novel method to integrate knowledge into pre-training language model. Experiments on 5 Chinese language processing tasks show that our method outperforms BERT over all of these tasks. We also confirmed that both the knowledge integration and pre-training on heterogeneous data enable the model to obtain better language representation. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
},
{
"id": "1904.09223_all_39",
"text": " In future we will integrate other types of knowledge into semantic representation models, such as using syntactic parsing or weak supervised signals from other tasks. In addition We will also validate this idea in other languages. ",
"title": "ERNIE: Enhanced Representation through Knowledge Integration"
}
] |
Is it true that large text-to-image models cannot mimic and create novel renditions of images in a reference set?
|
It is true that large text-to-image models cannot mimic and create novel renditions of images in a reference set [1].
|
[
1
] |
[
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requires synthesizing instances of specific subjects (e.g., objects, animals) in new contexts such that they naturally and seamlessly blend into the scene. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_1",
"text": " Recently developed large text-to-image models have shown unprecedented capabilities, by enabling high-quality and diverse synthesis of images based on a text prompt written in natural language (61, 54). One of the main advantages of such models is the strong semantic prior learned from a large collection of image-caption pairs. Such a prior learns, for instance, to bind the word “dog” with various instances of dogs that can appear in different poses and contexts in an image. While the synthesis capabilities of these models are unprecedented, they lack the ability to mimic the appearance of subjects in a given reference set, and synthesize novel renditions of the same subjects in different contexts. The main reason is that the expressiveness of their output domain is limited; even the most detailed textual description of an object may yield instances with different appearances. Furthermore, even models whose text embedding lies in a shared language-vision space cannot accurately reconstruct the appearance of given subjects but only create variations of the image content (Figure 2). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_2",
"text": " In this work, we present a new approach for “personalization” of text-to-image diffusion models (adapting them to user-specific image generation needs). Our goal is to expand the language-vision dictionary of the model such that it binds new words with specific subjects the user wants to generate. Once the new dictionary is embedded in the model, it can use these words to synthesize novel photorealistic images of the subject, contextualized in different scenes, while preserving their key identifying features. The effect is akin to a “magic photo booth”—once a few images of the subject are taken, the booth generates photos of the subject in different conditions and scenes, as guided by simple and intuitive text prompts (Figure 1). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_3",
"text": " More formally, given a few images of a subject (∼similar-to\\sim3-5), our objective is to implant the subject into the output domain of the model such that it can be synthesized with a unique identifier. To that end, we propose a technique to represent a given subject with rare token identifiers and fine-tune a pre-trained, diffusion-based text-to-image framework. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_4",
"text": " We fine-tune the text-to-image model with the input images and text prompts containing a unique identifier followed by the class name of the subject (e.g., “A (V) dog”). The latter enables the model to use its prior knowledge on the subject class while the class-specific instance is bound with the unique identifier. In order to prevent language drift (34, 40) that causes the model to associate the class name (e.g., “dog”) with the specific instance, we propose an autogenous, class-specific prior preservation loss, which leverages the semantic prior on the class that is embedded in the model, and encourages it to generate diverse instances of the same class as our subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_5",
"text": " We apply our approach to a myriad of text-based image generation applications including recontextualization of subjects, modification of their properties, original art renditions, and more, paving the way to a new stream of previously unassailable tasks. We highlight the contribution of each component in our method via ablation studies, and compare with alternative baselines and related work. We also conduct a user study to evaluate subject and prompt fidelity in our synthesized images, compared to alternative approaches. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_6",
"text": " To the best of our knowledge, ours is the first technique that tackles this new challenging problem of subject-driven generation, allowing users, from just a few casually captured images of a subject, synthesize novel renditions of the subject in different contexts while maintaining its distinctive features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_7",
"text": " To evaluate this new task, we also construct a new dataset that contains various subjects captured in different contexts, and propose a new evaluation protocol that measures the subject fidelity and prompt fidelity of the generated results. We make our dataset and evaluation protocol publicly available on the project webpage. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_8",
"text": " Image Composition. Image composition techniques (70, 13, 38) aim to clone a given subject into a new background such that the subject melds into the scene. To consider composition in novel poses, one may apply 3D reconstruction techniques (41, 6, 8, 68, 49) which usually works on rigid objects and require a larger number of views. Some drawbacks include scene integration (lighting, shadows, contact) and the inability to generate novel scenes. In contrast, our approach enable generation of subjects in novel poses and new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_9",
"text": " Text-to-Image Editing and Synthesis. Text-driven image manipulation has recently achieved significant progress using GANs (22, 9, 28, 29, 30) combined with image-text representations such as CLIP , yielding realistic manipulations using text (48, 21, 71, 2, 7, 43). These methods work well on structured scenarios (e.g. human face editing) and can struggle over diverse datasets where subjects are varied. Crowson et al. use VQ-GAN and train over more diverse data to alleviate this concern. Other works (4, 31) exploit the recent diffusion models (25, 63, 65, 25, 64, 58, 45, 66, 60, 62), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . While most works that require only text are limited to global editing (14, 33), Bar-Tal et al. proposed a text-based localized editing technique without using masks, showing impressive results. While most of these editing approaches allow modification of global properties or local editing of a given image, none enables generating novel renditions of a given subject in new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_10",
"text": " There also exists work on text-to-image synthesis (16, 24, 67, 35, 36, 50, 51, 55, 74, 14, 19, 58, 27). Recent large text-to-image models such as Imagen , DALL-E2 , Parti , CogView2 and Stable Diffusion demonstrated unprecedented semantic generation. These models do not provide fine-grained control over a generated image and use text guidance only. Specifically, it is challenging or impossible to preserve the identity of a subject consistently across synthesized images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_11",
"text": " Controllable Generative Models. There are various approaches to control generative models, where some of them might prove to be viable directions for subject-driven prompt-guided image synthesis. Liu et al. propose a diffusion-based technique allowing for image variations guided by reference image or text. To overcome subject modification, several works (44, 3) assume a user-provided mask to restrict the modified area. Inversion (12, 15, 54) can be used to preserve a subject while modifying context. Prompt-to-prompt allows for local and global editing without an input mask. These methods fall short of identity-preserving novel sample generation of a subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_12",
"text": " In the context of GANs, Pivotal Tuning allows for real image editing by finetuning the model with an inverted latent code anchor, and Nitzan et al. extended this work to GAN finetuning on faces to train a personalized prior, which requires around 100 images and are limited to the face domain. Casanova et al. propose an instance conditioned GAN that can generate variations of an instance, although it can struggle with unique subjects and does not preserve all subject details. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_13",
"text": " Finally, the concurrent work of Gal et al. proposes a method to represent visual concepts, like an object or a style, through new tokens in the embedding space of a frozen text-to-image model, resulting in small personalized token embeddings. While this method is limited by the expressiveness of the frozen diffusion model, our fine-tuning approach enables us to embed the subject within the model’s output domain, resulting in the generation of novel images of the subject which preserve its key visual features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_14",
"text": " Given only a few (typically 3-5) casually captured images of a specific subject, without any textual description, our objective is to generate new images of the subject with high detail fidelity and with variations guided by text prompts. Example variations include changing the subject location, changing subject properties such as color or shape, modifying the subject’s pose, viewpoint, and other semantic modifications. We do not impose any restrictions on input image capture settings and the subject image can have varying contexts. We next provide some background on text-to-image diffusion models (Sec. 3.1), then present our fine-tuning technique to bind a unique identifier with a subject described in a few images (Sec. 3.2), and finally propose a class-specific prior-preservation loss that enables us to overcome language drift in our fine-tuned model (Sec. 3.3). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_15",
"text": " Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, we are interested in a pre-trained text-to-image diffusion model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} that, given an initial noise map ϵ∼𝒩(𝟎,𝐈)similar-tobold-italic-ϵ𝒩0𝐈{\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and a conditioning vector 𝐜=Γ(𝐏)𝐜Γ𝐏\\mathbf{c}=\\Gamma(\\mathbf{P}) generated using a text encoder ΓΓ\\Gamma and a text prompt 𝐏𝐏\\mathbf{P}, generates an image 𝐱gen=𝐱^θ(ϵ,𝐜)subscript𝐱gensubscript^𝐱𝜃bold-italic-ϵ𝐜\\mathbf{x}_{\\text{gen}}=\\hat{\\mathbf{x}}_{\\theta}({\\bm{\\epsilon}},\\mathbf{c}). They are trained using a squared error loss to denoise a variably-noised image or latent code 𝐳t≔αt𝐱+σtϵ≔subscript𝐳𝑡subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ\\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows: 𝔼𝐱,𝐜,ϵ,t(wt‖𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱‖22)subscript𝔼𝐱𝐜bold-italic-ϵ𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptnormsubscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right) (1) where 𝐱𝐱\\mathbf{x} is the ground-truth image, 𝐜𝐜\\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), and αt,σt,wtsubscript𝛼𝑡subscript𝜎𝑡subscript𝑤𝑡\\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t∼𝒰((0,1))similar-to𝑡𝒰01t\\sim\\mathcal{U}((0,1)). A more detailed description is given in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_16",
"text": " Our first task is to implant the subject instance into the output domain of the model such that we can query the model for varied novel images of the subject. One natural idea is to fine-tune the model using the few-shot dataset of the subject. Careful care had to be taken when fine-tuning generative models such as GANs in a few-shot scenario as it can cause overfitting and mode-collapse - as well as not capturing the target distribution sufficiently well. There has been research on techniques to avoid these pitfalls (56, 47, 37, 42, 69), although, in contrast to our work, this line of work primarily seeks to generate images that resemble the target distribution but has no requirement of subject preservation. With regards to these pitfalls, we observe the peculiar finding that, given a careful fine-tuning setup using the diffusion loss from Eq 1, large text-to-image diffusion models seem to excel at integrating new information into their domain without forgetting the prior or overfitting to a small set of training images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_17",
"text": " Our goal is to “implant” a new (unique identifier, subject) pair into the diffusion model’s “dictionary” . In order to bypass the overhead of writing detailed image descriptions for a given image set we opt for a simpler approach and label all input images of the subject “a (identifier) (class noun)”, where (identifier) is a unique identifier linked to the subject and (class noun) is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). The class descriptor can be provided by the user or obtained using a classifier. We use a class descriptor in the sentence in order to tether the prior of the class to our unique subject and find that using a wrong class descriptor, or no class descriptor increases training time and language drift while decreasing performance. In essence, we seek to leverage the model’s prior of the specific class and entangle it with the embedding of our subject’s unique identifier so we can leverage the visual prior to generate new poses and articulations of the subject in different contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_18",
"text": " We generally find existing English words (e.g. “unique”, “special”) suboptimal since the model has to learn to disentangle them from their original meaning and to re-entangle them to reference our subject. This motivates the need for an identifier that has a weak prior in both the language model and the diffusion model. A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. We often find that these tokens incur the similar weaknesses as using common English words. Our approach is to find rare tokens in the vocabulary, and then invert these tokens into text space, in order to minimize the probability of the identifier having a strong prior. We perform a rare-token lookup in the vocabulary and obtain a sequence of rare token identifiers f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}), where f𝑓f is a tokenizer; a function that maps character sequences to tokens and 𝐕^^𝐕\\hat{\\mathbf{V}} is the decoded text stemming from the tokens f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}). The sequence can be of variable length k𝑘k, and find that relatively short sequences of k={1,…,3}𝑘1…3k=\\{1,...,3\\} work well. Then, by inverting the vocabulary using the de-tokenizer on f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}) we obtain a sequence of characters that define our unique identifier 𝐕^^𝐕\\hat{\\mathbf{V}}. For Imagen, we find that using uniform random sampling of tokens that correspond to 3 or fewer Unicode characters (without spaces) and using tokens in the T5-XXL tokenizer range of {5000,…,10000}5000…10000\\{5000,...,10000\\} works well. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_19",
"text": " In our experience, the best results for maximum subject fidelity are achieved by fine-tuning all layers of the model. This includes fine-tuning layers that are conditioned on the text embeddings, which gives rise to the problem of language drift. Language drift has been an observed problem in language models (34, 40), where a model that is pre-trained on a large text corpus and later fine-tuned for a specific task progressively loses syntactic and semantic knowledge of the language. To the best of our knowledge, we are the first to find a similar phenomenon affecting diffusion models, where to model slowly forgets how to generate subjects of the same class as the target subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_20",
"text": " Another problem is the possibility of reduced output diversity. Text-to-image diffusion models naturally posses high amounts of output diversity. When fine-tuning on a small set of images we would like to be able to generate the subject in novel viewpoints, poses and articulations. Yet, there is a risk of reducing the amount of variability in the output poses and views of the subject (e.g. snapping to the few-shot views). We observe that this is often the case, especially when the model is trained for too long. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_21",
"text": " To mitigate the two aforementioned issues, we propose an autogenous class-specific prior preservation loss that encourages diversity and counters language drift. In essence, our method is to supervise the model with its own generated samples, in order for it to retain the prior once the few-shot fine-tuning begins. This allows it to generate diverse images of the class prior, as well as retain knowledge about the class prior that it can use in conjunction with knowledge about the subject instance. Specifically, we generate data 𝐱pr=𝐱^(𝐳t1,𝐜pr)subscript𝐱pr^𝐱subscript𝐳subscript𝑡1subscript𝐜pr\\mathbf{x}_{\\text{pr}}=\\hat{\\mathbf{x}}(\\mathbf{z}_{t_{1}},\\mathbf{c}_{\\text{pr}}) by using the ancestral sampler on the frozen pre-trained diffusion model with random initial noise 𝐳t1∼𝒩(𝟎,𝐈)similar-tosubscript𝐳subscript𝑡1𝒩0𝐈\\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and conditioning vector 𝐜pr≔Γ(f(”a (class noun)”))≔subscript𝐜prΓ𝑓”a (class noun)”\\mathbf{c}_{\\text{pr}}\\coloneqq\\Gamma(f(\\text{\"a (class noun)\"})). The loss becomes: 𝔼𝐱,𝐜,ϵ,ϵ′,t(wt∥𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱∥22+λwt′∥𝐱^θ(αt′𝐱pr+σt′ϵ′,𝐜pr)−𝐱pr∥22),subscript𝔼𝐱𝐜bold-italic-ϵsuperscriptbold-italic-ϵ′𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22𝜆subscript𝑤superscript𝑡′subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼superscript𝑡′subscript𝐱prsubscript𝜎superscript𝑡′superscriptbold-italic-ϵ′subscript𝐜prsubscript𝐱pr22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},{\\bm{\\epsilon}}^{\\prime},t}(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}+\\\\ \\lambda w_{t^{\\prime}}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t^{\\prime}}\\mathbf{x}_{\\text{pr}}+\\sigma_{t^{\\prime}}{\\bm{\\epsilon}}^{\\prime},\\mathbf{c}_{\\text{pr}})-\\mathbf{x}_{\\text{pr}}\\|^{2}_{2}), (2) where the second term is the prior-preservation term that supervises the model with its own generated images, and λ𝜆\\lambda controls for the relative weight of this term. Figure 3 illustrates the model fine-tuning with the class-generated samples and prior-preservation loss. Despite being simple, we find this prior-preservation loss is effective in encouraging output diversity and in overcoming language-drift. We also find that we can train the model for more iterations without risking overfitting. We find that ∼similar-to\\sim 1000 iterations with λ=1𝜆1\\lambda=1 and learning rate 10−5superscript10510^{-5} for Imagen and 5×10−65superscript1065\\times 10^{-6} for Stable Diffusion , and with a subject dataset size of 3-5 images is enough to achieve good results. During this process, ∼1000similar-toabsent1000\\sim 1000 “a (class noun)” samples are generated - but less can be used. The training process takes about 5 minutes on one TPUv4 for Imagen, and 5 minutes on a NVIDIA A100 for Stable Diffusion. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_22",
"text": " In this section, we show experiments and applications. Our method enables a large expanse of text-guided semantic modifications of our subject instances, including recontextualization, modification of subject properties such as material and species, art rendition, and viewpoint modification. Importantly, across all of these modifications, we are able to preserve the unique visual features that give the subject its identity and essence. If the task is recontextualization, then the subject features are unmodified, but appearance (e.g., pose) may change. If the task is a stronger semantic modification, such as crossing between our subject and another species/object, then the key features of the subject are preserved after modification. In this section, we reference the subject’s unique identifier using (V). We include specific Imagen and Stable Diffusion implementation details in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_23",
"text": " We collected a dataset of 30 subjects, including unique objects and pets such as backpacks, stuffed animals, dogs, cats, sunglasses, cartoons, etc. We separate each subject into two categories: objects and live subjects/pets. 21 of the 30 subjects are objects, and 9 are live subjects/pets. We provide one sample image for each of the subjects in Figure 5. Images for this dataset were collected by the authors or sourced from Unsplash . We also collected 25 prompts: 20 recontextualization prompts and 5 property modification prompts for objects; 10 recontextualization, 10 accessorization, and 5 property modification prompts for live subjects/pets. The full list of prompts can be found in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_24",
"text": " For the evaluation suite we generate four images per subject and per prompt, totaling 3,000 images. This allows us to robustly measure performances and generalization capabilities of a method. We make our dataset and evaluation protocol publicly available on the project webpage for future use in evaluating subject-driven generation. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_25",
"text": " One important aspect to evaluate is subject fidelity: the preservation of subject details in generated images. For this, we compute two metrics: CLIP-I and DINO . CLIP-I is the average pairwise cosine similarity between CLIP embeddings of generated and real images. Although this metric has been used in other work , it is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g. two different yellow clocks). Our proposed DINO metric is the average pairwise cosine similarity between the ViT-S/16 DINO embeddings of generated and real images. This is our preferred metric, since, by construction and in contrast to supervised networks, DINO is not trained to ignore differences between subjects of the same class. Instead, the self-supervised training objective encourages distinction of unique features of a subject or image. The second important aspect to evaluate is prompt fidelity, measured as the average cosine similarity between prompt and image CLIP embeddings. We denote this as CLIP-T. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_26",
"text": " We compare our results with Textual Inversion, the recent concurrent work of Gal et al. , using the hyperparameters provided in their work. We find that this work is the only comparable work in the literature that is subject-driven, text-guided and generates novel images. We generate images for DreamBooth using Imagen, DreamBooth using Stable Diffusion and Textual Inversion using Stable Diffusion. We compute DINO and CLIP-I subject fidelity metrics and the CLIP-T prompt fidelity metric. In Table 1 we show sizeable gaps in both subject and prompt fidelity metrics for DreamBooth over Textual Inversion. We find that DreamBooth (Imagen) achieves higher scores for both subject and prompt fidelity than DreamBooth (Stable Diffusion), approaching the upper-bound of subject fidelity for real images. We believe that this is due to the larger expressive power and higher output quality of Imagen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_27",
"text": " Further, we compare Textual Inversion (Stable Diffusion) and DreamBooth (Stable Diffusion) by conducting a user study. For subject fidelity, we asked 72 users to answer questionnaires of 25 comparative questions (3 users per questionnaire), totaling 1800 answers. Samples are randomly selected from a large pool. Each question shows the set of real images for a subject, and one generated image of that subject by each method (with a random prompt). Users are asked to answer the question: “Which of the two images best reproduces the identity (e.g. item type and details) of the reference item?”, and we include a “Cannot Determine / Both Equally” option. Similarly for prompt fidelity, we ask “Which of the two images is best described by the reference text?”. We average results using majority voting and present them in Table 2. We find an overwhelming preference for DreamBooth for both subject fidelity and prompt fidelity. This shines a light on results in Table 1, where DINO differences of around 0.10.10.1 and CLIP-T differences of 0.050.050.05 are significant in terms of user preference. Finally, we show qualitative comparisons in Figure 4. We observe that DreamBooth better preserves subject identity, and is more faithful to prompts. We show samples of the user study in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_28",
"text": " We fine-tune Imagen on 15 subjects from our dataset, with and without our proposed prior preservation loss (PPL). The prior preservation loss seeks to combat language drift and preserve the prior. We compute a prior preservation metric (PRES) by computing the average pairwise DINO embeddings between generated images of random subjects of the prior class and real images of our specific subject. The higher this metric, the more similar random subjects of the class are to our specific subject, indicating collapse of the prior. We report results in Table 3 and observe that PPL substantially counteracts language drift and helps retain the ability to generate diverse images of the prior class. Additionally, we compute a diversity metric (DIV) using the average LPIPS cosine similarity between generated images of same subject with same prompt. We observe that our model trained with PPL achieves higher diversity (with slightly diminished subject fidelity), which can also be observed qualitatively in Figure 6, where our model trained with PPL overfits less to the environment of the reference images and can generate the dog in more diverse poses and articulations. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_29",
"text": " We finetune Imagen on a subset of our dataset subjects (5 subjects) with no class noun, a randomly sampled incorrect class noun, and the correct class noun. With the correct class noun for our subject, we are able to faithfully fit to the subject, take advantage of the class prior, allowing us to generate our subject in various contexts. When an incorrect class noun (e.g. “can” for a backpack) is used, we run into contention between our subject and and the class prior - sometimes obtaining cylindrical backpacks, or otherwise misshapen subjects. If we train with no class noun, the model does not leverage the class prior, has difficulty learning the subject and converging, and can generate erroneous samples. Subject fidelity results are shown in Table 4, with substantially higher subject fidelity for our proposed approach. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_30",
"text": " We can generate novel images for a specific subject in different contexts (Figure 7) with descriptive prompts (“a (V) (class noun) (context description)”). Importantly, we are able to generate the subject in new poses and articulations, with previously unseen scene structure and realistic integration of the subject in the scene (e.g. contact, shadows, reflections). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_31",
"text": " Given a prompt “a painting of a (V) (class noun) in the style of (famous painter)” or “a statue of a (V) (class noun) in the style of (famous sculptor)” we are able to generate artistic renditions of our subject. Unlike style transfer, where the source structure is preserved and only the style is transferred, we are able to generate meaningful, novel variations depending on the artistic style, while preserving subject identity. E.g, as shown in Figure 8, “Michelangelo”, we generated a pose that is novel and not seen in the input images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_32",
"text": " We are able to render the subject under novel viewpoints. In Figure 8, we generate new images of the input cat (with consistent complex fur patterns) under new viewpoints. We highlight that the model has not seen this specific cat from behind, below, or above - yet it is able to extrapolate knowledge from the class prior to generate these novel views given only 4 frontal images of the subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_33",
"text": " We are able to modify subject properties. For example, we show crosses between a specific Chow Chow dog and different animal species in the bottom row of Figure 8. We prompt the model with sentences of the following structure: “a cross of a (V) dog and a (target species)”. In particular, we can see in this example that the identity of the dog is well preserved even when the species changes - the face of the dog has certain unique features that are well preserved and melded with the target species. Other property modifications are possible, such as material modification (e.g. “a transparent (V) teapot” in Figure 7). Some are harder than others and depend on the prior of the base generation model. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_34",
"text": " We illustrate some failure models of our method in Figure 9. The first is related to not being able to accurately generate the prompted context. Possible reasons are a weak prior for these contexts, or difficulty in generating both the subject and specified concept together due to low probability of co-occurrence in the training set. The second is context-appearance entanglement, where the appearance of the subject changes due to the prompted context, exemplified in Figure 9 with color changes of the backpack. Third, we also observe overfitting to the real images that happen when the prompt is similar to the original setting in which the subject was seen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_35",
"text": " Other limitations are that some subjects are easier to learn than others (e.g. dogs and cats). Occasionally, with subjects that are rarer, the model is unable to support as many subject variations. Finally, there is also variability in the fidelity of the subject and some generated images might contain hallucinated subject features, depending on the strength of the model prior, and the complexity of the semantic modification. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_36",
"text": " We presented an approach for synthesizing novel renditions of a subject using a few images of the subject and the guidance of a text prompt. Our key idea is to embed a given subject instance in the output domain of a text-to-image diffusion model by binding the subject to a unique identifier. Remarkably - this fine-tuning process can work given only 3-5 subject images, making the technique particularly accessible. We demonstrated a variety of applications with animals and objects in generated photorealistic scenes, in most cases indistinguishable from real images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_37",
"text": " We thank Rinon Gal, Adi Zicher, Ron Mokady, Bill Freeman, Dilip Krishnan, Huiwen Chang and Daniel Cohen-Or for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained Imagen models. Finally, a special thanks to David Salesin for his feedback, advice and for his support for the project. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
}
] |
How do the specific contributions of this work make the construction of deep generative models, like VAEs, for language modeling more practical?
|
The main thesis of this work is around the idea that large VAE models for language tasks can work effectively, and the authors attempt to provide initial evidence for this by implementing a large model which they named OPTIMUS [2]. The first major contribution the authors make is in showing how the KL vanishing issue is addressed in the pretraining phase [4]. Next, the authors explain how conditioning vectors can be injected into GPT without the need for retraining, which brings down the cost and barrier to entry to develop models such as these [44]. Finally, the authors also discuss how to combine multiple pretrained language models (PLMs) such as BERT and GPT, which have very different input formats (i.e. [7].
|
[
2,
4,
44,
7
] |
[
{
"id": "2004.04092_all_0",
"text": " Pre-trained language models (PLMs) have substantially advanced the state-of-the-art across a variety of natural language processing (NLP) tasks Peters et al. (2018); Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019); Shoeybi et al. (2019). PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to adapt to various downstream tasks. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_1",
"text": " PLMs can generally play two different roles: (i)i(\\textup{\\it i}) a generic encoder such as BERT Devlin et al. (2019) to provide contextualized representations for language understanding tasks, and (ii)ii(\\textup{\\it ii}) a powerful decoder such as GPT-2 Radford et al. (2019) to generate text sequences in an auto-regressive manner. In a bid to combine language understanding and generation tasks in one unified framework, several model variants have been proposed, including UniLM Dong et al. (2019), BART Lewis et al. (2019), and T5 Raffel et al. (2019). Although significant performance improvement has been reported on a wide range of NLP tasks, these models lack of explicit modeling of structures in a compact latent space, rendering it difficult to control language generation/representation from an abstract level. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_2",
"text": " Variational Autoencoders (VAEs) Kingma and Welling (2013); Rezende et al. (2014) provide a tractable method to train latent-variable generative models. In NLP, latent variables may assume the role of higher-level sentence representations, which govern a lower-level word-by-word generation process, thus facilitating controlled text generation Bowman et al. (2016); Hu et al. (2017). By representing sentences in a low-dimensional latent space, VAEs allow easy manipulation of sentences using the corresponding compact vector representations, such as feature regularization specified by prior distributions, and guided sentence generation with interpretable vector operators. Despite the attractive theoretical strengths, the current language VAEs are often built with shallow network architectures, such as two-layer LSTMs Hochreiter and Schmidhuber (1997). This limits the model’s capacity and leads to sub-optimal performance. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_3",
"text": " In this paper, we propose Optimus, the first large-scale pre-trained deep latent variable models for natural language. Optimus is pre-trained using the sentence-level (variational) auto-encoder objectives on large text corpus. This leads to a universal latent space to organize sentences (hence named Optimus). Optimus enjoys several favorable properties: (i)i(\\textup{\\it i}) It combines the strengths of VAE, BERT and GPT, and supports both natural language understanding and generation tasks. (ii)ii(\\textup{\\it ii}) Comparing to BERT, Optimus learns a more structured semantic space due to the use of the prior distribution in training. As a result, the language representations learned by Optimus are more universal / general in that they can be more easily adapted to a new domain/task. (iii)iii(\\textup{\\it iii}) Different from GPT-2, which generates human-like text but may lack effective means of controlling its high-level semantics (such as tense, topics, sentiment), Optimus can be easily deployed for guided text generation. The effectiveness of Optimus has been demonstrated with extensive experiments on language modeling, dialog response generation, text style transfer and low-resource language understanding. It achieves lower perplexity than GPT-2 on standard benchmarks, produces strong performance on guided text generation, and improves BERT on feature-based language understanding tasks. The code and pre-trained models are released on Github222https://github.com/ChunyuanLI/Optimus. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_4",
"text": " Along the way to build the first big VAE language model, there are several technical contributions/implications that are novel: (i)i(\\textup{\\it i}) Latent vector injection: this work demonstrates two schemes to discuss how to effectively inject conditioning vectors into GPT-2 without re-training it. (ii)ii(\\textup{\\it ii}) The design idea to combine BERT/GPT-2 serves as a practical recipe to inspire people to integrate and reuse existing PLMs for larger and complex models. (iii)iii(\\textup{\\it iii}) Pre-training on massive datasets itself is an effective approach to reduce KL vanishing, as demonstrated by the state of-the-art performance on four VAE language modeling datasets. (iv)iv(\\textup{\\it iv}) The proof of VAE objective from the lens of IB, showing that VAE is a principled approach to balance the compactness and usability of learned representations. (v)v(\\textup{\\it v}) Improved performance on several language tasks shows the importance and necessity of pre-training a latent space. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_5",
"text": " Large-scale Transformer-based PLMs have recently achieved state-of-the-art performance on various natural language understanding and generation tasks Devlin et al. (2019); Yang et al. (2019); Radford et al. (2019); Liu et al. (2019); Keskar et al. (2019). Prior to Transformer-based PLMs, non-generative methods have seen some early success in pre-training sequence models for supervised downstream tasks including standard sequence auto-encoders Dai and Le (2015); Li et al. (2015), skip-thought models Kiros et al. (2015) and paragraph vector models Le and Mikolov (2014) etc. However, all of these models do not generally learn a smooth, interpretable feature space for sentence encoding, or generating novel sentences. In this work, we aim to fill the gap to learn such a universal latent space in the field of Transformer-based PLMs. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_6",
"text": " Language VAEs have inspired new applications in NLP, via exploiting many interesting properties of the model’s latent space Bowman et al. (2016); Kim et al. (2018b). Its modeling capacity and empirical performance is somewhat limited, partially due to the KL vanishing issue described in Section 4.3. Several attempts have been made to alleviate this issue, including different KL annealing/thresholding schemes Bowman et al. (2016); Fu et al. (2019); Higgins et al. (2017); Li et al. (2019), decoder architectures Yang et al. (2017); Dieng et al. (2018), auxiliary loss Zhao et al. (2017), semi-amortized inference Kim et al. (2018a), aggressive encoder training schedule He et al. (2019), batch normalized inference Zhu et al. (2020) and flexible posterior Fang et al. (2019). Subramanian et al. (2018) have shown some promise that general encoder can benefit language generation. Transformers Vaswani et al. (2017) are recently considered in VAEs for classification Gururangan et al. (2019) and storytelling Wang and Wan (2019). Pre-training VAEs has been recently considered in conditional text generation to amortize the training of decoders and to allow easy adaptation in new generation tasks Duan et al. (2019). ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_7",
"text": " All these efforts utilize simple LSTM Hochreiter and Schmidhuber (1997) and shallow Transformer Vaswani et al. (2017) architectures, thus with limited capacity. Our paper is the first big VAE model at the same scale of recent PLMs such as BERT and GPT-2. More importantly, we show that pre-training a meaningful latent space on a large text corpus can largely reduce the KL vanishing issue, and lead to new state-of-the-art performance. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_8",
"text": " To generate a text sequence of length T𝑇T, 𝒙=(x1,⋯,xT)𝒙subscript𝑥1⋯subscript𝑥𝑇\\boldsymbol{x}=(x_{1},\\cdots,x_{T}), neural language models (NLM) Mikolov et al. (2010) generate every token xtsubscript𝑥𝑡x_{t} conditioned on the previous word tokens: p(𝒙)=∏t=1Tp𝜽(xt|x<t),𝑝𝒙superscriptsubscriptproduct𝑡1𝑇subscript𝑝𝜽conditionalsubscript𝑥𝑡subscript𝑥absent𝑡\\displaystyle\\vspace{-2mm}p(\\boldsymbol{x})=\\prod_{t=1}^{T}p_{\\boldsymbol{\\theta}}(x_{t}|x_{<t}),\\vspace{-2mm} (1) where x<tsubscript𝑥absent𝑡x_{<t} indicates all tokens before t𝑡t, and 𝜽𝜽\\boldsymbol{\\theta} is the model parameter. In NLMs, each one-step-ahead conditional in (1) is modeled by an expressive family of neural networks, and is typically trained via maximum likelihood estimate (MLE). Perhaps the most well-known NLM instance is GPT-2 Radford et al. (2019), which employs Transformers Vaswani et al. (2017) for each conditional, and 𝜽𝜽\\boldsymbol{\\theta} is learned on a huge amount of OpenWeb text corpus. GPT-2 has shown surprisingly realistic text generation results, and low perplexity on several benchmarks. GPT-3 Brown et al. (2020) was recently proposed to further scale up NLMs to 175 billion parameters, showing impressive results on few-shot learning on multiple language tasks. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_9",
"text": " However, the only source of variation in NLMs, GPT2 and GPT3 is modeled in the conditionals at every step: the text generation process only depends on previous word tokens, and there is limited capacity for the generation to be guided by the higher-level structures that are likely presented in natural language, such as tense, topics or sentiment. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_10",
"text": " To facilitate high-level guidance in sentence generation, Optimus organizes sentences in a universal latent (or semantic) space, via pre-training on large text corpora. Each sample in this space can be interpreted as outlines of the corresponding sentences, guiding the language generation process performed in the symbolic space Subramanian et al. (2018). This naturally fits within the learning paradigm of latent variable models such as VAEs Kingma and Welling (2013); Bowman et al. (2016), where the latent representations capture the high-level semantics/patterns. It consists of two parts, generation and inference, enabling a bidirectional mapping between the latent space and symbolic space. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_11",
"text": " The generative model (decoder) draws a latent vector 𝒛𝒛\\boldsymbol{z} from the continuous latent space with prior p(𝒛)𝑝𝒛p(\\boldsymbol{z}), and generates the text sequence 𝒙𝒙\\boldsymbol{x} from a conditional distribution p𝜽(𝒙|𝒛)subscript𝑝𝜽conditional𝒙𝒛p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z}); p(𝒛)𝑝𝒛p(\\boldsymbol{z}) is typically assumed a multivariate Gaussian, and 𝜽𝜽\\boldsymbol{\\theta} represents the neural network parameters. The following auto-regressive decoding process is usually used: p𝜽(𝒙|𝒛)=∏t=1Tp𝜽(xt|x<t,𝒛).subscript𝑝𝜽conditional𝒙𝒛superscriptsubscriptproduct𝑡1𝑇subscript𝑝𝜽conditionalsubscript𝑥𝑡subscript𝑥absent𝑡𝒛\\displaystyle\\vspace{-2mm}p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z})=\\prod_{t=1}^{T}p_{\\boldsymbol{\\theta}}(x_{t}|x_{<t},\\boldsymbol{z}).\\vspace{-4mm} (2) Intuitively, VAE provides a “hierachical” generation procedure: 𝒛∼p(𝒛)similar-to𝒛𝑝𝒛\\boldsymbol{z}\\sim p(\\boldsymbol{z}) determines the high-level semantics, followed by (2) to produce the output sentences with low-level syntactic and lexical details. This contrasts with (1) in the explicit dependency on 𝒛𝒛\\boldsymbol{z}. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_12",
"text": " Similar to GPT-2, parameters 𝜽𝜽\\boldsymbol{\\theta} are typically learned by maximizing the marginal log likelihood logp𝜽(𝒙)=log∫p(𝒛)p𝜽(𝒙|𝒛)d𝒛subscript𝑝𝜽𝒙𝑝𝒛subscript𝑝𝜽conditional𝒙𝒛d𝒛\\log p_{\\boldsymbol{\\theta}}(\\boldsymbol{x})=\\log\\int p(\\boldsymbol{z})p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z})\\mbox{d}\\boldsymbol{z}. However, this marginal term is intractable to compute for many decoder choices. Thus, variational inference is considered, and the true posterior p𝜽(𝒛|𝒙)∝p𝜽(𝒙|𝒛)p(𝒛)proportional-tosubscript𝑝𝜽conditional𝒛𝒙subscript𝑝𝜽conditional𝒙𝒛𝑝𝒛p_{\\boldsymbol{\\theta}}(\\boldsymbol{z}|\\boldsymbol{x})\\propto p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z})p(\\boldsymbol{z}) is approximated via the variational distribution qϕ(𝒛|𝒙)subscript𝑞bold-italic-ϕconditional𝒛𝒙q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x}) is (often known as the inference model or encoder), implemented via a ϕbold-italic-ϕ\\boldsymbol{\\phi}-parameterized neural network. It yields the evidence lower bound objective (ELBO): logp𝜽(𝒙)≥ℒELBO=subscript𝑝𝜽𝒙subscriptℒELBOabsent\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\boldsymbol{x})\\geq\\mathcal{L}_{\\text{ELBO}}= (3) 𝔼qϕ(𝒛|𝒙)(logp𝜽(𝒙|𝒛))−KL(qϕ(𝒛|𝒙)||p(𝒛))\\displaystyle\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x})}\\big{(}\\log p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z})\\big{)}-\\mbox{KL}(q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x})||p(\\boldsymbol{z})) ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_13",
"text": " Typically, qϕ(𝒛|𝒙)subscript𝑞bold-italic-ϕconditional𝒛𝒙q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x}) is modeled as a Gaussian distribution, and the re-parametrization trick is used for efficient learning Kingma and Welling (2013). ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_14",
"text": " There is an alternative interpretation of the ELBO: the VAE objective can be viewed as a regularized version of the autoencoder (AE) Goodfellow et al. (2016). It is thus natural to extend the negative of ℒELBOsubscriptℒELBO\\mathcal{L}_{\\text{ELBO}} in (3) by introducing a hyper-parameter β𝛽\\beta to control the strength of regularization: ℒβsubscriptℒ𝛽\\displaystyle\\mathcal{L}_{\\beta} =ℒE+βℒR,withabsentsubscriptℒ𝐸𝛽subscriptℒ𝑅with\\displaystyle=\\mathcal{L}_{E}+\\beta\\mathcal{L}_{R},~{}~{}\\text{with} (4) ℒEsubscriptℒ𝐸\\displaystyle\\mathcal{L}_{E} =−𝔼qϕ(𝒛|𝒙)(logp𝜽(𝒙|𝒛))absentsubscript𝔼subscript𝑞bold-italic-ϕconditional𝒛𝒙delimited-()subscript𝑝𝜽conditional𝒙𝒛\\displaystyle=-\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x})}\\big{(}\\log p_{\\boldsymbol{\\theta}}(\\boldsymbol{x}|\\boldsymbol{z})\\big{)} (5) ℒRsubscriptℒ𝑅\\displaystyle\\mathcal{L}_{R} =KL(qϕ(𝒛|𝒙)||p(𝒛))\\displaystyle=\\mbox{KL}(q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x})||p(\\boldsymbol{z})) (6) where ℒEsubscriptℒ𝐸\\mathcal{L}_{E} is the reconstruction error (or negative log-likelihood (NLL)), and ℒRsubscriptℒ𝑅\\mathcal{L}_{R} is a KL regularizer. The cost function ℒβsubscriptℒ𝛽\\mathcal{L}_{\\beta} provides a unified perspective for understanding various autoencoder variants and training methods. We consider two types of latent space with the following objectives: ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_15",
"text": " • AE. Only ℒEsubscriptℒ𝐸\\mathcal{L}_{E} is considered (β=0𝛽0\\beta=0), while the Gaussian sampling in qϕ(𝒛|𝒙)subscript𝑞bold-italic-ϕconditional𝒛𝒙q_{\\boldsymbol{\\phi}}(\\boldsymbol{z}|\\boldsymbol{x}) remains. In other words, the regularization is removed, and a point-estimate is likely to be learned to represent the text sequence’s latent feature. Note our reconstruction is on sentence-level, while other PLMs Devlin et al. (2019); Yang et al. (2019) employ masked LM loss, performing token-level reconstruction. • VAE. The full VAE objective is considered (β>0𝛽0\\beta>0). It tends to learn a smooth latent space due to ℒRsubscriptℒ𝑅\\mathcal{L}_{R}. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_16",
"text": " From an information theory perspective, information bottleneck (IB) provides a principled approach to find the trade-off between predictive power and complexity (compactness) when summarizing observed data in learned representations. We show that our Optimus pre-training objectives effectively practice the IB principle as follows. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_17",
"text": " The objective in (4) shows the β𝛽\\beta-VAE loss for one single sentence 𝒙𝒙\\boldsymbol{x}. The training objective over the dataset q(𝒙)𝑞𝒙q(\\boldsymbol{x}) can be written as: ℱβ=−ℱE+βℱRsubscriptℱ𝛽subscriptℱ𝐸𝛽subscriptℱ𝑅\\displaystyle\\mathcal{F}_{\\beta}=-\\mathcal{F}_{E}+\\beta\\mathcal{F}_{R}\\vspace{-2mm} (7) where ℱE=Eq(𝒙),𝒛∼q(𝒛|𝒙)(logp(𝒙~|𝒛))subscriptℱ𝐸subscript𝐸similar-to𝑞𝒙𝒛𝑞conditional𝒛𝒙delimited-()𝑝conditional~𝒙𝒛\\mathcal{F}_{E}=E_{q(\\boldsymbol{x}),\\boldsymbol{z}\\sim q(\\boldsymbol{z}|\\boldsymbol{x})}(\\log p(\\tilde{\\boldsymbol{x}}|\\boldsymbol{z})) is the aggregated reconstruction term (𝒙~~𝒙\\tilde{\\boldsymbol{x}} is the reconstruction target), and ℱR=𝔼q(𝒙)(KL(q(𝒛|𝒙)||p(𝒛)))\\mathcal{F}_{R}=\\mathbb{E}_{q(\\boldsymbol{x})}(\\mbox{KL}(q(\\boldsymbol{z}|\\boldsymbol{x})||p(\\boldsymbol{z}))) is the aggregated KL term. With the detailed proof shown in Section A of Appendix, we see that ℱβsubscriptℱ𝛽\\mathcal{F}_{\\beta} is an upper bound of IB: ℱβ≥−Iq(𝒛,𝒙~)+βIq(𝒛,𝒙)=ℒIB,subscriptℱ𝛽subscript𝐼𝑞𝒛~𝒙𝛽subscript𝐼𝑞𝒛𝒙subscriptℒIB\\displaystyle\\mathcal{F}_{\\beta}\\geq-I_{q}(\\boldsymbol{z},\\tilde{\\boldsymbol{x}})+\\beta I_{q}(\\boldsymbol{z},\\boldsymbol{x})=\\mathcal{L}_{\\text{IB}}, (8) where ℒIBsubscriptℒIB\\mathcal{L}_{\\text{IB}} is the Lagrange relaxation form of IB presented by Tishby et al. (2000), Iq(⋅,⋅)subscript𝐼𝑞⋅⋅I_{q}(\\cdot,\\cdot) is the mutual information (MI) measured by probability q𝑞q. The goal of IB is to maximize the predictive power of 𝒛𝒛\\boldsymbol{z} on target 𝒙~~𝒙\\tilde{\\boldsymbol{x}}, subject to the constraint on the amount of information about original 𝒙𝒙\\boldsymbol{x} that 𝒛𝒛\\boldsymbol{z} carries. When β=0𝛽0\\beta=0, we have the AE variant of our Optimus, the model fully focuses on maximizing the MI to recover sentences from the latent space. As β𝛽\\beta increases, the model gradually transits towards fitting the aggregated latent distribution q(𝒛)=∫𝒙q(𝒛|𝒙)q(𝒙)𝑑𝒙𝑞𝒛subscript𝒙𝑞conditional𝒛𝒙𝑞𝒙differential-d𝒙q(\\boldsymbol{z})=\\int_{\\boldsymbol{x}}q(\\boldsymbol{z}|\\boldsymbol{x})q(\\boldsymbol{x})d\\boldsymbol{x} to the given prior p(𝒛)𝑝𝒛p(\\boldsymbol{z}), leading the VAE variant of our Optimus. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_18",
"text": " The model architecture of Optimus is composed of multi-layer Transformer-based encoder and decoder, based on the original implementation described in Vaswani et al. (2017). The overall architecture is illustrated in Figure 1. To leverage the expressiveness power of existing PLMs, we initialize our encoder and decoder with weights of BERT ϕBERTsubscriptbold-italic-ϕBERT\\boldsymbol{\\phi}_{\\text{BERT}} and GPT-2 𝜽GPT-2subscript𝜽GPT-2\\boldsymbol{\\theta}_{\\text{GPT-2}}, respectively. This procedure is seamless, as all of these models are trained in a self-supervised/unsupervised manner. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_19",
"text": " We denote the number of layers (i.e., Transformer blocks) as L𝐿L, the hidden size as H𝐻H, and the number of self-attention heads as A𝐴A. Specifically, we consider BERTBASEBASE{}_{\\text{BASE}} (L=12, H=768, A=12, Total Parameters=110M) and GPT-2 (L=12, H=768, A=12, Total Parameters=117M). We hope that our approach can provide a practical recipe to inspire future work to integrate larger pre-trained encoder and decoder for higher performance models. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_20",
"text": " Two technical questions remain, when pre-training Optimus from BERT & GPT-2: (i)i(\\textup{\\it i}) How to represent sentences, since the two PLMs employ different tokenization schemes? (ii)ii(\\textup{\\it ii}) How to adapt a pre-trained GPT-2 to arbitrary conditional input without re-training the model again? Controllable GPT-2 models have been studied in Keskar et al. (2019); Zellers et al. (2019); Peng et al. (2020a, b) when prescribed control codes/tokens are provided, but it is still unknown how to ground GPT-2 to arbitrary conditional inputs. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_21",
"text": " In BERT, WordPiece Embeddings (WPE) is used for tokenization (vocabulary size is 28996 for the cased version). In GPT-2, the modified Byte Pair Encoding (BPE) Radford et al. (2019) is used for tokenization (vocabulary size is 50260). A given token is represented as 𝒉Embsubscript𝒉Emb{\\boldsymbol{h}}_{\\texttt{Emb}}, by summing the corresponding token, position and segment embeddings 333Optimus does not require segment embeddings, but we remain it due to BERT initialization.. For a sentence, we present it in both types of tokenization: the input of encoder is WPE, and the output of decoder is BPE to compute the reconstruction loss. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_22",
"text": " Similar to BERT, the first token of every sentence is always a special classification token ((CLS)). The last-layer hidden state 𝒉(CLS)∈ℝHsubscript𝒉(CLS)superscriptℝ𝐻{\\boldsymbol{h}}_{\\texttt{(CLS)}}\\in\\mathbb{R}^{H} corresponding to this token is used as the sentence-level representation. It further constructs the latent representation 𝒛=𝐖E𝒉(CLS)𝒛subscript𝐖Esubscript𝒉(CLS)\\boldsymbol{z}={{\\bf W}}_{\\text{E}}{\\boldsymbol{h}}_{\\texttt{(CLS)}}, where 𝒛∈ℝP𝒛superscriptℝ𝑃\\boldsymbol{z}\\in\\mathbb{R}^{P} is a P𝑃P-dimensional vector and 𝐖E∈ℝP×Hsubscript𝐖Esuperscriptℝ𝑃𝐻{{\\bf W}}_{\\text{E}}\\in\\mathbb{R}^{P\\times H} is the weight matrix. To facilitate 𝒛𝒛\\boldsymbol{z} in GPT-2 decoding without re-training the weights, we consider two schemes, illustrated in Figure 2: • Memory: 𝒛𝒛\\boldsymbol{z} plays the role of an additional memory vector 𝒉Memsubscript𝒉Mem{\\boldsymbol{h}}_{\\texttt{Mem}} for GPT2 to attend. Specifically, 𝒉Mem=𝐖M𝒛subscript𝒉Memsubscript𝐖M𝒛{\\boldsymbol{h}}_{\\texttt{Mem}}={{\\bf W}}_{\\text{M}}\\boldsymbol{z}, where 𝐖M∈ℝLH×Psubscript𝐖Msuperscriptℝ𝐿𝐻𝑃{{\\bf W}}_{\\text{M}}\\in\\mathbb{R}^{LH\\times P} is the weight matrix. 𝒉Mem∈ℝLHsubscript𝒉Memsuperscriptℝ𝐿𝐻{\\boldsymbol{h}}_{\\texttt{Mem}}\\in\\mathbb{R}^{LH} is separated into L𝐿L vectors of length H𝐻H, each of which is attended by GPT-2 in one layer. • Embedding: 𝒛𝒛\\boldsymbol{z} is added on the original embedding layer, and directly used in every decoding step. The new embedding representation is 𝒉Emb′=𝒉Emb+𝐖D𝒛superscriptsubscript𝒉Emb′subscript𝒉Embsubscript𝐖D𝒛{\\boldsymbol{h}}_{\\texttt{Emb}}^{\\prime}={\\boldsymbol{h}}_{\\texttt{Emb}}+{{\\bf W}}_{\\text{D}}\\boldsymbol{z}, where 𝐖D∈ℝH×Psubscript𝐖Dsuperscriptℝ𝐻𝑃{{\\bf W}}_{\\text{D}}\\in\\mathbb{R}^{H\\times P}. We study their empirical performance in Section B.1 of Appendix, and observe that Memory is significantly more effective than Embedding, and the integration of both schemes yields slightly better results. We hypothesize that the reason why Memory is superior is because it allows the decoder to attend the latent information at every layer of the network directly, while the Embedding method only allows the decoder to see the latent information at the input and output layer. In our experiments, we use the integration scheme by default. In summary, the encoder parameters ϕ={ϕBERT,𝐖E}bold-italic-ϕsubscriptbold-italic-ϕBERTsubscript𝐖E\\boldsymbol{\\phi}=\\{\\boldsymbol{\\phi}_{\\text{BERT}},{{\\bf W}}_{\\text{E}}\\}, and decoder parameters 𝜽={𝜽GPT-2,𝐖M,𝐖D}𝜽subscript𝜽GPT-2subscript𝐖Msubscript𝐖D\\boldsymbol{\\theta}=\\{\\boldsymbol{\\theta}_{\\text{GPT-2}},{{\\bf W}}_{\\text{M}},{{\\bf W}}_{\\text{D}}\\}. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_23",
"text": " We train the model parameters {ϕ,𝜽}bold-italic-ϕ𝜽\\{\\boldsymbol{\\phi},\\boldsymbol{\\theta}\\} using two objectives: AE and VAE, discussed in Section 4.1. Pre-training AE using (5) is straightforward. However, pre-training VAE can be challenging due to the notorious KL vanishing issue Bowman et al. (2016), where (i)i(\\textup{\\it i}) an encoder that produces posteriors almost identical to the Gaussian prior for all sentences (rather than a more interesting posterior); and (ii)ii(\\textup{\\it ii}) a decoder that completely ignores 𝒛𝒛\\boldsymbol{z} in (2), and a learned model that reduces to a simpler NLM. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_24",
"text": " To reduce this issue, we follow the intuition that if the encoder is providing useful information from the beginning of decoder training, the decoder is more likely to make use of 𝒛𝒛\\boldsymbol{z} Fu et al. (2019); He et al. (2019). Specifically, we use the cyclical schedule to anneal β𝛽\\beta for 10 periods Fu et al. (2019). Within one period, there are three consecutive stages: Training AE (β=0𝛽0\\beta=0) for 0.5 proportion, annealing β𝛽\\beta from 0 to 1 for 0.25 proportion, and fixing β=1𝛽1\\beta=1 for 0.25 proportion. When β>0𝛽0\\beta>0, we use the KL thresholding scheme Li et al. (2019); Kingma et al. (2016), and replace the KL term ℒRsubscriptℒ𝑅\\mathcal{L}_{R} in (6) with a hinge loss term that maxes each component of the original KL with a constant λ𝜆\\lambda: ℒR′=∑imax(λ,KL(qϕ(zi|𝒙)||p(zi)))\\displaystyle\\mathcal{L}_{R}^{\\prime}=\\sum_{i}\\max(\\lambda,\\mbox{KL}(q_{\\boldsymbol{\\phi}}(z_{i}|\\boldsymbol{x})||p(z_{i}))) (9) Here, zisubscript𝑧𝑖z_{i} denotes the i𝑖ith dimension of 𝒛𝒛\\boldsymbol{z}. Using the thresholding objective causes learning to give up driving down KL for dimensions of 𝒛𝒛\\boldsymbol{z} that are already beneath the target compression rate. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_25",
"text": " The pre-training procedure largely follows the existing literature on language model pre-training. We use English Wikipedia to pre-train our AE and VAE objectives. As our main interest is to model sentences (rather than text sequences of a fixed length), we pre-process Wikipedia with maximum sentences length 64. It leads to 1990K sentences, which accounts 96.45% Wikipedia sentences used in BERT. More data pre-processing details are in Section B.2 of Appendix. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_26",
"text": " We consider to apply the pre-trained Optimus models to three types of downstream tasks: (i)i(\\textup{\\it i}) language modeling, where Optimus is compared with SoTA VAE methods and GPT-2. (ii)ii(\\textup{\\it ii}) Guided language generation, where Optimus shows its unique advantage in producing controllable sentences in contrast to GPT-2. (iii)iii(\\textup{\\it iii}) Low-resource language understanding, where the learned structured latent features can be used for fast adaptation in new tasks. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_27",
"text": " Fine-tuning LM on new datasets is straightforward. We load the pre-trained Optimus, and update the model with one additional β𝛽\\beta scheduling cycle for one epoch. The semantic latent vectors are first pre-trained off-the-shelf, and then easily leveraged to train the decoder on downstream datasets. From this perspective, our pre-training can be viewed as an effective approach to reduce KL vanishing. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_28",
"text": " We consider four datasets: the Penn Treebank (𝙿𝚃𝙱𝙿𝚃𝙱\\mathtt{PTB}) Marcus et al. (1993), 𝚂𝙽𝙻𝙸𝚂𝙽𝙻𝙸\\mathtt{SNLI} Bowman et al. (2015), 𝚈𝚊𝚑𝚘𝚘𝚈𝚊𝚑𝚘𝚘\\mathtt{Yahoo}, and 𝚈𝚎𝚕𝚙𝚈𝚎𝚕𝚙\\mathtt{Yelp} corpora Yang et al. (2017); He et al. (2019). ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_29",
"text": " There are two types of metrics to evaluate language VAEs. (i)i(\\textup{\\it i}) Generation capability: we use perplexity (PPL). Note that NLM and GPT-2 has exactly PPL, while VAEs does not. Following He et al. (2019), we use the importance weighted bound in Burda et al. (2015) to approximate logp(𝒙)𝑝𝒙\\log p(\\boldsymbol{x}), and report PPL. (ii)ii(\\textup{\\it ii}) Representation learning capability: Active units (AU) of 𝒛𝒛\\boldsymbol{z} and its Mutual Information (MI) with 𝒙𝒙\\boldsymbol{x}. We report the full results with ELBO, KL and Reconstruction in Appendix, but note that higher ELBO does not necessarily yield better language modeling. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_30",
"text": " (i)i(\\textup{\\it i}) GPT-2. A large-scale LM trained on OpoenWebText Radford et al. (2019). We load the pre-trained GPT-2 weights, and refine the model for 1 epoch on the new datasets. (ii)ii(\\textup{\\it ii}) Annealing. β𝛽\\beta is gradually annealed from 0 to 1. This annealing procedure can be used once (M.A.) Bowman et al. (2016) or multiple times (C.A.) Fu et al. (2019). (iii)iii(\\textup{\\it iii}) Aggressive Training He et al. (2019). Training the encoder multiple times per decoder update. (iv)iv(\\textup{\\it iv}) AE-FB Li et al. (2019). Training AE, and then VAE using the KL thresholding in (9), the results on λ=0.50𝜆0.50\\lambda\\!=\\!0.50 are reported as a good trade-off. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_31",
"text": " The results are shown in Table 1. Various λ𝜆\\lambda values are used, we observe a trade-off between language modeling and representation learning, controlled by λ𝜆\\lambda. Compared with existing VAE methods, Optimus achieve significantly lower perplexity, and higher MI/AU. This indicates that our pre-training method is an effective approach to reduce KL vanishing issue and training VAEs, especially given the fact that we only fine-tune on these datasets for one epoch. Optimus achieves lower perplexity compared with GPT-2 on three out of four datasets. Intuitively, this is because the model can leverage the prior language knowledge encoded in 𝒛𝒛\\boldsymbol{z}. This gap is larger, when the sentences in the dataset exhibit common regularities, such as 𝚂𝙽𝙻𝙸𝚂𝙽𝙻𝙸\\mathtt{SNLI}, where the prior plays a more important/effective role in this scenario. Though the form of our model is simple, Optimus shows stronger empirical performance than sophisticated models that are particularly designed for long-text, such as hVAE in Shen et al. (2019). For example, the KL and PPL of Optimus (15.09 and 22.79) are much better than hVAE (6.8 and 45.8) on Yelp dataset. This verifies the importance of pre-training a latent space. The full experimental results are shown in Table 8, 9, 10 and 11 of Appendix. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_32",
"text": " Different from the traditional NLMs or GPT-2, VAEs learns bidirectional mappings between the latent and symbolic space. It enables high-level sentence editing as arithmetic latent vector operations, and thus allows guided language generation. The reason that Optimus supports arithmetic operations are two-fold: (1) Pre-training on large datasets with large networks allows all sentences to be densely and faithfully represented in the latent space. (2) The continuity property of neural nets and KL regularization of VAE encourage latent vectors with similar semantics are smoothly organized together. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_33",
"text": " This is demonstrated with two simple schemes to manipulate pre-trained latent spaces: sentence transfer and interpolation, with results in Table 2 and Table 3, respectively. Details and more results are shown in Appendix. They showcase that Optimus enables new ways that one can play with language generation using pre-trained models, compared with GPT-2 that can only fulfill text sequences with given prompts. A website demo444http://aka.ms/optimus is released to the public to interact with the model, exhibiting the power of latent-vector-based controllable text generation. We demonstrate more sophisticated ways to manipulate pre-trained latent spaces in three real applications as follows. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_34",
"text": " The open-domain dialog response generation task is considered: generating responses 𝒙𝒙\\boldsymbol{x} given a dialog history 𝒄𝒄{\\boldsymbol{c}}. Following Gao et al. (2019a), we embed the history and response in a joint latent space as 𝒛S2Ssubscript𝒛S2S\\boldsymbol{z}_{\\text{S2S}} and 𝒛AEsubscript𝒛AE\\boldsymbol{z}_{\\text{AE}}, respectively. A fusion regularization is used to match the responses to the context. We consider 𝙳𝚊𝚒𝚕𝚢𝚍𝚒𝚊𝚕𝚘𝚐𝙳𝚊𝚒𝚕𝚢𝚍𝚒𝚊𝚕𝚘𝚐\\mathtt{Dailydialog} Li et al. (2017c) used in Gu et al. (2019), which has 13,118 daily conversations. Each utterance is processed as the response of previous 10 context utterances from both speakers. The baseline methods are described in Appendix. We measure the performance using Bleu Chen and Cherry (2014), and compute the precision, recall and F1 in Table 4. Optimus shows higher Bleu scores than all existing baselines. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_35",
"text": " Following StyleFusion Gao et al. (2019b), we consider generating responses for 𝙳𝚊𝚒𝚕𝚢𝚍𝚒𝚊𝚕𝚘𝚐𝙳𝚊𝚒𝚕𝚢𝚍𝚒𝚊𝚕𝚘𝚐\\mathtt{Dailydialog} in the style of Holmes. The comparison is shown in Table 5. In addition to Bleu, we use neural and N-gram classifier scores to evaluate the accuracy of the generated responses that belong to the desired style. Optimus achieves better performance on all metrics. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_36",
"text": " The short 𝚈𝚎𝚕𝚙𝚈𝚎𝚕𝚙\\mathtt{Yelp} dataset collected in Shen et al. (2017) is used. It contains 444K training sentences, and we use separated datasets of 10K sentences for validation/testing, respectively. The goal is to generate text reviews given the positive/negative sentiment. We fine-tune Optimus using the VAE objective on the dataset, then freeze backbone weights. A conditional GAN Mirza and Osindero (2014) is trained on the fixed latent space. The generation process is to first produce a latent vector 𝒛ysubscript𝒛𝑦\\boldsymbol{z}_{y} based on a given label y𝑦y using conditional GAN, then generate sentences conditioned on 𝒛ysubscript𝒛𝑦\\boldsymbol{z}_{y} using the decoder. The baselines are described in Appendix. G-score computes the geometric mean of Accuracy and Bleu, measuring the comprehensive quality of both content and style. Self-Bleu measures the diversity of the generated sentences. The results are shown in Table 6, Optimus achieves the best performance on all metrics. This verifies the importance of learning a smooth and meaningful latent space. The conditional generated sentences are shown in Appendix. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_37",
"text": " Due to the regularization term ℒRsubscriptℒ𝑅\\mathcal{L}_{R}, Optimus can organize sentences in the way specified by the prior distribution. For basic VAEs, a smooth feature space is learned, which is specifically beneficial for better generalization when the number of task-specific labeled data is low. To have a fair comparison, we follow the BERT paper, where the hidden feature of (CLS) is used as the sentence-level representation. In this way, the linear classifiers for both models have the same number of trainable parameters. Though the latent vector 𝒛𝒛\\boldsymbol{z} is typically used as sentence-level representation in VAE literature, we argue that the KL regularization applied on 𝒛𝒛\\boldsymbol{z} has a large impact on the preceding layer feature 𝒉(CLS)subscript𝒉(CLS){\\boldsymbol{h}}_{\\texttt{(CLS)}}. Specifically, 𝒉(CLS)subscript𝒉(CLS){\\boldsymbol{h}}_{\\texttt{(CLS)}} is fed into an linear classifier 𝐖C∈ℝK×Hsubscript𝐖Csuperscriptℝ𝐾𝐻{{\\bf W}}_{\\text{C}}\\in\\mathbb{R}^{K\\times H}, where K𝐾K is the number of classes, with objective −log(softmax(𝒉(CLS)𝐖C⊤))softmaxsubscript𝒉(CLS)superscriptsubscript𝐖Ctop-\\log(\\text{softmax}({\\boldsymbol{h}}_{\\texttt{(CLS)}}{{\\bf W}}_{\\text{C}}^{\\top})). Two schemes are used: (i)i(\\textup{\\it i}) Fine-tuning, where both the pre-trained model and the classifier are updated; (ii)ii(\\textup{\\it ii}) Feature-based, where pre-trained model weights are frozen to provide embeddings for the classifier update. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
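A minimal sketch of the two classification schemes described above, assuming an encoder that exposes a sentence-level feature of size H; the encoder here is a stand-in linear module, not the actual Optimus or BERT implementation, and the dimensions are illustrative.

    import torch
    import torch.nn as nn

    H, K = 768, 2                  # feature size and number of classes
    encoder = nn.Linear(100, H)    # stand-in for a pre-trained encoder's (CLS) feature
    classifier = nn.Linear(H, K)   # W_C in R^{K x H}

    def step(x, y, feature_based=True):
        if feature_based:
            # Feature-based: encoder weights stay frozen, only the classifier learns.
            with torch.no_grad():
                h = encoder(x)
        else:
            # Fine-tuning: gradients also flow into the encoder.
            h = encoder(x)
        logits = classifier(h)
        # Cross-entropy equals -log softmax of the target class, matching the stated objective.
        return nn.functional.cross_entropy(logits, y)

    x = torch.randn(8, 100)
    y = torch.randint(0, K, (8,))
    loss = step(x, y, feature_based=True)
    loss.backward()

In the feature-based scheme only `classifier` receives gradients, which is why its compute cost is far below full fine-tuning.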
{
"id": "2004.04092_all_38",
"text": " A varying number of training samples are randomly chosen, ranging from 1 to 10K per class. 10 trials are used when the number of available training samples are small, each is trained in 100 training epochs. The results are shown in Figure 3. When pre-trained models are used to provide sentence embeddings, the proposed Optimus consistently outperforms BERT. It demonstrates that the latent structure learned by Optimus is more separated, and helps generalize better. When the entire network is fine-tuned, Optimus can adapt faster than BERT, when the available number of training samples is small. The two methods perform quite similarly when more training data is provided. This is because the pre-trained backbone network size is much larger than the classifier, where the performance is dominated by the backbone networks. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_39",
"text": " We use tSNE Maaten and Hinton (2008) to visualize the learned feature on a 2D map. The validation set of Yelp is used to extract the latent features. Compared with BERT, Optimus learns a smoother space and more structured latent patterns, which explains why Optimus can yield better classification performance and faster adaptation. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_40",
"text": " We further consider the GLUE benchmark Wang et al. (2019), which consists of nine datasets for general language understanding. Following the finetuning schedule in Devlin et al. (2019), we use learning rate (2,3,4,5)×10−52345superscript105(2,3,4,5)\\times 10^{-5} and train the model for 3 epochs. We select the best performance among different runs. We show the results on the validation set in Table 7. With the feature-based scheme, Optimus yields higher performance than BERT, especially on the large datasets such as MNLI, QQP and QNLI. When the full models are fine-tuned, the two methods perform quite similarly. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_41",
"text": " In summary, the scenarios that Optimus fit the low-resource settings are two-fold: (1) The required computing resource is low: the feature-based approach only updates the classifier, whose computing requirement is much lower than full-model fine-tuning; (2) The number of required labelled data is low: when labelled data is rare, Optimus adapts better. The results confirm that Optimus can maintain and exploit the structures learned in pre-training, and presents a more general representation that can be adapted to new tasks more easily than BERT – feature-based adaption is much faster and easier to perform than fine-tuning. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_42",
"text": " We present Optimus, a large-scale pre-trained deep latent variable model for natural language. It introduces a smooth and universal latent space, by combining the advantages of VAEs, BERT and GPT-2 in one model. Experimental results on a wide range of tasks and datasets have demonstrated the strong performance of Optimus, including new state-of-the-art for language VAEs. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_43",
"text": " There are several limitations in current Optimus. First, our pre-trained language VAE is still under-trained due to limited compute resource, as the training reconstruction loss can still decrease. One may further train the models with higher latent dimension and longer time to fully release the power of pre-trained latent spaces. Second, the current model can only control sentences of moderate length. One future direction is to consider more sophisticated mechanisms to gain stronger control-ability over longer sentences while maintaining the compactness of latent representations. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
},
{
"id": "2004.04092_all_44",
"text": " While deep generative models (DGMs) such as VAEs are theoretically attractive due to its principle nature, it is now rarely used by practitioners in the modern pre-trained language modeling era where BERT/GPT dominate with strong empirical performance. That’s why this paper makes a timely contribution to making DGMs practical for NLP. We hope that this paper will help renew interest in DGMs for this purpose. Hence, we deliberately keep a simple model, believing that the first pre-trained big VAE model itself and its implications are novel: it helps the community to recognize the importance of DGMs in the pre-training era, and revisit DGMs to make it more practical. Indeed, Optimus is uniquely positioned to learn a smooth latent space to organize sentences, which can enable guided language generation compared with GPT-2, and yield better generalization in low-resource language understanding tasks than BERT. ",
"title": "Optimus: Organizing Sentences via Pre-trained Modeling of a Latent Space"
}
] |
Difference between data preparation for the proposed model and the SRCNN?
|
The proposed model's input size is the same as the receptive field size, and images were divided with no overlap [38].
|
[
38
] |
[
{
"id": "1511.04587_all_0",
"text": " We address the problem of generating a high-resolution (HR) image given a low-resolution (LR) image, commonly referred as single image super-resolution (SISR) , , . SISR is widely used in computer vision applications ranging from security and surveillance imaging to medical imaging where more image details are required on demand. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_1",
"text": " Many SISR methods have been studied in the computer vision community. Early methods include interpolation such as bicubic interpolation and Lanczos resampling more powerful methods utilizing statistical image priors (20, 13) or internal patch recurrence . ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_2",
"text": " Currently, learning methods are widely used to model a mapping from LR to HR patches. Neighbor embedding (4, 15) methods interpolate the patch subspace. Sparse coding (25, 26, 21, 22) methods use a learned compact dictionary based on sparse signal representation. Lately, random forest and convolutional neural network (CNN) have also been used with large improvements in accuracy. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_3",
"text": " Among them, Dong et al. has demonstrated that a CNN can be used to learn a mapping from LR to HR in an end-to-end manner. Their method, termed SRCNN, does not require any engineered features that are typically necessary in other methods (25, 26, 21, 22) and shows the state-of-the-art performance. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_4",
"text": " While SRCNN successfully introduced a deep learning technique into the super-resolution (SR) problem, we find its limitations in three aspects: first, it relies on the context of small image regions; second, training converges too slowly; third, the network only works for a single scale. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_5",
"text": " In this work, we propose a new method to practically resolve the issues. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_6",
"text": " Context We utilize contextual information spread over very large image regions. For a large scale factor, it is often the case that information contained in a small patch is not sufficient for detail recovery (ill-posed). Our very deep network using large receptive field takes a large image context into account. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_7",
"text": " Convergence We suggest a way to speed-up the training: residual-learning CNN and extremely high learning rates. As LR image and HR image share the same information to a large extent, explicitly modelling the residual image, which is the difference between HR and LR images, is advantageous. We propose a network structure for efficient learning when input and output are highly correlated. Moreover, our initial learning rate is 104superscript10410^{4} times higher than that of SRCNN . This is enabled by residual-learning and gradient clipping. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_8",
"text": " Scale Factor We propose a single-model SR approach. Scales are typically user-specified and can be arbitrary including fractions. For example, one might need smooth zoom-in in an image viewer or resizing to a specific dimension. Training and storing many scale-dependent models in preparation for all possible scenarios is impractical. We find a single convolutional network is sufficient for multi-scale-factor super-resolution. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_9",
"text": " Contribution In summary, in this work, we propose a highly accurate SR method based on a very deep convolutional network. Very deep networks converge too slowly if small learning rates are used. Boosting convergence rate with high learning rates lead to exploding gradients and we resolve the issue with residual-learning and gradient clipping. In addition, we extend our work to cope with multi-scale SR problem in a single network. Our method is relatively accurate and fast in comparison to state-of-the-art methods as illustrated in Figure 1. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_10",
"text": " SRCNN is a representative state-of-art method for deep learning-based SR approach. So, let us analyze and compare it with our proposed method. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_11",
"text": " Model SRCNN consists of three layers: patch extraction/representation, non-linear mapping and reconstruction. Filters of spatial sizes 9×9999\\times 9, 1×1111\\times 1, and 5×5555\\times 5 were used respectively. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_12",
"text": " In , Dong et al. attempted to prepare deeper models, but failed to observe superior performance after a week of training. In some cases, deeper models gave inferior performance. They conclude that deeper networks do not result in better performance (Figure 9). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_13",
"text": " However, we argue that increasing depth significantly boosts performance. We successfully use 20 weight layers (3×3333\\times 3 for each layer). Our network is very deep (20 vs. 3 ) and information used for reconstruction (receptive field) is much larger (41×41414141\\times 41 vs. 13×13131313\\times 13). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_14",
"text": " Training For training, SRCNN directly models high-resolution images. A high-resolution image can be decomposed into a low frequency information (corresponding to low-resolution image) and high frequency information (residual image or image details). Input and output images share the same low-frequency information. This indicates that SRCNN serves two purposes: carrying the input to the end layer and reconstructing residuals. Carrying the input to the end is conceptually similar to what an auto-encoder does. Training time might be spent on learning this auto-encoder so that the convergence rate of learning the other part (image details) is significantly decreased. In contrast, since our network models the residual images directly, we can have much faster convergence with even better accuracy. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_15",
"text": " Scale As in most existing SR methods, SRCNN is trained for a single scale factor and is supposed to work only with the specified scale. Thus, if a new scale is on demand, a new model has to be trained. To cope with multiple scale SR (possibly including fractional factors), we need to construct individual single scale SR system for each scale of interest. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_16",
"text": " However, preparing many individual machines for all possible scenarios to cope with multiple scales is inefficient and impractical. In this work, we design and train a single network to handle multiple scale SR problem efficiently. This turns out to work very well. Our single machine is compared favorably to a single-scale expert for the given sub-task. For three scales factors (×2,3,4\\times 2,3,4), we can reduce the number of parameters by three-fold. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_17",
"text": " In addition to the aforementioned issues, there are some minor differences. Our output image has the same size as the input image by padding zeros every layer during training whereas output from SRCNN is smaller than the input. Finally, we simply use the same learning rates for all layers while SRCNN uses different learning rates for different layers in order to achieve stable convergence. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_18",
"text": " For SR image reconstruction, we use a very deep convolutional network inspired by Simonyan and Zisserman . The configuration is outlined in Figure 2. We use d𝑑d layers where layers except the first and the last are of the same type: 64 filter of the size 3×3×6433643\\times 3\\times 64, where a filter operates on 3×3333\\times 3 spatial region across 64 channels (feature maps). The first layer operates on the input image. The last layer, used for image reconstruction, consists of a single filter of size 3×3×6433643\\times 3\\times 64. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_19",
"text": " The network takes an interpolated low-resolution image (to the desired size) as input and predicts image details. Modelling image details is often used in super-resolution methods (21, 22, 15, 3) and we find that CNN-based methods can benefit from this domain-specific knowledge. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_20",
"text": " In this work, we demonstrate that explicitly modelling image details (residuals) has several advantages. These are further discussed later in Section 4.2. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_21",
"text": " One problem with using a very deep network to predict dense outputs is that the size of the feature map gets reduced every time convolution operations are applied. For example, when an input of size (n+1)×(n+1)𝑛1𝑛1(n+1)\\times(n+1) is applied to a network with receptive field size n×n𝑛𝑛n\\times n, the output image is 1×1111\\times 1. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_22",
"text": " This is in accordance with other super-resolution methods since many require surrounding pixels to infer center pixels correctly. This center-surround relation is useful since the surrounding region provides more constraints to this ill-posed problem (SR). For pixels near the image boundary, this relation cannot be exploited to the full extent and many SR methods crop the result image. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_23",
"text": " This methodology, however, is not valid if the required surround region is very big. After cropping, the final image is too small to be visually pleasing. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_24",
"text": " To resolve this issue, we pad zeros before convolutions to keep the sizes of all feature maps (including the output image) the same. It turns out that zero-padding works surprisingly well. For this reason, our method differs from most other methods in the sense that pixels near the image boundary are also correctly predicted. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
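The zero-padding choice discussed in the preceding records can be checked with a short snippet: stacking 3×3 convolutions with padding 1 keeps every feature map, and the output, the same size as the input. This is a schematic sanity check written in PyTorch, not the authors' MatConvNet model; the 20-layer layout and 41×41 patch size follow the descriptions elsewhere in these passages.

    import torch
    import torch.nn as nn

    # First layer on the input image, 18 hidden layers, and a final reconstruction layer.
    layers = [nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU()]
    for _ in range(18):
        layers += [nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU()]
    layers += [nn.Conv2d(64, 1, kernel_size=3, padding=1)]
    net = nn.Sequential(*layers)

    x = torch.randn(1, 1, 41, 41)        # interpolated LR patch
    assert net(x).shape == x.shape       # zero padding keeps every feature map the same size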
{
"id": "1511.04587_all_25",
"text": " Once image details are predicted, they are added back to the input ILR image to give the final image (HR). We use this structure for all experiments in our work. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_26",
"text": " We now describe the objective to minimize in order to find optimal parameters of our model. Let 𝐱𝐱{\\bf x} denote an interpolated low-resolution image and 𝐲𝐲{\\bf y} a high-resolution image. Given a training dataset {𝐱(i),𝐲(i)}i=1N\\{{\\bf x}^{(i)},{\\bf y}^{(i)}\\}{}_{i=1}^{N}, our goal is to learn a model f𝑓f that predicts values 𝐲^=f(𝐱)^𝐲𝑓𝐱\\mathbf{\\hat{y}}=f(\\mathbf{x}), where 𝐲^^𝐲\\mathbf{\\hat{y}} is an estimate of the target HR image. We minimize the mean squared error 12‖𝐲−f(𝐱)‖212superscriptnorm𝐲𝑓𝐱2\\frac{1}{2}||\\mathbf{y}-f(\\mathbf{x})||^{2} averaged over the training set is minimized. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_27",
"text": " Residual-Learning In SRCNN, the exact copy of the input has to go through all layers until it reaches the output layer. With many weight layers, this becomes an end-to-end relation requiring very long-term memory. For this reason, the vanishing/exploding gradients problem can be critical. We can solve this problem simply with residual-learning. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_28",
"text": " As the input and output images are largely similar, we define a residual image 𝐫=𝐲−𝐱𝐫𝐲𝐱{\\bf r}={\\bf y}-{\\bf x}, where most values are likely to be zero or small. We want to predict this residual image. The loss function now becomes 12‖𝐫−f(𝐱)‖212superscriptnorm𝐫𝑓𝐱2\\frac{1}{2}||\\mathbf{r}-f(\\mathbf{x})||^{2}, where f(𝐱)𝑓𝐱f(\\bf{x}) is the network prediction. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
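A small sketch of the residual-learning objective described above: the network regresses r = y − x, and the final HR estimate adds the prediction back onto the input. The one-layer `net` below is only a stand-in for a deeper image-to-image model.

    import torch
    import torch.nn as nn

    net = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # stand-in for the deep network

    def residual_loss(x, y):
        # x: interpolated LR image, y: HR target; the regression target is the residual r = y - x.
        r = y - x
        return 0.5 * (r - net(x)).pow(2).mean()

    def reconstruct(x):
        # Final HR estimate: add the predicted details back onto the input.
        return x + net(x)

    x = torch.randn(1, 1, 41, 41)
    y = x + 0.1 * torch.randn_like(x)
    loss = residual_loss(x, y)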
{
"id": "1511.04587_all_29",
"text": " In networks, this is reflected in the loss layer as follows. Our loss layer takes three inputs: residual estimate, network input (ILR image) and ground truth HR image. The loss is computed as the Euclidean distance between the reconstructed image (the sum of network input and output) and ground truth. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_30",
"text": " Training is carried out by optimizing the regression objective using mini-batch gradient descent based on back-propagation (LeCun et al. ). We set the momentum parameter to 0.9. The training is regularized by weight decay (L2subscript𝐿2L_{2} penalty multiplied by 0.0001). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_31",
"text": " High Learning Rates for Very Deep Networks Training deep models can fail to converge in realistic limit of time. SRCNN fails to show superior performance with more than three weight layers. While there can be various reasons, one possibility is that they stopped their training procedure before networks converged. Their learning rate 10−5superscript10510^{-5} is too small for a network to converge within a week on a common GPU. Looking at Fig. 9 of , it is not easy to say their deeper networks have converged and their performances were saturated. While more training will eventually resolve the issue, but increasing depth to 20 does not seems practical with SRCNN. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_32",
"text": " It is a basic rule of thumb to make learning rate high to boost training. But simply setting learning rate high can also lead to vanishing/exploding gradients . For the reason, we suggest an adjustable gradient clipping for maximal boost in speed while suppressing exploding gradients. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_33",
"text": " Adjustable Gradient Clipping Gradient clipping is a technique that is often used in training recurrent neural networks . But, to our knowledge, its usage is limited in training CNNs. While there exist many ways to limit gradients, one of the common strategies is to clip individual gradients to the predefined range (−θ,θ)𝜃𝜃(-\\theta,\\theta). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_34",
"text": " With clipping, gradients are in a certain range. With stochastic gradient descent commonly used for training, learning rate is multiplied to adjust the step size. If high learning rate is used, it is likely that θ𝜃\\theta is tuned to be small to avoid exploding gradients in a high learning rate regime. But as learning rate is annealed to get smaller, the effective gradient (gradient multiplied by learning rate) approaches zero and training can take exponentially many iterations to converge if learning rate is decreased geometrically. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_35",
"text": " For maximal speed of convergence, we clip the gradients to (−θγ,θγ)𝜃𝛾𝜃𝛾(-\\frac{\\theta}{\\gamma},\\frac{\\theta}{\\gamma}), where γ𝛾\\gamma denotes the current learning rate. We find the adjustable gradient clipping makes our convergence procedure extremely fast. Our 20-layer network training is done within 4 hours whereas 3-layer SRCNN takes several days to train. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
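The adjustable clipping rule above can be written in a few lines: scale the clip range by the current learning rate before a standard value clip. The sketch relies on PyTorch's generic gradient-clipping utility rather than anything released with the paper, and the value of theta is illustrative.

    import torch
    from torch.nn.utils import clip_grad_value_

    def clipped_step(model, optimizer, loss, theta=0.01):
        optimizer.zero_grad()
        loss.backward()
        # Current learning rate gamma; clip each gradient to (-theta/gamma, theta/gamma)
        # so the effective step stays bounded as the learning rate is annealed.
        gamma = optimizer.param_groups[0]["lr"]
        clip_grad_value_(model.parameters(), theta / gamma)
        optimizer.step()

    model = torch.nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    out = model(torch.randn(8, 4))
    clipped_step(model, opt, out.pow(2).mean())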
{
"id": "1511.04587_all_36",
"text": " Multi-Scale While very deep models can boost performance, more parameters are now needed to define a network. Typically, one network is created for each scale factor. Considering that fractional scale factors are often used, we need an economical way to store and retrieve networks. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_37",
"text": " For this reason, we also train a multi-scale model. With this approach, parameters are shared across all predefined scale factors. Training a multi-scale model is straightforward. Training datasets for several specified scales are combined into one big dataset. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_38",
"text": " Data preparation is similar to SRCNN with some differences. Input patch size is now equal to the size of the receptive field and images are divided into sub-images with no overlap. A mini-batch consists of 64 sub-images, where sub-images from different scales can be in the same batch. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
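The data preparation described in the preceding record (input patches the size of the receptive field, cut with no overlap) can be sketched as below. The 41-pixel patch size matches the receptive field of the 20-layer network quoted in these passages, and dropping leftover border pixels is one plausible reading of "divided into sub-images with no overlap", not a detail confirmed by the paper.

    import numpy as np

    def to_subimages(img, size=41):
        # Split an H x W image into non-overlapping size x size sub-images,
        # discarding the remainder at the right/bottom borders.
        h, w = img.shape
        patches = []
        for top in range(0, h - size + 1, size):
            for left in range(0, w - size + 1, size):
                patches.append(img[top:top + size, left:left + size])
        return np.stack(patches)

    img = np.random.rand(164, 205)
    subs = to_subimages(img)      # shape (20, 41, 41): a 4 x 5 grid of sub-images

Sub-images produced this way from different scale factors can then be mixed into the same mini-batch of 64.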
{
"id": "1511.04587_all_39",
"text": " We implement our model using the MatConvNet111http://www.vlfeat.org/matconvnet/ package . ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_40",
"text": " In this section, we study three properties of our proposed method. First, we show that large depth is necessary for the task of SR. A very deep network utilizes more contextual information in an image and models complex functions with many nonlinear layers. We experimentally verify that deeper networks give better performances than shallow ones. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_41",
"text": " Second, we show that our residual-learning network converges much faster than the standard CNN. Moreover, our network gives a significant boost in performance. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_42",
"text": " Third, we show that our method with a single network performs as well as a method using multiple networks trained for each scale. We can effectively reduce model capacity (the number of parameters) of multi-network approaches. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_43",
"text": " Convolutional neural networks exploit spatially-local correlation by enforcing a local connectivity pattern between neurons of adjacent layers . In other words, hidden units in layer m𝑚m take as input a subset of units in layer m−1𝑚1m-1. They form spatially contiguous receptive fields. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_44",
"text": " Each hidden unit is unresponsive to variations outside of the receptive field with respect to the input. The architecture thus ensures that the learned filters produce the strongest response to a spatially local input pattern. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_45",
"text": " However, stacking many such layers leads to filters that become increasingly “global” (i.e. responsive to a larger region of pixel space). In other words, a filter of very large support can be effectively decomposed into a series of small filters. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_46",
"text": " In this work, we use filters of the same size, 3×\\times3, for all layers. For the first layer, the receptive field is of size 3×\\times3. For the next layers, the size of the receptive field increases by 2 in both height and width. For depth D𝐷D network, the receptive field has size (2D+1)×(2D+1)2𝐷12𝐷1(2D+1)\\times(2D+1). Its size is proportional to the depth. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
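A worked check of the receptive-field formula above: with stride-1 3×3 filters, each extra layer grows the field by 2 in each dimension, so depth D gives (2D+1)×(2D+1); for D = 20 this is the 41×41 field quoted earlier.

    def receptive_field(depth, kernel=3):
        # Stacking `depth` stride-1 convolutions grows the receptive field
        # by (kernel - 1) per layer, starting from a single pixel.
        return 1 + depth * (kernel - 1)

    assert receptive_field(1) == 3
    assert receptive_field(20) == 41   # matches the 41x41 field of the 20-layer network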
{
"id": "1511.04587_all_47",
"text": " In the task of SR, this corresponds to the amount of contextual information that can be exploited to infer high-frequency components. A large receptive field means the network can use more context to predict image details. As SR is an ill-posed inverse problem, collecting and analyzing more neighbor pixels give more clues. For example, if there are some image patterns entirely contained in a receptive field, it is plausible that this pattern is recognized and used to super-resolve the image. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_48",
"text": " In addition, very deep networks can exploit high nonlinearities. We use 19 rectified linear units and our networks can model very complex functions with moderate number of channels (neurons). The advantages of making a thin deep network is well explained in Simonyan and Zisserman . ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_49",
"text": " We now experimentally show that very deep networks significantly improve SR performance. We train and test networks of depth ranging from 5 to 20 (only counting weight layers excluding nonlinearity layers). In Figure 3, we show the results. In most cases, performance increases as depth increases. As depth increases, performance improves rapidly. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_50",
"text": " As we already have a low-resolution image as the input, predicting high-frequency components is enough for the purpose of SR. Although the concept of predicting residuals has been used in previous methods (21, 22, 26), it has not been studied in the context of deep-learning-based SR framework. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_51",
"text": " In this work, we have proposed a network structure that learns residual images. We now study the effect of this modification to a standard CNN structure in detail. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_52",
"text": " First, we find that this residual network converges much faster. Two networks are compared experimentally: the residual network and the standard non-residual network. We use depth 10 (weight layers) and scale factor 2. Performance curves for various learning rates are shown in Figure 4. All use the same learning rate scheduling mechanism that has been mentioned above. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_53",
"text": " Second, at convergence, the residual network shows superior performance. In Figure 4, residual networks give higher PSNR when training is done. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_54",
"text": " Another remark is that if small learning rates are used, networks do not converge in the given number of epochs. If initial learning rate 0.1 is used, PSNR of a residual-learning network reaches 36.90 within 10 epochs. But if 0.001 is used instead, the network never reaches the same level of performance (its performance is 36.52 after 80 epochs). In a similar manner, residual and non-residual networks show dramatic performance gaps after 10 epochs (36.90 vs. 27.42 for rate 0.1). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_55",
"text": " In short, this simple modification to a standard non-residual network structure is very powerful and one can explore the validity of the idea in other image restoration problems where input and output images are highly correlated. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_56",
"text": " Scale augmentation during training is a key technique to equip a network with super-resolution machines of multiple scales. Many SR processes for different scales can be executed with our multi-scale machine with much smaller capacity than that of single-scale machines combined. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_57",
"text": " We start with an interesting experiment as follows: we train our network with a single scale factor strainsubscript𝑠trains_{\\text{train}} and it is tested under another scale factor stestsubscript𝑠tests_{\\text{test}}. Here, factors 2,3 and 4 that are widely used in SR comparisons are considered. Possible pairs (strainsubscript𝑠trains_{\\text{train}},stestsubscript𝑠tests_{\\text{test}}) are tried for the dataset ‘Set5’ . Experimental results are summarized in Table 2. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_58",
"text": " Performance is degraded if strain≠stestsubscript𝑠trainsubscript𝑠tests_{\\text{train}}\\neq s_{\\text{test}}. For scale factor 2, the model trained with factor 2 gives PSNR of 37.10 (in dB), whereas models trained with factor 3 and 4 give 30.05 and 28.13, respectively. A network trained over single-scale data is not capable of handling other scales. In many tests, it is even worse than bicubic interpolation, the method used for generating the input image. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_59",
"text": " We now test if a model trained with scale augmentation is capable of performing SR at multiple scale factors. The same network used above is trained with multiple scale factors strain={2,3,4}subscript𝑠train234s_{\\text{train}}=\\{2,3,4\\}. In addition, we experiment with the cases strain={2,3},{2,4},{3,4}subscript𝑠train232434s_{\\text{train}}=\\{2,3\\},\\{2,4\\},\\{3,4\\} for more comparisons. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_60",
"text": " We observe that the network copes with any scale used during training. When strain={2,3,4}subscript𝑠train234s_{\\text{train}}=\\{2,3,4\\} (×2,3,4\\times 2,3,4 in Table 2), its PSNR for each scale is comparable to those achieved from the corresponding result of single-scale network: 37.06 vs. 37.10 (×2absent2\\times 2), 33.27 vs. 32.89 (×3absent3\\times 3), 30.95 vs. 30.86 (×4absent4\\times 4). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_61",
"text": " Another pattern is that for large scales (×3,4\\times 3,4), our multi-scale network outperforms single-scale network: our model (×2,3\\times 2,3), (×3,4\\times 3,4) and (×2,3,4\\times 2,3,4) give PSNRs 33.22, 33.24 and 33.27 for test scale 3, respectively, whereas (×3absent3\\times 3) gives 32.89. Similarly, (×2,4\\times 2,4), (×3,4\\times 3,4) and (×2,3,4\\times 2,3,4) give 30.86, 30.94 and 30.95 (vs. 30.84 by ×4absent4\\times 4 model), respectively. From this, we observe that training multiple scales boosts the performance for large scales. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_62",
"text": " In this section, we evaluate the performance of our method on several datasets. We first describe datasets used for training and testing our method. Next, parameters necessary for training are given. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_63",
"text": " After outlining our experimental setup, we compare our method with several state-of-the-art SISR methods. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_64",
"text": " Training dataset Different learning-based methods use different training images. For example, RFL has two methods, where the first one uses 91 images from Yang et al. and the second one uses 291 images with the addition of 200 images from Berkeley Segmentation Dataset . SRCNN uses a very large ImageNet dataset. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_65",
"text": " We use 291 images as in for benchmark with other methods in this section. In addition, data augmentation (rotation or flip) is used. For results in previous sections, we used 91 images to train network fast, so performances can be slightly different. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_66",
"text": " Test dataset For benchmark, we use four datasets. Datasets ‘Set5’ and ‘Set14’ are often used for benchmark in other works (22, 21, 5). Dataset ‘Urban100’, a dataset of urban images recently provided by Huang et al. , is very interesting as it contains many challenging images failed by many of the existing methods. Finally, dataset ‘B100’, natural images in the Berkeley Segmentation Dataset used in Timofte et al. and Yang and Yang for benchmark, is also employed. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_67",
"text": " We provide parameters used to train our final model. We use a network of depth 20. Training uses batches of size 64. Momentum and weight decay parameters are set to 0.9 and 0.00010.00010.0001, respectively. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_68",
"text": " For weight initialization, we use the method described in He et al. . This is a theoretically sound procedure for networks utilizing rectified linear units (ReLu). ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_69",
"text": " We train all experiments over 80 epochs (9960 iterations with batch size 64). Learning rate was initially set to 0.1 and then decreased by a factor of 10 every 20 epochs. In total, the learning rate was decreased 3 times, and the learning is stopped after 80 epochs. Training takes roughly 4 hours on GPU Titan Z. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_70",
"text": " For benchmark, we follow the publicly available framework of Huang et al. . It enables the comparison of many state-of-the-art results with the same evaluation procedure. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_71",
"text": " The framework applies bicubic interpolation to color components of an image and sophisticated models to luminance components as in other methods , , . This is because human vision is more sensitive to details in intensity than in color. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_72",
"text": " This framework crops pixels near image boundary. For our method, this procedure is unnecessary as our network outputs the full-sized image. For fair comparison, however, we also crop pixels to the same amount. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_73",
"text": " We provide quantitative and qualitative comparisons. Compared methods are A+ , RFL, SelfEx and SRCNN . In Table 3, we provide a summary of quantitative evaluation on several datasets. Our methods outperform all previous methods in these datasets. Moreover, our methods are relatively fast. The public code of SRCNN based on a CPU implementation is slower than the code used by Dong et. al in their paper based on a GPU implementation. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_74",
"text": " In Figures 6 and 7, we compare our method with top-performing methods. In Figure 6, only our method perfectly reconstructs the line in the middle. Similarly, in Figure 7, contours are clean and vivid in our method whereas they are severely blurred or distorted in other methods. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
},
{
"id": "1511.04587_all_75",
"text": " In this work, we have presented a super-resolution method using very deep networks. Training a very deep network is hard due to a slow convergence rate. We use residual-learning and extremely high learning rates to optimize a very deep network fast. Convergence speed is maximized and we use gradient clipping to ensure the training stability. We have demonstrated that our method outperforms the existing method by a large margin on benchmarked images. We believe our approach is readily applicable to other image restoration problems such as denoising and compression artifact removal. ",
"title": "Accurate Image Super-Resolution Using Very Deep Convolutional Networks"
}
] |
What are the CNN architectures that were explored in this paper?
|
The paper uses AlexNet, CifarNet, and GoogLeNet with various numbers of parameters [19].
|
[
19
] |
[
{
"id": "1602.03409_all_0",
"text": " Tremendous progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets (i.e. ImageNet (1, 2)) and the recent revival of deep convolutional neural networks (CNN) (3, 4). For data-driven learning, large-scale well-annotated datasets with representative data distribution characteristics are crucial to learning more accurate or generalizable models (5, 4). Unlike previous image datasets used in computer vision, ImageNet offers a very comprehensive database of more than 1.2 million categorized natural images of 1000+ classes. The CNN models trained upon this database serve as the backbone for significantly improving many object detection and image segmentation problems using other datasets (6, 7), e.g., PASCAL and medical image categorization (9, 10, 11, 12). However, there exists no large-scale annotated medical image dataset comparable to ImageNet, as data acquisition is difficult, and quality annotation is costly. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_1",
"text": " There are currently three major techniques that successfully employ CNNs to medical image classification: 1) training the “CNN from scratch” (13, 14, 15, 16, 17); 2) using “off-the-shelf CNN” features (without retraining the CNN) as complementary information channels to existing hand-crafted image features, for Chest X-rays and CT lung nodule identification (9, 12); and 3) performing unsupervised pre-training on natural or medical images and fine-tuning on medical target images using CNN or other types of deep learning models (18, 19, 20, 21). A decompositional 2.5D view resampling and an aggregation of random view classification scores are used to eliminate the “curse-of-dimensionality” issue in , in order to acquire a sufficient number of training image samples. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_2",
"text": " Previous studies have analyzed three-dimensional patch creation for LN detection (23, 24), atlas creation from chest CT and the extraction of multi-level image features (26, 27). At present, there are several extensions or variations of the decompositional view representation introduced in (22, 28), such as: using a novel vessel-aligned multi-planar image representation for pulmonary embolism detection , fusing unregistered multiview for mammogram analysis and classifying pulmonary peri-fissural nodules via an ensemble of 2D views . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_3",
"text": " Although natural images and medical images differ significantly, conventional image descriptors developed for object recognition in natural images, such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG) , have been widely used for object detection and segmentation in medical image analysis. Recently, ImageNet pre-trained CNNs have been used for chest pathology identification and detection in X-ray and CT modalities (10, 9, 12). They have yielded the best performance results by integrating low-level image features (e.g., GIST , bag of visual words (BoVW) and bag-of-frequency ). However, the fine-tuning of an ImageNet pre-trained CNN model on medical image datasets has not yet been exploited. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_4",
"text": " In this paper, we exploit three important, but previously under-studied factors of employing deep convolutional neural networks to computer-aided detection problems. Particularly, we explore and evaluate different CNN architectures varying in width (ranging from 5 thousand to 160 million parameters) and depth (various numbers of layers), describe the effects of varying dataset scale and spatial image context on performance, and discuss when and why transfer learning from pre-trained ImageNet CNN models can be valuable. We further verify our hypothesis by inheriting and adapting rich hierarchical image features (5, 33) from the large-scale ImageNet dataset for computer aided diagnosis (CAD). We also explore CNN architectures of the most studied seven-layered “AlexNet-CNN” , a shallower “Cifar-CNN” , and a much deeper version of “GoogLeNet-CNN” (with our modifications on CNN structures). This study is partially motivated by recent studies (34, 35) in computer vision. The thorough quantitative analysis and evaluation on deep CNN or sparsity image coding methods elucidate the emerging techniques of the time and provide useful suggestions for their future stages of development, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_5",
"text": " Two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification are studied in this work. On mediastinal LN detection, we surpass all currently reported results. We obtain 86%percent8686\\% sensitivity on 3 false positives (FP) per patient, versus the prior state-of-art sensitivities of 78%percent7878\\% (stacked shallow learning) and 70%percent7070\\% (CNN), as prior state-of-the-art. For the first time, ILD classification results under the patient-level five-fold cross-validation protocol (CV5) are investigated and reported. The ILD dataset contains 905 annotated image slices with 120 patients and 6 ILD labels. Such sparsely annotated datasets are generally difficult for CNN learning, due to the paucity of labeled instances. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_6",
"text": " Evaluation protocols and details are critical to deriving significant empirical findings . Our experimental results suggest that different CNN architectures and dataset re-sampling protocols are critical for the LN detection tasks where the amount of labeled training data is sufficient and spatial contexts are local. Since LN images are more flexible than ILD images with respect to resampling and reformatting, LN datasets may be more readily augmented by such image transformations. As a result, LN datasets contain more training and testing data instances (due to data auugmentation) than ILD datasets. They nonetheless remain less comprehensive than natural image datasets, such as ImageNet. Fine-tuning ImageNet-trained models for ILD classification is clearly advantageous and yields early promising results, when the amount of labeled training data is highly insufficient and multi-class categorization is used, as opposed to the LN dataset’s binary class categorization. Another significant finding is that CNNs trained from scratch or fine-tuned from ImageNet models consistently outperform CNNs that merely use off-the-shelf CNN features, in both the LN and ILD classification problems. We further analyze, via CNN activation visualizations, when and why transfer learning from non-medical to medical images in CADe problems can be valuable. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_7",
"text": " We employ CNNs (with the characteristics defined above) to thoraco-abdominal lymph node (LN) detection (evaluated separately on the mediastinal and abdominal regions) and interstitial lung disease (ILD) detection. For LN detection, we use randomly sampled 2.5D views in CT . We use 2D CT slices (38, 39, 40) for ILD detection. We then evaluate and compare CNN performance results. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_8",
"text": " Until the detection aggregation approach (22, 41), thoracoabdominal lymph node (LN) detection via CADe mechanisms has yielded poor performance results. In , each 3D LN candidate produces up to 100 random 2.5D orthogonally sampled images or views which are then used to train an effective CNN model. The best performance on abdominal LN detection is achieved at 83%percent8383\\% recall on 3FP per patient , using a “Cifar-10” CNN. Using the thoracoabdominal LN detection datasets , we aim to surpass this CADe performance level, by testing different CNN architectures, exploring various dataset re-sampling protocols, and applying transfer learning from ImageNet pre-trained CNN models. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_9",
"text": " Interstitial lung disease (ILD) comprises more than 150 lung diseases affecting the interstitium, which can severely impair the patient’s ability to breathe. Gao et al. investigate the ILD classification problem in two scenarios: 1) slice-level classification: assigning a holistic two-dimensional axial CT slice image with its occurring ILD disease label(s); and 2) patch-level classification: a/ sampling patches within the 2D ROIs (Regions of Interest provided by ), then b/ classifying patches into seven category labels ( six disease labels and one “healthy” label). Song et al. (38, 39) only address the second sub-task of patch-level classification under the “leave-one-patient-out” (LOO) criterion. By training on the moderate-to-small scale ILD dataset , our main objective is to exploit and benchmark CNN based ILD classification performances under the CV5 metric (which is more realistic and unbiased than LOO (38, 39) and hard-split ), with and without transfer learning. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_10",
"text": " Thoracoabdominal Lymph Node Datasets. We use the publicly available dataset from (22, 41). There are 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans. To facilitate comparison, we adopt the data preparation protocol of , where positive and negative LN candidates are sampled with the fields-of-view (FOVs) of 30mm to 45mm, surrounding the annotated and detected LN centers (obtained by a candidate generation process). More precisely, (22, 41, 36) follow a coarse-to-fine CADe scheme, partially inspired by , which operates with ∼100%similar-toabsentpercent100\\sim 100\\% detection recalls at the cost of approximately 40 false or negative LN candidates per patient scan. In this work, positive and negative LN candidate are first sampled up to 200 times with translations and rotations. Afterwards, negative LN samples are randomly re-selected at a lower rate close to the total number of positives. LN candidates are randomly extracted from fields-of-view (FOVs) spanning 35mm to 128mm in soft-tissue window (-100, 200HU). This allows us to capture multiple spatial scales of image context (43, 44)). The samples are then rescaled to a 64×64646464\\times 64 pixel resolution via B-spline interpolation. A few examples of LNs with axial, coronal, and sagittal views encoded in RGB color images are shown in Figure 1. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_11",
"text": " Unlike the heart or the liver, lymph nodes have no pre-determined anatomic orientation. Hence, the purely random image resampling (with respect to scale, displacement and orientation) and reformatting (the axial, coronal, and sagittal views are in any system randomly resampled coordinates) is a natural choice, which also happens to yield high CNN performance. Although we integrate three channels of information from three orthogonal views for LN detection, the pixel-wise spatial correlations between or among channels are not necessary. The convolutional kernels in the lower level CNN architectures can learn the optimal weights to linearly combine the observations from the axial, coronal, and sagittal channels by computing their dot-products. Transforming axial, coronal, and sagittal representations to RGB also facilitates transfer learning from CNN models trained on ImageNet. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_12",
"text": " This learning representation (i.e., “built-in CNN”) is flexible, in that it naturally combines multiple sources or channels of information. In the recent literature , even heterogeneous class-conditional probability maps can be combined with raw images to improve performance. This set-up is similar to that of other works in computer vision, such as , where heterogeneous image information channels are jointly fed into the CNN convolutional layers for high-accuracy human parsing and segmentation. Finally, if there are correlations among CNN input channels, one may observe the corresponding correlated patterns in the learned filters. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_13",
"text": " In summary, the assumption that there are or must be pixel-wise spatial correlations among input channels does not apply to the CNN model representation. For other medical imaging problems, such as pulmonary embolism detection , in which orientation can be constrained along the attached vessel axis, vessel-aligned multi-planar image representation (MPR) is more effective than randomly aligned MPR. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_14",
"text": " Interstitial Lung Disease Dataset. We utilize the publicly available dataset of . It contains 905 image slices from 120 patients, with six lung tissue types annotations containing at least one of the following: healthy (NM), emphysema (EM), ground glass (GG), fibrosis (FB), micronodules (MN) and consolidation (CD) (Figure 3). At the slice level, the objective is to classify the status of “presence/absence” of any of the six ILD classes for an input axial CT slice . Characterizing an arbitrary CT slice against any possible ILD type, without any manual ROI (in contrast to (38, 39)), can be useful for large-scale patient screening. For slice-level ILD classification, we sampled the slices 12 times with random translations and rotations. After this, we balanced the numbers of CT slice samples for the six classes by randomly sampling several instances at various rates. For patch-based classification, we sampled up to 100 patches of size 64×64646464\\times 64 from each ROI. This dataset is divided into five folds with disjoint patient subsets. The average number of CT slices (training instances) per fold is small, as shown in Table I. Slice-level ILD classification is a very challenging task where CNN models need to learn from very small numbers of training examples and predict ILD labels on unseen patients. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_15",
"text": " In the publicly available ILD dataset, very few CT slices are labeled as normal or healthy. The remaining CT slices cannot be simply classified as normal, because many ILD disease regions or slices have not yet been labeled. ILD is a partially labeled database; this is one of its main limitations. Research is being conducted to address this issue. In particular, has proposed to fully label the ILD dataset pixel-wise via proposed segmentation label propagation. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_16",
"text": " To leverage the CNN architectures designed for color images and to transfer CNN parameters pre-trained on ImageNet, we transform all gray-scale axial CT slice images via three CT window ranges: lung window range (-1400, -200HU), high-attenuation range (-160, 240HU), and low-attenuation range (-1400; -950HU). We then encode the transformed images into RGB channels (to be aligned with the input channels of CNN models (4, 33) pre-trained from natural image datasets ). The low-attenuation CT window is useful for visualizing certain texture patterns of lung diseases (especially emphysema). The usage of different CT attenuation channels improves classification results over the usage of a single CT windowing channel, as demonstrated in . More importantly, these CT windowing processes do not depend on the lung segmentation, which instead is directly defined in the CT HU space. Figure 4 shows a representative example of lung, high-attenuation, and low-attenuation CT windowing for an axis lung CT slice. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
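The three-window RGB encoding described in the passage above is simple to state in code. The following is a minimal numpy sketch assuming plain arrays of Hounsfield units; the helper names and the linear scaling to [0, 255] are the editor's assumptions, not code from the paper.

```python
import numpy as np

def hu_window(slice_hu, lo, hi):
    """Linearly map the HU range [lo, hi] to [0, 255], clipping values outside it."""
    return ((np.clip(slice_hu, lo, hi) - lo) / (hi - lo) * 255.0).astype(np.uint8)

def ct_slice_to_rgb(slice_hu):
    """Encode one axial CT slice as a 3-channel image using the three CT windows
    named in the passage: lung, high-attenuation and low-attenuation."""
    lung = hu_window(slice_hu, -1400, -200)
    high = hu_window(slice_hu, -160, 240)
    low = hu_window(slice_hu, -1400, -950)
    return np.stack([lung, high, low], axis=-1)  # (H, W, 3), ready for an ImageNet-style CNN
```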
{
"id": "1602.03409_all_17",
"text": " As observed in , lung segmentation is crucial to holistic slice-level ILD classification. We empirically compare performance in two scenarios with a rough lung segmentation111This can be achieved by segmenting the lung using simple label-fusion methods . In the first case, we overlay the target image slice with the average lung mask among the training folds. In the second, we perform simple morphology operations to obtain the lung boundary. In order to retain information from the inside of the lung, we apply Gaussian smoothing to the regions outside of the lung boundary. There is no significant difference between two setups. Due to the high precision of CNN based image processing, highly accurate lung segmentation is not necessary . The localization of ILD regions within the lung is simultaneously learned through selectively weighted CNN reception fields in the deepest convolutional layers during the classification based CNN training (49, 50). Some areas outside of the lung appear in both healthy or diseased images. CNN training learns to ignore them by setting very small filter weights around the corresponding regions (Figure 13). This observation is validated by . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_18",
"text": " In this study, we explore, evaluate and analyze the influence of various CNN Architectures, dataset characteristics (when we need more training data or better models for object detection ) and CNN transfer learning from non-medical to medical image domains. These three key elements of building effective deep CNN models for CADe problems are described below. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_19",
"text": " We mainly explore three convolutional neural network architectures (CifarNet (5, 22), AlexNet and GoogLeNet ) with different model training parameter values. The current deep learning models (22, 52, 53) in medical image tasks are at least 2∼5similar-to252\\sim 5 orders of magnitude smaller than even AlexNet . More complex CNN models (22, 52) have only about 150K or 15K parameters. Roth et al. adopt the CNN architecture tailored to the Cifar-10 dataset and operate on image windows of 32×32×33232332\\times 32\\times 3 pixels for lymph node detection, while the simplest CNN in has only one convolutional, pooling, and FC layer, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_20",
"text": " We use CifarNet as used in as a baseline for the LN detection. AlexNet and GoogLeNet are also modified to evaluate these state-of-the-art CNN architecture from ImageNet classification task to our CADe problems and datasets. A simplified illustration of three CNN architectures exploited is shown in Figure 5. CifarNet always takes 32×32×33232332\\times 32\\times 3 image patches as input while AlexNet and GoogLeNet are originally designed for the fixed image dimension of 256×256×32562563256\\times 256\\times 3 pixels. We also reduced the filter size, stride and pooling parameters of AlexNet and GoogLeNet to accommodate a smaller input size of 64×64×36464364\\times 64\\times 3 pixels. We do so to produce and evaluate “simplified” AlexNet and GoogLeNet versions that are better suited to the smaller scale training datasets common in CADe problems. Throughout the paper, we refer to the models as CifarNet (32x32) or CifarNet (dropping 32x32); AlexNet (256x256) or AlexNet-H (high resolution); AlexNet (64x64) or AlexNet-L (low resolution); GoogLeNet (256x256) or GoogLeNet-H and GoogLeNet (64x64) or GoogLeNet-L (dropping 3 since all image inputs are three channels). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_21",
"text": " CifarNet, introduced in , was the state-of-the-art model for object recognition on the Cifar10 dataset, which consists of 32×32323232\\times 32 images of 10 object classes. The objects are normally centered in the images. Some example images and class categories from the Cifar10 dataset are shown in Figure 7. CifarNet has three convolution layers, three pooling layers, and one fully-connected layer. This CNN architecture, also used in has about 0.15 million free parameters. We adopt it as a baseline model for the LN detection. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_22",
"text": " The AlexNet architecture was published in , achieved significantly improved performance over the other non-deep learning methods for ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012. This success has revived the interest in CNNs in computer vision. ImageNet consists of 1.2 million 256×256256256256\\times 256 images belonging to 1000 categories. At times, the objects in the image are small and obscure, and thus pose more challenges for learning a successful classification model. More details about the ImageNet dataset will be discussed in Sec. III-B. AlexNet has five convolution layers, three pooling layers, and two fully-connected layers with approximately 60 million free parameters. AlexNet is our default CNN architecture for evaluation and analysis in the remainder of the paper. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_23",
"text": " The GoogLeNet model proposed in , is significantly more complex and deep than all previous CNN architectures. More importantly, it also introduces a new module called “Inception”, which concatenates filters of different sizes and dimensions into a single new filter (refer to Figure 6). Overall, GoogLeNet has two convolution layers, two pooling layers, and nine “Inception” layers. Each “Inception” layer consists of six convolution layers and one pooling layer. An illustration of an “Inception” layer (inception3a) from GoogLeNet is shown in Figure 6. GoogLeNet is the current state-of-the-art CNN architecture for the ILSVRC challenge, where it achieved 5.5% top-5 classification error on the ImageNet challenge, compared to AlexNet’s 15.3% top-5 classification error. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_24",
"text": " ImageNet has more than 1.2 million 256×256256256256\\times 256 images categorized under 1000 object class categories. There are more than 1000 training images per class. The database is organized according to the WordNet hierarchy, which currently contains only nouns in 1000 object categories. The image-object labels are obtained largely through crowd-sourcing, e.g., Amazon Mechanical Turk, and human inspection. Some examples of object categories in ImageNet are “sea snake”, “sandwich”, “vase”, “leopard”, etc. ImageNet is currently the largest image dataset among other standard datasets for visual recognition. Indeed, the Caltech101, Caltech256 and Cifar10 dataset merely contain 60000 32×32323232\\times 32 images and 10 object classes. Furthermore, due to the large number (1000+) of object classes, the objects belonging to each ImageNet class category can be occluded, partial and small, relative to those in the previous public image datasets. This significant intra-class variation poses greater challenges to any data-driven learning system that builds a classifier to fit given data and generalize to unseen data. For comparison, some example images of Cifar10 dataset and ImageNet images in the “tennis ball” class category are shown in Figure 7. The ImageNet dataset is publicly available, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has become the standard benchmark for large-scale object recognition. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_25",
"text": " When learned from scratch, all the parameters of CNN models are initialized with random Gaussian distributions and trained for 30 epochs with the mini-batch size of 50 image instances. Training convergence can be observed within 30 epochs. The other hyperparameters are momentum: 0.9; weight decay: 0.0005; (base) learning rate: 0.01, decreased by a factor of 10 at every 10 epochs. We use the Caffe framework and NVidia K40 GPUs to train the CNNs. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
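The passage above fully specifies the from-scratch training recipe (SGD with momentum 0.9, weight decay 0.0005, base learning rate 0.01 divided by 10 every 10 epochs, mini-batches of 50, 30 epochs). The paper used Caffe; the following is only a rough PyTorch-style rendering of those hyper-parameters, with the loop structure and function names assumed by the editor.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

def make_optimizer(model: nn.Module):
    """SGD with the quoted hyper-parameters: momentum 0.9, weight decay 0.0005,
    base learning rate 0.01 decreased by a factor of 10 every 10 epochs."""
    opt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
    sched = StepLR(opt, step_size=10, gamma=0.1)
    return opt, sched

def train(model, loader, device="cuda"):
    """Training-loop skeleton: 30 epochs over a loader built with batch_size=50."""
    criterion = nn.CrossEntropyLoss()
    opt, sched = make_optimizer(model)
    model.to(device).train()
    for epoch in range(30):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()
        sched.step()  # step the learning-rate schedule once per epoch
```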
{
"id": "1602.03409_all_26",
"text": " AlexNet and GoogLeNet CNN models can be either learned from scratch or fine-tuned from pre-trained models. Girshick et al. find that, by applying ImageNet pre-trained ALexNet to PASCAL dataset , performances of semantic 20-class object detection and segmentation tasks significantly improve over previous methods that use no deep CNNs. AlexNet can be fine-tuned on the PASCAL dataset to surpass the performance of the ImageNet pre-trained AlexNet, although the difference is not as significant as that between the CNN and non-CNN methods. Similarly, (57, 58) also demonstrate that better performing deep models are learned via CNN transfer learning from ImageNet to other datasets of limited scales. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_27",
"text": " Our hypothesis on CNN parameter transfer learning is the following: despite the disparity between natural images and natural images, CNNs comprehensively trained on the large scale well-annotated ImageNet may still be transferred to make medical image recognition tasks more effective. Collecting and annotating large numbers of medical images still poses significant challenges. On the other hand, the mainstream deep CNN architectures (e.g., AlexNet and GoogLeNet) contain tens of millions of free parameters to train, and thus require sufficiently large numbers of labeled medical images. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_28",
"text": " For transfer learning, we follow the approach of (57, 6) where all CNN layers except the last are fine-tuned at a learning rate 10 times smaller than the default learning rate. The last fully-connected layer is random initialized and freshly trained, in order to accommodate the new object categories in our CADe applications. Its learning rate is kept at the original 0.01. We denote the models with random initialization or transfer learning as AlexNet-RI and AlexNet-TL, and GoogLeNet-RI and GoogLeNet-TL. We found that the transfer learning strategy yields the best performance results. Determining the optimal learning rate for different layers is challenging, especially for very deep networks such as GoogLeNet. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
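The fine-tuning recipe above (all transferred layers at one tenth of the base learning rate, a freshly initialized last layer at the full 0.01) can be expressed with per-parameter-group learning rates. The sketch below is a modern PyTorch/torchvision analogue assumed by the editor, not the authors' Caffe configuration; the layer indexing and weight names come from torchvision's AlexNet.

```python
import torch.nn as nn
from torch import optim
from torchvision import models

def build_finetune_model(num_classes: int):
    """AlexNet-TL-style set-up: reuse ImageNet weights, replace the last
    fully-connected layer, and train it 10x faster than the transferred layers."""
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)  # fresh last FC layer

    new_params = list(net.classifier[6].parameters())
    new_ids = {id(p) for p in new_params}
    old_params = [p for p in net.parameters() if id(p) not in new_ids]

    optimizer = optim.SGD(
        [{"params": old_params, "lr": 0.001},   # transferred layers: 10x smaller LR
         {"params": new_params, "lr": 0.01}],   # randomly initialized last layer: base LR
        lr=0.01, momentum=0.9, weight_decay=0.0005)
    return net, optimizer
```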
{
"id": "1602.03409_all_29",
"text": " We also perform experiments using “off-the-shelf” CNN features of AlexNet pre-trained on ImageNet and training only the final classifier layer to complete the new CADe classification tasks. Parameters in the convolutional and fully connected layers are fixed and are used as deep image extractors, as in (10, 9, 12). We refer to this model as AlexNet-ImNet in the remainder of the paper. Note that (10, 9, 12) train support vector machines and random forest classifiers using ImageNet pre-trained CNN features. Our simplified implementation is intended to determine whether fine-tuning the “end-to-end” CNN network is necessary to improve performance, as opposed to merely training the final classification layer. This is a slight modification from the method described in (10, 9, 12). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
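The AlexNet-ImNet baseline described above (frozen pre-trained layers used as a fixed feature extractor, only a new final classifier trained) differs from full fine-tuning only in which parameters receive gradients. A possible PyTorch sketch, under the editor's assumptions about a present-day reimplementation, follows.

```python
import torch.nn as nn
from torch import optim
from torchvision import models

def alexnet_off_the_shelf(num_classes: int):
    """AlexNet-ImNet-style baseline: keep the ImageNet-pretrained layers frozen
    and train only a fresh final classifier layer."""
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in net.parameters():
        p.requires_grad = False                      # freeze convolutional and FC layers
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    optimizer = optim.SGD(net.classifier[6].parameters(), lr=0.01, momentum=0.9)
    return net, optimizer
```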
{
"id": "1602.03409_all_30",
"text": " Finally, transfer learning in CNN representation, as empirically verified in previous literature (59, 60, 61, 11, 62), can be effective in various cross-modality imaging settings (RGB images to depth images (59, 60), natural images to general CT and MRI images , and natural images to neuroimaging or ultrasound data). More thorough theoretical studies on cross-modality imaging statistics and transferability will be needed for future studies. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_31",
"text": " In this section, we evaluate and compare the performances of nine CNN model configurations (CifarNet, AlexNet-ImNet, AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-L, GoogLeNet-RI-H, GoogLeNet-TL-H, GoogLeNet-RI-L and combined) on two important CADe problems using publicly available datasets (22, 41, 37). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_32",
"text": " We train and evaluate CNNs using three-fold cross-validation (folds are split into disjoint sets of patients), with the different CNN architectures described above. In testing, each LN candidate has multiple random 2.5D views tested by CNN classifiers to generate LN class probability scores. We follow the random view aggregation by averaging probabilities, as in . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
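The random-view aggregation mentioned above amounts to averaging the CNN's class probabilities over the multiple random 2.5D views of one candidate. A tiny numpy sketch (editor's illustration; the positive-class index and threshold are assumptions):

```python
import numpy as np

def aggregate_view_scores(view_probs):
    """Average CNN softmax outputs over the random 2.5D views of one LN candidate.

    view_probs : (n_views, n_classes) array of per-view class probabilities.
    Returns the averaged class-probability vector for the candidate.
    """
    return np.asarray(view_probs, dtype=float).mean(axis=0)

# Example: call a candidate positive if its averaged LN probability
# (assumed to be class index 1) exceeds an operating threshold.
probs = aggregate_view_scores([[0.9, 0.1], [0.4, 0.6], [0.3, 0.7]])
is_ln = probs[1] > 0.5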
{
"id": "1602.03409_all_33",
"text": " We first sample the LN image patches at a 64×64646464\\times 64 pixel resolution. We then up-sample the 64×64646464\\times 64 pixel LN images via bi-linear interpolation to 256×256256256256\\times 256 pixels, in order to accommodate AlexNet-RI-L, AlexNet-TL-H, GoogLeNet-RI-H and GoogLeNet-TL-H. For the modified AlexNet-RI-L at (64×64646464\\times 64) pixel resolution, we reduce the number of first layer convolution filters from 96 to 64 and reduce the stride from 4 to 2. For the modified GoogLeNet-RI (64×64646464\\times 64), we decrease the number of first layer convolution filters from 64 to 32, the pad size from 3 to 2, the kernel size from 7 to 5, stride from 2 to 1 and the stride of the subsequent pooling layer from 2 to 1. We slightly reduce the number of convolutional filters in order to accommodate the smaller input image sizes of target medical image datasets (22, 37), while preventing over-fitting. This eventually improves performance on patch-based classification. CifarNet is used in to detect LN samples of 32×32×33232332\\times 32\\times 3 images. For consistency purposes, we down-sample 64×64×36464364\\times 64\\times 3 resolution LN sample images to the dimension of 32×32×33232332\\times 32\\times 3. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_34",
"text": " Results for lymph node detection in the mediastinum and abdomen are reported in Table II. FROC curves are illustrated in Figure 8. The area-under-the-FROC-curve (AUC) and true positive rate (TPR, recall or sensitivity) at three false positives per patient (TPR/3FP) are used as performance metrics. Of the nine investigated CNN models, CifarNet, AlexNet-ImNet and GoogLeNet-RI-H generally yielded the least competitive detection accuracy results. Our LN datasets are significantly more complex (i.e., display much larger within-class appearance variations), especially due to the extracted fields-of-view (FOVs) of (35mm-128mm) compared to (30mm-45mm) in , where CifarNet is also employed. In this experiment, CifarNet is under-trained with respect to our enhanced LN datasets, due to its limited input resolution and parameter complexity. The inferior performance of AlexNet-ImNet implies that using the pre-trained ImageNet CNNs alone as “off-the-shelf” deep image feature extractors may not be optimal or adequate for mediastinal and abdominal LN detection tasks. To complement “off-the-shelf” CNN features, (10, 9, 12) all add and integrate various other hand-crafted image features as hybrid inputs for the final CADe classification. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
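The TPR/3FP operating point used above reads a sensitivity value off the FROC curve at three false positives per patient scan. The following is a deliberately simplified, candidate-level approximation sketched by the editor (it ignores lesion-merging details of a full FROC analysis and assumes at least one positive candidate):

```python
import numpy as np

def tpr_at_fp_per_patient(scores, labels, patient_ids, fp_per_patient=3.0):
    """Sweep a threshold over candidate scores and report the sensitivity (TPR)
    at the best operating point with at most `fp_per_patient` false positives per scan.

    scores, labels (0/1), patient_ids are parallel 1-D arrays over all LN candidates.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_patients = len(set(patient_ids))
    n_pos = labels.sum()
    best_tpr = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        fp = np.logical_and(pred, labels == 0).sum() / n_patients
        tpr = np.logical_and(pred, labels == 1).sum() / n_pos
        if fp <= fp_per_patient:
            best_tpr = max(best_tpr, tpr)
    return best_tpr
```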
{
"id": "1602.03409_all_35",
"text": " GoogLeNet-RI-H performs poorly, as it is susceptible to over-fitting. No sufficient data samples are available to train GoogLeNet-RI-H with random initialization. Indeed, due to GoogLeNet-RI-H’s complexity and 22-layer depth, million-image datasets may be required to properly train this model. However, GoogLeNet-TL-H significantly improves upon GoogLeNet-RI-H (0.81 versus 0.61 TPR/3FP in mediastinum; 0.70 versus 0.48 TPR/3FP in abdomen). This indicates that transfer learning offers a much better initialization of CNN parameters than random initialization. Likewise, AlexNet-TL-H consistently outperforms AlexNet-RI-H, though by smaller margins (0.81 versus 0.79 TPR/3FP in mediastinum; 0.69 versus 0.67 TPR/3FP in abdomen). This is also consistent with the findings reported for ILD detection in Table III and Figure 11. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_36",
"text": " GoogLeNet-TL-H yields results similar to AlexNet-TL-H’s for the mediastinal LN detection, and slightly outperforms Alex-Net-H for abdominal LN detection. AlexNet-RI-H exhibits less severe over-fitting than GoogLeNet-RI-H. We also evaluate a simple ensemble by averaging the probability scores from five CNNs: AlexNet-RI-H, AlexNet-TL-H, AlexNet-RI-H, GoogLeNet-TL-H and GoogLeNet-RI-L. This combined ensemble outputs the classification accuracies matching or slightly exceeding the best performing individual CNN models on the mediastinal or abdominal LN detection tasks, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_37",
"text": " Many of our CNN models achieve notably better (FROC-AUC and TPR/3FP) results than the previous state-of-the-art models for mediastinal LN detection: GoogLeNet-RI-L obtains an AUC=0.95 and 0.85 TPR/3FP, versus AUC=0.92 and 0.70 TPR/3FP and 0.78 TPR/3FP which uses stacked shallow learning. This difference lies in the fact that annotated lymph node segmentation masks are required to learn a mid-level semantic boundary detector , whereas CNN approaches only need LN locations for training . In abdominal LN detection, obtains the best trade-off between its CNN model complexity and sampled data configuration. Our best performing CNN model is GoogLeNet-TL (256x256) which obtains an AUC=0.92 and 0.70 TPR/3FP. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_38",
"text": " The main difference between our dataset preparation protocol and that from is a more aggressive extraction of random views within a much larger range of FOVs. The usage of larger FOVs to capture more image spatial context is inspired by deep zoom-out features that improve semantic segmentation. This image sampling scheme contributes to our best reported performance results in both mediastinal LN detection (in this paper) and automated pancreas segmentation . As shown in Figure 1, abdominal LNs are surrounded by many other similar looking objects. Meanwhile, mediastinal LNs are more easily distinguishable, due to the images’ larger spatial contexts. Finally, from the perspective of the data-model trade-off: “Do We Need More Training Data or Better Models?” , more abdomen CT scans from distinct patient populations need to be acquired and annotated, in order to take full advantage of deep CNN models of high capacity. Nevertheless, deeper and wider CNN models (e.g., GoogLeNet-RI-L and GoogLeNet-TL-H versus Cifar-10 ) have shown improved results in the mediastinal LN detection. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_39",
"text": " Figure 9 provides examples of misclassified lymph nodes (in axial view) (both false negatives (Left) and false positives(Right)), from the Abdomen and Mediastinum datasets. The overall reported LN detection results are clinically significant, as indicated in . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_40",
"text": " The CNN models evaluated in this experiment are 1) AlexNet-RI (training from scratch on the ILD dataset with random initialization); 2) AlexNet-TL (with transfer learning from ); 3) AlexNet-ImNet: pre-trained ImageNet-CNN model with only the last cost function layer retrained from random initialization, according to the six ILD classes (similar to but without using additional hand-crafted non-deep feature descriptors, such as GIST and BoVW); 4) GoogLeNet-RI (random initialization); 5) GoogLeNet-TL (GoogLeNet with transfer learning from ). All ILD images (patches of 64×64646464\\times 64 and CT axial slices of 512×512512512512\\times 512) are re-sampled to a fixed dimension of 256×256256256256\\times 256 pixels. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_41",
"text": " We evaluate the ILD classification task with five-fold CV on patient-level split, as it is more informative for real clinical performance than LOO. The classification accuracy rates for interstitial lung disease detection are shown in Table III. Two sub-tasks on ILD patch and slice classifications are conducted. In general, patch-level ILD classification is less challenging than slice-level classification, as far more data samples can be sampled from the manually annotated ROIs (up to 100 image patches per ROI), available from . From Table III, all five deep models evaluated obtain comparable results within the range of classification accuracy rates (0.74,0.76)0.740.76(0.74,0.76). Their averaged model achieves a slightly better accuracy of 0.79. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
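The patient-level five-fold cross-validation used above requires that no patient contributes samples to both the training and the test side of a split. A short scikit-learn sketch of such a split, assumed by the editor as one possible implementation:

```python
from sklearn.model_selection import GroupKFold

def patient_level_folds(sample_ids, labels, patient_ids, n_splits=5):
    """Yield (train, test) index arrays such that no patient appears on both
    sides of a split, mirroring the CV5 protocol described above."""
    gkf = GroupKFold(n_splits=n_splits)
    for train_idx, test_idx in gkf.split(sample_ids, labels, groups=patient_ids):
        yield train_idx, test_idx
```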
{
"id": "1602.03409_all_42",
"text": " F1-scores (38, 39, 54) and the confusion matrix (Table V) for patch-level ILD classification using GoogLeNet-TL under five-fold cross-validation (we denote as Patch-CV5) are also computed. F1-scores are reported on patch classification only (32×32323232\\times 32 pixel patches extracted from manual ROIs) (38, 39, 54), as shown in Table IV. Both and use the evaluation protocol of “leave-one-patient-out” (LOO), which is arguably much easier and not directly comparable to 10-fold CV or our Patch-CV5. In this study, we classify six ILD classes by adding a consolidation (CD) class to five classes of healthy (normal - NM), emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) in (38, 39, 54). Patch-CV10 and Patch-CV5 report similar medium to high F-scores. This implies that the ILD dataset (although one of the mainstream public medical image datasets) may not adequately represent ILD disease CT lung imaging patterns, over a population of only 120 patients. Patch-CV5 yields higher F-scores than and classifies the extra consolidation (CD) class. At present, the most pressing task is to drastically expand the dataset or to explore across-dataset deep learning on the combined ILD and LTRC datasets . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_43",
"text": " Recently, Gao et al. have argued that a new CADe protocol on holistic classification of ILD diseases directly, using axial CT slice attenuation patterns and CNN, may be more realistic for clinical applications. We refer to this as slice-level classification, as image patch sampling from manual ROIs can be completely avoided (hence, no manual ROI inputs will be provided). The experimental results in are conducted with a patient-level hard split of 100 (training) and 20 (testing). The method’s testing F-scores (i.e., Slice-Test) are given in Table IV. Note that the F-scores in are not directly comparable to our results, due to different evaluation criteria. Only Slice-Test is evaluated and reported in , and we find that F-scores can change drastically from different rounds of the five-fold CV. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_44",
"text": " While it is a more practical CADe scheme, slice-level CNN learning is very challenging, as it is restricted to only 905 CT image slices with tagged ILD labels. We only benchmark the slice-level ILD classification results in this section. Even with the help of data augmentation (described in Sec. II), the classification accuracy of GoogLeNet-TL from Table III is only 0.57. However, transfer learning from ImageNet pre-trained model is consistently beneficial, as evidenced by AlexNet-TL (0.46) versus AlexNet-RI (0.44), and GoogLeNet-TL (0.57) versus GoogLeNet-RI (0.41). It especially prevents GoogLeNet from over-fitting on the limited CADe datasets. Finally, when the cross-validation is conducted by randomly splitting the set of all 905 CT axial slices into five folds, markedly higher F-scores are obtained (Slice-Random in Table IV). This further validates the claim that the dataset poorly generalizes ILDs for different patients. Figure 10 shows examples of misclassified ILD patches (in axial view), with their ground truth labels and inaccurately classified labels. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_45",
"text": " No existing work has reached the performance requirements for a realistic clinical setting , in which simple ROI-guided image patch extraction and classification (which requires manual ROI selection by clinicians) is implemented. The main goal of this paper is to investigate the three factors (CNN architectures, dataset characteristics and transfer learning) that affect performance on a specific medical image analysis problem and to ultimately deliver clinically relevant results. For ILD classification, the most critical performance bottlenecks are the challenge of cross-dataset learning and the limited patient population size. We attempt to overcome these obstacles by merging the ILD and LTRC datasets. Although the ILD and LTRC datasets (used in ) were generated and annotated separately, they contain many common disease labels. For instance, the ILD disease classes emphysema (EM), ground glass (GG), fibrosis (FB), and micronodules (MN) belong to both datasets, and thus can be jointly trained/tested to form a larger and unified dataset. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_46",
"text": " Adapting fully convolutional CNN or FCNN to parse every pixel location in the ILD lung CT images or slices, or adapting other methods from CNN based semantic image segmentation using PASCAL or ImageNet, may improve accuracy and efficiency. However, current FCNN approaches (65, 66) lack adequate spatial resolution in their directly output label space. A segmentation label propagation method was recently proposed to provide full pixel-wise labeling of the ILD data images. In this work, we sample image patches from the slice using the ROIs for the ILD provided in the dataset, in order to be consistent with previous methods in patch-level (38, 39, 54) and slice-level classification . ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_47",
"text": " In this work, we mainly focus on AlexNet and GoogLeNet. AlexNet is the first notably successful CNN architecture on the ImageNet challenge and has rekindled significant research interests on CNN. GoogLeNet is the state-of-the-art deep model, which has outperformed other notable models, such as AlexNet, OverFeat, and VGGNet (67, 68) in various computer vision benchmarks. Likewise, a reasonable assumption is that OverFeat and VGGNet may generate quantitative performance results ranked between AlexNet’s and GoogLeNet’s. For completeness, we include the Overfeat and VGGNet in the following evaluations, to bolster our hypothesis. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_48",
"text": " OverFeat is described in as an integrated framework for using CNN for classification, localization and detection. Its architecture is similar to that of AlexNet, but contains far more parameters (e.g., 1024 convolution filters in both “conv4” and “conv5” layers compared to 384 and 256 convolution kernels in the “conv4” and “conv5” layers of AlexNet), and operates more densely (e.g., smaller kernel size of 2 in “pool2” layer “pool5” compared to the kernel size 3 in “pool2” and “pool5” of AlexNet) on the input image. Overfeat is the winning model of the ILSVRC 2013 in detection and classification tasks. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_49",
"text": " The VGGNet architecture is introduced in , where it is designed to significantly increase the depth of the existing CNN architectures with 16 or 19 layers. Very small 3×3333\\times 3 size convolutional filters are used in all convolution layers with a convolutional stride of size 1, in order to reduce the number of parameters in deeper networks. Since VGGNet is substantially deeper than the other CNN models, VGGNet is more susceptible to the vanishing gradient problem (69, 70, 71). Hence, the network may be more difficult to train. Training the network requires far more memory and computation time than AlexNet. We use the 16 layer variant as our default VGGNet model in our study. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_50",
"text": " The classification accuracy results for ILD slice and patch level classification of five CNN architectures (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet) are shown in Table VI. Based on the analysis in Sec. IV-B, transfer learning is only used for the slice level classification task. From Table VI, quantitative classification accuracy rates increase as the CNN model becomes more complex (CifarNet, AlexNet, Overfeat, VGGNet and GoogLeNet, in ascending order), for both ILD slice and patch level classification problems. The reported results validate our assumption that OverFeat’s and VGGNet’s performance levels fall between AlexNet’s and GoogLeNet‘s (this observation is consistent with the computer vision findings). CifarNet is designed for images with smaller dimensions (32×32323232\\times 32 images), and thus is not catered to classification tasks involving 256×256256256256\\times 256 images. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_51",
"text": " To investigate the performance difference between five-fold cross-validation (CV) in Sec. IV-B and leave-one-patient-out (LOO) validation, this experiment is performed under the LOO protocol. By comparing results in Table III (CV-5) to those in Table VI (LOO), one can see that LOO’s quantitative performances are remarkably better than CV-5’s. For example, in ILD slice-level classification, the accuracy level drastically increases from 0.46 to 0.867 using AlexNet-TL, and from 0.57 to 0.902 for GoogLeNet-TL. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_52",
"text": " CNN training is implemented with the Caffe deep learning framework, using a NVidia K40 GPU on Ubuntu 14.04 Linux OS. All models are trained for up to 90 epochs with early stopping criteria, where a model snapshot with low validation loss is taken for the final model. Other hyper-parameters are fixed as follows: momentum: 0.9; weight decay: 0.0005; and a step learning rate schedule with base learning rate of 0.01, decreased by a factor of 10 every 30 epochs. The image batch size is set to 128, except for GoogLeNet’s (64) and VGG-16’s (32), which are the maximum batch sizes that can fit in the NVidia K40 GPU with 12GB of memory capacity. Table VII illustrates the training time and memory requirements of the five CNN architectures on ILD patch-based classification up to 90 epochs. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_53",
"text": " Medical datasets are often “biased”, in that the number of healthy samples is much larger than the number of diseased instances, or that the numbers of images per class are uneven. In ILD dataset, the number of fibrosis samples is about 3.5 times greater than the number of emphysema samples. The number of non-LNs is 3∼4similar-to343\\sim 4 times greater than the number of LNs in lymph node detection. Different sampling or resampling rates are routinely applied to both ILD and LN detection to balance the data sample number or scale per class, as in. We refer this as “Equal Prior”. If we use the same sampling rate, that will lead to a “Biased Prior” across different classes. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_54",
"text": " Without loss of generality, after GoogLeNet is trained on the training sets under “Equal” or “Biased” priors, we compare its classification results on the balanced validation sets. Evaluating a classifier on a biased validation set will cause unfair assessment of its performance. For instance, a classifier that predicts every image patch as “non-LN” will still achieve a 70%percent7070\\% accuracy rate on a biased set with 3.53.53.5 times as many non-LN samples as LN samples. The classification accuracy results of GoogLeNet trained under two configurations are shown in Table VIII. Overall, it achieves lower accuracy results when trained with a “biased prior” in both tasks, and the accuracy difference for ILD patch-based classification is small. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
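The "Equal Prior" configuration discussed in the two passages above simply re-samples each class to a common count before training. A minimal numpy sketch, with the function name, seed and the choice of the smallest class as the default target being the editor's assumptions:

```python
import numpy as np

def resample_to_equal_prior(indices_by_class, n_per_class=None, seed=0):
    """Re-sample per-class training indices so every class contributes the same
    number of instances (an "Equal Prior" over classes).

    indices_by_class : dict mapping class label -> 1-D array of sample indices.
    n_per_class      : common target count; defaults to the smallest class size.
    """
    rng = np.random.default_rng(seed)
    if n_per_class is None:
        n_per_class = min(len(idx) for idx in indices_by_class.values())
    balanced = {}
    for cls, idx in indices_by_class.items():
        idx = np.asarray(idx)
        # Sample with replacement only when a class is smaller than the target.
        balanced[cls] = rng.choice(idx, size=n_per_class, replace=len(idx) < n_per_class)
    return balanced
```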
{
"id": "1602.03409_all_55",
"text": " In this section, we determine and analyze, via CNN visualization, the reasons for which transfer learning is beneficial to achieve better performance on CAD applications. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_56",
"text": " Thoracoabdominal LN Detection. In Figure 12, the first layer convolution filters from five different CNN architectures are visualized. We notice that without transfer learning (57, 6), somewhat blurry filters are learned (AlexNet-RI (256x256), AlexNet-RI (64x64), GoogLeNet-RI (256x256) and GoogLeNet-RI (64x64)). However, in AlexNet-TL (256x256), many higher orders of contrast- or edge-preserving patterns (that enable capturing image appearance details) are evidently learned through fine-tuning from ImageNet. With a smaller input resolution, AlexNet-RI (64x64) and GoogLeNet-RI (64x64) can learn image contrast filters to some degree; whereas, GoogLeNet-RI (256x256) and AlexNet-RI (256x256) have over-smooth low-level filters throughout. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_57",
"text": " ILD classification. We focus on analyzing visual CNN optimization traces and activations from the ILD dataset, as its slice-level setting is most similar to ImageNet’s. Indeed, both datasets use full-size images. The traces of the training loss, validation loss and validation accuracy of AlexNet-RI and AlexNet-TL, are shown in Figure 11. For AlexNet-RI in Figure 11 (a), the training loss significantly decreases as the number of training epochs increases, while the validation loss notably increases and the validation accuracy does not improve much before reaching a plateau. With transfer learning and fine-tuning, much better and consistent performances of training loss, validation loss and validation accuracy traces are obtained (see Figure 11 (b)). We begin the optimization problem – that of fine-tuning the ImageNet pre-trained CNN to classify a comprehensive set of images – by initializing the parameters close to an optimal solution. One could compare this process to making adults learn to classify ILDs, as opposed to babies. During the process, the validation loss, having remained at lower values throughout, achieves higher final accuracy levels than the validation loss on a similar problem with random initialization. Meanwhile, the training losses in both cases decrease to values near zero. This indicates that both AlexNet-RI and AlexNet-TL over-fit on the ILD dataset, due to its small instance size. The quantitative results in Table III indicate that AlexNet-TL and GoogLeNet-TL have consistently better classification accuracies than AlexNet-RI and GoogLeNet-RI, respectively. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_58",
"text": " The last pooling layer (pool-5) activation maps of the ImageNet pre-trained AlexNet (analogical to AlexNet-ImNet) and AlexNet-TL, obtained by processing two input images of Figure 2 (b,c), are shown in Figure 13 (a,b). The last pooling layer activation map summarizes the entire input image by highlighting which relative locations or neural reception fields relative to the image are activated. There are a total of 256 (6x6) reception fields in AlexNet . Pooling units where the relative image location of the disease region is present in the image are highlighted with green boxes. Next, we reconstruct the original ILD images using the process of de-convolution, back-propagating with convolution and un-pooling from the activation maps of the chosen pooling units . From the reconstructed images (Figure 13 bottom), we observe that with fine-tuning, AlexNet-TL detects and localizes objects of interest (ILD disease regions depicted in in Figure 2 (b) and (c)) better than AlexNet-ImNet. The filters shown in Figure 13 that better localize regions on the input images (Figure 2 (b) and (c)) respectively, produce relatively higher activations (in the top 5%) among all 512 reception field responses in the fine-tuned AlexNet-TL model. As observed in , the final CNN classification score can not be driven solely by a single strong activation in the receptions fields, but often by a sparse set of high activations (i.e., varying selective or sparse activations per input image). ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_59",
"text": " We summarize our findings as follows. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_60",
"text": " • Deep CNN architectures with 8, even 22 layers (4, 33), can be useful even for CADe problems where the available training datasets are limited. Previously, CNN models used in medical image analysis applications have often been 2∼5similar-to252\\sim 5 orders of magnitude smaller. • The trade-off between using better learning models and using more training data should be carefully considered when searching for an optimal solution to any CADe problem (e.g., mediastinal and abdominal LN detection). • Limited datasets can be a bottleneck to further advancement of CADe. Building progressively growing (in scale), well annotated datasets is at least as crucial as developing new algorithms. This has been accomplished, for instance, in the field of computer vision. The well-known scene recognition problem has made tremendous progress, thanks to the steady and continuous development of Scene-15, MIT Indoor-67, SUN-397 and Place datasets . • Transfer learning from the large scale annotated natural image datasets (ImageNet) to CADe problems has been consistently beneficial in our experiments. This sheds some light on cross-dataset CNN learning in the medical image domain, e.g., the union of the ILD and LTRC datasets , as suggested in this paper. • Finally, applications of off-the-shelf deep CNN image features to CADe problems can be improved by either exploring the performance-complementary properties of hand-crafted features (10, 9, 12), or by training CNNs from scratch and better fine-tuning CNNs on the target medical image dataset, as evaluated in this paper. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
},
{
"id": "1602.03409_all_61",
"text": " In this paper, we exploit and extensively evaluate three important, previously under-studied factors on deep convolutional neural networks (CNN) architecture, dataset characteristics, and transfer learning. We evaluate CNN performance on two different computer-aided diagnosis applications: thoraco-abdominal lymph node detection and interstitial lung disease classification. The empirical evaluation, CNN model visualization, CNN performance analysis, and conclusive insights can be generalized to the design of high performance CAD systems for other medical imaging tasks. ",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning"
}
] |
How long has this challenge been running?
|
The challenge has been running for the past 5 years [175].
|
[
175
] |
[
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompass both image classification (a task requiring an algorithm to determine what object classes are present in the image) as well as object detection (a task requiring an algorithm to localize all objects present in the image). ILSVRC follows in the footsteps of the PASCAL VOC challenge (Everingham et al.,, 2012), established in 2005, which set the precedent for standardized evaluation of recognition algorithms in the form of yearly competitions. As in PASCAL VOC, ILSVRC consists of two components: (1) a publically available dataset, and (2) an annual competition and corresponding workshop. The dataset allows for the development and comparison of categorical object recognition algorithms, and the competition and workshop provide a way to track the progress and discuss the lessons learned from the most successful and innovative entries each year. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_1",
"text": " The publically released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.222 In 2010, the test annotations were later released publicly; since then the test annotation have been kept hidden. Participants train their algorithms using the training images and then automatically annotate the test images. These predicted annotations are submitted to the evaluation server. Results of the evaluation are revealed at the end of the competition period and authors are invited to share insights at the workshop held at the International Conference on Computer Vision (ICCV) or European Conference on Computer Vision (ECCV) in alternate years. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_2",
"text": " ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_3",
"text": " In creating the dataset, several challenges had to be addressed. Scaling up from 19,737 images in PASCAL VOC 2010 to 1,461,406 in ILSVRC 2010 and from 20 object classes to 1000 object classes brings with it several challenges. It is no longer feasible for a small group of annotators to annotate the data as is done for other datasets (Fei-Fei et al.,, 2004; Criminisi,, 2004; Everingham et al.,, 2012; Xiao et al.,, 2010). Instead we turn to designing novel crowdsourcing approaches for collecting large-scale annotations (Su et al.,, 2012; Deng et al.,, 2009, 2014). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_4",
"text": " Some of the 1000 object classes may not be as easy to annotate as the 20 categories of PASCAL VOC: e.g., bananas which appear in bunches may not be as easy to delineate as the basic-level categories of aeroplanes or cars. Having more than a million images makes it infeasible to annotate the locations of all objects (much less with object segmentations, human body parts, and other detailed annotations that subsets of PASCAL VOC contain). New evaluation criteria have to be defined to take into account the facts that obtaining perfect manual annotations in this setting may be infeasible. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_5",
"text": " Once the challenge dataset was collected, its scale allowed for unprecedented opportunities both in evaluation of object recognition algorithms and in developing new techniques. Novel algorithmic innovations emerge with the availability of large-scale training data. The broad spectrum of object categories motivated the need for algorithms that are even able to distinguish classes which are visually very similar. We highlight the most successful of these algorithms in this paper, and compare their performance with human-level accuracy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_6",
"text": " Finally, the large variety of object classes in ILSVRC allows us to perform an analysis of statistical properties of objects and their impact on recognition algorithms. This type of analysis allows for a deeper understanding of object recognition, and for designing the next generation of general object recognition algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_7",
"text": " This paper has three key goals: 1. To discuss the challenges of creating this large-scale object recognition benchmark dataset, 2. To highlight the developments in object classification and detection that have resulted from this effort, and 3. To take a closer look at the current state of the field of categorical object recognition. The paper may be of interest to researchers working on creating large-scale datasets, as well as to anybody interested in better understanding the history and the current state of large-scale object recognition. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_8",
"text": " The collected dataset and additional information about ILSVRC can be found at: http://image-net.org/challenges/LSVRC/ ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_9",
"text": " We briefly discuss some prior work in constructing benchmark image datasets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_10",
"text": " Caltech 101 (Fei-Fei et al.,, 2004) was among the first standardized datasets for multi-category image classification, with 101 object classes and commonly 15-30 training images per class. Caltech 256 (Griffin et al.,, 2007) increased the number of object classes to 256 and added images with greater scale and background variability. The TinyImages dataset (Torralba et al.,, 2008) contains 80 million 32x32 low resolution images collected from the internet using synsets in WordNet (Miller,, 1995) as queries. However, since this data has not been manually verified, there are many errors, making it less suitable for algorithm evaluation. Datasets such as 15 Scenes (Oliva and Torralba,, 2001; Fei-Fei and Perona,, 2005; Lazebnik et al.,, 2006) or recent Places (Zhou et al.,, 2014) provide a single scene category label (as opposed to an object category). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_11",
"text": " The ImageNet dataset (Deng et al.,, 2009) is the backbone of ILSVRC. ImageNet is an image dataset organized according to the WordNet hierarchy (Miller,, 1995). Each concept in WordNet, possibly described by multiple words or word phrases, is called a “synonym set” or “synset”. ImageNet populates 21,841 synsets of WordNet with an average of 650 manually verified and full resolution images. As a result, ImageNet contains 14,197,122 annotated images organized by the semantic hierarchy of WordNet (as of August 2014). ImageNet is larger in scale and diversity than the other image classification datasets. ILSVRC uses a subset of ImageNet images for training the algorithms and some of ImageNet’s image collection protocols for annotating additional images for testing the algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_12",
"text": " Many datasets aim to provide richer image annotations beyond image-category labels. LabelMe (Russell et al.,, 2007) contains general photographs with multiple objects per image. It has bounding polygon annotations around objects, but the object names are not standardized: annotators are free to choose which objects to label and what to name each object. The SUN2012 (Xiao et al.,, 2010) dataset contains 16,873 manually cleaned up and fully annotated images more suitable for standard object detection training and evaluation. SIFT Flow (Liu et al.,, 2011) contains 2,688 images labeled using the LabelMe system. The LotusHill dataset (Yao et al.,, 2007) contains very detailed annotations of objects in 636,748 images and video frames, but it is not available for free. Several datasets provide pixel-level segmentations: for example, MSRC dataset (Criminisi,, 2004) with 591 images and 23 object classes, Stanford Background Dataset (Gould et al.,, 2009) with 715 images and 8 classes, and the Berkeley Segmentation dataset (Arbelaez et al.,, 2011) with 500 images annotated with object boundaries. OpenSurfaces segments surfaces from consumer photographs and annotates them with surface properties, including material, texture, and contextual information (Bell et al.,, 2013) . ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_13",
"text": " The closest to ILSVRC is the PASCAL VOC dataset (Everingham et al.,, 2010, 2014), which provides a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. Much of the design choices in ILSVRC have been inspired by PASCAL VOC and the similarities and differences between the datasets are discussed at length throughout the paper. ILSVRC scales up PASCAL VOC’s goal of standardized training and evaluation of recognition algorithms by more than an order of magnitude in number of object classes and images: PASCAL VOC 2012 has 20 object classes and 21,738 images compared to ILSVRC2012 with 1000 object classes and 1,431,167 annotated images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_14",
"text": " The recently released COCO dataset (Lin et al., 2014b, ) contains more than 328,000 images with 2.5 million object instances manually segmented. It has fewer object categories than ILSVRC (91 in COCO versus 200 in ILSVRC object detection) but more instances per category (27K on average compared to about 1K in ILSVRC object detection). Further, it contains object segmentation annotations which are not currently available in ILSVRC. COCO is likely to become another important large-scale benchmark. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_15",
"text": " ILSVRC makes extensive use of Amazon Mechanical Turk to obtain accurate annotations (Sorokin and Forsyth,, 2008). Works such as (Welinder et al.,, 2010; Sheng et al.,, 2008; Vittayakorn and Hays,, 2011) describe quality control mechanisms for this marketplace. (Vondrick et al.,, 2012) provides a detailed overview of crowdsourcing video annotation. A related line of work is to obtain annotations through well-designed games, e.g. (von Ahn and Dabbish,, 2005). Our novel approaches to crowdsourcing accurate image annotations are in Sections 3.1.3, 3.2.1 and 3.3.3. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_16",
"text": " There are several datasets with standardized online evaluation similar to ILSVRC: the aforementioned PASCAL VOC (Everingham et al.,, 2012), Labeled Faces in the Wild (Huang et al.,, 2007) for unconstrained face recognition, Reconstruction meets Recognition (Urtasun et al.,, 2014) for 3D reconstruction and KITTI (Geiger et al.,, 2013) for computer vision in autonomous driving. These datasets along with ILSVRC help benchmark progress in different areas of computer vision. Works such as (Torralba and Efros,, 2011) emphasize the importance of examining the bias inherent in any standardized dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_17",
"text": " We begin with a brief overview of ILSVRC challenge tasks in Section 2. Dataset collection and annotation are described at length in Section 3. Section 4 discusses the evaluation criteria of algorithms in the large-scale recognition setting. Section 5 provides an overview of the methods developed by ILSVRC participants. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_18",
"text": " Section 6 contains an in-depth analysis of ILSVRC results: Section 6.1 documents the progress of large-scale recognition over the years, Section 6.2 concludes that ILSVRC results are statistically significant, Section 6.3 thoroughly analyzes the current state of the field of object recognition, and Section 6.4 compares state-of-the-art computer vision accuracy with human accuracy. We conclude and discuss lessons learned from ILSVRC in Section 7. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_19",
"text": " The goal of ILSVRC is to estimate the content of photographs for the purpose of retrieval and automatic annotation. Test images are presented with no initial annotation, and algorithms have to produce labelings specifying what objects are present in the images. New test images are collected and labeled especially for this competition and are not part of the previously published ImageNet dataset (Deng et al.,, 2009). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_20",
"text": " ILSVRC over the years has consisted of one or more of the following tasks (years in parentheses):333In addition, ILSVRC in 2012 also included a taster fine-grained classification task, where algorithms would classify dog photographs into one of 120 dog breeds (Khosla et al.,, 2011). Fine-grained classification has evolved into its own Fine-Grained classification challenge in 2013 (Berg et al.,, 2013), which is outside the scope of this paper. 1. Image classification (2010-2014): Algorithms produce a list of object categories present in the image. 2. Single-object localization (2011-2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of one instance of each object category. 3. Object detection (2013-2014): Algorithms produce a list of object categories present in the image along with an axis-aligned bounding box indicating the position and scale of every instance of each object category. This section provides an overview and history of each of the three tasks. Table 1 shows summary statistics. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_21",
"text": " Data for the image classification task consists of photographs collected from Flickr444www.flickr.com and other search engines, manually labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_22",
"text": " For each image, algorithms produce a list of object categories present in the image. The quality of a labeling is evaluated based on the label that best matches the ground truth label for the image (see Section 4.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_23",
"text": " Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs (Deng et al.,, 2009). The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_24",
"text": " The single-object localization task, introduced in 2011, built off of the image classification task to evaluate the ability of algorithms to learn the appearance of the target object itself rather than its image context. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_25",
"text": " Data for the single-object localization task consists of the same photographs collected for the image classification task, hand labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. Additionally, every instance of this category is annotated with an axis-aligned bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_26",
"text": " For each image, algorithms produce a list of object categories present in the image, along with a bounding box indicating the position and scale of one instance of each object category. The quality of a labeling is evaluated based on the object category label that best matches the ground truth label, with the additional requirement that the location of the predicted instance is also accurate (see Section 4.2). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_27",
"text": " The object detection task went a step beyond single-object localization and tackled the problem of localizing multiple object categories in the image. This task has been a part of the PASCAL VOC for many years on the scale of 20 object categories and tens of thousands of images, but scaling it up by an order of magnitude in object categories and in images proved to be very challenging from a dataset collection and annotation point of view (see Section 3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_28",
"text": " Data for the detection tasks consists of new photographs collected from Flickr using scene-level queries. The images are annotated with axis-aligned bounding boxes indicating the position and scale of every instance of each target object category. The training set is additionally supplemented with (a) data from the single-object localization task, which contains annotations for all instances of just one object category, and (b) negative images known not to contain any instance of some object categories. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_29",
"text": " For each image, algorithms produce bounding boxes indicating the position and scale of all instances of all target object categories. The quality of labeling is evaluated by recall, or number of target object instances detected, and precision, or the number of spurious detections produced by the algorithm (see Section 4.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_30",
"text": " Our process of constructing large-scale object recognition image datasets consists of three key steps. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_31",
"text": " The first step is defining the set of target object categories. To do this, we select from among the existing ImageNet (Deng et al.,, 2009) categories. By using WordNet as a backbone (Miller,, 1995), ImageNet already takes care of disambiguating word meanings and of combining together synonyms into the same object category. Since the selection of object categories needs to be done only once per challenge task, we use a combination of automatic heuristics and manual post-processing to create the list of target categories appropriate for each task. For example, for image classification we may include broader scene categories such as a type of beach, but for single-object localization and object detection we want to focus only on object categories which can be unambiguously localized in images (Sections 3.1.1 and 3.3.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_32",
"text": " The second step is collecting a diverse set of candidate images to represent the selected categories. We use both automatic and manual strategies on multiple search engines to do the image collection. The process is modified for the different ILSVRC tasks. For example, for object detection we focus our efforts on collecting scene-like images using generic queries such as “African safari” to find pictures likely to contain multiple animals in one scene (Section 3.3.2). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_33",
"text": " The third (and most challenging) step is annotating the millions of collected images to obtain a clean dataset. We carefully design crowdsourcing strategies targeted to each individual ILSVRC task. For example, the bounding box annotation system used for localization and detection tasks consists of three distinct parts in order to include automatic crowdsourced quality control (Section 3.2.1). Annotating images fully with all target object categories (on a reasonable budget) for object detection requires an additional hierarchical image labeling system (Section 3.3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_34",
"text": " We describe the data collection and annotation procedure for each of the ILSVRC tasks in order: image classification (Section 3.1), single-object localization (Section 3.2), and object detection (Section 3.3), focusing on the three key steps for each dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_35",
"text": " The image classification task tests the ability of an algorithm to name the objects present in the image, without necessarily localizing them. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_36",
"text": " We describe the choices we made in constructing the ILSVRC image classification dataset: selecting the target object categories from ImageNet (Section 3.1.1), collecting a diverse set of candidate images by using multiple search engines and an expanded set of queries in multiple languages (Section 3.1.2), and finally filtering the millions of collected images using the carefully designed crowdsourcing strategy of ImageNet (Deng et al.,, 2009) (Section 3.1.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_37",
"text": " The 1000 categories used for the image classification task were selected from the ImageNet (Deng et al.,, 2009) categories. The 1000 synsets are selected such that there is no overlap between synsets: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. These synsets are part of the larger hierarchy and may have children in ImageNet; however, for ILSVRC we do not consider their child subcategories. The synset hierarchy of ILSVRC can be thought of as a “trimmed” version of the complete ImageNet hierarchy. Figure 1 visualizes the diversity of the ILSVRC2012 object categories. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_38",
"text": " The exact 1000 synsets used for the image classification and single-object localization tasks have changed over the years. There are 639 synsets which have been used in all five ILSVRC challenges so far. In the first year of the challenge synsets were selected randomly from the available ImageNet synsets at the time, followed by manual filtering to make sure the object categories were not too obscure. With the introduction of the object localization challenge in 2011 there were 321 synsets that changed: categories such as “New Zealand beach” which were inherently difficult to localize were removed, and some new categories from ImageNet containing object localization annotations were added. In ILSVRC2012, 90 synsets were replaced with categories corresponding to dog breeds to allow for evaluation of more fine-grained object classification, as shown in Figure 2. The synsets have remained consistent since year 2012. Appendix A provides the complete list of object categories used in ILSVRC2012-2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_39",
"text": " Image collection for ILSVRC classification task is the same as the strategy employed for constructing ImageNet (Deng et al.,, 2009). Training images are taken directly from ImageNet. Additional images are collected for the ILSVRC using this strategy and randomly partitioned into the validation and test sets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_40",
"text": " We briefly summarize the process; (Deng et al.,, 2009) contains further details. Candidate images are collected from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of retrievable images (on the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the glossary of the target synset. For example, when querying “whippet”, according to WordNet’s glossary a “small slender dog of greyhound type developed in England”, we also use “whippet dog” and “whippet greyhound.” To further enlarge and diversify the candidate pool, we translate the queries into other languages, including Chinese, Spanish, Dutch and Italian. We obtain accurate translations using WordNets in those languages. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_41",
"text": " Annotating images with corresponding object classes follows the strategy employed by ImageNet (Deng et al.,, 2009). We summarize it briefly here. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_42",
"text": " To collect a highly accurate dataset, we rely on humans to verify each candidate image collected in the previous step for a given synset. This is achieved by using Amazon Mechanical Turk (AMT), an online platform on which one can put up tasks for users for a monetary reward. With a global user base, AMT is particularly suitable for large scale labeling. In each of our labeling tasks, we present the users with a set of candidate images and the definition of the target synset (including a link to Wikipedia). We then ask the users to verify whether each image contains objects of the synset. We encourage users to select images regardless of occlusions, number of objects and clutter in the scene to ensure diversity. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_43",
"text": " While users are instructed to make accurate judgment, we need to set up a quality control system to ensure this accuracy. There are two issues to consider. First, human users make mistakes and not all users follow the instructions. Second, users do not always agree with each other, especially for more subtle or confusing synsets, typically at the deeper levels of the tree. The solution to these issues is to have multiple users independently label the same image. An image is considered positive only if it gets a convincing majority of the votes. We observe, however, that different categories require different levels of consensus among users. For example, while five users might be necessary for obtaining a good consensus on “Burmese cat” images, a much smaller number is needed for “cat” images. We develop a simple algorithm to dynamically determine the number of agreements needed for different categories of images. For each synset, we first randomly sample an initial subset of images. At least 10 users are asked to vote on each of these images. We then obtain a confidence score table, indicating the probability of an image being a good image given the consensus among user votes. For each of the remaining candidate images in this synset, we proceed with the AMT user labeling until a pre-determined confidence score threshold is reached. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
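The dynamic-consensus scheme described in the entry above can be illustrated with a small sketch. The confidence table, threshold, and the `label_image`/`get_next_vote` names below are illustrative assumptions, not the actual values or interfaces used for ImageNet; the point is only the stopping rule: keep collecting votes until the estimated probability of a correct label crosses a preset confidence threshold.

```python
# Hypothetical sketch of a dynamic-consensus rule for crowdsourced image
# verification. The confidence table is illustrative, not the one learned
# for ImageNet: it maps (positive votes, negative votes) to an estimated
# probability that the image truly belongs to the synset.

CONFIDENCE_TABLE = {
    (2, 0): 0.97, (3, 0): 0.99, (2, 1): 0.80, (3, 1): 0.90,
    (1, 1): 0.50, (1, 2): 0.15, (0, 2): 0.02,
}
THRESHOLD = 0.95          # stop collecting votes once this confident
MAX_VOTES = 10            # safety cap on the number of workers per image


def label_image(get_next_vote):
    """Collect yes/no votes one at a time until the confidence table gives
    a clear answer. `get_next_vote` stands in for asking one more worker."""
    pos, neg = 0, 0
    while pos + neg < MAX_VOTES:
        if get_next_vote():
            pos += 1
        else:
            neg += 1
        conf = CONFIDENCE_TABLE.get((pos, neg))
        if conf is not None:
            if conf >= THRESHOLD:
                return True            # accept image for the synset
            if conf <= 1.0 - THRESHOLD:
                return False           # reject image
    # Still ambiguous after MAX_VOTES: fall back to a simple majority.
    return pos > neg


if __name__ == "__main__":
    votes = iter([True, True, True])
    print(label_image(lambda: next(votes)))  # True: accepted after two agreeing votes
```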
{
"id": "1409.0575_all_44",
"text": " Evaluation of the accuracy of the large-scale crowdsourced image annotation system was done on the entire ImageNet (Deng et al.,, 2009). A total of 80 synsets were randomly sampled at every tree depth of the mammal and vehicle subtrees. An independent group of subjects verified the correctness of each of the images. An average of 99.7%percent99.799.7\\% precision is achieved across the synsets. We expect similar accuracy on ILSVRC image classification dataset since the image annotation pipeline has remained the same. To verify, we manually checked 1500 ILSVRC2012-2014 image classification test set images (the test set has remained unchanged in these three years). We found 5 annotation errors, corresponding as expected to 99.7%percent99.799.7\\% precision. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_45",
"text": " Using the image collection and annotation procedure described in previous sections, we collected a large-scale dataset used for ILSVRC classification task. There are 1000 object classes and approximately 1.2 million training images, 50 thousand validation images and 100 thousand test images. Table 2 (top) documents the size of the dataset over the years of the challenge. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_46",
"text": " The single-object localization task evaluates the ability of an algorithm to localize one instance of an object category. It was introduced as a taster task in ILSVRC 2011, and became an official part of ILSVRC in 2012. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_47",
"text": " The key challenge was developing a scalable crowdsourcing method for object bounding box annotation. Our three-step self-verifying pipeline is described in Section 3.2.1. Having the dataset collected, we perform detailed analysis in Section 3.2.2 to ensure that the dataset is sufficiently varied to be suitable for evaluation of object localization algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_48",
"text": " The object classes for single-object localization task are the same as the object classes for image classification task described above in Section 3.1. The training images for localization task are a subset of the training images used for image classification task, and the validation and test images are the same between both tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_49",
"text": " Recall that for the image classification task every image was annotated with one object class label, corresponding to one object that is present in an image. For the single-object localization task, every validation and test image and a subset of the training images are annotated with axis-aligned bounding boxes around every instance of this object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_50",
"text": " Every bounding box is required to be as small as possible while including all visible parts of the object instance. An alternate annotation procedure could be to annotate the full (estimated) extent of the object: e.g., if a person’s legs are occluded and only the torso is visible, the bounding box could be drawn to include the likely location of the legs. However, this alternative procedure is inherently ambiguous and ill-defined, leading to disagreement among annotators and among researchers (what is the true “most likely” extent of this object?). We follow the standard protocol of only annotating visible object parts (Russell et al.,, 2007; Everingham et al.,, 2010).555Some datasets such as PASCAL VOC (Everingham et al.,, 2010) and LabelMe (Russell et al.,, 2007) are able to provide more detailed annotations: for example, marking individual object instances as being truncated. We chose not to provide this level of detail in favor of annotating more images and more object instances. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_51",
"text": " We summarize the crowdsourced bounding box annotation system described in detail in (Su et al.,, 2012). The goal is to build a system that is fully automated, highly accurate, and cost-effective. Given a collection of images where the object of interest has been verified to exist, for each image the system collects a tight bounding box for every instance of the object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_52",
"text": " There are two requirements: ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_53",
"text": " • Quality Each bounding box needs to be tight, i.e. the smallest among all bounding boxes that contains all visible parts of the object. This facilitates the object detection learning algorithms by providing the precise location of each object instance; • Coverage Every object instance needs to have a bounding box. This is important for training localization algorithms because it tells the learning algorithms with certainty what is not the object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_54",
"text": " The core challenge of building such a system is effectively controlling the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost-effective than consensus-based algorithms. This leads to the following workflow with simple basic subtasks: ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_55",
"text": " 1. Drawing A worker draws one bounding box around one instance of an object on the given image. 2. Quality verification A second worker checks if the bounding box is correctly drawn. 3. Coverage verification A third worker checks if all object instances have bounding boxes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_56",
"text": " The sub-tasks are designed following two principles. First, the tasks are made as simple as possible. For example, instead of asking the worker to draw all bounding boxes on the same image, we ask the worker to draw only one. This reduces the complexity of the task. Second, each task has a fixed and predictable amount of work. For example, assuming that the input images are clean (object presence is correctly verified) and the coverage verification tasks give correct results, the amount of work of the drawing task is always that of providing exactly one bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_57",
"text": " Quality control on Tasks 2 and 3 is implemented by embedding “gold standard” images where the correct answer is known. Worker training for each of these subtasks is described in detail in (Su et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_58",
"text": " The system is evaluated on 10 categories with ImageNet (Deng et al.,, 2009): balloon, bear, bed, bench, beach, bird, bookshelf, basketball hoop, bottle, and people. A subset of 200 images are randomly sampled from each category. On the image level, our evaluation shows that 97.9%percent97.997.9\\% images are completely covered with bounding boxes. For the remaining 2.1%percent2.12.1\\%, some bounding boxes are missing. However, these are all difficult cases: the size is too small, the boundary is blurry, or there is strong shadow. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_59",
"text": " On the bounding box level, 99.2%percent99.299.2\\% of all bounding boxes are accurate (the bounding boxes are visibly tight). The remaining 0.8%percent0.80.8\\% are somewhat off. No bounding boxes are found to have less than 50%percent5050\\% intersection over union overlap with ground truth. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_60",
"text": " Additional evaluation of the overall cost and an analysis of quality control can be found in (Su et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_61",
"text": " Using the annotation procedure described above, we collect a large set of bounding box annotations for the ILSVRC single-object classification task. All 50 thousand images in the validation set and 100 thousand images in the test set are annotated with bounding boxes around all instances of the ground truth object class (one object class per image). In addition, in ILSVRC2011 25%percent2525\\% of training images are annotated with bounding boxes the same way, yielding more than 310 thousand annotated images with more than 340 thousand annotated object instances. In ILSVRC2012 40%percent4040\\% of training images are annotated, yielding more than 520 thousand annotated images with more than 590 thousand annotated object instances. Table 2 (bottom) documents the size of this dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_62",
"text": " In addition to the size of the dataset, we also analyze the level of difficulty of object localization in these images compared to the PASCAL VOC benchmark. We compute statistics on the ILSVRC2012 single-object localization validation set images compared to PASCAL VOC 2012 validation images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_63",
"text": " Real-world scenes are likely to contain multiple instances of some objects, and nearby object instances are particularly difficult to delineate. The average object category in ILSVRC has 1.611.611.61 target object instances on average per positive image, with each instance having on average 0.470.470.47 neighbors (adjacent instances of the same object category). This is comparable to 1.691.691.69 instances per positive image and 0.520.520.52 neighbors per instance for an average object class in PASCAL. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_64",
"text": " As described in (Hoiem et al.,, 2012), smaller objects tend to be significantly more difficult to localize. In the average object category in PASCAL the object occupies 24.1%percent24.124.1\\% of the image area, and in ILSVRC 35.8%percent35.835.8\\%. However, PASCAL has only 20 object categories while ILSVRC has 1000. The 537 object categories of ILSVRC with the smallest objects on average occupy the same fraction of the image as PASCAL objects: 24.1%percent24.124.1\\%. Thus even though on average the object instances tend to be bigger in ILSVRC images, there are more than 25 times more object categories than in PASCAL VOC with the same average object scale. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_65",
"text": " Appendix B and (Russakovsky et al.,, 2013) have additional comparisons. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_66",
"text": " The ILSVRC task of object detection evaluates the ability of an algorithm to name and localize all instances of all target objects present in an image. It is much more challenging than object localization because some object instances may be small/occluded/difficult to accurately localize, and the algorithm is expected to locate them all, not just the one it finds easiest. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_67",
"text": " There are three key challenges in collecting the object detection dataset. The first challenge is selecting the set of common objects which tend to appear in cluttered photographs and are well-suited for benchmarking object detection performance. Our approach relies on statistics of the object localization dataset and the tradition of the PASCAL VOC challenge (Section 3.3.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_68",
"text": " The second challenge is obtaining a much more varied set of scene images than those used for the image classification and single-object localization datasets. Section 3.3.2 describes the procedure for utilizing as much data from the single-object localization dataset as possible and supplementing it with Flickr images queried using hundreds of manually designed high-level queries. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_69",
"text": " The third, and biggest, challenge is completely annotating this dataset with all the objects. This is done in two parts. Section 3.3.3 describes the first part: our hierarchical strategy for obtaining the list of all target objects which occur within every image. This is necessary since annotating in a straight-forward way by creating a task for every (image, object class) pair is no longer feasible at this scale. Appendix E describes the second part: annotating the bounding boxes around these objects, using the single-object localization bounding box annotation pipeline of Section 3.2.1 along with extra verification to ensure that every instance of the object is annotated with exactly one bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_70",
"text": " There are 200 object classes hand-selected for the detection task, eacg corresponding to a synset within ImageNet. These were chosen to be mostly basic-level object categories that would be easy for people to identify and label. The rationale is that the object detection system developed for this task can later be combined with a fine-grained classification model to further classify the objects if a finer subdivision is desired.666Some of the training objects are actually annotated with more detailed classes: for example, one of the 200 object classes is the category “dog,” and some training instances are annotated with the specific dog breed. As with the 1000 classification classes, the synsets are selected such that there is no overlap: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_71",
"text": " The selection of the 200 object detection classes in 2013 was guided by the ILSVRC 2012 classification and localization dataset. Starting with 1000 object classes and their bounding box annotations we first eliminated all object classes which tended to be too “big” in the image (on average the object area was greater than 50%percent5050\\% of the image area). These were classes such as T-shirt, spiderweb, or manhole cover. We then manually eliminated all classes which we did not feel were well-suited for detection, such as hay, barbershop, or poncho. This left 494 object classes which were merged into basic-level categories: for example, different species of birds were merged into just the “bird” class. The classes remained the same in ILSVRC2014. Appendix D contains the complete list of object categories used in ILSVRC2013-2014 (in the context of the hierarchy described in Section 3.3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_72",
"text": " Staying mindful of the tradition of the PASCAL VOC dataset we also tried to ensure that the set of 200 classes contains as many of the 20 PASCAL VOC classes as possible. Table 3 shows the correspondences. The changes that were done were to ensure more accurate and consistent crowdsourced annotations. The object class with the weakest correspondence is “potted plant” in PASCAL VOC, corresponding to “flower pot” in ILSVRC. “Potted plant” was one of the most challenging object classes to annotate consistently among the PASCAL VOC classes, and in order to obtain accurate annotations using crowdsourcing we had to restrict the definition to a more concrete object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_73",
"text": " Many images for the detection task were collected differently than the images in ImageNet and the classification and single-object localization tasks. Figure 3 summarizes the types of images that were collected. Ideally all of these images would be scene images fully annotated with all target categories. However, given budget constraints our goal was to provide as much suitable detection data as possible, even if the images were drawn from a few different sources and distributions. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_74",
"text": " The validation and test detection set images come from two sources (percent of images from each source in parentheses). The first source (77%)percent77(77\\%) is images from ILSVRC2012 single-object localization validation and test sets corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). Images where the target object occupied more than 50%percent5050\\% of the image area were discarded, since they were unlikely to contain other objects of interest. The second source (23%)percent23(23\\%) is images from Flickr collected specifically for detection task. We queried Flickr using a large set of manually defined queries, such as “kitchenette” or “Australian zoo” to retrieve images of scenes likely to contain several objects of interest. Appendix C contains the full list. We also added pairwise queries, or queries with two target object names such as “tiger lion,” which also often returned cluttered scenes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_75",
"text": " Figure 4 shows a random set of both types of validation images. Images were randomly split, with 33%percent3333\\% going into the validation set and 67%percent6767\\% into the test set.777The validation/test split is consistent with ILSVRC2012: validation images of ILSVRC2012 remained in the validation set of ILSVRC2013, and ILSVRC2012 test images remained in ILSVRC2013 test set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_76",
"text": " The training set for the detection task comes from three sources of images (percent of images from each source in parentheses). The first source (63%)percent63(63\\%) is all training images from ILSVRC2012 single-object localization task corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). We did not filter by object size, allowing teams to take advantage of all the positive examples available. The second source (24%)percent24(24\\%) is negative images which were part of the original ImageNet collection process but voted as negative: for example, some of the images were collected from Flickr and search engines for the ImageNet synset “animals” but during the manual verification step did not collect enough votes to be considered as containing an “animal.” These images were manually re-verified for the detection task to ensure that they did not in fact contain the target objects. The third source (13%)percent13(13\\%) is images collected from Flickr specifically for the detection task. These images were added for ILSVRC2014 following the same protocol as the second type of images in the validation and test set. This was done to bring the training and testing distributions closer together. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_77",
"text": " The key challenge in annotating images for the object detection task is that all objects in all images need to be labeled. Suppose there are N inputs (images) which need to be annotated with the presence or absence of K labels (objects). A naïve approach would query humans for each combination of input and label, requiring NK𝑁𝐾NK queries. However, N and K can be very large and the cost of this exhaustive approach quickly becomes prohibitive. For example, annotating 60,0006000060,000 validation and test images with the presence or absence of 200200200 object classes for the detection task naïvely would take 808080 times more effort than annotating 150,000150000150,000 validation and test images with 111 object each for the classification task – and this is not even counting the additional cost of collecting bounding box annotations around each object instance. This quickly becomes infeasible. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_78",
"text": " In (Deng et al.,, 2014) we study strategies for scalable multilabel annotation, or for efficiently acquiring multiple labels from humans for a collection of items. We exploit three key observations for labels in real world applications (illustrated in Figure LABEL:fig:chipull): ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_79",
"text": " 1. Correlation. Subsets of labels are often highly correlated. Objects such as a computer keyboard, mouse and monitor frequently co-occur in images. Similarly, some labels tend to all be absent at the same time. For example, all objects that require electricity are usually absent in pictures taken outdoors. This suggests that we could potentially “fill in” the values of multiple labels by grouping them into only one query for humans. Instead of checking if dog, cat, rabbit etc. are present in the photo, we just check about the “animal” group If the answer is no, then this implies a no for all categories in the group. 2. Hierarchy. The above example of grouping dog, cat, rabbit etc. into animal has implicitly assumed that labels can be grouped together and humans can efficiently answer queries about the group as a whole. This brings up our second key observation: humans organize semantic concepts into hierarchies and are able to efficiently categorize at higher semantic levels (Thorpe et al.,, 1996), e.g. humans can determine the presence of an animal in an image as fast as every type of animal individually. This leads to substantial cost savings. 3. Sparsity. The values of labels for each image tend to be sparse, i.e. an image is unlikely to contain more than a dozen types of objects, a small fraction of the hundreds of object categories. This enables rapid elimination of many objects by quickly filling in no. With a high degree of sparsity, an efficient algorithm can have a cost which grows logarithmically with the number of objects instead of linearly. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_80",
"text": " We propose algorithmic strategies that exploit the above intuitions. The key is to select a sequence of queries for humans such that we achieve the same labeling results with only a fraction of the cost of the naïve approach. The main challenges include how to measure cost and utility of queries, how to construct good queries, and how to dynamically order them. A detailed description of the generic algorithm, along with theoretical analysis and empirical evaluation, is presented in (Deng et al.,, 2014). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_81",
"text": " The generic algorithm automatically selects the most informative queries to ask based on object label statistics learned from the training set. In our case of 200 object classes, since obtaining the training set was by itself challenging we chose to design the queries by hand. We created a hierarchy of queries of the type “is there a… in the image?” For example, one of the high-level questions was “is there an animal in the image?” We ask the crowd workers this question about every image we want to label. The children of the “animal” question would correspond to specific examples of animals: for example, “is there a mammal in the image?” or “is there an animal with no legs?” To annotate images efficiently, these questions are asked only on images determined to contain an animal. The 200 leaf node questions correspond to the 200 target objects, e.g., “is there a cat in the image?”. A few sample iterations of the algorithm are shown in Figure 6. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
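A minimal sketch of the hierarchical labeling idea from the two entries above, assuming a hand-built question tree and an `ask(image, question)` oracle standing in for a crowdsourced query. Both names are hypothetical, and the toy tree below is not the actual ILSVRC question hierarchy of Appendix D; it only illustrates how a "no" on a general question prunes all of its descendants.

```python
# Minimal sketch of hierarchical multilabel annotation: ask a general
# question first and descend into its children only on a "yes". The tree
# and the `ask` oracle are illustrative stand-ins, not the real ILSVRC
# hierarchy or AMT interface.

QUESTION_TREE = {
    "is there an animal?": {
        "is there a mammal?": {"is there a dog?": {}, "is there a cat?": {}},
        "is there a bird?": {},
    },
    "is there a vehicle?": {"is there a car?": {}, "is there a bicycle?": {}},
}


def label_image(image, ask, tree=QUESTION_TREE):
    """Return the set of leaf questions answered 'yes' for this image.
    `ask(image, question) -> bool` stands in for one crowdsourced query."""
    positives = set()
    for question, children in tree.items():
        if not ask(image, question):
            continue                      # prune the whole subtree on "no"
        if not children:
            positives.add(question)       # leaf question == target object
        else:
            positives |= label_image(image, ask, children)
    return positives


if __name__ == "__main__":
    # Toy oracle: the image contains a dog, so only the questions on the
    # path to "dog" are answered yes and everything else is pruned cheaply.
    truth = {"is there an animal?", "is there a mammal?", "is there a dog?"}
    print(label_image("img_001.jpg", lambda img, q: q in truth))
```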
{
"id": "1409.0575_all_82",
"text": " Algorithm 1 is the formal algorithm for labeling an image with the presence or absence of each target object category. With this algorithm in mind, the hierarchy of questions was constructed following the principle that false positives only add extra cost whereas false negatives can significantly affect the quality of the labeling. Thus, it is always better to stick with more general but less ambiguous questions, such as “is there a mammal in the image?” as opposed to asking overly specific but potentially ambiguous questions, such as “is there an animal that can climb trees?” Constructing this hierarchy was a surprisingly time-consuming process, involving multiple iterations to ensure high accuracy of labeling and avoid question ambiguity. Appendix D shows the constructed hierarchy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_83",
"text": " Once all images are labeled with the presence or absence of all object categories we use the bounding box system described in Section 3.2.1 along with some additional modifications of Appendix E to annotate the location of every instance of every present object category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_84",
"text": " Using the procedure described above, we collect a large-scale dataset for ILSVRC object detection task. There are 200 object classes and approximately 450K training images, 20K validation images and 40K test images. Table 4 documents the size of the dataset over the years of the challenge. The major change between ILSVRC2013 and ILSVRC2014 was the addition of 60,658 fully annotated training images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_85",
"text": " Prior to ILSVRC, the object detection benchmark was the PASCAL VOC challenge (Everingham et al.,, 2010). ILSVRC has 101010 times more object classes than PASCAL VOC (200 vs 20), 10.610.610.6 times more fully annotated training images (60,658 vs 5,717), 35.235.235.2 times more training objects (478,807 vs 13,609), 3.53.53.5 times more validation images (20,121 vs 5823) and 3.53.53.5 times more validation objects (55,501 vs 15,787). ILSVRC has 2.82.82.8 annotated objects per image on the validation set, compared to 2.72.72.7 in PASCAL VOC. The average object in ILSVRC takes up 17.0%percent17.017.0\\% of the image area and in PASCAL VOC takes up 20.7%percent20.720.7\\%; Table 3 contains per-class comparisons. Additionally, ILSVRC contains a wide variety of objects, including tiny objects such as sunglasses (1.3%percent1.31.3\\% of image area on average), ping-pong balls (1.5%percent1.51.5\\% of image area on average) and basketballs (2.0%percent2.02.0\\% of image area on average). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_86",
"text": " Once the dataset has been collected, we need to define a standardized evaluation procedure for algorithms. Some measures have already been established by datasets such as the Caltech 101 (Fei-Fei et al.,, 2004) for image classification and PASCAL VOC (Everingham et al.,, 2012) for both image classification and object detection. To adapt these procedures to the large-scale setting we had to address three key challenges. First, for the image classification and single-object localization tasks only one object category could be labeled in each image due to the scale of the dataset. This created potential ambiguity during evaluation (addressed in Section 4.1). Second, evaluating localization of object instances is inherently difficult in some images which contain a cluster of objects (addressed in Section 4.2). Third, evaluating localization of object instances which occupy few pixels in the image is challenging (addressed in Section 4.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_87",
"text": " In this section we describe the standardized evaluation criteria for each of the three ILSVRC tasks. We elaborate further on these and other more minor challenges with large-scale evaluation. Appendix F describes the submission protocol and other details of running the competition itself. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_88",
"text": " The scale of ILSVRC classification task (1000 categories and more than a million of images) makes it very expensive to label every instance of every object in every image. Therefore, on this dataset only one object category is labeled in each image. This creates ambiguity in evaluation. For example, an image might be labeled as a “strawberry” but contain both a strawberry and an apple. Then an algorithm would not know which one of the two objects to name. For the image classification task we allowed an algorithm to identify multiple (up to 5) objects in an image and not be penalized as long as one of the objects indeed corresponded to the ground truth label. Figure 7(top row) shows some examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_89",
"text": " Concretely, each image i𝑖i has a single class label Cisubscript𝐶𝑖C_{i}. An algorithm is allowed to return 5 labels ci1,…ci5subscript𝑐𝑖1…subscript𝑐𝑖5c_{i1},\\dots c_{i5}, and is considered correct if cij=Cisubscript𝑐𝑖𝑗subscript𝐶𝑖c_{ij}=C_{i} for some j𝑗j. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_90",
"text": " Let the error of a prediction dij=d(cij,Ci)subscript𝑑𝑖𝑗𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖d_{ij}=d(c_{ij},C_{i}) be 111 if cij≠Cisubscript𝑐𝑖𝑗subscript𝐶𝑖c_{ij}\\neq C_{i} and 00 otherwise. The error of an algorithm is the fraction of test images on which the algorithm makes a mistake: error =1N∑i=1Nminjdijabsent1𝑁superscriptsubscript𝑖1𝑁subscript𝑗subscript𝑑𝑖𝑗\\displaystyle=\\frac{1}{N}\\sum_{i=1}^{N}\\min_{j}d_{ij} (1) ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
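The top-5 criterion and Eq. 1 amount to a simple counting rule. Below is a minimal sketch, assuming predictions are given as ranked label lists; the `top_k_error` helper and the toy labels are illustrative, not part of the official evaluation kit.

```python
# Worked sketch of the flat classification error of Eq. 1: an image counts
# as correct if any of the (up to k) predicted labels matches its single
# ground-truth label. Labels here are arbitrary strings for illustration.

def top_k_error(predictions, ground_truth, k=5):
    """predictions: list of ranked label lists (highest confidence first);
    ground_truth: list of single labels, one per image."""
    assert len(predictions) == len(ground_truth)
    mistakes = sum(
        0 if gt in preds[:k] else 1
        for preds, gt in zip(predictions, ground_truth)
    )
    return mistakes / len(ground_truth)


if __name__ == "__main__":
    preds = [["dog", "wolf", "cat", "fox", "hyena"],
             ["ship", "boat", "dock", "sea", "pier"]]
    labels = ["cat", "car"]
    print(top_k_error(preds, labels, k=5))  # 0.5: first image correct within top-5
    print(top_k_error(preds, labels, k=1))  # 1.0: neither top-1 label matches
```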
{
"id": "1409.0575_all_91",
"text": " We used two additional measures of error. First, we evaluated top-1 error. In this case algorithms were penalized if their highest-confidence output label ci1subscript𝑐𝑖1c_{i1} did not match ground truth class Cisubscript𝐶𝑖C_{i}. Second, we evaluated hierarchical error. The intuition is that confusing two nearby classes (such as two different breeds of dogs) is not as harmful as confusing a dog for a container ship. For the hierarchical criteria, the cost of one misclassification, d(cij,Ci)𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖d(c_{ij},C_{i}), is defined as the height of the lowest common ancestor of cijsubscript𝑐𝑖𝑗c_{ij} and Cisubscript𝐶𝑖C_{i} in the ImageNet hierarchy. The height of a node is the length of the longest path to a leaf node (leaf nodes have height zero). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
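The hierarchical cost can be made concrete with a toy class tree. A small sketch follows; the `TOY_PARENT` table and helper names are illustrative assumptions rather than the ImageNet hierarchy.

```python
# Sketch of the hierarchical misclassification cost: the cost of predicting
# `pred` when the truth is `truth` is the height of their lowest common
# ancestor (leaves have height 0). The toy tree is illustrative only.

TOY_PARENT = {
    "beagle": "dog", "poodle": "dog",
    "dog": "animal", "cat": "animal",
    "container ship": "vehicle",
    "animal": "entity", "vehicle": "entity",
}


def ancestors(node):
    path = [node]
    while node in TOY_PARENT:
        node = TOY_PARENT[node]
        path.append(node)
    return path


def height(node):
    """Length of the longest path from `node` down to a leaf."""
    children = [c for c, p in TOY_PARENT.items() if p == node]
    return 0 if not children else 1 + max(height(c) for c in children)


def hierarchical_cost(pred, truth):
    pred_anc = ancestors(pred)
    for node in ancestors(truth):      # walk up from the true label
        if node in pred_anc:           # first shared node is the LCA
            return height(node)
    return max(height(a) for a in pred_anc)  # disjoint trees (not expected)


if __name__ == "__main__":
    print(hierarchical_cost("beagle", "poodle"))          # 1: siblings under "dog"
    print(hierarchical_cost("beagle", "container ship"))  # 3: LCA is "entity"
```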
{
"id": "1409.0575_all_92",
"text": " However, in practice we found that all three measures of error (top-5, top-1, and hierarchical) produced the same ordering of results. Thus, since ILSVRC2012 we have been exclusively using the top-5 metric which is the simplest and most suitable to the dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_93",
"text": " The evaluation for single-object localization is similar to object classification, again using a top-5 criteria to allow the algorithm to return unannotated object classes without penalty. However, now the algorithm is considered correct only if it both correctly identifies the target class Cisubscript𝐶𝑖C_{i} and accurately localizes one of its instances. Figure 7(middle row) shows some examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_94",
"text": " Concretely, an image is associated with object class Cisubscript𝐶𝑖C_{i}, with all instances of this object class annotated with bounding boxes Biksubscript𝐵𝑖𝑘B_{ik}. An algorithm returns {(cij,bij)}j=15superscriptsubscriptsubscript𝑐𝑖𝑗subscript𝑏𝑖𝑗𝑗15\\{(c_{ij},b_{ij})\\}_{j=1}^{5} of class labels cijsubscript𝑐𝑖𝑗c_{ij} and associated locations bijsubscript𝑏𝑖𝑗b_{ij}. The error of a prediction j𝑗j is: dijsubscript𝑑𝑖𝑗\\displaystyle d_{ij} =max(d(cij,Ci),minkd(bij,Bik))absent𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖subscript𝑘𝑑subscript𝑏𝑖𝑗subscript𝐵𝑖𝑘\\displaystyle=\\max(d(c_{ij},C_{i}),\\min_{k}d(b_{ij},B_{ik})) (2) Here d(bij,Bik)𝑑subscript𝑏𝑖𝑗subscript𝐵𝑖𝑘d(b_{ij},B_{ik}) is the error of localization, defined as 00 if the area of intersection of boxes bijsubscript𝑏𝑖𝑗b_{ij} and Biksubscript𝐵𝑖𝑘B_{ik} divided by the areas of their union is greater than 0.50.50.5, and 111 otherwise. (Everingham et al.,, 2010) The error of an algorithm is computed as in Eq. 1. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
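Eq. 2 combines the top-5 label check with an intersection-over-union test against any annotated instance of the ground-truth class. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates; that representation and the helper names are assumptions for illustration, not the ILSVRC submission format.

```python
# Sketch of the single-object localization criterion of Eq. 2: a prediction
# is correct only if the class label matches AND its box overlaps some
# ground-truth instance with IOU greater than 0.5.

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def localization_error(predictions, gt_class, gt_boxes):
    """predictions: up to 5 (class, box) pairs for one image; returns 0 if
    any prediction gets both the class and the location right, else 1."""
    for cls, box in predictions:
        if cls == gt_class and any(iou(box, b) > 0.5 for b in gt_boxes):
            return 0
    return 1


if __name__ == "__main__":
    gt = [(10, 10, 110, 110)]
    preds = [("dog", (20, 20, 120, 120)), ("cat", (0, 0, 50, 50))]
    print(localization_error(preds, "dog", gt))  # 0: right class, IOU ~0.68
```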
{
"id": "1409.0575_all_95",
"text": " Evaluating localization is inherently difficult in some images. Consider a picture of a bunch of bananas or a carton of apples. It is easy to classify these images as containing bananas or apples, and even possible to localize a few instances of each fruit. However, in order for evaluation to be accurate every instance of banana or apple needs to be annotated, and that may be impossible. To handle the images where localizing individual object instances is inherently ambiguous we manually discarded 3.5%percent3.53.5\\% of images since ILSVRC2012. Some examples of discarded images are shown in Figure 8. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_96",
"text": " The criteria for object detection was adopted from PASCAL VOC (Everingham et al.,, 2010). It is designed to penalize the algorithm for missing object instances, for duplicate detections of one instance, and for false positive detections. Figure 7(bottom row) shows examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_97",
"text": " For each object class and each image Iisubscript𝐼𝑖I_{i}, an algorithm returns predicted detections (bij,sij)subscript𝑏𝑖𝑗subscript𝑠𝑖𝑗(b_{ij},s_{ij}) of predicted locations bijsubscript𝑏𝑖𝑗b_{ij} with confidence scores sijsubscript𝑠𝑖𝑗s_{ij}. These detections are greedily matched to the ground truth boxes {Bik}subscript𝐵𝑖𝑘\\{B_{ik}\\} using Algorithm 2. For every detection j𝑗j on image i𝑖i the algorithm returns zij=1subscript𝑧𝑖𝑗1z_{ij}=1 if the detection is matched to a ground truth box according to the threshold criteria, and 00 otherwise. For a given object class, let N𝑁N be the total number of ground truth instances across all images. Given a threshold t𝑡t, define recall as the fraction of the N𝑁N objects detected by the algorithm, and precision as the fraction of correct detections out of the total detections returned by the algorithm. Concretely, Recall(t)𝑅𝑒𝑐𝑎𝑙𝑙𝑡\\displaystyle Recall(t) =∑ij1(sij≥t)zijNabsentsubscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡subscript𝑧𝑖𝑗𝑁\\displaystyle=\\frac{\\sum_{ij}1(s_{ij}\\geq t)z_{ij}}{N} (3) Precision(t)𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛𝑡\\displaystyle Precision(t) =∑ij1(sij≥t)zij∑ij1(sij≥t)absentsubscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡subscript𝑧𝑖𝑗subscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡\\displaystyle=\\frac{\\sum_{ij}1(s_{ij}\\geq t)z_{ij}}{\\sum_{ij}1(s_{ij}\\geq t)} (4) ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
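Given the match indicators z_ij produced by the greedy matching (Algorithm 2, which is not reproduced here), Eqs. 3 and 4 reduce to simple ratios at each score threshold. A minimal sketch with illustrative function and variable names:

```python
# Sketch of Eq. 3 and Eq. 4: recall and precision for one object class at a
# score threshold t, given per-detection scores s_ij and match indicators
# z_ij (1 if the detection was matched to a ground-truth box, else 0).

def recall_precision(scores, matched, num_gt, t):
    """scores: confidence s_ij for every detection of this class;
    matched: z_ij in {0, 1} for the same detections;
    num_gt: N, the total number of ground-truth instances of this class;
    t: score threshold."""
    kept = [(s, z) for s, z in zip(scores, matched) if s >= t]
    true_pos = sum(z for _, z in kept)
    recall = true_pos / num_gt if num_gt else 0.0
    precision = true_pos / len(kept) if kept else 0.0
    return recall, precision


if __name__ == "__main__":
    scores = [0.9, 0.8, 0.6, 0.3]
    matched = [1, 0, 1, 1]   # the second detection is a false positive
    print(recall_precision(scores, matched, num_gt=4, t=0.5))  # (0.5, ~0.67)
```

Average precision for a class is then the average of these precision values over the recall levels reached as t is swept over the detection scores.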
{
"id": "1409.0575_all_98",
"text": " The final metric for evaluating an algorithm on a given object class is average precision over the different levels of recall achieved by varying the threshold t𝑡t. The winner of each object class is then the team with the highest average precision, and then winner of the challenge is the team that wins on the most object classes.888In this paper we focus on the mean average precision across all categories as the measure of a team’s performance. This is done for simplicity and is justified since the ordering of teams by mean average precision was always the same as the ordering by object categories won. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_99",
"text": " Evaluating localization of object instances which occupy very few pixels in the image is challenging. The PASCAL VOC approach was to label such instances as “difficult” and ignore them during evaluation. However, since ILSVRC contains a more diverse set of object classes including, for example, “nail” and “ping pong ball” which have many very small instances, it is important to include even very small object instances in evaluation. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_100",
"text": " In Algorithm 2, a predicted bounding box b𝑏b is considered to have properly localized by a ground truth bounding box B𝐵B if IOU(b,B)≥thr(B)𝐼𝑂𝑈𝑏𝐵thr𝐵IOU(b,B)\\geq\\mbox{thr}(B). The PASCAL VOC metric uses the threshold thr(B)=0.5thr𝐵0.5\\mbox{thr}(B)=0.5. However, for small objects even deviations of a few pixels would be unacceptable according to this threshold. For example, consider an object B𝐵B of size 10×10101010\\times 10 pixels, with a detection window of 20×20202020\\times 20 pixels which fully contains that object. This would be an error of approximately 555 pixels on each dimension, which is average human annotation error. However, the IOU in this case would be 100/400=0.251004000.25100/400=0.25, far below the threshold of 0.50.50.5. Thus for smaller objects we loosen the threshold in ILSVRC to allow for the annotation to extend up to 5 pixels on average in each direction around the object. Concretely, if the ground truth box B𝐵B is of dimensions w×h𝑤ℎw\\times h then thr(B)=min(0.5,wh(w+10)(h+10))thr𝐵0.5𝑤ℎ𝑤10ℎ10\\mbox{thr}(B)=\\min\\left(0.5,\\frac{wh}{(w+10)(h+10)}\\right) (5) In practice, this changes the threshold only on objects which are smaller than approximately 25×25252525\\times 25 pixels, and affects 5.5%percent5.55.5\\% of objects in the detection validation set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
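Equation 5 above is simple enough to state directly in code. The sketch below (illustrative names, not the official toolkit) also reproduces the 10×10 example from the text: a fully containing 20×20 window with IOU = 0.25 now counts as a correct localization.

```python
def loc_threshold(w, h):
    """ILSVRC IOU threshold for a ground truth box of size w x h (Equation 5).
    The threshold drops below the PASCAL value of 0.5 only for objects
    smaller than roughly 25 x 25 pixels."""
    return min(0.5, (w * h) / ((w + 10.0) * (h + 10.0)))

def correctly_localized(iou, w, h):
    return iou >= loc_threshold(w, h)

print(loc_threshold(10, 10))                   # 0.25
print(correctly_localized(100 / 400, 10, 10))  # True: the 20x20 window passes
print(loc_threshold(50, 50))                   # 0.5: unchanged for larger objects
```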
{
"id": "1409.0575_all_101",
"text": " One additional practical consideration for ILSVRC detection evaluation is subtle and comes directly as a result of the scale of ILSVRC. In PASCAL, algorithms would often return many detections per class on the test set, including ones with low confidence scores. This allowed the algorithms to reach the level of high recall at least in the realm of very low precision. On ILSVRC detection test set if an algorithm returns 10 bounding boxes per object per image this would result in 10×200×40K=801020040𝐾8010\\times 200\\times 40K=80M detections. Each detection contains an image index, a class index, 4 bounding box coordinates, and the confidence score, so it takes on the order of 28 bytes. The full set of detections would then require 2.242.242.24Gb to store and submit to the evaluation server, which is impractical. This means that algorithms are implicitly required to limit their predictions to only the most confident locations. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
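The 2.24 GB figure in the passage above follows from a simple back-of-the-envelope calculation, sketched here with the per-detection byte count assumed in the text.

```python
# 10 boxes per class per image, 200 classes, ~40K test images
num_detections = 10 * 200 * 40_000          # = 80 million detections
bytes_per_detection = 28                    # image idx, class idx, 4 coords, score
total_bytes = num_detections * bytes_per_detection
print(num_detections, total_bytes / 1e9)    # 80000000  2.24 (GB)
```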
{
"id": "1409.0575_all_102",
"text": " The ILSVRC dataset and the competition has allowed significant algorithmic advances in large-scale image recognition and retrieval. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_103",
"text": " This section is organized chronologically, highlighting the particularly innovative and successful methods which participated in the ILSVRC each year. Tables LABEL:table:sub10-12, LABEL:table:sub13 and LABEL:table:sub14 list all the participating teams. We see a turning point in 2012 with the development of large-scale convolutional neural networks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_104",
"text": " The first year the challenge consisted of just the classification task. The winning entry from NEC team (Lin et al.,, 2011) used SIFT (Lowe,, 2004) and LBP (Ahonen et al.,, 2006) features with two non-linear coding representations (Zhou et al.,, 2010; Wang et al.,, 2010) and a stochastic SVM. The honorable mention XRCE team (Perronnin et al.,, 2010) used an improved Fisher vector representation (Perronnin and Dance,, 2007) along with PCA dimensionality reduction and data compression followed by a linear SVM. Fisher vector-based methods have evolved over five years of the challenge and continued performing strongly in every ILSVRC from 2010 to 2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_105",
"text": " The winning classification entry in 2011 was the 2010 runner-up team XRCE, applying high-dimensional image signatures (Perronnin et al.,, 2010) with compression using product quantization (Sanchez and Perronnin,, 2011) and one-vs-all linear SVMs. The single-object localization competition was held for the first time, with two brave entries. The winner was the UvA team using a selective search approach to generate class-independent object hypothesis regions (van de Sande et al., 2011b, ), followed by dense sampling and vector quantization of several color SIFT features (van de Sande et al.,, 2010), pooling with spatial pyramid matching (Lazebnik et al.,, 2006), and classifying with a histogram intersection kernel SVM (Maji and Malik,, 2009) trained on a GPU (van de Sande et al., 2011a, ). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_106",
"text": " This was a turning point for large-scale object recognition, when large-scale deep neural networks entered the scene. The undisputed winner of both the classification and localization tasks in 2012 was the SuperVision team. They trained a large, deep convolutional neural network on RGB values, with 60 million parameters using an efficient GPU implementation and a novel hidden-unit dropout trick (Krizhevsky et al.,, 2012; Hinton et al.,, 2012). The second place in image classification went to the ISI team, which used Fisher vectors (Sanchez and Perronnin,, 2011) and a streamlined version of Graphical Gaussian Vectors (Harada and Kuniyoshi,, 2012), along with linear classifiers using Passive-Aggressive (PA) algorithm (Crammer et al.,, 2006). The second place in single-object localization went to the VGG, with an image classification system including dense SIFT features and color statistics (Lowe,, 2004), a Fisher vector representation (Sanchez and Perronnin,, 2011), and a linear SVM classifier, plus additional insights from (Arandjelovic and Zisserman,, 2012; Sanchez et al.,, 2012). Both ISI and VGG used (Felzenszwalb et al.,, 2010) for object localization; SuperVision used a regression model trained to predict bounding box locations. Despite the weaker detection model, SuperVision handily won the object localization task. A detailed analysis and comparison of the SuperVision and VGG submissions on the single-object localization task can be found in (Russakovsky et al.,, 2013). The influence of the success of the SuperVision model can be clearly seen in ILSVRC2013 and ILSVRC2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_107",
"text": " There were 24 teams participating in the ILSVRC2013 competition, compared to 21 in the previous three years combined. Following the success of the deep learning-based method in 2012, the vast majority of entries in 2013 used deep convolutional neural networks in their submission. The winner of the classification task was Clarifai, with several large deep convolutional networks averaged together. The network architectures were chosen using the visualization technique of (Zeiler and Fergus,, 2013), and they were trained on the GPU following (Zeiler et al.,, 2011) using the dropout technique (Krizhevsky et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_108",
"text": " The winning single-object localization OverFeat submission was based on an integrated framework for using convolutional networks for classification, localization and detection with a multiscale sliding window approach (Sermanet et al.,, 2013). They were the only team tackling all three tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_109",
"text": " The winner of object detection task was UvA team, which utilized a new way of efficient encoding (van de Sande et al.,, 2014) densely sampled color descriptors (van de Sande et al.,, 2010) pooled using a multi-level spatial pyramid in a selective search framework (Uijlings et al.,, 2013). The detection results were rescored using a full-image convolutional network classifier. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_110",
"text": " 2014 attracted the most submissions, with 36 teams submitting 123 entries compared to just 24 teams in 2013 – a 1.5x increase in participation.999Table LABEL:table:sub14 omits 4 teams which submitted results but chose not to officially participate in the challenge. As in 2013 almost all teams used convolutional neural networks as the basis for their submission. Significant progress has been made in just one year: image classification error was almost halved since ILSVRC2013 and object detection mean average precision almost doubled compared to ILSVRC2013. Please refer to Section 6.1 for details. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_111",
"text": " In 2014 teams were allowed to use outside data for training their models in the competition, so there were six tracks: provided and outside data tracks in each of image classification, single-object localization, and object detection tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_112",
"text": " The winning image classification with provided data team was GoogLeNet, which explored an improved convolutional neural network architecture combining the multi-scale idea with intuitions gained from the Hebbian principle. Additional dimension reduction layers allowed them to increase both the depth and the width of the network significantly without incurring significant computational overhead. In the image classification with external data track, CASIAWS won by using weakly supervised object localization from only classification labels to improve image classification. MCG region proposals (Arbeláez et al.,, 2014) pretrained on PASCAL VOC 2012 data are used to extract region proposals, regions are represented using convolutional networks, and a multiple instance learning strategy is used to learn weakly supervised object detectors to represent the image. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_113",
"text": " In the single-object localization with provided data track, the winning team was VGG, which explored the effect of convolutional neural network depth on its accuracy by using three different architectures with up to 19 weight layers with rectified linear unit non-linearity, building off of the implementation of Caffe (Jia,, 2013). For localization they used per-class bounding box regression similar to OverFeat (Sermanet et al.,, 2013). In the single-object localization with external data track, Adobe used 2000 additional ImageNet classes to train the classifiers in an integrated convolutional neural network framework for both classification and localization, with bounding box regression. At test time they used k-means to find bounding box clusters and rank the clusters according to the classification scores. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_114",
"text": " In the object detection with provided data track, the winning team NUS used the RCNN framework (Girshick et al.,, 2013) with the network-in-network method (Lin et al., 2014a, ) and improvements of (Howard,, 2014). Global context information was incorporated following (Chen et al.,, 2014). In the object detection with external data track, the winning team was GoogLeNet (which also won image classification with provided data). It is truly remarkable that the same team was able to win at both image classification and object detection, indicating that their methods are able to not only classify the image based on scene information but also accurately localize multiple object instances. Just like most teams participating in this track, GoogLeNet used the image classification dataset as extra training data. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_115",
"text": " ILSVRC over the past five years has paved the way for several breakthroughs in computer vision. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_116",
"text": " The field of categorical object recognition has dramatically evolved in the large-scale setting. Section 5.1 documents the progress, starting from coded SIFT features and evolving to large-scale convolutional neural networks dominating at all three tasks of image classification, single-object localization, and object detection. With the availability of so much training data (along with an efficient algorithmic implementation and GPU computing resources) it became possible to learn neural networks directly from the image data, without needing to create multi-stage hand-tuned pipelines of extracted features and discriminative classifiers. The major breakthrough came in 2012 with the win of the SuperVision team on image classification and single-object localization tasks (Krizhevsky et al.,, 2012), and by 2014 all of the top contestants were relying heavily on convolutional neural networks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_117",
"text": " Further, over the past few years there has been a lot of focus on large-scale recognition in the computer vision community . Best paper awards at top vision conferences in 2013 were awarded to large-scale recognition methods: at CVPR 2013 to ”Fast, Accurate Detection of 100,000 Object Classes on a Single Machine” (Dean et al.,, 2013) and at ICCV 2013 to ”From Large Scale Image Categorization to Entry-Level Categories” (Ordonez et al.,, 2013). Additionally, several influential lines of research have emerged, such as large-scale weakly supervised localization work of (Kuettel et al.,, 2012) which was awarded the best paper award in ECCV 2012 and large-scale zero-shot learning, e.g., (Frome et al.,, 2013). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_118",
"text": " State-of-the-art accuracy has improved significantly from ILSVRC2010 to ILSVRC2014, showcasing the massive progress that has been made in large-scale object recognition over the past five years. The performance of the winning ILSVRC entries for each task and each year are shown in Figure 9. The improvement over the years is clearly visible. In this section we quantify and analyze this improvement. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_119",
"text": " There has been a 4.2x reduction in image classification error (from 28.2%percent28.228.2\\% to 6.7%percent6.76.7\\%) and a 1.7x reduction in single-object localization error (from 42.5%percent42.542.5\\% to 25.3%percent25.325.3\\%) since the beginning of the challenge. For consistency, here we consider only teams that use the provided training data. Even though the exact object categories have changed (Section 3.1.1), the large scale of the dataset has remained the same (Table 2), making the results comparable across the years. The dataset has not changed since 2012, and there has been a 2.4x reduction in image classification error (from 16.4%percent16.416.4\\% to 6.7%percent6.76.7\\%) and a 1.3x in single-object localization error (from 33.5%percent33.533.5\\% to 25.3%percent25.325.3\\%) in the past three years. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_120",
"text": " Object detection accuracy as measured by the mean average precision (mAP) has increased 1.9x since the introduction of this task, from 22.6%percent22.622.6\\% mAP in ILSVRC2013 to 43.9%percent43.943.9\\% mAP in ILSVRC2014. However, these results are not directly comparable for two reasons. First, the size of the object detection training data has increased significantly from 2013 to 2014 (Section 3.3). Second, the 43.9%percent43.943.9\\% mAP result was obtained with the addition of the image classification and single-object localization training data. Here we attempt to understand the relative effects of the training set size increase versus algorithmic improvements. All models are evaluated on the same ILSVRC2013-2014 object detection test set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_121",
"text": " First, we quantify the effects of increasing detection training data between the two challenges by comparing the same model trained on ILSVRC2013 detection data versus ILSVRC2014 detection data. The UvA team’s framework from 2013 achieved 22.6%percent22.622.6\\% with ILSVRC2013 data (Table LABEL:table:sub13) and 26.3%percent26.326.3\\% with ILSVRC2014 data and no other modifications.101010Personal communication with members of the UvA team. The absolute increase in mAP was 3.7%percent3.73.7\\%. The RCNN model achieved 31.4%percent31.431.4\\% mAP with ILSVRC2013 detection plus image classification data (Girshick et al.,, 2013) and 34.5%percent34.534.5\\% mAP with ILSVRC2014 detection plus image classification data (Berkeley team in Table LABEL:table:sub14). The absolute increase in mAP by expanding ILSVRC2013 detection data to ILSVRC2014 was 3.1%percent3.13.1\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_122",
"text": " Second, we quantify the effects of adding in the external data for training object detection models. The NEC model in 2013 achieved 19.6%percent19.619.6\\% mAP trained on ILSVRC2013 detection data alone and 20.9%percent20.920.9\\% mAP trained on ILSVRC2013 detection plus classification data (Table LABEL:table:sub13). The absolute increase in mAP was 1.3%percent1.31.3\\%. The UvA team’s best entry in 2014 achieved 32.0%percent32.032.0\\% mAP trained on ILSVRC2014 detection data and 35.4%percent35.435.4\\% mAP trained on ILSVRC2014 detection plus classification data. The absolute increase in mAP was 3.4%percent3.43.4\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_123",
"text": " Thus, we conclude based on the evidence so far that expanding the ILSVRC2013 detection set to the ILSVRC2014 set, as well as adding in additional training data from the classification task, all account for approximately 1−4%1percent41-4\\% in absolute mAP improvement for the models. For comparison, we can also attempt to quantify the effect of algorithmic innovation. The UvA team’s 2013 framework achieved 26.3%percent26.326.3\\% mAP on ILSVRC2014 data as mentioned above, and their improved method in 2014 obtained 32.0%percent32.032.0\\% mAP (Table LABEL:table:sub14). This is 5.8%percent5.85.8\\% absolute increase in mAP over just one year from algorithmic innovation alone. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_124",
"text": " In summary, we conclude that the absolute 21.3%percent21.321.3\\% increase in mAP between winning entries of ILSVRC2013 (22.6%percent22.622.6\\% mAP) and of ILSVRC2014 (43.9%percent43.943.9\\% mAP) is the result of impressive algorithmic innovation and not just a consequence of increased training data. However, increasing the ISLVRC2014 object detection training dataset further is likely to produce additional improvements in detection accuracy for current algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_125",
"text": " One important question to ask is whether results of different submissions to ILSVRC are statistically significantly different from each other. Given the large scale, it is no surprise that even minor differences in accuracy are statistically significant; we seek to quantify exactly how much of a difference is enough. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_126",
"text": " Following the strategy employed by PASCAL VOC (Everingham et al.,, 2014), for each method we obtain a confidence interval of its score using bootstrap sampling. During each bootstrap round, we sample N𝑁N images with replacement from all the available N𝑁N test images and evaluate the performance of the algorithm on those sampled images. This can be done very efficiently by precomputing the accuracy on each image. Given the results of all the bootstrapping rounds we discard the lower and the upper α𝛼\\alpha fraction. The range of the remaining results represents the 1−2α12𝛼1-2\\alpha confidence interval. We run a large number of bootstrapping rounds (from 20,000 until convergence). Table 5 shows the results of the top entries to each task of ILSVRC2012-2014. The winning methods are statistically significantly different from the other methods, even at the 99.9%percent99.999.9\\% level. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
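A minimal sketch of the bootstrap procedure described above, assuming per-image correctness has been precomputed as a 0/1 array; the number of rounds, convergence check, and exact quantile handling in the actual evaluation may differ.

```python
import numpy as np

def bootstrap_accuracy_ci(per_image_correct, alpha=0.0005, rounds=20_000, seed=0):
    """1 - 2*alpha confidence interval for accuracy via bootstrap resampling.
    alpha=0.0005 corresponds to the 99.9% level mentioned in the text."""
    rng = np.random.default_rng(seed)
    correct = np.asarray(per_image_correct, dtype=float)
    n = len(correct)
    idx = rng.integers(0, n, size=(rounds, n))     # resample N images each round
    accs = correct[idx].mean(axis=1)               # accuracy of each bootstrap round
    lo, hi = np.quantile(accs, [alpha, 1.0 - alpha])
    return lo, hi
```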
{
"id": "1409.0575_all_127",
"text": " Besides looking at just the average accuracy across hundreds of object categories and tens of thousands of images, we can also delve deeper to understand where mistakes are being made and where researchers’ efforts should be focused to expedite progress. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_128",
"text": " To do so, in this section we will be analyzing an “optimistic” measurement of state-of-the-art recognition performance instead of focusing on the differences in individual algorithms. For each task and each object class, we compute the best performance of any entry submitted to any ILSVRC2012-2014, including methods using additional training data. Since the test sets have remained the same, we can directly compare all the entries in the past three years to obtain the most “optimistic” measurement of state-of-the-art accuracy on each category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_129",
"text": " For consistency with the object detection metric (higher is better), in this section we will be using image classification and single-object localization accuracy instead of error, where accuracy=1−error𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦1𝑒𝑟𝑟𝑜𝑟accuracy=1-error. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_130",
"text": " Figure 10 shows the distribution of accuracy achieved by the “optimistic” models across the object categories. The image classification model achieves 94.6%percent94.694.6\\% accuracy on average (or 5.4%percent5.45.4\\% error), but there remains a 41.0%percent41.041.0\\% absolute difference inaccuracy between the most and least accurate object class. The single-object localization model achieves 81.5%percent81.581.5\\% accuracy on average (or 18.5%percent18.518.5\\% error), with a 77.0%percent77.077.0\\% range in accuracy across the object classes. The object detection model achieves 44.7%percent44.744.7\\% average precision, with an 84.7%percent84.784.7\\% range across the object classes. It is clear that the ILSVRC dataset is far from saturated: performance on many categories has remained poor despite the strong overall performance of the models. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_131",
"text": " Figures 11 and 12 show the easiest and hardest classes for each task, i.e., classes with the best and worst results obtained with the “optimistic” models. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_132",
"text": " For image classification, 121 out of 1000 object classes have 100%percent100100\\% image classification accuracy according to the optimistic estimate. Figure 11 (top) shows a random set of 10 of them. They contain a variety of classes, such as mammals like “red fox” and animals with distinctive structures like “stingray”. The hardest classes in the image classification task, with accuracy as low as 59.0%percent59.059.0\\%, include metallic and see-through man-made objects, such as “hook” and “water bottle,” the material “velvet” and the highly varied scene class “restaurant.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_133",
"text": " For single-object localization, the 10 easiest classes with 99.0−100%99.0percent10099.0-100\\% accuracy are all mammals and birds. The hardest classes include metallic man-made objects such as “letter opener” and “ladle”, plus thin structures such as “pole” and “spacebar” and highly varied classes such as “wing”. The most challenging class “spacebar” has a only 23.0%percent23.023.0\\% localization accuracy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_134",
"text": " Object detection results are shown in Figure 12. The easiest classes are living organisms such as “dog” and “tiger”, plus “basketball” and “volleyball” with distinctive shape and color, and a somewhat surprising “snowplow.” The easiest class “butterfly” is not yet perfectly detected but is very close with 92.7%percent92.792.7\\% AP. The hardest classes are as expected small thin objects such as “flute” and “nail”, and the highly varied “lamp” and “backpack” classes, with as low as 8.0%percent8.08.0\\% AP. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_135",
"text": " We now take a closer look at the image properties to try to understand why current algorithms perform well on some object classes but not others. One hypothesis is that variation in accuracy comes from the fact that instances of some classes tend to be much smaller in images than instances of other classes, and smaller objects may be harder for computers to recognize. In this section we argue that while accuracy is correlated with object scale in the image, not all variation in accuracy can be accounted for by scale alone. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_136",
"text": " For every object class, we compute its average scale, or the average fraction of image area occupied by an instance of the object class on the ILSVRC2012-2014 validation set. Since the images and object classes in the image classification and single-object localization tasks are the same, we use the bounding box annotations of the single-object localization dataset for both tasks. In that dataset the object classes range from “swimming trunks” with scale of 1.5%percent1.51.5\\% to “spider web” with scale of 85.6%percent85.685.6\\%. In the object detection validation dataset the object classes range from “sunglasses” with scale of 1.3%percent1.31.3\\% to “sofa” with scale of 44.4%percent44.444.4\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_137",
"text": " Figure 13 shows the performance of the “optimistic” method as a function of the average scale of the object in the image. Each dot corresponds to one object class. We observe a very weak positive correlation between object scale and image classification accuracy: ρ=0.14𝜌0.14\\rho=0.14. For single-object localization and object detection the correlation is stronger, at ρ=0.40𝜌0.40\\rho=0.40 and ρ=0.41𝜌0.41\\rho=0.41 respectively. It is clear that not all variation in accuracy can be accounted for by scale alone. Nevertheless, in the next section we will normalize for object scale to ensure that this factor is not affecting our conclusions. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_138",
"text": " Besides considering image-level properties we can also observe how accuracy changes as a function of intrinsic object properties. We define three properties inspired by human vision: the real-world size of the object, whether it’s deformable within instance, and how textured it is. For each property, the object classes are assigned to one of a few bins (listed below). These properties are illustrated in Figure 1. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_139",
"text": " Human subjects annotated each of the 1000 image classification and single-object localization object classes from ILSVRC2012-2014 with these properties. (Russakovsky et al.,, 2013). By construction (see Section 3.3.1), each of the 200 object detection classes is either also one of 1000 object classes or is an ancestor of one or more of the 1000 classes in the ImageNet hierarchy. To compute the values of the properties for each object detection class, we simply average the annotated values of the descendant classes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_140",
"text": " In this section we draw the following conclusions about state-of-the-art recognition accuracy as a function of these object properties: • Real-world size: XS for extra small (e.g. nail), small (e.g. fox), medium (e.g. bookcase), large (e.g. car) or XL for extra large (e.g. church) The image classification and single-object localization “optimistic” models performs better on large and extra large real-world objects than on smaller ones. The “optimistic” object detection model surprisingly performs better on extra small objects than on small or medium ones. • Deformability within instance: Rigid (e.g., mug) or deformable (e.g., water snake) The “optimistic” model on each of the three tasks performs statistically significantly better on deformable objects compared to rigid ones. However, this effect disappears when analyzing natural objects separately from man-made objects. • Amount of texture: none (e.g. punching bag), low (e.g. horse), medium (e.g. sheep) or high (e.g. honeycomb) The “optimistic” model on each of the three tasks is significantly better on objects with at least low level of texture compared to untextured objects. These and other findings are justified and discussed in detail below. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_141",
"text": " We observed in Section 6.3.3 that objects that occupy a larger area in the image tend to be somewhat easier to recognize. To make sure that differences in object scale are not influencing results in this section, we normalize each bin by object scale. We discard object classes with the largest scales from each bin as needed until the average object scale of object classes in each bin across one property is the same (or as close as possible). For real-world size property for example, the resulting average object scale in each of the five bins is 31.6%−31.7%percent31.6percent31.731.6\\%-31.7\\% in the image classification and single-object localization tasks, and 12.9%−13.4%percent12.9percent13.412.9\\%-13.4\\% in the object detection task.111111For rigid versus deformable objects, the average scale in each bin is 34.1%−34.2%percent34.1percent34.234.1\\%-34.2\\% for classification and localization, and 13.5%−13.7%percent13.5percent13.713.5\\%-13.7\\% for detection. For texture, the average scale in each of the four bins is 31.1%−31.3%percent31.1percent31.331.1\\%-31.3\\% for classification and localization, and 12.7%−12.8%percent12.7percent12.812.7\\%-12.8\\% for detection. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
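The scale-normalization step described above (dropping the largest-scale classes from each bin until the average scales match) could look roughly like the following sketch; the bin structure, tolerance, and stopping guard are assumptions for illustration, not the authors' code.

```python
import numpy as np

def normalize_bins_by_scale(bins, tol=1e-3):
    """bins maps a bin name to a list of (class_name, avg_scale) pairs.
    Repeatedly drop the largest-scale class from the bin with the highest
    mean scale until all bin means agree to within tol."""
    bins = {k: sorted(v, key=lambda cs: cs[1]) for k, v in bins.items()}
    def mean_scale(classes):
        return float(np.mean([s for _, s in classes]))
    while True:
        means = {k: mean_scale(v) for k, v in bins.items()}
        if max(means.values()) - min(means.values()) <= tol:
            break
        worst = max(means, key=means.get)   # bin with the largest mean scale
        if len(bins[worst]) <= 1:           # safety guard for this sketch
            break
        bins[worst].pop()                   # discard its largest-scale class
    return bins
```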
{
"id": "1409.0575_all_142",
"text": " Figure 14 shows the average performance of the “optimistic” model on the object classes that fall into each bin for each property. We analyze the results in detail below. Unless otherwise specified, the reported accuracies below are after the scale normalization step. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_143",
"text": " To evaluate statistical significance, we compute the 95%percent9595\\% confidence interval for accuracy using bootstrapping: we repeatedly sample the object classes within the bin with replacement, discard some as needed to normalize by scale, and compute the average accuracy of the “optimistic” model on the remaining classes. We report the 95%percent9595\\% confidence intervals (CI) in parentheses. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_144",
"text": " In Figure 14(top, left) we observe that in the image classification task the “optimistic” model tends to perform significantly better on objects which are larger in the real-world. The classification accuracy is 93.6%−93.9%percent93.6percent93.993.6\\%-93.9\\% on XS, S and M objects compared to 97.0%percent97.097.0\\% on L and 96.4%percent96.496.4\\% on XL objects. Since this is after normalizing for scale and thus can’t be explained by the objects’ size in the image, we conclude that either (1) larger real-world objects are easier for the model to recognize, or (2) larger real-world objects usually occur in images with very distinctive backgrounds. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_145",
"text": " To distinguish between the two cases we look Figure 14(top, middle). We see that in the single-object localization task, the L objects are easy to localize at 82.4%percent82.482.4\\% localization accuracy. XL objects, however, tend to be the hardest to localize with only 73.4%percent73.473.4\\% localization accuracy. We conclude that the appearance of L objects must be easier for the model to learn, while XL objects tend to appear in distinctive backgrounds. The image background make these XL classes easier for the image-level classifier, but the individual instances are difficult to accurately localize. Some examples of L objects are “killer whale,” “schooner,” and “lion,” and some examples of XL objects are “boathouse,” “mosque,” “toyshop” and “steel arch bridge.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_146",
"text": " In Figure 14(top,right) corresponding to the object detection task, the influence of real-world object size is not as apparent. One of the key reasons is that many of the XL and L object classes of the image classification and single-object localization datasets were removed in constructing the detection dataset (Section 3.3.1) since they were not basic categories well-suited for detection. There were only 3 XL object classes remaining in the dataset (“train,” “airplane” and “bus”), and none after scale normalization.We omit them from the analysis. The average precision of XS, S, M objects (44.5%percent44.544.5\\%, 39.0%percent39.039.0\\%, and 38.5%percent38.538.5\\% mAP respectively) is statistically insignificant from average precision on L objects: 95%percent9595\\% confidence interval of L objects is 37.5%−59.5%percent37.5percent59.537.5\\%-59.5\\%. This may be due to the fact that there are only 6 L object classes remaining after scale normalization; all other real-world size bins have at least 18 object classes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_147",
"text": " Finally, it is interesting that performance on XS objects of 44.5%percent44.544.5\\% mAP (CI 40.5%−47.6%percent40.5percent47.640.5\\%-47.6\\%) is statistically significantly better than performance on S or M objects with 39.0%percent39.039.0\\% mAP and 38.5%percent38.538.5\\% mAP respectively. Some examples of XS objects are “strawberry,” “bow tie” and “rugby ball.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_148",
"text": " In Figure 14(second row) it is clear that the “optimistic” model performs statistically significantly worse on rigid objects than on deformable objects. Image classification accuracy is 93.2%percent93.293.2\\% on rigid objects (CI 92.6%−93.8%percent92.6percent93.892.6\\%-93.8\\%), much smaller than 95.7%percent95.795.7\\% on deformable ones. Single-object localization accuracy is 76.2%percent76.276.2\\% on rigid objects (CI 74.9%−77.4%percent74.9percent77.474.9\\%-77.4\\%), much smaller than 84.7%percent84.784.7\\% on deformable ones. Object detection mAP is 40.1%percent40.140.1\\% on rigid objects (CI 37.2%−42.9%percent37.2percent42.937.2\\%-42.9\\%), much smaller than 44.8%percent44.844.8\\% on deformable ones. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_149",
"text": " We can further analyze the effects of deformability after separating object classes into “natural” and “man-made” bins based on the ImageNet hierarchy. Deformability is highly correlated with whether the object is natural or man-made: 0.720.720.72 correlation for image classification and single-object localization classes, and 0.610.610.61 for object detection classes. Figure 14(third row) shows the effect of deformability on performance of the model for man-made and natural objects separately. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_150",
"text": " Man-made classes are significantly harder than natural classes: classification accuracy 92.8%percent92.892.8\\% (CI 92.3%−93.3%percent92.3percent93.392.3\\%-93.3\\%) for man-made versus 97.0%percent97.097.0\\% for natural, localization accuracy 75.5%percent75.575.5\\% (CI 74.3%−76.5%percent74.3percent76.574.3\\%-76.5\\%) for man-made versus 88.5%percent88.588.5\\% for natural, and detection mAP 38.7%percent38.738.7\\% (CI 35.6−41.3%35.6percent41.335.6-41.3\\%) for man-made versus 50.9%percent50.950.9\\% for natural. However, whether the classes are rigid or deformable within this subdivision is no longer significant in most cases. For example, the image classification accuracy is 92.3%percent92.392.3\\% (CI 91.4%−93.1%percent91.4percent93.191.4\\%-93.1\\%) on man-made rigid objects and 91.8%percent91.891.8\\% on man-made deformable objects – not statistically significantly different. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_151",
"text": " There are two cases where the differences in performance are statistically significant. First, for single-object localization, natural deformable objects are easier than natural rigid objects: localization accuracy of 87.9%percent87.987.9\\% (CI 85.9%−90.1%percent85.9percent90.185.9\\%-90.1\\%) on natural deformable objects is higher than 85.8%percent85.885.8\\% on natural rigid objects – falling slightly outside the 95%percent9595\\% confidence interval. This difference in performance is likely because deformable natural animals tend to be easier to localize than rigid natural fruit. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_152",
"text": " Second, for object detection, man-made rigid objects are easier than man-made deformable objects: 38.5%percent38.538.5\\% mAP (CI 35.2%−41.7%percent35.2percent41.735.2\\%-41.7\\%) on man-made rigid objects is higher than 33.0%percent33.033.0\\% mAP on man-made deformable objects. This is because man-made rigid objects include classes like “traffic light” or “car” whereas the man-made deformable objects contain challenging classes like “plastic bag,” “swimming trunks” or “stethoscope.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_153",
"text": " Finally, we analyze the effect that object texture has on the accuracy of the “optimistic” model. Figure 14(fourth row) demonstrates that the model performs better as the amount of texture on the object increases. The most significant difference is between the performance on untextured objects and the performance on objects with low texture. Image classification accuracy is 90.5%percent90.590.5\\% on untextured objects (CI 89.3%−91.6%percent89.3percent91.689.3\\%-91.6\\%), lower than 94.6%percent94.694.6\\% on low-textured objects. Single-object localization accuracy is 71.4%percent71.471.4\\% on untextured objects (CI 69.1%−73.3%percent69.1percent73.369.1\\%-73.3\\%), lower than 80.2%percent80.280.2\\% on low-textured objects. Object detection mAP is 33.2%percent33.233.2\\% on untextured objects (CI 29.5%−35.9%percent29.5percent35.929.5\\%-35.9\\%), lower than 42.9%percent42.942.9\\% on low-textured objects. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_154",
"text": " Texture is correlated with whether the object is natural or man-made, at 0.350.350.35 correlation for image classification and single-object localization, and 0.460.460.46 correlation for object detection. To determine if this is a contributing factor, in Figure 14(bottom row) we break up the object classes into natural and man-made and show the accuracy on objects with no texture versus objects with low texture. We observe that the model is still statistically significantly better on low-textured object classes than on untextured ones, both on man-made and natural object classes independently.121212Natural object detection classes are removed from this analysis because there are only 3 and 13 natural untextured and low-textured classes respectively, and none remain after scale normalization. All other bins contain at least 9 object classes after scale normalization. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_155",
"text": " Recent improvements in state-of-the-art accuracy on the ILSVRC dataset are easier to put in perspective when compared to human-level accuracy. In this section we compare the performance of the leading large-scale image classification method with the performance of humans on this task. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_156",
"text": " To support this comparison, we developed an interface that allowed a human labeler to annotate images with up to five ILSVRC target classes. We compare human errors to those of the winning ILSRC2014 image classification model, GoogLeNet (Section 5.1). For this analysis we use a random sample of 1500 ILSVRC2012-2014 image classification test set images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_157",
"text": " Our web-based annotation interface consists of one test set image and a list of 1000 ILSVRC categories on the side. Each category is described by its title, such as “cowboy boot.” The categories are sorted in the topological order of the ImageNet hierarchy, which places semantically similar concepts nearby in the list. For example, all motor vehicle-related classes are arranged contiguously in the list. Every class category is additionally accompanied by a row of 13 examples images from the training set to allow for faster visual scanning. The user of the interface selects 5 categories from the list by clicking on the desired items. Since our interface is web-based, it allows for natural scrolling through the list, and also search by text. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_158",
"text": " We found the task of annotating images with one of 1000 categories to be an extremely challenging task for an untrained annotator. The most common error that an untrained annotator is susceptible to is a failure to consider a relevant class as a possible label because they are unaware of its existence. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_159",
"text": " Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_160",
"text": " We report results based on experiments with two expert annotators. The first annotator (A1) trained on 500 images and annotated 1500 test images. The second annotator (A2) trained on 100 images and then annotated 258 test images. The average pace of labeling was approximately 1 image per minute, but the distribution is strongly bimodal: some images are quickly recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) may require multiple minutes of concentrated effort. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_161",
"text": " The results are reported in Table 6. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_162",
"text": " Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8%percent6.86.8\\% (recall that the error on full test set of 100,000 images is 6.7%percent6.76.7\\%, as shown in Table LABEL:table:sub14). The human error was estimated to be 5.1%. Thus, annotator A1 achieves a performance superior to GoogLeNet, by approximately 1.7%percent1.71.7\\%. We can analyze the statistical significance of this result under the null hypothesis that they are from the same distribution. In particular, comparing the two proportions with a z-test yields a one-sided p𝑝p-value of p=0.022𝑝0.022p=0.022. Thus, we can conclude that this result is statistically significant at the 95%percent9595\\% confidence level. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
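The one-sided significance test in the passage above can be approximately reproduced with a standard two-proportion z-test. The sketch below assumes the pooled-variance form and equal sample sizes; it is illustrative, not necessarily the exact calculation the authors performed.

```python
from math import sqrt
from statistics import NormalDist

def one_sided_two_proportion_pvalue(err_human, err_model, n):
    """One-sided p-value for the hypothesis that the model's error rate
    exceeds the human's, with both measured on the same n images."""
    pooled = (err_human + err_model) / 2.0            # pooled error rate (equal n)
    se = sqrt(pooled * (1.0 - pooled) * (2.0 / n))
    z = (err_model - err_human) / se
    return 1.0 - NormalDist().cdf(z)

# 5.1% human error vs 6.8% GoogLeNet error on 1500 images -> p of roughly 0.02
print(one_sided_two_proportion_pvalue(0.051, 0.068, 1500))
```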
{
"id": "1409.0575_all_163",
"text": " Our second annotator (A2) trained on a smaller sample of only 100 images and then labeled 258 test set images. As seen in Table 6, the final classification error is significantly worse, at approximately 12.0%percent12.012.0\\% Top-5 error. The majority of these errors (48.8%percent48.848.8\\%) can be attributed to the annotator failing to spot and consider the ground truth label as an option. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_164",
"text": " Thus, we conclude that a significant amount of training time is necessary for a human to achieve competitive performance on ILSVRC. However, with a sufficient amount of training, a human annotator is still able to outperform the GoogLeNet result (p=0.022𝑝0.022p=0.022) by approximately 1.7%percent1.71.7\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_165",
"text": " We also compare the prediction accuracy of the two annotators. Of a total of 204 images that both A1 and A2 labeled, 174 (85%percent8585\\%) were correctly labeled by both A1 and A2, 19 (9%percent99\\%) were correctly labeled by A1 but not A2, 6 (3%percent33\\%) were correctly labeled by A2 but not A1, and 5 (2%percent22\\%) were incorrectly labeled by both. These include 2 images that we consider to be incorrectly labeled in the ground truth. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_166",
"text": " In particular, our results suggest that the human annotators do not exhibit strong overlap in their predictions. We can approximate the performance of an “optimistic” human classifier by assuming an image to be correct if at least one of A1 or A2 correctly labeled the image. On this sample of 204 images, we approximate the error rate of an “optimistic” human annotator at 2.4%percent2.42.4\\%, compared to the GoogLeNet error rate of 4.9%percent4.94.9\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_167",
"text": " We manually inspected both human and GoogLeNet errors to gain an understanding of common error types and how they compare. For purposes of this section, we only discuss results based on the larger sample of 1500 images that were labeled by annotator A1. Examples of representative mistakes are in Figure 15. The analysis and insights below were derived specifically from GoogLeNet predictions, but we suspect that many of the same errors may be present in other methods. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_168",
"text": " 1. Multiple objects. Both GoogLeNet and humans struggle with images that contain multiple ILSVRC classes (usually many more than five), with little indication of which object is the focus of the image. This error is only present in the Classification setting, since every image is constrained to have exactly one correct label. In total, we attribute 24 (24%percent2424\\%) of GoogLeNet errors and 12 (16%percent1616\\%) of human errors to this category. It is worth noting that humans can have a slight advantage in this error type, since it can sometimes be easy to identify the most salient object in the image. 2. Incorrect annotations. We found that approximately 5 out of 1500 images (0.3%percent0.30.3\\%) were incorrectly annotated in the ground truth. This introduces an approximately equal number of errors for both humans and GoogLeNet. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_169",
"text": " 1. Object small or thin. GoogLeNet struggles with recognizing objects that are very small or thin in the image, even if that object is the only object present. Examples of this include an image of a standing person wearing sunglasses, a person holding a quill in their hand, or a small ant on a stem of a flower. We estimate that approximately 22 (21%percent2121\\%) of GoogLeNet errors fall into this category, while none of the human errors do. In other words, in our sample of images, no image was mislabeled by a human because they were unable to identify a very small or thin object. This discrepancy can be attributed to the fact that a human can very effectively leverage context and affordances to accurately infer the identity of small objects (for example, a few barely visible feathers near person’s hand as very likely belonging to a mostly occluded quill). 2. Image filters. Many people enhance their photos with filters that distort the contrast and color distributions of the image. We found that 13 (13%percent1313\\%) of the images that GoogLeNet incorrectly classified contained a filter. Thus, we posit that GoogLeNet is not very robust to these distortions. In comparison, only one image among the human errors contained a filter, but we do not attribute the source of the error to the filter. 3. Abstract representations. GoogLeNet struggles with images that depict objects of interest in an abstract form, such as 3D-rendered images, paintings, sketches, plush toys, or statues. An example is the abstract shape of a bow drawn with a light source in night photography, a 3D-rendered robotic scorpion, or a shadow on the ground, of a child on a swing. We attribute approximately 6 (6%percent66\\%) of GoogLeNet errors to this type of error and believe that humans are significantly more robust, with no such errors seen in our sample. 4. Miscellaneous sources. Additional sources of error that occur relatively infrequently include extreme closeups of parts of an object, unconventional viewpoints such as a rotated image, images that can significantly benefit from the ability to read text (e.g. a featureless container identifying itself as “face powder”), objects with heavy occlusions, and images that depict a collage of multiple images. In general, we found that humans are more robust to all of these types of error. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_170",
"text": " 1. Fine-grained recognition. We found that humans are noticeably worse at fine-grained recognition (e.g. dogs, monkeys, snakes, birds), even when they are in clear view. To understand the difficulty, consider that there are more than 120 species of dogs in the dataset. We estimate that 28 (37%percent3737\\%) of the human errors fall into this category, while only 7 (7%percent77\\%) of GoogLeNet errors do. 2. Class unawareness. The annotator may sometimes be unaware of the ground truth class present as a label option. When pointed out as an ILSVRC class, it is usually clear that the label applies to the image. These errors get progressively less frequent as the annotator becomes more familiar with ILSVRC classes. Approximately 18 (24%percent2424\\%) of the human errors fall into this category. 3. Insufficient training data. Recall that the annotator is only presented with 13 examples of a class under every category name. However, 13 images are not always enough to adequately convey the allowed class variations. For example, a brown dog can be incorrectly dismissed as a “Kelpie” if all examples of a “Kelpie” feature a dog with black coat. However, if more than 13 images were listed it would have become clear that a “Kelpie” may have brown coat. Approximately 4 (5%percent55\\%) of human errors fall into this category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_171",
"text": " We investigated the performance of trained human annotators on a sample of 1500 ILSVRC test set images. Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%percent1.71.7\\% (p=0.022𝑝0.022p=0.022). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_172",
"text": " We expect that some sources of error may be relatively easily eliminated (e.g. robustness to filters, rotations, collages, effectively reasoning over multiple scales), while others may prove more elusive (e.g. identifying abstract representations of objects). On the other hand, a large majority of human errors come from fine-grained categories and class unawareness. We expect that the former can be significantly reduced with fine-grained expert annotators, while the latter could be reduced with more practice and greater familiarity with ILSVRC classes. Our results also hint that human errors are not strongly correlated and that human ensembles may further reduce human error rate. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_173",
"text": " It is clear that humans will soon outperform state-of-the-art ILSVRC image classification models only by use of significant effort, expertise, and time. One interesting follow-up question for future investigation is how computer-level accuracy compares with human-level accuracy on more complex image understanding tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_174",
"text": " In this paper we described the large-scale data collection process of ILSVRC, provided a summary of the most successful algorithms on this data, and analyzed the success and failure modes of these algorithms. In this section we discuss some of the key lessons we learned over the years of ILSVRC, strive to address the key criticisms of the datasets and the challenges we encountered over the years, and conclude by looking forward into the future. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_175",
"text": " The key lesson of collecting the datasets and running the challenges for five years is this: All human intelligence tasks need to be exceptionally well-designed. We learned this lesson both when annotating the dataset using Amazon Mechanical Turk workers (Section 3) and even when trying to evaluate human-level image classification accuracy using expert labelers (Section 6.4). The first iteration of the labeling interface was always bad – generally meaning completely unusable. If there was any inherent ambiguity in the questions posed (and there almost always was), workers found it and accuracy suffered. If there is one piece of advice we can offer to future research, it is to very carefully design, continuously monitor, and extensively sanity-check all crowdsourcing tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_176",
"text": " The other lesson, already well-known to large-scale researchers, is this: Scaling up the dataset always reveals unexpected challenges. From designing complicated multi-step annotation strategies (Section 3.2.1) to having to modify the evaluation procedure (Section 4), we had to continuously adjust to the large-scale setting. On the plus side, of course, the major breakthroughs in object recognition accuracy (Section 5) and the analysis of the strength and weaknesses of current algorithms as a function of object class properties ( Section 6.3) would never have been possible on a smaller scale. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_177",
"text": " In the past five years, we encountered three major criticisms of the ILSVRC dataset and the corresponding challenge: (1) the ILSVRC dataset is insufficiently challenging, (2) the ILSVRC dataset contains annotation errors, and (3) the rules of ILSVRC competition are too restrictive. We discuss these in order. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_178",
"text": " The first criticism is that the objects in the dataset tend to be large and centered in the images, making the dataset insufficiently challenging. In Sections 3.2.2 and 3.3.4 we tried to put those concerns to rest by analyzing the statistics of the ILSVRC dataset and concluding that it is comparable with, and in many cases much more challenging than, the long-standing PASCAL VOC benchmark (Everingham et al.,, 2010). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_179",
"text": " The second is regarding the errors in ground truth labeling. We went through several rounds of in-house post-processing of the annotations obtained using crowdsourcing, and corrected many common sources of errors (e.g., Appendix E). The major remaining source of annotation errors stem from fine-grained object classes, e.g., labelers failing to distinguish different species of birds. This is a tradeoff that had to be made: in order to annotate data at this scale on a reasonable budget, we had to rely on non-expert crowd labelers. However, overall the dataset is encouragingly clean. By our estimates, 99.7%percent99.799.7\\% precision is achieved in the image classification dataset (Sections 3.1.3 and 6.4) and 97.9%percent97.997.9\\% of images that went through the bounding box annotation system have all instances of the target object class labeled with bounding boxes (Section 3.2.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_180",
"text": " The third criticism we encountered is over the rules of the competition regarding using external training data. In ILSVRC2010-2013, algorithms had to only use the provided training and validation set images and annotations for training their models. With the growth of the field of large-scale unsupervised feature learning, however, questions began to arise about what exactly constitutes “outside” data: for example, are image features trained on a large pool of “outside” images in an unsupervised fashion allowed in the competition? After much discussion, in ILSVRC2014 we took the first step towards addressing this problem. We followed the PASCAL VOC strategy and created two tracks in the competition: entries using only “provided” data and entries using “outside” data, meaning any images or annotations not provided as part of ILSVRC training or validation sets. However, in the future this strategy will likely need to be further revised as the computer vision field evolves. For example, competitions can consider allowing the use of any image features which are publically available, even if these features were learned on an external source of data. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_181",
"text": " Given the massive algorithmic breakthroughs over the past five years, we are very eager to see what will happen in the next five years. There are many potential directions of improvement and growth for ILSVRC and other large-scale image datasets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_182",
"text": " First, continuing the trend of moving towards richer image understanding (from image classification to single-object localization to object detection), the next challenge would be to tackle pixel-level object segmentation. The recently released large-scale COCO dataset (Lin et al., 2014b, ) is already taking a step in that direction. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_183",
"text": " Second, as datasets grow even larger in scale, it may become impossible to fully annotate them manually. The scale of ILSVRC is already imposing limits on the manual annotations that are feasible to obtain: for example, we had to restrict the number of objects labeled per image in the image classification and single-object localization datasets. In the future, with billions of images, it will become impossible to obtain even one clean label for every image. Datasets such as Yahoo’s Flickr Creative Commons 100M,131313http://webscope.sandbox.yahoo.com/catalog.php?datatype=i&did=67 released with weak human tags but no centralized annotation, will become more common. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_184",
"text": " The growth of unlabeled or only partially labeled large-scale datasets implies two things. First, algorithms will have to rely more on weakly supervised training data. Second, even evaluation might have to be done after the algorithms make predictions, not before. This means that rather than evaluating accuracy (how many of the test images or objects did the algorithm get right) or recall (how many of the desired images or objects did the algorithm manage to find), both of which require a fully annotated test set, we will be focusing more on precision: of the predictions that the algorithm made, how many were deemed correct by humans. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_185",
"text": " We are eagerly awaiting the future development of object recognition datasets and algorithms, and are grateful that ILSVRC served as a stepping stone along this path. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
}
] |
Is it true then that YOLOv2's classification network is first trained with 416 x 416 images, then finetuned with 448 x 448 images?
|
Yes, YOLOv2 uses a reduced input resolution of 416\times416 [31], and during fine-tuning of the classification network on ImageNet it uses 448\times 448 resolution for 10 epochs [11].
|
[
11,
31
] |
[
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained to a small set of objects. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_1",
"text": " Current object detection datasets are limited compared to datasets for other tasks like classification and tagging. The most common detection datasets contain thousands to hundreds of thousands of images with dozens to hundreds of tags . Classification datasets have millions of images with tens or hundreds of thousands of categories . ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_2",
"text": " We would like detection to scale to level of object classification. However, labelling images for detection is far more expensive than labelling for classification or tagging (tags are often user-supplied for free). Thus we are unlikely to see detection datasets on the same scale as classification datasets in the near future. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_3",
"text": " We propose a new method to harness the large amount of classification data we already have and use it to expand the scope of current detection systems. Our method uses a hierarchical view of object classification that allows us to combine distinct datasets together. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_4",
"text": " We also propose a joint training algorithm that allows us to train object detectors on both detection and classification data. Our method leverages labeled detection images to learn to precisely localize objects while it uses classification images to increase its vocabulary and robustness. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_5",
"text": " Using this method we train YOLO9000, a real-time object detector that can detect over 9000 different object categories. First we improve upon the base YOLO detection system to produce YOLOv2, a state-of-the-art, real-time detector. Then we use our dataset combination method and joint training algorithm to train a model on more than 9000 classes from ImageNet as well as detection data from COCO. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_6",
"text": " All of our code and pre-trained models are available online at http://pjreddie.com/yolo9000/. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_7",
"text": " YOLO suffers from a variety of shortcomings relative to state-of-the-art detection systems. Error analysis of YOLO compared to Fast R-CNN shows that YOLO makes a significant number of localization errors. Furthermore, YOLO has relatively low recall compared to region proposal-based methods. Thus we focus mainly on improving recall and localization while maintaining classification accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_8",
"text": " Computer vision generally trends towards larger, deeper networks . Better performance often hinges on training larger networks or ensembling multiple models together. However, with YOLOv2 we want a more accurate detector that is still fast. Instead of scaling up our network, we simplify the network and then make the representation easier to learn. We pool a variety of ideas from past work with our own novel concepts to improve YOLO’s performance. A summary of results can be found in Table 2. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_9",
"text": " Batch Normalization. Batch normalization leads to significant improvements in convergence while eliminating the need for other forms of regularization . By adding batch normalization on all of the convolutional layers in YOLO we get more than 2% improvement in mAP. Batch normalization also helps regularize the model. With batch normalization we can remove dropout from the model without overfitting. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_10",
"text": " High Resolution Classifier. All state-of-the-art detection methods use classifier pre-trained on ImageNet . Starting with AlexNet most classifiers operate on input images smaller than 256×256256256256\\times 256 . The original YOLO trains the classifier network at 224×224224224224\\times 224 and increases the resolution to 448448448 for detection. This means the network has to simultaneously switch to learning object detection and adjust to the new input resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_11",
"text": " For YOLOv2 we first fine tune the classification network at the full 448×448448448448\\times 448 resolution for 10 epochs on ImageNet. This gives the network time to adjust its filters to work better on higher resolution input. We then fine tune the resulting network on detection. This high resolution classification network gives us an increase of almost 4% mAP. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_12",
"text": " Convolutional With Anchor Boxes. YOLO predicts the coordinates of bounding boxes directly using fully connected layers on top of the convolutional feature extractor. Instead of predicting coordinates directly Faster R-CNN predicts bounding boxes using hand-picked priors . Using only convolutional layers the region proposal network (RPN) in Faster R-CNN predicts offsets and confidences for anchor boxes. Since the prediction layer is convolutional, the RPN predicts these offsets at every location in a feature map. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_13",
"text": " We remove the fully connected layers from YOLO and use anchor boxes to predict bounding boxes. First we eliminate one pooling layer to make the output of the network’s convolutional layers higher resolution. We also shrink the network to operate on 416416416 input images instead of 448×448448448448\\times 448. We do this because we want an odd number of locations in our feature map so there is a single center cell. Objects, especially large objects, tend to occupy the center of the image so it’s good to have a single location right at the center to predict these objects instead of four locations that are all nearby. YOLO’s convolutional layers downsample the image by a factor of 32 so by using an input image of 416416416 we get an output feature map of 13×13131313\\times 13. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_14",
"text": " When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box and the class predictions predict the conditional probability of that class given that there is an object. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_15",
"text": " Using anchor boxes we get a small decrease in accuracy. YOLO only predicts 98 boxes per image but with anchor boxes our model predicts more than a thousand. Without anchor boxes our intermediate model gets 69.569.569.5 mAP with a recall of 81%percent8181\\%. With anchor boxes our model gets 69.269.269.2 mAP with a recall of 88%percent8888\\%. Even though the mAP decreases, the increase in recall means that our model has more room to improve. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_16",
"text": " Dimension Clusters. We encounter two issues with anchor boxes when using them with YOLO. The first is that the box dimensions are hand picked. The network can learn to adjust the boxes appropriately but if we pick better priors for the network to start with we can make it easier for the network to learn to predict good detections. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_17",
"text": " Instead of choosing priors by hand, we run k-means clustering on the training set bounding boxes to automatically find good priors. If we use standard k-means with Euclidean distance larger boxes generate more error than smaller boxes. However, what we really want are priors that lead to good IOU scores, which is independent of the size of the box. Thus for our distance metric we use: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_18",
"text": " d(box,centroid)=1−IOU(box,centroid)𝑑boxcentroid1IOUboxcentroidd(\\text{box},\\text{centroid})=1-\\text{IOU}(\\text{box},\\text{centroid}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_19",
"text": " We run k-means for various values of k𝑘k and plot the average IOU with closest centroid, see Figure 2. We choose k=5𝑘5k=5 as a good tradeoff between model complexity and high recall. The cluster centroids are significantly different than hand-picked anchor boxes. There are fewer short, wide boxes and more tall, thin boxes. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_20",
"text": " We compare the average IOU to closest prior of our clustering strategy and the hand-picked anchor boxes in Table 1. At only 5 priors the centroids perform similarly to 9 anchor boxes with an average IOU of 61.0 compared to 60.9. If we use 9 centroids we see a much higher average IOU. This indicates that using k-means to generate our bounding box starts the model off with a better representation and makes the task easier to learn. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_21",
"text": " Direct location prediction. When using anchor boxes with YOLO we encounter a second issue: model instability, especially during early iterations. Most of the instability comes from predicting the (x,y)𝑥𝑦(x,y) locations for the box. In region proposal networks the network predicts values txsubscript𝑡𝑥t_{x} and tysubscript𝑡𝑦t_{y} and the (x,y)𝑥𝑦(x,y) center coordinates are calculated as: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_22",
"text": " x𝑥\\displaystyle x =(tx∗wa)−xaabsentsubscript𝑡𝑥subscript𝑤𝑎subscript𝑥𝑎\\displaystyle=(t_{x}*w_{a})-x_{a} y𝑦\\displaystyle y =(ty∗ha)−yaabsentsubscript𝑡𝑦subscriptℎ𝑎subscript𝑦𝑎\\displaystyle=(t_{y}*h_{a})-y_{a} ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_23",
"text": " For example, a prediction of tx=1subscript𝑡𝑥1t_{x}=1 would shift the box to the right by the width of the anchor box, a prediction of tx=−1subscript𝑡𝑥1t_{x}=-1 would shift it to the left by the same amount. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_24",
"text": " This formulation is unconstrained so any anchor box can end up at any point in the image, regardless of what location predicted the box. With random initialization the model takes a long time to stabilize to predicting sensible offsets. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_25",
"text": " Instead of predicting offsets we follow the approach of YOLO and predict location coordinates relative to the location of the grid cell. This bounds the ground truth to fall between 00 and 111. We use a logistic activation to constrain the network’s predictions to fall in this range. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_26",
"text": " The network predicts 5 bounding boxes at each cell in the output feature map. The network predicts 5 coordinates for each bounding box, txsubscript𝑡𝑥t_{x}, tysubscript𝑡𝑦t_{y}, twsubscript𝑡𝑤t_{w}, thsubscript𝑡ℎt_{h}, and tosubscript𝑡𝑜t_{o}. If the cell is offset from the top left corner of the image by (cx,cy)subscript𝑐𝑥subscript𝑐𝑦(c_{x},c_{y}) and the bounding box prior has width and height pwsubscript𝑝𝑤p_{w}, phsubscript𝑝ℎp_{h}, then the predictions correspond to: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_27",
"text": " bxsubscript𝑏𝑥\\displaystyle b_{x} =σ(tx)+cxabsent𝜎subscript𝑡𝑥subscript𝑐𝑥\\displaystyle=\\sigma(t_{x})+c_{x} bysubscript𝑏𝑦\\displaystyle b_{y} =σ(ty)+cyabsent𝜎subscript𝑡𝑦subscript𝑐𝑦\\displaystyle=\\sigma(t_{y})+c_{y} bwsubscript𝑏𝑤\\displaystyle b_{w} =pwetwabsentsubscript𝑝𝑤superscript𝑒subscript𝑡𝑤\\displaystyle=p_{w}e^{t_{w}} bhsubscript𝑏ℎ\\displaystyle b_{h} =phethabsentsubscript𝑝ℎsuperscript𝑒subscript𝑡ℎ\\displaystyle=p_{h}e^{t_{h}} Pr(object)∗IOU(b,object)𝑃𝑟object𝐼𝑂𝑈𝑏object\\displaystyle Pr(\\text{object})*IOU(b,\\text{object}) =σ(to)absent𝜎subscript𝑡𝑜\\displaystyle=\\sigma(t_{o}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_28",
"text": " Since we constrain the location prediction the parametrization is easier to learn, making the network more stable. Using dimension clusters along with directly predicting the bounding box center location improves YOLO by almost 5% over the version with anchor boxes. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_29",
"text": " Fine-Grained Features.This modified YOLO predicts detections on a 13×13131313\\times 13 feature map. While this is sufficient for large objects, it may benefit from finer grained features for localizing smaller objects. Faster R-CNN and SSD both run their proposal networks at various feature maps in the network to get a range of resolutions. We take a different approach, simply adding a passthrough layer that brings features from an earlier layer at 26×26262626\\times 26 resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_30",
"text": " The passthrough layer concatenates the higher resolution features with the low resolution features by stacking adjacent features into different channels instead of spatial locations, similar to the identity mappings in ResNet. This turns the 26×26×512262651226\\times 26\\times 512 feature map into a 13×13×20481313204813\\times 13\\times 2048 feature map, which can be concatenated with the original features. Our detector runs on top of this expanded feature map so that it has access to fine grained features. This gives a modest 1% performance increase. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_31",
"text": " Multi-Scale Training. The original YOLO uses an input resolution of 448×448448448448\\times 448. With the addition of anchor boxes we changed the resolution to 416×416416416416\\times 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_32",
"text": " Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320,352,…,608}320352…608\\{320,352,...,608\\}. Thus the smallest option is 320×320320320320\\times 320 and the largest is 608×608608608608\\times 608. We resize the network to that dimension and continue training. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_33",
"text": " This regime forces the network to learn to predict well across a variety of input dimensions. This means the same network can predict detections at different resolutions. The network runs faster at smaller sizes so YOLOv2 offers an easy tradeoff between speed and accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_34",
"text": " At low resolutions YOLOv2 operates as a cheap, fairly accurate detector. At 288×288288288288\\times 288 it runs at more than 90 FPS with mAP almost as good as Fast R-CNN. This makes it ideal for smaller GPUs, high framerate video, or multiple video streams. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_35",
"text": " At high resolution YOLOv2 is a state-of-the-art detector with 78.6 mAP on VOC 2007 while still operating above real-time speeds. See Table 3 for a comparison of YOLOv2 with other frameworks on VOC 2007. Figure 4 ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_36",
"text": " Further Experiments. We train YOLOv2 for detection on VOC 2012. Table 4 shows the comparative performance of YOLOv2 versus other state-of-the-art detection systems. YOLOv2 achieves 73.4 mAP while running far faster than competing methods. We also train on COCO and compare to other methods in Table 5. On the VOC metric (IOU = .5) YOLOv2 gets 44.0 mAP, comparable to SSD and Faster R-CNN. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_37",
"text": " We want detection to be accurate but we also want it to be fast. Most applications for detection, like robotics or self-driving cars, rely on low latency predictions. In order to maximize performance we design YOLOv2 to be fast from the ground up. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_38",
"text": " Most detection frameworks rely on VGG-16 as the base feature extractor . VGG-16 is a powerful, accurate classification network but it is needlessly complex. The convolutional layers of VGG-16 require 30.69 billion floating point operations for a single pass over a single image at 224×224224224224\\times 224 resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_39",
"text": " The YOLO framework uses a custom network based on the Googlenet architecture . This network is faster than VGG-16, only using 8.52 billion operations for a forward pass. However, it’s accuracy is slightly worse than VGG-16. For single-crop, top-5 accuracy at 224×224224224224\\times 224, YOLO’s custom model gets 88.0% ImageNet compared to 90.0% for VGG-16. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_40",
"text": " Darknet-19. We propose a new classification model to be used as the base of YOLOv2. Our model builds off of prior work on network design as well as common knowledge in the field. Similar to the VGG models we use mostly 3×3333\\times 3 filters and double the number of channels after every pooling step . Following the work on Network in Network (NIN) we use global average pooling to make predictions as well as 1×1111\\times 1 filters to compress the feature representation between 3×3333\\times 3 convolutions . We use batch normalization to stabilize training, speed up convergence, and regularize the model . ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_41",
"text": " Our final model, called Darknet-19, has 19 convolutional layers and 5 maxpooling layers. For a full description see Table 6. Darknet-19 only requires 5.58 billion operations to process an image yet achieves 72.9%percent72.972.9\\% top-1 accuracy and 91.2%percent91.291.2\\% top-5 accuracy on ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_42",
"text": " Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.10.10.1, polynomial rate decay with a power of 444, weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9 using the Darknet neural network framework . During training we use standard data augmentation tricks including random crops, rotations, and hue, saturation, and exposure shifts. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_43",
"text": " As discussed above, after our initial training on images at 224×224224224224\\times 224 we fine tune our network at a larger size, 448448448. For this fine tuning we train with the above parameters but for only 10 epochs and starting at a learning rate of 10−3superscript10310^{-3}. At this higher resolution our network achieves a top-1 accuracy of 76.5%percent76.576.5\\% and a top-5 accuracy of 93.3%percent93.393.3\\%. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_44",
"text": " Training for detection. We modify this network for detection by removing the last convolutional layer and instead adding on three 3×3333\\times 3 convolutional layers with 102410241024 filters each followed by a final 1×1111\\times 1 convolutional layer with the number of outputs we need for detection. For VOC we predict 5 boxes with 5 coordinates each and 20 classes per box so 125 filters. We also add a passthrough layer from the final 3×3×512335123\\times 3\\times 512 layer to the second to last convolutional layer so that our model can use fine grain features. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_45",
"text": " We train the network for 160 epochs with a starting learning rate of 10−3superscript10310^{-3}, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_46",
"text": " We propose a mechanism for jointly training on classification and detection data. Our method uses images labelled for detection to learn detection-specific information like bounding box coordinate prediction and objectness as well as how to classify common objects. It uses images with only class labels to expand the number of categories it can detect. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_47",
"text": " During training we mix images from both detection and classification datasets. When our network sees an image labelled for detection we can backpropagate based on the full YOLOv2 loss function. When it sees a classification image we only backpropagate loss from the classification-specific parts of the architecture. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_48",
"text": " This approach presents a few challenges. Detection datasets have only common objects and general labels, like “dog” or “boat”. Classification datasets have a much wider and deeper range of labels. ImageNet has more than a hundred breeds of dog, including “Norfolk terrier”, “Yorkshire terrier”, and “Bedlington terrier”. If we want to train on both datasets we need a coherent way to merge these labels. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_49",
"text": " Most approaches to classification use a softmax layer across all the possible categories to compute the final probability distribution. Using a softmax assumes the classes are mutually exclusive. This presents problems for combining datasets, for example you would not want to combine ImageNet and COCO using this model because the classes “Norfolk terrier” and “dog” are not mutually exclusive. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_50",
"text": " We could instead use a multi-label model to combine the datasets which does not assume mutual exclusion. This approach ignores all the structure we do know about the data, for example that all of the COCO classes are mutually exclusive. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_51",
"text": " Hierarchical classification. ImageNet labels are pulled from WordNet, a language database that structures concepts and how they relate . In WordNet, “Norfolk terrier” and “Yorkshire terrier” are both hyponyms of “terrier” which is a type of “hunting dog”, which is a type of “dog”, which is a “canine”, etc. Most approaches to classification assume a flat structure to the labels however for combining datasets, structure is exactly what we need. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_52",
"text": " WordNet is structured as a directed graph, not a tree, because language is complex. For example a “dog” is both a type of “canine” and a type of “domestic animal” which are both synsets in WordNet. Instead of using the full graph structure, we simplify the problem by building a hierarchical tree from the concepts in ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_53",
"text": " To build this tree we examine the visual nouns in ImageNet and look at their paths through the WordNet graph to the root node, in this case “physical object”. Many synsets only have one path through the graph so first we add all of those paths to our tree. Then we iteratively examine the concepts we have left and add the paths that grow the tree by as little as possible. So if a concept has two paths to the root and one path would add three edges to our tree and the other would only add one edge, we choose the shorter path. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_54",
"text": " The final result is WordTree, a hierarchical model of visual concepts. To perform classification with WordTree we predict conditional probabilities at every node for the probability of each hyponym of that synset given that synset. For example, at the “terrier” node we predict: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_55",
"text": " Pr(Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Yorkshire terrier\\displaystyle Pr(\\text{Yorkshire terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Bedlington terrier\\displaystyle Pr(\\text{Bedlington terrier} |terrier)\\displaystyle|\\text{terrier}) ……\\displaystyle... ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_56",
"text": " If we want to compute the absolute probability for a particular node we simply follow the path through the tree to the root node and multiply to conditional probabilities. So if we want to know if a picture is of a Norfolk terrier we compute: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_57",
"text": " Pr(Norfolk terrier)𝑃𝑟Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier}) =Pr(Norfolk terrier|terrier)absent𝑃𝑟conditionalNorfolk terrierterrier\\displaystyle=Pr(\\text{Norfolk terrier}|\\text{terrier}) ∗Pr(terrier\\displaystyle*Pr(\\text{terrier} |hunting dog)\\displaystyle|\\text{hunting dog}) ∗…absent…\\displaystyle*\\ldots ∗\\displaystyle* ∗Pr(mammal\\displaystyle*Pr(\\text{mammal} |Pr(animal)\\displaystyle|Pr(\\text{animal}) ∗Pr(animal\\displaystyle*Pr(\\text{animal} |physical object)\\displaystyle|\\text{physical object}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_58",
"text": " For classification purposes we assume that the the image contains an object: Pr(physical object)=1𝑃𝑟physical object1Pr(\\text{physical object})=1. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_59",
"text": " To validate this approach we train the Darknet-19 model on WordTree built using the 1000 class ImageNet. To build WordTree1k we add in all of the intermediate nodes which expands the label space from 1000 to 1369. During training we propagate ground truth labels up the tree so that if an image is labelled as a “Norfolk terrier” it also gets labelled as a “dog” and a “mammal”, etc. To compute the conditional probabilities our model predicts a vector of 1369 values and we compute the softmax over all sysnsets that are hyponyms of the same concept, see Figure 5. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_60",
"text": " Using the same training parameters as before, our hierarchical Darknet-19 achieves 71.9%percent71.971.9\\% top-1 accuracy and 90.4%percent90.490.4\\% top-5 accuracy. Despite adding 369 additional concepts and having our network predict a tree structure our accuracy only drops marginally. Performing classification in this manner also has some benefits. Performance degrades gracefully on new or unknown object categories. For example, if the network sees a picture of a dog but is uncertain what type of dog it is, it will still predict “dog” with high confidence but have lower confidences spread out among the hyponyms. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_61",
"text": " This formulation also works for detection. Now, instead of assuming every image has an object, we use YOLOv2’s objectness predictor to give us the value of Pr(physical object)𝑃𝑟physical objectPr(\\text{physical object}). The detector predicts a bounding box and the tree of probabilities. We traverse the tree down, taking the highest confidence path at every split until we reach some threshold and we predict that object class. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_62",
"text": " Dataset combination with WordTree. We can use WordTree to combine multiple datasets together in a sensible fashion. We simply map the categories in the datasets to synsets in the tree. Figure 6 shows an example of using WordTree to combine the labels from ImageNet and COCO. WordNet is extremely diverse so we can use this technique with most datasets. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_63",
"text": " Joint classification and detection. Now that we can combine datasets using WordTree we can train our joint model on classification and detection. We want to train an extremely large scale detector so we create our combined dataset using the COCO detection dataset and the top 9000 classes from the full ImageNet release. We also need to evaluate our method so we add in any classes from the ImageNet detection challenge that were not already included. The corresponding WordTree for this dataset has 9418 classes. ImageNet is a much larger dataset so we balance the dataset by oversampling COCO so that ImageNet is only larger by a factor of 4:1. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_64",
"text": " Using this dataset we train YOLO9000. We use the base YOLOv2 architecture but only 3 priors instead of 5 to limit the output size. When our network sees a detection image we backpropagate loss as normal. For classification loss, we only backpropagate loss at or above the corresponding level of the label. For example, if the label is “dog” we do assign any error to predictions further down in the tree, “German Shepherd” versus “Golden Retriever”, because we do not have that information. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_65",
"text": " When it sees a classification image we only backpropagate classification loss. To do this we simply find the bounding box that predicts the highest probability for that class and we compute the loss on just its predicted tree. We also assume that the predicted box overlaps what would be the ground truth label by at least .3.3.3 IOU and we backpropagate objectness loss based on this assumption. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_66",
"text": " Using this joint training, YOLO9000 learns to find objects in images using the detection data in COCO and it learns to classify a wide variety of these objects using data from ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_67",
"text": " We evaluate YOLO9000 on the ImageNet detection task. The detection task for ImageNet shares on 44 object categories with COCO which means that YOLO9000 has only seen classification data for the majority of the test images, not detection data. YOLO9000 gets 19.7 mAP overall with 16.0 mAP on the disjoint 156 object classes that it has never seen any labelled detection data for. This mAP is higher than results achieved by DPM but YOLO9000 is trained on different datasets with only partial supervision . It also is simultaneously detecting 9000 other object categories, all in real-time. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_68",
"text": " When we analyze YOLO9000’s performance on ImageNet we see it learns new species of animals well but struggles with learning categories like clothing and equipment. New animals are easier to learn because the objectness predictions generalize well from the animals in COCO. Conversely, COCO does not have bounding box label for any type of clothing, only for person, so YOLO9000 struggles to model categories like “sunglasses” or “swimming trunks”. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_69",
"text": " We introduce YOLOv2 and YOLO9000, real-time detection systems. YOLOv2 is state-of-the-art and faster than other detection systems across a variety of detection datasets. Furthermore, it can be run at a variety of image sizes to provide a smooth tradeoff between speed and accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_70",
"text": " YOLO9000 is a real-time framework for detection more than 9000 object categories by jointly optimizing detection and classification. We use WordTree to combine data from various sources and our joint optimization technique to train simultaneously on ImageNet and COCO. YOLO9000 is a strong step towards closing the dataset size gap between detection and classification. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_71",
"text": " Many of our techniques generalize outside of object detection. Our WordTree representation of ImageNet offers a richer, more detailed output space for image classification. Dataset combination using hierarchical classification would be useful in the classification and segmentation domains. Training techniques like multi-scale training could provide benefit across a variety of visual tasks. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_72",
"text": " For future work we hope to use similar techniques for weakly supervised image segmentation. We also plan to improve our detection results using more powerful matching strategies for assigning weak labels to classification data during training. Computer vision is blessed with an enormous amount of labelled data. We will continue looking for ways to bring different sources and structures of data together to make stronger models of the visual world. ",
"title": "YOLO9000: Better, Faster, Stronger"
}
] |
How many different types of experiments are performed to test the proposed models?
|
5 different types of experiments are performed to test the proposed models [30]. They are Generalization over time scales, Experiments on MNIST, Experiments on Natural Image Patches, Out-of-domain Inputs, and Visualizing Features [27].
|
[
30,
27
] |
[
{
"id": "1502.04681_all_0",
"text": " Understanding temporal sequences is important for solving many problems in the AI-set. Recently, recurrent neural networks using the Long Short Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) have been used successfully to perform various supervised sequence learning tasks, such as speech recognition (Graves & Jaitly, 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014), and caption generation for images (Vinyals et al., 2014). They have also been applied on videos for recognizing actions and generating natural language descriptions (Donahue et al., 2014). A general sequence to sequence learning framework was described by Sutskever et al. (2014) in which a recurrent network is used to encode a sequence into a fixed length representation, and then another recurrent network is used to decode a sequence out of that representation. In this work, we apply and extend this framework to learn representations of sequences of images. We choose to work in the unsupervised setting where we only have access to a dataset of unlabelled videos. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_1",
"text": " Videos are an abundant and rich source of visual information and can be seen as a window into the physics of the world we live in, showing us examples of what constitutes objects, how objects move against backgrounds, what happens when cameras move and how things get occluded. Being able to learn a representation that disentangles these factors would help in making intelligent machines that can understand and act in their environment. Additionally, learning good video representations is essential for a number of useful tasks, such as recognizing actions and gestures. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_2",
"text": " Supervised learning has been extremely successful in learning good visual representations that not only produce good results at the task they are trained for, but also transfer well to other tasks and datasets. Therefore, it is natural to extend the same approach to learning video representations. This has led to research in 3D convolutional nets (Ji et al., 2013; Tran et al., 2014), different temporal fusion strategies (Karpathy et al., 2014) and exploring different ways of presenting visual information to convolutional nets (Simonyan & Zisserman, 2014a). However, videos are much higher dimensional entities compared to single images. Therefore, it becomes increasingly difficult to do credit assignment and learn long range structure, unless we collect much more labelled data or do a lot of feature engineering (for example computing the right kinds of flow features) to keep the dimensionality low. The costly work of collecting more labelled data and the tedious work of doing more clever engineering can go a long way in solving particular problems, but this is ultimately unsatisfying as a machine learning solution. This highlights the need for using unsupervised learning to find and represent structure in videos. Moreover, videos have a lot of structure in them (spatial and temporal regularities) which makes them particularly well suited as a domain for building unsupervised learning models. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_3",
"text": " When designing any unsupervised learning model, it is crucial to have the right inductive biases and choose the right objective function so that the learning signal points the model towards learning useful features. In this paper, we use the LSTM Encoder-Decoder framework to learn video representations. The key inductive bias here is that the same operation must be applied at each time step to propagate information to the next step. This enforces the fact that the physics of the world remains the same, irrespective of input. The same physics acting on any state, at any time, must produce the next state. Our model works as follows. The Encoder LSTM runs through a sequence of frames to come up with a representation. This representation is then decoded through another LSTM to produce a target sequence. We consider different choices of the target sequence. One choice is to predict the same sequence as the input. The motivation is similar to that of autoencoders – we wish to capture all that is needed to reproduce the input but at the same time go through the inductive biases imposed by the model. Another option is to predict the future frames. Here the motivation is to learn a representation that extracts all that is needed to extrapolate the motion and appearance beyond what has been observed. These two natural choices can also be combined. In this case, there are two decoder LSTMs – one that decodes the representation into the input sequence and another that decodes the same representation to predict the future. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_4",
"text": " The inputs to the model can, in principle, be any representation of individual video frames. However, for the purposes of this work, we limit our attention to two kinds of inputs. The first is image patches. For this we use natural image patches as well as a dataset of moving MNIST digits. The second is high-level “percepts” extracted by applying a convolutional net trained on ImageNet. These percepts are the states of last (and/or second-to-last) layers of rectified linear hidden states from a convolutional neural net model. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_5",
"text": " In order to evaluate the learned representations we qualitatively analyze the reconstructions and predictions made by the model. For a more quantitative evaluation, we use these LSTMs as initializations for the supervised task of action recognition. If the unsupervised learning model comes up with useful representations then the classifier should be able to perform better, especially when there are only a few labelled examples. We find that this is indeed the case. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_6",
"text": " The first approaches to learning representations of videos in an unsupervised way were based on ICA (van Hateren & Ruderman, 1998; Hurri & Hyvärinen, 2003). Le et al. (2011) approached this problem using multiple layers of Independent Subspace Analysis modules. Generative models for understanding transformations between pairs of consecutive images are also well studied (Memisevic, 2013; Memisevic & Hinton, 2010; Susskind et al., 2011). This work was extended recently by Michalski et al. (2014) to model longer sequences. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_7",
"text": " Recently, Ranzato et al. (2014) proposed a generative model for videos. The model uses a recurrent neural network to predict the next frame or interpolate between frames. In this work, the authors highlight the importance of choosing the right loss function. It is argued that squared loss in input space is not the right objective because it does not respond well to small distortions in input space. The proposed solution is to quantize image patches into a large dictionary and train the model to predict the identity of the target patch. This does solve some of the problems of squared loss but it introduces an arbitrary dictionary size into the picture and altogether removes the idea of patches being similar or dissimilar to one other. Designing an appropriate loss function that respects our notion of visual similarity is a very hard problem (in a sense, almost as hard as the modeling problem we want to solve in the first place). Therefore, in this paper, we use the simple squared loss objective function as a starting point and focus on designing an encoder-decoder RNN architecture that can be used with any loss function. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_8",
"text": " In this section, we describe several variants of our LSTM Encoder-Decoder model. The basic unit of our network is the LSTM cell block. Our implementation of LSTMs follows closely the one discussed by Graves (2013). ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_9",
"text": " In this section we briefly describe the LSTM unit which is the basic building block of our model. The unit is shown in Fig. 1 (reproduced from Graves (2013)). ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_10",
"text": " Each LSTM unit has a cell which has a state ctsubscript𝑐𝑡c_{t} at time t𝑡t. This cell can be thought of as a memory unit. Access to this memory unit for reading or modifying it is controlled through sigmoidal gates – input gate itsubscript𝑖𝑡i_{t}, forget gate ftsubscript𝑓𝑡f_{t} and output gate otsubscript𝑜𝑡o_{t}. The LSTM unit operates as follows. At each time step it receives inputs from two external sources at each of the four terminals (the three gates and the input). The first source is the current frame 𝐱tsubscript𝐱𝑡{{\\bf x}_{t}}. The second source is the previous hidden states of all LSTM units in the same layer 𝐡t−1subscript𝐡𝑡1{\\bf h}_{t-1}. Additionally, each gate has an internal source, the cell state ct−1subscript𝑐𝑡1c_{t-1} of its cell block. The links between a cell and its own gates are called peephole connections. The inputs coming from different sources get added up, along with a bias. The gates are activated by passing their total input through the logistic function. The total input at the input terminal is passed through the tanh non-linearity. The resulting activation is multiplied by the activation of the input gate. This is then added to the cell state after multiplying the cell state by the forget gate’s activation ftsubscript𝑓𝑡f_{t}. The final output from the LSTM unit htsubscriptℎ𝑡h_{t} is computed by multiplying the output gate’s activation otsubscript𝑜𝑡o_{t} with the updated cell state passed through a tanh non-linearity. These updates are summarized for a layer of LSTM units as follows 𝐢tsubscript𝐢𝑡\\displaystyle{\\bf i}_{t} =\\displaystyle= σ(Wxi𝐱t+Whi𝐡t−1+Wci𝐜t−1+𝐛i),𝜎subscript𝑊𝑥𝑖subscript𝐱𝑡subscript𝑊ℎ𝑖subscript𝐡𝑡1subscript𝑊𝑐𝑖subscript𝐜𝑡1subscript𝐛𝑖\\displaystyle\\sigma\\left(W_{xi}{\\bf x}_{t}+W_{hi}{\\bf h}_{t-1}+W_{ci}{\\bf c}_{t-1}+{\\bf b}_{i}\\right), 𝐟tsubscript𝐟𝑡\\displaystyle{\\bf f}_{t} =\\displaystyle= σ(Wxf𝐱t+Whf𝐡t−1+Wcf𝐜t−1+𝐛f),𝜎subscript𝑊𝑥𝑓subscript𝐱𝑡subscript𝑊ℎ𝑓subscript𝐡𝑡1subscript𝑊𝑐𝑓subscript𝐜𝑡1subscript𝐛𝑓\\displaystyle\\sigma\\left(W_{xf}{\\bf x}_{t}+W_{hf}{\\bf h}_{t-1}+W_{cf}{\\bf c}_{t-1}+{\\bf b}_{f}\\right), 𝐜tsubscript𝐜𝑡\\displaystyle{\\bf c}_{t} =\\displaystyle= 𝐟t𝐜t−1+𝐢ttanh(Wxc𝐱t+Whc𝐡t−1+𝐛c),subscript𝐟𝑡subscript𝐜𝑡1subscript𝐢𝑡subscript𝑊𝑥𝑐subscript𝐱𝑡subscript𝑊ℎ𝑐subscript𝐡𝑡1subscript𝐛𝑐\\displaystyle{\\bf f}_{t}{\\bf c}_{t-1}+{\\bf i}_{t}\\tanh\\left(W_{xc}{\\bf x}_{t}+W_{hc}{\\bf h}_{t-1}+{\\bf b}_{c}\\right), 𝐨tsubscript𝐨𝑡\\displaystyle{\\bf o}_{t} =\\displaystyle= σ(Wxo𝐱t+Who𝐡t−1+Wco𝐜t+𝐛o),𝜎subscript𝑊𝑥𝑜subscript𝐱𝑡subscript𝑊ℎ𝑜subscript𝐡𝑡1subscript𝑊𝑐𝑜subscript𝐜𝑡subscript𝐛𝑜\\displaystyle\\sigma\\left(W_{xo}{\\bf x}_{t}+W_{ho}{\\bf h}_{t-1}+W_{co}{\\bf c}_{t}+{\\bf b}_{o}\\right), 𝐡tsubscript𝐡𝑡\\displaystyle{\\bf h}_{t} =\\displaystyle= 𝐨ttanh(𝐜t).subscript𝐨𝑡subscript𝐜𝑡\\displaystyle{\\bf o}_{t}\\tanh({\\bf c}_{t}). ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_11",
"text": " Note that all Wc∙subscript𝑊𝑐∙W_{c\\bullet} matrices are diagonal, whereas the rest are dense. The key advantage of using an LSTM unit over a traditional neuron in an RNN is that the cell state in an LSTM unit sums activities over time. Since derivatives distribute over sums, the error derivatives don’t vanish quickly as they get sent back into time. This makes it easy to do credit assignment over long sequences and discover long-range features. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_12",
"text": " In this section, we describe a model that uses Recurrent Neural Nets (RNNs) made of LSTM units to do unsupervised learning. The model consists of two RNNs – the encoder LSTM and the decoder LSTM as shown in Fig. 2. The input to the model is a sequence of vectors (image patches or features). The encoder LSTM reads in this sequence. After the last input has been read, the decoder LSTM takes over and outputs a prediction for the target sequence. The target sequence is same as the input sequence, but in reverse order. Reversing the target sequence makes the optimization easier because the model can get off the ground by looking at low range correlations. This is also inspired by how lists are represented in LISP. The encoder can be seen as creating a list by applying the cons function on the previously constructed list and the new input. The decoder essentially unrolls this list, with the hidden to output weights extracting the element at the top of the list (car function) and the hidden to hidden weights extracting the rest of the list (cdr function). Therefore, the first element out is the last element in. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_13",
"text": " The decoder can be of two kinds – conditional or unconditioned. A conditional decoder receives the last generated output frame as input, i.e., the dotted input in Fig. 2 is present. An unconditioned decoder does not receive that input. This is discussed in more detail in Sec. 2.4. Fig. 2 shows a single layer LSTM Autoencoder. The architecture can be extend to multiple layers by stacking LSTMs on top of each other. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_14",
"text": " Why should this learn good features? The state of the encoder LSTM after the last input has been read is the representation of the input video. The decoder LSTM is being asked to reconstruct back the input sequence from this representation. In order to do so, the representation must retain information about the appearance of the objects and the background as well as the motion contained in the video. However, an important question for any autoencoder-style model is what prevents it from learning an identity mapping and effectively copying the input to the output. In that case all the information about the input would still be present but the representation will be no better than the input. There are two factors that control this behaviour. First, the fact that there are only a fixed number of hidden units makes it unlikely that the model can learn trivial mappings for arbitrary length input sequences. Second, the same LSTM operation is used to decode the representation recursively. This means that the same dynamics must be applied on the representation at any stage of decoding. This further prevents the model from learning an identity mapping. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_15",
"text": " Another natural unsupervised learning task for sequences is predicting the future. This is the approach used in language models for modeling sequences of words. The design of the Future Predictor Model is same as that of the Autoencoder Model, except that the decoder LSTM in this case predicts frames of the video that come after the input sequence (Fig. 3). Ranzato et al. (2014) use a similar model but predict only the next frame at each time step. This model, on the other hand, predicts a long sequence into the future. Here again we can consider two variants of the decoder – conditional and unconditioned. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_16",
"text": " Why should this learn good features? In order to predict the next few frames correctly, the model needs information about which objects and background are present and how they are moving so that the motion can be extrapolated. The hidden state coming out from the encoder will try to capture this information. Therefore, this state can be seen as a representation of the input sequence. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_17",
"text": " For each of these two models, we can consider two possibilities - one in which the decoder LSTM is conditioned on the last generated frame and the other in which it is not. In the experimental section, we explore these choices quantitatively. Here we briefly discuss arguments for and against a conditional decoder. A strong argument in favour of using a conditional decoder is that it allows the decoder to model multiple modes in the target sequence distribution. Without that, we would end up averaging the multiple modes in the low-level input space. However, this is an issue only if we expect multiple modes in the target sequence distribution. For the LSTM Autoencoder, there is only one correct target and hence a unimodal target distribution. But for the LSTM Future Predictor there is a possibility of multiple targets given an input because even if we assume a deterministic universe, everything needed to predict the future will not necessarily be observed in the input. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_18",
"text": " There is also an argument against using a conditional decoder from the optimization point-of-view. There are strong short-range correlations in video data, for example, most of the content of a frame is same as the previous one. If the decoder was given access to the last few frames while generating a particular frame at training time, it would find it easy to pick up on these correlations. There would only be a very small gradient that tries to fix up the extremely subtle errors that require long term knowledge about the input sequence. In an unconditioned decoder, this input is removed and the model is forced to look for information deep inside the encoder. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_19",
"text": " The two tasks – reconstructing the input and predicting the future can be combined to create a composite model as shown in Fig. 4. Here the encoder LSTM is asked to come up with a state from which we can both predict the next few frames as well as reconstruct the input. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_20",
"text": " This composite model tries to overcome the shortcomings that each model suffers on its own. A high-capacity autoencoder would suffer from the tendency to learn trivial representations that just memorize the inputs. However, this memorization is not useful at all for predicting the future. Therefore, the composite model cannot just memorize information. On the other hand, the future predictor suffers form the tendency to store information only about the last few frames since those are most important for predicting the future, i.e., in order to predict vtsubscript𝑣𝑡v_{t}, the frames {vt−1,…,vt−k}subscript𝑣𝑡1…subscript𝑣𝑡𝑘\\{v_{t-1},\\ldots,v_{t-k}\\} are much more important than v0subscript𝑣0v_{0}, for some small value of k𝑘k. Therefore the representation at the end of the encoder will have forgotten about a large part of the input. But if we ask the model to also predict all of the input sequence, then it cannot just pay attention to the last few frames. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_21",
"text": " We design experiments to accomplish the following objectives: • Get a qualitative understanding of what the LSTM learns to do. • Measure the benefit of initializing networks for supervised learning tasks with the weights found by unsupervised learning, especially with very few training examples. • Compare the different proposed models - Autoencoder, Future Predictor and Composite models and their conditional variants. • Compare with state-of-the-art action recognition benchmarks. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_22",
"text": " We use the UCF-101 and HMDB-51 datasets for supervised tasks. The UCF-101 dataset (Soomro et al., 2012) contains 13,320 videos with an average length of 6.2 seconds belonging to 101 different action categories. The dataset has 3 standard train/test splits with the training set containing around 9,500 videos in each split (the rest are test). The HMDB-51 dataset (Kuehne et al., 2011) contains 5100 videos belonging to 51 different action categories. Mean length of the videos is 3.2 seconds. This also has 3 train/test splits with 3570 videos in the training set and rest in test. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_23",
"text": " To train the unsupervised models, we used a subset of the Sports-1M dataset (Karpathy et al., 2014), that contains 1 million YouTube clips. Even though this dataset is labelled for actions, we did not do any supervised experiments on it because of logistical constraints with working with such a huge dataset. We instead collected 300 hours of video by randomly sampling 10 second clips from the dataset. It is possible to collect better samples if instead of choosing randomly, we extracted videos where a lot of motion is happening and where there are no shot boundaries. However, we did not do so in the spirit of unsupervised learning, and because we did not want to introduce any unnatural bias in the samples. We also used the supervised datasets (UCF-101 and HMDB-51) for unsupervised training. However, we found that using them did not give any significant advantage over just using the YouTube videos. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_24",
"text": " We extracted percepts using the convolutional neural net model of Simonyan & Zisserman (2014b). The videos have a resolution of 240 ×\\times 320 and were sampled at almost 30 frames per second. We took the central 224 ×\\times 224 patch from each frame and ran it through the convnet. This gave us the RGB percepts. Additionally, for UCF-101, we computed flow percepts by extracting flows using the Brox method and training the temporal stream convolutional network as described by Simonyan & Zisserman (2014a). We found that the fc6 features worked better than fc7 for single frame classification using both RGB and flow percepts. Therefore, we used the 4096-dimensional fc6 layer as the input representation of our data. Besides these percepts, we also trained the proposed models on 32 ×\\times 32 patches of pixels. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_25",
"text": " All models were trained using backprop on a single NVIDIA Titan GPU. A two layer 2048 unit Composite model that predicts 13 frames and reconstructs 16 frames took 18-20 hours to converge on 300 hours of percepts. We initialized weights by sampling from a uniform distribution whose scale was set to 1/sqrt(fan-in). Biases at all the gates were initialized to zero. Peep-hole connections were initialized to zero. The supervised classifiers trained on 16 frames took 5-15 minutes to converge. The code can be found at https://github.com/emansim/unsupervised-videos. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_26",
"text": " The aim of this set of experiments to visualize the properties of the proposed models. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_27",
"text": " Experiments on MNIST We first trained our models on a dataset of moving MNIST digits. In this dataset, each video was 20 frames long and consisted of two digits moving inside a 64 ×\\times 64 patch. The digits were chosen randomly from the training set and placed initially at random locations inside the patch. Each digit was assigned a velocity whose direction was chosen uniformly randomly on a unit circle and whose magnitude was also chosen uniformly at random over a fixed range. The digits bounced-off the edges of the 64 ×\\times 64 frame and overlapped if they were at the same location. The reason for working with this dataset is that it is infinite in size and can be generated quickly on the fly. This makes it possible to explore the model without expensive disk accesses or overfitting issues. It also has interesting behaviours due to occlusions and the dynamics of bouncing off the walls. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_28",
"text": " We first trained a single layer Composite Model. Each LSTM had 2048 units. The encoder took 10 frames as input. The decoder tried to reconstruct these 10 frames and the future predictor attempted to predict the next 10 frames. We used logistic output units with a cross entropy loss function. Fig. 5 shows two examples of running this model. The true sequences are shown in the first two rows. The next two rows show the reconstruction and future prediction from the one layer Composite Model. It is interesting to note that the model figures out how to separate superimposed digits and can model them even as they pass through each other. This shows some evidence of disentangling the two independent factors of variation in this sequence. The model can also correctly predict the motion after bouncing off the walls. In order to see if adding depth helps, we trained a two layer Composite Model, with each layer having 2048 units. We can see that adding depth helps the model make better predictions. Next, we changed the future predictor by making it conditional. We can see that this model makes sharper predictions. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_29",
"text": " Experiments on Natural Image Patches Next, we tried to see if our models can also work with natural image patches. For this, we trained the models on sequences of 32 ×\\times 32 natural image patches extracted from the UCF-101 dataset. In this case, we used linear output units and the squared error loss function. The input was 16 frames and the model was asked to reconstruct the 16 frames and predict the future 13 frames. Fig. 6 shows the results obtained from a two layer Composite model with 2048 units. We found that the reconstructions and the predictions are both very blurry. We then trained a bigger model with 4096 units. The outputs from this model are also shown in Fig. 6. We can see that the reconstructions get much sharper. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_30",
"text": " Generalization over time scales In the next experiment, we test if the model can work at time scales that are different than what it was trained on. We take a one hidden layer unconditioned Composite Model trained on moving MNIST digits. The model has 2048 LSTM units and looks at a 64 ×\\times 64 input. It was trained on input sequences of 10 frames to reconstruct those 10 frames as well as predict 10 frames into the future. In order to test if the future predictor is able to generalize beyond 10 frames, we let the model run for 100 steps into the future. Fig. 7(a) shows the pattern of activity in the LSTM units of the future predictor pathway for a randomly chosen test input. It shows the activity at each of the three sigmoidal gates (input, forget, output), the input (after the tanh non-linearity, before being multiplied by the input gate), the cell state and the final output (after being multiplied by the output gate). Even though the units are ordered randomly along the vertical axis, we can see that the dynamics has a periodic quality to it. The model is able to generate persistent motion for long periods of time. In terms of reconstruction, the model only outputs blobs after the first 15 frames, but the motion is relatively well preserved. More results, including long range future predictions over hundreds of time steps can see been at http://www.cs.toronto.edu/~nitish/unsupervised_video. To show that setting up a periodic behaviour is not trivial, Fig. 7(b) shows the activity from a randomly initialized future predictor. Here, the LSTM state quickly converges and the outputs blur completely. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_31",
"text": " Out-of-domain Inputs Next, we test this model’s ability to deal with out-of-domain inputs. For this, we test the model on sequences of one and three moving digits. The model was trained on sequences of two moving digits, so it has never seen inputs with just one digit or three digits. Fig. 8 shows the reconstruction and future prediction results. For one moving digit, we can see that the model can do a good job but it really tries to hallucinate a second digit overlapping with the first one. The second digit shows up towards the end of the future reconstruction. For three digits, the model merges digits into blobs. However, it does well at getting the overall motion right. This highlights a key drawback of modeling entire frames of input in a single pass. In order to model videos with variable number of objects, we perhaps need models that not only have an attention mechanism in place, but can also learn to execute themselves a variable number of times and do variable amounts of computation. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_32",
"text": " Visualizing Features Next, we visualize the features learned by this model. Fig. 9 shows the weights that connect each input frame to the encoder LSTM. There are four sets of weights. One set of weights connects the frame to the input units. There are three other sets, one corresponding to each of the three gates (input, forget and output). Each weight has a size of 64 ×\\times 64. A lot of features look like thin strips. Others look like higher frequency strips. It is conceivable that the high frequency features help in encoding the direction and velocity of motion. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_33",
"text": " Fig. 10 shows the output features from the two LSTM decoders of a Composite Model. These correspond to the weights connecting the LSTM output units to the output layer. They appear to be somewhat qualitatively different from the input features shown in Fig. 9. There are many more output features that are local blobs, whereas those are rare in the input features. In the output features, the ones that do look like strips are much shorter than those in the input features. One way to interpret this is the following. The model needs to know about motion (which direction and how fast things are moving) from the input. This requires precise information about location (thin strips) and velocity (high frequency strips). But when it is generating the output, the model wants to hedge its bets so that it does not suffer a huge loss for predicting things sharply at the wrong place. This could explain why the output features have somewhat bigger blobs. The relative shortness of the strips in the output features can be explained by the fact that in the inputs, it does not hurt to have a longer feature than what is needed to detect a location because information is coarse-coded through multiple features. But in the output, the model may not want to put down a feature that is bigger than any digit because other units will have to conspire to correct for it. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_34",
"text": " The aim of this set of experiments is to see if the features learned by unsupervised learning can help improve performance on supervised tasks. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_35",
"text": " We trained a two layer Composite Model with 2048 hidden units with no conditioning on either decoders. The model was trained on percepts extracted from 300 hours of YouTube data. The model was trained to autoencode 16 frames and predict the next 13 frames. We initialize an LSTM classifier with the weights learned by the encoder LSTM from this model. The classifier is shown in Fig. 11. The output from each LSTM in the second layer goes into a softmax classifier that makes a prediction about the action being performed at each time step. Since only one action is being performed in each video in the datasets we consider, the target is the same at each time step. At test time, the predictions made at each time step are averaged. To get a prediction for the entire video, we average the predictions from all 16 frame blocks in the video with a stride of 8 frames. Using a smaller stride did not improve results. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_36",
"text": " The baseline for comparing these models is an identical LSTM classifier but with randomly initialized weights. All classifiers used dropout regularization, where we dropped activations as they were communicated across layers but not through time within the same LSTM as proposed in Zaremba et al. (2014). We emphasize that this is a very strong baseline and does significantly better than just using single frames. Using dropout was crucial in order to train good baseline models especially with very few training examples. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_37",
"text": " Fig. 12 compares three models - single frame classifier (logistic regression), baseline LSTM classifier and the LSTM classifier initialized with weights from the Composite Model as the number of labelled videos per class is varied. Note that having one labelled video means having many labelled 16 frame blocks. We can see that for the case of very few training examples, unsupervised learning gives a substantial improvement. For example, for UCF-101, the performance improves from 29.6% to 34.3% when training on only one labelled video. As the size of the labelled dataset grows, the improvement becomes smaller. Even for the full UCF-101 dataset we still get a considerable improvement from 74.5% to 75.8%. On HMDB-51, the improvement is from 42.8% to 44.0% for the full dataset (70 videos per class) and 14.4% to 19.1% for one video per class. Although, the improvement in classification by using unsupervised learning was not as big as we expected, we still managed to yield an additional improvement over a strong baseline. We discuss some avenues for improvements later. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_38",
"text": " We further ran similar experiments on the optical flow percepts extracted from the UCF-101 dataset. A temporal stream convolutional net, similar to the one proposed by Simonyan & Zisserman (2014b), was trained on single frame optical flows as well as on stacks of 10 optical flows. This gave an accuracy of 72.2% and 77.5% respectively. Here again, our models took 16 frames as input, reconstructed them and predicted 13 frames into the future. LSTMs with 128 hidden units improved the accuracy by 2.1% to 74.3% for the single frame case. Bigger LSTMs did not improve results. By pretraining the LSTM, we were able to further improve the classification to 74.9% (±0.1plus-or-minus0.1\\pm 0.1). For stacks of 10 frames we improved very slightly to 77.7%. These results are summarized in Table 1. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_39",
"text": " The aim of this set of experiments is to compare the different variants of the model proposed in this paper. Since it is always possible to get lower reconstruction error by copying the inputs, we cannot use input reconstruction error as a measure of how good a model is doing. However, we can use the error in predicting the future as a reasonable measure of how good the model is doing. Besides, we can use the performance on supervised tasks as a proxy for how good the unsupervised model is doing. In this section, we present results from these two analyses. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_40",
"text": " Future prediction results are summarized in Table 2. For MNIST we compute the cross entropy of the predictions with respect to the ground truth, both of which are 64 ×\\times 64 patches. For natural image patches, we compute the squared loss. We see that the Composite Model always does a better job of predicting the future compared to the Future Predictor. This indicates that having the autoencoder along with the future predictor to force the model to remember more about the inputs actually helps predict the future better. Next, we can compare each model with its conditional variant. Here, we find that the conditional models perform better, as was also noted in Fig. 5. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_41",
"text": " Next, we compare the models using performance on a supervised task. Table 3 shows the performance on action recognition achieved by finetuning different unsupervised learning models. Besides running the experiments on the full UCF-101 and HMDB-51 datasets, we also ran the experiments on small subsets of these to better highlight the case where we have very few training examples. We find that all unsupervised models improve over the baseline LSTM which is itself well-regularized by using dropout. The Autoencoder model seems to perform consistently better than the Future Predictor. The Composite model which combines the two does better than either one alone. Conditioning on the generated inputs does not seem to give a clear advantage over not doing so. The Composite Model with a conditional future predictor works the best, although its performance is almost same as that of the Composite Model. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_42",
"text": " Finally, we compare our models to the state-of-the-art action recognition results. The performance is summarized in Table 4. The table is divided into three sets. The first set compares models that use only RGB data (single or multiple frames). The second set compares models that use explicitly computed flow features only. Models in the third set use both. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_43",
"text": " On RGB data, our model performs at par with the best deep models. It performs 3% better than the LRCN model that also used LSTMs on top of convnet features111However, the improvement is only partially from unsupervised learning, since we used a better convnet model.. Our model performs better than C3D features that use a 3D convolutional net. However, when the C3D features are concatenated with fc6 percepts, they do slightly better than our model. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_44",
"text": " The improvement for flow features over using a randomly initialized LSTM network is quite small. We believe this is atleast partly due to the fact that the flow percepts already capture a lot of the motion information that the LSTM would otherwise discover. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_45",
"text": " When we combine predictions from the RGB and flow models, we obtain 84.3 accuracy on UCF-101. We believe further improvements can be made by running the model over different patch locations and mirroring the patches. Also, our model can be applied deeper inside the convnet instead of just at the top-level. That can potentially lead to further improvements. In this paper, we focus on showing that unsupervised training helps consistently across both datasets and across different sized training sets. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_46",
"text": " We proposed models based on LSTMs that can learn good video representations. We compared them and analyzed their properties through visualizations. Moreover, we managed to get an improvement on supervised tasks. The best performing model was the Composite Model that combined an autoencoder and a future predictor. Conditioning on generated outputs did not have a significant impact on the performance for supervised tasks, however it made the future predictions look slightly better. The model was able to persistently generate motion well beyond the time scales it was trained for. However, it lost the precise object features rapidly after the training time scale. The features at the input and output layers were found to have some interesting properties. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
},
{
"id": "1502.04681_all_47",
"text": " To further get improvements for supervised tasks, we believe that the model can be extended by applying it convolutionally across patches of the video and stacking multiple layers of such models. Applying this model in the lower layers of a convolutional net could help extract motion information that would otherwise be lost across max-pooling layers. In our future work, we plan to build models based on these autoencoders from the bottom up instead of applying them only to percepts. ",
"title": "Unsupervised Learning of Video Representations using LSTMs"
}
] |
In language models, which method, batch normalization or dropout, would be better for preventing overfitting?
|
According to this work, without dropout, a vanilla LM can run the risk of overfitting, which decreases performance [44].
|
[
44
] |
[
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_1",
"text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_2",
"text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_3",
"text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_4",
"text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_5",
"text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_6",
"text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_7",
"text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_8",
"text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_9",
"text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_10",
"text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_11",
"text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_12",
"text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_13",
"text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_14",
"text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_15",
"text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_16",
"text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_17",
"text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_18",
"text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_19",
"text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_20",
"text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: cut=⌊T⋅cut_frac⌋p={t/cut,ift<cut1−t−cutcut⋅(1/cut_frac−1),otherwiseηt=ηmax⋅1+p⋅(ratio−1)ratio𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., cut_frac𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, cut𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, ratio𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηmaxsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use cut_frac=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, ratio=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηmax=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_21",
"text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_22",
"text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_23",
"text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_24",
"text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_25",
"text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_26",
"text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_27",
"text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_28",
"text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_29",
"text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_30",
"text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_31",
"text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_32",
"text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_33",
"text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_34",
"text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_35",
"text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_36",
"text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_37",
"text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_38",
"text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_39",
"text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_40",
"text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_41",
"text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_42",
"text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_43",
"text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_44",
"text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_45",
"text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_46",
"text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
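As a concrete illustration of the per-layer learning rates compared above, here is a minimal Python sketch of discriminative fine-tuning (‘Discr’): each layer gets its own learning rate, decaying from the top layer downwards. The top-layer rate 0.01 matches the η^L reported in the passage; the decay factor of 2.6 is an assumed value (the one commonly quoted for ULMFiT), not stated in this excerpt.

```python
# Sketch of discriminative fine-tuning ('Discr'): per-layer learning rates
# decaying from the top layer downwards. top_lr=0.01 follows the passage;
# decay=2.6 is an assumed factor, not given in this excerpt.

def discriminative_lrs(num_layers: int, top_lr: float = 0.01, decay: float = 2.6):
    """Return per-layer learning rates, index 0 = lowest layer."""
    return [top_lr / (decay ** (num_layers - 1 - layer)) for layer in range(num_layers)]

if __name__ == "__main__":
    for layer, lr in enumerate(discriminative_lrs(4)):
        print(f"layer {layer}: lr = {lr:.5f}")
```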
{
"id": "1801.06146_all_47",
"text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_48",
"text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_49",
"text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_50",
"text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_51",
"text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_52",
"text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_53",
"text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_54",
"text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
}
] |
How is the inversion of text-guided diffusion models different from the inversion of GANs?
|
Inversion means finding the initial noise vector that reproduces a given input image; it has been studied extensively for GANs but has not yet been fully addressed for text-guided diffusion models [33].
|
[
33
] |
[
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained on extremely large language-image datasets and use state-of-the-art image generative models including auto-regressive and diffusion models. However, these models do not provide simple editing means, and generally lack control over specific semantic regions of a given image. In particular, even the slightest change in the textual prompt may lead to a completely different output image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_1",
"text": " To circumvent this, LLI-based methods (28, 4, 33) require the user to explicitly mask a part of the image to be inpainted, and drive the edited image to change in the masked area only, while matching the background of the original image. This approach has provided appealing results, however, the masking procedure is cumbersome, hampering quick and intuitive text-driven editing. Moreover, masking the image content removes important structural information, which is completely ignored in the inpainting process. Therefore, some editing capabilities are out of the inpainting scope, such as modifying the texture of a specific object. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_2",
"text": " In this paper, we introduce an intuitive and powerful textual editing method to semantically edit images in pre-trained text-conditioned diffusion models via Prompt-to-Prompt manipulations. To do so, we dive deep into the cross-attention layers and explore their semantic strength as a handle to control the generated image. Specifically, we consider the internal cross-attention maps, which are high-dimensional tensors that bind pixels and tokens extracted from the prompt text. We find that these maps contain rich semantic relations which critically affect the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_3",
"text": " Our key idea is that we can edit images by injecting the cross-attention maps during the diffusion process, controlling which pixels attend to which tokens of the prompt text during which diffusion steps. To apply our method to various creative editing applications, we show several methods to control the cross-attention maps through a simple and semantic interface (see fig. 1). The first is to change a single token’s value in the prompt (e.g., “dog” to “cat”), while fixing the cross-attention maps, to preserve the scene composition. The second is to globally edit an image, e.g., change the style, by adding new words to the prompt and freezing the attention on previous tokens, while allowing new attention to flow to the new tokens. The third is to amplify or attenuate the semantic effect of a word in the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_4",
"text": " Our approach constitutes an intuitive image editing interface through editing only the textual prompt, therefore called Prompt-to-Prompt. This method enables various editing tasks, which are challenging otherwise, and does not requires model training, fine-tuning, extra data, or optimization. Throughout our analysis, we discover even more control over the generation process, recognizing a trade-off between the fidelity to the edited prompt and the source image. We even demonstrate that our method can be applied to real images by using an existing inversion process. Our experiments and numerous results show that our method enables seamless editing in an intuitive text-based manner over extremely diverse images. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_5",
"text": " Image editing is one of the most fundamental tasks in computer graphics, encompassing the process of modifying an input image through the use of an auxiliary input, such as a label, scribble, mask, or reference image. A specifically intuitive way to edit an image is through textual prompts provided by the user. Recently, text-driven image manipulation has achieved significant progress using GANs (15, 8, 19, 20, 21), which are known for their high-quality generation, in tandem with CLIP , which consists of a semantically rich joint image-text representation, trained over millions of text-image pairs. Seminal works (29, 14, 46, 2) which combined these components were revolutionary, since they did not require extra manual labor, and produced highly realistic manipulations using text only. Bau et al. further demonstrated how to use masks provided by the user, to localize the text-based editing and restrict the change to a specific spatial region. However, while GAN-based image editing approaches succeed on highly-curated datasets , e.g., human faces, they struggle over large and diverse datasets. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_6",
"text": " To obtain more expressive generation capabilities, Crowson et al. use VQ-GAN , trained over diverse data, as a backbone. Other works (5, 22) exploit the recent Diffusion models (17, 39, 41, 17, 40, 36), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . Kim et al. show how to perform global changes, whereas Avrahami et al. successfully perform local manipulations using user-provided masks for guidance. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_7",
"text": " While most works that require only text (i.e., no masks) are limited to global editing (9, 23), Bar-Tal et al. proposed a text-based localized editing technique without using any mask, showing impressive results. Yet, their techniques mainly allow changing textures, but not modifying complex structures, such as changing a bicycle to a car. Moreover, unlike our method, their approach requires training a network for each input. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_8",
"text": " Numerous works (11, 16, 42, 25, 26, 30, 31, 34, 49, 9, 13, 36) significantly advanced the generation of images conditioned on plain text, known as text-to-image synthesis. Several large-scale text-image models have recently emerged, such as Imagen , DALL-E2 , and Parti , demonstrating unprecedented semantic generation. However, these models do not provide control over a generated image, specifically using text guidance only. Changing a single word in the original prompt associated with the image often leads to a completely different outcome. For instance, adding the adjective “white” to “dog” often changes the dog’s shape. To overcome this, several works (28, 4) assume that the user provides a mask to restrict the area in which the changes are applied. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_9",
"text": " Unlike previous works, our method requires textual input only, by using the spatial information from the internal layers of the generative model itself. This offers the user a much more intuitive editing experience of modifying local or global details by merely modifying the text prompt. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_10",
"text": " Let ℐℐ\\mathcal{I} be an image which was generated by a text-guided diffusion model using the text prompt 𝒫𝒫\\mathcal{P} and a random seed s𝑠s. Our goal is editing the input image guided only by the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}, resulting in an edited image ℐ∗superscriptℐ\\mathcal{I}^{*}. For example, consider an image generated from the prompt “my new bicycle”, and assume that the user wants to edit the color of the bicycle, its material, or even replace it with a scooter while preserving the appearance and structure of the original image. An intuitive interface for the user is to directly change the text prompt by further describing the appearance of the bikes, or replacing it with another word. As opposed to previous works, we wish to avoid relying on any user-defined mask to assist or signify where the edit should occur. A simple, but an unsuccessful attempt is to fix the internal randomness and regenerate using the edited text prompt. Unfortunately, as fig. 2 shows, this results in a completely different image with a different structure and composition. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_11",
"text": " Our key observation is that the structure and appearances of the generated image depend not only on the random seed, but also on the interaction between the pixels to the text embedding through the diffusion process. By modifying the pixel-to-text interaction that occurs in cross-attention layers, we provide Prompt-to-Prompt image editing capabilities. More specifically, injecting the cross-attention maps of the input image ℐℐ\\mathcal{I} enables us to preserve the original composition and structure. In section 3.1, we review how cross-attention is used, and in section 3.2 we describe how to exploit the cross-attention for editing. For additional background on diffusion models, please refer to appendix A. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_12",
"text": " We use the Imagen text-guided synthesis model as a backbone. Since the composition and geometry are mostly determined at the 64×64646464\\times 64 resolution, we only adapt the text-to-image diffusion model, using the super-resolution process as is. Recall that each diffusion step t𝑡t consists of predicting the noise ϵitalic-ϵ\\epsilon from a noisy image ztsubscript𝑧𝑡z_{t} and text embedding ψ(𝒫)𝜓𝒫\\psi(\\mathcal{P}) using a U-shaped network . At the final step, this process yields the generated image ℐ=z0ℐsubscript𝑧0\\mathcal{I}=z_{0}. Most importantly, the interaction between the two modalities occurs during the noise prediction, where the embeddings of the visual and textual features are fused using Cross-attention layers that produce spatial attention maps for each textual token. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_13",
"text": " More formally, as illustrated in fig. 3(Top), the deep spatial features of the noisy image ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}) are projected to a query matrix Q=ℓQ(ϕ(zt))𝑄subscriptℓ𝑄italic-ϕsubscript𝑧𝑡Q=\\ell_{Q}(\\phi(z_{t})), and the textual embedding is projected to a key matrix K=ℓK(ψ(𝒫))𝐾subscriptℓ𝐾𝜓𝒫K=\\ell_{K}(\\psi(\\mathcal{P})) and a value matrix V=ℓV(ψ(𝒫))𝑉subscriptℓ𝑉𝜓𝒫V=\\ell_{V}(\\psi(\\mathcal{P})), via learned linear projections ℓQ,ℓK,ℓVsubscriptℓ𝑄subscriptℓ𝐾subscriptℓ𝑉\\ell_{Q},\\ell_{K},\\ell_{V}. The attention maps are then M=Softmax(QKTd),𝑀Softmax𝑄superscript𝐾𝑇𝑑M=\\text{Softmax}\\left(\\frac{QK^{T}}{\\sqrt{d}}\\right), (1) where the cell Mijsubscript𝑀𝑖𝑗M_{ij} defines the weight of the value of the j𝑗j-th token on the pixel i𝑖i, and where d𝑑d is the latent projection dimension of the keys and queries. Finally, the cross-attention output is defined to be ϕ^(zt)=MV^italic-ϕsubscript𝑧𝑡𝑀𝑉\\widehat{\\phi}\\left(z_{t}\\right)=MV, which is then used to update the spatial features ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
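A minimal numpy sketch of the cross-attention computation described in the passage above: image features are projected to queries, text embeddings to keys and values, and $M=\mathrm{Softmax}(QK^{T}/\sqrt{d})$ gives one spatial map per text token, with output $MV$. The random projection matrices stand in for the learned $\ell_{Q},\ell_{K},\ell_{V}$, and the feature shapes are illustrative assumptions, not values from the paper.

```python
# Sketch of the cross-attention maps M = softmax(Q K^T / sqrt(d)) and output MV.
# Random projections are stand-ins for the learned l_Q, l_K, l_V.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(phi_zt, psi_p, d=64, seed=0):
    n_pixels, c_img = phi_zt.shape          # flattened spatial image features
    n_tokens, c_txt = psi_p.shape           # text embedding sequence
    rng = np.random.default_rng(seed)
    W_q = rng.standard_normal((c_img, d)) / np.sqrt(c_img)
    W_k = rng.standard_normal((c_txt, d)) / np.sqrt(c_txt)
    W_v = rng.standard_normal((c_txt, d)) / np.sqrt(c_txt)
    Q, K, V = phi_zt @ W_q, psi_p @ W_k, psi_p @ W_v
    M = softmax(Q @ K.T / np.sqrt(d))       # (n_pixels, n_tokens) attention maps
    return M @ V, M                         # updated features and the maps

out, M = cross_attention(np.random.rand(16 * 16, 128), np.random.rand(8, 512))
print(M.shape)  # (256, 8): one spatial attention map per text token
```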
{
"id": "2208.01626_all_14",
"text": " Intuitively, the cross-attention output MV𝑀𝑉MV is a weighted average of the values V𝑉V where the weights are the attention maps M𝑀M, which are correlated to the similarity between Q𝑄Q and K𝐾K. In practice, to increase their expressiveness, multi-head attention is used in parallel, and then the results are concatenated and passed through a learned linear layer to get the final output. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_15",
"text": " Imagen , similar to GLIDE , conditions on the text prompt in the noise prediction of each diffusion step (see section A.2) through two types of attention layers: i) cross-attention layers. ii) hybrid attention that acts both as self-attention and cross-attention by simply concatenating the text embedding sequence to the key-value pairs of each self-attention layer. Throughout the rest of the paper, we refer to both of them as cross-attention since our method only intervenes in the cross-attention part of the hybrid attention. That is, only the last channels, which refer to text tokens, are modified in the hybrid attention modules. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_16",
"text": " We return to our key observation — the spatial layout and geometry of the generated image depend on the cross-attention maps. This interaction between pixels and text is illustrated in fig. 4, where the average attention maps are plotted. As can be seen, pixels are more attracted to the words that describe them, e.g., pixels of the bear are correlated with the word “bear”. Note that averaging is done for visualization purposes, and attention maps are kept separate for each head in our method. Interestingly, we can see that the structure of the image is already determined in the early steps of the diffusion process. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_17",
"text": " Since the attention reflects the overall composition, we can inject the attention maps M𝑀M that were obtained from the generation with the original prompt 𝒫𝒫\\mathcal{P}, into a second generation with the modified prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. This allows the synthesis of an edited image ℐ∗superscriptℐ\\mathcal{I}^{*} that is not only manipulated according to the edited prompt, but also preserves the structure of the input image ℐℐ\\mathcal{I}. This example is a specific instance of a broader set of attention-based manipulations leading to different types of intuitive editing. We, therefore, start by proposing a general framework, followed by the details of the specific editing operations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_18",
"text": " Let DM(zt,𝒫,t,s)𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠DM(z_{t},\\mathcal{P},t,s) be the computation of a single step t𝑡t of the diffusion process, which outputs the noisy image zt−1subscript𝑧𝑡1z_{t-1}, and the attention map Mtsubscript𝑀𝑡M_{t} (omitted if not used). We denote by DM(zt,𝒫,t,s){M←M^}𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠←𝑀^𝑀DM(z_{t},\\mathcal{P},t,s)\\{M\\leftarrow\\widehat{M}\\} the diffusion step where we override the attention map M𝑀M with an additional given map M^^𝑀\\widehat{M}, but keep the values V𝑉V from the supplied prompt. We also denote by Mt∗superscriptsubscript𝑀𝑡M_{t}^{*} the produced attention map using the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. Lastly, we define Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) to be a general edit function, receiving as input the t𝑡t’th attention maps of the original and edited images during their generation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_19",
"text": " Our general algorithm for controlled image generation consists of performing the iterative diffusion process for both prompts simultaneously, where an attention-based manipulation is applied in each step according to the desired editing task. We note that for the method above to work, we must fix the internal randomness. This is due to the nature of diffusion models, where even for the same prompt, two random seeds produce drastically different outputs. Formally, our general algorithm is: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_20",
"text": " Notice that we can also define image ℐℐ\\mathcal{I}, which is generated by prompt 𝒫𝒫\\mathcal{P} and random seed s𝑠s, as an additional input. Yet, the algorithm would remain the same. For editing real images, see section 4. Also, note that we can skip the forward call in line 777 by applying the edit function inside the diffusion forward function. Moreover, a diffusion step can be applied on both zt−1subscript𝑧𝑡1z_{t-1} and zt∗superscriptsubscript𝑧𝑡z_{t}^{*} in the same batch (i.e., in parallel), and so there is only one step overhead with respect to the original inference of the diffusion model. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_21",
"text": " We now turn to address specific editing operations, filling the missing definition of the Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) function. An overview is presented in fig. 3(Bottom). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_22",
"text": " In this case, the user swaps tokens of the original prompt with others, e.g., 𝒫=𝒫absent\\mathcal{P}=“a big red bicycle” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“a big red car”. The main challenge is to preserve the original composition while also addressing the content of the new prompt. To this end, we inject the attention maps of the source image into the generation with the modified prompt. However, the proposed attention injection may over constrain the geometry, especially when a large structural modification, such as “car” to “bicycle”, is involved. We address this by suggesting a softer attention constrain: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_23",
"text": " Edit(Mt,Mt∗,t):={Mt∗ift<τMtotherwise.assign𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡casessuperscriptsubscript𝑀𝑡if𝑡𝜏subscript𝑀𝑡otherwise.Edit(M_{t},M_{t}^{*},t):=\\begin{cases}M_{t}^{*}&\\quad\\text{if}\\;t<\\tau\\\\ M_{t}&\\quad\\text{otherwise.}\\\\ \\end{cases} where τ𝜏\\tau is a timestamp parameter that determines until which step the injection is applied. Note that the composition is determined in the early steps of the diffusion process. Therefore, by limiting the number of injection steps, we can guide the composition of the newly generated image while allowing the necessary geometry freedom for adapting to the new prompt. An illustration is provided in section 4. Another natural relaxation for our algorithm is to assign a different number of injection timestamps for the different tokens in the prompt. In case the two words are represented using a different number of tokens, the maps can be duplicated/averaged as necessary using an alignment function as described in the next paragraph. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
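A hedged sketch of the word-swap Edit function defined above: the source-prompt attention map $M_{t}$ is injected while $t\geq\tau$ (the early, high-$t$ denoising steps) and the edited prompt's own map $M_{t}^{*}$ is used once $t<\tau$. The toy step count and the value of $\tau$ are arbitrary choices for illustration.

```python
# Sketch of the word-swap Edit function: inject the source attention map M_t
# while t >= tau, switch to the target map M_t_star once t < tau.
import numpy as np

def edit_word_swap(M_t, M_t_star, t, tau):
    """Return the attention map to use at diffusion step t."""
    return M_t_star if t < tau else M_t

# toy usage: 10 denoising steps counted down from t=9 to t=0, injecting for t >= 4
M_src, M_tgt = np.ones((256, 8)), np.zeros((256, 8))
maps = [edit_word_swap(M_src, M_tgt, t, tau=4) for t in reversed(range(10))]
print([m[0, 0] for m in maps])  # 1.0 while the source maps are injected, then 0.0
```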
{
"id": "2208.01626_all_24",
"text": " In another setting, the user adds new tokens to the prompt, e.g., 𝒫=𝒫absent\\mathcal{P}=“a castle next to a river” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“children drawing of a castle next to a river”. To preserve the common details, we apply the attention injection only over the common tokens from both prompts. Formally, we use an alignment function A𝐴A that receives a token index from target prompt 𝒫∗superscript𝒫\\mathcal{P}^{*} and outputs the corresponding token index in 𝒫𝒫\\mathcal{P} or None if there isn’t a match. Then, the editing function is given by: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_25",
"text": " (Edit(Mt,Mt∗,t))i,j:={(Mt∗)i,jifA(j)=None(Mt)i,A(j)otherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗casessubscriptsuperscriptsubscript𝑀𝑡𝑖𝑗if𝐴𝑗𝑁𝑜𝑛𝑒subscriptsubscript𝑀𝑡𝑖𝐴𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}(M_{t}^{*})_{i,j}&\\quad\\text{if}\\;A(j)=None\\\\ (M_{t})_{i,A(j)}&\\quad\\text{otherwise.}\\\\ \\end{cases} Recall that index i𝑖i corresponds to a pixel value, where j𝑗j corresponds to a text token. Again, we may set a timestamp τ𝜏\\tau to control the number of diffusion steps in which the injection is applied. This kind of editing enables diverse Prompt-to-Prompt capabilities such as stylization, specification of object attributes, or global manipulations as demonstrated in section 4. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_26",
"text": " Lastly, the user may wish to strengthen or weakens the extent to which each token is affecting the resulting image. For example, consider the prompt 𝒫=𝒫absent\\mathcal{P}= “a fluffy red ball”, and assume we want to make the ball more or less fluffy. To achieve such manipulation, we scale the attention map of the assigned token j∗superscript𝑗j^{*} with parameter c∈(−2,2)𝑐22c\\in(-2,2), resulting in a stronger/weaker effect. The rest of the attention maps remain unchanged. That is: (Edit(Mt,Mt∗,t))i,j:={c⋅(Mt)i,jif j=j∗(Mt)i,jotherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗cases⋅𝑐subscriptsubscript𝑀𝑡𝑖𝑗if 𝑗superscript𝑗subscriptsubscript𝑀𝑡𝑖𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}c\\cdot(M_{t})_{i,j}&\\quad\\text{if }j=j^{*}\\\\ (M_{t})_{i,j}&\\quad\\text{otherwise.}\\\\ \\end{cases} As described in section 4, the parameter c𝑐c allows fine and intuitive control over the induced effect. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
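A minimal sketch of the attention re-weighting edit defined above: the column of a chosen token index $j^{*}$ is scaled by $c\in(-2,2)$ and all other columns are left untouched. The map shape and the token index are illustrative assumptions.

```python
# Sketch of attention re-weighting: scale the attention column of token j_star
# by c, leaving the other token columns unchanged. M_t has shape (pixels, tokens).
import numpy as np

def edit_reweight(M_t, j_star, c):
    M_edit = M_t.copy()
    M_edit[:, j_star] *= c
    return M_edit

M = np.random.rand(256, 4)                           # e.g. maps for "a fluffy red ball"
M_more_fluffy = edit_reweight(M, j_star=1, c=1.8)    # amplify "fluffy"
M_less_fluffy = edit_reweight(M, j_star=1, c=0.2)    # attenuate "fluffy"
print(M_more_fluffy[:, 1].mean() / M[:, 1].mean())   # ~1.8
```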
{
"id": "2208.01626_all_27",
"text": " Our method, described in section 3, enables intuitive text-only editing by controlling the spatial layout corresponding to each word in the user-provided prompt. In this section, we show several applications using this technique. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_28",
"text": " Text-Only Localized Editing. We first demonstrate localized editing by modifying the user-provided prompt without requiring any user-provided mask. In fig. 2, we depict an example where we generate an image using the prompt “lemon cake”. Our method allows us to retain the spatial layout, geometry, and semantics when replacing the word “lemon” with “pumpkin” (top row). Observe that the background is well-preserved, including the top-left lemons transforming into pumpkins. On the other hand, naively feeding the synthesis model with the prompt “pumpkin cake” results in a completely different geometry (333rd row), even when using the same random seed in a deterministic setting (i.e., DDIM ). Our method succeeds even for a challenging prompt such as “pasta cake.” (222nd row) — the generated cake consists of pasta layers with tomato sauce on top. Another example is provided in fig. 5 where we do not inject the attention of the entire prompt but only the attention of a specific word – “butterfly”. This enables the preservation of the original butterfly while changing the rest of the content. Additional results are provided in the appendix (fig. 13). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_29",
"text": " As can be seen in fig. 6, our method is not confined to modifying only textures, and it can perform structural modifications, e.g., change a “bicycle” to a “car”. To analyze our attention injection, in the left column we show the results without cross-attention injection, where changing a single word leads to an entirely different outcome. From left to right, we then show the resulting generated image by injecting attention to an increasing number of diffusion steps. Note that the more diffusion steps in which we apply cross-attention injection, the higher the fidelity to the original image. However, the optimal result is not necessarily achieved by applying the injection throughout all diffusion steps. Therefore, we can provide the user with even better control over the fidelity to the original image by changing the number of injection steps. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_30",
"text": " Instead of replacing one word with another, the user may wish to add a new specification to the generated image. In this case, we keep the attention maps of the original prompt, while allowing the generator to address the newly added words. For example, see fig. 7 (top), where we add “crushed” to the “car”, resulting in the generation of additional details over the original image while the background is still preserved. See the appendix (fig. 14) for more examples. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_31",
"text": " Global editing. Preserving the image composition is not only valuable for localized editing, but also an important aspect of global editing. In this setting, the editing should affect all parts of the image, but still retain the original composition, such as the location and identity of the objects. As shown in fig. 7 (bottom), we retain the image content while adding “snow” or changing the lightning. Additional examples appear in fig. 8, including translating a sketch into a photo-realistic image and inducing an artistic style. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_32",
"text": " Fader Control using Attention Re-weighting. While controlling the image by editing the prompt is very effective, we find that it still does not allow full control over the generated image. Consider the prompt “snowy mountain”. A user may want to control the amount of snow on the mountain. However, it is quite difficult to describe the desired amount of snow through text. Instead, we suggest a fader control , where the user controls the magnitude of the effect induced by a specific word, as depicted in fig. 9. As described in section 3, we achieve such control by re-scaling the attention of the specified word. Additional results are in the appendix (fig. 15). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_33",
"text": " Real Image Editing. Editing a real image requires finding an initial noise vector that produces the given input image when fed into the diffusion process. This process, known as inversion, has recently drawn considerable attention for GANs, e.g., (51, 1, 3, 35, 50, 43, 45, 47), but has not yet been fully addressed for text-guided diffusion models. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_34",
"text": " In the following, we show preliminary editing results on real images, based on common inversion techniques for diffusion models. First, a rather naïve approach is to add Gaussian noise to the input image, and then perform a predefined number of diffusion steps. Since this approach results in significant distortions, we adopt an improved inversion approach (10, 40), which is based on the deterministic DDIM model rather than the DDPM model. We perform the diffusion process in the reverse direction, that is x0⟶xT⟶subscript𝑥0subscript𝑥𝑇x_{0}\\longrightarrow x_{T} instead of xT⟶x0⟶subscript𝑥𝑇subscript𝑥0x_{T}\\longrightarrow x_{0}, where x0subscript𝑥0x_{0} is set to be the given real image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_35",
"text": " This inversion process often produces satisfying results, as presented in fig. 10. However, the inversion is not sufficiently accurate in many other cases, as in fig. 11. This is partially due to a distortion-editability tradeoff , where we recognize that reducing the classifier-free guidance parameter (i.e., reducing the prompt influence) improves reconstruction but constrains our ability to perform significant manipulations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_36",
"text": " To alleviate this limitation, we propose to restore the unedited regions of the original image using a mask, directly extracted from the attention maps. Note that here the mask is generated with no guidance from the user. As presented in fig. 12, this approach works well even using the naïve DDPM inversion scheme (adding noise followed by denoising). Note that the cat’s identity is well-preserved under various editing operations, while the mask is produced only from the prompt itself. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_37",
"text": " In this work, we uncovered the powerful capabilities of the cross-attention layers within text-to-image diffusion models. We showed that these high-dimensional layers have an interpretable representation of spatial maps that play a key role in tying the words in the text prompt to the spatial layout of the synthesized image. With this observation, we showed how various manipulations of the prompt can directly control attributes in the synthesized image, paving the way to various applications including local and global editing. This work is a first step towards providing users with simple and intuitive means to edit images, leveraging textual semantic power. It enables users to navigate through a semantic, textual, space, which exhibits incremental changes after each step, rather than producing the desired image from scratch after each text manipulation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_38",
"text": " While we have demonstrated semantic control by changing only textual prompts, our technique is still subject to a few limitations to be addressed in follow-up work. First, the current inversion process results in a visible distortion over some of the test images. In addition, the inversion requires the user to come up with a suitable prompt. This could be challenging for complicated compositions. Note that the challenge of inversion for text-guided diffusion models is an orthogonal endeavor to our work, which will be thoroughly studied in the future. Second, the current attention maps are of low resolution, as the cross-attention is placed in the network’s bottleneck. This bounds our ability to perform even more precise localized editing. To alleviate this, we suggest incorporating cross-attention also in higher-resolution layers. We leave this for future works since it requires analyzing the training procedure which is out of our current scope. Finally, we recognize that our current method cannot be used to spatially move existing objects across the image and also leave this kind of control for future work. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_39",
"text": " We thank Noa Glaser, Adi Zicher, Yaron Brodsky and Shlomi Fruchter for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained models of Imagen . Special thanks to Yossi Matias for early inspiring discussion on the problem and for motivating and encouraging us to develop technologies along the avenue of intuitive interaction. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
}
] |
Increasing the number of default box shapes improves model performance. How many default boxes are used in the SSD framework?
|
In the SSD framework, 6 default boxes per feature map location are generally used [22].
|
[
22
] |
[
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work through the current leading results on PASCAL VOC, COCO, and ILSVRC detection all based on Faster R-CNN albeit with deeper features such as . While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_1",
"text": " This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf (4, 5)), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications—especially using multiple layers for prediction at different scales—we can achieve high-accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks . Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_2",
"text": " We summarize our contributions as follows: • We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN). • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. • To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio. • These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off. • Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_3",
"text": " This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_4",
"text": " The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features: ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_5",
"text": " Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat and YOLO that operate on a single scale feature map). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_6",
"text": " Convolutional predictors for detection Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m×n𝑚𝑛m\\times n with p𝑝p channels, the basic element for predicting parameters of a potential detection is a 3×3×p33𝑝3\\times 3\\times p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n𝑚𝑛m\\times n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location (cf the architecture of YOLO that uses an intermediate fully connected layer instead of a convolutional filter for this step). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_7",
"text": " Default boxes and aspect ratios We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k𝑘k at a given location, we compute c𝑐c class scores and the 444 offsets relative to the original default box shape. This results in a total of (c+4)k𝑐4𝑘(c+4)k filters that are applied around each location in the feature map, yielding (c+4)kmn𝑐4𝑘𝑚𝑛(c+4)kmn outputs for a m×n𝑚𝑛m\\times n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN , however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps let us efficiently discretize the space of possible output box shapes. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_8",
"text": " The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO and for the region proposal stage of Faster R-CNN and MultiBox. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_9",
"text": " During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox ). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_10",
"text": " The SSD training objective is derived from the MultiBox objective (7, 8) but is extended to handle multiple object categories. Let xijp={1,0}superscriptsubscript𝑥𝑖𝑗𝑝10x_{ij}^{p}=\\{1,0\\} be an indicator for matching the i𝑖i-th default box to the j𝑗j-th ground truth box of category p𝑝p. In the matching strategy above, we can have ∑ixijp≥1subscript𝑖superscriptsubscript𝑥𝑖𝑗𝑝1\\sum_{i}x_{ij}^{p}\\geq 1. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf): L(x,c,l,g)=1N(Lconf(x,c)+αLloc(x,l,g))𝐿𝑥𝑐𝑙𝑔1𝑁subscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐𝛼subscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔L(x,c,l,g)=\\frac{1}{N}(L_{conf}(x,c)+\\alpha L_{loc}(x,l,g)) (1) where N is the number of matched default boxes. If N=0𝑁0N=0, wet set the loss to 0. The localization loss is a Smooth L1 loss between the predicted box (l𝑙l) and the ground truth box (g𝑔g) parameters. Similar to Faster R-CNN , we regress to offsets for the center (cx,cy𝑐𝑥𝑐𝑦cx,cy) of the default bounding box (d𝑑d) and for its width (w𝑤w) and height (hℎh). Lloc(x,l,g)=∑i∈PosN∑m∈{cx,cy,w,h}xijksmoothL1(lim−g^jm)g^jcx=(gjcx−dicx)/diwg^jcy=(gjcy−dicy)/dihg^jw=log(gjwdiw)g^jh=log(gjhdih)formulae-sequencesubscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔superscriptsubscript𝑖𝑃𝑜𝑠𝑁subscript𝑚𝑐𝑥𝑐𝑦𝑤ℎsuperscriptsubscript𝑥𝑖𝑗𝑘subscriptsmoothL1superscriptsubscript𝑙𝑖𝑚superscriptsubscript^𝑔𝑗𝑚superscriptsubscript^𝑔𝑗𝑐𝑥superscriptsubscript𝑔𝑗𝑐𝑥superscriptsubscript𝑑𝑖𝑐𝑥superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗𝑐𝑦superscriptsubscript𝑔𝑗𝑐𝑦superscriptsubscript𝑑𝑖𝑐𝑦superscriptsubscript𝑑𝑖ℎsuperscriptsubscript^𝑔𝑗𝑤superscriptsubscript𝑔𝑗𝑤superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗ℎsuperscriptsubscript𝑔𝑗ℎsuperscriptsubscript𝑑𝑖ℎ\\begin{split}L_{loc}(x,l,g)=\\sum_{i\\in Pos}^{N}\\sum_{m\\in\\{cx,cy,w,h\\}}&x_{ij}^{k}\\text{smooth}_{\\text{L1}}(l_{i}^{m}-\\hat{g}_{j}^{m})\\\\ \\hat{g}_{j}^{cx}=(g_{j}^{cx}-d_{i}^{cx})/d_{i}^{w}\\quad\\quad&\\hat{g}_{j}^{cy}=(g_{j}^{cy}-d_{i}^{cy})/d_{i}^{h}\\\\ \\hat{g}_{j}^{w}=\\log\\Big{(}\\frac{g_{j}^{w}}{d_{i}^{w}}\\Big{)}\\quad\\quad&\\hat{g}_{j}^{h}=\\log\\Big{(}\\frac{g_{j}^{h}}{d_{i}^{h}}\\Big{)}\\end{split} (2) The confidence loss is the softmax loss over multiple classes confidences (c𝑐c). Lconf(x,c)=−∑i∈PosNxijplog(c^ip)−∑i∈Neglog(c^i0)wherec^ip=exp(cip)∑pexp(cip)formulae-sequencesubscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐superscriptsubscript𝑖𝑃𝑜𝑠𝑁superscriptsubscript𝑥𝑖𝑗𝑝𝑙𝑜𝑔superscriptsubscript^𝑐𝑖𝑝subscript𝑖𝑁𝑒𝑔𝑙𝑜𝑔superscriptsubscript^𝑐𝑖0wheresuperscriptsubscript^𝑐𝑖𝑝superscriptsubscript𝑐𝑖𝑝subscript𝑝superscriptsubscript𝑐𝑖𝑝L_{conf}(x,c)=-\\sum_{i\\in Pos}^{N}x_{ij}^{p}log(\\hat{c}_{i}^{p})-\\sum_{i\\in Neg}log(\\hat{c}_{i}^{0})\\quad\\text{where}\\quad\\hat{c}_{i}^{p}=\\frac{\\exp(c_{i}^{p})}{\\sum_{p}\\exp(c_{i}^{p})} (3) and the weight term α𝛼\\alpha is set to 1 by cross validation. ",
"title": "SSD: Single Shot MultiBox Detector"
},
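A small sketch of the box-offset encoding $\hat{g}$ from Eq. (2) above, which the localization loss regresses against: ground-truth boxes are expressed relative to a matched default box via its center, width, and height. The example box coordinates are made up for illustration.

```python
# Sketch of the SSD box-offset encoding g_hat from Eq. (2): a ground-truth box g
# is encoded relative to a matched default box d; boxes are (cx, cy, w, h) tuples.
import math

def encode_box(g, d):
    g_cx, g_cy, g_w, g_h = g
    d_cx, d_cy, d_w, d_h = d
    return ((g_cx - d_cx) / d_w,
            (g_cy - d_cy) / d_h,
            math.log(g_w / d_w),
            math.log(g_h / d_h))

# a ground-truth box slightly offset from its matched default box
print(encode_box(g=(0.52, 0.48, 0.30, 0.40), d=(0.50, 0.50, 0.25, 0.35)))
```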
{
"id": "1512.02325_all_11",
"text": " To handle different object scales, some methods (4, 9) suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works (10, 11) have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8888\\times 8 and 4×4444\\times 4) which are used in the framework. In practice, we can use many more with small computational overhead. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_12",
"text": " Feature maps from different levels within a network are known to have different (empirical) receptive field sizes . Fortunately, within the SSD framework, the default boxes do not necessary need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m𝑚m feature maps for prediction. The scale of the default boxes for each feature map is computed as: sk=smin+smax−sminm−1(k−1),k∈(1,m)formulae-sequencesubscript𝑠𝑘subscript𝑠minsubscript𝑠maxsubscript𝑠min𝑚1𝑘1𝑘1𝑚s_{k}=s_{\\text{min}}+\\frac{s_{\\text{max}}-s_{\\text{min}}}{m-1}(k-1),\\quad k\\in(1,m) (4) where sminsubscript𝑠mins_{\\text{min}} is 0.2 and smaxsubscript𝑠maxs_{\\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as ar∈{1,2,3,12,13}subscript𝑎𝑟1231213a_{r}\\in\\{1,2,3,\\frac{1}{2},\\frac{1}{3}\\}. We can compute the width (wka=skarsuperscriptsubscript𝑤𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟w_{k}^{a}=s_{k}\\sqrt{a_{r}}) and height (hka=sk/arsuperscriptsubscriptℎ𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟h_{k}^{a}=s_{k}/\\sqrt{a_{r}}) for each default box. For the aspect ratio of 1, we also add a default box whose scale is sk′=sksk+1subscriptsuperscript𝑠′𝑘subscript𝑠𝑘subscript𝑠𝑘1s^{\\prime}_{k}=\\sqrt{s_{k}s_{k+1}}, resulting in 6 default boxes per feature map location. We set the center of each default box to (i+0.5|fk|,j+0.5|fk|)𝑖0.5subscript𝑓𝑘𝑗0.5subscript𝑓𝑘(\\frac{i+0.5}{|f_{k}|},\\frac{j+0.5}{|f_{k}|}), where |fk|subscript𝑓𝑘|f_{k}| is the size of the k𝑘k-th square feature map, i,j∈(0,|fk|)𝑖𝑗0subscript𝑓𝑘i,j\\in(0,|f_{k}|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
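A short sketch of the default-box tiling from Eq. (4) above: scales are spaced linearly between s_min = 0.2 and s_max = 0.9 across the m feature maps, and each location receives boxes for aspect ratios {1, 2, 3, 1/2, 1/3} plus an extra ratio-1 box of scale √(s_k·s_{k+1}), i.e. 6 default boxes per location. The choice of k and m in the usage line is illustrative.

```python
# Sketch of Eq. (4): scale s_k for feature map k out of m, plus the 6 default
# box shapes per location (5 aspect ratios + 1 extra ratio-1 box).
import math

def default_box_shapes(k, m, s_min=0.2, s_max=0.9):
    scale = lambda i: s_min + (s_max - s_min) / (m - 1) * (i - 1)
    s_k, s_next = scale(k), scale(k + 1)
    shapes = [(s_k * math.sqrt(a), s_k / math.sqrt(a)) for a in (1, 2, 3, 0.5, 1 / 3)]
    shapes.append((math.sqrt(s_k * s_next), math.sqrt(s_k * s_next)))  # extra ratio-1 box
    return shapes  # list of (width, height), 6 entries

boxes = default_box_shapes(k=2, m=6)
print(len(boxes), boxes[0])  # 6 default boxes per feature-map location
```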
{
"id": "1512.02325_all_13",
"text": " By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4444\\times 4 feature map, but not to any default boxes in the 8×8888\\times 8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_14",
"text": " After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_15",
"text": " To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options: • Use the entire original input image. • Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9. • Randomly sample a patch. The size of each sampled patch is (0.1, 1) of the original image size, and the aspect ratio is between 1212\\frac{1}{2} and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_16",
"text": " Our experiments are all based on VGG16 , which is pre-trained on the ILSVRC CLS-LOC dataset . Similar to DeepLab-LargeFOV , we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s222𝑠22\\times 2-s2 to 3×3−s133𝑠13\\times 3-s1, and use the à trous algorithm to fill the ”holes”. We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10−3superscript10310^{-3}, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe and is open source at: https://github.com/weiliu89/caffe/tree/ssd . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_17",
"text": " On this dataset, we compare against Fast R-CNN and Faster R-CNN on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_18",
"text": " Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3333For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.15, and 0.07 on conv4_3.. We initialize the parameters for all the newly added convolutional layers with the ”xavier” method . For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1313\\frac{1}{3} and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2.3. Since, as pointed out in , conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10−3superscript10310^{-3} learning rate for 40k iterations, then continue training for 10k iterations with 10−4superscript10410^{-4} and 10−5superscript10510^{-5}. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512512512512\\times 512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tuning them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_19",
"text": " To understand the performance of our two SSD models in more details, we used the detection analysis tool from . Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with “weak” (0.1 jaccard overlap) criteria. Compared to R-CNN , SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300300300300\\times 300 to 512×512512512512\\times 512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_20",
"text": " To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300300300300\\times 300), except for specified changes to the settings or component(s). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_21",
"text": " Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO . Table 2 shows that we can improve 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_22",
"text": " More default box shapes is better. As described in Sec. 2.2.3, by default we use 6 default boxes per location. If we remove the boxes with 1313\\frac{1}{3} and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with 1212\\frac{1}{2} and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
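For reference, the 6-vs-4 box counts discussed above come from the standard SSD box construction (Sec. 2.2.3 of the paper, not quoted here): a box of scale s and aspect ratio a has width s\sqrt{a} and height s/\sqrt{a}, plus one extra aspect-ratio-1 box at the intermediate scale \sqrt{s_k s_{k+1}}. A small sketch, with the scale values used only as placeholders:

```python
import math

def default_box_shapes(s_k, s_k_next, aspect_ratios=(1.0, 2.0, 0.5, 3.0, 1.0 / 3.0)):
    """Relative (width, height) of the default boxes at one feature-map location."""
    shapes = [(s_k * math.sqrt(a), s_k / math.sqrt(a)) for a in aspect_ratios]
    s_extra = math.sqrt(s_k * s_k_next)          # extra aspect-ratio-1 box
    shapes.append((s_extra, s_extra))
    return shapes

# 6 boxes per location; dropping ratios 3 and 1/3 gives the 4-box variant,
# dropping 2 and 1/2 as well leaves only the two aspect-ratio-1 boxes
print(len(default_box_shapes(0.2, 0.34)))        # -> 6
```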
{
"id": "1512.02325_all_23",
"text": " Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV . If we use the full VGG16, keeping pool5 with 2×2−s222𝑠22\\times 2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_24",
"text": " We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with 10−3superscript10310^{-3} learning rate for 60k iterations, then 10−4superscript10410^{-4} for 20k iterations. Table 4 shows the results of our SSD300 and SSD512444\\ssmallhttp://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=4 model. We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512512512512\\times 512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_25",
"text": " To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2.3, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300×300300300300\\times 300 image)555For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.1, and 0.04 on conv4_3.. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_26",
"text": " We use the trainval35k for training. We first train the model with 10−3superscript10310^{-3} learning rate for 160k iterations, and then continue training for 40k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both [email protected] and mAP@(0.5:0.95). SSD300 has a similar [email protected] as ION and Faster R-CNN , but is worse in [email protected]. By increasing the image size to 512×512512512512\\times 512, our SSD512 is better than Faster R-CNN in both criteria. Interestingly, we observe that SSD512 is 5.3% better in [email protected], but is only 1.2% better in [email protected]. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects with SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 3.2, we show some detection examples on COCO test-dev with the SSD512 model. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_27",
"text": " We applied the same network architecture we used for COCO to the ILSVRC DET dataset . We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in . We first train the model with 10−3superscript10310^{-3} learning rate for 320k iterations, and then continue training for 80k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. We can achieve 43.4 mAP on the val2 set . Again, it validates that SSD is a general framework for high quality real-time detection. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_28",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_29",
"text": " Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a ”zoom in” operation and can generate many larger training examples. To implement a ”zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16×16\\times of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new ”expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 3.2 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_30",
"text": " An alternative way of improving SSD is to design a better tiling of default boxes so that its position and scale are better aligned with the receptive field of each position on a feature map. We leave this for future work. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_31",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_32",
"text": " Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon [email protected]. ",
"title": "SSD: Single Shot MultiBox Detector"
},
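The inference-time filtering just described (confidence threshold 0.01, per-class nms at 0.45 jaccard overlap, top-200 detections) can be sketched as below. This is a plain NumPy sketch for one class, not the timed Caffe code; in the paper the top-200 cut is applied per image across all classes, which is simplified here for brevity.

```python
import numpy as np

def iou(box, boxes):
    """Jaccard overlap between one box and an array of boxes, all [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-12)

def nms_per_class(boxes, scores, conf_thresh=0.01, iou_thresh=0.45, top_k=200):
    """boxes: (N, 4) array, scores: (N,) array for one class. Returns kept indices."""
    idx = np.where(scores > conf_thresh)[0]     # cheap filter removes most boxes
    idx = idx[np.argsort(-scores[idx])]         # highest confidence first
    kept = []
    while idx.size > 0 and len(kept) < top_k:
        best = idx[0]
        kept.append(best)
        rest = idx[1:]
        overlaps = iou(boxes[best], boxes[rest])
        idx = rest[overlaps <= iou_thresh]      # suppress heavily overlapping boxes
    return kept
```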
{
"id": "1512.02325_all_33",
"text": " Table 7 shows the comparison between SSD, Faster R-CNN, and YOLO. Both our SSD300 and SSD512 method outperforms Faster R-CNN in both speed and accuracy. Although Fast YOLO can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_34",
"text": " There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches – Deformable Part Model (DPM) and Selective Search – had comparable performance. However, after the dramatic improvement brought on by R-CNN , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_35",
"text": " The original R-CNN approach has been improved in a variety of ways. The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox for learning objectness. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_36",
"text": " The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox (7, 8), the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate in other tasks. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_37",
"text": " Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat , a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories). Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have similar architecture to OverFeat ; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_38",
"text": " This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods (5, 7). We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN in terms of accuracy on PASCAL VOC and COCO, while being 3×3\\times faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO alternative, while producing markedly superior detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_39",
"text": " Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_40",
"text": " This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771. ",
"title": "SSD: Single Shot MultiBox Detector"
}
] |
What is meant by "differential interference contrast"?
|
Differential interference contrast (DIC) is a microscopy technique which can be used to record HeLa cells [21].
|
[
21
] |
[
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al. was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_1",
"text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al. trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_2",
"text": " Obviously, the strategy in Ciresan et al. has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_3",
"text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_4",
"text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_5",
"text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. in the scope of unsupervised feature learning. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_6",
"text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_7",
"text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_8",
"text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_9",
"text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_10",
"text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_11",
"text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk(𝐱)=exp(ak(𝐱))/(∑k′=1Kexp(ak′(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ(𝐱)(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw(𝐱)log(pℓ(𝐱)(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
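Written out in NumPy, the weighted pixel-wise soft-max / cross-entropy of Eq. (1) could look like the sketch below; the array shapes, variable names, and the explicit minus sign for minimization are our choices, not taken from the reference implementation.

```python
import numpy as np

def weighted_cross_entropy(activations, labels, weight_map):
    """activations: (K, H, W) outputs a_k(x); labels: (H, W) true classes l(x);
    weight_map: (H, W) per-pixel weights w(x). Returns the energy of Eq. (1)."""
    a = activations - activations.max(axis=0, keepdims=True)   # numerical stability
    p = np.exp(a) / np.exp(a).sum(axis=0, keepdims=True)       # pixel-wise soft-max p_k(x)
    h, w = labels.shape
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    p_true = p[labels, rows, cols]                              # p_{l(x)}(x) at each pixel
    return -(weight_map * np.log(p_true + 1e-12)).sum()
```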
{
"id": "1505.04597_all_12",
"text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_13",
"text": " The separation border is computed using morphological operations. The weight map is then computed as w(𝐱)=wc(𝐱)+w0⋅exp(−(d1(𝐱)+d2(𝐱))22σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
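A possible way to compute the weight map of Eq. (2) from an instance-labelled ground-truth mask is sketched below. It uses SciPy's Euclidean distance transform to obtain d_1 and d_2 and is an approximation of, not a copy of, the authors' pipeline (which also derives the separation border with morphological operations).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(instance_labels, class_weights, w0=10.0, sigma=5.0):
    """Eq. (2). instance_labels: (H, W) with 0 = background and 1..C = cell ids;
    class_weights: (H, W) map w_c(x) balancing the class frequencies."""
    cell_ids = [c for c in np.unique(instance_labels) if c != 0]
    if len(cell_ids) < 2:
        return class_weights.astype(float)
    # distance from every pixel to the nearest pixel of each individual cell
    dists = np.stack([distance_transform_edt(instance_labels != c) for c in cell_ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]          # nearest and second-nearest cell
    border_term = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return class_weights + border_term
```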
{
"id": "1505.04597_all_14",
"text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
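The \sqrt{2/N} Gaussian initialization is one line once N is counted as kernel height x kernel width x input channels; a minimal sketch:

```python
import numpy as np

def he_init_conv(kernel_size, in_channels, out_channels, rng=None):
    """Draw k x k conv weights from N(0, 2/N) with N = k*k*in_channels incoming nodes."""
    rng = rng or np.random.default_rng()
    n_in = kernel_size * kernel_size * in_channels     # e.g. 3*3*64 = 576
    std = np.sqrt(2.0 / n_in)
    return rng.normal(0.0, std, size=(out_channels, in_channels, kernel_size, kernel_size))
```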
{
"id": "1505.04597_all_15",
"text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
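The elastic deformation described above (random displacement vectors on a coarse 3x3 grid, drawn from a Gaussian with 10-pixel standard deviation, then interpolated to per-pixel displacements) might be implemented roughly as follows; the use of scipy.ndimage and cubic spline interpolation in place of bicubic image interpolation is our assumption.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def elastic_deform(image, sigma=10.0, grid=3, rng=None):
    """Smooth elastic deformation of a 2-D image via a coarse random displacement grid."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    coarse = rng.normal(0.0, sigma, size=(2, grid, grid))   # (dy, dx) on the 3x3 grid
    # interpolate the coarse grid to one displacement vector per pixel (order-3 spline)
    ys, xs = np.meshgrid(np.linspace(0, grid - 1, h), np.linspace(0, grid - 1, w),
                         indexing="ij")
    dy = map_coordinates(coarse[0], [ys, xs], order=3)
    dx = map_coordinates(coarse[1], [ys, xs], order=3)
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # sample the image at the displaced coordinates
    return map_coordinates(image, [rows + dy, cols + dx], order=3, mode="reflect")
```

For segmentation training, the same displacement field would also be applied to the ground-truth map, with order=0 so labels are not interpolated.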
{
"id": "1505.04597_all_16",
"text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_17",
"text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_18",
"text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_19",
"text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_20",
"text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_21",
"text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_22",
"text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
}
] |
What are the metrics they used for measuring efficiency and effectiveness?
|
They used (MRR@10) for measuring efficiency and effectiveness [56].
|
[
56
] |
[
{
"id": "2004.12832_all_0",
"text": " Over the past few years, the Information Retrieval (IR) community has witnessed the introduction of a host of neural ranking models, including DRMM (Guo et al., 2016), KNRM (Xiong et al., 2017; Dai et al., 2018), and Duet (Mitra et al., 2017; Mitra and Craswell, 2019). In contrast to prior learning-to-rank methods that rely on hand-crafted features, these models employ embedding-based representations of queries and documents and directly model local interactions (i.e., fine-granular relationships) between their contents. Among them, a recent approach has emerged that fine-tunes deep pre-trained language models (LMs) like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) for estimating relevance. By computing deeply-contextualized semantic representations of query–document pairs, these LMs help bridge the pervasive vocabulary mismatch (Zhao, 2012; Mitra et al., 2018) between documents and queries (Qiao et al., 2019). Indeed, in the span of just a few months, a number of ranking models based on BERT have achieved state-of-the-art results on various retrieval benchmarks (Nogueira and Cho, 2019; MacAvaney et al., 2019; Dai and Callan, 2019b; Yilmaz et al., 2019) and have been proprietarily adapted for deployment by Google111https://blog.google/products/search/search-language-understanding-bert/ and Bing222https://azure.microsoft.com/en-us/blog/bing-delivers-its-largest-improvement-in-search-experience-using-azure-gpus/. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_1",
"text": " However, the remarkable gains delivered by these LMs come at a steep increase in computational cost. Hofstätter et al. (Hofstätter and Hanbury, 2019) and MacAvaney et al. (MacAvaney et al., 2019) observe that BERT-based models in the literature are 100-1000×\\times more computationally expensive than prior models—some of which are arguably not inexpensive to begin with (Ji et al., 2019). This quality–cost tradeoff is summarized by Figure 1, which compares two BERT-based rankers (Nogueira and Cho, 2019; Nogueira et al., 2019b) against a representative set of ranking models. The figure uses MS MARCO Ranking (Nguyen et al., 2016), a recent collection of 9M passages and 1M queries from Bing’s logs. It reports retrieval effectiveness (MRR@10) on the official validation set as well as average query latency (log-scale) using a high-end server that dedicates one Tesla V100 GPU per query for neural re-rankers. Following the re-ranking setup of MS MARCO, ColBERT (re-rank), the Neural Matching Models, and the Deep LMs re-rank the MS MARCO’s official top-1000 documents per query. Other methods, including ColBERT (full retrieval), directly retrieve the top-1000 results from the entire collection. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_2",
"text": " As the figure shows, BERT considerably improves search precision, raising MRR@10 by almost 7% against the best previous methods; simultaneously, it increases latency by up to tens of thousands of milliseconds even with a high-end GPU. This poses a challenging tradeoff since raising query response times by as little as 100ms is known to impact user experience and even measurably diminish revenue (Kohavi et al., 2013). To tackle this problem, recent work has started exploring using Natural Language Understanding (NLU) techniques to augment traditional retrieval models like BM25 (Robertson et al., 1995). For example, Nogueira et al. (Nogueira et al., 2019c, a) expand documents with NLU-generated queries before indexing with BM25 scores and Dai & Callan (Dai and Callan, 2019a) replace BM25’s term frequency with NLU-estimated term importance. Despite successfully reducing latency, these approaches generally reduce precision substantially relative to BERT. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_3",
"text": " To reconcile efficiency and contextualization in IR, we propose ColBERT, a ranking model based on contextualized late interaction over BERT. As the name suggests, ColBERT proposes a novel late interaction paradigm for estimating relevance between a query q𝑞q and a document d𝑑d. Under late interaction, q𝑞q and d𝑑d are separately encoded into two sets of contextual embeddings, and relevance is evaluated using cheap and pruning-friendly computations between both sets—that is, fast computations that enable ranking without exhaustively evaluating every possible candidate. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_4",
"text": " Figure 2 contrasts our proposed late interaction approach with existing neural matching paradigms. On the left, Figure 2 (a) illustrates representation-focused rankers, which independently compute an embedding for q𝑞q and another for d𝑑d and estimate relevance as a single similarity score between two vectors (Huang et al., 2013; Zamani et al., 2018). Moving to the right, Figure 2 (b) visualizes typical interaction-focused rankers. Instead of summarizing q𝑞q and d𝑑d into individual embeddings, these rankers model word- and phrase-level relationships across q𝑞q and d𝑑d and match them using a deep neural network (e.g., with CNNs/MLPs (Mitra et al., 2017) or kernels (Xiong et al., 2017)). In the simplest case, they feed the neural network an interaction matrix that reflects the similiarity between every pair of words across q𝑞q and d𝑑d. Further right, Figure 2 (c) illustrates a more powerful interaction-based paradigm, which models the interactions between words within as well as across q𝑞q and d𝑑d at the same time, as in BERT’s transformer architecture (Nogueira and Cho, 2019). ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_5",
"text": " These increasingly expressive architectures are in tension. While interaction-based models (i.e., Figure 2 (b) and (c)) tend to be superior for IR tasks (Guo et al., 2019; Mitra et al., 2018), a representation-focused model—by isolating the computations among q𝑞q and d𝑑d—makes it possible to pre-compute document representations offline (Zamani et al., 2018), greatly reducing the computational load per query. In this work, we observe that the fine-grained matching of interaction-based models and the pre-computation of document representations of representation-based models can be combined by retaining yet judiciously delaying the query–document interaction. Figure 2 (d) illustrates an architecture that precisely does so. As illustrated, every query embedding interacts with all document embeddings via a MaxSim operator, which computes maximum similarity (e.g., cosine similarity), and the scalar outputs of these operators are summed across query terms. This paradigm allows ColBERT to exploit deep LM-based representations while shifting the cost of encoding documents offline and amortizing the cost of encoding the query once across all ranked documents. Additionally, it enables ColBERT to leverage vector-similarity search indexes (e.g., (Johnson et al., 2017; Abuzaid et al., 2019)) to retrieve the top-k𝑘k results directly from a large document collection, substantially improving recall over models that only re-rank the output of term-based retrieval. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_6",
"text": " As Figure 1 illustrates, ColBERT can serve queries in tens or few hundreds of milliseconds. For instance, when used for re-ranking as in “ColBERT (re-rank)”, it delivers over 170×\\times speedup (and requires 14,000×\\times fewer FLOPs) relative to existing BERT-based models, while being more effective than every non-BERT baseline (§4.2 & 4.3). ColBERT’s indexing—the only time it needs to feed documents through BERT—is also practical: it can index the MS MARCO collection of 9M passages in about 3 hours using a single server with four GPUs (§4.5), retaining its effectiveness with a space footprint of as little as few tens of GiBs. Our extensive ablation study (§4.4) shows that late interaction, its implementation via MaxSim operations, and crucial design choices within our BERT-based encoders are all essential to ColBERT’s effectiveness. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_7",
"text": " Our main contributions are as follows. (1) We propose late interaction (§3.1) as a paradigm for efficient and effective neural ranking. (2) We present ColBERT (§3.2 & 3.3), a highly-effective model that employs novel BERT-based query and document encoders within the late interaction paradigm. (3) We show how to leverage ColBERT both for re-ranking on top of a term-based retrieval model (§3.5) and for searching a full collection using vector similarity indexes (§3.6). (4) We evaluate ColBERT on MS MARCO and TREC CAR, two recent passage search collections. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_8",
"text": " Neural Matching Models. Over the past few years, IR researchers have introduced numerous neural architectures for ranking. In this work, we compare against KNRM (Xiong et al., 2017; Dai et al., 2018), Duet (Mitra et al., 2017; Mitra and Craswell, 2019), ConvKNRM (Dai et al., 2018), and fastText+ConvKNRM (Hofstätter et al., 2019a). KNRM proposes a differentiable kernel-pooling technique for extracting matching signals from an interaction matrix, while Duet combines signals from exact-match-based as well as embedding-based similarities for ranking. Introduced in 2018, ConvKNRM learns to match n𝑛n-grams in the query and the document. Lastly, fastText+ConvKNRM (abbreviated fT+ConvKNRM) tackles the absence of rare words from typical word embeddings lists by adopting sub-word token embeddings. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_9",
"text": " In 2018, Zamani et al. (Zamani et al., 2018) introduced SNRM, a representation-focused IR model that encodes each query and each document as a single, sparse high-dimensional vector of “latent terms”. By producing a sparse-vector representation for each document, SNRM is able to use a traditional IR inverted index for representing documents, allowing fast end-to-end retrieval. Despite highly promising results and insights, SNRM’s effectiveness is substantially outperformed by the state of the art on the datasets with which it was evaluated (e.g., see (Yang et al., 2019; MacAvaney et al., 2019)). While SNRM employs sparsity to allow using inverted indexes, we relax this assumption and compare a (dense) BERT-based representation-focused model against our late-interaction ColBERT in our ablation experiments in §4.4. For a detailed overview of existing neural ranking models, we refer the readers to two recent surveys of the literature (Mitra et al., 2018; Guo et al., 2019). ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_10",
"text": " Language Model Pretraining for IR. Recent work in NLU emphasizes the importance pre-training language representation models in an unsupervised fashion before subsequently fine-tuning them on downstream tasks. A notable example is BERT (Devlin et al., 2018), a bi-directional transformer-based language model whose fine-tuning advanced the state of the art on various NLU benchmarks. Nogueira et al. (Nogueira and Cho, 2019), MacAvaney et al. (MacAvaney et al., 2019), and Dai & Callan (Dai and Callan, 2019b) investigate incorporating such LMs (mainly BERT, but also ELMo (Peters et al., 2018)) on different ranking datasets. As illustrated in Figure 2 (c), the common approach (and the one adopted by Nogueira et al. on MS MARCO and TREC CAR) is to feed the query–document pair through BERT and use an MLP on top of BERT’s (CLS) output token to produce a relevance score. Subsequent work by Nogueira et al. (Nogueira et al., 2019b) introduced duoBERT, which fine-tunes BERT to compare the relevance of a pair of documents given a query. Relative to their single-document BERT, this gives duoBERT a 1% MRR@10 advantage on MS MARCO while increasing the cost by at least 1.4×\\times. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_11",
"text": " BERT Optimizations. As discussed in §1, these LM-based rankers can be highly expensive in practice. While ongoing efforts in the NLU literature for distilling (Jiao et al., 2019; Tang et al., 2019), compressing (Zafrir et al., 2019), and pruning (Michel et al., 2019) BERT can be instrumental in narrowing this gap, they generally achieve significantly smaller speedups than our re-designed architecture for IR, due to their generic nature, and more aggressive optimizations often come at the cost of lower quality. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_12",
"text": " Efficient NLU-based Models. Recently, a direction emerged that employs expensive NLU computation offline. This includes doc2query (Nogueira et al., 2019c) and DeepCT (Dai and Callan, 2019a). The doc2query model expands each document with a pre-defined number of synthetic queries queries generated by a seq2seq transformer model that is trained to generate queries given a document. It then relies on a BM25 index for retrieval from the (expanded) documents. DeepCT uses BERT to produce the term frequency component of BM25 in a context-aware manner, essentially representing a feasible realization of the term-independence assumption with neural networks (Mitra et al., 2019). Lastly, docTTTTTquery (Nogueira et al., 2019a) is identical to doc2query except that it fine-tunes a pre-trained model (namely, T5 (Raffel et al., 2019)) for generating the predicted queries. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_13",
"text": " Concurrently with our drafting of this paper, Hofstätter et al. (Hofstätter et al., 2019b) published their Transformer-Kernel (TK) model. At a high level, TK improves the KNRM architecture described earlier: while KNRM employs kernel pooling on top of word-embedding-based interaction, TK uses a Transformer (Vaswani et al., 2017) component for contextually encoding queries and documents before kernel pooling. TK establishes a new state-of-the-art for non-BERT models on MS MARCO (Dev); however, the best non-ensemble MRR@10 it achieves is 31% while ColBERT reaches up to 36%. Moreover, due to indexing document representations offline and employing a MaxSim-based late interaction mechanism, ColBERT is much more scalable, enabling end-to-end retrieval which is not supported by TK. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_14",
"text": " ColBERT prescribes a simple framework for balancing the quality and cost of neural IR, particularly deep language models like BERT. As introduced earlier, delaying the query–document interaction can facilitate cheap neural re-ranking (i.e., through pre-computation) and even support practical end-to-end neural retrieval (i.e., through pruning via vector-similarity search). ColBERT addresses how to do so while still preserving the effectiveness of state-of-the-art models, which condition the bulk of their computations on the joint query–document pair. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_15",
"text": " Even though ColBERT’s late-interaction framework can be applied to a wide variety of architectures (e.g., CNNs, RNNs, transformers, etc.), we choose to focus this work on bi-directional transformer-based encoders (i.e., BERT) owing to their state-of-the-art effectiveness yet very high computational cost. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_16",
"text": " Figure 3 depicts the general architecture of ColBERT, which comprises: (a) a query encoder fQsubscript𝑓𝑄f_{Q}, (b) a document encoder fDsubscript𝑓𝐷f_{D}, and (c) the late interaction mechanism. Given a query q𝑞q and document d𝑑d, fQsubscript𝑓𝑄f_{Q} encodes q𝑞q into a bag of fixed-size embeddings Eqsubscript𝐸𝑞E_{q} while fDsubscript𝑓𝐷f_{D} encodes d𝑑d into another bag Edsubscript𝐸𝑑E_{d}. Crucially, each embeddings in Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d} is contextualized based on the other terms in q𝑞q or d𝑑d, respectively. We describe our BERT-based encoders in §3.2. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_17",
"text": " Using Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d}, ColBERT computes the relevance score between q𝑞q and d𝑑d via late interaction, which we define as a summation of maximum similarity (MaxSim) operators. In particular, we find the maximum cosine similarity of each v∈Eq𝑣subscript𝐸𝑞v\\in E_{q} with vectors in Edsubscript𝐸𝑑E_{d}, and combine the outputs via summation. Besides cosine, we also evaluate squared L2 distance as a measure of vector similarity. Intuitively, this interaction mechanism softly searches for each query term tqsubscript𝑡𝑞t_{q}—in a manner that reflects its context in the query—against the document’s embeddings, quantifying the strength of the “match” via the largest similarity score between tqsubscript𝑡𝑞t_{q} and a document term tdsubscript𝑡𝑑t_{d}. Given these term scores, it then estimates the document relevance by summing the matching evidence across all query terms. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_18",
"text": " While more sophisticated matching is possible with other choices such as deep convolution and attention layers (i.e., as in typical interaction-focused models), a summation of maximum similarity computations has two distinctive characteristics. First, it stands out as a particularly cheap interaction mechanism, as we examine its FLOPs in §4.2. Second, and more importantly, it is amenable to highly-efficient pruning for top-k𝑘k retrieval, as we evaluate in §4.3. This enables using vector-similarity algorithms for skipping documents without materializing the full interaction matrix or even considering each document in isolation. Other cheap choices (e.g., a summation of average similarity scores, instead of maximum) are possible; however, many are less amenable to pruning. In §4.4, we conduct an extensive ablation study that empirically verifies the advantage of our MaxSim-based late interaction against alternatives. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_19",
"text": " Prior to late interaction, ColBERT encodes each query or document into a bag of embeddings, employing BERT-based encoders. We share a single BERT model among our query and document encoders but distinguish input sequences that correspond to queries and documents by prepending a special token (Q) to queries and another token (D) to documents. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_20",
"text": " Query Encoder. Given a textual query q𝑞q, we tokenize it into its BERT-based WordPiece (Wu et al., 2016) tokens q1q2…qlsubscript𝑞1subscript𝑞2…subscript𝑞𝑙q_{1}q_{2}...q_{l}. We prepend the token (Q) to the query. We place this token right after BERT’s sequence-start token (CLS). If the query has fewer than a pre-defined number of tokens Nqsubscript𝑁𝑞N_{q}, we pad it with BERT’s special (mask) tokens up to length Nqsubscript𝑁𝑞N_{q} (otherwise, we truncate it to the first Nqsubscript𝑁𝑞N_{q} tokens). This padded sequence of input tokens is then passed into BERT’s deep transformer architecture, which computes a contextualized representation of each token. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
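As a concrete illustration of this query preprocessing, the token-level truncation and mask-padding can be sketched as below; the literal marker strings and whether N_q counts the (CLS)/(Q) markers are our simplifying assumptions.

```python
def augment_query_tokens(query_wordpieces, n_q=32,
                         cls="[CLS]", q_marker="[Q]", mask="[MASK]"):
    """Prepend the sequence-start and query-marker tokens, truncate to N_q query
    tokens, then pad with [MASK] tokens (ColBERT's query augmentation)."""
    tokens = [cls, q_marker] + list(query_wordpieces)[:n_q]
    tokens += [mask] * (2 + n_q - len(tokens))
    return tokens

print(augment_query_tokens(["what", "is", "late", "interaction"], n_q=8))
# ['[CLS]', '[Q]', 'what', 'is', 'late', 'interaction',
#  '[MASK]', '[MASK]', '[MASK]', '[MASK]']
```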
{
"id": "2004.12832_all_21",
"text": " We denote the padding with masked tokens as query augmentation, a step that allows BERT to produce query-based embeddings at the positions corresponding to these masks. Query augmentation is intended to serve as a soft, differentiable mechanism for learning to expand queries with new terms or to re-weigh existing terms based on their importance for matching the query. As we show in §4.4, this operation is essential for ColBERT’s effectiveness. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_22",
"text": " Given BERT’s representation of each token, our encoder passes the contextualized output representations through a linear layer with no activations. This layer serves to control the dimension of ColBERT’s embeddings, producing m𝑚m-dimensional embeddings for the layer’s output size m𝑚m. As we discuss later in more detail, we typically fix m𝑚m to be much smaller than BERT’s fixed hidden dimension. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_23",
"text": " While ColBERT’s embedding dimension has limited impact on the efficiency of query encoding, this step is crucial for controlling the space footprint of documents, as we show in §4.5. In addition, it can have a significant impact on query execution time, particularly the time taken for transferring the document representations onto the GPU from system memory (where they reside before processing a query). In fact, as we show in §4.2, gathering, stacking, and transferring the embeddings from CPU to GPU can be the most expensive step in re-ranking with ColBERT. Finally, the output embeddings are normalized so each has L2 norm equal to one. The result is that the dot-product of any two embeddings becomes equivalent to their cosine similarity, falling in the (−1,1)11(-1,1) range. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_24",
"text": " Document Encoder. Our document encoder has a very similar architecture. We first segment a document d𝑑d into its constituent tokens d1d2…dmsubscript𝑑1subscript𝑑2…subscript𝑑𝑚d_{1}d_{2}...d_{m}, to which we prepend BERT’s start token (CLS) followed by our special token (D) that indicates a document sequence. Unlike queries, we do not append (mask) tokens to documents. After passing this input sequence through BERT and the subsequent linear layer, the document encoder filters out the embeddings corresponding to punctuation symbols, determined via a pre-defined list. This filtering is meant to reduce the number of embeddings per document, as we hypothesize that (even contextualized) embeddings of punctuation are unnecessary for effectiveness. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_25",
"text": " In summary, given q=q0q1…ql𝑞subscript𝑞0subscript𝑞1…subscript𝑞𝑙q=q_{0}q_{1}...q_{l} and d=d0d1…dn𝑑subscript𝑑0subscript𝑑1…subscript𝑑𝑛d=d_{0}d_{1}...d_{n}, we compute the bags of embeddings Eqsubscript𝐸𝑞E_{q} and Edsubscript𝐸𝑑E_{d} in the following manner, where ##\\# refers to the (mask) tokens: (1) Eqsubscript𝐸𝑞\\displaystyle E_{q} :=Normalize(CNN(BERT(``(Q)q0q1…ql##…#\")))assignabsentNormalizeCNNBERT``delimited-()𝑄subscript𝑞0subscript𝑞1…subscript𝑞𝑙##…#\"\\displaystyle:=\\texttt{Normalize}(\\;\\texttt{CNN}(\\;\\texttt{BERT}(``(Q)q_{0}q_{1}...q_{l}\\#\\#...\\#\")\\;)\\;) (2) Edsubscript𝐸𝑑\\displaystyle E_{d} :=Filter(Normalize(CNN(BERT(``(D)d0d1…dn\"))))assignabsentFilterNormalizeCNNBERT``delimited-()𝐷subscript𝑑0subscript𝑑1…subscript𝑑𝑛\"\\displaystyle:=\\texttt{Filter}(\\;\\texttt{Normalize}(\\;\\texttt{CNN}(\\;\\texttt{BERT}(``(D)d_{0}d_{1}...d_{n}\")\\;)\\;)\\;) ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_26",
"text": " Given the representation of a query q𝑞q and a document d𝑑d, the relevance score of d𝑑d to q𝑞q, denoted as Sq,dsubscript𝑆𝑞𝑑S_{q,d}, is estimated via late interaction between their bags of contextualized embeddings. As mentioned before, this is conducted as a sum of maximum similarity computations, namely cosine similarity (implemented as dot-products due to the embedding normalization) or squared L2 distance. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_27",
"text": " (3) Sq,dsubscript𝑆𝑞𝑑\\displaystyle S_{q,d} :=∑i∈(|Eq|)maxj∈(|Ed|)Eqi⋅EdjTassignabsentsubscript𝑖delimited-()subscript𝐸𝑞subscript𝑗delimited-()subscript𝐸𝑑⋅subscript𝐸subscript𝑞𝑖superscriptsubscript𝐸subscript𝑑𝑗𝑇\\displaystyle:=\\sum_{i\\in(|E_{q}|)}\\max_{j\\in(|E_{d}|)}E_{q_{i}}\\cdot E_{d_{j}}^{T} ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
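Equation (3) reduces to a matrix product and two reductions. The following is a minimal PyTorch sketch, assuming E_q and E_d are already L2-normalized matrices of shape (query terms, m) and (document terms, m); it illustrates the operator rather than reproducing the released code.

```python
import torch

def late_interaction_score(E_q: torch.Tensor, E_d: torch.Tensor) -> torch.Tensor:
    sim = E_q @ E_d.T                    # pairwise dot products = cosine similarities
    return sim.max(dim=1).values.sum()   # max over document terms, sum over query terms

E_q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
E_d = torch.nn.functional.normalize(torch.randn(80, 128), dim=-1)
print(late_interaction_score(E_q, E_d))
```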
{
"id": "2004.12832_all_28",
"text": " ColBERT is differentiable end-to-end. We fine-tune the BERT encoders and train from scratch the additional parameters (i.e., the linear layer and the (Q) and (D) markers’ embeddings) using the Adam (Kingma and Ba, 2014) optimizer. Notice that our interaction mechanism has no trainable parameters. Given a triple ⟨q,d+,d−⟩𝑞superscript𝑑superscript𝑑\\langle q,d^{+},d^{-}\\rangle with query q𝑞q, positive document d+superscript𝑑d^{+} and negative document d−superscript𝑑d^{-}, ColBERT is used to produce a score for each document individually and is optimized via pairwise softmax cross-entropy loss over the computed scores of d+superscript𝑑d^{+} and d−superscript𝑑d^{-}. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
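The stated objective can be written compactly by treating the two document scores as logits of a 2-way softmax with the positive document as the target class. The sketch below is an illustration of that loss under these assumptions, not the authors' training loop.

```python
import torch
import torch.nn.functional as F

def pairwise_softmax_ce(score_pos: torch.Tensor, score_neg: torch.Tensor) -> torch.Tensor:
    logits = torch.stack([score_pos, score_neg], dim=-1)    # (batch, 2)
    labels = torch.zeros(logits.size(0), dtype=torch.long)  # d+ is class 0
    return F.cross_entropy(logits, labels)
```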
{
"id": "2004.12832_all_29",
"text": " By design, ColBERT isolates almost all of the computations between queries and documents, largely to enable pre-computing document representations offline. At a high level, our indexing procedure is straight-forward: we proceed over the documents in the collection in batches, running our document encoder fDsubscript𝑓𝐷f_{D} on each batch and storing the output embeddings per document. Although indexing a set of documents is an offline process, we incorporate a few simple optimizations for enhancing the throughput of indexing. As we show in §4.5, these optimizations can considerably reduce the offline cost of indexing. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_30",
"text": " To begin with, we exploit multiple GPUs, if available, for faster encoding of batches of documents in parallel. When batching, we pad all documents to the maximum length of a document within the batch.333The public BERT implementations we saw simply pad to a pre-defined length. To make capping the sequence length on a per-batch basis more effective, our indexer proceeds through documents in groups of B𝐵B (e.g., B=𝐵absentB= 100,000) documents. It sorts these documents by length and then feeds batches of b𝑏b (e.g., b=𝑏absentb= 128) documents of comparable length through our encoder. This length-based bucketing is sometimes refered to as a BucketIterator in some libraries (e.g., allenNLP). Lastly, while most computations occur on the GPU, we found that a non-trivial portion of the indexing time is spent on pre-processing the text sequences, primarily BERT’s WordPiece tokenization. Exploiting that these operations are independent across documents in a batch, we parallelize the pre-processing across the available CPU cores. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
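The length-based bucketing described above amounts to a sort followed by fixed-size slicing. A small illustrative sketch follows; the group size B and batch size b mirror the example values in the text, but the function itself is ours.

```python
def length_bucketed_batches(documents, B=100_000, b=128):
    """Yield batches of b documents of comparable length from groups of B documents."""
    for start in range(0, len(documents), B):
        group = sorted(documents[start:start + B], key=len)
        for i in range(0, len(group), b):
            yield group[i:i + b]  # each batch is padded only to its own longest document
```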
{
"id": "2004.12832_all_31",
"text": " Once the document representations are produced, they are saved to disk using 32-bit or 16-bit values to represent each dimension. As we describe in §3.5 and 3.6, these representations are either simply loaded from disk for ranking or are subsequently indexed for vector-similarity search, respectively. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_32",
"text": " Recall that ColBERT can be used for re-ranking the output of another retrieval model, typically a term-based model, or directly for end-to-end retrieval from a document collection. In this section, we discuss how we use ColBERT for ranking a small set of k𝑘k (e.g., k=1000𝑘1000k=1000) documents given a query q𝑞q. Since k𝑘k is small, we rely on batch computations to exhaustively score each document (unlike our approach in §3.6). To begin with, our query serving sub-system loads the indexed documents representations into memory, representing each document as a matrix of embeddings. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_33",
"text": " Given a query q𝑞q, we compute its bag of contextualized embeddings Eqsubscript𝐸𝑞E_{q} (Equation 1) and, concurrently, gather the document representations into a 3-dimensional tensor D𝐷D consisting of k𝑘k document matrices. We pad the k𝑘k documents to their maximum length to facilitate batched operations, and move the tensor D𝐷D to the GPU’s memory. On the GPU, we compute a batch dot-product of Eqsubscript𝐸𝑞E_{q} and D𝐷D, possibly over multiple mini-batches. The output materializes a 3-dimensional tensor that is a collection of cross-match matrices between q𝑞q and each document. To compute the score of each document, we reduce its matrix across document terms via a max-pool (i.e., representing an exhaustive implementation of our MaxSim computation) and reduce across query terms via a summation. Finally, we sort the k𝑘k documents by their total scores. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
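The batched scoring just described can be expressed as one einsum followed by two reductions. A minimal sketch, assuming E_q has shape (Nq, m) and D is a (k, max_len, m) tensor whose padding positions hold zero vectors (a simplification; an explicit mask could be used instead):

```python
import torch

def rerank(E_q: torch.Tensor, D: torch.Tensor) -> torch.Tensor:
    # Cross-match tensor of shape (k, Nq, max_len), then MaxSim per query term and sum.
    scores = torch.einsum("qm,kdm->kqd", E_q, D).max(dim=2).values.sum(dim=1)
    return scores.argsort(descending=True)  # document indices sorted by ColBERT score
```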
{
"id": "2004.12832_all_34",
"text": " Relative to existing neural rankers (especially, but not exclusively, BERT-based ones), this computation is very cheap that, in fact, its cost is dominated by the cost of gathering and transferring the pre-computed embeddings. To illustrate, ranking k𝑘k documents via typical BERT rankers requires feeding BERT k𝑘k different inputs each of length l=|q|+|di|𝑙𝑞subscript𝑑𝑖l=|q|+|d_{i}| for query q𝑞q and documents disubscript𝑑𝑖d_{i}, where attention has quadratic cost in the length of the sequence. In contrast, ColBERT feeds BERT only a single, much shorter sequence of length l=|q|𝑙𝑞l=|q|. Consequently, ColBERT is not only cheaper, it also scales much better with k𝑘k as we examine in §4.2. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_35",
"text": " As mentioned before, ColBERT’s late-interaction operator is specifically designed to enable end-to-end retrieval from a large collection, largely to improve recall relative to term-based retrieval approaches. This section is concerned with cases where the number of documents to be ranked is too large for exhaustive evaluation of each possible candidate document, particularly when we are only interested in the highest scoring ones. Concretely, we focus here on retrieving the top-k𝑘k results directly from a large document collection with N𝑁N (e.g., N=10,000,000𝑁10000000N=10,000,000) documents, where k≪Nmuch-less-than𝑘𝑁k\\ll N. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_36",
"text": " To do so, we leverage the pruning-friendly nature of the MaxSim operations at the backbone of late interaction. Instead of applying MaxSim between one of the query embeddings and all of one document’s embeddings, we can use fast vector-similarity data structures to efficiently conduct this search between the query embedding and all document embeddings across the full collection. For this, we employ an off-the-shelf library for large-scale vector-similarity search, namely faiss (Johnson et al., 2017) from Facebook.444https://github.com/facebookresearch/faissIn particular, at the end of offline indexing (§3.4), we maintain a mapping from each embedding to its document of origin and then index all document embeddings into faiss. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_37",
"text": " Subsequently, when serving queries, we use a two-stage procedure to retrieve the top-k𝑘k documents from the entire collection. Both stages rely on ColBERT’s scoring: the first is an approximate stage aimed at filtering while the second is a refinement stage. For the first stage, we concurrently issue Nqsubscript𝑁𝑞N_{q} vector-similarity queries (corresponding to each of the embeddings in Eqsubscript𝐸𝑞E_{q}) onto our faiss index. This retrieves the top-k′superscript𝑘′k^{\\prime} (e.g., k′=k/2superscript𝑘′𝑘2k^{\\prime}=k/2) matches for that vector over all document embeddings. We map each of those to its document of origin, producing Nq×k′subscript𝑁𝑞superscript𝑘′N_{q}\\times k^{\\prime} document IDs, only K≤Nq×k′𝐾subscript𝑁𝑞superscript𝑘′K\\leq N_{q}\\times k^{\\prime} of which are unique. These K𝐾K documents likely contain one or more embeddings that are highly similar to the query embeddings. For the second stage, we refine this set by exhaustively re-ranking only those K𝐾K documents in the usual manner described in §3.5. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_38",
"text": " In our faiss-based implementation, we use an IVFPQ index (“inverted file with product quantization”). This index partitions the embedding space into P𝑃P (e.g., P=1000𝑃1000P=1000) cells based on k𝑘k-means clustering and then assigns each document embedding to its nearest cell based on the selected vector-similarity metric. For serving queries, when searching for the top-k′superscript𝑘′k^{\\prime} matches for a single query embedding, only the nearest p𝑝p (e.g., p=10𝑝10p=10) partitions are searched. To improve memory efficiency, every embedding is divided into s𝑠s (e.g., s=16𝑠16s=16) sub-vectors, each represented using one byte. Moreover, the index conducts the similarity computations in this compressed domain, leading to cheaper computations and thus faster search. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
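A hedged sketch of the first, filtering stage with faiss is shown below. The array names (all_doc_embeddings, emb_to_doc) and their sizes are stand-ins for the real index contents, and the exact parameter values are illustrative; the faiss calls themselves (IndexIVFPQ, train, add, nprobe, search) are the library's standard API.

```python
import faiss
import numpy as np

m, P, s = 128, 1000, 16                       # dim, partitions, sub-vectors per embedding
all_doc_embeddings = np.random.randn(200_000, m).astype(np.float32)  # stand-in data
emb_to_doc = np.repeat(np.arange(2_000), 100)                        # embedding -> doc id

quantizer = faiss.IndexFlatL2(m)
index = faiss.IndexIVFPQ(quantizer, m, P, s, 8)  # 8 bits per sub-vector code
index.train(all_doc_embeddings)
index.add(all_doc_embeddings)
index.nprobe = 10                                # search only the nearest p partitions

def candidate_documents(E_q: np.ndarray, k_prime: int = 500) -> set:
    _, ids = index.search(E_q.astype(np.float32), k_prime)      # (Nq, k') embedding ids
    return {int(emb_to_doc[i]) for i in ids.ravel() if i != -1}  # the K unique documents
```

The K candidate documents returned here would then be re-ranked exhaustively as in §3.5.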
{
"id": "2004.12832_all_39",
"text": " We now turn our attention to empirically testing ColBERT, addressing the following research questions. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_40",
"text": " RQ1: In a typical re-ranking setup, how well can ColBERT bridge the existing gap (highlighted in §1) between highly-efficient and highly-effective neural models? (§4.2) ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_41",
"text": " RQ2: Beyond re-ranking, can ColBERT effectively support end-to-end retrieval directly from a large collection? (§4.3) ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_42",
"text": " RQ3: What does each component of ColBERT (e.g., late interaction, query augmentation) contribute to its quality? (§4.4) ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_43",
"text": " RQ4: What are ColBERT’s indexing-related costs in terms of offline computation and memory overhead? (§4.5) ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_44",
"text": " Similar to related work (Nogueira et al., 2019c; Dai and Callan, 2019a; Nogueira et al., 2019b), we conduct our experiments on the MS MARCO Ranking (Nguyen et al., 2016) (henceforth, MS MARCO) and TREC Complex Answer Retrieval (TREC-CAR) (Dietz et al., 2017) datasets. Both of these recent datasets provide large training data of the scale that facilitates training and evaluating deep neural networks. We describe both in detail below. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_45",
"text": " MS MARCO. MS MARCO is a dataset (and a corresponding competition) introduced by Microsoft in 2016 for reading comprehension and adapted in 2018 for retrieval. It is a collection of 8.8M passages from Web pages, which were gathered from Bing’s results to 1M real-world queries. Each query is associated with sparse relevance judgements of one (or very few) documents marked as relevant and no documents explicitly indicated as irrelevant. Per the official evaluation, we use MRR@10 to measure effectiveness. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_46",
"text": " We use three sets of queries for evaluation. The official development and evaluation sets contain roughly 7k queries. However, the relevance judgements of the evaluation set are held-out by Microsoft and effectiveness results can only be obtained by submitting to the competition’s organizers. We submitted our main re-ranking ColBERT model for the results in §4.2. In addition, the collection includes roughly 55k queries (with labels) that are provided as additional validation data. We re-purpose a random sample of 5k queries among those (i.e., ones not in our development or training sets) as a “local” evaluation set. Along with the official development set, we use this held-out set for testing our models as well as baselines in §4.3. We do so to avoid submitting multiple variants of the same model at once, as the organizers discourage too many submissions by the same team. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_47",
"text": " TREC CAR. Introduced by Dietz (Dietz et al., 2017) et al. in 2017, TREC CAR is a synthetic dataset based on Wikipedia that consists of about 29M passages. Similar to related work (Nogueira and Cho, 2019), we use the first four of five pre-defined folds for training and the fifth for validation. This amounts to roughly 3M queries generated by concatenating the title of a Wikipedia page with the heading of one of its sections. That section’s passages are marked as relevant to the corresponding query. Our evaluation is conducted on the test set used in TREC 2017 CAR, which contains 2,254 queries. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_48",
"text": " Our ColBERT models are implemented using Python 3 and PyTorch 1. We use the popular transformers555https://github.com/huggingface/transformers library for the pre-trained BERT model. Similar to (Nogueira and Cho, 2019), we fine-tune all ColBERT models with learning rate 3×10−63superscript1063\\times 10^{-6} with a batch size 32. We fix the number of embeddings per query at Nq=32subscript𝑁𝑞32N_{q}=32. We set our ColBERT embedding dimension m𝑚m to be 128; §4.5 demonstrates ColBERT’s robustness to a wide range of embedding dimensions. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_49",
"text": " For MS MARCO, we initialize the BERT components of the ColBERT query and document encoders using Google’s official pre-trained BERTbasebase{}_{\\textnormal{base}} model. Further, we train all models for 200k iterations. For TREC CAR, we follow related work (Nogueira and Cho, 2019; Dai and Callan, 2019a) and use a different pre-trained model to the official ones. To explain, the official BERT models were pre-trained on Wikipedia, which is the source of TREC CAR’s training and test sets. To avoid leaking test data into train, Nogueira and Cho’s (Nogueira and Cho, 2019) pre-train a randomly-initialized BERT model on the Wiki pages corresponding to training subset of TREC CAR. They release their BERTlargelarge{}_{\\textnormal{large}} pre-trained model, which we fine-tune for ColBERT’s experiments on TREC CAR. Since fine-tuning this model is significantly slower than BERTbasebase{}_{\\textnormal{base}}, we train on TREC CAR for only 125k iterations. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_50",
"text": " In our re-ranking results, unless stated otherwise, we use 4 bytes per dimension in our embeddings and employ cosine as our vector-similarity function. For end-to-end ranking, we use (squared) L2 distance, as we found our faiss index was faster at L2-based retrieval. For our faiss index, we set the number of partitions to P=𝑃absentP=2,000, and search the nearest p=10𝑝10p=10 to each query embedding to retrieve k′=k=1000superscript𝑘′𝑘1000k^{\\prime}=k=1000 document vectors per query embedding. We divide each embedding into s=16𝑠16s=16 sub-vectors, each encoded using one byte. To represent the index used for the second stage of our end-to-end retrieval procedure, we use 16-bit values per dimension. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
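For reference, the end-to-end settings above can be collected in one place. The dictionary below is our own summary of the stated values, not a configuration file from the released code.

```python
# Illustrative summary of the end-to-end retrieval settings (names are ours).
colbert_end_to_end_settings = {
    "similarity": "squared_l2",      # cosine is used for re-ranking, L2 for end-to-end
    "embedding_dim": 128,            # m
    "query_maxlen": 32,              # N_q
    "faiss_partitions": 2000,        # P
    "faiss_nprobe": 10,              # p
    "vectors_per_query_term": 1000,  # k' = k
    "pq_subvectors": 16,             # s, one byte each
    "rerank_bits_per_dim": 16,
}
```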
{
"id": "2004.12832_all_51",
"text": " To evaluate the latency of neural re-ranking models in §4.2, we use a single Tesla V100 GPU that has 32 GiBs of memory on a server with two Intel Xeon Gold 6132 CPUs, each with 14 physical cores (24 hyperthreads), and 469 GiBs of RAM. For the mostly CPU-based retrieval experiments in §4.3 and the indexing experiments in §4.5, we use another server with the same CPU and system memory specifications but which has four Titan V GPUs attached, each with 12 GiBs of memory. Across all experiments, only one GPU is dedicated per query for retrieval (i.e., for methods with neural computations) but we use up to all four GPUs during indexing. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_52",
"text": " In this section, we examine ColBERT’s efficiency and effectiveness at re-ranking the top-k𝑘k results extracted by a bag-of-words retrieval model, which is the most typical setting for testing and deploying neural ranking models. We begin with the MS MARCO dataset. We compare against KNRM, Duet, and fastText+ConvKNRM, a representative set of neural matching models that have been previously tested on MS MARCO. In addition, we compare against the natural adaptation of BERT for ranking by Nogueira and Cho (Nogueira and Cho, 2019), in particular, BERTbasebase{}_{\\textnormal{base}} and its deeper counterpart BERTlargelarge{}_{\\textnormal{large}}. We also report results for “BERTbasebase{}_{\\textnormal{base}} (our training)”, which is based on Nogueira and Cho’s base model (including hyperparameters) but is trained with the same loss function as ColBERT (§3.3) for 200k iterations, allowing for a more direct comparison of the results. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_53",
"text": " We report the competition’s official metric, namely MRR@10, on the validation set (Dev) and the evaluation set (Eval). We also report the re-ranking latency, which we measure using a single Tesla V100 GPU, and the FLOPs per query for each neural ranking model. For ColBERT, our reported latency subsumes the entire computation from gathering the document representations, moving them to the GPU, tokenizing then encoding the query, and applying late interaction to compute document scores. For the baselines, we measure the scoring computations on the GPU and exclude the CPU-based text preprocessing (similar to (Hofstätter and Hanbury, 2019)). In principle, the baselines can pre-compute the majority of this preprocessing (e.g., document tokenization) offline and parallelize the rest across documents online, leaving only a negligible cost. We estimate the FLOPs per query of each model using the torchprofile666https://github.com/mit-han-lab/torchprofile library. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_54",
"text": " We now proceed to study the results, which are reported in Table 1. To begin with, we notice the fast progress from KNRM in 2017 to the BERT-based models in 2019, manifesting itself in over 16% increase in MRR@10. As described in §1, the simultaneous increase in computational cost is difficult to miss. Judging by their rather monotonic pattern of increasingly larger cost and higher effectiveness, these results appear to paint a picture where expensive models are necessary for high-quality ranking. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_55",
"text": " In contrast with this trend, ColBERT (which employs late interaction over BERTbasebase{}_{\\textnormal{base}}) performs no worse than the original adaptation of BERTbasebase{}_{\\textnormal{base}} for ranking by Nogueira and Cho (Nogueira and Cho, 2019; Nogueira et al., 2019b) and is only marginally less effective than BERTlargelarge{}_{\\textnormal{large}} and our training of BERTbasebase{}_{\\textnormal{base}} (described above). While highly competitive in effectiveness, ColBERT is orders of magnitude cheaper than BERTbasebase{}_{\\textnormal{base}}, in particular, by over 170×\\times in latency and 13,900×\\times in FLOPs. This highlights the expressiveness of our proposed late interaction mechanism, particularly when coupled with a powerful pre-trained LM like BERT. While ColBERT’s re-ranking latency is slightly higher than the non-BERT re-ranking models shown (i.e., by 10s of milliseconds), this difference is explained by the time it takes to gather, stack, and transfer the document embeddings to the GPU. In particular, the query encoding and interaction in ColBERT consume only 13 milliseconds of its total execution time. We note that ColBERT’s latency and FLOPs can be considerably reduced by padding queries to a shorter length, using smaller vector dimensions (the MRR@10 of which is tested in §4.5), employing quantization of the document vectors, and storing the embeddings on GPU if sufficient memory exists. We leave these directions for future work. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_56",
"text": " Diving deeper into the quality–cost tradeoff between BERT and ColBERT, Figure 4 demonstrates the relationships between FLOPs and effectiveness (MRR@10) as a function of the re-ranking depth k𝑘k when re-ranking the top-k𝑘k results by BM25, comparing ColBERT and BERTbasebase{}_{\\textnormal{base}} (our training). We conduct this experiment on MS MARCO (Dev). We note here that as the official top-1000 ranking does not provide the BM25 order (and also lacks documents beyond the top-1000 per query), the models in this experiment re-rank the Anserini (Yang et al., 2018) toolkit’s BM25 output. Consequently, both MRR@10 values at k=1000𝑘1000k=1000 are slightly higher from those reported in Table 1. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_57",
"text": " Studying the results in Figure 4, we notice that not only is ColBERT much cheaper than BERT for the same model size (i.e., 12-layer “base” transformer encoder), it also scales better with the number of ranked documents. In part, this is because ColBERT only needs to process the query once, irrespective of the number of documents evaluated. For instance, at k=10𝑘10k=10, BERT requires nearly 180×\\times more FLOPs than ColBERT; at k=1000𝑘1000k=1000, BERT’s overhead jumps to 13,900×\\times. It then reaches 23,000×\\times at k=2000𝑘2000k=2000. In fact, our informal experimentation shows that this orders-of-magnitude gap in FLOPs makes it practical to run ColBERT entirely on the CPU, although CPU-based re-ranking lies outside our scope. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_58",
"text": " Having studied our results on MS MARCO, we now consider TREC CAR, whose official metric is MAP. Results are summarized in Table 3, which includes a number of important baselines (BM25, doc2query, and DeepCT) in addition to re-ranking baselines that have been tested on this dataset. These results directly mirror those with MS MARCO. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_59",
"text": " Beyond cheap re-ranking, ColBERT is amenable to top-k𝑘k retrieval directly from a full collection. Table 2 considers full retrieval, wherein each model retrieves the top-1000 documents directly from MS MARCO’s 8.8M documents per query. In addition to MRR@10 and latency in milliseconds, the table reports Recall@50, Recall@200, and Recall@1000, important metrics for a full-retrieval model that essentially filters down a large collection on a per-query basis. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_60",
"text": " We compare against BM25, in particular MS MARCO’s official BM25 ranking as well as a well-tuned baseline based on the Anserini toolkit.777http://anserini.io/ While many other traditional models exist, we are not aware of any that substantially outperform Anserini’s BM25 implementation (e.g., see RM3 in (Nogueira et al., 2019c), LMDir in (Dai and Callan, 2019a), or Microsoft’s proprietary feature-based RankSVM on the leaderboard). ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_61",
"text": " We also compare against doc2query, DeepCT, and docTTTTTquery. All three rely on a traditional bag-of-words model (primarily BM25) for retrieval. Crucially, however, they re-weigh the frequency of terms per document and/or expand the set of terms in each document before building the BM25 index. In particular, doc2query expands each document with a pre-defined number of synthetic queries generated by a seq2seq transformer model (which docTTTTquery replaced with a pre-trained language model, T5 (Raffel et al., 2019)). In contrast, DeepCT uses BERT to produce the term frequency component of BM25 in a context-aware manner. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_62",
"text": " For the latency of Anserini’s BM25, doc2query, and docTTTTquery, we use the authors’ (Nogueira et al., 2019c, a) Anserini-based implementation. While this implementation supports multi-threading, it only utilizes parallelism across different queries. We thus report single-threaded latency for these models, noting that simply parallelizing their computation over shards of the index can substantially decrease their already-low latency. For DeepCT, we only estimate its latency using that of BM25 (as denoted by (est.) in the table), since DeepCT re-weighs BM25’s term frequency without modifying the index otherwise.888In practice, a myriad of reasons could still cause DeepCT’s latency to differ slightly from BM25’s. For instance, the top-k𝑘k pruning strategy employed, if any, could interact differently with a changed distribution of scores. As discussed in §4.1, we use ColBERTL2L2{}_{\\textnormal{L2}} for end-to-end retrieval, which employs negative squared L2 distance as its vector-similarity function. For its latency, we measure the time for faiss-based candidate filtering and the subsequent re-ranking. In this experiment, faiss uses all available CPU cores. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_63",
"text": " Looking at Table 2, we first see Anserini’s BM25 baseline at 18.7 MRR@10, noticing its very low latency as implemented in Anserini (which extends the well-known Lucene system), owing to both very cheap operations and decades of bag-of-words top-k𝑘k retrieval optimizations. The three subsequent baselines, namely doc2query, DeepCT, and docTTTTquery, each brings a decisive enhancement to effectiveness. These improvements come at negligible overheads in latency, since these baselines ultimately rely on BM25-based retrieval. The most effective among these three, docTTTTquery, demonstrates a massive 9% gain over vanilla BM25 by fine-tuning the recent language model T5. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_64",
"text": " Shifting our attention to ColBERT’s end-to-end retrieval effectiveness, we see its major gains in MRR@10 over all of these end-to-end models. In fact, using ColBERT in the end-to-end setup is superior in terms of MRR@10 to re-ranking with the same model due to the improved recall. Moving beyond MRR@10, we also see large gains in Recall@k𝑘k for k𝑘k equals to 50, 200, and 1000. For instance, its Recall@50 actually exceeds the official BM25’s Recall@1000 and even all but docTTTTTquery’s Recall@200, emphasizing the value of end-to-end retrieval (instead of just re-ranking) with ColBERT. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_65",
"text": " The results from §4.2 indicate that ColBERT is highly effective despite the low cost and simplicity of its late interaction mechanism. To better understand the source of this effectiveness, we examine a number of important details in ColBERT’s interaction and encoder architecture. For this ablation, we report MRR@10 on the validation set of MS MARCO in Figure 5, which shows our main re-ranking ColBERT model (E), with MRR@10 of 34.9%. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_66",
"text": " Due to the cost of training all models, we train a copy of our main model that retains only the first 5 layers of BERT out of 12 (i.e., model (D)) and similarly train all our ablation models for 200k iterations with five BERT layers. To begin with, we ask if the fine-granular interaction in late interaction is necessary. Model (A) tackles this question: it uses BERT to produce a single embedding vector for the query and another for the document, extracted from BERT’s (CLS) contextualized embedding and expanded through a linear layer to dimension 4096 (which equals Nq×128=32×128subscript𝑁𝑞12832128N_{q}\\times 128=32\\times 128). Relevance is estimated as the inner product of the query’s and the document’s embeddings, which we found to perform better than cosine similarity for single-vector re-ranking. As the results show, this model is considerably less effective than ColBERT, reinforcing the importance of late interaction. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_67",
"text": " Subsequently, we ask if our MaxSim-based late interaction is better than other simple alternatives. We test a model (B) that replaces ColBERT’s maximum similarity with average similarity. The results suggest the importance of individual terms in the query paying special attention to particular terms in the document. Similarly, the figure emphasizes the importance of our query augmentation mechanism: without query augmentation (C), ColBERT has a noticeably lower MRR@10. Lastly, we see the impact of end-to-end retrieval not only on recall but also on MRR@10. By retrieving directly from the full collection, ColBERT is able to retrieve to the top-10 documents missed entirely from BM25’s top-1000. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_68",
"text": " Lastly, we examine the indexing throughput and space footprint of ColBERT. Figure 6 reports indexing throughput on MS MARCO documents with ColBERT and four other ablation settings, which individually enable optimizations described in §3.4 on top of basic batched indexing. Based on these throughputs, ColBERT can index MS MARCO in about three hours. Note that any BERT-based model must incur the computational cost of processing each document at least once. While ColBERT encodes each document with BERT exactly once, existing BERT-based rankers would repeat similar computations on possibly hundreds of documents for each query. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_69",
"text": " Table 4 reports the space footprint of ColBERT under various settings as we reduce the embeddings dimension and/or the bytes per dimension. Interestingly, the most space-efficient setting, that is, re-ranking with cosine similarity with 24-dimensional vectors stored as 2-byte floats, is only 1% worse in MRR@10 than the most space-consuming one, while the former requires only 27 GiBs to represent the MS MARCO collection. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_70",
"text": " In this paper, we introduced ColBERT, a novel ranking model that employs contextualized late interaction over deep LMs (in particular, BERT) for efficient retrieval. By independently encoding queries and documents into fine-grained representations that interact via cheap and pruning-friendly computations, ColBERT can leverage the expressiveness of deep LMs while greatly speeding up query processing. In addition, doing so allows using ColBERT for end-to-end neural retrieval directly from a large document collection. Our results show that ColBERT is more than 170×\\times faster and requires 14,000×\\times fewer FLOPs/query than existing BERT-based models, all while only minimally impacting quality and while outperforming every non-BERT baseline. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
},
{
"id": "2004.12832_all_71",
"text": " Acknowledgments. OK was supported by the Eltoukhy Family Graduate Fellowship at the Stanford School of Engineering. This research was supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Cisco, SAP, and the NSF under CAREER grant CNS-1651570. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. ",
"title": "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT"
}
] |
The reason why the diffusion step can be applied to both z_{t-1} and z^*_t in parallel is that their one-timestep difference keeps the two branches matched to each other. Is that right?
|
The reason is that, in the diffusion process, the noisy image z_{t-1} output at a single timestep t can be computed as DM(z_t, P, t, s) [18].
|
[
18
] |
[
{
"id": "2208.01626_all_0",
"text": " Recently, large-scale language-image (LLI) models, such as Imagen , DALL·E 2 and Parti , have shown phenomenal generative semantic and compositional power, and gained unprecedented attention from the research community and the public eye. These LLI models are trained on extremely large language-image datasets and use state-of-the-art image generative models including auto-regressive and diffusion models. However, these models do not provide simple editing means, and generally lack control over specific semantic regions of a given image. In particular, even the slightest change in the textual prompt may lead to a completely different output image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_1",
"text": " To circumvent this, LLI-based methods (28, 4, 33) require the user to explicitly mask a part of the image to be inpainted, and drive the edited image to change in the masked area only, while matching the background of the original image. This approach has provided appealing results, however, the masking procedure is cumbersome, hampering quick and intuitive text-driven editing. Moreover, masking the image content removes important structural information, which is completely ignored in the inpainting process. Therefore, some editing capabilities are out of the inpainting scope, such as modifying the texture of a specific object. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_2",
"text": " In this paper, we introduce an intuitive and powerful textual editing method to semantically edit images in pre-trained text-conditioned diffusion models via Prompt-to-Prompt manipulations. To do so, we dive deep into the cross-attention layers and explore their semantic strength as a handle to control the generated image. Specifically, we consider the internal cross-attention maps, which are high-dimensional tensors that bind pixels and tokens extracted from the prompt text. We find that these maps contain rich semantic relations which critically affect the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_3",
"text": " Our key idea is that we can edit images by injecting the cross-attention maps during the diffusion process, controlling which pixels attend to which tokens of the prompt text during which diffusion steps. To apply our method to various creative editing applications, we show several methods to control the cross-attention maps through a simple and semantic interface (see fig. 1). The first is to change a single token’s value in the prompt (e.g., “dog” to “cat”), while fixing the cross-attention maps, to preserve the scene composition. The second is to globally edit an image, e.g., change the style, by adding new words to the prompt and freezing the attention on previous tokens, while allowing new attention to flow to the new tokens. The third is to amplify or attenuate the semantic effect of a word in the generated image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_4",
"text": " Our approach constitutes an intuitive image editing interface through editing only the textual prompt, therefore called Prompt-to-Prompt. This method enables various editing tasks, which are challenging otherwise, and does not requires model training, fine-tuning, extra data, or optimization. Throughout our analysis, we discover even more control over the generation process, recognizing a trade-off between the fidelity to the edited prompt and the source image. We even demonstrate that our method can be applied to real images by using an existing inversion process. Our experiments and numerous results show that our method enables seamless editing in an intuitive text-based manner over extremely diverse images. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_5",
"text": " Image editing is one of the most fundamental tasks in computer graphics, encompassing the process of modifying an input image through the use of an auxiliary input, such as a label, scribble, mask, or reference image. A specifically intuitive way to edit an image is through textual prompts provided by the user. Recently, text-driven image manipulation has achieved significant progress using GANs (15, 8, 19, 20, 21), which are known for their high-quality generation, in tandem with CLIP , which consists of a semantically rich joint image-text representation, trained over millions of text-image pairs. Seminal works (29, 14, 46, 2) which combined these components were revolutionary, since they did not require extra manual labor, and produced highly realistic manipulations using text only. Bau et al. further demonstrated how to use masks provided by the user, to localize the text-based editing and restrict the change to a specific spatial region. However, while GAN-based image editing approaches succeed on highly-curated datasets , e.g., human faces, they struggle over large and diverse datasets. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_6",
"text": " To obtain more expressive generation capabilities, Crowson et al. use VQ-GAN , trained over diverse data, as a backbone. Other works (5, 22) exploit the recent Diffusion models (17, 39, 41, 17, 40, 36), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . Kim et al. show how to perform global changes, whereas Avrahami et al. successfully perform local manipulations using user-provided masks for guidance. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_7",
"text": " While most works that require only text (i.e., no masks) are limited to global editing (9, 23), Bar-Tal et al. proposed a text-based localized editing technique without using any mask, showing impressive results. Yet, their techniques mainly allow changing textures, but not modifying complex structures, such as changing a bicycle to a car. Moreover, unlike our method, their approach requires training a network for each input. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_8",
"text": " Numerous works (11, 16, 42, 25, 26, 30, 31, 34, 49, 9, 13, 36) significantly advanced the generation of images conditioned on plain text, known as text-to-image synthesis. Several large-scale text-image models have recently emerged, such as Imagen , DALL-E2 , and Parti , demonstrating unprecedented semantic generation. However, these models do not provide control over a generated image, specifically using text guidance only. Changing a single word in the original prompt associated with the image often leads to a completely different outcome. For instance, adding the adjective “white” to “dog” often changes the dog’s shape. To overcome this, several works (28, 4) assume that the user provides a mask to restrict the area in which the changes are applied. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_9",
"text": " Unlike previous works, our method requires textual input only, by using the spatial information from the internal layers of the generative model itself. This offers the user a much more intuitive editing experience of modifying local or global details by merely modifying the text prompt. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_10",
"text": " Let ℐℐ\\mathcal{I} be an image which was generated by a text-guided diffusion model using the text prompt 𝒫𝒫\\mathcal{P} and a random seed s𝑠s. Our goal is editing the input image guided only by the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}, resulting in an edited image ℐ∗superscriptℐ\\mathcal{I}^{*}. For example, consider an image generated from the prompt “my new bicycle”, and assume that the user wants to edit the color of the bicycle, its material, or even replace it with a scooter while preserving the appearance and structure of the original image. An intuitive interface for the user is to directly change the text prompt by further describing the appearance of the bikes, or replacing it with another word. As opposed to previous works, we wish to avoid relying on any user-defined mask to assist or signify where the edit should occur. A simple, but an unsuccessful attempt is to fix the internal randomness and regenerate using the edited text prompt. Unfortunately, as fig. 2 shows, this results in a completely different image with a different structure and composition. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_11",
"text": " Our key observation is that the structure and appearances of the generated image depend not only on the random seed, but also on the interaction between the pixels to the text embedding through the diffusion process. By modifying the pixel-to-text interaction that occurs in cross-attention layers, we provide Prompt-to-Prompt image editing capabilities. More specifically, injecting the cross-attention maps of the input image ℐℐ\\mathcal{I} enables us to preserve the original composition and structure. In section 3.1, we review how cross-attention is used, and in section 3.2 we describe how to exploit the cross-attention for editing. For additional background on diffusion models, please refer to appendix A. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_12",
"text": " We use the Imagen text-guided synthesis model as a backbone. Since the composition and geometry are mostly determined at the 64×64646464\\times 64 resolution, we only adapt the text-to-image diffusion model, using the super-resolution process as is. Recall that each diffusion step t𝑡t consists of predicting the noise ϵitalic-ϵ\\epsilon from a noisy image ztsubscript𝑧𝑡z_{t} and text embedding ψ(𝒫)𝜓𝒫\\psi(\\mathcal{P}) using a U-shaped network . At the final step, this process yields the generated image ℐ=z0ℐsubscript𝑧0\\mathcal{I}=z_{0}. Most importantly, the interaction between the two modalities occurs during the noise prediction, where the embeddings of the visual and textual features are fused using Cross-attention layers that produce spatial attention maps for each textual token. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_13",
"text": " More formally, as illustrated in fig. 3(Top), the deep spatial features of the noisy image ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}) are projected to a query matrix Q=ℓQ(ϕ(zt))𝑄subscriptℓ𝑄italic-ϕsubscript𝑧𝑡Q=\\ell_{Q}(\\phi(z_{t})), and the textual embedding is projected to a key matrix K=ℓK(ψ(𝒫))𝐾subscriptℓ𝐾𝜓𝒫K=\\ell_{K}(\\psi(\\mathcal{P})) and a value matrix V=ℓV(ψ(𝒫))𝑉subscriptℓ𝑉𝜓𝒫V=\\ell_{V}(\\psi(\\mathcal{P})), via learned linear projections ℓQ,ℓK,ℓVsubscriptℓ𝑄subscriptℓ𝐾subscriptℓ𝑉\\ell_{Q},\\ell_{K},\\ell_{V}. The attention maps are then M=Softmax(QKTd),𝑀Softmax𝑄superscript𝐾𝑇𝑑M=\\text{Softmax}\\left(\\frac{QK^{T}}{\\sqrt{d}}\\right), (1) where the cell Mijsubscript𝑀𝑖𝑗M_{ij} defines the weight of the value of the j𝑗j-th token on the pixel i𝑖i, and where d𝑑d is the latent projection dimension of the keys and queries. Finally, the cross-attention output is defined to be ϕ^(zt)=MV^italic-ϕsubscript𝑧𝑡𝑀𝑉\\widehat{\\phi}\\left(z_{t}\\right)=MV, which is then used to update the spatial features ϕ(zt)italic-ϕsubscript𝑧𝑡\\phi(z_{t}). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
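Equation (1) is ordinary scaled dot-product cross-attention between pixel features and token embeddings. The sketch below is ours; the projection layers and dimensions are placeholders rather than Imagen's actual modules.

```python
import torch

d = 64                          # latent projection dimension (placeholder value)
l_Q = torch.nn.Linear(320, d)   # acts on the spatial features phi(z_t)
l_K = torch.nn.Linear(768, d)   # act on the text embedding psi(P)
l_V = torch.nn.Linear(768, d)

def cross_attention(phi_zt: torch.Tensor, psi_P: torch.Tensor):
    Q, K, V = l_Q(phi_zt), l_K(psi_P), l_V(psi_P)
    M = torch.softmax(Q @ K.T / d**0.5, dim=-1)  # (num_pixels, num_tokens) attention maps
    return M @ V, M                              # output updates the spatial features
```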
{
"id": "2208.01626_all_14",
"text": " Intuitively, the cross-attention output MV𝑀𝑉MV is a weighted average of the values V𝑉V where the weights are the attention maps M𝑀M, which are correlated to the similarity between Q𝑄Q and K𝐾K. In practice, to increase their expressiveness, multi-head attention is used in parallel, and then the results are concatenated and passed through a learned linear layer to get the final output. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_15",
"text": " Imagen , similar to GLIDE , conditions on the text prompt in the noise prediction of each diffusion step (see section A.2) through two types of attention layers: i) cross-attention layers. ii) hybrid attention that acts both as self-attention and cross-attention by simply concatenating the text embedding sequence to the key-value pairs of each self-attention layer. Throughout the rest of the paper, we refer to both of them as cross-attention since our method only intervenes in the cross-attention part of the hybrid attention. That is, only the last channels, which refer to text tokens, are modified in the hybrid attention modules. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_16",
"text": " We return to our key observation — the spatial layout and geometry of the generated image depend on the cross-attention maps. This interaction between pixels and text is illustrated in fig. 4, where the average attention maps are plotted. As can be seen, pixels are more attracted to the words that describe them, e.g., pixels of the bear are correlated with the word “bear”. Note that averaging is done for visualization purposes, and attention maps are kept separate for each head in our method. Interestingly, we can see that the structure of the image is already determined in the early steps of the diffusion process. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_17",
"text": " Since the attention reflects the overall composition, we can inject the attention maps M𝑀M that were obtained from the generation with the original prompt 𝒫𝒫\\mathcal{P}, into a second generation with the modified prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. This allows the synthesis of an edited image ℐ∗superscriptℐ\\mathcal{I}^{*} that is not only manipulated according to the edited prompt, but also preserves the structure of the input image ℐℐ\\mathcal{I}. This example is a specific instance of a broader set of attention-based manipulations leading to different types of intuitive editing. We, therefore, start by proposing a general framework, followed by the details of the specific editing operations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_18",
"text": " Let DM(zt,𝒫,t,s)𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠DM(z_{t},\\mathcal{P},t,s) be the computation of a single step t𝑡t of the diffusion process, which outputs the noisy image zt−1subscript𝑧𝑡1z_{t-1}, and the attention map Mtsubscript𝑀𝑡M_{t} (omitted if not used). We denote by DM(zt,𝒫,t,s){M←M^}𝐷𝑀subscript𝑧𝑡𝒫𝑡𝑠←𝑀^𝑀DM(z_{t},\\mathcal{P},t,s)\\{M\\leftarrow\\widehat{M}\\} the diffusion step where we override the attention map M𝑀M with an additional given map M^^𝑀\\widehat{M}, but keep the values V𝑉V from the supplied prompt. We also denote by Mt∗superscriptsubscript𝑀𝑡M_{t}^{*} the produced attention map using the edited prompt 𝒫∗superscript𝒫\\mathcal{P}^{*}. Lastly, we define Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) to be a general edit function, receiving as input the t𝑡t’th attention maps of the original and edited images during their generation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_19",
"text": " Our general algorithm for controlled image generation consists of performing the iterative diffusion process for both prompts simultaneously, where an attention-based manipulation is applied in each step according to the desired editing task. We note that for the method above to work, we must fix the internal randomness. This is due to the nature of diffusion models, where even for the same prompt, two random seeds produce drastically different outputs. Formally, our general algorithm is: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_20",
"text": " Notice that we can also define image ℐℐ\\mathcal{I}, which is generated by prompt 𝒫𝒫\\mathcal{P} and random seed s𝑠s, as an additional input. Yet, the algorithm would remain the same. For editing real images, see section 4. Also, note that we can skip the forward call in line 777 by applying the edit function inside the diffusion forward function. Moreover, a diffusion step can be applied on both zt−1subscript𝑧𝑡1z_{t-1} and zt∗superscriptsubscript𝑧𝑡z_{t}^{*} in the same batch (i.e., in parallel), and so there is only one step overhead with respect to the original inference of the diffusion model. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
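Putting the pieces together, the controlled generation loop can be sketched as follows. DM, Edit, and the override_attention keyword are placeholders for the model's diffusion step, the chosen edit function, and the attention-injection hook; this mirrors the algorithm described above rather than reproducing the authors' implementation.

```python
def prompt_to_prompt(DM, Edit, P, P_star, z_T, T):
    z, z_star = z_T, z_T.clone()             # same seed / initial noise for both branches
    for t in range(T, 0, -1):
        z, M_t = DM(z, P, t)                 # source step, keep its attention maps
        _, M_t_star = DM(z_star, P_star, t)  # maps produced with the edited prompt
        M_hat = Edit(M_t, M_t_star, t)       # mix the two sets of maps
        z_star, _ = DM(z_star, P_star, t, override_attention=M_hat)
    return z, z_star                         # the original image I and the edit I*
```

In practice the two DM calls for z and z_star can be placed in the same batch, which is exactly the one-step-overhead parallelism mentioned in the passage above.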
{
"id": "2208.01626_all_21",
"text": " We now turn to address specific editing operations, filling the missing definition of the Edit(Mt,Mt∗,t)𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡Edit(M_{t},M_{t}^{*},t) function. An overview is presented in fig. 3(Bottom). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_22",
"text": " In this case, the user swaps tokens of the original prompt with others, e.g., 𝒫=𝒫absent\\mathcal{P}=“a big red bicycle” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“a big red car”. The main challenge is to preserve the original composition while also addressing the content of the new prompt. To this end, we inject the attention maps of the source image into the generation with the modified prompt. However, the proposed attention injection may over constrain the geometry, especially when a large structural modification, such as “car” to “bicycle”, is involved. We address this by suggesting a softer attention constrain: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_23",
"text": " Edit(Mt,Mt∗,t):={Mt∗ift<τMtotherwise.assign𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡casessuperscriptsubscript𝑀𝑡if𝑡𝜏subscript𝑀𝑡otherwise.Edit(M_{t},M_{t}^{*},t):=\\begin{cases}M_{t}^{*}&\\quad\\text{if}\\;t<\\tau\\\\ M_{t}&\\quad\\text{otherwise.}\\\\ \\end{cases} where τ𝜏\\tau is a timestamp parameter that determines until which step the injection is applied. Note that the composition is determined in the early steps of the diffusion process. Therefore, by limiting the number of injection steps, we can guide the composition of the newly generated image while allowing the necessary geometry freedom for adapting to the new prompt. An illustration is provided in section 4. Another natural relaxation for our algorithm is to assign a different number of injection timestamps for the different tokens in the prompt. In case the two words are represented using a different number of tokens, the maps can be duplicated/averaged as necessary using an alignment function as described in the next paragraph. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_24",
"text": " In another setting, the user adds new tokens to the prompt, e.g., 𝒫=𝒫absent\\mathcal{P}=“a castle next to a river” to 𝒫∗=superscript𝒫absent\\mathcal{P}^{*}=“children drawing of a castle next to a river”. To preserve the common details, we apply the attention injection only over the common tokens from both prompts. Formally, we use an alignment function A𝐴A that receives a token index from target prompt 𝒫∗superscript𝒫\\mathcal{P}^{*} and outputs the corresponding token index in 𝒫𝒫\\mathcal{P} or None if there isn’t a match. Then, the editing function is given by: ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_25",
"text": " (Edit(Mt,Mt∗,t))i,j:={(Mt∗)i,jifA(j)=None(Mt)i,A(j)otherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗casessubscriptsuperscriptsubscript𝑀𝑡𝑖𝑗if𝐴𝑗𝑁𝑜𝑛𝑒subscriptsubscript𝑀𝑡𝑖𝐴𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}(M_{t}^{*})_{i,j}&\\quad\\text{if}\\;A(j)=None\\\\ (M_{t})_{i,A(j)}&\\quad\\text{otherwise.}\\\\ \\end{cases} Recall that index i𝑖i corresponds to a pixel value, where j𝑗j corresponds to a text token. Again, we may set a timestamp τ𝜏\\tau to control the number of diffusion steps in which the injection is applied. This kind of editing enables diverse Prompt-to-Prompt capabilities such as stylization, specification of object attributes, or global manipulations as demonstrated in section 4. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_26",
"text": " Lastly, the user may wish to strengthen or weakens the extent to which each token is affecting the resulting image. For example, consider the prompt 𝒫=𝒫absent\\mathcal{P}= “a fluffy red ball”, and assume we want to make the ball more or less fluffy. To achieve such manipulation, we scale the attention map of the assigned token j∗superscript𝑗j^{*} with parameter c∈(−2,2)𝑐22c\\in(-2,2), resulting in a stronger/weaker effect. The rest of the attention maps remain unchanged. That is: (Edit(Mt,Mt∗,t))i,j:={c⋅(Mt)i,jif j=j∗(Mt)i,jotherwise.assignsubscript𝐸𝑑𝑖𝑡subscript𝑀𝑡superscriptsubscript𝑀𝑡𝑡𝑖𝑗cases⋅𝑐subscriptsubscript𝑀𝑡𝑖𝑗if 𝑗superscript𝑗subscriptsubscript𝑀𝑡𝑖𝑗otherwise.\\left(Edit\\left(M_{t},M_{t}^{*},t\\right)\\right)_{i,j}:=\\begin{cases}c\\cdot(M_{t})_{i,j}&\\quad\\text{if }j=j^{*}\\\\ (M_{t})_{i,j}&\\quad\\text{otherwise.}\\\\ \\end{cases} As described in section 4, the parameter c𝑐c allows fine and intuitive control over the induced effect. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
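The three editing operations quoted above (word swap with an injection threshold $\tau$, prompt refinement via an alignment map $A$, and attention re-weighting by a scale $c$) map almost directly onto code. The sketch below is an illustrative reconstruction under assumed conventions (cross-attention maps stored as pixels-by-tokens arrays; all function and argument names are hypothetical), not the authors' released implementation.

```python
import numpy as np

def edit_word_swap(M_t, M_t_star, t, tau):
    # Word swap: use the target prompt's own maps once t < tau; otherwise
    # inject the source maps M_t (the diffusion loop runs t = T, ..., 1).
    return M_t_star if t < tau else M_t

def edit_refinement(M_t, M_t_star, alignment):
    # Prompt refinement: alignment[j] gives the source-token index matching
    # target token j, or None when the token is new.
    out = M_t_star.copy()
    for j in range(M_t_star.shape[1]):
        if alignment[j] is not None:
            out[:, j] = M_t[:, alignment[j]]   # reuse source attention for shared tokens
    return out

def edit_reweight(M_t, token_idx, c):
    # Attention re-weighting: scale the column of token j* by c in (-2, 2).
    out = M_t.copy()
    out[:, token_idx] = c * out[:, token_idx]
    return out
```

In a full pipeline these functions would be applied inside every denoising step of the parallel source/target diffusion run described in the preceding passages.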
{
"id": "2208.01626_all_27",
"text": " Our method, described in section 3, enables intuitive text-only editing by controlling the spatial layout corresponding to each word in the user-provided prompt. In this section, we show several applications using this technique. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_28",
"text": " Text-Only Localized Editing. We first demonstrate localized editing by modifying the user-provided prompt without requiring any user-provided mask. In fig. 2, we depict an example where we generate an image using the prompt “lemon cake”. Our method allows us to retain the spatial layout, geometry, and semantics when replacing the word “lemon” with “pumpkin” (top row). Observe that the background is well-preserved, including the top-left lemons transforming into pumpkins. On the other hand, naively feeding the synthesis model with the prompt “pumpkin cake” results in a completely different geometry (333rd row), even when using the same random seed in a deterministic setting (i.e., DDIM ). Our method succeeds even for a challenging prompt such as “pasta cake.” (222nd row) — the generated cake consists of pasta layers with tomato sauce on top. Another example is provided in fig. 5 where we do not inject the attention of the entire prompt but only the attention of a specific word – “butterfly”. This enables the preservation of the original butterfly while changing the rest of the content. Additional results are provided in the appendix (fig. 13). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_29",
"text": " As can be seen in fig. 6, our method is not confined to modifying only textures, and it can perform structural modifications, e.g., change a “bicycle” to a “car”. To analyze our attention injection, in the left column we show the results without cross-attention injection, where changing a single word leads to an entirely different outcome. From left to right, we then show the resulting generated image by injecting attention to an increasing number of diffusion steps. Note that the more diffusion steps in which we apply cross-attention injection, the higher the fidelity to the original image. However, the optimal result is not necessarily achieved by applying the injection throughout all diffusion steps. Therefore, we can provide the user with even better control over the fidelity to the original image by changing the number of injection steps. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_30",
"text": " Instead of replacing one word with another, the user may wish to add a new specification to the generated image. In this case, we keep the attention maps of the original prompt, while allowing the generator to address the newly added words. For example, see fig. 7 (top), where we add “crushed” to the “car”, resulting in the generation of additional details over the original image while the background is still preserved. See the appendix (fig. 14) for more examples. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_31",
"text": " Global editing. Preserving the image composition is not only valuable for localized editing, but also an important aspect of global editing. In this setting, the editing should affect all parts of the image, but still retain the original composition, such as the location and identity of the objects. As shown in fig. 7 (bottom), we retain the image content while adding “snow” or changing the lightning. Additional examples appear in fig. 8, including translating a sketch into a photo-realistic image and inducing an artistic style. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_32",
"text": " Fader Control using Attention Re-weighting. While controlling the image by editing the prompt is very effective, we find that it still does not allow full control over the generated image. Consider the prompt “snowy mountain”. A user may want to control the amount of snow on the mountain. However, it is quite difficult to describe the desired amount of snow through text. Instead, we suggest a fader control , where the user controls the magnitude of the effect induced by a specific word, as depicted in fig. 9. As described in section 3, we achieve such control by re-scaling the attention of the specified word. Additional results are in the appendix (fig. 15). ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_33",
"text": " Real Image Editing. Editing a real image requires finding an initial noise vector that produces the given input image when fed into the diffusion process. This process, known as inversion, has recently drawn considerable attention for GANs, e.g., (51, 1, 3, 35, 50, 43, 45, 47), but has not yet been fully addressed for text-guided diffusion models. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_34",
"text": " In the following, we show preliminary editing results on real images, based on common inversion techniques for diffusion models. First, a rather naïve approach is to add Gaussian noise to the input image, and then perform a predefined number of diffusion steps. Since this approach results in significant distortions, we adopt an improved inversion approach (10, 40), which is based on the deterministic DDIM model rather than the DDPM model. We perform the diffusion process in the reverse direction, that is x0⟶xT⟶subscript𝑥0subscript𝑥𝑇x_{0}\\longrightarrow x_{T} instead of xT⟶x0⟶subscript𝑥𝑇subscript𝑥0x_{T}\\longrightarrow x_{0}, where x0subscript𝑥0x_{0} is set to be the given real image. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_35",
"text": " This inversion process often produces satisfying results, as presented in fig. 10. However, the inversion is not sufficiently accurate in many other cases, as in fig. 11. This is partially due to a distortion-editability tradeoff , where we recognize that reducing the classifier-free guidance parameter (i.e., reducing the prompt influence) improves reconstruction but constrains our ability to perform significant manipulations. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_36",
"text": " To alleviate this limitation, we propose to restore the unedited regions of the original image using a mask, directly extracted from the attention maps. Note that here the mask is generated with no guidance from the user. As presented in fig. 12, this approach works well even using the naïve DDPM inversion scheme (adding noise followed by denoising). Note that the cat’s identity is well-preserved under various editing operations, while the mask is produced only from the prompt itself. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_37",
"text": " In this work, we uncovered the powerful capabilities of the cross-attention layers within text-to-image diffusion models. We showed that these high-dimensional layers have an interpretable representation of spatial maps that play a key role in tying the words in the text prompt to the spatial layout of the synthesized image. With this observation, we showed how various manipulations of the prompt can directly control attributes in the synthesized image, paving the way to various applications including local and global editing. This work is a first step towards providing users with simple and intuitive means to edit images, leveraging textual semantic power. It enables users to navigate through a semantic, textual, space, which exhibits incremental changes after each step, rather than producing the desired image from scratch after each text manipulation. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_38",
"text": " While we have demonstrated semantic control by changing only textual prompts, our technique is still subject to a few limitations to be addressed in follow-up work. First, the current inversion process results in a visible distortion over some of the test images. In addition, the inversion requires the user to come up with a suitable prompt. This could be challenging for complicated compositions. Note that the challenge of inversion for text-guided diffusion models is an orthogonal endeavor to our work, which will be thoroughly studied in the future. Second, the current attention maps are of low resolution, as the cross-attention is placed in the network’s bottleneck. This bounds our ability to perform even more precise localized editing. To alleviate this, we suggest incorporating cross-attention also in higher-resolution layers. We leave this for future works since it requires analyzing the training procedure which is out of our current scope. Finally, we recognize that our current method cannot be used to spatially move existing objects across the image and also leave this kind of control for future work. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
},
{
"id": "2208.01626_all_39",
"text": " We thank Noa Glaser, Adi Zicher, Yaron Brodsky and Shlomi Fruchter for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained models of Imagen . Special thanks to Yossi Matias for early inspiring discussion on the problem and for motivating and encouraging us to develop technologies along the avenue of intuitive interaction. ",
"title": "Prompt-to-Prompt Image Editing with Cross Attention Control"
}
] |
Which gives better performance: using more than one image in the batch or larger input tiles with only one image in the batch ?
|
According to the experiments in the paper, large input tiles are favored over a large batch size (the batch is reduced to a single image), which reduces the overhead and makes maximum use of the GPU memory [10].
|
[
10
] |
[
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al. was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_1",
"text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al. trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_2",
"text": " Obviously, the strategy in Ciresan et al. has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_3",
"text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_4",
"text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_5",
"text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. in the scope of unsupervised feature learning. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_6",
"text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_7",
"text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_8",
"text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_9",
"text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_10",
"text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_11",
"text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk(𝐱)=exp(ak(𝐱))/(∑k′=1Kexp(ak′(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ(𝐱)(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw(𝐱)log(pℓ(𝐱)(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
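The pixel-wise soft-max and the weighted cross-entropy energy of eq. (1) can be written out in a few lines. The following NumPy sketch is illustrative only (shapes and names are assumptions, and the returned value is the negated energy so it can be minimized as a loss); it is not the original Caffe implementation.

```python
import numpy as np

def weighted_softmax_ce(activations, labels, weight_map):
    """activations: (K, H, W) raw scores a_k(x); labels: (H, W) integers in [0, K);
    weight_map: (H, W) per-pixel weights w(x)."""
    a = activations - activations.max(axis=0, keepdims=True)      # numerical stability
    p = np.exp(a) / np.exp(a).sum(axis=0, keepdims=True)          # pixel-wise soft-max p_k(x)
    H, W = labels.shape
    rows, cols = np.arange(H)[:, None], np.arange(W)[None, :]
    p_true = p[labels, rows, cols]                                # p_{l(x)}(x)
    return -(weight_map * np.log(p_true + 1e-12)).sum()           # negated energy of eq. (1)

# Toy usage: 2 classes on a 4x4 image with uniform weights.
rng = np.random.default_rng(0)
loss = weighted_softmax_ce(rng.normal(size=(2, 4, 4)),
                           rng.integers(0, 2, size=(4, 4)),
                           np.ones((4, 4)))
```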
{
"id": "1505.04597_all_12",
"text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_13",
"text": " The separation border is computed using morphological operations. The weight map is then computed as w(𝐱)=wc(𝐱)+w0⋅exp(−(d1(𝐱)+d2(𝐱))22σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
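The weight map of eq. (2) can be sketched with standard Euclidean distance transforms. The snippet below is a hedged reconstruction (the helper name, the instance-label encoding, and restricting the border term to background pixels are assumptions), not the authors' code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(instance_labels, class_weights, w0=10.0, sigma=5.0):
    """instance_labels: int array, 0 = background, 1..K = cell instances;
    class_weights: per-pixel map w_c balancing the class frequencies."""
    ids = [i for i in np.unique(instance_labels) if i != 0]
    if len(ids) < 2:
        return class_weights.copy()
    # Distance from every pixel to each individual cell.
    dists = np.stack([distance_transform_edt(instance_labels != i) for i in ids], axis=0)
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]                     # nearest and second-nearest cell
    border_term = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return class_weights + border_term * (instance_labels == 0)   # emphasize separating background
```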
{
"id": "1505.04597_all_14",
"text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
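A short sketch of the described initialization, drawing weights from a zero-mean Gaussian with standard deviation $\sqrt{2/N}$; the layer sizes below are assumed purely for illustration.

```python
import numpy as np

def he_init(out_channels, in_channels, k=3, seed=0):
    rng = np.random.default_rng(seed)
    N = k * k * in_channels                 # incoming nodes per output neuron
    std = np.sqrt(2.0 / N)                  # e.g. sqrt(2/576) ~ 0.059 for 3x3 kernels over 64 channels
    return rng.normal(0.0, std, size=(out_channels, in_channels, k, k))

W = he_init(128, 64)                        # weights for a hypothetical 64 -> 128 channel 3x3 convolution
```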
{
"id": "1505.04597_all_15",
"text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_16",
"text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_17",
"text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_18",
"text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_19",
"text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_20",
"text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_21",
"text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_22",
"text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
}
] |
What is the issue with intractable posterior distribution?
|
The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior [4].
|
[
4
] |
[
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_1",
"text": " For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational auto-encoder. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_2",
"text": " The strategy in this section can be used to derive a lower bound estimator (a stochastic objective function) for a variety of directed graphical models with continuous latent variables. We will restrict ourselves here to the common case where we have an i.i.d. dataset with latent variables per datapoint, and where we like to perform maximum likelihood (ML) or maximum a posteriori (MAP) inference on the (global) parameters, and variational inference on the latent variables. It is, for example, straightforward to extend this scenario to the case where we also perform variational inference on the global parameters; that algorithm is put in the appendix, but experiments with that case are left to future work. Note that our method can be applied to online, non-stationary settings, e.g. streaming data, but here we assume a fixed dataset for simplicity. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_3",
"text": " Let us consider some dataset 𝐗={𝐱(i)}i=1N𝐗superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑁\\mathbf{X}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{N} consisting of N𝑁N i.i.d. samples of some continuous or discrete variable 𝐱𝐱\\mathbf{x}. We assume that the data are generated by some random process, involving an unobserved continuous random variable 𝐳𝐳\\mathbf{z}. The process consists of two steps: (1) a value 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} is generated from some prior distribution p𝜽∗(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}); (2) a value 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is generated from some conditional distribution p𝜽∗(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}). We assume that the prior p𝜽∗(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}) and likelihood p𝜽∗(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}) come from parametric families of distributions p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) and p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), and that their PDFs are differentiable almost everywhere w.r.t. both 𝜽𝜽\\boldsymbol{\\theta} and 𝐳𝐳\\mathbf{z}. Unfortunately, a lot of this process is hidden from our view: the true parameters 𝜽∗superscript𝜽\\boldsymbol{\\theta}^{*} as well as the values of the latent variables 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} are unknown to us. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_4",
"text": " Very importantly, we do not make the common simplifying assumptions about the marginal or posterior probabilities. Conversely, we are here interested in a general algorithm that even works efficiently in the case of: 1. Intractability: the case where the integral of the marginal likelihood p𝜽(𝐱)=∫p𝜽(𝐳)p𝜽(𝐱|𝐳)𝑑𝐳subscript𝑝𝜽𝐱subscript𝑝𝜽𝐳subscript𝑝𝜽conditional𝐱𝐳differential-d𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x})=\\int p_{\\boldsymbol{\\theta}}(\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})\\,d\\mathbf{z} is intractable (so we cannot evaluate or differentiate the marginal likelihood), where the true posterior density p𝜽(𝐳|𝐱)=p𝜽(𝐱|𝐳)p𝜽(𝐳)/p𝜽(𝐱)subscript𝑝𝜽conditional𝐳𝐱subscript𝑝𝜽conditional𝐱𝐳subscript𝑝𝜽𝐳subscript𝑝𝜽𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x})=p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{z})/p_{\\boldsymbol{\\theta}}(\\mathbf{x}) is intractable (so the EM algorithm cannot be used), and where the required integrals for any reasonable mean-field VB algorithm are also intractable. These intractabilities are quite common and appear in cases of moderately complicated likelihood functions p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), e.g. a neural network with a nonlinear hidden layer. 2. A large dataset: we have so much data that batch optimization is too costly; we would like to make parameter updates using small minibatches or even single datapoints. Sampling-based solutions, e.g. Monte Carlo EM, would in general be too slow, since it involves a typically expensive sampling loop per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_5",
"text": " We are interested in, and propose a solution to, three related problems in the above scenario: 1. Efficient approximate ML or MAP estimation for the parameters 𝜽𝜽\\boldsymbol{\\theta}. The parameters can be of interest themselves, e.g. if we are analyzing some natural process. They also allow us to mimic the hidden random process and generate artificial data that resembles the real data. 2. Efficient approximate posterior inference of the latent variable 𝐳𝐳\\mathbf{z} given an observed value 𝐱𝐱\\mathbf{x} for a choice of parameters 𝜽𝜽\\boldsymbol{\\theta}. This is useful for coding or data representation tasks. 3. Efficient approximate marginal inference of the variable 𝐱𝐱\\mathbf{x}. This allows us to perform all kinds of inference tasks where a prior over 𝐱𝐱\\mathbf{x} is required. Common applications in computer vision include image denoising, inpainting and super-resolution. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_6",
"text": " For the purpose of solving the above problems, let us introduce a recognition model qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}): an approximation to the intractable true posterior p𝜽(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}). Note that in contrast with the approximate posterior in mean-field variational inference, it is not necessarily factorial and its parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} are not computed from some closed-form expectation. Instead, we’ll introduce a method for learning the recognition model parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} jointly with the generative model parameters 𝜽𝜽\\boldsymbol{\\theta}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_7",
"text": " From a coding theory perspective, the unobserved variables 𝐳𝐳\\mathbf{z} have an interpretation as a latent representation or code. In this paper we will therefore also refer to the recognition model qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as a probabilistic encoder, since given a datapoint 𝐱𝐱\\mathbf{x} it produces a distribution (e.g. a Gaussian) over the possible values of the code 𝐳𝐳\\mathbf{z} from which the datapoint 𝐱𝐱\\mathbf{x} could have been generated. In a similar vein we will refer to p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) as a probabilistic decoder, since given a code 𝐳𝐳\\mathbf{z} it produces a distribution over the possible corresponding values of 𝐱𝐱\\mathbf{x}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_8",
"text": " The marginal likelihood is composed of a sum over the marginal likelihoods of individual datapoints logp𝜽(𝐱(1),⋯,𝐱(N))=∑i=1Nlogp𝜽(𝐱(i))subscript𝑝𝜽superscript𝐱1⋯superscript𝐱𝑁superscriptsubscript𝑖1𝑁subscript𝑝𝜽superscript𝐱𝑖\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(1)},\\cdots,\\mathbf{x}^{(N)})=\\sum_{i=1}^{N}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}), which can each be rewritten as: logp𝜽(𝐱(i))=DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳|𝐱(i)))+ℒ(𝜽,ϕ;𝐱(i))\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})=D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}^{(i)}))+\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (1) The first RHS term is the KL divergence of the approximate from the true posterior. Since this KL-divergence is non-negative, the second RHS term ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) is called the (variational) lower bound on the marginal likelihood of datapoint i𝑖i, and can be written as: logp𝜽(𝐱(i))≥ℒ(𝜽,ϕ;𝐱(i))subscript𝑝𝜽superscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})\\geq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =𝔼qϕ(𝐳|𝐱)(−logqϕ(𝐳|𝐱)+logp𝜽(𝐱,𝐳))absentsubscript𝔼subscript𝑞bold-italic-ϕconditional𝐳𝐱delimited-()subscript𝑞bold-italic-ϕconditional𝐳𝐱subscript𝑝𝜽𝐱𝐳\\displaystyle=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})}\\left(-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})+\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})\\right) (2) which can also be written as: ℒ(𝜽,ϕ;𝐱(i))=−DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+𝔼qϕ(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) (3) We want to differentiate and optimize the lower bound ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) w.r.t. both the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and generative parameters 𝜽𝜽\\boldsymbol{\\theta}. However, the gradient of the lower bound w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi} is a bit problematic. The usual (naïve) Monte Carlo gradient estimator for this type of problem is: ∇ϕ𝔼qϕ(𝐳)(f(𝐳))=𝔼qϕ(𝐳)(f(𝐳)∇qϕ(𝐳)logqϕ(𝐳))≃1L∑l=1Lf(𝐳)∇qϕ(𝐳(l))logqϕ(𝐳(l))subscript∇bold-italic-ϕsubscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript∇subscript𝑞bold-italic-ϕ𝐳subscript𝑞bold-italic-ϕ𝐳similar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝐳subscript∇subscript𝑞bold-italic-ϕsuperscript𝐳𝑙subscript𝑞bold-italic-ϕsuperscript𝐳𝑙\\nabla_{\\boldsymbol{\\phi}}\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z})\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)}) where 𝐳(l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). 
This gradient estimator exhibits exhibits very high variance (see e.g. (BJP12)) and is impractical for our purposes. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_9",
"text": " In this section we introduce a practical estimator of the lower bound and its derivatives w.r.t. the parameters. We assume an approximate posterior in the form qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), but please note that the technique can be applied to the case qϕ(𝐳)subscript𝑞bold-italic-ϕ𝐳q_{\\boldsymbol{\\phi}}(\\mathbf{z}), i.e. where we do not condition on 𝐱𝐱\\mathbf{x}, as well. The fully variational Bayesian method for inferring a posterior over the parameters is given in the appendix. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_10",
"text": " Under certain mild conditions outlined in section 2.4 for a chosen approximate posterior qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) we can reparameterize the random variable 𝐳~∼qϕ(𝐳|𝐱)similar-to~𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\widetilde{\\mathbf{z}}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) using a differentiable transformation gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) of an (auxiliary) noise variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}: 𝐳~=gϕ(ϵ,𝐱) with ϵ∼p(ϵ)~𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱 with bold-italic-ϵsimilar-to𝑝bold-italic-ϵ\\displaystyle\\widetilde{\\mathbf{z}}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})\\text{\\quad with \\quad}\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon}) (4) See section 2.4 for general strategies for chosing such an approriate distribution p(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}) and function gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}). We can now form Monte Carlo estimates of expectations of some function f(𝐳)𝑓𝐳f(\\mathbf{z}) w.r.t. qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as follows: 𝔼qϕ(𝐳|𝐱(i))(f(𝐳))=𝔼p(ϵ)(f(gϕ(ϵ,𝐱(i))))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()𝑓𝐳subscript𝔼𝑝bold-italic-ϵdelimited-()𝑓subscript𝑔bold-italic-ϕbold-italic-ϵsuperscript𝐱𝑖\\displaystyle\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{p(\\boldsymbol{\\epsilon})}\\left(f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}^{(i)}))\\right) ≃1L∑l=1Lf(gϕ(ϵ(l),𝐱(i))) where ϵ(l)∼p(ϵ)similar-to-or-equalsabsent1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖 where superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle\\simeq\\frac{1}{L}\\sum_{l=1}^{L}{f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}))}\\text{\\quad where \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (5) We apply this technique to the variational lower bound (eq. 
(2)), yielding our generic Stochastic Gradient Variational Bayes (SGVB) estimator ℒ~A(𝜽,ϕ;𝐱(i))≃ℒ(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}): ℒ~A(𝜽,ϕ;𝐱(i))superscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =1L∑l=1Llogp𝜽(𝐱(i),𝐳(i,l))−logqϕ(𝐳(i,l)|𝐱(i))absent1𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽superscript𝐱𝑖superscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditionalsuperscript𝐳𝑖𝑙superscript𝐱𝑖\\displaystyle=\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)},\\mathbf{z}^{(i,l)})-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(i,l)}|\\mathbf{x}^{(i)}) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ(ϵ(i,l),𝐱(i)) and ϵ(l)∼p(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (6) Often, the KL-divergence DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z})) of eq. (3) can be integrated analytically (see appendix B), such that only the expected reconstruction error 𝔼qϕ(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()subscript𝑝𝜽conditionalsuperscript𝐱𝑖𝐳\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) requires estimation by sampling. The KL-divergence term can then be interpreted as regularizing ϕbold-italic-ϕ\\boldsymbol{\\phi}, encouraging the approximate posterior to be close to the prior p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}). This yields a second version of the SGVB estimator ℒ~B(𝜽,ϕ;𝐱(i))≃ℒ(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}), corresponding to eq. 
(3), which typically has less variance than the generic estimator: ℒ~B(𝜽,ϕ;𝐱(i))superscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =−DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+1L∑l=1L(logp𝜽(𝐱(i)|𝐳(i,l)))\\displaystyle=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\frac{1}{L}\\sum_{l=1}^{L}(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)})) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ(ϵ(i,l),𝐱(i)) and ϵ(l)∼p(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (7) Given multiple datapoints from a dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints, we can construct an estimator of the marginal likelihood lower bound of the full dataset, based on minibatches: ℒ(𝜽,ϕ;𝐗)≃ℒ~M(𝜽,ϕ;𝐗M)=NM∑i=1Mℒ~(𝜽,ϕ;𝐱(i))similar-to-or-equalsℒ𝜽bold-italic-ϕ𝐗superscript~ℒ𝑀𝜽bold-italic-ϕsuperscript𝐗𝑀𝑁𝑀superscriptsubscript𝑖1𝑀~ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X})\\simeq\\widetilde{\\mathcal{L}}^{M}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}^{M})=\\frac{N}{M}\\sum_{i=1}^{M}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (8) where the minibatch 𝐗M={𝐱(i)}i=1Msuperscript𝐗𝑀superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑀\\mathbf{X}^{M}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{M} is a randomly drawn sample of M𝑀M datapoints from the full dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints. In our experiments we found that the number of samples L𝐿L per datapoint can be set to 111 as long as the minibatch size M𝑀M was large enough, e.g. M=100𝑀100M=100. Derivatives ∇𝜽,ϕℒ~(𝜽;𝐗M)subscript∇𝜽bold-italic-ϕ~ℒ𝜽superscript𝐗𝑀\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta};\\mathbf{X}^{M}) can be taken, and the resulting gradients can be used in conjunction with stochastic optimization methods such as SGD or Adagrad (DHS10). See algorithm 1 for a basic approach to compute the stochastic gradients. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_11",
"text": " A connection with auto-encoders becomes clear when looking at the objective function given at eq. (7). The first term is (the KL divergence of the approximate posterior from the prior) acts as a regularizer, while the second term is a an expected negative reconstruction error. The function gϕ(.)g_{\\boldsymbol{\\phi}}(.) is chosen such that it maps a datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and a random noise vector ϵ(l)superscriptbold-italic-ϵ𝑙\\boldsymbol{\\epsilon}^{(l)} to a sample from the approximate posterior for that datapoint: 𝐳(i,l)=gϕ(ϵ(l),𝐱(i))superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}) where 𝐳(i,l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). Subsequently, the sample 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)} is then input to function logp𝜽(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}), which equals the probability density (or mass) of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} under the generative model, given 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)}. This term is a negative reconstruction error in auto-encoder parlance. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_12",
"text": " In order to solve our problem we invoked an alternative method for generating samples from qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). The essential parameterization trick is quite simple. Let 𝐳𝐳\\mathbf{z} be a continuous random variable, and 𝐳∼qϕ(𝐳|𝐱)similar-to𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\mathbf{z}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) be some conditional distribution. It is then often possible to express the random variable 𝐳𝐳\\mathbf{z} as a deterministic variable 𝐳=gϕ(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}), where ϵbold-italic-ϵ\\boldsymbol{\\epsilon} is an auxiliary variable with independent marginal p(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}), and gϕ(.)g_{\\boldsymbol{\\phi}}(.) is some vector-valued function parameterized by ϕbold-italic-ϕ\\boldsymbol{\\phi}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_13",
"text": " This reparameterization is useful for our case since it can be used to rewrite an expectation w.r.t qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) such that the Monte Carlo estimate of the expectation is differentiable w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi}. A proof is as follows. Given the deterministic mapping 𝐳=gϕ(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) we know that qϕ(𝐳|𝐱)∏idzi=p(ϵ)∏idϵisubscript𝑞bold-italic-ϕconditional𝐳𝐱subscriptproduct𝑖𝑑subscript𝑧𝑖𝑝bold-italic-ϵsubscriptproduct𝑖𝑑subscriptitalic-ϵ𝑖q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})\\prod_{i}dz_{i}=p(\\boldsymbol{\\epsilon})\\prod_{i}d\\epsilon_{i}. Therefore111Note that for infinitesimals we use the notational convention d𝐳=∏idzi𝑑𝐳subscriptproduct𝑖𝑑subscript𝑧𝑖d\\mathbf{z}=\\prod_{i}dz_{i}, ∫qϕ(𝐳|𝐱)f(𝐳)𝑑𝐳=∫p(ϵ)f(𝐳)𝑑ϵ=∫p(ϵ)f(gϕ(ϵ,𝐱))𝑑ϵsubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳𝑝bold-italic-ϵ𝑓𝐳differential-dbold-italic-ϵ𝑝bold-italic-ϵ𝑓subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱differential-dbold-italic-ϵ\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}=\\int p(\\boldsymbol{\\epsilon})f(\\mathbf{z})\\,d\\boldsymbol{\\epsilon}=\\int p(\\boldsymbol{\\epsilon})f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}))\\,d\\boldsymbol{\\epsilon}. It follows that a differentiable estimator can be constructed: ∫qϕ(𝐳|𝐱)f(𝐳)𝑑𝐳≃1L∑l=1Lf(gϕ(𝐱,ϵ(l)))similar-to-or-equalssubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕ𝐱superscriptbold-italic-ϵ𝑙\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(g_{\\boldsymbol{\\phi}}(\\mathbf{x},\\boldsymbol{\\epsilon}^{(l)})) where ϵ(l)∼p(ϵ)similar-tosuperscriptbold-italic-ϵ𝑙𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}). In section 2.3 we applied this trick to obtain a differentiable estimator of the variational lower bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_14",
"text": " Take, for example, the univariate Gaussian case: let z∼p(z|x)=𝒩(μ,σ2)similar-to𝑧𝑝conditional𝑧𝑥𝒩𝜇superscript𝜎2z\\sim p(z|x)=\\mathcal{N}(\\mu,\\sigma^{2}). In this case, a valid reparameterization is z=μ+σϵ𝑧𝜇𝜎italic-ϵz=\\mu+\\sigma\\epsilon, where ϵitalic-ϵ\\epsilon is an auxiliary noise variable ϵ∼𝒩(0,1)similar-toitalic-ϵ𝒩01\\epsilon\\sim\\mathcal{N}(0,1). Therefore, 𝔼𝒩(z;μ,σ2)(f(z))=𝔼𝒩(ϵ;0,1)(f(μ+σϵ))≃1L∑l=1Lf(μ+σϵ(l))subscript𝔼𝒩𝑧𝜇superscript𝜎2delimited-()𝑓𝑧subscript𝔼𝒩italic-ϵ01delimited-()𝑓𝜇𝜎italic-ϵsimilar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝜇𝜎superscriptitalic-ϵ𝑙\\mathbb{E}_{\\mathcal{N}(z;\\mu,\\sigma^{2})}\\left(f(z)\\right)=\\mathbb{E}_{\\mathcal{N}(\\epsilon;0,1)}\\left(f(\\mu+\\sigma\\epsilon)\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mu+\\sigma\\epsilon^{(l)}) where ϵ(l)∼𝒩(0,1)similar-tosuperscriptitalic-ϵ𝑙𝒩01\\epsilon^{(l)}\\sim\\mathcal{N}(0,1). ",
"title": "Auto-Encoding Variational Bayes"
},
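The univariate Gaussian example above admits a short numerical check. The sketch below estimates $\mathbb{E}[f(z)]$ with the reparameterization $z=\mu+\sigma\epsilon$; the choice of $f$ and the constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, L = 1.5, 0.8, 100_000
eps = rng.standard_normal(L)           # auxiliary noise, independent of (mu, sigma)
z = mu + sigma * eps                   # reparameterized samples from N(mu, sigma^2)

f = lambda v: v ** 2                   # any f; here E[z^2] = mu^2 + sigma^2 = 2.89
estimate = f(z).mean()                 # Monte Carlo estimate; gradients w.r.t. mu, sigma flow through z
```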
{
"id": "1312.6114_all_15",
"text": " For which qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) can we choose such a differentiable transformation gϕ(.)g_{\\boldsymbol{\\phi}}(.) and auxiliary variable ϵ∼p(ϵ)similar-tobold-italic-ϵ𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon})? Three basic approaches are: 1. Tractable inverse CDF. In this case, let ϵ∼𝒰(𝟎,𝐈)similar-tobold-italic-ϵ𝒰0𝐈\\boldsymbol{\\epsilon}\\sim\\mathcal{U}(\\mathbf{0},\\mathbf{I}), and let gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) be the inverse CDF of qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). Examples: Exponential, Cauchy, Logistic, Rayleigh, Pareto, Weibull, Reciprocal, Gompertz, Gumbel and Erlang distributions. 2. Analogous to the Gaussian example, for any ”location-scale” family of distributions we can choose the standard distribution (with location=0location0\\text{location}=0, scale=1scale1\\text{scale}=1) as the auxiliary variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}, and let g(.)=location+scale⋅ϵg(.)=\\text{location}+\\text{scale}\\cdot\\boldsymbol{\\epsilon}. Examples: Laplace, Elliptical, Student’s t, Logistic, Uniform, Triangular and Gaussian distributions. 3. Composition: It is often possible to express random variables as different transformations of auxiliary variables. Examples: Log-Normal (exponentiation of normally distributed variable), Gamma (a sum over exponentially distributed variables), Dirichlet (weighted sum of Gamma variates), Beta, Chi-Squared, and F distributions. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_16",
"text": " When all three approaches fail, good approximations to the inverse CDF exist requiring computations with time complexity comparable to the PDF (see e.g. (Dev86) for some methods). ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_17",
"text": " In this section we’ll give an example where we use a neural network for the probabilistic encoder qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) (the approximation to the posterior of the generative model p𝜽(𝐱,𝐳)subscript𝑝𝜽𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})) and where the parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and 𝜽𝜽\\boldsymbol{\\theta} are optimized jointly with the AEVB algorithm. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_18",
"text": " Let the prior over the latent variables be the centered isotropic multivariate Gaussian p𝜽(𝐳)=𝒩(𝐳;𝟎,𝐈)subscript𝑝𝜽𝐳𝒩𝐳0𝐈p_{\\boldsymbol{\\theta}}(\\mathbf{z})=\\mathcal{N}(\\mathbf{z};\\mathbf{0},\\mathbf{I}). Note that in this case, the prior lacks parameters. We let p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from 𝐳𝐳\\mathbf{z} with a MLP (a fully-connected neural network with a single hidden layer, see appendix C). Note the true posterior p𝜽(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}) is in this case intractable. While there is much freedom in the form qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), we’ll assume the true (but intractable) posterior takes on a approximate Gaussian form with an approximately diagonal covariance. In this case, we can let the variational approximate posterior be a multivariate Gaussian with a diagonal covariance structure222Note that this is just a (simplifying) choice, and not a limitation of our method.: logqϕ(𝐳|𝐱(i))subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\displaystyle\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) =log𝒩(𝐳;𝝁(i),𝝈2(i)𝐈)absent𝒩𝐳superscript𝝁𝑖superscript𝝈2𝑖𝐈\\displaystyle=\\log\\mathcal{N}(\\mathbf{z};\\boldsymbol{\\mu}^{(i)},\\boldsymbol{\\sigma}^{2(i)}\\mathbf{I}) (9) where the mean and s.d. of the approximate posterior, 𝝁(i)superscript𝝁𝑖\\boldsymbol{\\mu}^{(i)} and 𝝈(i)superscript𝝈𝑖\\boldsymbol{\\sigma}^{(i)}, are outputs of the encoding MLP, i.e. nonlinear functions of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} (see appendix C). ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_19",
"text": " As explained in section 2.4, we sample from the posterior 𝐳(i,l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) using 𝐳(i,l)=gϕ(𝐱(i),ϵ(l))=𝝁(i)+𝝈(i)⊙ϵ(l)superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscript𝐱𝑖superscriptbold-italic-ϵ𝑙superscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\mathbf{x}^{(i)},\\boldsymbol{\\epsilon}^{(l)})=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)} where ϵ(l)∼𝒩(𝟎,𝐈)similar-tosuperscriptbold-italic-ϵ𝑙𝒩0𝐈\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}). With ⊙direct-product\\odot we signify an element-wise product. In this model both p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) (the prior) and qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) are Gaussian; in this case, we can use the estimator of eq. (7) where the KL divergence can be computed and differentiated without estimation (see appendix B). The resulting estimator for this model and datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is: ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) ≃12∑j=1J(1+log((σj(i))2)−(μj(i))2−(σj(i))2)+1L∑l=1Llogp𝜽(𝐱(i)|𝐳(i,l))similar-to-or-equalsabsent12superscriptsubscript𝑗1𝐽1superscriptsuperscriptsubscript𝜎𝑗𝑖2superscriptsuperscriptsubscript𝜇𝑗𝑖2superscriptsuperscriptsubscript𝜎𝑗𝑖21𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\displaystyle\\simeq\\frac{1}{2}\\sum_{j=1}^{J}\\left(1+\\log((\\sigma_{j}^{(i)})^{2})-(\\mu_{j}^{(i)})^{2}-(\\sigma_{j}^{(i)})^{2}\\right)+\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where\\quad}\\mathbf{z}^{(i,l)} =𝝁(i)+𝝈(i)⊙ϵ(l) and ϵ(l)∼𝒩(0,𝐈)absentsuperscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙 and superscriptbold-italic-ϵ𝑙similar-to𝒩0𝐈\\displaystyle=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)}\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(0,\\mathbf{I}) (10) As explained above and in appendix C, the decoding term logp𝜽(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) is a Bernoulli or Gaussian MLP, depending on the type of data we are modelling. ",
"title": "Auto-Encoding Variational Bayes"
},
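As a concrete illustration of the estimator in eq. (10) above, here is a minimal NumPy sketch for the Gaussian-encoder / Bernoulli-decoder case. The function and argument names (sgvb_lowerbound, decode_logits) are illustrative assumptions rather than anything from the paper, and a real implementation would compute gradients with an autodiff library instead of by hand.

```python
import numpy as np

def sgvb_lowerbound(x, mu, log_var, decode_logits, L=1, rng=None):
    # One- (or L-) sample estimate of eq. (10) for a diagonal-Gaussian encoder
    # and a Bernoulli decoder. `mu` and `log_var` are the encoder outputs for
    # datapoint x; `decode_logits(z)` maps a latent sample to Bernoulli logits.
    rng = np.random.default_rng() if rng is None else rng
    # Analytic KL term: -D_KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl_term = 0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    # Monte Carlo reconstruction term via the reparameterization z = mu + sigma * eps.
    rec_term = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps
        logits = decode_logits(z)
        # Bernoulli log-likelihood log p(x|z), written with a numerically stable softplus.
        rec_term += np.sum(x * logits - np.logaddexp(0.0, logits))
    return kl_term + rec_term / L
```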
{
"id": "1312.6114_all_20",
"text": " The wake-sleep algorithm (HDFN95) is, to the best of our knowledge, the only other on-line learning method in the literature that is applicable to the same general class of continuous latent variable models. Like our method, the wake-sleep algorithm employs a recognition model that approximates the true posterior. A drawback of the wake-sleep algorithm is that it requires a concurrent optimization of two objective functions, which together do not correspond to optimization of (a bound of) the marginal likelihood. An advantage of wake-sleep is that it also applies to models with discrete latent variables. Wake-Sleep has the same computational complexity as AEVB per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_21",
"text": " Stochastic variational inference (HBWP13) has recently received increasing interest. Recently, (BJP12) introduced a control variate schemes to reduce the high variance of the naïve gradient estimator discussed in section 2.1, and applied to exponential family approximations of the posterior. In (RGB13) some general methods, i.e. a control variate scheme, were introduced for reducing the variance of the original gradient estimator. In (SK13), a similar reparameterization as in this paper was used in an efficient version of a stochastic variational inference algorithm for learning the natural parameters of exponential-family approximating distributions. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_22",
"text": " The AEVB algorithm exposes a connection between directed probabilistic models (trained with a variational objective) and auto-encoders. A connection between linear auto-encoders and a certain class of generative linear-Gaussian models has long been known. In (Row98) it was shown that PCA corresponds to the maximum-likelihood (ML) solution of a special case of the linear-Gaussian model with a prior p(𝐳)=𝒩(0,𝐈)𝑝𝐳𝒩0𝐈p(\\mathbf{z})=\\mathcal{N}(0,\\mathbf{I}) and a conditional distribution p(𝐱|𝐳)=𝒩(𝐱;𝐖𝐳,ϵ𝐈)𝑝conditional𝐱𝐳𝒩𝐱𝐖𝐳italic-ϵ𝐈p(\\mathbf{x}|\\mathbf{z})=\\mathcal{N}(\\mathbf{x};\\mathbf{W}\\mathbf{z},\\epsilon\\mathbf{I}), specifically the case with infinitesimally small ϵitalic-ϵ\\epsilon. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_23",
"text": " In relevant recent work on autoencoders (VLL+10) it was shown that the training criterion of unregularized autoencoders corresponds to maximization of a lower bound (see the infomax principle (Lin89)) of the mutual information between input X𝑋X and latent representation Z𝑍Z. Maximizing (w.r.t. parameters) of the mutual information is equivalent to maximizing the conditional entropy, which is lower bounded by the expected loglikelihood of the data under the autoencoding model (VLL+10), i.e. the negative reconstrution error. However, it is well known that this reconstruction criterion is in itself not sufficient for learning useful representations (BCV13). Regularization techniques have been proposed to make autoencoders learn useful representations, such as denoising, contractive and sparse autoencoder variants (BCV13). The SGVB objective contains a regularization term dictated by the variational bound (e.g. eq. (10)), lacking the usual nuisance regularization hyperparameter required to learn useful representations. Related are also encoder-decoder architectures such as the predictive sparse decomposition (PSD) (KRL08), from which we drew some inspiration. Also relevant are the recently introduced Generative Stochastic Networks (BTL13) where noisy auto-encoders learn the transition operator of a Markov chain that samples from the data distribution. In (SL10) a recognition model was employed for efficient learning with Deep Boltzmann Machines. These methods are targeted at either unnormalized models (i.e. undirected models like Boltzmann machines) or limited to sparse coding models, in contrast to our proposed algorithm for learning a general class of directed probabilistic models. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_24",
"text": " The recently proposed DARN method (GMW13), also learns a directed probabilistic model using an auto-encoding structure, however their method applies to binary latent variables. Even more recently, (RMW14) also make the connection between auto-encoders, directed proabilistic models and stochastic variational inference using the reparameterization trick we describe in this paper. Their work was developed independently of ours and provides an additional perspective on AEVB. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_25",
"text": " We trained generative models of images from the MNIST and Frey Face datasets333Available at http://www.cs.nyu.edu/~roweis/data.html and compared learning algorithms in terms of the variational lower bound, and the estimated marginal likelihood. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_26",
"text": " The generative model (encoder) and variational approximation (decoder) from section 3 were used, where the described encoder and decoder have an equal number of hidden units. Since the Frey Face data are continuous, we used a decoder with Gaussian outputs, identical to the encoder, except that the means were constrained to the interval (0,1)01(0,1) using a sigmoidal activation function at the decoder output. Note that with hidden units we refer to the hidden layer of the neural networks of the encoder and decoder. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_27",
"text": " Parameters are updated using stochastic gradient ascent where gradients are computed by differentiating the lower bound estimator ∇𝜽,ϕℒ(𝜽,ϕ;𝐗)subscript∇𝜽bold-italic-ϕℒ𝜽bold-italic-ϕ𝐗\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}) (see algorithm 1), plus a small weight decay term corresponding to a prior p(𝜽)=𝒩(0,𝐈)𝑝𝜽𝒩0𝐈p(\\boldsymbol{\\theta})=\\mathcal{N}(0,\\mathbf{I}). Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_28",
"text": " We compared performance of AEVB to the wake-sleep algorithm (HDFN95). We employed the same encoder (also called recognition model) for the wake-sleep algorithm and the variational auto-encoder. All parameters, both variational and generative, were initialized by random sampling from 𝒩(0,0.01)𝒩00.01\\mathcal{N}(0,0.01), and were jointly stochastically optimized using the MAP criterion. Stepsizes were adapted with Adagrad (DHS10); the Adagrad global stepsize parameters were chosen from {0.01, 0.02, 0.1} based on performance on the training set in the first few iterations. Minibatches of size M=100𝑀100M=100 were used, with L=1𝐿1L=1 samples per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_29",
"text": " We trained generative models (decoders) and corresponding encoders (a.k.a. recognition models) having 500500500 hidden units in case of MNIST, and 200200200 hidden units in case of the Frey Face dataset (to prevent overfitting, since it is a considerably smaller dataset). The chosen number of hidden units is based on prior literature on auto-encoders, and the relative performance of different algorithms was not very sensitive to these choices. Figure 2 shows the results when comparing the lower bounds. Interestingly, superfluous latent variables did not result in overfitting, which is explained by the regularizing nature of the variational bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_30",
"text": " For very low-dimensional latent space it is possible to estimate the marginal likelihood of the learned generative models using an MCMC estimator. More information about the marginal likelihood estimator is available in the appendix. For the encoder and decoder we again used neural networks, this time with 100 hidden units, and 3 latent variables; for higher dimensional latent space the estimates became unreliable. Again, the MNIST dataset was used. The AEVB and Wake-Sleep methods were compared to Monte Carlo EM (MCEM) with a Hybrid Monte Carlo (HMC) (DKPR87) sampler; details are in the appendix. We compared the convergence speed for the three algorithms, for a small and large training set size. Results are in figure 3. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_31",
"text": " If we choose a low-dimensional latent space (e.g. 2D), we can use the learned encoders (recognition model) to project high-dimensional data to a low-dimensional manifold. See appendix A for visualisations of the 2D latent manifolds for the MNIST and Frey Face datasets. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_32",
"text": " We have introduced a novel estimator of the variational lower bound, Stochastic Gradient VB (SGVB), for efficient approximate inference with continuous latent variables. The proposed estimator can be straightforwardly differentiated and optimized using standard stochastic gradient methods. For the case of i.i.d. datasets and continuous latent variables per datapoint we introduce an efficient algorithm for efficient inference and learning, Auto-Encoding VB (AEVB), that learns an approximate inference model using the SGVB estimator. The theoretical advantages are reflected in experimental results. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_33",
"text": " Since the SGVB estimator and the AEVB algorithm can be applied to almost any inference and learning problem with continuous latent variables, there are plenty of future directions: (i) learning hierarchical generative architectures with deep neural networks (e.g. convolutional networks) used for the encoders and decoders, trained jointly with AEVB; (ii) time-series models (i.e. dynamic Bayesian networks); (iii) application of SGVB to the global parameters; (iv) supervised models with latent variables, useful for learning complicated noise distributions. ",
"title": "Auto-Encoding Variational Bayes"
}
] |
How are objective control signals more advantageous than subjective control signals when controlling the caption generation process?
|
With subjective control signals, it is harder to control the generation process effectively and precisely [1].
|
[
1
] |
[
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks, current captioning models plausibly have already achieved “super-human” performance in all accuracy-based evaluation metrics. However, many studies have indicated that these models tend to produce generic descriptions, and fail to control the caption generation process as humans, \\eg, referring to different contents of interest or descriptive patterns. In order to endow the captioning models with human-like controllability, a recent surge of efforts (16, 10, 19, 78, 48, 77, 27, 20) resort to introducing extra control signals as constraints of the generated captions, called Controllable Image Captioning (CIC). As a byproduct, the CIC models can easily generate diverse descriptions by feeding different control signals. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_1",
"text": " Early CIC works mainly focus on subjective control signals, such as sentiments , emotions (42, 22), and personality (14, 54), \\ie, the linguistic styles of sentences. Although these stylized captioning models can eventually produce style-related captions, they remain hard to control the generation process effectively and precisely. To further improve the controllability, recent CIC works gradually put a more emphasis on objective control signals. More specifically, they can be coarsely classified into two categories: 1) Content-controlled: the control signals are about the contents of interest which need to be described. As the example shown in Figure 1 (a), given the region set () as a control signal, we hope that the generated caption can cover all regions (\\ie, man, wave, and surfboard). So far, various types of content-controlled signals have been proposed, such as visual relations , object regions (16, 35), scene graphs (10, 78), and mouse trace . 2) Structure-controlled: the control signals are about the semantic structures of sentences. For instance, the length-level , part-of-speech tags , or attributes of the sentence (cf. Figure 1 (b)) are some typical structure-controlled signals. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_2",
"text": " Nevertheless, all existing objective control signals (\\ie, both content-controlled and structure-controlled) have overlooked two indispensable characteristics of an ideal control signal towards “human-like” controllable image captioning: 1) Event-compatible: all visual contents referred to in a single sentence should be compatible with the described activity. Imaging how humans describe images — our brains always quickly structure a descriptive pattern like “sth do sth at someplace” first, and then fill in the detailed description (56, 46, 30, 71), \\ie, we have subconsciously made sure that all the mentioned entities are event-compatible (\\eg, man, wave, surfboard are all involved in activity riding in Figure 1 (a)). To further see the negative impact of dissatisfying this requirement, suppose that we deliberately utilize two more objects (hand and sky, \\ie, ) as part of the control signal, and the model generates an incoherent and illogical caption. 2) Sample-suitable: the control signals should be suitable for the specific image sample. By “suitable”, we mean that there do exist reasonable descriptions satisfying the control signals, \\eg, a large length-level may not be suitable for an image with a very simple scene. Unfortunately, it is always very difficult to decide whether a control signal is sample-suitable in advance. For example in Figure 1 (b), although the two control signals (\\ie, length-levels 3 and 4) are quite close, the quality of respectively generated captions varies greatly. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_3",
"text": " In this paper, we propose a new event-oriented objective control signal, Verb-specific Semantic Roles (VSR), to meet both event-compatible and sample-suitable requirements simultaneously. VSR consists of a verb (\\ie, predicate ) and some user-interested semantic roles . As shown in Figure 2, the verb captures the scope of a salient activity in the image (\\eg, eating), and the corresponding semantic roles111We use PropBank-style annotations of semantic roles (\\eg, Arg0, Arg1) in all experiments (cf. Figure 1). The FrameNet-style annotations of semantic roles (\\eg, Agent) here are just for a more intuitive illustration. In the PropBank-style annotations, Arg denotes “argument”, MNR denotes “manner”, DIR denotes “directional”, and LOC denotes “location”. We leave more details in the supplementary material. (\\eg, agent, food, container, and tool) categorize how objects participate in this activity, \\ie, a child (agent) is eating (activity) a pancake (food) from a plate (container) with a fork (tool). Thus, VSR is designed to guarantee that all the mentioned entities are event-compatible. Meanwhile, unlike the existing structure-controlled signals which directly impose constraints on the generated captions, VSR only restricts the involved semantic roles, which is theoretically suitable for all the images with the activity, \\ie, sample-suitable. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_4",
"text": " In order to generate sentences with respect to the designated VSRs, we first train a grounded semantic role labeling (GSRL) model to identify and ground all entities for each role. Then, we propose a semantic structure planner (SSP) to rank the given verb and semantic roles, and output some human-like descriptive semantic structures, \\eg, Arg0readerreader{}_{\\text{reader}} – read – Arg1thingthing{}_{\\text{thing}} – LOC in Figure 1 (c). Finally, we combine the grounded entities and semantic structures, and use an RNN-based role-shift captioning model to generate the captions by sequentially focusing on different roles. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_5",
"text": " Although these are no available captioning datasets with the VSR annotations, they can be easily obtained by off-the-shelf semantic role parsing toolkits . Extensive experiments on two challenging CIC benchmarks (\\ie, COCO Entities and Flickr30K Entities ) demonstrate that our framework can achieve better controllability given designated VSRs than several strong baselines. Moreover, our framework can also realize diverse image captioning and achieve a better trade-off between quality and diversity. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_6",
"text": " In summary, we make three contributions in this paper: 1. We propose a new control signal for CIC: Verb-specific Semantic Roles (VSR). To the best of our knowledge, VSR is the first control signal to consider both event-compatible and sample-suitable requirements222When using control signals extracted from GT captions, existing control signals can always meet both requirements and generate reasonable captions. However, in more general settings (\\eg, construct control signals without GT captions), the form of VSR is more human-friendly, and it is easier to construct signals which meet both requirements compared with all existing forms of control signals, which is the main advantage of VSR.. 2. We can learn human-like verb-specific semantic structures automatically, and abundant visualization examples demonstrate that these patterns are reasonable. 3. We achieve state-of-the-art controllability on two challenging benchmarks, and generate diverse captions by using different verbs, semantic roles, or structures. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_7",
"text": " Controllable Image Captioning. Compared with conventional image captioning (63, 68, 9, 25, 13), CIC is a more challenging task, which needs to consider extra constraints. Early CIC works are mostly about stylized image captioning, \\ie, constraints are the linguistic styles of sentences. According to the requirements of parallel training samples, existing solutions can be divided into two types: models using parallel stylized image-caption data (41, 11, 54, 1) or not (22, 42). Subsequently, the community gradually shifts the emphasis to controlling described contents (16, 77, 27, 10, 78, 48, 35) or structures (20, 19, 75, 76) of the sentences. In this paper, we propose a novel control signal VSR, which is the first control signal to consider both the event-compatible and sample-suitable requirements. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_8",
"text": " Diverse and Distinctive Image Captioning. Diverse image captioning, \\ie, describing the image contents with diverse wordings and rich expressions, is an essential property of human-like captioning models. Except from feeding different control signals to the CIC models, other diverse captioning methods can be coarsely grouped into four types: 1) GAN-based (17, 52, 32): they use a discriminator to force the generator to generate human-indistinguishable captions. 2) VAE-based (65, 7): the diversity obtained with them is by sampling from a learned latent space. 3) RL-based : they regard diversity as an extra reward in the RL training stage. 4) BS-based : they decode a list of diverse captions by optimizing a diversity-augmented objective. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_9",
"text": " Meanwhile, distinctive image captioning is another close research direction (18, 60, 37, 36, 64), which aims to generate discriminative and unique captions for individual images. Unfortunately, due to the subjective nature of diverse and distinctive captions, effective evaluation remains as an open problem, and several new metrics are proposed, such as SPICE-U , CIDErBtw , self-CIDEr , word recall , mBLEU . In this paper, we can easily generate diverse captions in both lexical-level and syntactic-level. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_10",
"text": " Semantic Roles in Images. Inspired from the semantic role labeling task in NLP, several tasks have been proposed to label the roles of each object in an activity in an image: ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_11",
"text": " Visual Semantic Role Labeling (VSRL), also called situation recognition, is a generalization of action recognition and human-object interaction, which aims to label an image with a set of verb-specific action frames . Specifically, each action frame describes details of the activity captured by the verb, and it consists of a fixed set of verb-specific semantic roles and their corresponding values. The values are the entities or objects involved in the activity and the semantic roles categorize how objects participate in the activity. The current VSRL methods (23, 73, 40, 33, 72, 57, 15) usually learn an independent action classifier first, and then model the role inter-dependency by RNNs or GNNs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_12",
"text": " Grounded Semantic Role Labeling (GSRL), also called grounded situation recognition, builds upon the VSRL task, which requires the models not only to label a set of frames, but also to localize each role-value pair in the image (49, 55, 70, 23). In this paper, we use the GSRL model as a bridge to connect the control signals (VSR) and related regions. To the best of our knowledge, we are the first captioning work to benefit from the verb lexicon developed by linguists. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_13",
"text": " For human-like controllable image captioning, we first propose the Verb-specific Semantic Roles (VSR) as the control signal for generating customized captions. As shown in Figure 3, we formally represent a control signal VSR as: 𝒱𝒮ℛ={v,<s1,n1>,…,<sm,nm>},\\displaystyle\\begin{aligned} \\mathcal{VSR}=\\{v,<s_{1},n_{1}>,...,<s_{m},n_{m}>\\},\\\\ \\end{aligned} (1) where v𝑣v is a verb capturing the scope of a salient activity in the image (\\eg, ride), sisubscript𝑠𝑖s_{i} is a semantic role of verb v𝑣v (\\eg, LOC), and nisubscript𝑛𝑖n_{i} is the number of interested entities in the role sisubscript𝑠𝑖s_{i}. For example, for 𝒱𝒮ℛ={ride,<Arg0,1>,<Arg1,1>,<Loc,2>}\\mathcal{VSR}=\\{\\texttt{ride},<\\texttt{Arg0},\\texttt{1}>,<\\texttt{Arg1},\\texttt{1}>,<\\texttt{Loc},\\texttt{2}>\\}, we hope to generate a caption which not only focuses on describing the ride activity, but also contains one entity respectively in the role Arg0riderrider{}_{\\text{rider}} and Arg1steedsteed{}_{\\text{steed}}, and two entities in the role LOC. Thus, VSR can effectively control the amount of information carried in the whole sentence and each role, \\ie, the level of details. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_14",
"text": " It is convenient to construct VSRs automatically or manually. For the verbs, they can be accurately predicted by an off-the-shelf action recognition network with a predefined verb vocabulary. For the verb-specific semantic roles, they can be easily retrieved from the verb lexicon such as PropBank or FrameNet. Then, the users can easily select a subset of roles or an automatic sampling to generate a subset of roles, and randomly assign the entity number for each role. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_15",
"text": " Given an image 𝑰𝑰\\bm{I} and a control signal 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR}, the controllable image captioning model aims to describe 𝑰𝑰\\bm{I} by a textual sentence 𝒚={y1,…,yT}𝒚subscript𝑦1…subscript𝑦𝑇\\bm{y}=\\{y_{1},...,y_{T}\\}, \\ie, modeling the probability p(𝒚|𝑰,𝒱𝒮ℛ)𝑝conditional𝒚𝑰𝒱𝒮ℛp(\\bm{y}|\\bm{I},\\mathcal{VSR}). Inspired from the human habit of describing images, we decompose this task into two steps: structuring a descriptive pattern and filling in detailed captions: p(𝒚|𝑰,𝒱𝒮ℛ)=p(𝒚|pattern)p(pattern|𝑰,𝒱𝒮ℛ).𝑝conditional𝒚𝑰𝒱𝒮ℛ𝑝conditional𝒚pattern𝑝conditionalpattern𝑰𝒱𝒮ℛ\\displaystyle p(\\bm{y}|\\bm{I},\\mathcal{VSR})=p(\\bm{y}|\\text{pattern})p(\\text{pattern}|\\bm{I},\\mathcal{VSR}). (2) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_16",
"text": " Further, we utilize two sequences 𝒮=(s1b,…,sKb)𝒮subscriptsuperscript𝑠𝑏1…subscriptsuperscript𝑠𝑏𝐾\\mathcal{S}=(s^{b}_{1},...,s^{b}_{K}) and ℛ=(𝒓1,…,𝒓K)ℛsubscript𝒓1…subscript𝒓𝐾\\mathcal{R}=(\\bm{r}_{1},...,\\bm{r}_{K}) to model the descriptive patterns. Specifically, 𝒮𝒮\\mathcal{S} is a semantic structure of the sentence and each sib∈𝒮subscriptsuperscript𝑠𝑏𝑖𝒮s^{b}_{i}\\in\\mathcal{S} is a sub-role. By “sub-role”, we mean that each role si∈𝒱𝒮ℛsubscript𝑠𝑖𝒱𝒮ℛs_{i}\\in\\mathcal{VSR} can be divided into nisubscript𝑛𝑖n_{i} sub-roles, and when ni=1subscript𝑛𝑖1n_{i}=1, role sisubscript𝑠𝑖s_{i} itself is a sub-role. Thus, VSR in Figure 3 can be rewritten as Arg0, Arg1, LOC-1, and LOC-2. ℛℛ\\mathcal{R} is a sequence of visual features of the corresponding grounded entities for each sub-role in 𝒮𝒮\\mathcal{S} (\\eg, 𝒓isubscript𝒓𝑖\\bm{r}_{i} is the features of visual regions referring to sibsubscriptsuperscript𝑠𝑏𝑖s^{b}_{i}). Particularly, for presentation conciseness, we regard the verb in 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} as a special type of sub-role, and since there are no grounded visual regions referring to the verb, we use the global image feature as the grounded region feature in ℛℛ\\mathcal{R}. Meanwhile, we use ℛ~~ℛ\\mathcal{\\tilde{R}} to denote a set of all elements in the sequence ℛℛ\\mathcal{R}. Thus, we further decompose this task into three components: p(𝒚|𝑰,𝒱𝒮ℛ)=p(𝒚|𝒮,ℛ)⏟Captionerp(𝒮,ℛ|ℛ~,𝒱𝒮ℛ)⏟SSPp(ℛ~|𝑰,𝒱𝒮ℛ)⏟GSRL.𝑝conditional𝒚𝑰𝒱𝒮ℛsubscript⏟𝑝conditional𝒚𝒮ℛCaptionersubscript⏟𝑝𝒮conditionalℛ~ℛ𝒱𝒮ℛSSPsubscript⏟𝑝conditional~ℛ𝑰𝒱𝒮ℛGSRL\\displaystyle p(\\bm{y}|\\bm{I},\\mathcal{VSR})=\\underbrace{p(\\bm{y}|\\mathcal{S},\\mathcal{R})}_{\\text{Captioner}}\\underbrace{p(\\mathcal{S},\\mathcal{R}|\\mathcal{\\tilde{R}},\\mathcal{VSR})}_{\\text{SSP}}\\underbrace{p(\\mathcal{\\tilde{R}}|\\bm{I},\\mathcal{VSR})}_{\\text{GSRL}}. (3) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_17",
"text": " In this section, we first introduce each component of the whole framework of the VSR-guided controllable image captioning model sequentially in Section 3.1 (cf. Figure 3), including a grounded semantic role labeling (GSRL) model, a semantic structure planner (SSP), and a role-shift captioning model. Then, we demonstrate the details about all training objectives and the inference stage in Section 3.2, including extending from a single VSR to multiple VSRs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_18",
"text": " Given an image 𝑰𝑰\\bm{I}, we first utilize an object detector to extract a set of object proposals ℬℬ\\mathcal{B}. Each proposal 𝒃i∈ℬsubscript𝒃𝑖ℬ\\bm{b}_{i}\\in\\mathcal{B} is associated with a visual feature 𝒇isubscript𝒇𝑖\\bm{f}_{i} and a class label ci∈𝒞subscript𝑐𝑖𝒞c_{i}\\in\\mathcal{C}. Then, we group all these proposals into N𝑁N disjoint sets, \\ie, ℬ={ℬ1,…,ℬN}ℬsubscriptℬ1…subscriptℬ𝑁\\mathcal{B}=\\{\\mathcal{B}_{1},...,\\mathcal{B}_{N}\\}333Due to different annotation natures of specific CIC datasets, we group proposals by different principles. Details are shown in Section 4.2., and each proposal set ℬisubscriptℬ𝑖\\mathcal{B}_{i} consists of one or more proposals. In this GSRL step, we need to refer each sub-role in the 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} to a proposal set in ℬℬ\\mathcal{B}. Specifically, we calculate the similarity score aijsubscript𝑎𝑖𝑗a_{ij} between semantic role sisubscript𝑠𝑖s_{i} and proposal set ℬjsubscriptℬ𝑗\\mathcal{B}_{j} by: 𝒒i=(𝒆vg;𝒆sig;𝒇¯),aij=Fa(𝒒i,𝒇¯𝒋),formulae-sequencesubscript𝒒𝑖subscriptsuperscript𝒆𝑔𝑣subscriptsuperscript𝒆𝑔subscript𝑠𝑖bold-¯𝒇subscript𝑎𝑖𝑗subscript𝐹𝑎subscript𝒒𝑖subscriptbold-¯𝒇𝒋\\displaystyle\\bm{q}_{i}=\\left(\\bm{e}^{g}_{v};\\bm{e}^{g}_{s_{i}};\\bm{\\bar{f}}\\right),\\quad a_{ij}=F_{a}(\\bm{q}_{i},\\bm{\\bar{f}_{j}}), (4) where 𝒆vgsubscriptsuperscript𝒆𝑔𝑣\\bm{e}^{g}_{v} and 𝒆sigsubscriptsuperscript𝒆𝑔subscript𝑠𝑖\\bm{e}^{g}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sisubscript𝑠𝑖s_{i}, 𝒇¯bold-¯𝒇\\bm{\\bar{f}} and 𝒇¯𝒋subscriptbold-¯𝒇𝒋\\bm{\\bar{f}_{j}} represent the average-pooled visual features of proposal set ℬℬ\\mathcal{B} and ℬjsubscriptℬ𝑗\\mathcal{B}_{j}, (;) is a concatenation operation, and Fasubscript𝐹𝑎F_{a} is a learnable similarity function444For conciseness, we leave the details in the supplementary material. . ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_19",
"text": " After obtaining the grounding similarity scores {aij}subscript𝑎𝑖𝑗\\{a_{ij}\\} between semantic role sisubscript𝑠𝑖s_{i} and all proposal sets {ℬj}subscriptℬ𝑗\\{\\mathcal{B}_{j}\\}, we then select the top nisubscript𝑛𝑖n_{i} proposal sets with the highest scores as the grounding results for all sub-roles of sisubscript𝑠𝑖s_{i}. ℛ~~ℛ\\mathcal{\\tilde{R}} in Eq. (3) is the set of visual features of all grounded proposal sets. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_20",
"text": " Semantic structure planner (SSP) is a hierarchical semantic structure learning model, which aims to learn a reasonable sequence of sub-roles 𝒮𝒮\\mathcal{S}. As shown in Figure 3, it consists of two subnets: an S-level SSP and an R-level SSP. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_21",
"text": " S-level SSP. The sentence-level (S-level) SSP is a coarse-grained structure learning model, which only learns a sequence of all involved general semantic roles (including the verb) in 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} (\\eg, ride, Arg0riderrider{}_{\\text{rider}}, Arg1steedsteed{}_{\\text{steed}} and LOC in Figure 3). To this end, we formulate this sentence-level structure learning as a role sequence generation task, as long as we constrain that each output role token belongs to the given role set and each role can only appear once. Specifically, we utilize a three-layer Transformer 555More comparison results between Transformer and Sinkhorn networks (43, 16) are left in supplementary material. to calucate the probability of roles p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}) at each time step t𝑡t4: 𝑯𝑯\\displaystyle\\bm{H} =Transformerenc({FCa(𝒆vi+𝒆sii)}),absentsubscriptTransformerencsubscriptFC𝑎subscriptsuperscript𝒆𝑖𝑣subscriptsuperscript𝒆𝑖subscript𝑠𝑖\\displaystyle=\\text{Transformer}_{\\text{enc}}\\left(\\{\\text{FC}_{a}(\\bm{e}^{i}_{v}+\\bm{e}^{i}_{s_{i}})\\}\\right), (5) p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛ\\displaystyle p(s_{t}|\\mathcal{VSR}) =Transformerdec(𝑯,𝒆s<to),absentsubscriptTransformerdec𝑯subscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\displaystyle=\\text{Transformer}_{\\text{dec}}\\left(\\bm{H},\\bm{e}^{o}_{s_{<t}}\\right), where Transformer∗ are the encoder (enc) and decoder (dec) of the standard multi-head transformer. 𝒆visubscriptsuperscript𝒆𝑖𝑣\\bm{e}^{i}_{v} and 𝒆siisubscriptsuperscript𝒆𝑖subscript𝑠𝑖\\bm{e}^{i}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sjsubscript𝑠𝑗s_{j}, respectively. FCasubscriptFC𝑎\\text{FC}_{a} is a learnable fc-layer to obtain the embedding of each input token. 𝒆s<tosubscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\bm{e}^{o}_{s_{<t}} is the sequence of embeddings of previous roles. Based on p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}), we can predict a role at time step t𝑡t and obtain an initial role sequence, \\eg, Arg0riderrider{}_{\\text{rider}} – ride – Arg1steedsteed{}_{\\text{steed}} – LOC in Figure 3. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_22",
"text": " R-level SSP. The role-level (R-level) SSP is a fine-grained structure model which aims to rank all sub-roles within the same semantic role (\\eg, LOC-1 and LOC-2 are two sub-roles of role Loc in Figure 3). Since the only differences among these sub-roles are the grounded visual regions, we borrow ideas from the Sinkhorn networks (43, 16), which use a differentiable Sinkhorn operation to learn a soft permutation matrix 𝑷𝑷\\bm{P}. Specifically, for each role sisubscript𝑠𝑖s_{i} with multiple sub-roles (\\ie, ni>1subscript𝑛𝑖1n_{i}>1), we first select all the corresponding grounded proposal sets for these sub-roles, denoted as ℬ^={ℬ^1,…,ℬ^ni}^ℬsubscript^ℬ1…subscript^ℬsubscript𝑛𝑖\\mathcal{\\hat{B}}=\\{\\mathcal{\\hat{B}}_{1},...,\\mathcal{\\hat{B}}_{n_{i}}\\}. And for each proposal 𝒃∗∈ℬ^subscript𝒃^ℬ\\bm{b}_{*}\\in\\mathcal{\\hat{B}}, we encode a feature vector 𝒛∗=(𝒛∗v;𝒛∗si;𝒛∗l)subscript𝒛subscriptsuperscript𝒛𝑣subscriptsuperscript𝒛subscript𝑠𝑖subscriptsuperscript𝒛𝑙\\bm{z}_{*}=(\\bm{z}^{v}_{*};\\bm{z}^{s_{i}}_{*};\\bm{z}^{l}_{*}), where 𝒛∗vsubscriptsuperscript𝒛𝑣\\bm{z}^{v}_{*} is a transformation of its visual feature 𝒇∗subscript𝒇\\bm{f}_{*}, 𝒛∗sisubscriptsuperscript𝒛subscript𝑠𝑖\\bm{z}^{s_{i}}_{*} is the word embedding feature of the semantic role sisubscript𝑠𝑖s_{i}, and 𝒛∗lsubscriptsuperscript𝒛𝑙\\bm{z}^{l}_{*} is a 4-d encoding of the spatial position of proposal 𝒃∗subscript𝒃\\bm{b}_{*}. Then, we transform each feature 𝒛∗subscript𝒛\\bm{z}_{*} into nisubscript𝑛𝑖n_{i}-d, and average-pooled all features among the same proposal set, \\ie, we can obtain an nisubscript𝑛𝑖n_{i}-d feature for each ℬ^isubscript^ℬ𝑖\\mathcal{\\hat{B}}_{i}. We concatenate all these features to get an ni×nisubscript𝑛𝑖subscript𝑛𝑖n_{i}\\times n_{i} matrix 𝒁𝒁\\bm{Z}. Finally, we use the Sinkhorn operation to obtain the soft permutation matrix 𝑷𝑷\\bm{P}4: 𝑷=Sinkhorn(𝒁).𝑷Sinkhorn𝒁\\displaystyle\\bm{P}=\\text{Sinkhorn}(\\bm{Z}). (6) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
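The Sinkhorn operation referenced in eq. (6) above is, in its generic form, an alternating row/column normalization of a score matrix. The sketch below is a minimal NumPy version under that generic formulation; the function names, temperature parameter, and iteration count are illustrative assumptions, not the paper's implementation, which additionally back-propagates through the operation.

```python
import numpy as np

def _logsumexp(a, axis):
    # Numerically stable log-sum-exp along one axis, keeping dims for broadcasting.
    m = np.max(a, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))

def sinkhorn(Z, n_iters=20, temperature=1.0):
    # Turn a square score matrix Z into an (approximately) doubly-stochastic
    # soft permutation matrix P by alternately normalizing the rows and columns
    # of exp(Z / temperature); the loop runs in log space for stability.
    log_p = Z / temperature
    for _ in range(n_iters):
        log_p = log_p - _logsumexp(log_p, axis=1)  # rows sum to 1
        log_p = log_p - _logsumexp(log_p, axis=0)  # columns sum to 1
    return np.exp(log_p)

# Example: rank 3 sub-roles from a random score matrix; a hard ordering can be
# read off the soft matrix with an argmax per row.
P = sinkhorn(np.random.randn(3, 3))
order = P.argmax(axis=1)
```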
{
"id": "2103.12204_all_23",
"text": " After the two SSP subnets (\\ie, S-level and R-level), we can obtain the semantic structure 𝒮𝒮\\mathcal{S} (cf. Eq. (3)). Based on the sequence of 𝒮𝒮\\mathcal{S} and the set of proposal featurs ℛ~~ℛ\\mathcal{\\tilde{R}} from the GSRL model, we re-rank ℛ~~ℛ\\mathcal{\\tilde{R}} based on 𝒮𝒮\\mathcal{S} and obtain ℛℛ\\mathcal{R}. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_24",
"text": " Given the semantic structure sequence 𝒮=(s1b,…,sKb)𝒮subscriptsuperscript𝑠𝑏1…subscriptsuperscript𝑠𝑏𝐾\\mathcal{S}=(s^{b}_{1},...,s^{b}_{K}) and corresponding proposal feature sequence ℛ=(𝒓1,…,𝒓K)ℛsubscript𝒓1…subscript𝒓𝐾\\mathcal{R}=(\\bm{r}_{1},...,\\bm{r}_{K}), we utilize a two-layer LSTM to generate the final caption 𝒚𝒚\\bm{y}. At each time step, the model fouces on one specific sub-role 𝒔tbsubscriptsuperscript𝒔𝑏𝑡\\bm{s}^{b}_{t} and its grounded region set 𝒓tsubscript𝒓𝑡\\bm{r}_{t}, and then generates the word ytsubscript𝑦𝑡y_{t}. Therefore, we take inspirations from previous CIC methods (16, 10), and predict two distributions simultaneously: p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}) for controlling the shift of sub-roles, and p(yt|𝒮,ℛ)𝑝conditionalsubscript𝑦𝑡𝒮ℛp(y_{t}|\\mathcal{S},\\mathcal{R}) to predict the distribution of a word. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_25",
"text": " As for the role-shift, we use an adaptive attention mechanism to predict the probability of shifting4: αtg,𝜶tr,𝒔𝒓tgsubscriptsuperscript𝛼𝑔𝑡subscriptsuperscript𝜶𝑟𝑡𝒔subscriptsuperscript𝒓𝑔𝑡\\displaystyle\\alpha^{g}_{t},\\bm{\\alpha}^{r}_{t},\\bm{sr}^{g}_{t} =AdaptiveAttna(𝒙t,𝒓t),absentsubscriptAdaptiveAttn𝑎subscript𝒙𝑡subscript𝒓𝑡\\displaystyle=\\text{AdaptiveAttn}_{a}(\\bm{x}_{t},\\bm{r}_{t}), (7) where AdaptiveAttnasubscriptAdaptiveAttn𝑎\\text{AdaptiveAttn}_{a} is an adaptive attention network, 𝒙tsubscript𝒙𝑡\\bm{x}_{t} is the input query for attention, 𝒔𝒓tg𝒔subscriptsuperscript𝒓𝑔𝑡\\bm{sr}^{g}_{t} is a sential vector, αtgsubscriptsuperscript𝛼𝑔𝑡\\alpha^{g}_{t} and 𝒂trsubscriptsuperscript𝒂𝑟𝑡\\bm{a}^{r}_{t} are the attention weights for the sential vector and region features, respectively. We directly use attention weight αtgsubscriptsuperscript𝛼𝑔𝑡\\alpha^{g}_{t} as the probability of shifting sub-roles, \\ie, p(gt|𝒮,ℛ)=αtg𝑝conditionalsubscript𝑔𝑡𝒮ℛsubscriptsuperscript𝛼𝑔𝑡p(g_{t}|\\mathcal{S},\\mathcal{R})=\\alpha^{g}_{t}. Based on probability p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}), we can sample a gate value gj∈{0,1}subscript𝑔𝑗01g_{j}\\in\\{0,1\\}, and the focused sub-role at time step t𝑡t is: stb←𝒮(i),wherei=min(1+∑j=1t−1gj,K).formulae-sequence←subscriptsuperscript𝑠𝑏𝑡𝒮delimited-()𝑖where𝑖1subscriptsuperscript𝑡1𝑗1subscript𝑔𝑗𝐾\\displaystyle s^{b}_{t}\\leftarrow\\mathcal{S}(i),\\text{where}\\;i=\\min\\left(1+\\textstyle{\\sum}^{t-1}_{j=1}g_{j},K\\right). (8) Due to the special nature of sub-role “verb”, we fix gt+1=1subscript𝑔𝑡11g_{t+1}=1 when stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t} is the verb. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_26",
"text": " For each sub-role stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t}, we use the corresponding proposal set features 𝒓tsubscript𝒓𝑡\\bm{r}_{t} and a two-layer LSTM to generate word ytsubscript𝑦𝑡y_{t}: 𝒉t1subscriptsuperscript𝒉1𝑡\\displaystyle\\bm{h}^{1}_{t} =LSTM1(𝒉t−11,{yt−1,𝒇¯,𝒉t−12}),absentsubscriptLSTM1subscriptsuperscript𝒉1𝑡1subscript𝑦𝑡1bold-¯𝒇subscriptsuperscript𝒉2𝑡1\\displaystyle=\\text{LSTM}_{1}\\left(\\bm{h}^{1}_{t-1},\\{y_{t-1},\\bm{\\bar{f}},\\bm{h}^{2}_{t-1}\\}\\right), (9) 𝒉t2subscriptsuperscript𝒉2𝑡\\displaystyle\\bm{h}^{2}_{t} =LSTM2(𝒉t−12,{𝒉t1,𝒄t}),absentsubscriptLSTM2subscriptsuperscript𝒉2𝑡1subscriptsuperscript𝒉1𝑡subscript𝒄𝑡\\displaystyle=\\text{LSTM}_{2}\\left(\\bm{h}^{2}_{t-1},\\{\\bm{h}^{1}_{t},\\bm{c}_{t}\\}\\right), ytsubscript𝑦𝑡\\displaystyle y_{t} ∼p(yt|𝒮,ℛ)=FCb(𝒉t2),similar-toabsent𝑝conditionalsubscript𝑦𝑡𝒮ℛsubscriptFC𝑏subscriptsuperscript𝒉2𝑡\\displaystyle\\sim p(y_{t}|\\mathcal{S},\\mathcal{R})=\\text{FC}_{b}(\\bm{h}^{2}_{t}), where 𝒉t1subscriptsuperscript𝒉1𝑡\\bm{h}^{1}_{t} and 𝒉t2subscriptsuperscript𝒉2𝑡\\bm{h}^{2}_{t} are hidden states of the first- and second-layer LSTM (\\ie, LSTM1 and LSTM2), FCbsubscriptFC𝑏\\text{FC}_{b} is a learnable fc-layer, and 𝒄tsubscript𝒄𝑡\\bm{c}_{t} is a context vector. To further distinguish the textual and visual words, we use another adaptive attention network to obtain the context vector 𝒄tsubscript𝒄𝑡\\bm{c}_{t}4: αtv,𝜶tr,𝒔𝒓tvsubscriptsuperscript𝛼𝑣𝑡subscriptsuperscript𝜶𝑟𝑡𝒔subscriptsuperscript𝒓𝑣𝑡\\displaystyle\\alpha^{v}_{t},\\bm{\\alpha}^{r}_{t},\\bm{sr}^{v}_{t} =AdaptiveAttnb(𝒙t,𝒓t),absentsubscriptAdaptiveAttn𝑏subscript𝒙𝑡subscript𝒓𝑡\\displaystyle=\\text{AdaptiveAttn}_{b}(\\bm{x}_{t},\\bm{r}_{t}), (10) 𝒄tsubscript𝒄𝑡\\displaystyle\\bm{c}_{t} =αtv⋅𝒔𝒓tv+∑i𝜶t,ir⋅𝒓t,i,absent⋅subscriptsuperscript𝛼𝑣𝑡𝒔subscriptsuperscript𝒓𝑣𝑡subscript𝑖⋅subscriptsuperscript𝜶𝑟𝑡𝑖subscript𝒓𝑡𝑖\\displaystyle=\\alpha^{v}_{t}\\cdot\\bm{sr}^{v}_{t}+\\textstyle{\\sum}_{i}\\bm{\\alpha}^{r}_{t,i}\\cdot\\bm{r}_{t,i}, where 𝒙tsubscript𝒙𝑡\\bm{x}_{t} is the query for adaptive attention (\\ie, the input of the LSTM1subscriptLSTM1\\text{LSTM}_{1}), 𝒔𝒓tv𝒔subscriptsuperscript𝒓𝑣𝑡\\bm{sr}^{v}_{t} is a sential vector, and αtvsubscriptsuperscript𝛼𝑣𝑡\\alpha^{v}_{t} and 𝜶trsubscriptsuperscript𝜶𝑟𝑡\\bm{\\alpha}^{r}_{t} are the attention weights for the sential vector and region features. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_27",
"text": " Training Stage. In the training stage, we train the three components (GSRL, SSP and captioning model) separately: ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_28",
"text": " Training objective of GSRL. For the GSRL model, we use a binary cross-entropy (BCE) loss between the predicted similarity scores a^ijsubscript^𝑎𝑖𝑗\\hat{a}_{ij} and its ground truth aij∗subscriptsuperscript𝑎𝑖𝑗a^{*}_{ij} as the training loss: LGSRL=∑ijBCE(a^ij,aij∗).subscript𝐿GSRLsubscript𝑖𝑗BCEsubscript^𝑎𝑖𝑗subscriptsuperscript𝑎𝑖𝑗\\displaystyle L_{\\text{GSRL}}=\\textstyle{\\sum}_{ij}\\text{BCE}(\\hat{a}_{ij},a^{*}_{ij}). (11) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_29",
"text": " Training objective of SSP. For S-level SSP, we use a cross-entropy (XE) loss between prediction s^tsubscript^𝑠𝑡\\hat{s}_{t} and its ground truth st∗subscriptsuperscript𝑠𝑡s^{*}_{t} as the training objective. For R-level SSP, we use a mean square (MSE) loss between prediction 𝑷^tsubscriptbold-^𝑷𝑡\\bm{\\hat{P}}_{t} and its ground truth 𝑷∗tsubscriptsuperscript𝑷𝑡\\bm{P^{*}}_{t} as the training objective: LSSPS=∑tXE(s^t,st∗),LSSPR=∑t𝟏(nt>1)MSE(𝑷^t,𝑷∗t),formulae-sequencesubscriptsuperscript𝐿𝑆SSPsubscript𝑡XEsubscript^𝑠𝑡subscriptsuperscript𝑠𝑡subscriptsuperscript𝐿𝑅SSPsubscript𝑡subscript1subscript𝑛𝑡1MSEsubscriptbold-^𝑷𝑡subscriptsuperscript𝑷𝑡\\displaystyle L^{S}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\text{XE}(\\hat{s}_{t},s^{*}_{t}),L^{R}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\mathbf{1}_{(n_{t}>1)}\\text{MSE}(\\bm{\\hat{P}}_{t},\\bm{P^{*}}_{t}), (12) where 𝟏(nt>1)subscript1subscript𝑛𝑡1\\mathbf{1}_{(n_{t}>1)} is an indicator function, being 1 if nt>1subscript𝑛𝑡1n_{t}>1 and 0 otherwise. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_30",
"text": " Training objective of captioning model. We follow the conventions of previous captioning works and use a two-stage training scheme: XE and RL stages. In the XE stage, we use an XE loss between predicted words and ground truth words as the training loss. In the RL stage, we use a self-critical baseline . At each step, we sample from p(yt|𝒮,ℛ)𝑝conditionalsubscript𝑦𝑡𝒮ℛp(y_{t}|\\mathcal{S},\\mathcal{R}) and p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}) to obtain the next word yt+1subscript𝑦𝑡1y_{t+1} and sub-role st+1bsubscriptsuperscript𝑠𝑏𝑡1s^{b}_{t+1}. Then we calcuate the reward r(𝒚s)𝑟superscript𝒚𝑠r(\\bm{y}^{s}) of the sampled sentence 𝒚ssuperscript𝒚𝑠\\bm{y}^{s}. Baseline b𝑏b is the reward of the greedily generated sentence. Thus, the gradient expression of the training loss is: ∇θL=−(r(𝒚s)−b)(∇θlogp(𝒚s)+∇θlogp(𝒈s)),subscript∇𝜃𝐿𝑟superscript𝒚𝑠𝑏subscript∇𝜃𝑝superscript𝒚𝑠subscript∇𝜃𝑝superscript𝒈𝑠\\nabla_{\\theta}L=-(r(\\bm{y}^{s})-b)(\\nabla_{\\theta}\\log p(\\bm{y}^{s})+\\nabla_{\\theta}\\log p(\\bm{g}^{s})), (13) where 𝒈ssuperscript𝒈𝑠\\bm{g}^{s} is the sequence of role-shift gates. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_31",
"text": " Inference. In testing stage, given an image and one 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR}, we sequentially use the GSRL, SSP, and captioning model to generate the final captions. Meanwhile, our framework can be easily extended from one 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} to multiple 𝒱𝒮ℛs𝒱𝒮ℛ𝑠\\mathcal{VSR}s as the control signal. Taking an example of two 𝒱𝒮ℛs𝒱𝒮ℛ𝑠\\mathcal{VSR}s, we first use GSRL and SSP to obtain semantic structures and grounded regions features: (𝒮a,ℛa)superscript𝒮𝑎superscriptℛ𝑎(\\mathcal{S}^{a},\\mathcal{R}^{a}) and (𝒮b,ℛb)superscript𝒮𝑏superscriptℛ𝑏(\\mathcal{S}^{b},\\mathcal{R}^{b}). Then, as shown in Figure 4, we merge them by two steps4: (a) find the sub-roles in both 𝒮asuperscript𝒮𝑎\\mathcal{S}^{a} and 𝒮bsuperscript𝒮𝑏\\mathcal{S}^{b} which refer to the same visual regions (\\eg, s1asubscriptsuperscript𝑠𝑎1s^{a}_{1} and s1bsubscriptsuperscript𝑠𝑏1s^{b}_{1} refer to the same proposal set); (b) insert all other sub-roles between the nearest two selected sub-roles (\\eg, s2∗subscriptsuperscript𝑠2s^{*}_{2} are still between s1∗subscriptsuperscript𝑠1s^{*}_{1} and s3∗subscriptsuperscript𝑠3s^{*}_{3}). Concerning the order of sub-roles from different verbs, we follow the rank of two verbs (\\eg, s2asubscriptsuperscript𝑠𝑎2s^{a}_{2} is in front of s2bsubscriptsuperscript𝑠𝑏2s^{b}_{2}). ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_32",
"text": " Flickr30K Entities . It builds upon the Flickr30K dataset, by manually grounding each noun phrase in the descriptions with one or more visual regions. It consists of 31,000 images, and each image is associated with five captions. We use the same splits as in our experiments. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_33",
"text": " COCO Entities . It builds upon the COCO dataset which consists of 120,000 images and each image is annotated with five captions. Different from Flickr30K Entities where all grounding entities are annotated by humans, all annotations in COCO Entities are detected automatically. Especially, they align each entity to all the detected proposals with the same object class. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_34",
"text": " Although we only assume that there exists at least one verb (\\ie, activity) in each image; unfortunately, there are still a few samples (\\ie, 3.26% in COCO Entities and 0.04% in Flickr30K Entities) having no verbs in their captions. We use the same split as and further drop the those samples with no verb in the training and testing stages4. We will try to cover these extreme cases and leave it for future work. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_35",
"text": " Proposal Generation and Grouping. We utilize a Faster R-CNN with ResNet-101 to obtain all proposals for each image. Especially, we use the model released by , which is finetuned on VG dataset . For COCO Entities, since the “ground truth” annotations for each noun phrase are the proposals with the same class, we group the proposals by their detected class labels. But for Flickr30K Entities, we directly regard each proposal as a proposal set. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_36",
"text": " VSR Annotations. Since there are no ground truth semantic role annotations for CIC datasets, we use a pretrained SRL tool to annotate verbs and semantic roles for each caption, and regard them as ground truth annotations. For each detected verb, we convert it into its base form and build a verb dictionary for each dataset. The dictionary sizes for COCO and Flickr30K are 2,662 and 2,926, respectively. There are a total of 24 types of semantic roles for all verbs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_37",
"text": " Experimental Settings. For the S-level SSP, the head number of multi-head attention is set to 8, and the hidden size of the transformer is set to 512. The length of the transformer is set to 10. For the R-level SSP, we set the maximum number of entities for each role to 10. For the RL training of the captioning model, we use CIDEr-D score as the training reward. Due to the limited space, we leave more detailed parameter settings in the supplementary material. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_38",
"text": " Settings. To evaluate the controllability of proposed framework, we followed the conventions of prior CIC works (16, 10, 78), and utilized the VSR aligned with ground truth captions as the control signals. Specifically, we compared the proposed framework with several carefully designed baselines666All baselines use the same visual regions as models with VSRs.: 1) C-LSTM: It is a Controllable LSTM model . Given the features of all grounded visual regions, it first averages all region features, and then uses an LSTM to generate the captions. 2) C-UpDn: It is a Controllable UpDn model , which uses an adaptive attention to generate the captions. 3) SCT : It regards the set of visual regions as a control signal, and utilizes a chunk-shift captioning model to generate the captions. 4) Ours w/o verb: We ablate our model by removing the verb information in both the SSP and captioning model. 5) Ours (oracle verb): It is an ideal situation, where the captioning model directly outputs the oracle format of the verb when the attending role is the verb. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_39",
"text": " Evaluation Metrics. To evaluate the quality of the generated captions, we use five accuracy-based metrics, including BLEU-4 (B4) , METEOR (M) , ROUGE (R) , CIDEr-D (C) , and SPICE (S) . Particularly, we evaluate the generated captions against the single ground truth caption. We also propose a new recall-based metric to evaluate whether the roles of the generated sentence are consistent with the ground truth caption (\\ie, VSR). It measures the recall rate of the verb, semantic roles, and ordered role pairs, which are denoted as RVV{}_{\\text{V}}, RSR1SR1{}_{\\text{SR1}} and RSR2SR2{}_{\\text{SR2}}, respectively. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_40",
"text": " Quantitative Results. The quantitative results are reported in Table 1. From Table 1, we can observe that our framework can achieve the best performance over almost all metrics and benchmarks. By comparing the two different proposal settings (\\ie, GSRL and GT), we can find that the accuracy of GSRL is a major bottleneck of the whole framework. Meanwhile, the ablative model (Ours w/o verb) can only achieve slightly better performance than baseline SCT and much worse performance than our full model, which reflects the importance of the verb in semantic structure learning and caption generation. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_41",
"text": " Visualizations. In Figure 6, we illustrate some examples of the generated captions. We can observe that our framework always learns a human-like semantic structure based on the VSR and grounded visual regions (\\eg, Arg1thingthing{}_{\\text{thing}} – sit – Arg2positionposition{}_{\\text{position}} – LOC – MNR). According to the semantic structures, the captioning model can generate near-perfect descriptions. As a by-product, a well-trained SSP can automatically produce several verb-specific semantic structures for a set of user-interested roles, and we show some examples in Figure 6. For each verb and role set, we illustrate the top two structures by using beam search. Particularly, we are surprised to find that we can even learn some structures that never appear in original datasets (the blue tick ones). ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_42",
"text": " One of the well-known advantages of controllable image captioning is the ability to generate diverse image captions by feeding different control signals. Thus, we also evaluate the diversity of the captions generated by our framework. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_43",
"text": " Settings. We evaluated the quality of diverse captions in two settings: 1) Given a VSR and grounded visual regions of each role aligned with the ground truth caption, we first use an SSP to select two semantic structures, and then respectively generate two diverse captions. For fair comparisons, we utilize the same set of visual regions on two strong baselines: a) BS: an UpDn model uses beam search to produce two captions, and b) SCT: an SCT model takes a permutation of all region sets to generate two captions. 2) For each verb, we can randomly sample a subset of all semantic roles to construct new VSRs. Specifically, we sample two more sets of semantic roles, and generate two diverse captions for each role set following the same manner. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_44",
"text": " Evaluation Metrics. We used two types of metrics to evaluate the diverse captions: 1) Accuracy-based: we followed the conventions of the previous works (16, 20, 65) and reported the best-1 accuracy, \\ie, the generated caption with the maximum score for each metric is chosen. Analogously, we evaluate the generated captions against the single ground truth caption. 2) Diversity-based: we followed and used two metrics which only focus on the language similarity: Div-n (D-n) (4, 20) and self-CIDEr (s-C) . ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
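As a side note on the diversity metrics mentioned above: under one common definition, Div-n is the ratio of distinct n-grams to the total number of n-grams produced across the caption set for an image. The sketch below follows that assumed definition and is not the exact script used in the cited works.

```python
def div_n(captions, n=2):
    """Div-n for a set of captions: distinct n-grams / total n-grams (one common variant)."""
    ngrams = []
    for cap in captions:
        toks = cap.lower().split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
```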
{
"id": "2103.12204_all_45",
"text": " Quantitative Results. The quantitative results are reported in Table 2. From Table 2, we can observe that the diverse captions generated by our framework in both two settings have much higher accuracy (\\eg, CIDEr 267.3 vs. 222.5 in SCT), and that the diversity is slightly behind SCT (\\eg, self-CIDEr 67.0 vs. 69.1 in SCT). This is because SCT generates captions by randomly shuffling regions. Instead, we tend to learn more reasonable structures. Thus, we can achieve much higher results on accuracy, \\ie, our method can achieve a better trade-off between quality and diversity on diverse image captioning than the two strong baselines. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_46",
"text": " Visualizations. We further illustrate the generated captions of two images with different VSRs in Figure 7. The captions are generated effectively according to the given VSR, and the diversity of VSR leads to significant diverse captions. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_47",
"text": " In this paper, we argued that all existing objective control signals for CIC have overlooked two indispensable characteristics: event-compatible and sample-suitable. To this end, we proposed a novel control signal called VSR. VSR consists of a verb and several semantic roles, \\ie, all components are guaranteed to be event-compatible. Meanwhile, VSR only restricts the involved semantic roles, which is also sample-suitable for all the images containing the activity. We have validated the effectiveness of VSR through extensive experiments. Moving forward, we will plan to 1) design a more effective captioning model to benefit more from the VSR signals; 2) extend VSR to other controllable text generation tasks, \\eg, video captioning ; 3) design a more general framework to cover the images without verbs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_48",
"text": " Acknowledgements. This work was supported by the National Natural Science Foundation of China (U19B2043,61976185), Zhejiang Natural Science Foundation (LR19F020002), Zhejiang Innovation Foundation (2019R52002), and Fundamental Research Funds for Central Universities. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
}
] |
What metrics should be used for comparison of Mask R-CNN to the state of the art on the COCO dataset?
|
The standard COCO metrics are used: AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales), with AP evaluated using mask IoU unless noted [34].
|
[
34
] |
[
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation, respectively. These methods are conceptually intuitive and offer flexibility and robustness, together with fast training and inference time. Our goal in this work is to develop a comparably enabling framework for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_1",
"text": " Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance. It therefore combines elements from the classical computer vision tasks of object detection, where the goal is to classify individual objects and localize each using a bounding box, and semantic segmentation, where the goal is to classify each pixel into a fixed set of categories without differentiating object instances.111Following common terminology, we use object detection to denote detection via bounding boxes, not masks, and semantic segmentation to denote per-pixel classification without differentiating instances. Yet we note that instance segmentation is both semantic and a form of detection. Given this, one might expect a complex method is required to achieve good results. However, we show that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_2",
"text": " Our method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression (Figure 1). The mask branch is a small FCN applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to implement and train given the Faster R-CNN framework, which facilitates a wide range of flexible architecture designs. Additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_3",
"text": " In principle Mask R-CNN is an intuitive extension of Faster R-CNN, yet constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool (18, 12), the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, we propose a simple, quantization-free layer, called RoIAlign, that faithfully preserves exact spatial locations. Despite being a seemingly minor change, RoIAlign has a large impact: it improves mask accuracy by relative 10% to 50%, showing bigger gains under stricter localization metrics. Second, we found it essential to decouple mask and class prediction: we predict a binary mask for each class independently, without competition among classes, and rely on the network’s RoI classification branch to predict the category. In contrast, FCNs usually perform per-pixel multi-class categorization, which couples segmentation and classification, and based on our experiments works poorly for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_4",
"text": " Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO instance segmentation task , including the heavily-engineered entries from the 2016 competition winner. As a by-product, our method also excels on the COCO object detection task. In ablation experiments, we evaluate multiple basic instantiations, which allows us to demonstrate its robustness and analyze the effects of core factors. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_5",
"text": " Our models can run at about 200ms per frame on a GPU, and training on COCO takes one to two days on a single 8-GPU machine. We believe the fast train and test speeds, together with the framework’s flexibility and accuracy, will benefit and ease future research on instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_6",
"text": " Finally, we showcase the generality of our framework via the task of human pose estimation on the COCO keypoint dataset . By viewing each keypoint as a one-hot binary mask, with minimal modification Mask R-CNN can be applied to detect instance-specific poses. Mask R-CNN surpasses the winner of the 2016 COCO keypoint competition, and at the same time runs at 5 fps. Mask R-CNN, therefore, can be seen more broadly as a flexible framework for instance-level recognition and can be readily extended to more complex tasks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_7",
"text": " We have released code to facilitate future research. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_8",
"text": " The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions (42, 20) and evaluate convolutional networks (25, 24) independently on each RoI. R-CNN was extended (18, 12) to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements (e.g., (38, 27, 21)), and is the current leading framework in several benchmarks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_9",
"text": " Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods (13, 15, 16, 9) resorted to bottom-up segments (42, 2). DeepMask and following works (34, 8) learn to propose segment candidates, which are then classified by Fast R-CNN. In these methods, segmentation precedes recognition, which is slow and less accurate. Likewise, Dai et al. proposed a complex multiple-stage cascade that predicts segment proposals from bounding-box proposals, followed by classification. Instead, our method is based on parallel prediction of masks and class labels, which is simpler and more flexible. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_10",
"text": " Most recently, Li et al. combined the segment proposal system in and object detection system in for “fully convolutional instance segmentation” (FCIS). The common idea in (8, 11, 26) is to predict a set of position-sensitive output channels fully convolutionally. These channels simultaneously address object classes, boxes, and masks, making the system fast. But FCIS exhibits systematic errors on overlapping instances and creates spurious edges (Figure 6), showing that it is challenged by the fundamental difficulties of segmenting instances. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_11",
"text": " Another family of solutions (23, 4, 3, 29) to instance segmentation are driven by the success of semantic segmentation. Starting from per-pixel classification results (e.g., FCN outputs), these methods attempt to cut the pixels of the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. We expect a deeper incorporation of both strategies will be studied in the future. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_12",
"text": " Mask R-CNN is conceptually simple: Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset; to this we add a third branch that outputs the object mask. Mask R-CNN is thus a natural and intuitive idea. But the additional mask output is distinct from the class and box outputs, requiring extraction of much finer spatial layout of an object. Next, we introduce the key elements of Mask R-CNN, including pixel-to-pixel alignment, which is the main missing piece of Fast/Faster R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_13",
"text": " We begin by briefly reviewing the Faster R-CNN detector . Faster R-CNN consists of two stages. The first stage, called a Region Proposal Network (RPN), proposes candidate object bounding boxes. The second stage, which is in essence Fast R-CNN , extracts features using RoIPool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference. We refer readers to for latest, comprehensive comparisons between Faster R-CNN and other frameworks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_14",
"text": " Mask R-CNN adopts the same two-stage procedure, with an identical first stage (which is RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions (e.g. (33, 10, 26)). Our approach follows the spirit of Fast R-CNN that applies bounding-box classification and regression in parallel (which turned out to largely simplify the multi-stage pipeline of original R-CNN ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_15",
"text": " Formally, during training, we define a multi-task loss on each sampled RoI as L=Lcls+Lbox+Lmask𝐿subscript𝐿𝑐𝑙𝑠subscript𝐿𝑏𝑜𝑥subscript𝐿𝑚𝑎𝑠𝑘L=L_{cls}+L_{box}+L_{mask}. The classification loss Lclssubscript𝐿𝑐𝑙𝑠L_{cls} and bounding-box loss Lboxsubscript𝐿𝑏𝑜𝑥L_{box} are identical as those defined in . The mask branch has a Km2𝐾superscript𝑚2Km^{2}-dimensional output for each RoI, which encodes K𝐾K binary masks of resolution m×m𝑚𝑚m\\times m, one for each of the K𝐾K classes. To this we apply a per-pixel sigmoid, and define Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} as the average binary cross-entropy loss. For an RoI associated with ground-truth class k𝑘k, Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is only defined on the k𝑘k-th mask (other mask outputs do not contribute to the loss). ",
"title": "Mask R-CNN"
},
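The loss above is easy to express in code. Below is a minimal PyTorch-style sketch (not the authors' implementation) of L_mask with a per-pixel sigmoid and average binary cross-entropy taken only on the ground-truth class's mask; tensor names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mask_loss(mask_logits, gt_masks, gt_classes):
    """Per-RoI mask loss as described above (sketch, not the reference code).

    mask_logits: (N, K, m, m) raw outputs, one m x m map per class.
    gt_masks:    (N, m, m) binary ground-truth masks for the positive RoIs.
    gt_classes:  (N,) ground-truth class index k for each RoI.
    """
    n = mask_logits.shape[0]
    # Select only the k-th mask for each RoI; other classes do not contribute.
    selected = mask_logits[torch.arange(n), gt_classes]          # (N, m, m)
    # Per-pixel sigmoid + average binary cross-entropy.
    return F.binary_cross_entropy_with_logits(selected, gt_masks.float())
```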
{
"id": "1703.06870_all_16",
"text": " Our definition of Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} allows the network to generate masks for every class without competition among classes; we rely on the dedicated classification branch to predict the class label used to select the output mask. This decouples mask and class prediction. This is different from common practice when applying FCNs to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss. In that case, masks across classes compete; in our case, with a per-pixel sigmoid and a binary loss, they do not. We show by experiments that this formulation is key for good instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_17",
"text": " A mask encodes an input object’s spatial layout. Thus, unlike class labels or box offsets that are inevitably collapsed into short output vectors by fully-connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_18",
"text": " Specifically, we predict an m×m𝑚𝑚m\\times m mask from each RoI using an FCN . This allows each layer in the mask branch to maintain the explicit m×m𝑚𝑚m\\times m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous methods that resort to fc layers for mask prediction (33, 34, 10), our fully convolutional representation requires fewer parameters, and is more accurate as demonstrated by experiments. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_19",
"text": " This pixel-to-pixel behavior requires our RoI features, which themselves are small feature maps, to be well aligned to faithfully preserve the explicit per-pixel spatial correspondence. This motivated us to develop the following RoIAlign layer that plays a key role in mask prediction. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_20",
"text": " RoIPool is a standard operation for extracting a small feature map (e.g., 7×\\times7) from each RoI. RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map, this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Quantization is performed, e.g., on a continuous coordinate x𝑥x by computing (x/16)delimited-()𝑥16(x/16), where 16 is a feature map stride and (⋅)delimited-()⋅(\\cdot) is rounding; likewise, quantization is performed when dividing into bins (e.g., 7×\\times7). These quantizations introduce misalignments between the RoI and the extracted features. While this may not impact classification, which is robust to small translations, it has a large negative effect on predicting pixel-accurate masks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_21",
"text": " To address this, we propose an RoIAlign layer that removes the harsh quantization of RoIPool, properly aligning the extracted features with the input. Our proposed change is simple: we avoid any quantization of the RoI boundaries or bins (i.e., we use x/16𝑥16x/16 instead of (x/16)delimited-()𝑥16(x/16)). We use bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and aggregate the result (using max or average), see Figure 3 for details. We note that the results are not sensitive to the exact sampling locations, or how many points are sampled, as long as no quantization is performed. ",
"title": "Mask R-CNN"
},
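To make the contrast with RoIPool concrete, the sketch below samples a feature map with bilinear interpolation at un-quantized coordinates and averages a small regular grid of samples per bin, which is the core idea of RoIAlign. The bin layout, the number of samples, and all names are simplifying assumptions rather than the paper's implementation; coordinates are assumed to lie inside the feature map.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feat (H, W) at a continuous (y, x) location without any rounding."""
    h, w = feat.shape
    y0 = min(max(int(np.floor(y)), 0), h - 1)
    x0 = min(max(int(np.floor(x)), 0), w - 1)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

def roi_align_bin(feat, y_start, y_end, x_start, x_end, samples=2):
    """Average a samples x samples grid of bilinear samples inside one RoI bin."""
    ys = np.linspace(y_start, y_end, samples + 2)[1:-1]   # regularly spaced interior points
    xs = np.linspace(x_start, x_end, samples + 2)[1:-1]
    return float(np.mean([bilinear_sample(feat, y, x) for y in ys for x in xs]))
```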
{
"id": "1703.06870_all_22",
"text": " RoIAlign leads to large improvements as we show in §4.2. We also compare to the RoIWarp operation proposed in . Unlike RoIAlign, RoIWarp overlooked the alignment issue and was implemented in as quantizing RoI just like RoIPool. So even though RoIWarp also adopts bilinear resampling motivated by , it performs on par with RoIPool as shown by experiments (more details in Table 2c), demonstrating the crucial role of alignment. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_23",
"text": " To demonstrate the generality of our approach, we instantiate Mask R-CNN with multiple architectures. For clarity, we differentiate between: (i) the convolutional backbone architecture used for feature extraction over an entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each RoI. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_24",
"text": " We denote the backbone architecture using the nomenclature network-depth-features. We evaluate ResNet and ResNeXt networks of depth 50 or 101 layers. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the 4-th stage, which we call C4. This backbone with ResNet-50, for example, is denoted by ResNet-50-C4. This is a common choice used in (19, 10, 21, 39). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_25",
"text": " We also explore another more effective backbone recently proposed by Lin et al. , called a Feature Pyramid Network (FPN). FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts RoI features from different levels of the feature pyramid according to their scale, but otherwise the rest of the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. For further details on FPN, we refer readers to . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_26",
"text": " For the network head we closely follow architectures presented in previous work to which we add a fully convolutional mask prediction branch. Specifically, we extend the Faster R-CNN box heads from the ResNet and FPN papers. Details are shown in Figure 4. The head on the ResNet-C4 backbone includes the 5-th stage of ResNet (namely, the 9-layer ‘res5’ ), which is compute-intensive. For FPN, the backbone already includes res5 and thus allows for a more efficient head that uses fewer filters. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_27",
"text": " We note that our mask branches have a straightforward structure. More complex designs have the potential to improve performance but are not the focus of this work. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_28",
"text": " We set hyper-parameters following existing Fast/Faster R-CNN work (12, 36, 27). Although these decisions were made for object detection in original papers (12, 36, 27), we found our instance segmentation system is robust to them. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_29",
"text": " As in Fast R-CNN, an RoI is considered positive if it has IoU with a ground-truth box of at least 0.5 and negative otherwise. The mask loss Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is defined only on positive RoIs. The mask target is the intersection between an RoI and its associated ground-truth mask. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_30",
"text": " We adopt image-centric training . Images are resized such that their scale (shorter edge) is 800 pixels . Each mini-batch has 2 images per GPU and each image has N𝑁N sampled RoIs, with a ratio of 1:3 of positive to negatives . N𝑁N is 64 for the C4 backbone (as in (12, 36)) and 512 for FPN (as in ). We train on 8 GPUs (so effective mini-batch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt , we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_31",
"text": " The RPN anchors span 5 scales and 3 aspect ratios, following . For convenient ablation, RPN is trained separately and does not share features with Mask R-CNN, unless specified. For every entry in this paper, RPN and Mask R-CNN have the same backbones and so they are shareable. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_32",
"text": " At test time, the proposal number is 300 for the C4 backbone (as in ) and 1000 for FPN (as in ). We run the box prediction branch on these proposals, followed by non-maximum suppression . The mask branch is then applied to the highest scoring 100 detection boxes. Although this differs from the parallel computation used in training, it speeds up inference and improves accuracy (due to the use of fewer, more accurate RoIs). The mask branch can predict K𝐾K masks per RoI, but we only use the k𝑘k-th mask, where k𝑘k is the predicted class by the classification branch. The m𝑚m×\\timesm𝑚m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5. ",
"title": "Mask R-CNN"
},
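A small NumPy/OpenCV-style sketch of this inference step (pick the k-th mask, resize it to the detection box, binarize at 0.5) is given below; the box format, the lack of boundary clipping, and all names are assumptions for illustration only, and the box is assumed to lie inside the image.

```python
import cv2
import numpy as np

def paste_mask(mask_probs, cls, box, image_shape, thresh=0.5):
    """mask_probs: (K, m, m) floats in [0, 1]; cls: predicted class k;
    box: (x1, y1, x2, y2) in image coordinates; returns a full-image binary mask."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    w, h = max(x2 - x1, 1), max(y2 - y1, 1)
    m = cv2.resize(mask_probs[cls], (w, h))                        # resize the m x m map to the RoI size
    full = np.zeros(image_shape[:2], dtype=np.uint8)
    full[y1:y1 + h, x1:x1 + w] = (m >= thresh).astype(np.uint8)    # binarize at 0.5
    return full
```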
{
"id": "1703.06870_all_33",
"text": " Note that since we only compute masks on the top 100 detection boxes, Mask R-CNN adds a small overhead to its Faster R-CNN counterpart (e.g., ∼similar-to\\scriptstyle\\sim20% on typical models). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_34",
"text": " We perform a thorough comparison of Mask R-CNN to the state of the art along with comprehensive ablations on the COCO dataset . We report the standard COCO metrics including AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales). Unless noted, AP is evaluating using mask IoU. As in previous work (5, 27), we train using the union of 80k train images and a 35k subset of val images (trainval35k), and report ablations on the remaining 5k val images (minival). We also report results on test-dev . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_35",
"text": " We compare Mask R-CNN to the state-of-the-art methods in instance segmentation in Table 1. All instantiations of our model outperform baseline variants of previous state-of-the-art models. This includes MNC and FCIS , the winners of the COCO 2015 and 2016 segmentation challenges, respectively. Without bells and whistles, Mask R-CNN with ResNet-101-FPN backbone outperforms FCIS+++ , which includes multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) . While outside the scope of this work, we expect many such improvements to be applicable to ours. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_36",
"text": " Mask R-CNN outputs are visualized in Figures 2 and 5. Mask R-CNN achieves good results even under challenging conditions. In Figure 6 we compare our Mask R-CNN baseline and FCIS+++ . FCIS+++ exhibits systematic artifacts on overlapping instances, suggesting that it is challenged by the fundamental difficulty of instance segmentation. Mask R-CNN shows no such artifacts. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_37",
"text": " We run a number of ablations to analyze Mask R-CNN. Results are shown in Table 2 and discussed in detail next. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_38",
"text": " Table 2a shows Mask R-CNN with various backbones. It benefits from deeper networks (50 vs. 101) and advanced designs including FPN and ResNeXt. We note that not all frameworks automatically benefit from deeper or advanced networks (see benchmarking in ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_39",
"text": " Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN ). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points). This suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_40",
"text": " Our default instantiation predicts class-specific masks, i.e., one m𝑚m×\\timesm𝑚m mask per class. Interestingly, Mask R-CNN with class-agnostic masks (i.e., predicting a single m𝑚m×\\timesm𝑚m output regardless of class) is nearly as effective: it has 29.7 mask AP vs. 30.3 for the class-specific counterpart on ResNet-50-C4. This further highlights the division of labor in our approach which largely decouples classification and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_41",
"text": " An evaluation of our proposed RoIAlign layer is shown in Table 2c. For this experiment we use the ResNet-50-C4 backbone, which has stride 16. RoIAlign improves AP by about 3 points over RoIPool, with much of the gain coming at high IoU (AP75). RoIAlign is insensitive to max/average pool; we use average in the rest of the paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_42",
"text": " Additionally, we compare with RoIWarp proposed in MNC that also adopt bilinear sampling. As discussed in §3, RoIWarp still quantizes the RoI, losing alignment with the input. As can be seen in Table 2c, RoIWarp performs on par with RoIPool and much worse than RoIAlign. This highlights that proper alignment is key. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_43",
"text": " We also evaluate RoIAlign with a ResNet-50-C5 backbone, which has an even larger stride of 32 pixels. We use the same head as in Figure 4 (right), as the res5 head is not applicable. Table 2d shows that RoIAlign improves mask AP by a massive 7.3 points, and mask AP75 by 10.5 points (50% relative improvement). Moreover, we note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_44",
"text": " Finally, RoIAlign shows a gain of 1.5 mask AP and 0.5 box AP when used with FPN, which has finer multi-level strides. For keypoint detection that requires finer alignment, RoIAlign shows large gains even with FPN (Table 6). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_45",
"text": " Segmentation is a pixel-to-pixel task and we exploit the spatial layout of masks by using an FCN. In Table 2e, we compare multi-layer perceptrons (MLP) and FCNs, using a ResNet-50-FPN backbone. Using FCNs gives a 2.1 mask AP gain over MLPs. We note that we choose this backbone so that the conv layers of the FCN head are not pre-trained, for a fair comparison with MLP. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_46",
"text": " We compare Mask R-CNN to the state-of-the-art COCO bounding-box object detection in Table 3. For this result, even though the full Mask R-CNN model is trained, only the classification and box outputs are used at inference (the mask output is ignored). Mask R-CNN using ResNet-101-FPN outperforms the base variants of all previous state-of-the-art models, including the single-model variant of G-RMI , the winner of the COCO 2016 Detection Challenge. Using ResNeXt-101-FPN, Mask R-CNN further improves results, with a margin of 3.0 points box AP over the best previous single model entry from (which used Inception-ResNet-v2-TDM). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_47",
"text": " As a further comparison, we trained a version of Mask R-CNN but without the mask branch, denoted by “Faster R-CNN, RoIAlign” in Table 3. This model performs better than the model presented in due to RoIAlign. On the other hand, it is 0.9 points box AP lower than Mask R-CNN. This gap of Mask R-CNN on box detection is therefore due solely to the benefits of multi-task training. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_48",
"text": " Lastly, we note that Mask R-CNN attains a small gap between its mask and box AP: e.g., 2.7 points between 37.1 (mask, Table 1) and 39.8 (box, Table 3). This indicates that our approach largely closes the gap between object detection and the more challenging instance segmentation task. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_49",
"text": " We train a ResNet-101-FPN model that shares features between the RPN and Mask R-CNN stages, following the 4-step training of Faster R-CNN . This model runs at 195ms per image on an Nvidia Tesla M40 GPU (plus 15ms CPU time resizing the outputs to the original resolution), and achieves statistically the same mask AP as the unshared one. We also report that the ResNet-101-C4 variant takes ∼similar-to\\scriptstyle\\sim400ms as it has a heavier box head (Figure 4), so we do not recommend using the C4 variant in practice. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_50",
"text": " Although Mask R-CNN is fast, we note that our design is not optimized for speed, and better speed/accuracy trade-offs could be achieved , e.g., by varying image sizes and proposal numbers, which is beyond the scope of this paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_51",
"text": " Mask R-CNN is also fast to train. Training with ResNet-50-FPN on COCO trainval35k takes 32 hours in our synchronized 8-GPU implementation (0.72s per 16-image mini-batch), and 44 hours with ResNet-101-FPN. In fact, fast prototyping can be completed in less than one day when training on the train set. We hope such rapid training will remove a major hurdle in this area and encourage more people to perform research on this challenging topic. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_52",
"text": " Our framework can easily be extended to human pose estimation. We model a keypoint’s location as a one-hot mask, and adopt Mask R-CNN to predict K𝐾K masks, one for each of K𝐾K keypoint types (e.g., left shoulder, right elbow). This task helps demonstrate the flexibility of Mask R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_53",
"text": " We note that minimal domain knowledge for human pose is exploited by our system, as the experiments are mainly to demonstrate the generality of the Mask R-CNN framework. We expect that domain knowledge (e.g., modeling structures ) will be complementary to our simple approach. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_54",
"text": " We make minor modifications to the segmentation system when adapting it for keypoints. For each of the K𝐾K keypoints of an instance, the training target is a one-hot m×m𝑚𝑚m\\times m binary mask where only a single pixel is labeled as foreground. During training, for each visible ground-truth keypoint, we minimize the cross-entropy loss over an m2superscript𝑚2m^{2}-way softmax output (which encourages a single point to be detected). We note that as in instance segmentation, the K𝐾K keypoints are still treated independently. ",
"title": "Mask R-CNN"
},
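A brief PyTorch-style sketch of the keypoint loss described above — cross-entropy over an m²-way softmax where the target is the index of the single foreground pixel, taken only on visible keypoints. Shapes and names are illustrative assumptions, not the reference code.

```python
import torch
import torch.nn.functional as F

def keypoint_loss(kp_logits, gt_xy, visible):
    """kp_logits: (N, K, m, m) one map per keypoint type.
    gt_xy:    (N, K, 2) integer (long) (y, x) location of each keypoint on the m x m grid.
    visible:  (N, K) bool mask; loss is only taken on visible ground-truth keypoints."""
    n, k, m, _ = kp_logits.shape
    logits = kp_logits.reshape(n * k, m * m)                 # m^2-way classification per keypoint
    target = gt_xy[..., 0] * m + gt_xy[..., 1]               # flatten (y, x) -> pixel index
    target = target.reshape(n * k)
    keep = visible.reshape(n * k)
    return F.cross_entropy(logits[keep], target[keep])
```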
{
"id": "1703.06870_all_55",
"text": " We adopt the ResNet-FPN variant, and the keypoint head architecture is similar to that in Figure 4 (right). The keypoint head consists of a stack of eight 3×\\times3 512-d conv layers, followed by a deconv layer and 2×\\times bilinear upscaling, producing an output resolution of 56×\\times56. We found that a relatively high resolution output (compared to masks) is required for keypoint-level localization accuracy. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_56",
"text": " Models are trained on all COCO trainval35k images that contain annotated keypoints. To reduce overfitting, as this training set is smaller, we train using image scales randomly sampled from (640, 800) pixels; inference is on a single scale of 800 pixels. We train for 90k iterations, starting from a learning rate of 0.02 and reducing it by 10 at 60k and 80k iterations. We use bounding-box NMS with a threshold of 0.5. Other details are identical as in §3.1. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_57",
"text": " We evaluate the person keypoint AP (APkpkp{}^{\\text{kp}}) and experiment with a ResNet-50-FPN backbone; more backbones will be studied in the appendix. Table 4 shows that our result (62.7 APkpkp{}^{\\text{kp}}) is 0.9 points higher than the COCO 2016 keypoint detection winner that uses a multi-stage processing pipeline (see caption of Table 4). Our method is considerably simpler and faster. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_58",
"text": " More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps. Adding a segment branch (for the person category) improves the APkpkp{}^{\\text{kp}} to 63.1 (Table 4) on test-dev. More ablations of multi-task learning on minival are in Table 5. Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks. However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multitask training, it does not in turn help the other tasks. Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 7). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_59",
"text": " We also investigate the effect of RoIAlign on keypoint detection (Table 6). Though this ResNet-50-FPN backbone has finer strides (e.g., 4 pixels on the finest level), RoIAlign still shows significant improvement over RoIPool and increases APkpkp{}^{\\text{kp}} by 4.4 points. This is because keypoint detections are more sensitive to localization accuracy. This again indicates that alignment is essential for pixel-level localization, including masks and keypoints. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_60",
"text": " Given the effectiveness of Mask R-CNN for extracting object bounding boxes, masks, and keypoints, we expect it be an effective framework for other instance-level tasks. ",
"title": "Mask R-CNN"
}
] |
How do the authors verify that the two characteristics mentioned in the sentence are indispensable for the ideal control signal?
|
The authors verify this by following the evaluation conventions of prior CIC works and using the VSR aligned with ground-truth captions as the control signal [38]. As the quantitative results in Table 1 show, their framework achieves the best performance over almost all metrics and benchmarks [40]. For the qualitative evaluation, the visualizations in Figure 6 show that their framework consistently learns a human-like semantic structure based on the VSR and grounded visual regions [41].
|
[
38,
40,
41
] |
[
{
"id": "2103.12204_all_0",
"text": " Image captioning, \\ie, generating fluent and meaningful descriptions to summarize the salient contents of an image, is a classic proxy task for comprehensive scene understanding . With the release of several large scale datasets and advanced encoder-decoder frameworks, current captioning models plausibly have already achieved “super-human” performance in all accuracy-based evaluation metrics. However, many studies have indicated that these models tend to produce generic descriptions, and fail to control the caption generation process as humans, \\eg, referring to different contents of interest or descriptive patterns. In order to endow the captioning models with human-like controllability, a recent surge of efforts (16, 10, 19, 78, 48, 77, 27, 20) resort to introducing extra control signals as constraints of the generated captions, called Controllable Image Captioning (CIC). As a byproduct, the CIC models can easily generate diverse descriptions by feeding different control signals. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_1",
"text": " Early CIC works mainly focus on subjective control signals, such as sentiments , emotions (42, 22), and personality (14, 54), \\ie, the linguistic styles of sentences. Although these stylized captioning models can eventually produce style-related captions, they remain hard to control the generation process effectively and precisely. To further improve the controllability, recent CIC works gradually put a more emphasis on objective control signals. More specifically, they can be coarsely classified into two categories: 1) Content-controlled: the control signals are about the contents of interest which need to be described. As the example shown in Figure 1 (a), given the region set () as a control signal, we hope that the generated caption can cover all regions (\\ie, man, wave, and surfboard). So far, various types of content-controlled signals have been proposed, such as visual relations , object regions (16, 35), scene graphs (10, 78), and mouse trace . 2) Structure-controlled: the control signals are about the semantic structures of sentences. For instance, the length-level , part-of-speech tags , or attributes of the sentence (cf. Figure 1 (b)) are some typical structure-controlled signals. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_2",
"text": " Nevertheless, all existing objective control signals (\\ie, both content-controlled and structure-controlled) have overlooked two indispensable characteristics of an ideal control signal towards “human-like” controllable image captioning: 1) Event-compatible: all visual contents referred to in a single sentence should be compatible with the described activity. Imaging how humans describe images — our brains always quickly structure a descriptive pattern like “sth do sth at someplace” first, and then fill in the detailed description (56, 46, 30, 71), \\ie, we have subconsciously made sure that all the mentioned entities are event-compatible (\\eg, man, wave, surfboard are all involved in activity riding in Figure 1 (a)). To further see the negative impact of dissatisfying this requirement, suppose that we deliberately utilize two more objects (hand and sky, \\ie, ) as part of the control signal, and the model generates an incoherent and illogical caption. 2) Sample-suitable: the control signals should be suitable for the specific image sample. By “suitable”, we mean that there do exist reasonable descriptions satisfying the control signals, \\eg, a large length-level may not be suitable for an image with a very simple scene. Unfortunately, it is always very difficult to decide whether a control signal is sample-suitable in advance. For example in Figure 1 (b), although the two control signals (\\ie, length-levels 3 and 4) are quite close, the quality of respectively generated captions varies greatly. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_3",
"text": " In this paper, we propose a new event-oriented objective control signal, Verb-specific Semantic Roles (VSR), to meet both event-compatible and sample-suitable requirements simultaneously. VSR consists of a verb (\\ie, predicate ) and some user-interested semantic roles . As shown in Figure 2, the verb captures the scope of a salient activity in the image (\\eg, eating), and the corresponding semantic roles111We use PropBank-style annotations of semantic roles (\\eg, Arg0, Arg1) in all experiments (cf. Figure 1). The FrameNet-style annotations of semantic roles (\\eg, Agent) here are just for a more intuitive illustration. In the PropBank-style annotations, Arg denotes “argument”, MNR denotes “manner”, DIR denotes “directional”, and LOC denotes “location”. We leave more details in the supplementary material. (\\eg, agent, food, container, and tool) categorize how objects participate in this activity, \\ie, a child (agent) is eating (activity) a pancake (food) from a plate (container) with a fork (tool). Thus, VSR is designed to guarantee that all the mentioned entities are event-compatible. Meanwhile, unlike the existing structure-controlled signals which directly impose constraints on the generated captions, VSR only restricts the involved semantic roles, which is theoretically suitable for all the images with the activity, \\ie, sample-suitable. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_4",
"text": " In order to generate sentences with respect to the designated VSRs, we first train a grounded semantic role labeling (GSRL) model to identify and ground all entities for each role. Then, we propose a semantic structure planner (SSP) to rank the given verb and semantic roles, and output some human-like descriptive semantic structures, \\eg, Arg0readerreader{}_{\\text{reader}} – read – Arg1thingthing{}_{\\text{thing}} – LOC in Figure 1 (c). Finally, we combine the grounded entities and semantic structures, and use an RNN-based role-shift captioning model to generate the captions by sequentially focusing on different roles. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_5",
"text": " Although these are no available captioning datasets with the VSR annotations, they can be easily obtained by off-the-shelf semantic role parsing toolkits . Extensive experiments on two challenging CIC benchmarks (\\ie, COCO Entities and Flickr30K Entities ) demonstrate that our framework can achieve better controllability given designated VSRs than several strong baselines. Moreover, our framework can also realize diverse image captioning and achieve a better trade-off between quality and diversity. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_6",
"text": " In summary, we make three contributions in this paper: 1. We propose a new control signal for CIC: Verb-specific Semantic Roles (VSR). To the best of our knowledge, VSR is the first control signal to consider both event-compatible and sample-suitable requirements222When using control signals extracted from GT captions, existing control signals can always meet both requirements and generate reasonable captions. However, in more general settings (\\eg, construct control signals without GT captions), the form of VSR is more human-friendly, and it is easier to construct signals which meet both requirements compared with all existing forms of control signals, which is the main advantage of VSR.. 2. We can learn human-like verb-specific semantic structures automatically, and abundant visualization examples demonstrate that these patterns are reasonable. 3. We achieve state-of-the-art controllability on two challenging benchmarks, and generate diverse captions by using different verbs, semantic roles, or structures. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_7",
"text": " Controllable Image Captioning. Compared with conventional image captioning (63, 68, 9, 25, 13), CIC is a more challenging task, which needs to consider extra constraints. Early CIC works are mostly about stylized image captioning, \\ie, constraints are the linguistic styles of sentences. According to the requirements of parallel training samples, existing solutions can be divided into two types: models using parallel stylized image-caption data (41, 11, 54, 1) or not (22, 42). Subsequently, the community gradually shifts the emphasis to controlling described contents (16, 77, 27, 10, 78, 48, 35) or structures (20, 19, 75, 76) of the sentences. In this paper, we propose a novel control signal VSR, which is the first control signal to consider both the event-compatible and sample-suitable requirements. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_8",
"text": " Diverse and Distinctive Image Captioning. Diverse image captioning, \\ie, describing the image contents with diverse wordings and rich expressions, is an essential property of human-like captioning models. Except from feeding different control signals to the CIC models, other diverse captioning methods can be coarsely grouped into four types: 1) GAN-based (17, 52, 32): they use a discriminator to force the generator to generate human-indistinguishable captions. 2) VAE-based (65, 7): the diversity obtained with them is by sampling from a learned latent space. 3) RL-based : they regard diversity as an extra reward in the RL training stage. 4) BS-based : they decode a list of diverse captions by optimizing a diversity-augmented objective. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_9",
"text": " Meanwhile, distinctive image captioning is another close research direction (18, 60, 37, 36, 64), which aims to generate discriminative and unique captions for individual images. Unfortunately, due to the subjective nature of diverse and distinctive captions, effective evaluation remains as an open problem, and several new metrics are proposed, such as SPICE-U , CIDErBtw , self-CIDEr , word recall , mBLEU . In this paper, we can easily generate diverse captions in both lexical-level and syntactic-level. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_10",
"text": " Semantic Roles in Images. Inspired from the semantic role labeling task in NLP, several tasks have been proposed to label the roles of each object in an activity in an image: ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_11",
"text": " Visual Semantic Role Labeling (VSRL), also called situation recognition, is a generalization of action recognition and human-object interaction, which aims to label an image with a set of verb-specific action frames . Specifically, each action frame describes details of the activity captured by the verb, and it consists of a fixed set of verb-specific semantic roles and their corresponding values. The values are the entities or objects involved in the activity and the semantic roles categorize how objects participate in the activity. The current VSRL methods (23, 73, 40, 33, 72, 57, 15) usually learn an independent action classifier first, and then model the role inter-dependency by RNNs or GNNs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_12",
"text": " Grounded Semantic Role Labeling (GSRL), also called grounded situation recognition, builds upon the VSRL task, which requires the models not only to label a set of frames, but also to localize each role-value pair in the image (49, 55, 70, 23). In this paper, we use the GSRL model as a bridge to connect the control signals (VSR) and related regions. To the best of our knowledge, we are the first captioning work to benefit from the verb lexicon developed by linguists. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_13",
"text": " For human-like controllable image captioning, we first propose the Verb-specific Semantic Roles (VSR) as the control signal for generating customized captions. As shown in Figure 3, we formally represent a control signal VSR as: 𝒱𝒮ℛ={v,<s1,n1>,…,<sm,nm>},\\displaystyle\\begin{aligned} \\mathcal{VSR}=\\{v,<s_{1},n_{1}>,...,<s_{m},n_{m}>\\},\\\\ \\end{aligned} (1) where v𝑣v is a verb capturing the scope of a salient activity in the image (\\eg, ride), sisubscript𝑠𝑖s_{i} is a semantic role of verb v𝑣v (\\eg, LOC), and nisubscript𝑛𝑖n_{i} is the number of interested entities in the role sisubscript𝑠𝑖s_{i}. For example, for 𝒱𝒮ℛ={ride,<Arg0,1>,<Arg1,1>,<Loc,2>}\\mathcal{VSR}=\\{\\texttt{ride},<\\texttt{Arg0},\\texttt{1}>,<\\texttt{Arg1},\\texttt{1}>,<\\texttt{Loc},\\texttt{2}>\\}, we hope to generate a caption which not only focuses on describing the ride activity, but also contains one entity respectively in the role Arg0riderrider{}_{\\text{rider}} and Arg1steedsteed{}_{\\text{steed}}, and two entities in the role LOC. Thus, VSR can effectively control the amount of information carried in the whole sentence and each role, \\ie, the level of details. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
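For concreteness, the control signal of Eq. (1) can be written down as a tiny data structure; the following Python snippet is purely illustrative and not the authors' code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VSR:
    verb: str                          # salient activity, e.g. "ride"
    roles: List[Tuple[str, int]]       # (semantic role, number of entities of interest)

# One entity for Arg0 and Arg1, two entities for LOC, as in the example above.
vsr = VSR(verb="ride", roles=[("Arg0", 1), ("Arg1", 1), ("LOC", 2)])
```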
{
"id": "2103.12204_all_14",
"text": " It is convenient to construct VSRs automatically or manually. For the verbs, they can be accurately predicted by an off-the-shelf action recognition network with a predefined verb vocabulary. For the verb-specific semantic roles, they can be easily retrieved from the verb lexicon such as PropBank or FrameNet. Then, the users can easily select a subset of roles or an automatic sampling to generate a subset of roles, and randomly assign the entity number for each role. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_15",
"text": " Given an image 𝑰𝑰\\bm{I} and a control signal 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR}, the controllable image captioning model aims to describe 𝑰𝑰\\bm{I} by a textual sentence 𝒚={y1,…,yT}𝒚subscript𝑦1…subscript𝑦𝑇\\bm{y}=\\{y_{1},...,y_{T}\\}, \\ie, modeling the probability p(𝒚|𝑰,𝒱𝒮ℛ)𝑝conditional𝒚𝑰𝒱𝒮ℛp(\\bm{y}|\\bm{I},\\mathcal{VSR}). Inspired from the human habit of describing images, we decompose this task into two steps: structuring a descriptive pattern and filling in detailed captions: p(𝒚|𝑰,𝒱𝒮ℛ)=p(𝒚|pattern)p(pattern|𝑰,𝒱𝒮ℛ).𝑝conditional𝒚𝑰𝒱𝒮ℛ𝑝conditional𝒚pattern𝑝conditionalpattern𝑰𝒱𝒮ℛ\\displaystyle p(\\bm{y}|\\bm{I},\\mathcal{VSR})=p(\\bm{y}|\\text{pattern})p(\\text{pattern}|\\bm{I},\\mathcal{VSR}). (2) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_16",
"text": " Further, we utilize two sequences 𝒮=(s1b,…,sKb)𝒮subscriptsuperscript𝑠𝑏1…subscriptsuperscript𝑠𝑏𝐾\\mathcal{S}=(s^{b}_{1},...,s^{b}_{K}) and ℛ=(𝒓1,…,𝒓K)ℛsubscript𝒓1…subscript𝒓𝐾\\mathcal{R}=(\\bm{r}_{1},...,\\bm{r}_{K}) to model the descriptive patterns. Specifically, 𝒮𝒮\\mathcal{S} is a semantic structure of the sentence and each sib∈𝒮subscriptsuperscript𝑠𝑏𝑖𝒮s^{b}_{i}\\in\\mathcal{S} is a sub-role. By “sub-role”, we mean that each role si∈𝒱𝒮ℛsubscript𝑠𝑖𝒱𝒮ℛs_{i}\\in\\mathcal{VSR} can be divided into nisubscript𝑛𝑖n_{i} sub-roles, and when ni=1subscript𝑛𝑖1n_{i}=1, role sisubscript𝑠𝑖s_{i} itself is a sub-role. Thus, VSR in Figure 3 can be rewritten as Arg0, Arg1, LOC-1, and LOC-2. ℛℛ\\mathcal{R} is a sequence of visual features of the corresponding grounded entities for each sub-role in 𝒮𝒮\\mathcal{S} (\\eg, 𝒓isubscript𝒓𝑖\\bm{r}_{i} is the features of visual regions referring to sibsubscriptsuperscript𝑠𝑏𝑖s^{b}_{i}). Particularly, for presentation conciseness, we regard the verb in 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} as a special type of sub-role, and since there are no grounded visual regions referring to the verb, we use the global image feature as the grounded region feature in ℛℛ\\mathcal{R}. Meanwhile, we use ℛ~~ℛ\\mathcal{\\tilde{R}} to denote a set of all elements in the sequence ℛℛ\\mathcal{R}. Thus, we further decompose this task into three components: p(𝒚|𝑰,𝒱𝒮ℛ)=p(𝒚|𝒮,ℛ)⏟Captionerp(𝒮,ℛ|ℛ~,𝒱𝒮ℛ)⏟SSPp(ℛ~|𝑰,𝒱𝒮ℛ)⏟GSRL.𝑝conditional𝒚𝑰𝒱𝒮ℛsubscript⏟𝑝conditional𝒚𝒮ℛCaptionersubscript⏟𝑝𝒮conditionalℛ~ℛ𝒱𝒮ℛSSPsubscript⏟𝑝conditional~ℛ𝑰𝒱𝒮ℛGSRL\\displaystyle p(\\bm{y}|\\bm{I},\\mathcal{VSR})=\\underbrace{p(\\bm{y}|\\mathcal{S},\\mathcal{R})}_{\\text{Captioner}}\\underbrace{p(\\mathcal{S},\\mathcal{R}|\\mathcal{\\tilde{R}},\\mathcal{VSR})}_{\\text{SSP}}\\underbrace{p(\\mathcal{\\tilde{R}}|\\bm{I},\\mathcal{VSR})}_{\\text{GSRL}}. (3) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_17",
"text": " In this section, we first introduce each component of the whole framework of the VSR-guided controllable image captioning model sequentially in Section 3.1 (cf. Figure 3), including a grounded semantic role labeling (GSRL) model, a semantic structure planner (SSP), and a role-shift captioning model. Then, we demonstrate the details about all training objectives and the inference stage in Section 3.2, including extending from a single VSR to multiple VSRs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_18",
"text": " Given an image 𝑰𝑰\\bm{I}, we first utilize an object detector to extract a set of object proposals ℬℬ\\mathcal{B}. Each proposal 𝒃i∈ℬsubscript𝒃𝑖ℬ\\bm{b}_{i}\\in\\mathcal{B} is associated with a visual feature 𝒇isubscript𝒇𝑖\\bm{f}_{i} and a class label ci∈𝒞subscript𝑐𝑖𝒞c_{i}\\in\\mathcal{C}. Then, we group all these proposals into N𝑁N disjoint sets, \\ie, ℬ={ℬ1,…,ℬN}ℬsubscriptℬ1…subscriptℬ𝑁\\mathcal{B}=\\{\\mathcal{B}_{1},...,\\mathcal{B}_{N}\\}333Due to different annotation natures of specific CIC datasets, we group proposals by different principles. Details are shown in Section 4.2., and each proposal set ℬisubscriptℬ𝑖\\mathcal{B}_{i} consists of one or more proposals. In this GSRL step, we need to refer each sub-role in the 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} to a proposal set in ℬℬ\\mathcal{B}. Specifically, we calculate the similarity score aijsubscript𝑎𝑖𝑗a_{ij} between semantic role sisubscript𝑠𝑖s_{i} and proposal set ℬjsubscriptℬ𝑗\\mathcal{B}_{j} by: 𝒒i=(𝒆vg;𝒆sig;𝒇¯),aij=Fa(𝒒i,𝒇¯𝒋),formulae-sequencesubscript𝒒𝑖subscriptsuperscript𝒆𝑔𝑣subscriptsuperscript𝒆𝑔subscript𝑠𝑖bold-¯𝒇subscript𝑎𝑖𝑗subscript𝐹𝑎subscript𝒒𝑖subscriptbold-¯𝒇𝒋\\displaystyle\\bm{q}_{i}=\\left(\\bm{e}^{g}_{v};\\bm{e}^{g}_{s_{i}};\\bm{\\bar{f}}\\right),\\quad a_{ij}=F_{a}(\\bm{q}_{i},\\bm{\\bar{f}_{j}}), (4) where 𝒆vgsubscriptsuperscript𝒆𝑔𝑣\\bm{e}^{g}_{v} and 𝒆sigsubscriptsuperscript𝒆𝑔subscript𝑠𝑖\\bm{e}^{g}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sisubscript𝑠𝑖s_{i}, 𝒇¯bold-¯𝒇\\bm{\\bar{f}} and 𝒇¯𝒋subscriptbold-¯𝒇𝒋\\bm{\\bar{f}_{j}} represent the average-pooled visual features of proposal set ℬℬ\\mathcal{B} and ℬjsubscriptℬ𝑗\\mathcal{B}_{j}, (;) is a concatenation operation, and Fasubscript𝐹𝑎F_{a} is a learnable similarity function444For conciseness, we leave the details in the supplementary material. . ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
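The grounding score of Eq. (4) in the entry above concatenates the verb embedding, the role embedding, and the pooled global visual feature into a query, then scores each candidate proposal set. The following is a minimal, hedged PyTorch sketch of that scoring; the specific form of the similarity function `F_a` (a projected dot product with a sigmoid), the feature dimensions, and the class name `RoleGrounder` are assumptions, since the paper defers the exact formulation to its supplementary material.

```python
import torch
import torch.nn as nn

class RoleGrounder(nn.Module):
    """Scores how well each proposal set matches a (verb, role) query, in the spirit of Eq. (4)."""
    def __init__(self, d_word=300, d_vis=2048, d_hid=512):
        super().__init__()
        # q_i = (e_v ; e_{s_i} ; f_bar) -> hidden query
        self.query_proj = nn.Linear(2 * d_word + d_vis, d_hid)
        self.set_proj = nn.Linear(d_vis, d_hid)

    def forward(self, verb_emb, role_emb, global_feat, set_feats):
        # verb_emb, role_emb: (d_word,); global_feat: (d_vis,); set_feats: (N, d_vis)
        q = torch.tanh(self.query_proj(torch.cat([verb_emb, role_emb, global_feat], dim=-1)))
        k = torch.tanh(self.set_proj(set_feats))      # (N, d_hid)
        return torch.sigmoid(k @ q)                   # one score a_{ij} in [0, 1] per proposal set

# toy usage: ground one role against 5 proposal sets, keep the top-2 for its sub-roles
grounder = RoleGrounder()
scores = grounder(torch.randn(300), torch.randn(300), torch.randn(2048), torch.randn(5, 2048))
top_sets = scores.topk(k=2).indices
```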
{
"id": "2103.12204_all_19",
"text": " After obtaining the grounding similarity scores {aij}subscript𝑎𝑖𝑗\\{a_{ij}\\} between semantic role sisubscript𝑠𝑖s_{i} and all proposal sets {ℬj}subscriptℬ𝑗\\{\\mathcal{B}_{j}\\}, we then select the top nisubscript𝑛𝑖n_{i} proposal sets with the highest scores as the grounding results for all sub-roles of sisubscript𝑠𝑖s_{i}. ℛ~~ℛ\\mathcal{\\tilde{R}} in Eq. (3) is the set of visual features of all grounded proposal sets. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_20",
"text": " Semantic structure planner (SSP) is a hierarchical semantic structure learning model, which aims to learn a reasonable sequence of sub-roles 𝒮𝒮\\mathcal{S}. As shown in Figure 3, it consists of two subnets: an S-level SSP and an R-level SSP. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_21",
"text": " S-level SSP. The sentence-level (S-level) SSP is a coarse-grained structure learning model, which only learns a sequence of all involved general semantic roles (including the verb) in 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} (\\eg, ride, Arg0riderrider{}_{\\text{rider}}, Arg1steedsteed{}_{\\text{steed}} and LOC in Figure 3). To this end, we formulate this sentence-level structure learning as a role sequence generation task, as long as we constrain that each output role token belongs to the given role set and each role can only appear once. Specifically, we utilize a three-layer Transformer 555More comparison results between Transformer and Sinkhorn networks (43, 16) are left in supplementary material. to calucate the probability of roles p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}) at each time step t𝑡t4: 𝑯𝑯\\displaystyle\\bm{H} =Transformerenc({FCa(𝒆vi+𝒆sii)}),absentsubscriptTransformerencsubscriptFC𝑎subscriptsuperscript𝒆𝑖𝑣subscriptsuperscript𝒆𝑖subscript𝑠𝑖\\displaystyle=\\text{Transformer}_{\\text{enc}}\\left(\\{\\text{FC}_{a}(\\bm{e}^{i}_{v}+\\bm{e}^{i}_{s_{i}})\\}\\right), (5) p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛ\\displaystyle p(s_{t}|\\mathcal{VSR}) =Transformerdec(𝑯,𝒆s<to),absentsubscriptTransformerdec𝑯subscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\displaystyle=\\text{Transformer}_{\\text{dec}}\\left(\\bm{H},\\bm{e}^{o}_{s_{<t}}\\right), where Transformer∗ are the encoder (enc) and decoder (dec) of the standard multi-head transformer. 𝒆visubscriptsuperscript𝒆𝑖𝑣\\bm{e}^{i}_{v} and 𝒆siisubscriptsuperscript𝒆𝑖subscript𝑠𝑖\\bm{e}^{i}_{s_{i}} are the word embedding features of verb v𝑣v and semantic role sjsubscript𝑠𝑗s_{j}, respectively. FCasubscriptFC𝑎\\text{FC}_{a} is a learnable fc-layer to obtain the embedding of each input token. 𝒆s<tosubscriptsuperscript𝒆𝑜subscript𝑠absent𝑡\\bm{e}^{o}_{s_{<t}} is the sequence of embeddings of previous roles. Based on p(st|𝒱𝒮ℛ)𝑝conditionalsubscript𝑠𝑡𝒱𝒮ℛp(s_{t}|\\mathcal{VSR}), we can predict a role at time step t𝑡t and obtain an initial role sequence, \\eg, Arg0riderrider{}_{\\text{rider}} – ride – Arg1steedsteed{}_{\\text{steed}} – LOC in Figure 3. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
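The S-level SSP described above is essentially an encoder-decoder Transformer that reads the unordered (verb + role) tokens and autoregressively emits an ordering in which each given role appears exactly once. Below is a minimal, hedged sketch of that decoding loop with `torch.nn.Transformer`; the layer sizes, the role vocabulary size, the greedy decoding, and the way the "each role once" constraint is enforced by masking logits are all assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SLevelSSP(nn.Module):
    """Greedy role-sequence decoding with a small encoder-decoder Transformer (cf. Eq. (5))."""
    def __init__(self, n_roles=26, d=512, n_layers=3, n_heads=8):
        super().__init__()
        self.role_emb = nn.Embedding(n_roles + 1, d)   # +1 for a BOS token
        self.verb_proj = nn.Linear(300, d)             # verb word embedding -> model dim (FC_a analogue)
        self.tf = nn.Transformer(d_model=d, nhead=n_heads, num_encoder_layers=n_layers,
                                 num_decoder_layers=n_layers, batch_first=True)
        self.out = nn.Linear(d, n_roles)

    @torch.no_grad()
    def decode(self, verb_emb, role_ids):
        # verb_emb: (300,); role_ids: indices of the roles given in the VSR
        src = (self.role_emb(torch.tensor(role_ids)) + self.verb_proj(verb_emb)).unsqueeze(0)
        bos = self.role_emb.num_embeddings - 1
        tgt, order, remaining = [bos], [], set(role_ids)
        for _ in range(len(role_ids)):
            tgt_emb = self.role_emb(torch.tensor(tgt)).unsqueeze(0)   # (1, t, d)
            logits = self.out(self.tf(src, tgt_emb)[0, -1])           # (n_roles,)
            mask = torch.full_like(logits, float('-inf'))
            mask[list(remaining)] = 0.0                               # only unused, given roles allowed
            nxt = int((logits + mask).argmax())
            order.append(nxt); remaining.discard(nxt); tgt.append(nxt)
        return order

ssp = SLevelSSP()
print(ssp.decode(torch.randn(300), role_ids=[0, 3, 7]))  # a permutation of the given roles
```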
{
"id": "2103.12204_all_22",
"text": " R-level SSP. The role-level (R-level) SSP is a fine-grained structure model which aims to rank all sub-roles within the same semantic role (\\eg, LOC-1 and LOC-2 are two sub-roles of role Loc in Figure 3). Since the only differences among these sub-roles are the grounded visual regions, we borrow ideas from the Sinkhorn networks (43, 16), which use a differentiable Sinkhorn operation to learn a soft permutation matrix 𝑷𝑷\\bm{P}. Specifically, for each role sisubscript𝑠𝑖s_{i} with multiple sub-roles (\\ie, ni>1subscript𝑛𝑖1n_{i}>1), we first select all the corresponding grounded proposal sets for these sub-roles, denoted as ℬ^={ℬ^1,…,ℬ^ni}^ℬsubscript^ℬ1…subscript^ℬsubscript𝑛𝑖\\mathcal{\\hat{B}}=\\{\\mathcal{\\hat{B}}_{1},...,\\mathcal{\\hat{B}}_{n_{i}}\\}. And for each proposal 𝒃∗∈ℬ^subscript𝒃^ℬ\\bm{b}_{*}\\in\\mathcal{\\hat{B}}, we encode a feature vector 𝒛∗=(𝒛∗v;𝒛∗si;𝒛∗l)subscript𝒛subscriptsuperscript𝒛𝑣subscriptsuperscript𝒛subscript𝑠𝑖subscriptsuperscript𝒛𝑙\\bm{z}_{*}=(\\bm{z}^{v}_{*};\\bm{z}^{s_{i}}_{*};\\bm{z}^{l}_{*}), where 𝒛∗vsubscriptsuperscript𝒛𝑣\\bm{z}^{v}_{*} is a transformation of its visual feature 𝒇∗subscript𝒇\\bm{f}_{*}, 𝒛∗sisubscriptsuperscript𝒛subscript𝑠𝑖\\bm{z}^{s_{i}}_{*} is the word embedding feature of the semantic role sisubscript𝑠𝑖s_{i}, and 𝒛∗lsubscriptsuperscript𝒛𝑙\\bm{z}^{l}_{*} is a 4-d encoding of the spatial position of proposal 𝒃∗subscript𝒃\\bm{b}_{*}. Then, we transform each feature 𝒛∗subscript𝒛\\bm{z}_{*} into nisubscript𝑛𝑖n_{i}-d, and average-pooled all features among the same proposal set, \\ie, we can obtain an nisubscript𝑛𝑖n_{i}-d feature for each ℬ^isubscript^ℬ𝑖\\mathcal{\\hat{B}}_{i}. We concatenate all these features to get an ni×nisubscript𝑛𝑖subscript𝑛𝑖n_{i}\\times n_{i} matrix 𝒁𝒁\\bm{Z}. Finally, we use the Sinkhorn operation to obtain the soft permutation matrix 𝑷𝑷\\bm{P}4: 𝑷=Sinkhorn(𝒁).𝑷Sinkhorn𝒁\\displaystyle\\bm{P}=\\text{Sinkhorn}(\\bm{Z}). (6) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
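The Sinkhorn operation invoked in Eq. (6) above is commonly implemented as iterated row and column normalization of the exponentiated score matrix, which yields an (approximately) doubly-stochastic soft permutation matrix. A short sketch follows; the iteration count and temperature are assumptions, and a log-domain formulation is used for numerical stability.

```python
import torch

def sinkhorn(Z, n_iters=20, tau=1.0):
    """Turn an (n x n) score matrix Z into a doubly-stochastic soft permutation matrix P."""
    log_P = Z / tau
    for _ in range(n_iters):
        log_P = log_P - torch.logsumexp(log_P, dim=1, keepdim=True)  # normalize rows
        log_P = log_P - torch.logsumexp(log_P, dim=0, keepdim=True)  # normalize columns
    return log_P.exp()

P = sinkhorn(torch.randn(3, 3))
print(P.sum(dim=0), P.sum(dim=1))  # both close to vectors of ones
```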
{
"id": "2103.12204_all_23",
"text": " After the two SSP subnets (\\ie, S-level and R-level), we can obtain the semantic structure 𝒮𝒮\\mathcal{S} (cf. Eq. (3)). Based on the sequence of 𝒮𝒮\\mathcal{S} and the set of proposal featurs ℛ~~ℛ\\mathcal{\\tilde{R}} from the GSRL model, we re-rank ℛ~~ℛ\\mathcal{\\tilde{R}} based on 𝒮𝒮\\mathcal{S} and obtain ℛℛ\\mathcal{R}. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_24",
"text": " Given the semantic structure sequence 𝒮=(s1b,…,sKb)𝒮subscriptsuperscript𝑠𝑏1…subscriptsuperscript𝑠𝑏𝐾\\mathcal{S}=(s^{b}_{1},...,s^{b}_{K}) and corresponding proposal feature sequence ℛ=(𝒓1,…,𝒓K)ℛsubscript𝒓1…subscript𝒓𝐾\\mathcal{R}=(\\bm{r}_{1},...,\\bm{r}_{K}), we utilize a two-layer LSTM to generate the final caption 𝒚𝒚\\bm{y}. At each time step, the model fouces on one specific sub-role 𝒔tbsubscriptsuperscript𝒔𝑏𝑡\\bm{s}^{b}_{t} and its grounded region set 𝒓tsubscript𝒓𝑡\\bm{r}_{t}, and then generates the word ytsubscript𝑦𝑡y_{t}. Therefore, we take inspirations from previous CIC methods (16, 10), and predict two distributions simultaneously: p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}) for controlling the shift of sub-roles, and p(yt|𝒮,ℛ)𝑝conditionalsubscript𝑦𝑡𝒮ℛp(y_{t}|\\mathcal{S},\\mathcal{R}) to predict the distribution of a word. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_25",
"text": " As for the role-shift, we use an adaptive attention mechanism to predict the probability of shifting4: αtg,𝜶tr,𝒔𝒓tgsubscriptsuperscript𝛼𝑔𝑡subscriptsuperscript𝜶𝑟𝑡𝒔subscriptsuperscript𝒓𝑔𝑡\\displaystyle\\alpha^{g}_{t},\\bm{\\alpha}^{r}_{t},\\bm{sr}^{g}_{t} =AdaptiveAttna(𝒙t,𝒓t),absentsubscriptAdaptiveAttn𝑎subscript𝒙𝑡subscript𝒓𝑡\\displaystyle=\\text{AdaptiveAttn}_{a}(\\bm{x}_{t},\\bm{r}_{t}), (7) where AdaptiveAttnasubscriptAdaptiveAttn𝑎\\text{AdaptiveAttn}_{a} is an adaptive attention network, 𝒙tsubscript𝒙𝑡\\bm{x}_{t} is the input query for attention, 𝒔𝒓tg𝒔subscriptsuperscript𝒓𝑔𝑡\\bm{sr}^{g}_{t} is a sential vector, αtgsubscriptsuperscript𝛼𝑔𝑡\\alpha^{g}_{t} and 𝒂trsubscriptsuperscript𝒂𝑟𝑡\\bm{a}^{r}_{t} are the attention weights for the sential vector and region features, respectively. We directly use attention weight αtgsubscriptsuperscript𝛼𝑔𝑡\\alpha^{g}_{t} as the probability of shifting sub-roles, \\ie, p(gt|𝒮,ℛ)=αtg𝑝conditionalsubscript𝑔𝑡𝒮ℛsubscriptsuperscript𝛼𝑔𝑡p(g_{t}|\\mathcal{S},\\mathcal{R})=\\alpha^{g}_{t}. Based on probability p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}), we can sample a gate value gj∈{0,1}subscript𝑔𝑗01g_{j}\\in\\{0,1\\}, and the focused sub-role at time step t𝑡t is: stb←𝒮(i),wherei=min(1+∑j=1t−1gj,K).formulae-sequence←subscriptsuperscript𝑠𝑏𝑡𝒮delimited-()𝑖where𝑖1subscriptsuperscript𝑡1𝑗1subscript𝑔𝑗𝐾\\displaystyle s^{b}_{t}\\leftarrow\\mathcal{S}(i),\\text{where}\\;i=\\min\\left(1+\\textstyle{\\sum}^{t-1}_{j=1}g_{j},K\\right). (8) Due to the special nature of sub-role “verb”, we fix gt+1=1subscript𝑔𝑡11g_{t+1}=1 when stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t} is the verb. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
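Equation (8) in the entry above keeps a pointer into the planned sub-role sequence that only advances when a shift gate fires. A small sketch of that pointer logic is given below; the hard-coded gate values are stand-ins for samples drawn from the gate distribution, and the function name is hypothetical.

```python
def focused_subroles(gates, K):
    """Map sampled shift gates g_t in {0,1} to 1-based sub-role indices via Eq. (8)."""
    indices = []
    for t in range(len(gates)):
        i = min(1 + sum(gates[:t]), K)   # i = min(1 + sum_{j<t} g_j, K)
        indices.append(i)
    return indices

# toy example: 4 planned sub-roles, 7 decoding steps
gates = [0, 1, 0, 0, 1, 0, 1]
print(focused_subroles(gates, K=4))      # [1, 1, 2, 2, 2, 3, 3]
```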
{
"id": "2103.12204_all_26",
"text": " For each sub-role stbsubscriptsuperscript𝑠𝑏𝑡s^{b}_{t}, we use the corresponding proposal set features 𝒓tsubscript𝒓𝑡\\bm{r}_{t} and a two-layer LSTM to generate word ytsubscript𝑦𝑡y_{t}: 𝒉t1subscriptsuperscript𝒉1𝑡\\displaystyle\\bm{h}^{1}_{t} =LSTM1(𝒉t−11,{yt−1,𝒇¯,𝒉t−12}),absentsubscriptLSTM1subscriptsuperscript𝒉1𝑡1subscript𝑦𝑡1bold-¯𝒇subscriptsuperscript𝒉2𝑡1\\displaystyle=\\text{LSTM}_{1}\\left(\\bm{h}^{1}_{t-1},\\{y_{t-1},\\bm{\\bar{f}},\\bm{h}^{2}_{t-1}\\}\\right), (9) 𝒉t2subscriptsuperscript𝒉2𝑡\\displaystyle\\bm{h}^{2}_{t} =LSTM2(𝒉t−12,{𝒉t1,𝒄t}),absentsubscriptLSTM2subscriptsuperscript𝒉2𝑡1subscriptsuperscript𝒉1𝑡subscript𝒄𝑡\\displaystyle=\\text{LSTM}_{2}\\left(\\bm{h}^{2}_{t-1},\\{\\bm{h}^{1}_{t},\\bm{c}_{t}\\}\\right), ytsubscript𝑦𝑡\\displaystyle y_{t} ∼p(yt|𝒮,ℛ)=FCb(𝒉t2),similar-toabsent𝑝conditionalsubscript𝑦𝑡𝒮ℛsubscriptFC𝑏subscriptsuperscript𝒉2𝑡\\displaystyle\\sim p(y_{t}|\\mathcal{S},\\mathcal{R})=\\text{FC}_{b}(\\bm{h}^{2}_{t}), where 𝒉t1subscriptsuperscript𝒉1𝑡\\bm{h}^{1}_{t} and 𝒉t2subscriptsuperscript𝒉2𝑡\\bm{h}^{2}_{t} are hidden states of the first- and second-layer LSTM (\\ie, LSTM1 and LSTM2), FCbsubscriptFC𝑏\\text{FC}_{b} is a learnable fc-layer, and 𝒄tsubscript𝒄𝑡\\bm{c}_{t} is a context vector. To further distinguish the textual and visual words, we use another adaptive attention network to obtain the context vector 𝒄tsubscript𝒄𝑡\\bm{c}_{t}4: αtv,𝜶tr,𝒔𝒓tvsubscriptsuperscript𝛼𝑣𝑡subscriptsuperscript𝜶𝑟𝑡𝒔subscriptsuperscript𝒓𝑣𝑡\\displaystyle\\alpha^{v}_{t},\\bm{\\alpha}^{r}_{t},\\bm{sr}^{v}_{t} =AdaptiveAttnb(𝒙t,𝒓t),absentsubscriptAdaptiveAttn𝑏subscript𝒙𝑡subscript𝒓𝑡\\displaystyle=\\text{AdaptiveAttn}_{b}(\\bm{x}_{t},\\bm{r}_{t}), (10) 𝒄tsubscript𝒄𝑡\\displaystyle\\bm{c}_{t} =αtv⋅𝒔𝒓tv+∑i𝜶t,ir⋅𝒓t,i,absent⋅subscriptsuperscript𝛼𝑣𝑡𝒔subscriptsuperscript𝒓𝑣𝑡subscript𝑖⋅subscriptsuperscript𝜶𝑟𝑡𝑖subscript𝒓𝑡𝑖\\displaystyle=\\alpha^{v}_{t}\\cdot\\bm{sr}^{v}_{t}+\\textstyle{\\sum}_{i}\\bm{\\alpha}^{r}_{t,i}\\cdot\\bm{r}_{t,i}, where 𝒙tsubscript𝒙𝑡\\bm{x}_{t} is the query for adaptive attention (\\ie, the input of the LSTM1subscriptLSTM1\\text{LSTM}_{1}), 𝒔𝒓tv𝒔subscriptsuperscript𝒓𝑣𝑡\\bm{sr}^{v}_{t} is a sential vector, and αtvsubscriptsuperscript𝛼𝑣𝑡\\alpha^{v}_{t} and 𝜶trsubscriptsuperscript𝜶𝑟𝑡\\bm{\\alpha}^{r}_{t} are the attention weights for the sential vector and region features. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_27",
"text": " Training Stage. In the training stage, we train the three components (GSRL, SSP and captioning model) separately: ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_28",
"text": " Training objective of GSRL. For the GSRL model, we use a binary cross-entropy (BCE) loss between the predicted similarity scores a^ijsubscript^𝑎𝑖𝑗\\hat{a}_{ij} and its ground truth aij∗subscriptsuperscript𝑎𝑖𝑗a^{*}_{ij} as the training loss: LGSRL=∑ijBCE(a^ij,aij∗).subscript𝐿GSRLsubscript𝑖𝑗BCEsubscript^𝑎𝑖𝑗subscriptsuperscript𝑎𝑖𝑗\\displaystyle L_{\\text{GSRL}}=\\textstyle{\\sum}_{ij}\\text{BCE}(\\hat{a}_{ij},a^{*}_{ij}). (11) ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_29",
"text": " Training objective of SSP. For S-level SSP, we use a cross-entropy (XE) loss between prediction s^tsubscript^𝑠𝑡\\hat{s}_{t} and its ground truth st∗subscriptsuperscript𝑠𝑡s^{*}_{t} as the training objective. For R-level SSP, we use a mean square (MSE) loss between prediction 𝑷^tsubscriptbold-^𝑷𝑡\\bm{\\hat{P}}_{t} and its ground truth 𝑷∗tsubscriptsuperscript𝑷𝑡\\bm{P^{*}}_{t} as the training objective: LSSPS=∑tXE(s^t,st∗),LSSPR=∑t𝟏(nt>1)MSE(𝑷^t,𝑷∗t),formulae-sequencesubscriptsuperscript𝐿𝑆SSPsubscript𝑡XEsubscript^𝑠𝑡subscriptsuperscript𝑠𝑡subscriptsuperscript𝐿𝑅SSPsubscript𝑡subscript1subscript𝑛𝑡1MSEsubscriptbold-^𝑷𝑡subscriptsuperscript𝑷𝑡\\displaystyle L^{S}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\text{XE}(\\hat{s}_{t},s^{*}_{t}),L^{R}_{\\text{SSP}}=\\textstyle{\\sum}_{t}\\mathbf{1}_{(n_{t}>1)}\\text{MSE}(\\bm{\\hat{P}}_{t},\\bm{P^{*}}_{t}), (12) where 𝟏(nt>1)subscript1subscript𝑛𝑡1\\mathbf{1}_{(n_{t}>1)} is an indicator function, being 1 if nt>1subscript𝑛𝑡1n_{t}>1 and 0 otherwise. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_30",
"text": " Training objective of captioning model. We follow the conventions of previous captioning works and use a two-stage training scheme: XE and RL stages. In the XE stage, we use an XE loss between predicted words and ground truth words as the training loss. In the RL stage, we use a self-critical baseline . At each step, we sample from p(yt|𝒮,ℛ)𝑝conditionalsubscript𝑦𝑡𝒮ℛp(y_{t}|\\mathcal{S},\\mathcal{R}) and p(gt|𝒮,ℛ)𝑝conditionalsubscript𝑔𝑡𝒮ℛp(g_{t}|\\mathcal{S},\\mathcal{R}) to obtain the next word yt+1subscript𝑦𝑡1y_{t+1} and sub-role st+1bsubscriptsuperscript𝑠𝑏𝑡1s^{b}_{t+1}. Then we calcuate the reward r(𝒚s)𝑟superscript𝒚𝑠r(\\bm{y}^{s}) of the sampled sentence 𝒚ssuperscript𝒚𝑠\\bm{y}^{s}. Baseline b𝑏b is the reward of the greedily generated sentence. Thus, the gradient expression of the training loss is: ∇θL=−(r(𝒚s)−b)(∇θlogp(𝒚s)+∇θlogp(𝒈s)),subscript∇𝜃𝐿𝑟superscript𝒚𝑠𝑏subscript∇𝜃𝑝superscript𝒚𝑠subscript∇𝜃𝑝superscript𝒈𝑠\\nabla_{\\theta}L=-(r(\\bm{y}^{s})-b)(\\nabla_{\\theta}\\log p(\\bm{y}^{s})+\\nabla_{\\theta}\\log p(\\bm{g}^{s})), (13) where 𝒈ssuperscript𝒈𝑠\\bm{g}^{s} is the sequence of role-shift gates. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
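The gradient in Eq. (13) above is the standard self-critical policy gradient, extended with a log-probability term for the role-shift gates. In practice this is usually realized as a surrogate loss whose autograd gradient matches Eq. (13); the sketch below illustrates that pattern with dummy log-probabilities and scalar rewards standing in for CIDEr-D scores, so it is an assumption about the implementation rather than the authors' code.

```python
import torch

def self_critical_loss(logp_words, logp_gates, sampled_reward, greedy_reward):
    """Surrogate loss whose gradient matches Eq. (13).

    logp_words, logp_gates: 1-D tensors of log-probabilities of the sampled words / gates.
    sampled_reward, greedy_reward: scalar rewards r(y^s) and the baseline b.
    """
    advantage = sampled_reward - greedy_reward           # r(y^s) - b, treated as a constant
    return -advantage * (logp_words.sum() + logp_gates.sum())

# toy usage; in the real model the log-probs come from p(y_t|S,R) and p(g_t|S,R)
logp_words = torch.log(torch.rand(12, requires_grad=True))
logp_gates = torch.log(torch.rand(12, requires_grad=True))
loss = self_critical_loss(logp_words, logp_gates, sampled_reward=1.3, greedy_reward=1.1)
loss.backward()
```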
{
"id": "2103.12204_all_31",
"text": " Inference. In testing stage, given an image and one 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR}, we sequentially use the GSRL, SSP, and captioning model to generate the final captions. Meanwhile, our framework can be easily extended from one 𝒱𝒮ℛ𝒱𝒮ℛ\\mathcal{VSR} to multiple 𝒱𝒮ℛs𝒱𝒮ℛ𝑠\\mathcal{VSR}s as the control signal. Taking an example of two 𝒱𝒮ℛs𝒱𝒮ℛ𝑠\\mathcal{VSR}s, we first use GSRL and SSP to obtain semantic structures and grounded regions features: (𝒮a,ℛa)superscript𝒮𝑎superscriptℛ𝑎(\\mathcal{S}^{a},\\mathcal{R}^{a}) and (𝒮b,ℛb)superscript𝒮𝑏superscriptℛ𝑏(\\mathcal{S}^{b},\\mathcal{R}^{b}). Then, as shown in Figure 4, we merge them by two steps4: (a) find the sub-roles in both 𝒮asuperscript𝒮𝑎\\mathcal{S}^{a} and 𝒮bsuperscript𝒮𝑏\\mathcal{S}^{b} which refer to the same visual regions (\\eg, s1asubscriptsuperscript𝑠𝑎1s^{a}_{1} and s1bsubscriptsuperscript𝑠𝑏1s^{b}_{1} refer to the same proposal set); (b) insert all other sub-roles between the nearest two selected sub-roles (\\eg, s2∗subscriptsuperscript𝑠2s^{*}_{2} are still between s1∗subscriptsuperscript𝑠1s^{*}_{1} and s3∗subscriptsuperscript𝑠3s^{*}_{3}). Concerning the order of sub-roles from different verbs, we follow the rank of two verbs (\\eg, s2asubscriptsuperscript𝑠𝑎2s^{a}_{2} is in front of s2bsubscriptsuperscript𝑠𝑏2s^{b}_{2}). ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_32",
"text": " Flickr30K Entities . It builds upon the Flickr30K dataset, by manually grounding each noun phrase in the descriptions with one or more visual regions. It consists of 31,000 images, and each image is associated with five captions. We use the same splits as in our experiments. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_33",
"text": " COCO Entities . It builds upon the COCO dataset which consists of 120,000 images and each image is annotated with five captions. Different from Flickr30K Entities where all grounding entities are annotated by humans, all annotations in COCO Entities are detected automatically. Especially, they align each entity to all the detected proposals with the same object class. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_34",
"text": " Although we only assume that there exists at least one verb (\\ie, activity) in each image; unfortunately, there are still a few samples (\\ie, 3.26% in COCO Entities and 0.04% in Flickr30K Entities) having no verbs in their captions. We use the same split as and further drop the those samples with no verb in the training and testing stages4. We will try to cover these extreme cases and leave it for future work. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_35",
"text": " Proposal Generation and Grouping. We utilize a Faster R-CNN with ResNet-101 to obtain all proposals for each image. Especially, we use the model released by , which is finetuned on VG dataset . For COCO Entities, since the “ground truth” annotations for each noun phrase are the proposals with the same class, we group the proposals by their detected class labels. But for Flickr30K Entities, we directly regard each proposal as a proposal set. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_36",
"text": " VSR Annotations. Since there are no ground truth semantic role annotations for CIC datasets, we use a pretrained SRL tool to annotate verbs and semantic roles for each caption, and regard them as ground truth annotations. For each detected verb, we convert it into its base form and build a verb dictionary for each dataset. The dictionary sizes for COCO and Flickr30K are 2,662 and 2,926, respectively. There are a total of 24 types of semantic roles for all verbs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_37",
"text": " Experimental Settings. For the S-level SSP, the head number of multi-head attention is set to 8, and the hidden size of the transformer is set to 512. The length of the transformer is set to 10. For the R-level SSP, we set the maximum number of entities for each role to 10. For the RL training of the captioning model, we use CIDEr-D score as the training reward. Due to the limited space, we leave more detailed parameter settings in the supplementary material. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_38",
"text": " Settings. To evaluate the controllability of proposed framework, we followed the conventions of prior CIC works (16, 10, 78), and utilized the VSR aligned with ground truth captions as the control signals. Specifically, we compared the proposed framework with several carefully designed baselines666All baselines use the same visual regions as models with VSRs.: 1) C-LSTM: It is a Controllable LSTM model . Given the features of all grounded visual regions, it first averages all region features, and then uses an LSTM to generate the captions. 2) C-UpDn: It is a Controllable UpDn model , which uses an adaptive attention to generate the captions. 3) SCT : It regards the set of visual regions as a control signal, and utilizes a chunk-shift captioning model to generate the captions. 4) Ours w/o verb: We ablate our model by removing the verb information in both the SSP and captioning model. 5) Ours (oracle verb): It is an ideal situation, where the captioning model directly outputs the oracle format of the verb when the attending role is the verb. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_39",
"text": " Evaluation Metrics. To evaluate the quality of the generated captions, we use five accuracy-based metrics, including BLEU-4 (B4) , METEOR (M) , ROUGE (R) , CIDEr-D (C) , and SPICE (S) . Particularly, we evaluate the generated captions against the single ground truth caption. We also propose a new recall-based metric to evaluate whether the roles of the generated sentence are consistent with the ground truth caption (\\ie, VSR). It measures the recall rate of the verb, semantic roles, and ordered role pairs, which are denoted as RVV{}_{\\text{V}}, RSR1SR1{}_{\\text{SR1}} and RSR2SR2{}_{\\text{SR2}}, respectively. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_40",
"text": " Quantitative Results. The quantitative results are reported in Table 1. From Table 1, we can observe that our framework can achieve the best performance over almost all metrics and benchmarks. By comparing the two different proposal settings (\\ie, GSRL and GT), we can find that the accuracy of GSRL is a major bottleneck of the whole framework. Meanwhile, the ablative model (Ours w/o verb) can only achieve slightly better performance than baseline SCT and much worse performance than our full model, which reflects the importance of the verb in semantic structure learning and caption generation. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_41",
"text": " Visualizations. In Figure 6, we illustrate some examples of the generated captions. We can observe that our framework always learns a human-like semantic structure based on the VSR and grounded visual regions (\\eg, Arg1thingthing{}_{\\text{thing}} – sit – Arg2positionposition{}_{\\text{position}} – LOC – MNR). According to the semantic structures, the captioning model can generate near-perfect descriptions. As a by-product, a well-trained SSP can automatically produce several verb-specific semantic structures for a set of user-interested roles, and we show some examples in Figure 6. For each verb and role set, we illustrate the top two structures by using beam search. Particularly, we are surprised to find that we can even learn some structures that never appear in original datasets (the blue tick ones). ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_42",
"text": " One of the well-known advantages of controllable image captioning is the ability to generate diverse image captions by feeding different control signals. Thus, we also evaluate the diversity of the captions generated by our framework. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_43",
"text": " Settings. We evaluated the quality of diverse captions in two settings: 1) Given a VSR and grounded visual regions of each role aligned with the ground truth caption, we first use an SSP to select two semantic structures, and then respectively generate two diverse captions. For fair comparisons, we utilize the same set of visual regions on two strong baselines: a) BS: an UpDn model uses beam search to produce two captions, and b) SCT: an SCT model takes a permutation of all region sets to generate two captions. 2) For each verb, we can randomly sample a subset of all semantic roles to construct new VSRs. Specifically, we sample two more sets of semantic roles, and generate two diverse captions for each role set following the same manner. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_44",
"text": " Evaluation Metrics. We used two types of metrics to evaluate the diverse captions: 1) Accuracy-based: we followed the conventions of the previous works (16, 20, 65) and reported the best-1 accuracy, \\ie, the generated caption with the maximum score for each metric is chosen. Analogously, we evaluate the generated captions against the single ground truth caption. 2) Diversity-based: we followed and used two metrics which only focus on the language similarity: Div-n (D-n) (4, 20) and self-CIDEr (s-C) . ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_45",
"text": " Quantitative Results. The quantitative results are reported in Table 2. From Table 2, we can observe that the diverse captions generated by our framework in both two settings have much higher accuracy (\\eg, CIDEr 267.3 vs. 222.5 in SCT), and that the diversity is slightly behind SCT (\\eg, self-CIDEr 67.0 vs. 69.1 in SCT). This is because SCT generates captions by randomly shuffling regions. Instead, we tend to learn more reasonable structures. Thus, we can achieve much higher results on accuracy, \\ie, our method can achieve a better trade-off between quality and diversity on diverse image captioning than the two strong baselines. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_46",
"text": " Visualizations. We further illustrate the generated captions of two images with different VSRs in Figure 7. The captions are generated effectively according to the given VSR, and the diversity of VSR leads to significant diverse captions. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_47",
"text": " In this paper, we argued that all existing objective control signals for CIC have overlooked two indispensable characteristics: event-compatible and sample-suitable. To this end, we proposed a novel control signal called VSR. VSR consists of a verb and several semantic roles, \\ie, all components are guaranteed to be event-compatible. Meanwhile, VSR only restricts the involved semantic roles, which is also sample-suitable for all the images containing the activity. We have validated the effectiveness of VSR through extensive experiments. Moving forward, we will plan to 1) design a more effective captioning model to benefit more from the VSR signals; 2) extend VSR to other controllable text generation tasks, \\eg, video captioning ; 3) design a more general framework to cover the images without verbs. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
},
{
"id": "2103.12204_all_48",
"text": " Acknowledgements. This work was supported by the National Natural Science Foundation of China (U19B2043,61976185), Zhejiang Natural Science Foundation (LR19F020002), Zhejiang Innovation Foundation (2019R52002), and Fundamental Research Funds for Central Universities. ",
"title": "Human-Like Controllable Image Captioning With Verb-Specific Semantic Roles"
}
] |
What is the resolution of the CUB image?
|
[The CUB dataset contains 11,788 images of 200 bird species [22].
|
[
22
] |
[
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_1",
"text": " Two recent approaches have made significant progress in few-shot learning. Vinyals et al. proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_2",
"text": " We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class’s prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class. Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_3",
"text": " In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_4",
"text": " In few-shot classification we are given a small support set of N𝑁N labeled examples S={(𝐱1,y1),…,(𝐱N,yN)}𝑆subscript𝐱1subscript𝑦1…subscript𝐱𝑁subscript𝑦𝑁S=\\{(\\mathbf{x}_{1},y_{1}),\\ldots,(\\mathbf{x}_{N},y_{N})\\} where each 𝐱i∈ℝDsubscript𝐱𝑖superscriptℝ𝐷\\mathbf{x}_{i}\\in\\mathbb{R}^{D} is the D𝐷D-dimensional feature vector of an example and yi∈{1,…,K}subscript𝑦𝑖1…𝐾y_{i}\\in\\{1,\\ldots,K\\} is the corresponding label. Sksubscript𝑆𝑘S_{k} denotes the set of examples labeled with class k𝑘k. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_5",
"text": " Prototypical networks compute an M𝑀M-dimensional representation 𝐜k∈ℝMsubscript𝐜𝑘superscriptℝ𝑀\\mathbf{c}_{k}\\in\\mathbb{R}^{M}, or prototype, of each class through an embedding function fϕ:ℝD→ℝM:subscript𝑓bold-italic-ϕ→superscriptℝ𝐷superscriptℝ𝑀f_{\\bm{\\phi}}:\\mathbb{R}^{D}\\rightarrow\\mathbb{R}^{M} with learnable parameters ϕbold-italic-ϕ\\bm{\\phi}. Each prototype is the mean vector of the embedded support points belonging to its class: 𝐜k=1|Sk|∑(𝐱i,yi)∈Skfϕ(𝐱i)subscript𝐜𝑘1subscript𝑆𝑘subscriptsubscript𝐱𝑖subscript𝑦𝑖subscript𝑆𝑘subscript𝑓bold-italic-ϕsubscript𝐱𝑖\\mathbf{c}_{k}=\\frac{1}{|S_{k}|}\\sum_{(\\mathbf{x}_{i},y_{i})\\in S_{k}}f_{\\bm{\\phi}}(\\mathbf{x}_{i}) (1) Given a distance function d:ℝM×ℝM→(0,+∞):𝑑→superscriptℝ𝑀superscriptℝ𝑀0d:\\mathbb{R}^{M}\\times\\mathbb{R}^{M}\\rightarrow(0,+\\infty), prototypical networks produce a distribution over classes for a query point 𝐱𝐱\\mathbf{x} based on a softmax over distances to the prototypes in the embedding space: pϕ(y=k|𝐱)=exp(−d(fϕ(𝐱),𝐜k))∑k′exp(−d(fϕ(𝐱),𝐜k′))subscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘subscriptsuperscript𝑘′𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜superscript𝑘′p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x})=\\frac{\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k}))}{\\sum_{k^{\\prime}}\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k^{\\prime}}))} (2) Learning proceeds by minimizing the negative log-probability J(ϕ)=−logpϕ(y=k|𝐱)𝐽bold-italic-ϕsubscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱J(\\bm{\\phi})=-\\log p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x}) of the true class k𝑘k via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss J(ϕ)𝐽bold-italic-ϕJ(\\bm{\\phi}) for a training episode is provided in Algorithm 1. ",
"title": "Prototypical Networks for Few-shot Learning"
},
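The prototypes of Eq. (1) and the distance-softmax of Eq. (2) in the entry above reduce to a few lines of tensor code. Below is a minimal sketch in PyTorch; the random 64-d "embeddings" stand in for the output of the convolutional embedding network described later, so this is an illustration of the computation rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, n_classes):
    """c_k = mean of the embedded support points of class k (Eq. (1))."""
    return torch.stack([support_emb[support_labels == k].mean(dim=0) for k in range(n_classes)])

def proto_log_probs(query_emb, protos):
    """log p(y=k|x) = log-softmax over negative squared Euclidean distances (Eq. (2))."""
    dists = torch.cdist(query_emb, protos) ** 2        # (n_query, n_classes)
    return F.log_softmax(-dists, dim=1)

# toy 5-way 5-shot episode with stand-in 64-d embeddings
support_emb = torch.randn(25, 64)
support_labels = torch.arange(5).repeat_interleave(5)
protos = prototypes(support_emb, support_labels, n_classes=5)
query_emb = torch.randn(10, 64)
loss = F.nll_loss(proto_log_probs(query_emb, protos), torch.randint(0, 5, (10,)))
```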
{
"id": "1703.05175_all_6",
"text": " For a particular class of distance functions, known as regular Bregman divergences , the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence dφsubscript𝑑𝜑d_{\\varphi} is defined as: dφ(𝐳,𝐳′)=φ(𝐳)−φ(𝐳′)−(𝐳−𝐳′)T∇φ(𝐳′),subscript𝑑𝜑𝐳superscript𝐳′𝜑𝐳𝜑superscript𝐳′superscript𝐳superscript𝐳′𝑇∇𝜑superscript𝐳′d_{\\varphi}(\\mathbf{z},\\mathbf{z}^{\\prime})=\\varphi(\\mathbf{z})-\\varphi(\\mathbf{z}^{\\prime})-(\\mathbf{z}-\\mathbf{z}^{\\prime})^{T}\\nabla\\varphi(\\mathbf{z}^{\\prime}), (3) where φ𝜑\\varphi is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance ‖𝐳−𝐳′‖2superscriptnorm𝐳superscript𝐳′2\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2} and Mahalanobis distance. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_7",
"text": " Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_8",
"text": " Moreover, any regular exponential family distribution pψ(𝐳|𝜽)subscript𝑝𝜓conditional𝐳𝜽p_{\\psi}(\\mathbf{z}|\\bm{\\theta}) with parameters 𝜽𝜽\\bm{\\theta} and cumulant function ψ𝜓\\psi can be written in terms of a uniquely determined regular Bregman divergence : pψ(𝐳|𝜽)=exp{𝐳T𝜽−ψ(𝜽)−gψ(𝐳)}=exp{−dφ(𝐳,𝝁(𝜽))−gφ(𝐳)}subscript𝑝𝜓conditional𝐳𝜽superscript𝐳𝑇𝜽𝜓𝜽subscript𝑔𝜓𝐳subscript𝑑𝜑𝐳𝝁𝜽subscript𝑔𝜑𝐳p_{\\psi}(\\mathbf{z}|\\bm{\\theta})=\\exp\\{\\mathbf{z}^{T}\\bm{\\theta}-\\psi(\\bm{\\theta})-g_{\\psi}(\\mathbf{z})\\}=\\exp\\{-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}))-g_{\\varphi}(\\mathbf{z})\\} (4) Consider now a regular exponential family mixture model with parameters 𝚪={𝜽k,πk}k=1K𝚪superscriptsubscriptsubscript𝜽𝑘subscript𝜋𝑘𝑘1𝐾\\bm{\\Gamma}=\\{\\bm{\\theta}_{k},\\pi_{k}\\}_{k=1}^{K}: p(𝐳|𝚪)=∑k=1Kπkpψ(𝐳|𝜽k)=∑k=1Kπkexp(−dφ(𝐳,𝝁(𝜽k))−gφ(𝐳))𝑝conditional𝐳𝚪superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑝𝜓conditional𝐳subscript𝜽𝑘superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscript𝑔𝜑𝐳p(\\mathbf{z}|\\bm{\\Gamma})=\\sum_{k=1}^{K}\\pi_{k}p_{\\psi}(\\mathbf{z}|\\bm{\\theta}_{k})=\\sum_{k=1}^{K}\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k}))-g_{\\varphi}(\\mathbf{z})) (5) Given 𝚪𝚪\\bm{\\Gamma}, inference of the cluster assignment y𝑦y for an unlabeled point 𝐳𝐳\\mathbf{z} becomes: p(y=k|𝐳)=πkexp(−dφ(𝐳,𝝁(𝜽k)))∑k′πk′exp(−dφ(𝐳,𝝁(𝜽k)))𝑝𝑦conditional𝑘𝐳subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscriptsuperscript𝑘′subscript𝜋superscript𝑘′subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘p(y=k|\\mathbf{z})=\\frac{\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))}{\\sum_{k^{\\prime}}\\pi_{k^{\\prime}}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))} (6) For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with fϕ(𝐱)=𝐳subscript𝑓italic-ϕ𝐱𝐳f_{\\phi}(\\mathbf{x})=\\mathbf{z} and 𝐜k=𝝁(𝜽k)subscript𝐜𝑘𝝁subscript𝜽𝑘\\mathbf{c}_{k}=\\bm{\\mu}(\\bm{\\theta}_{k}). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by dφsubscript𝑑𝜑d_{\\varphi}. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_9",
"text": " A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(𝐳,𝐳′)=‖𝐳−𝐳′‖2𝑑𝐳superscript𝐳′superscriptnorm𝐳superscript𝐳′2d(\\mathbf{z},\\mathbf{z^{\\prime}})=\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2}, then the model in Equation (2) is equivalent to a linear model with a particular parameterization . To see this, expand the term in the exponent: −‖fϕ(𝐱)−𝐜k‖2superscriptnormsubscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘2\\displaystyle-\\|f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}\\|^{2} =−fϕ(𝐱)⊤fϕ(𝐱)+2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜kabsentsubscript𝑓bold-italic-ϕsuperscript𝐱topsubscript𝑓bold-italic-ϕ𝐱2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘\\displaystyle=-f_{\\bm{\\phi}}(\\mathbf{x})^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (7) The first term in Equation (7) is constant with respect to the class k𝑘k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows: 2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜k=𝐰k⊤fϕ(𝐱)+bk, where 𝐰k=2𝐜k and bk=−𝐜k⊤𝐜k2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘superscriptsubscript𝐰𝑘topsubscript𝑓bold-italic-ϕ𝐱subscript𝑏𝑘, where subscript𝐰𝑘2subscript𝐜𝑘 and subscript𝑏𝑘superscriptsubscript𝐜𝑘topsubscript𝐜𝑘2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k}=\\mathbf{w}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+b_{k}\\mbox{, where }\\mathbf{w}_{k}=2\\mathbf{c}_{k}\\mbox{ and }b_{k}=-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (8) We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., (14, 28). ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_10",
"text": " Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, 𝐜k=𝐱ksubscript𝐜𝑘subscript𝐱𝑘\\mathbf{c}_{k}=\\mathbf{x}_{k} since there is only one support point per class, and matching networks and prototypical networks become equivalent. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_11",
"text": " A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 111, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. and Rippel et al. ; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_12",
"text": " Vinyals et al. propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_13",
"text": " Vinyals et al. and Ravi and Larochelle apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_14",
"text": " A straightforward way to construct episodes, used in Vinyals et al. and Ravi and Larochelle , is to choose Ncsubscript𝑁𝑐N_{c} classes and NSsubscript𝑁𝑆N_{S} support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 555-way classification and 111-shot learning, then training episodes could be comprised of Nc=5subscript𝑁𝑐5N_{c}=5, NS=1subscript𝑁𝑆1N_{S}=1. We have found, however, that it can be extremely beneficial to train with a higher Ncsubscript𝑁𝑐N_{c}, or “way”, than will be used at test-time. In our experiments, we tune the training Ncsubscript𝑁𝑐N_{c} on a held-out validation set. Another consideration is whether to match NSsubscript𝑁𝑆N_{S}, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. ",
"title": "Prototypical Networks for Few-shot Learning"
},
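The episode construction described above (sample N_c classes, then N_S support and a handful of query points per class) can be sketched as a small sampler. The data layout (a dict from class id to a list of examples) and the default counts below are assumptions for illustration.

```python
import random

def sample_episode(data_by_class, n_way=20, n_shot=5, n_query=15):
    """Return support/query splits for one training episode.

    data_by_class: dict mapping class id -> list of examples.
    """
    classes = random.sample(list(data_by_class), n_way)
    support, query = {}, {}
    for k in classes:
        examples = random.sample(data_by_class[k], n_shot + n_query)
        support[k], query[k] = examples[:n_shot], examples[n_shot:]
    return support, query

# toy usage: 100 classes with 30 dummy examples each
data = {c: list(range(30)) for c in range(100)}
support, query = sample_episode(data)
```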
{
"id": "1703.05175_all_15",
"text": " Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector 𝐯ksubscript𝐯𝑘\\mathbf{v}_{k} for each class. These could be determined in advance, or they could be learned from e.g., raw text . Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define 𝐜k=gϑ(𝐯k)subscript𝐜𝑘subscript𝑔bold-italic-ϑsubscript𝐯𝑘\\mathbf{c}_{k}=g_{\\bm{\\vartheta}}(\\mathbf{v}_{k}) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g𝑔g to have unit length, however we do not constrain the query embedding f𝑓f. ",
"title": "Prototypical Networks for Few-shot Learning"
},
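In the zero-shot variant above, the prototype is a unit-normalized embedding of the class meta-data vector rather than a mean of support points. A minimal sketch follows; the dimensions mirror the CUB setup described later in this section (312-d attributes, 1,024-d output space), while the use of a single linear layer for g is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

attr_to_proto = torch.nn.Linear(312, 1024)   # g: class attribute vector -> prototype

def zero_shot_prototypes(class_attrs):
    # class_attrs: (n_classes, 312); prototypes are constrained to unit length
    return F.normalize(attr_to_proto(class_attrs), dim=1)

protos = zero_shot_prototypes(torch.randn(50, 312))   # one prototype per unseen class
```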
{
"id": "1703.05175_all_16",
"text": " For few-shot learning, we performed experiments on Omniglot and the miniImageNet version of ILSVRC-2012 with the splits proposed by Ravi and Larochelle . We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_17",
"text": " Omniglot is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. by resizing the grayscale images to 28 ×\\times 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. and is composed of four convolutional blocks. Each block comprises a 64-filter 3 ×\\times 3 convolution, batch normalization layer , a ReLU nonlinearity and a 2 ×\\times 2 max-pooling layer. When applied to the 28 ×\\times 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam . We used an initial learning rate of 10−3superscript10310^{-3} and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. ",
"title": "Prototypical Networks for Few-shot Learning"
},
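The four-block embedding described above (64-filter 3x3 convolution, batch norm, ReLU, 2x2 max-pool per block) maps a 28x28 grayscale input to a 64-dimensional vector. A short PyTorch sketch of that architecture, with padding chosen as an assumption so the output dimensionality matches the 64-d space reported above:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch=64):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(), nn.MaxPool2d(2))

embedding = nn.Sequential(conv_block(1), conv_block(64), conv_block(64), conv_block(64),
                          nn.Flatten())

x = torch.randn(5, 1, 28, 28)     # a batch of Omniglot-sized grayscale images
print(embedding(x).shape)         # torch.Size([5, 64]); spatial size 28 -> 14 -> 7 -> 3 -> 1
```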
{
"id": "1703.05175_all_18",
"text": " We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician and both the fine-tuned and non-fine-tuned versions of matching networks . We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_19",
"text": " The miniImageNet dataset, originally proposed by Vinyals et al. , is derived from the larger ILSVRC-12 dataset . The splits used by Vinyals et al. consist of 60,000 color images of size 84 ×\\times 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_20",
"text": " We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle , which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_21",
"text": " We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_22",
"text": " In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset . The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. in preparing the data. We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image222Features downloaded from https://github.com/reedscot/cvpr2016.. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_23",
"text": " We learned a simple linear mapping on top of both the 1024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10−4superscript10410^{-4} and weight decay of 10−5superscript10510^{-5}. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_24",
"text": " Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE , SJE , and DS-SJE/DA-SJE . We also compare to a recent clustering approach which trains an SVM on a learned feature space obtained by fine-tuning AlexNet . These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_25",
"text": " The literature on metric learning is vast (15, 5); we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_26",
"text": " Our approach is also similar to the nearest class mean approach , where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k𝑘k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_27",
"text": " Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle . The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_28",
"text": " Prototypical networks are also related to the neural statistician from the generative modeling literature, which extends the variational autoencoder (12, 24) to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_29",
"text": " With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither nor uses episodic training, which allows us to help speed up training and regularize the model. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_30",
"text": " We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
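The prototype-and-distance idea summarized in the excerpts above lends itself to a very short implementation. The following is a minimal sketch, not the authors' released code: the embedding network `embed`, the tensor shapes, and the helper name `prototypical_scores` are illustrative assumptions; only the mean-prototype and squared-Euclidean-distance logic comes from the paper.

```python
import torch

def prototypical_scores(embed, support_x, support_y, query_x, num_classes):
    """Compute class scores for query points from a labeled support set.

    embed:      any network mapping inputs to an embedding space (assumed)
    support_x:  tensor of support examples, shape (n_support, ...)
    support_y:  integer class labels in [0, num_classes), shape (n_support,)
    query_x:    tensor of query examples, shape (n_query, ...)
    """
    z_support = embed(support_x)                      # (n_support, d)
    z_query = embed(query_x)                          # (n_query, d)

    # One prototype per class: the mean of the embedded support examples.
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(num_classes)
    ])                                                # (num_classes, d)

    # Negative squared Euclidean distance serves as the class score (logit).
    dists = torch.cdist(z_query, prototypes, p=2) ** 2
    return -dists                                     # (n_query, num_classes)

# Episodic training would minimize cross-entropy on these scores over
# randomly sampled few-shot episodes.
```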
{
"id": "1703.05175_all_31",
"text": " We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. ",
"title": "Prototypical Networks for Few-shot Learning"
}
] |
What is an example of a DSE approach?
|
Examples of DSE approaches include Bayesian optimization, simulated annealing, randomized search, and genetic algorithms; all of these are automated approaches for finding NN architectures that deliver higher accuracy [9].
|
[
9
] |
[
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages: ∙∙\\bullet More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model Iandola et al. (2016). In short, small models train faster due to requiring less communication. ∙∙\\bullet Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers’ cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla’s Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates Consumer Reports (2016). However, over-the-air updates of today’s typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. ∙∙\\bullet Feasible FPGA and embedded deployment. FPGAs often have less than 10MB111For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory. of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth Qiu et al. (2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_1",
"text": " As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_2",
"text": " The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_3",
"text": " The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model Denton et al. (2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN Han et al. (2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression Han et al. (2015a), and further designed a hardware accelerator called EIE Han et al. (2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_4",
"text": " Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s LeCun et al. (1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Lisubscript𝐿𝑖L_{i} the filters have the same number of channels as Li−1subscript𝐿𝑖1L_{i-1} has filters. The early work by LeCun et al. LeCun et al. (1989) uses 5x5xChannels222From now on, we will simply abbreviate HxWxChannels to HxW. filters, and the recent VGG Simonyan & Zisserman (2014) architectures extensively use 3x3 filters. Models such as Network-in-Network Lin et al. (2013) and the GoogLeNet family of architectures Szegedy et al. (2014); Ioffe & Szegedy (2015); Szegedy et al. (2015; 2016) use 1x1 filters in some layers. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_5",
"text": " With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 Szegedy et al. (2014) and sometimes 1x3 and 3x1 Szegedy et al. (2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_6",
"text": " While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_7",
"text": " Perhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simoyan and Zisserman proposed the VGG Simonyan & Zisserman (2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset Deng et al. (2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy He et al. (2015a). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_8",
"text": " The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) He et al. (2015b) and Highway Networks Srivastava et al. (2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_9",
"text": " Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN’s accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization Snoek et al. (2012), simulated annealing Ludermir et al. (2006), randomized search Bergstra & Bengio (2012), and genetic algorithms Stanley & Miikkulainen (2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches – instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_10",
"text": " In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_11",
"text": " In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_12",
"text": " Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures: ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_13",
"text": " Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_14",
"text": " Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
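To make the parameter-count formula in the preceding excerpt concrete, here is a small arithmetic sketch; the layer sizes are made-up illustrative numbers, not values from the paper.

```python
def conv_params(in_channels, num_filters, k):
    """Parameter count of a conv layer with k x k filters (bias terms ignored)."""
    return in_channels * num_filters * k * k

# Illustrative numbers (not from the paper): shrinking the number of input
# channels seen by 3x3 filters cuts parameters proportionally.
print(conv_params(256, 256, 3))   # 589824
print(conv_params(32, 256, 3))    # 73728  -> 8x fewer, same number of filters
```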
{
"id": "1602.07360_all_15",
"text": " Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2) the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the (stride >> 1) in some of the convolution or pooling layers (e.g. Szegedy et al. (2014); Simonyan & Zisserman (2014); Krizhevsky et al. (2012)). If early333In our terminology, an “early” layer is close to the input data. layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end444In our terminology, the “end” of the network is the classifier. of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy He & Sun (2015). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_16",
"text": " Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_17",
"text": " We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1subscript𝑠1𝑥1s_{1x1}, e1x1subscript𝑒1𝑥1e_{1x1}, and e3x3subscript𝑒3𝑥3e_{3x3}. In a Fire module, s1x1subscript𝑠1𝑥1s_{1x1} is the number of filters in the squeeze layer (all 1x1), e1x1subscript𝑒1𝑥1e_{1x1} is the number of 1x1 filters in the expand layer, and e3x3subscript𝑒3𝑥3e_{3x3} is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1x1subscript𝑠1𝑥1s_{1x1} to be less than (e1x1subscript𝑒1𝑥1e_{1x1} + e3x3subscript𝑒3𝑥3e_{3x3}), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
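The Fire module described above maps naturally onto a few lines of PyTorch. The sketch below is an illustrative reconstruction, not the released SqueezeNet code; the ReLU placement, 1-pixel zero padding for the 3x3 expand filters, and channel-wise concatenation of the two expand convolutions follow the design notes quoted later in this excerpt set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fire(nn.Module):
    """Sketch of a Fire module: squeeze (1x1) -> expand (mix of 1x1 and 3x3)."""

    def __init__(self, in_channels, s_1x1, e_1x1, e_3x3):
        super().__init__()
        assert s_1x1 < e_1x1 + e_3x3  # per Strategy 2
        self.squeeze = nn.Conv2d(in_channels, s_1x1, kernel_size=1)
        self.expand_1x1 = nn.Conv2d(s_1x1, e_1x1, kernel_size=1)
        # 1-pixel zero padding keeps the 1x1 and 3x3 outputs the same size.
        self.expand_3x3 = nn.Conv2d(s_1x1, e_3x3, kernel_size=3, padding=1)

    def forward(self, x):
        x = F.relu(self.squeeze(x))
        # Two separate conv layers concatenated on the channel dimension,
        # numerically equivalent to one layer holding both filter sizes.
        return torch.cat([F.relu(self.expand_1x1(x)),
                          F.relu(self.expand_3x3(x))], dim=1)

# Example dimensions consistent with the metaparameters reported later
# (base_e = 128, pct_3x3 = 0.5, SR = 0.125): s_1x1=16, e_1x1=64, e_3x3=64.
```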
{
"id": "1602.07360_all_18",
"text": " We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_19",
"text": " For brevity, we have omitted number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_20",
"text": " ∙∙\\bullet So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ∙∙\\bullet ReLU Nair & Hinton (2010) is applied to activations from squeeze and expand layers. ∙∙\\bullet Dropout Srivastava et al. (2014) with a ratio of 50% is applied after the fire9 module. ∙∙\\bullet Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN Lin et al. (2013) architecture. ∙∙\\bullet When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in Mishkin et al. (2016). For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet. ∙∙\\bullet The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) Jia et al. (2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_21",
"text": " We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet Chen et al. (2015a), Chainer Tokui et al. (2015), Keras Chollet (2016), and Torch Collobert et al. (2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN Chetlur et al. (2014) and MKL-DNN Das et al. (2016). The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks: • MXNet Chen et al. (2015a) port of SqueezeNet: Haria (2016) • Chainer Tokui et al. (2015) port of SqueezeNet: Bell (2016) • Keras Chollet (2016) port of SqueezeNet: DT42 (2016) • Torch Collobert et al. (2011) port of SqueezeNet’s Fire Modules: Waghmare (2016) ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_22",
"text": " We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet Krizhevsky et al. (2012) model that was trained to classify images using the ImageNet Deng et al. (2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet555Our baseline is bvlc_alexnet from the Caffe codebase Jia et al. (2014). and the associated model compression results as a basis for comparison when evaluating SqueezeNet. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_23",
"text": " In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% Denton et al. (2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet Han et al. (2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level Han et al. (2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_24",
"text": " It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4×1.4\\times smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models “need” all of the representational power afforded by dense floating-point values? To find out, we applied Deep Compression Han et al. (2015a) to SqueezeNet, using 33% sparsity666Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3×3\\times decrease in model size. and 8-bit quantization. This yields a 0.66 MB model (363×363\\times smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510×510\\times smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_25",
"text": " In addition, these results demonstrate that Deep Compression Han et al. (2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10×10\\times while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510×510\\times reduction in model size with no decrease in accuracy compared to the baseline. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_26",
"text": " Finally, note that Deep Compression Han et al. (2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 328=4x3284𝑥\\frac{32}{8}=4x with 8-bit quantization or 326=5.3x3265.3𝑥\\frac{32}{6}=5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware – Efficient Inference Engine (EIE) – that can compute codebook-quantized CNNs more efficiently Han et al. (2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits Gysel (2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_27",
"text": " So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_28",
"text": " In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_29",
"text": " In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1subscript𝑠1𝑥1s_{1x1}, e1x1subscript𝑒1𝑥1e_{1x1}, and e3x3subscript𝑒3𝑥3e_{3x3}. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define basee𝑏𝑎𝑠subscript𝑒𝑒base_{e} as the number of expand filters in the first Fire module in a CNN. After every freq𝑓𝑟𝑒𝑞freq Fire modules, we increase the number of expand filters by incre𝑖𝑛𝑐subscript𝑟𝑒incr_{e}. In other words, for Fire module i𝑖i, the number of expand filters is ei=basee+(incre∗⌊ifreq⌋e_{i}=base_{e}+(incr_{e}*{\\left\\lfloor{\\frac{i}{freq}}\\right\\rfloor}). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define ei=ei,1x1+ei,3x3subscript𝑒𝑖subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3e_{i}=e_{i,{1x1}}+e_{i,{3x3}} with pct3x3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} (in the range (0,1)01(0,1), shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, ei,3x3=ei∗pct3x3subscript𝑒𝑖3𝑥3subscript𝑒𝑖𝑝𝑐subscript𝑡3𝑥3e_{i,{3x3}}=e_{i}*pct_{3x3}, and ei,1x1=ei∗(1−pct3x3)subscript𝑒𝑖1𝑥1subscript𝑒𝑖1𝑝𝑐subscript𝑡3𝑥3e_{i,{1x1}}=e_{i}*(1-pct_{3x3}). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range (0,1)01(0,1), shared by all Fire modules): si,1x1=SR∗eisubscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖s_{i,{1x1}}=SR*e_{i} (or equivalently si,1x1=SR∗(ei,1x1+ei,3x3)subscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3s_{i,{1x1}}=SR*(e_{i,{1x1}}+e_{i,{3x3}})). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: basee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, incre=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, pct3x3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, freq=2𝑓𝑟𝑒𝑞2freq=2, and SR=0.125𝑆𝑅0.125SR=0.125. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
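The metaparameter definitions above can be turned into a few lines of Python. The sketch below is illustrative; the excerpt does not state whether Fire modules are indexed from 0 or 1 inside the floor term, so indexing from 0 is an assumption (it happens to yield a 16/64/64 first module, matching the example given earlier).

```python
import math

def fire_dims(i, base_e=128, incr_e=128, freq=2, pct_3x3=0.5, SR=0.125):
    """Per-module Fire dimensions from the metaparameters (modules indexed from 0)."""
    e_i = base_e + incr_e * math.floor(i / freq)
    e_3x3 = int(e_i * pct_3x3)
    e_1x1 = e_i - e_3x3
    s_1x1 = int(SR * e_i)
    return s_1x1, e_1x1, e_3x3

# SqueezeNet's 8 Fire modules (fire2-9) under the stated metaparameters:
print([fire_dims(i) for i in range(8)])
# [(16, 64, 64), (16, 64, 64), (32, 128, 128), (32, 128, 128),
#  (48, 192, 192), (48, 192, 192), (64, 256, 256), (64, 256, 256)]
```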
{
"id": "1602.07360_all_30",
"text": " In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_31",
"text": " In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: basee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, incre=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, pct3x3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, and freq=2𝑓𝑟𝑒𝑞2freq=2. We train multiple models, where each model has a different squeeze ratio (SR)777Note that, for a given model, all Fire layers share the same squeeze ratio. in the range (0.125, 1.0). In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure.888Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers. From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_32",
"text": " In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters? The VGG Simonyan & Zisserman (2014) architectures have 3x3 spatial resolution in most layers’ filters; GoogLeNet Szegedy et al. (2014) and Network-in-Network (NiN) Lin et al. (2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.999To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3. Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_33",
"text": " We use the following metaparameters in this experiment: basee=incre=128𝑏𝑎𝑠subscript𝑒𝑒𝑖𝑛𝑐subscript𝑟𝑒128base_{e}=incr_{e}=128, freq=2𝑓𝑟𝑒𝑞2freq=2, SR=0.500𝑆𝑅0.500SR=0.500, and we vary pct3x3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} from 1% to 99%. In other words, each Fire module’s expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from “mostly 1x1” to “mostly 3x3”. As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR=0.500𝑆𝑅0.500SR=0.500 and pct3x3=50%𝑝𝑐subscript𝑡3𝑥3percent50pct_{3x3}=50\\%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_34",
"text": " So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet He et al. (2015b), we explored three different architectures: ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_35",
"text": " ∙∙\\bullet Vanilla SqueezeNet (as per the prior sections). ∙∙\\bullet SqueezeNet with simple bypass connections between some Fire modules. (Inspired by Srivastava et al. (2015); He et al. (2015b).) ∙∙\\bullet SqueezeNet with complex bypass connections between the remaining Fire modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_36",
"text": " We illustrate these three variants of SqueezeNet in Figure 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_37",
"text": " Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_38",
"text": " One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the “same number of channels” requirement can’t be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is “just a wire,” we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
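The two bypass variants described in the last two excerpts differ only in whether the skip path is an identity or a 1x1 convolution. Here is a hedged PyTorch sketch: the wrapper class names are my own, and only the elementwise addition and the 1x1 projection with as many filters as needed output channels come from the text.

```python
import torch.nn as nn

class SimpleBypass(nn.Module):
    """Wrap a Fire module with a simple bypass: output = fire(x) + x.
    Requires fire(x) and x to have the same number of channels; adds no parameters."""
    def __init__(self, fire_module):
        super().__init__()
        self.fire = fire_module

    def forward(self, x):
        return self.fire(x) + x            # elementwise addition ("just a wire")

class ComplexBypass(nn.Module):
    """When channel counts differ, route the skip path through a 1x1 conv
    whose number of filters equals the required output channels (adds parameters)."""
    def __init__(self, fire_module, in_channels, out_channels):
        super().__init__()
        self.fire = fire_module
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.fire(x) + self.proj(x)
```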
{
"id": "1602.07360_all_39",
"text": " In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_40",
"text": " We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy accuracy improvement than complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_41",
"text": " In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50×50\\times fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510×510\\times smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) Han et al. (2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_42",
"text": " We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA Gschwend (2016). As we anticipated, Gschwend was able to able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_43",
"text": " In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition Zhang et al. (2013); Donahue et al. (2013), logo identification in images Iandola et al. (2015), and generating sentences about images Fang et al. (2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images Iandola et al. (2014); Girshick et al. (2015); Ashraf et al. (2016) and videos Chen et al. (2015b), as well as segmenting the shape of the road Badrinarayanan et al. (2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_44",
"text": " SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
}
] |
What are the limitations of GNNs?
|
Two of the most widely recognized limitations of GNNs are over-smoothing and over-squashing [1]. Over-smoothing is a phenomenon in which the node representations produced by a GNN become increasingly similar to each other as the number of layers increases [6].
|
[
1,
6
] |
[
{
"id": "2202.03036_all_0",
"text": " Graph neural networks (GNNs) have been established as powerful and flexible tools for graph representation learning, with successful applications in drug discovery (Gaudelet et al., 2021), protein design (Ingraham et al., 2019), social network analysis (Fan et al., 2019), and so on. A large class of GNNs build multilayer models, where each layer operates on the previous layer to generate new representations using a message-passing mechanism (Gilmer et al., 2017) to aggregate local neighborhood information. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_1",
"text": " While many different message-passing strategies have been proposed, some critical limitations have been uncovered in this class of GNNs. These include the limited expressiveness of GNNs (Xu et al., 2019; Morris et al., 2019), as well as known problems such as over-smoothing (Li et al., 2018, 2019; Chen et al., 2020; Oono & Suzuki, 2020) and over-squashing (Alon & Yahav, 2021). Over-smoothing manifests as all node representations converging to a constant after sufficiently many layers, while over-squashing occurs when messages from distant nodes are not effectively propagated through certain “bottlenecks” in a graph, since too many messages get compressed into a single fixed-length vector. Designing new architectures beyond neighborhood aggregation is thus essential to solve these problems. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_2",
"text": " Transformers (Vaswani et al., 2017), which have proved to be successful in natural language understanding (Vaswani et al., 2017), computer vision (Dosovitskiy et al., 2020), and biological sequence modeling (Rives et al., 2021), offer the potential to address these issues. Rather than only aggregating local neighborhood information in the message-passing mechanism, the Transformer architecture is able to capture interaction information between any node pair via a single self-attention layer. Moreover, in contrast to GNNs, the Transformer avoids introducing any structural inductive bias at intermediate layers, addressing the expressivity limitation of GNNs. Instead, it encodes structural or positional information about nodes only into input node features, albeit limiting how much information it can learn from the graph structure. Integrating information about the graph structure into the Transformer architecture has thus gained growing attention in the graph representation learning field. However, most existing approaches only encode positional relationships between nodes, rather than explicitly encoding the structural relationships. As a result, they may not identify structural similarities between nodes and could fail to model the structural interaction between nodes (see Figure 1). This could explain why their performance was dominated by sparse GNNs in several tasks (Dwivedi et al., 2022). ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_3",
"text": " In this work, we address the critical question of how to encode structural information into a Transformer architecture. Our principal contribution is to introduce a flexible structure-aware self-attention mechanism that explicitly considers the graph structure and thus captures structural interaction between nodes. The resulting class of Transformers, which we call the Structure-Aware Transformer (SAT), can provide structure-aware representations of graphs, in contrast to most existing position-aware Transformers for graph-structured data. Specifically: • We reformulate the self-attention mechanism in Vaswani et al. (2017) as a kernel smoother and extend the original exponential kernel on node features to also account for local structures, by extracting a subgraph representation centered around each node. • We propose several methods for automatically generating the subgraph representations, enabling the resulting kernel smoother to simultaneously capture structural and attributed similarities between nodes. The resulting representations are theoretically guaranteed to be at least as expressive as the subgraph representations. • We demonstrate the effectiveness of SAT models on five graph and node property prediction benchmarks by showing it achieves better performance than state-of-the-art GNNs and Transformers. Furthermore, we show how SAT can easily leverage any GNN to compute the node representations which incorporate subgraph information and outperform the base GNN, making it an effortless enhancer of any existing GNN. • Finally, we show that we can attribute the performance gains to the structure-aware aspect of our architecture, and showcase how SAT is more interpretable than the classic Transformer with an absolute encoding. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_4",
"text": " We will present the related work and relevant background in Sections 2 and 3 before presenting our method in Section 4 and our experimental findings in Section 5. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_5",
"text": " We present here the work most related to ours, namely the work stemming from message passing GNNs, positional representations on graphs, and graph Transformers. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_6",
"text": " Message passing graph neural networks have recently been one of the leading methods for graph representation learning. An early seminal example is the GCN (Kipf & Welling, 2017), which was based on performing convolutions on the graph. Gilmer et al. (2017) reformulated the early GNNs into a framework of message passing GNNs, which has since then become the predominant framework of GNNs in use today, with extensive examples (Hamilton et al., 2017; Xu et al., 2019; Corso et al., 2020; Hu et al., 2020b; Veličković et al., 2018; Li et al., 2020a; Yang et al., 2022). However, as mentioned above, they suffer from problems of limited expressiveness, over-smoothing, and over-squashing. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_7",
"text": " Because of the limited expressiveness of GNNs, there has been some recent research into the use of absolute encoding (Shaw et al., 2018), which consists of adding or concatenating positional or structural representations to the input node features. While it is often called an absolute positional encoding, we refer to it more generally as an absolute encoding to include both positional and structural encoding, which are both important in graph modeling. Absolute encoding primarily considers position or location relationships between nodes. Examples of position-based methods include the Laplacian positional encoding (Dwivedi & Bresson, 2021; Kreuzer et al., 2021), Weisfeiler–Lehman-based positional encoding (Zhang et al., 2020), and random walk positional encoding (RWPE) (Li et al., 2020b; Dwivedi et al., 2022), while distance-based methods include distances to a predefined set of nodes (You et al., 2019) and shortest path distances between pairs of nodes (Zhang et al., 2020; Li et al., 2020b). Dwivedi et al. (2022) extend these ideas by using a trainable absolute encoding. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_8",
"text": " While the absolute encoding methods listed above can be used with message passing GNNs, they also play a crucial role in the (graph) Transformer architecture. Graph Transformer (Dwivedi & Bresson, 2021) provided an early example of how to generalize the Transformer architecture to graphs, using Laplacian eigenvectors as an absolute encoding and computing attention on the immediate neighborhood of each node, rather than on the full graph. SAN (Kreuzer et al., 2021) also used the Laplacian eigenvectors for computing an absolute encoding, but computed attention on the full graph, while distinguishing between true and created edges. Many graph Transformer methods also use a relative encoding (Shaw et al., 2018) in addition to absolute encoding. This strategy incorporates representations of the relative position or distances between nodes on the graph directly into the self-attention mechanism, as opposed to the absolute encoding which is only applied once to the input node features. Mialon et al. (2021) propose a relative encoding by means of kernels on graphs to bias the self-attention calculation, which is then able to incorporate positional information into Transformers via the choice of kernel function. Other recent work seeks to incorporate structural information into the graph Transformer, for example by encoding some carefully selected graph theoretic properties such as centrality measures and shortest path distances as positional representations (Ying et al., 2021) or by using GNNs to integrate the graph structure (Rong et al., 2020; Jain et al., 2021; Mialon et al., 2021; Shi et al., 2021). ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_9",
"text": " In this work, we combine the best of both worlds from message passing GNNs and from the Transformer architecture. We incorporate both an absolute as well as a novel relative encoding that explicitly incorporates the graph structure, thereby designing a Transformer architecture that takes both local and global information into account. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_10",
"text": " In the following, we refer to a graph as G=(V,E,𝐗)𝐺𝑉𝐸𝐗G=(V,E,\\mathbf{X}), where the node attributes for node u∈V𝑢𝑉u\\in V is denoted by xu∈𝒳⊂dsubscript𝑥𝑢𝒳superscript𝑑absentx_{u}\\in{\\mathcal{X}}\\subset^{d} and the node attributes for all nodes are stored in 𝐗∈n×dsuperscript𝑛𝑑𝐗absent\\mathbf{X}\\in^{n\\times d} for a graph with n𝑛n nodes. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_11",
"text": " While GNNs use the graph structure explicitly, Transformers remove that explicit structure, and instead infer relations between nodes by leveraging the node attributes. In this sense, the Transformer (Vaswani et al., 2017) ignores the graph structure and rather considers the graph as a (multi-) set of nodes, and uses the self-attention mechanism to infer the similarity between nodes. The Transformer itself is composed of two main blocks: a self-attention module followed by a feed-forward neural network. In the self-attention module, the input node features 𝐗𝐗\\mathbf{X} are first projected to query (𝐐𝐐\\mathbf{Q}), key (𝐊𝐊\\mathbf{K}) and value (𝐕𝐕\\mathbf{V}) matrices through a linear projection such that 𝐐=𝐗𝐖𝐐𝐐subscript𝐗𝐖𝐐\\mathbf{Q}=\\mathbf{X}\\mathbf{W_{Q}}, 𝐊=𝐗𝐖𝐊𝐊subscript𝐗𝐖𝐊\\mathbf{K}=\\mathbf{X}\\mathbf{W_{K}} and 𝐕=𝐗𝐖𝐕𝐕subscript𝐗𝐖𝐕\\mathbf{V}=\\mathbf{X}\\mathbf{W_{V}} respectively. We can compute the self-attention via Attn(𝐗):=softmax(𝐐𝐊Tdout)𝐕∈n×dout,assignAttn𝐗softmaxsuperscript𝐐𝐊𝑇subscript𝑑𝑜𝑢𝑡𝐕superscript𝑛subscript𝑑𝑜𝑢𝑡absent\\mathrm{Attn}(\\mathbf{X}):=\\mathrm{softmax}(\\frac{\\mathbf{Q}\\mathbf{K}^{T}}{\\sqrt{d_{out}}})\\mathbf{V}\\in^{n\\times d_{out}}, (1) where doutsubscript𝑑𝑜𝑢𝑡d_{out} refers to the dimension of 𝐐𝐐\\mathbf{Q}, and 𝐖𝐐,𝐖𝐊,𝐖𝐕subscript𝐖𝐐subscript𝐖𝐊subscript𝐖𝐕\\mathbf{W_{Q}},\\mathbf{W_{K}},\\mathbf{W_{V}} are trainable parameters. It is common to use multi-head attention, which concatenates multiple instances of Eq. (1) and has shown to be effective in practice (Vaswani et al., 2017). Then, the output of the self-attention is followed by a skip-connection and a feed-forward network (FFN), which jointly compose a Transformer layer, as shown below: 𝐗′superscript𝐗′\\displaystyle\\mathbf{X}^{\\prime} =𝐗+Attn(𝐗),absent𝐗Attn𝐗\\displaystyle=\\mathbf{X}+\\mathrm{Attn}(\\mathbf{X}), (2) 𝐗′′superscript𝐗′′\\displaystyle\\mathbf{X}^{\\prime\\prime} =FFN(𝐗′):=ReLU(𝐗′W1)W2.absentFFNsuperscript𝐗′assignReLUsuperscript𝐗′subscript𝑊1subscript𝑊2\\displaystyle=\\mathrm{FFN}(\\mathbf{X}^{\\prime}):=\\text{ReLU}(\\mathbf{X}^{\\prime}W_{1})W_{2}. Multiple layers can be stacked to form a Transformer model, which ultimately provides node-level representations of the graph. As the self-attention is equivariant to permutations of the input nodes, the Transformer will always generate the same representations for nodes with the same attributes regardless of their locations and surrounding structures in the graph. It is thus necessary to incorporate such information into the Transformer, generally via absolute encoding. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
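Eqs. (1)-(2) in the excerpt above correspond to a standard Transformer layer applied to a set of node features. The sketch below is an illustrative simplification, not the SAT reference implementation: it assumes a single attention head, sets d_out equal to the input dimension so the skip connection type-checks, and omits layer normalization, which the excerpt does not discuss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadTransformerLayer(nn.Module):
    """Sketch of Eqs. (1)-(2): self-attention over node features,
    followed by a skip connection and a feed-forward network."""

    def __init__(self, d, d_hidden):
        super().__init__()
        self.W_Q = nn.Linear(d, d, bias=False)
        self.W_K = nn.Linear(d, d, bias=False)
        self.W_V = nn.Linear(d, d, bias=False)
        self.ffn = nn.Sequential(nn.Linear(d, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d))

    def forward(self, X):                                          # X: (n_nodes, d)
        Q, K, V = self.W_Q(X), self.W_K(X), self.W_V(X)
        attn = F.softmax(Q @ K.t() / (Q.shape[-1] ** 0.5), dim=-1) @ V   # Eq. (1)
        X = X + attn                                               # skip connection
        return self.ffn(X)                                         # Eq. (2)
```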
{
"id": "2202.03036_all_12",
"text": " Absolute encoding refers to adding or concatenating the positional or structural representations of the graph to the input node features before the main Transformer model, such as the Laplacian positional encoding (Dwivedi & Bresson, 2021) or RWPE (Dwivedi et al., 2022). The main shortcoming of these encoding methods is that they generally do not provide a measure of the structural similarity between nodes and their neighborhoods. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_13",
"text": " As noticed by Mialon et al. (2021), the self-attention in Eq. (1) can be rewritten as a kernel smoother Attn(xv)=∑u∈Vκexp(xv,xu)∑w∈Vκexp(xv,xw)f(xu),∀v∈V,formulae-sequenceAttnsubscript𝑥𝑣subscript𝑢𝑉subscript𝜅subscript𝑥𝑣subscript𝑥𝑢subscript𝑤𝑉subscript𝜅subscript𝑥𝑣subscript𝑥𝑤𝑓subscript𝑥𝑢for-all𝑣𝑉\\mathrm{Attn}(x_{v})=\\sum_{u\\in V}\\frac{\\kappa_{\\exp}(x_{v},x_{u})}{\\sum_{w\\in V}\\kappa_{\\exp}(x_{v},x_{w})}f(x_{u}),~{}\\forall v\\in V, (3) where f(x)=𝐖𝐕x𝑓𝑥subscript𝐖𝐕𝑥f(x)=\\mathbf{W_{V}}x is the linear value function and κexpsubscript𝜅\\kappa_{\\exp} is a (non-symmetric) exponential kernel on ×dd{}^{d}\\times^{d} parameterized by 𝐖𝐐subscript𝐖𝐐\\mathbf{W_{Q}} and 𝐖𝐊subscript𝐖𝐊\\mathbf{W_{K}}: κexp(x,x′):=exp(⟨𝐖𝐐x,𝐖𝐊x′⟩/dout),assignsubscript𝜅𝑥superscript𝑥′subscript𝐖𝐐𝑥subscript𝐖𝐊superscript𝑥′subscript𝑑𝑜𝑢𝑡\\kappa_{\\exp}(x,x^{\\prime}):=\\exp\\left(\\langle\\mathbf{W_{Q}}x,\\mathbf{W_{K}}x^{\\prime}\\rangle/\\sqrt{d_{out}}\\right), (4) where ⟨⋅,⋅⟩⋅⋅\\langle\\cdot,\\cdot\\rangle is the dot product on d. With this form, Mialon et al. (2021) propose a relative positional encoding strategy via the product of this kernel and a diffusion kernel on the graph, which consequently captures the positional similarity between nodes. However, this method is only position-aware, in contrast to our structure-aware encoding that will be presented in Section 4. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_14",
"text": " In this section, we will describe how to encode the graph structure into the self-attention mechanism and provide a class of Transformer models based on this framework. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_15",
"text": " As presented above, self-attention in the Transformer can be rewritten as a kernel smoother where the kernel is a trainable exponential kernel defined on node features, and which only captures attributed similarity between a pair of nodes. The problem with this kernel smoother is that it cannot filter out nodes that are structurally different from the node of interest when they have the same or similar node features. In order to also incorporate the structural similarity between nodes, we consider a more generalized kernel that additionally accounts for the local substructures around each node. By introducing a set of subgraphs centered at each node, we define our structure-aware attention as: SA-attn(v):=∑u∈Vκgraph(SG(v),SG(u))∑w∈Vκgraph(SG(v),SG(w))f(xu),assignSA-attn𝑣subscript𝑢𝑉subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑢subscript𝑤𝑉subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑤𝑓subscript𝑥𝑢\\text{SA-attn}(v):=\\sum_{u\\in V}\\frac{\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(u))}{\\sum_{w\\in V}\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(w))}f(x_{u}), (5) where SG(v)subscript𝑆𝐺𝑣S_{G}(v) denotes a subgraph in G𝐺G centered at a node v𝑣v associated with node features 𝐗𝐗\\mathbf{X} and κgraphsubscript𝜅graph\\kappa_{\\text{graph}} can be any kernel that compares a pair of subgraphs. This new self-attention function not only takes the attributed similarity into account but also the structural similarity between subgraphs. It thus generates more expressive node representations than the original self-attention, as we will show in Section 4.4. Moreover, this self-attention is no longer equivariant to any permutation of nodes but only to nodes whose features and subgraphs coincide, which is a desirable property. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_16",
"text": " In the rest of the paper, we will consider the following form of κgraphsubscript𝜅graph\\kappa_{\\text{graph}} that already includes a large class of expressive and computationally tractable models: κgraph(SG(v),SG(u))=κexp(φ(v,G),φ(u,G)),subscript𝜅graphsubscript𝑆𝐺𝑣subscript𝑆𝐺𝑢subscript𝜅𝜑𝑣𝐺𝜑𝑢𝐺\\kappa_{\\text{graph}}(S_{G}(v),S_{G}(u))=\\kappa_{\\exp}(\\varphi(v,G),\\varphi(u,G)), (6) where φ(u,G)𝜑𝑢𝐺\\varphi(u,G) is a structure extractor that extracts vector representations of some subgraph centered at u𝑢u with node features 𝐗𝐗\\mathbf{X}. We provide several alternatives of the structure extractor below. It is worth noting that our structure-aware self-attention is flexible enough to be combined with any model that generates representations of subgraphs, including GNNs and (differentiable) graph kernels. For notational simplicity, we assume there are no edge attributes, but our method can easily incorporate edge attributes as long as the structure extractor can accommodate them. The edge attributes are consequently not considered in the self-attention computation, but are incorporated into the structure-aware node representations. In the structure extractors presented in this paper, this means that edge attributes were included whenever the base GNN was able to handle edge attributes. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_17",
"text": " A straightforward way to extract local structural information at node u𝑢u is to apply any existing GNN model to the input graph with node features 𝐗𝐗\\mathbf{X} and take the output node representation at u𝑢u as the subgraph representation at u𝑢u. More formally, if we denote by GNNG(k)superscriptsubscriptGNN𝐺𝑘\\text{GNN}_{G}^{(k)} an arbitrary GNN model with k𝑘k layers applied to G𝐺G with node features 𝐗𝐗\\mathbf{X}, then φ(u,G)=GNNG(k)(u).𝜑𝑢𝐺subscriptsuperscriptGNN𝑘𝐺𝑢\\varphi(u,G)=\\text{GNN}^{(k)}_{G}(u). (7) This extractor is able to represent the k𝑘k-subtree structure rooted at u𝑢u (Xu et al., 2019). While this class of structure extractors is fast to compute and can flexibly leverage any existing GNN, they cannot be more expressive than the Weisfeiler–Lehman test due to the expressiveness limitation of message passing GNNs (Xu et al., 2019). In practice, a small value of k𝑘k already leads to good performance, while not suffering from over-smoothing or over-squashing. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_18",
"text": " A more expressive extractor is to use a GNN to directly compute the representation of the entire k𝑘k-hop subgraph centered at u𝑢u rather than just the node representation u𝑢u. Recent work has explored the idea of using subgraphs rather than subtrees around a node in GNNs, with positive experimental results (Zhang & Li, 2021; Wijesinghe & Wang, 2022), as well as being strictly more powerful than the 1-WL test (Zhang & Li, 2021). We follow the same setup as is done in Zhang & Li (2021), and adapt our GNN extractor to utilize the entire k𝑘k-hop subgraph. The k𝑘k-subgraph GNN extractor aggregates the updated node representations of all nodes within the k𝑘k-hop neighborhood using a pooling function such as summation. Formally, if we denote by 𝒩k(u)subscript𝒩𝑘𝑢{\\mathcal{N}}_{k}(u) the k𝑘k-hop neighborhood of node u𝑢u including itself, the representation of a node u𝑢u is: φ(u,G)=∑v∈𝒩k(u)GNNG(k)(v).𝜑𝑢𝐺subscript𝑣subscript𝒩𝑘𝑢subscriptsuperscriptGNN𝑘𝐺𝑣\\varphi(u,G)=\\sum_{v\\in{\\mathcal{N}}_{k}(u)}\\text{GNN}^{(k)}_{G}(v). (8) ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_19",
"text": " We observe that prior to the pooling function, the k𝑘k-subgraph GNN extractor is equivalent to using the k𝑘k-subtree GNN extractor within each k𝑘k-hop subgraph. So as to capture the attributed similarity as well as structural similarity, we augment the node representation from k𝑘k-subgraph GNN extractor with the original node features via concatenation. While this extractor provides more expressive subgraph representations than the k𝑘k-subtree extractor, it requires enumerating all k𝑘k-hop subgraphs, and consequently does not scale as well as the k𝑘k-subtree extractor to large datasets. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_20",
"text": " Finally, we present a list of other potential structure extractors for different purposes. One possible choice is to directly learn a number of “hidden graphs” as the “anchor subgraphs” to represent subgraphs for better model interpretability, by using the concepts introduced in Nikolentzos & Vazirgiannis (2020). While Nikolentzos & Vazirgiannis (2020) obtain a vector representation of the input graph by counting the number of matching walks between the whole graph and each of the hidden graphs, one could extend this to the node level by comparing the hidden graphs to the k𝑘k-hop subgraph centered around each node. The adjacency matrix of the hidden graphs is a trainable parameter in the network, thereby enabling end-to-end training to identify which subgraph structures are predictive. Then, for a trained model, visualizing the learned hidden graphs provides useful insights about the structural motifs in the dataset. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_21",
"text": " Furthermore, more domain-specific GNNs could also be used to extract potentially more expressive subgraph representations. For instance, Bodnar et al. (2021) recently proposed a new kind of message passing scheme operating on regular cell complexes which benefits from provably stronger expressivity for molecules. Our self-attention mechanism can fully benefit from the development of more domain-specific and expressive GNNs. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_22",
"text": " Finally, another possible structure extractor is to use a non-parametric graph kernel (e.g. a Weisfeiler-Lehman graph kernel) on the k𝑘k-hop subgraphs centered around each node. This provides a flexible way to combine graph kernels and deep learning, which might offer new theoretical insights into the link between the self-attention and kernel methods. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_23",
"text": " Having defined our structure-aware self-attention function, the other components of the Structure-Aware Transformer follow the Transformer architecture as described in Section 3.1; see Figure 2 for a visual overview. Specifically, the self-attention function is followed by a skip-connection, a FFN and two normalization layers before and after the FFN. In addition, we also include the degree factor in the skip-connection, which was found useful for reducing the overwhelming influence of highly connected graph components (Mialon et al., 2021), i.e., xv′=xv+1/dvSA-attn(v),superscriptsubscript𝑥𝑣′subscript𝑥𝑣1subscript𝑑𝑣SA-attn𝑣x_{v}^{\\prime}=x_{v}+1/\\sqrt{d_{v}}\\,\\text{SA-attn}(v), (9) where dvsubscript𝑑𝑣d_{v} denotes the degree of node v𝑣v. After a Transformer layer, we obtain a new graph with the same structure but different node features G′=(V,E,𝐗′)superscript𝐺′𝑉𝐸superscript𝐗′G^{\\prime}=(V,E,\\mathbf{X}^{\\prime}), where 𝐗′superscript𝐗′\\mathbf{X}^{\\prime} corresponds to the output of the Transformer layer. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_24",
"text": " Finally, for graph property prediction, there are various ways to aggregate node-level representations into a graph representation, such as by taking the average or sum. Alternatively, one can use the embedding of a virtual (CLS) node (Jain et al., 2021) that is attached to the input graph without any connectivity to other nodes. We compare these approaches in Section 5. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_25",
"text": " While the self-attention in Eq. (5) is structure-aware, most absolute encoding techniques are only position-aware and could therefore provide complementary information. Indeed, we find that the combination leads to further performance improvements, which we show in Section 5. We choose to use the RWPE (Dwivedi et al., 2022), though any other absolute positional representations, including learnable ones, can also be used. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_26",
"text": " We further argue that only using absolute positional encoding with the Transformer would exhibit a too relaxed structural inductive bias which is not guaranteed to generate similar node representations even if two nodes have similar local structures. This is due to the fact that distance or Laplacian-based positional representations generally serve as structural or positional signatures but do not provide a measure of structural similarity between nodes, especially in the inductive case where two nodes are from different graphs. This is also empirically affirmed in Section 5 by their relatively worse performance without using our structural encoding. In contrast, the subgraph representations used in the structure-aware attention can be tailored to measure the structural similarity between nodes, and thus generate similar node-level representations if they possess similar attributes and surrounding structures. We can formally state this in the following theorem: ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_27",
"text": " The proof is provided in the Appendix. The metric D𝐷D is an optimal matching metric between two multisets which measures how different they are. This theorem shows that two node representations from the SA-attn are similar if the graphs that they belong to have similar multisets of node features and subgraph representations overall, and at the same time, the subgraph representations at these two nodes are similar. In particular, if two nodes belong to the same graph, i.e. G=G′𝐺superscript𝐺′G=G^{\\prime}, then the second and last terms on the right side of Eq. (10) are equal to zero and the distance between their representations is thus constrained by the distance between their corresponding subgraph representations. However, for Transformers with absolute positional encoding, the distance between two node representations is not constrained by their structural similarity, as the distance between two positional representations does not necessarily characterize how structurally similar two nodes are. Despite stronger inductive biases, we will show that our model is still sufficiently expressive in the next section. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_28",
"text": " The expressive power of graph Transformers compared to classic GNNs has hardly been studied, since the soft structural inductive bias introduced in absolute encoding is generally hard to characterize. Thanks to the unique design of our SAT, which relies on a subgraph structure extractor, it becomes possible to study the expressiveness of the output representations. More specifically, we formally show that the node representation from a structure-aware attention layer is at least as expressive as its subgraph representation given by the structure extractor, following the injectivity of the attention function with respect to the query: ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_29",
"text": " Note that the assumptions made in the theorem are mild as one can always add some absolute encoding or random noise to make the attributes of one node different from all other nodes, and similarly for subgraph representations. The countable assumption on 𝒳𝒳{\\mathcal{X}} is generally adopted for expressivity analysis of GNNs (e.g. Xu et al. (2019)). We assume f𝑓f to be any mapping rather than just a linear function as in the definition of the self-attention function since it can be practically approximated by a FFN in multi-layer Transformers through the universal approximation theorem (Hornik, 1991). Theorem 2 suggests that if the structure extractor is sufficiently expressive, the resulting SAT model can also be at least equally expressive. Furthermore, more expressive extractors could lead to more expressively powerful SAT models and thus better prediction performance, which is also empirically confirmed in Section 5. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_30",
"text": " In this section, we evaluate SAT models versus several SOTA methods for graph representation learning, including GNNs and Transformers, on five graph and node prediction tasks, as well as analyze the different components of our architecture to identify what drives the performance. In summary, we discovered the following aspects about SAT: ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_31",
"text": " • The structure-aware framework achieves SOTA performance on graph and node classification tasks, outperforming SOTA graph Transformers and sparse GNNs. • Both instances of the SAT, namely k𝑘k-subtree and k𝑘k-subgraph SAT, always improve upon the base GNN it is built upon, highlighting the improved expressiveness of our structure-aware approach. • We show that incorporating the structure via our structure-aware attention brings a notable improvement relative to the vanilla Transformer with RWPE that just uses node attribute similarity instead of also incorporating structural similarity. We also show that a small value of k𝑘k already leads to good performance, while not suffering from over-smoothing or over-squashing. • We show that choosing a proper absolute positional encoding and a readout method improves performance, but to a much lesser extent than incorporating the structure into the approach. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_32",
"text": " Furthermore, we note that SAT achieves SOTA performance while only considering a small hyperparameter search space. Performance could likely be further improved with more hyperparameter tuning. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_33",
"text": " We assess the performance of our method with five medium to large benchmark datasets for node and graph property prediction, including ZINC (Dwivedi et al., 2020), CLUSTER (Dwivedi et al., 2020), PATTERN (Dwivedi et al., 2020), OGBG-PPA (Hu et al., 2020a) and OGBG-CODE2 (Hu et al., 2020a). ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_34",
"text": " We compare our method to the following GNNs: GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), GIN (Xu et al., 2019), PNA (Corso et al., 2020), DeeperGCN (Li et al., 2020a), and ExpC (Yang et al., 2022). Our comparison partners also include several recently proposed Transformers on graphs, including the original Transformer with RWPE (Dwivedi et al., 2022), Graph Transformer (Dwivedi & Bresson, 2021), SAN (Kreuzer et al., 2021), Graphormer (Ying et al., 2021) and GraphTrans (Jain et al., 2021), a model that uses the vanilla Transformer on top of a GNN. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_35",
"text": " All results for the comparison methods are either taken from the original paper or from Dwivedi et al. (2020) if not available. We consider k𝑘k-subtree and k𝑘k-subgraph SAT equipped with different GNN extractors, including GCN, GIN, GraphSAGE and PNA. For OGBG-PPA and OGBG-CODE2, we do not run experiments for k𝑘k-subgraph SAT models due to large memory requirements. Full details on the datasets, experimental setup, and hyperparameters are provided in the Appendix. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_36",
"text": " We show the performance of SATs compared to other GNNs and Transformers in Table 1 and 2. SAT models consistently outperform SOTA methods on these datasets, showing its ability to combine the benefits of both GNNs and Transformers. In particular, for the CODE2 dataset, our SAT models outperform SOTA methods by a large margin despite a relatively small number of parameters and minimal hyperparameter tuning, which will put it at the first place on the OGB leaderboard. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_37",
"text": " Table 3 summarizes the performance of SAT relative to the sparse GNN it uses to extract the subgraph representations, across different GNNs. We observe that both variations of SAT consistently bring large performance gains to its base GNN counterpart, making it a systematic enhancer of any GNN model. Furthermore, PNA, which is the most expressive GNN we considered, has consistently the best performance when used with SAT, empirically validating our theoretical finding in Section 4.4. k𝑘k-subgraph SAT also outperforms or performs equally as k𝑘k-subtree SAT in almost all the cases, showing its superior expressiveness. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_38",
"text": " While Table 3 showcases the added value of the SAT relative to sparse GNNs, we now dissect the components of SAT on the ZINC dataset to identify which aspects of the architecture bring the biggest performance gains. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_39",
"text": " The key contribution of SAT is its ability to explicitly incorporate structural information in the self-attention. Here, we seek to demonstrate that this information provides crucial predictive information, and study how the choice of k𝑘k affects the results. Figure 3(a) shows how the test MAE is impacted by varying k𝑘k for k𝑘k-subtree and k𝑘k-subgraph extractors using PNA on the ZINC dataset. All models use the RWPE. k=0𝑘0k=0 corresponds to the vanilla Transformer only using absolute positional encoding, i.e. not using structure. We find that incorporating structural information leads to substantial improvement in performance, with optimal performance around k=3𝑘3k=3 for both k𝑘k-subtree and k𝑘k-subgraph extractors. As k𝑘k increases beyond k=4𝑘4k=4, the performance in k𝑘k-subtree extractors deteriorated, which is consistent with the observed phenomenon that GNNs work best in shallower networks (Kipf & Welling, 2017). We observe that k𝑘k-subgraph does not suffer as much from this issue, underscoring a new aspect of its usefulness. On the other hand, k𝑘k-subtree extractors are more computationally efficient and scalable to larger OGB datasets. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_40",
"text": " We assess here whether the absolute encoding brought complementary information to SAT. In Figure 3(b), we conduct an ablation study showing the results of SAT with and without absolute positional encoding, including RWPE and Laplacian PE (Dwivedi et al., 2020). Our SAT with a positional encoding outperforms its counterpart without it, confirming the complementary nature of the two encodings. However, we also note that the performance gain brought by the absolute encoding is far less than the gain obtained by using our structure-aware attention, as shown in Figure 3(a) (comparing the instance of k=0𝑘0k=0 to k>0𝑘0k>0), emphasizing that our structure-aware attention is the more important aspect of the model. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_41",
"text": " Finally, we compare the performance of SAT models using different readout methods for aggregating node-level representations on the ZINC dataset in Figure 3(c), including the CLS pooling discussed in Section 4.2. Unlike the remarkable influence of the readout method in GNNs (Xu et al., 2019), we observe very little impact in SAT models. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_42",
"text": " In addition to performance improvement, we show that SAT offers better model interpretability compared to the classic Transformer with only absolute positional encoding. We respectively train a SAT model and a Transformer with a CLS readout on the Mutagenicity dataset, and visualize the attention scores between the (CLS) node and other nodes learned by SAT and the Transformer in Figure 4. The salient difference between the two models is that SAT has structure-aware node embeddings, and thus we can attribute the following interpretability gains to that. While both models manage to identify some chemical motifs known for mutagenicity, such as NO2 and NH2, the attention scores learned by SAT are sparser and more informative, meaning that SAT puts more attention weights on these known mutagenic motifs than the Transformer with RWPE. The vanilla Transformer even fails to put attention on some important atoms such as the H atoms in the NH2 group. The only H atoms highlighted by SAT are those in the NH2 group, suggesting that our SAT indeed takes the structure into account. More focus on these discriminative motifs makes the SAT model less influenced by other chemical patterns that commonly exist in the dataset, such as benzene, and thus leads to overall improved performance. More results are provided in the Appendix. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_43",
"text": " We introduced the SAT model, which successfully incorporates structural information into the Transformer architecture and overcomes the limitations of the absolute encoding. In addition to SOTA empirical performance with minimal hyperparameter tuning, SAT also provides better interpretability than the Transformer. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_44",
"text": " As mentioned above, k𝑘k-subgraph SAT has higher memory requirements than k𝑘k-subtree SAT, which can restrict its applicability if access to high memory GPUs is restricted. We see the main limitation of SAT is that it suffers from the same drawbacks as the Transformer, namely the quadratic complexity of the self-attention computation. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_45",
"text": " Because SAT can be combined with any GNN, a natural extension of our work is to combine SAT with structure extractors which have shown to be strictly more expressive than the 1-WL test, such as the recent topological GNN introduced by Horn et al. (2021). Additionally, the SAT framework is flexible and can incorporate any structure extractor which produces structure-aware node representations, and could even be extended beyond using GNNs, such as differentiable graph kernels. ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
},
{
"id": "2202.03036_all_46",
"text": " Another important area for future work is to focus on reducing the high memory cost and time complexity of the self-attention computation, as is being done in recent efforts for developing a so-called linear transformer, which has linear complexity in both time and space requirements (Tay et al., 2020; Wang et al., 2020; Qin et al., 2022). ",
"title": "Structure-Aware Transformer for Graph Representation Learning"
}
] |
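To make the structure-aware attention walked through in the passages above (Eqs. (3)–(7) of the SAT paper) concrete, here is a minimal Python/NumPy sketch. It is only an illustrative toy, not the authors' code: the k-subtree GNN extractor is stood in for by simple mean aggregation over the adjacency matrix, and the projection matrices `W_Q`, `W_K`, `W_V` are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def subtree_extractor(A, X, k=2):
    """Toy k-subtree structure extractor phi(u, G):
    k rounds of mean aggregation over neighbours (a stand-in for a real GNN)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)  # row-normalize
    H = X
    for _ in range(k):
        H = A_hat @ H                              # aggregate neighbour features
    return H                                       # one structure-aware vector per node

def structure_aware_attention(A, X, d_out=8, k=2):
    """SA-attn(v) = sum_u kappa(phi(v,G), phi(u,G)) / Z_v * f(x_u)  (Eq. 5)."""
    d_in = X.shape[1]
    W_Q = rng.normal(size=(d_in, d_out))
    W_K = rng.normal(size=(d_in, d_out))
    W_V = rng.normal(size=(d_in, d_out))
    phi = subtree_extractor(A, X, k)               # queries/keys use subgraph reps
    Q, K = phi @ W_Q, phi @ W_K
    V = X @ W_V                                    # values use raw node features
    scores = Q @ K.T / np.sqrt(d_out)              # kappa_exp on subgraph reps (Eq. 6)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)  # row-wise softmax
    return attn @ V                                # new node representations

# Tiny example: a 5-node path graph with random 4-dimensional node features.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
X = rng.normal(size=(5, 4))
print(structure_aware_attention(A, X).shape)       # (5, 8)
```

The only difference from vanilla self-attention is where the queries and keys come from: structurally dissimilar nodes get low attention weights even when their raw features coincide.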
Is it true, as the authors suggest, that a neural network's depth is essential to its success?
|
As mentioned in several passages, network depth is essential for representing more complex functions efficiently, and this representational power has been essential to the networks' success [0].
|
[
0
] |
[
{
"id": "1507.06228_all_0",
"text": " Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, within just a few years, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ∼similar-to\\sim84% to ∼similar-to\\sim95% (2, 3) using deeper networks with rather small receptive fields (4, 5). Other results on practical machine learning problems have also underscored the superiority of deeper networks in terms of accuracy and/or performance. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_1",
"text": " In fact, deep networks can represent certain function classes far more efficiently than shallow ones. This is perhaps most obvious for recurrent nets, the deepest of them all. For example, the n𝑛n bit parity problem can in principle be learned by a large feedforward net with n𝑛n binary input units, 1 output unit, and a single but large hidden layer. But the natural solution for arbitrary n𝑛n is a recurrent net with only 3 units and 5 weights, reading the input bit string one bit at a time, making a single recurrent hidden unit flip its state whenever a new 1 is observed . Related observations hold for Boolean circuits (8, 9) and modern neural networks (10, 11, 12). ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_2",
"text": " To deal with the difficulties of training deep networks, some researchers have focused on developing better optimizers (e.g. (13, 14, 15)). Well-designed initialization strategies, in particular the normalized variance-preserving initialization for certain activation functions (16, 17), have been widely adopted for training moderately deep networks. Other similarly motivated strategies have shown promising results in preliminary experiments (18, 19). Experiments showed that certain activation functions based on local competition (20, 21) may help to train deeper networks. Skip connections between layers or to output layers (where error is “injected”) have long been used in neural networks, more recently with the explicit aim to improve the flow of information (22, 23, 2, 24). A related recent technique is based on using soft targets from a shallow teacher network to aid in training deeper student networks in multiple stages , similar to the neural history compressor for sequences, where a slowly ticking teacher recurrent net is “distilled” into a quickly ticking student recurrent net by forcing the latter to predict the hidden units of the former . Finally, deep networks can be trained layer-wise to help in credit assignment (26, 27), but this approach is less attractive compared to direct training. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_3",
"text": " Very deep network training still faces problems, albeit perhaps less fundamental ones than the problem of vanishing gradients in standard recurrent networks . The stacking of several non-linear transformations in conventional feed-forward network architectures typically results in poor propagation of activations and gradients. Hence it remains hard to investigate the benefits of very deep networks for a variety of problems. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_4",
"text": " To overcome this, we take inspiration from Long Short Term Memory (LSTM) recurrent networks (29, 30). We propose to modify the architecture of very deep feedforward networks such that information flow across layers becomes much easier. This is accomplished through an LSTM-inspired adaptive gating mechanism that allows for computation paths along which information can flow across many layers without attenuation. We call such paths information highways. They yield highway networks, as opposed to traditional ‘plain’ networks.111This paper expands upon a shorter report on Highway Networks . More recently, a similar LSTM-inspired model was also proposed . ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_5",
"text": " Our primary contribution is to show that extremely deep highway networks can be trained directly using stochastic gradient descent (SGD), in contrast to plain networks which become hard to optimize as depth increases (Section 3.1). Deep networks with limited computational budget (for which a two-stage training procedure mentioned above was recently proposed ) can also be directly trained in a single stage when converted to highway networks. Their ease of training is supported by experimental results demonstrating that highway networks also generalize well to unseen data. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_6",
"text": " We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 𝟎0\\mathbf{0} and 𝟏1\\mathbf{1} denote vectors of zeros and ones respectively, and 𝐈𝐈\\mathbf{I} denotes an identity matrix. The function σ(x)𝜎𝑥\\sigma(x) is defined as σ(x)=11+e−x,x∈ℝformulae-sequence𝜎𝑥11superscript𝑒𝑥𝑥ℝ\\sigma(x)=\\frac{1}{1+e^{-x}},x\\in\\mathbb{R}. The dot operator (⋅⋅\\cdotp) is used to denote element-wise multiplication. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_7",
"text": " A plain feedforward neural network typically consists of L𝐿L layers where the lthsuperscript𝑙𝑡ℎl^{th} layer (l∈{1,2,…,L}𝑙12…𝐿l\\in\\{1,2,...,L\\}) applies a non-linear transformation H𝐻H (parameterized by 𝐖𝐇,𝐥subscript𝐖𝐇𝐥\\mathbf{W_{H,l}}) on its input 𝐱𝐥subscript𝐱𝐥\\mathbf{x_{l}} to produce its output 𝐲𝐥subscript𝐲𝐥\\mathbf{y_{l}}. Thus, 𝐱𝟏subscript𝐱1\\mathbf{x_{1}} is the input to the network and 𝐲𝐋subscript𝐲𝐋\\mathbf{y_{L}} is the network’s output. Omitting the layer index and biases for clarity, ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_8",
"text": " 𝐲=H(𝐱,𝐖𝐇).𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}}). (1) ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_9",
"text": " H𝐻H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms, possibly convolutional or recurrent. For a highway network, we additionally define two non-linear transforms T(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) and C(𝐱,𝐖𝐂)𝐶𝐱subscript𝐖𝐂C(\\mathbf{x},\\mathbf{W_{C}}) such that ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_10",
"text": " 𝐲=H(𝐱,𝐖𝐇)⋅T(𝐱,𝐖𝐓)+𝐱⋅C(𝐱,𝐖𝐂).𝐲⋅𝐻𝐱subscript𝐖𝐇𝑇𝐱subscript𝐖𝐓⋅𝐱𝐶𝐱subscript𝐖𝐂\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}})\\cdotp T(\\mathbf{x},\\mathbf{W_{T}})+\\mathbf{x}\\cdot C(\\mathbf{x},\\mathbf{W_{C}}). (2) ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_11",
"text": " We refer to T𝑇T as the transform gate and C𝐶C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. For simplicity, in this paper we set C=1−T𝐶1𝑇C=1-T, giving ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_12",
"text": " 𝐲=H(𝐱,𝐖𝐇)⋅T(𝐱,𝐖𝐓)+𝐱⋅(1−T(𝐱,𝐖𝐓)).𝐲⋅𝐻𝐱subscript𝐖𝐇𝑇𝐱subscript𝐖𝐓⋅𝐱1𝑇𝐱subscript𝐖𝐓\\mathbf{y}=H(\\mathbf{x},\\mathbf{W_{H}})\\cdotp T(\\mathbf{x},\\mathbf{W_{T}})+\\mathbf{x}\\cdot(1-T(\\mathbf{x},\\mathbf{W_{T}})). (3) ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_13",
"text": " The dimensionality of 𝐱,𝐲,H(𝐱,𝐖𝐇)𝐱𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{x},\\mathbf{y},H(\\mathbf{x},\\mathbf{W_{H}}) and T(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) must be the same for Equation 3 to be valid. Note that this layer transformation is much more flexible than Equation 1. In particular, observe that for particular values of T𝑇T, ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_14",
"text": " 𝐲={𝐱,if T(𝐱,𝐖𝐓)=𝟎,H(𝐱,𝐖𝐇),if T(𝐱,𝐖𝐓)=𝟏.𝐲cases𝐱if 𝑇𝐱subscript𝐖𝐓0𝐻𝐱subscript𝐖𝐇if 𝑇𝐱subscript𝐖𝐓1\\mathbf{y}=\\begin{cases}\\mathbf{x},&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{0},\\\\ H(\\mathbf{x},\\mathbf{W_{H}}),&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{1}.\\end{cases} (4) ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_15",
"text": " Similarly, for the Jacobian of the layer transform, ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_16",
"text": " d𝐲d𝐱={𝐈,if T(𝐱,𝐖𝐓)=𝟎,H′(𝐱,𝐖𝐇),if T(𝐱,𝐖𝐓)=𝟏.𝑑𝐲𝑑𝐱cases𝐈if 𝑇𝐱subscript𝐖𝐓0superscript𝐻′𝐱subscript𝐖𝐇if 𝑇𝐱subscript𝐖𝐓1\\frac{d\\mathbf{y}}{d\\mathbf{x}}=\\begin{cases}\\mathbf{I},&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{0},\\\\ H^{\\prime}(\\mathbf{x},\\mathbf{W_{H}}),&\\text{if }T(\\mathbf{x},\\mathbf{W_{T}})=\\mathbf{1}.\\end{cases} (5) ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_17",
"text": " Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of H𝐻H and that of a layer which simply passes its inputs through. Just as a plain layer consists of multiple computing units such that the ithsuperscript𝑖𝑡ℎi^{th} unit computes yi=Hi(𝐱)subscript𝑦𝑖subscript𝐻𝑖𝐱y_{i}=H_{i}(\\mathbf{x}), a highway network consists of multiple blocks such that the ithsuperscript𝑖𝑡ℎi^{th} block computes a block state Hi(𝐱)subscript𝐻𝑖𝐱H_{i}(\\mathbf{x}) and transform gate output Ti(𝐱)subscript𝑇𝑖𝐱T_{i}(\\mathbf{x}). Finally, it produces the block output yi=Hi(𝐱)∗Ti(𝐱)+xi∗(1−Ti(𝐱))subscript𝑦𝑖subscript𝐻𝑖𝐱subscript𝑇𝑖𝐱subscript𝑥𝑖1subscript𝑇𝑖𝐱y_{i}=H_{i}(\\mathbf{x})*T_{i}(\\mathbf{x})+x_{i}*(1-T_{i}(\\mathbf{x})), which is connected to the next layer.222Our pilot experiments on training very deep networks were successful with a more complex block design closely resembling an LSTM block “unrolled in time”. Here we report results only for a much simplified form. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_18",
"text": " As mentioned earlier, Equation 3 requires that the dimensionality of 𝐱,𝐲,H(𝐱,𝐖𝐇)𝐱𝐲𝐻𝐱subscript𝐖𝐇\\mathbf{x},\\mathbf{y},H(\\mathbf{x},\\mathbf{W_{H}}) and T(𝐱,𝐖𝐓)𝑇𝐱subscript𝐖𝐓T(\\mathbf{x},\\mathbf{W_{T}}) be the same. To change the size of the intermediate representation, one can replace 𝐱𝐱\\mathbf{x} with 𝐱^^𝐱\\mathbf{\\hat{x}} obtained by suitably sub-sampling or zero-padding 𝐱𝐱\\mathbf{x}. Another alternative is to use a plain layer (without highways) to change dimensionality, which is the strategy we use in this study. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_19",
"text": " Convolutional highway layers utilize weight-sharing and local receptive fields for both H𝐻H and T𝑇T transforms. We used the same sized receptive fields for both, and zero-padding to ensure that the block state and transform gate feature maps match the input size. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_20",
"text": " We use the transform gate defined as T(𝐱)=σ(𝐖𝐓T𝐱+𝐛𝐓)𝑇𝐱𝜎superscriptsubscript𝐖𝐓𝑇𝐱subscript𝐛𝐓T(\\mathbf{x})=\\sigma(\\mathbf{W_{T}}^{T}\\mathbf{x}+\\mathbf{b_{T}}), where 𝐖𝐓subscript𝐖𝐓\\mathbf{W_{T}} is the weight matrix and 𝐛𝐓subscript𝐛𝐓\\mathbf{b_{T}} the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H𝐻H: bTsubscript𝑏𝑇b_{T} can be initialized with a negative value (e.g. -1, -3 etc.) such that the network is initially biased towards carry behavior. This scheme is strongly inspired by the proposal to initially bias the gates in an LSTM network, to help bridge long-term temporal dependencies early in learning. Note that σ(x)∈(0,1),∀x∈ℝformulae-sequence𝜎𝑥01for-all𝑥ℝ\\sigma(x)\\in(0,1),\\forall x\\in\\mathbb{R}, so the conditions in Equation 4 can never be met exactly. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_21",
"text": " In our experiments, we found that a negative bias initialization for the transform gates was sufficient for training to proceed in very deep networks for various zero-mean initial distributions of WHsubscript𝑊𝐻W_{H} and different activation functions used by H𝐻H. In pilot experiments, SGD did not stall for networks with more than 1000 layers. Although the initial bias is best treated as a hyperparameter, as a general guideline we suggest values of -1, -2 and -3 for convolutional highway networks of depth approximately 10, 20 and 30. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_22",
"text": " All networks were trained using SGD with momentum. An exponentially decaying learning rate was used in Section 3.1. For the rest of the experiments, a simpler commonly used strategy was employed where the learning rate starts at a value λ𝜆\\lambda and decays according to a fixed schedule by a factor γ𝛾\\gamma. λ𝜆\\lambda, γ𝛾\\gamma and the schedule were selected once based on validation set performance on the CIFAR-10 dataset, and kept fixed for all experiments. All convolutional highway networks utilize the rectified linear activation function to compute the block state H𝐻H. To provide a better estimate of the variability of classification results due to random initialization, we report our results in the format Best (mean ±plus-or-minus\\pm std.dev.) based on 5 runs wherever available. Experiments were conducted using Caffe and Brainstorm (https://github.com/IDSIA/brainstorm) frameworks. Source code, hyperparameter search results and related scripts are publicly available at http://people.idsia.ch/~rupesh/very_deep_learning/. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_23",
"text": " To support the hypothesis that highway networks do not suffer from increasing depth, we conducted a series of rigorous optimization experiments, comparing them to plain networks with normalized initialization (16, 17). ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_24",
"text": " We trained both plain and highway networks of varying varying depths on the MNIST digit classification dataset. All networks are thin: each layer has 50 blocks for highway networks and 71 units for plain networks, yielding roughly identical numbers of parameters (≈\\approx5000) per layer. In all networks, the first layer is a fully connected plain layer followed by 9, 19, 49, or 99 fully connected plain or highway layers. Finally, the network output is produced by a softmax layer. We performed a random search of 100 runs for both plain and highway networks to find good settings for the following hyperparameters: initial learning rate, momentum, learning rate exponential decay factor & activation function (either rectified linear or tanh). For highway networks, an additional hyperparameter was the initial value for the transform gate bias (between -1 and -10). Other weights were initialized using the same normalized initialization as plain networks. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_25",
"text": " The training curves for the best performing networks for each depth are shown in Figure 1. As expected, 10 and 20-layer plain networks exhibit very good performance (mean loss <1e−4absent1superscript𝑒4<1e^{-4}), which significantly degrades as depth increases, even though network capacity increases. Highway networks do not suffer from an increase in depth, and 50/100 layer highway networks perform similar to 10/20 layer networks. The 100-layer highway network performed more than 2 orders of magnitude better compared to a similarly-sized plain network. It was also observed that highway networks consistently converged significantly faster than plain ones. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_26",
"text": " As a sanity check for the generalization capability of highway networks, we trained 10-layer convolutional highway networks on MNIST, using two architectures, each with 9 convolutional layers followed by a softmax output. The number of filter maps (width) was set to 16 and 32 for all the layers. We obtained test set performance competitive with state-of-the-art methods with much fewer parameters, as show in Table 1. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_27",
"text": " Maxout networks can cope much better with increased depth than those with traditional activation functions . However, Romero et. al. recently reported that training on CIFAR-10 through plain backpropogation was only possible for maxout networks with a depth up to 5 layers when the number of parameters was limited to ∼similar-to\\sim250K and the number of multiplications to ∼similar-to\\sim30M. Similar limitations were observed for higher computational budgets. Training of deeper networks was only possible through the use of a two-stage training procedure and addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_28",
"text": " We found that it was easy to train highway networks with numbers of parameters and operations comparable to those of fitnets in a single stage using SGD. As shown in Table 2, Highway A and Highway B, which are based on the architectures of Fitnet A and Fitnet B, respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: for example a 32-layer highway network consisting of alternating receptive fields of size 3x3 and 1x1 with ∼similar-to\\sim1.25M parameters performs better than the earlier teacher network . ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_29",
"text": " It is possible to obtain high performance on the CIFAR-10 and CIFAR-100 datasets by utilizing very large networks and extensive data augmentation. This approach was popularized by Ciresan et. al. and recently extended by Graham . Since our aim is only to demonstrate that deeper networks can be trained without sacrificing ease of training or generalization ability, we only performed experiments in the more common setting of global contrast normalization, small translations and mirroring of images. Following Lin et. al. , we replaced the fully connected layer used in the networks in the previous section with a convolutional layer with a receptive field of size one and a global average pooling layer. The hyperparameters from the last section were re-used for both CIFAR-10 and CIFAR-100, therefore it is quite possible to obtain much better results with better architectures/hyperparameters. The results are tabulated in Table 3. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_30",
"text": " Figure 2 illustrates the inner workings of the best333obtained via random search over hyperparameters to minimize the best training set error achieved using each configuration 50 hidden layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first three columns show the bias, the mean activity over all training samples, and the activity for a single random sample for each transform gate respectively. Block outputs for the same single sample are displayed in the last column. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_31",
"text": " The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that contrary to our expectations most biases decreased further during training. For the CIFAR-100 network the biases increase with depth forming a gradient. Curiously this gradient is inversely correlated with the average activity of the transform gates, as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. The effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_32",
"text": " The last column of Figure 2 displays the block outputs and visualizes the concept of “information highways”. Most of the outputs stay constant over many layers forming a pattern of stripes. Most of the change in outputs happens in the early layers (≈15absent15\\approx 15 for MNIST and ≈40absent40\\approx 40 for CIFAR-100). ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_33",
"text": " One possible advantage of the highway architecture over hard-wired shortcut connections is that the network can learn to dynamically adjust the routing of the information based on the current input. This begs the question: does this behaviour manifest itself in trained networks or do they just learn a static routing that applies to all inputs similarly. A partial answer can be found by looking at the mean transform gate activity (second column) and the single example transform gate outputs (third column) in Figure 2. Especially for the CIFAR-100 case, most transform gates are active on average, while they show very selective activity for the single example. This implies that for each sample only a few blocks perform transformation but different blocks are utilized by different samples. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_34",
"text": " This data-dependent routing mechanism is further investigated in Figure 3. In each of the columns we show how the average over all samples of one specific class differs from the total average shown in the second column of Figure 2. For MNIST digits 0 and 7 substantial differences can be seen within the first 15 layers, while for CIFAR class numbers 0 and 1 the differences are sparser and spread out over all layers. In both cases it is clear that the mean activity pattern differs between classes. The gating system acts not just as a mechanism to ease training, but also as an important part of the computation in a trained network. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_35",
"text": " Since we bias all the transform gates towards being closed, in the beginning every layer mostly copies the activations of the previous layer. Does training indeed change this behaviour, or is the final network still essentially equivalent to a network with a much fewer layers? To shed light on this issue, we investigated the extent to which lesioning a single layer affects the total performance of trained networks from Section 3.1. By lesioning, we mean manually setting all the transform gates of a layer to 0 forcing it to simply copy its inputs. For each layer, we evaluated the network on the full training set with the gates of that layer closed. The resulting performance as a function of the lesioned layer is shown in Figure 4. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_36",
"text": " For MNIST (left) it can be seen that the error rises significantly if any one of the early layers is removed, but layers 15−45154515-45 seem to have close to no effect on the final performance. About 60% of the layers don’t learn to contribute to the final result, likely because MNIST is a simple dataset that doesn’t require much depth. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_37",
"text": " We see a different picture for the CIFAR-100 dataset (right) with performance degrading noticeably when removing any of the first ≈40absent40\\approx 40 layers. This suggests that for complex problems a highway network can learn to utilize all of its layers, while for simpler problems like MNIST it will keep many of the unneeded layers idle. Such behavior is desirable for deep networks in general, but appears difficult to obtain using plain networks. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_38",
"text": " Alternative approaches to counter the difficulties posed by depth mentioned in Section 1 often have several limitations. Learning to route information through neural networks with the help of competitive interactions has helped to scale up their application to challenging problems by improving credit assignment , but they still suffer when depth increases beyond ≈\\approx20 even with careful initialization . Effective initialization methods can be difficult to derive for a variety of activation functions. Deep supervision has been shown to hurt performance of thin deep networks . ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_39",
"text": " Very deep highway networks, on the other hand, can directly be trained with simple gradient descent methods due to their specific architecture. This property does not rely on specific non-linear transformations, which may be complex convolutional or recurrent transforms, and derivation of a suitable initialization scheme is not essential. The additional parameters required by the gating mechanism help in routing information through the use of multiplicative connections, responding differently to different inputs, unlike fixed “skip” connections. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_40",
"text": " A possible objection is that many layers might remain unused if the transform gates stay closed. Our experiments show that this possibility does not affect networks adversely—deep and narrow highway networks can match/exceed the accuracy of wide and shallow maxout networks, which would not be possible if layers did not perform useful computations. Additionally, we can exploit the structure of highways to directly evaluate the contribution of each layer as shown in Figure 4. For the first time, highway networks allow us to examine how much computation depth is needed for a given problem, which can not be easily done with plain networks. ",
"title": "Training Very Deep Networks"
},
{
"id": "1507.06228_all_41",
"text": " We thank NVIDIA Corporation for their donation of GPUs and acknowledge funding from the EU project NASCENCE (FP7-ICT-317662). We are grateful to Sepp Hochreiter and Thomas Unterthiner for helpful comments and Jan Koutník for help in conducting experiments. ",
"title": "Training Very Deep Networks"
}
] |
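The highway-layer equations quoted in the passages above (Eqs. (2)–(4)) translate almost directly into code. The following is a minimal NumPy sketch, not the authors' implementation; the layer width of 50, the tanh block non-linearity, and the gate bias of -2 are illustrative assumptions taken loosely from the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class HighwayLayer:
    """y = H(x, W_H) * T(x, W_T) + x * (1 - T(x, W_T))   (Eq. 3)."""

    def __init__(self, dim, gate_bias=-2.0, rng=None):
        rng = rng or np.random.default_rng(0)
        scale = np.sqrt(2.0 / (dim + dim))        # normalized initialization
        self.W_H = rng.normal(0.0, scale, size=(dim, dim))
        self.b_H = np.zeros(dim)
        # Transform gate T; the negative bias biases the layer towards
        # carrying its input unchanged early in training, as the paper suggests.
        self.W_T = rng.normal(0.0, scale, size=(dim, dim))
        self.b_T = np.full(dim, gate_bias)

    def forward(self, x):
        h = np.tanh(x @ self.W_H + self.b_H)      # block state H(x, W_H)
        t = sigmoid(x @ self.W_T + self.b_T)      # transform gate T(x, W_T)
        return h * t + x * (1.0 - t)              # carry gate C = 1 - T

# Stacking many layers: with the gates nearly closed at initialization,
# the input passes through almost unchanged, which is what makes very
# deep stacks trainable by plain SGD.
x = np.random.default_rng(1).normal(size=(4, 50))  # batch of 4, width 50
for layer in [HighwayLayer(50) for _ in range(20)]:
    x = layer.forward(x)
print(x.shape)  # (4, 50)
```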
What is the key difference in model structure between MobileNet-style models and ShuffleNet?
|
ShuffleNet introduces group convolutions and channel shuffling, whereas existing MobileNet-style models do not use these operations [46].
|
[
46
] |
[
{
"id": "1801.04381_all_0",
"text": " Neural networks have revolutionized many areas of machine intelligence, enabling superhuman accuracy for challenging image recognition tasks. However, the drive to improve accuracy often comes at a cost: modern state of the art networks require high computational resources beyond the capabilities of many mobile and embedded applications. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_1",
"text": " This paper introduces a new neural network architecture that is specifically tailored for mobile and resource constrained environments. Our network pushes the state of the art for mobile tailored computer vision models, by significantly decreasing the number of operations and memory needed while retaining the same accuracy. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_2",
"text": " Our main contribution is a novel layer module: the inverted residual with linear bottleneck. This module takes as an input a low-dimensional compressed representation which is first expanded to high dimension and filtered with a lightweight depthwise convolution. Features are subsequently projected back to a low-dimensional representation with a linear convolution. The official implementation is available as part of TensorFlow-Slim model library in . ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_3",
"text": " This module can be efficiently implemented using standard operations in any modern framework and allows our models to beat state of the art along multiple performance points using standard benchmarks. Furthermore, this convolutional module is particularly suitable for mobile designs, because it allows to significantly reduce the memory footprint needed during inference by never fully materializing large intermediate tensors. This reduces the need for main memory access in many embedded hardware designs, that provide small amounts of very fast software controlled cache memory. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_4",
"text": " Tuning deep neural architectures to strike an optimal balance between accuracy and performance has been an area of active research for the last several years. Both manual architecture search and improvements in training algorithms, carried out by numerous teams has lead to dramatic improvements over early designs such as AlexNet , VGGNet , GoogLeNet . , and ResNet . Recently there has been lots of progress in algorithmic architecture exploration included hyper-parameter optimization (9, 10, 11) as well as various methods of network pruning (12, 13, 14, 15, 16, 17) and connectivity learning (18, 19). A substantial amount of work has also been dedicated to changing the connectivity structure of the internal convolutional blocks such as in ShuffleNet or introducing sparsity and others . ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_5",
"text": " Recently, (23, 24, 25, 26), opened up a new direction of bringing optimization methods including genetic algorithms and reinforcement learning to architectural search. However one drawback is that the resulting networks end up very complex. In this paper, we pursue the goal of developing better intuition about how neural networks operate and use that to guide the simplest possible network design. Our approach should be seen as complimentary to the one described in and related work. In this vein our approach is similar to those taken by (20, 22) and allows to further improve the performance, while providing a glimpse on its internal operation. Our network design is based on MobileNetV1 . It retains its simplicity and does not require any special operators while significantly improves its accuracy, achieving state of the art on multiple image classification and detection tasks for mobile applications. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_6",
"text": " Depthwise Separable Convolutions are a key building block for many efficient neural network architectures (27, 28, 20) and we use them in the present work as well. The basic idea is to replace a full convolutional operator with a factorized version that splits convolution into two separate layers. The first layer is called a depthwise convolution, it performs lightweight filtering by applying a single convolutional filter per input channel. The second layer is a 1×1111\\times 1 convolution, called a pointwise convolution, which is responsible for building new features through computing linear combinations of the input channels. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_7",
"text": " Standard convolution takes an hi×wi×disubscriptℎ𝑖subscript𝑤𝑖subscript𝑑𝑖h_{i}\\times w_{i}\\times d_{i} input tensor Lisubscript𝐿𝑖L_{i}, and applies convolutional kernel K∈ℛk×k×di×dj𝐾superscriptℛ𝑘𝑘subscript𝑑𝑖subscript𝑑𝑗K\\in{\\cal R}^{k\\times k\\times d_{i}\\times d_{j}} to produce an hi×wi×djsubscriptℎ𝑖subscript𝑤𝑖subscript𝑑𝑗h_{i}\\times w_{i}\\times d_{j} output tensor Ljsubscript𝐿𝑗L_{j}. Standard convolutional layers have the computational cost of hi⋅wi⋅di⋅dj⋅k⋅k⋅subscriptℎ𝑖subscript𝑤𝑖subscript𝑑𝑖subscript𝑑𝑗𝑘𝑘h_{i}\\cdot w_{i}\\cdot d_{i}\\cdot d_{j}\\cdot k\\cdot k. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_8",
"text": " Depthwise separable convolutions are a drop-in replacement for standard convolutional layers. Empirically they work almost as well as regular convolutions but only cost: hi⋅wi⋅di(k2+dj)⋅subscriptℎ𝑖subscript𝑤𝑖subscript𝑑𝑖superscript𝑘2subscript𝑑𝑗h_{i}\\cdot w_{i}\\cdot d_{i}(k^{2}+d_{j}) (1) which is the sum of the depthwise and 1×1111\\times 1 pointwise convolutions. Effectively depthwise separable convolution reduces computation compared to traditional layers by almost a factor of k2superscript𝑘2k^{2}111more precisely, by a factor k2dj/(k2+dj)superscript𝑘2subscript𝑑𝑗superscript𝑘2subscript𝑑𝑗k^{2}d_{j}/(k^{2}+d_{j}). MobileNetV2 uses k=3𝑘3k=3 (3×3333\\times 3 depthwise separable convolutions) so the computational cost is 888 to 999 times smaller than that of standard convolutions at only a small reduction in accuracy . ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
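The cost comparison above is easy to check numerically. Below is a minimal Python sketch (the function names are ours, not from the paper) that counts multiply-adds for a standard convolution versus a depthwise separable one on an example feature map.

```python
def standard_conv_madds(h, w, d_in, d_out, k):
    # h x w output positions, d_in input channels, d_out filters, k x k kernel
    return h * w * d_in * d_out * k * k

def depthwise_separable_madds(h, w, d_in, d_out, k):
    # depthwise k x k convolution plus 1x1 pointwise projection
    return h * w * d_in * (k * k + d_out)

# Example: a 56x56 feature map, 64 input channels, 128 output channels, k = 3
std = standard_conv_madds(56, 56, 64, 128, 3)
sep = depthwise_separable_madds(56, 56, 64, 128, 3)
print(std / sep)  # k^2 * d_out / (k^2 + d_out) = 1152 / 137, roughly 8.4x cheaper
```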
{
"id": "1801.04381_all_9",
"text": " Consider a deep neural network consisting of n𝑛n layers Lisubscript𝐿𝑖L_{i} each of which has an activation tensor of dimensions hi×wi×disubscriptℎ𝑖subscript𝑤𝑖subscript𝑑𝑖h_{i}\\times w_{i}\\times d_{i}. Throughout this section we will be discussing the basic properties of these activation tensors, which we will treat as containers of hi×wisubscriptℎ𝑖subscript𝑤𝑖h_{i}\\times w_{i} “pixels” with disubscript𝑑𝑖d_{i} dimensions. Informally, for an input set of real images, we say that the set of layer activations (for any layer Lisubscript𝐿𝑖L_{i}) forms a “manifold of interest”. It has been long assumed that manifolds of interest in neural networks could be embedded in low-dimensional subspaces. In other words, when we look at all individual d𝑑d-channel pixels of a deep convolutional layer, the information encoded in those values actually lie in some manifold, which in turn is embeddable into a low-dimensional subspace222Note that dimensionality of the manifold differs from the dimensionality of a subspace that could be embedded via a linear transformation.. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_10",
"text": " At a first glance, such a fact could then be captured and exploited by simply reducing the dimensionality of a layer thus reducing the dimensionality of the operating space. This has been successfully exploited by MobileNetV1 to effectively trade off between computation and accuracy via a width multiplier parameter, and has been incorporated into efficient model designs of other networks as well . Following that intuition, the width multiplier approach allows one to reduce the dimensionality of the activation space until the manifold of interest spans this entire space. However, this intuition breaks down when we recall that deep convolutional neural networks actually have non-linear per coordinate transformations, such as ReLUReLU\\operatorname{ReLU}. For example, ReLUReLU\\operatorname{ReLU} applied to a line in 1D space produces a ’ray’, where as in ℛnsuperscriptℛ𝑛{\\cal R}^{n} space, it generally results in a piece-wise linear curve with n𝑛n-joints. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_11",
"text": " It is easy to see that in general if a result of a layer transformation ReLU(Bx)ReLU𝐵𝑥\\operatorname{ReLU}(Bx) has a non-zero volume S𝑆S, the points mapped to interiorSinterior𝑆\\operatorname{interior}{S} are obtained via a linear transformation B𝐵B of the input, thus indicating that the part of the input space corresponding to the full dimensional output, is limited to a linear transformation. In other words, deep networks only have the power of a linear classifier on the non-zero volume part of the output domain. We refer to supplemental material for a more formal statement. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_12",
"text": " On the other hand, when ReLUReLU\\operatorname{ReLU} collapses the channel, it inevitably loses information in that channel. However if we have lots of channels, and there is a a structure in the activation manifold that information might still be preserved in the other channels. In supplemental materials, we show that if the input manifold can be embedded into a significantly lower-dimensional subspace of the activation space then the ReLUReLU\\operatorname{ReLU} transformation preserves the information while introducing the needed complexity into the set of expressible functions. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_13",
"text": " To summarize, we have highlighted two properties that are indicative of the requirement that the manifold of interest should lie in a low-dimensional subspace of the higher-dimensional activation space: ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_14",
"text": " 1. If the manifold of interest remains non-zero volume after ReLUReLU\\operatorname{ReLU} transformation, it corresponds to a linear transformation. 2. ReLUReLU\\operatorname{ReLU} is capable of preserving complete information about the input manifold, but only if the input manifold lies in a low-dimensional subspace of the input space. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_15",
"text": " These two insights provide us with an empirical hint for optimizing existing neural architectures: assuming the manifold of interest is low-dimensional we can capture this by inserting linear bottleneck layers into the convolutional blocks. Experimental evidence suggests that using linear layers is crucial as it prevents non-linearities from destroying too much information. In Section 6, we show empirically that using non-linear layers in bottlenecks indeed hurts the performance by several percent, further validating our hypothesis333We note that in the presence of shortcuts the information loss is actually less strong.. We note that similar reports where non-linearity was helped were reported in where non-linearity was removed from the input of the traditional residual block and that lead to improved performance on CIFAR dataset. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_16",
"text": " For the remainder of this paper we will be utilizing bottleneck convolutions. We will refer to the ratio between the size of the input bottleneck and the inner size as the expansion ratio. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_17",
"text": " The bottleneck blocks appear similar to residual block where each block contains an input followed by several bottlenecks then followed by expansion . However, inspired by the intuition that the bottlenecks actually contain all the necessary information, while an expansion layer acts merely as an implementation detail that accompanies a non-linear transformation of the tensor, we use shortcuts directly between the bottlenecks. Figure 3 provides a schematic visualization of the difference in the designs. The motivation for inserting shortcuts is similar to that of classical residual connections: we want to improve the ability of a gradient to propagate across multiplier layers. However, the inverted design is considerably more memory efficient (see Section 4 for details), as well as works slightly better in our experiments. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
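For concreteness, here is a minimal PyTorch sketch of the block just described: a 1×1 expansion with ReLU6, a 3×3 depthwise convolution, a linear 1×1 projection, and a shortcut placed between the thin bottleneck tensors. It is an illustrative approximation, not the authors' reference implementation; the class name and the `expansion` argument are our own.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of a MobileNetV2-style inverted residual block."""
    def __init__(self, c_in, c_out, stride=1, expansion=6):
        super().__init__()
        hidden = c_in * expansion
        self.use_shortcut = (stride == 1 and c_in == c_out)
        self.block = nn.Sequential(
            # 1x1 expansion with non-linearity
            nn.Conv2d(c_in, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (one filter per channel)
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # linear 1x1 projection back to the bottleneck (no activation)
            nn.Conv2d(hidden, c_out, 1, bias=False),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        out = self.block(x)
        # shortcut connects the thin bottleneck tensors, not the expanded ones
        return x + out if self.use_shortcut else out

y = InvertedResidual(24, 24)(torch.randn(1, 24, 56, 56))  # usage sketch
```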
{
"id": "1801.04381_all_18",
"text": " The basic implementation structure is illustrated in Table 1. For a block of size h×wℎ𝑤h\\times w, expansion factor t𝑡t and kernel size k𝑘k with d′superscript𝑑′d^{\\prime} input channels and d′′superscript𝑑′′d^{\\prime\\prime} output channels, the total number of multiply add required is h⋅w⋅d′⋅t(d′+k2+d′′)⋅ℎ𝑤superscript𝑑′𝑡superscript𝑑′superscript𝑘2superscript𝑑′′h\\cdot w\\cdot d^{\\prime}\\cdot t(d^{\\prime}+k^{2}+d^{\\prime\\prime}). Compared with (1) this expression has an extra term, as indeed we have an extra 1×1111\\times 1 convolution, however the nature of our networks allows us to utilize much smaller input and output dimensions. In Table 3 we compare the needed sizes for each resolution between MobileNetV1, MobileNetV2 and ShuffleNet. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
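A hypothetical helper for evaluating the multiply-add count h·w·d′·t(d′+k²+d″) of one such block, useful for back-of-the-envelope comparisons; the decomposition in the comments follows the text, while the function name and example sizes are ours.

```python
def bottleneck_block_madds(h, w, d_in, d_out, t=6, k=3):
    # 1x1 expansion:      h * w * d_in * (t * d_in)
    # k x k depthwise:    h * w * (t * d_in) * k * k
    # 1x1 projection:     h * w * (t * d_in) * d_out
    return h * w * d_in * t * (d_in + k * k + d_out)

# Example: one block on a 56x56 feature map with 24 input and 24 output channels
print(bottleneck_block_madds(56, 56, 24, 24))
```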
{
"id": "1801.04381_all_19",
"text": " One interesting property of our architecture is that it provides a natural separation between the input/output domains of the building blocks (bottleneck layers), and the layer transformation – that is a non-linear function that converts input to the output. The former can be seen as the capacity of the network at each layer, whereas the latter as the expressiveness. This is in contrast with traditional convolutional blocks, both regular and separable, where both expressiveness and capacity are tangled together and are functions of the output layer depth. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_20",
"text": " In particular, in our case, when inner layer depth is 00 the underlying convolution is the identity function thanks to the shortcut connection. When the expansion ratio is smaller than 111, this is a classical residual convolutional block (8, 30). However, for our purposes we show that expansion ratio greater than 111 is the most useful. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_21",
"text": " This interpretation allows us to study the expressiveness of the network separately from its capacity and we believe that further exploration of this separation is warranted to provide a better understanding of the network properties. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_22",
"text": " Now we describe our architecture in detail. As discussed in the previous section the basic building block is a bottleneck depth-separable convolution with residuals. The detailed structure of this block is shown in Table 1. The architecture of MobileNetV2 contains the initial fully convolution layer with 323232 filters, followed by 191919 residual bottleneck layers described in the Table 2. We use ReLU6ReLU6{\\operatorname{\\mathop{ReLU6}\\,}} as the non-linearity because of its robustness when used with low-precision computation . We always use kernel size 3×3333\\times 3 as is standard for modern networks, and utilize dropout and batch normalization during training. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_23",
"text": " With the exception of the first layer, we use constant expansion rate throughout the network. In our experiments we find that expansion rates between 555 and 101010 result in nearly identical performance curves, with smaller networks being better off with slightly smaller expansion rates and larger networks having slightly better performance with larger expansion rates. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_24",
"text": " For all our main experiments we use expansion factor of 666 applied to the size of the input tensor. For example, for a bottleneck layer that takes 646464-channel input tensor and produces a tensor with 128128128 channels, the intermediate expansion layer is then 64⋅6=384⋅64638464\\cdot 6=384 channels. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_25",
"text": " As in we tailor our architecture to different performance points, by using the input image resolution and width multiplier as tunable hyper parameters, that can be adjusted depending on desired accuracy/performance trade-offs. Our primary network (width multiplier 111, 224×224224224224\\times 224), has a computational cost of 300 million multiply-adds and uses 3.4 million parameters. We explore the performance trade offs, for input resolutions from 969696 to 224224224, and width multipliers of 0.350.350.35 to 1.41.41.4. The network computational cost ranges from 777 multiply adds to 585M MAdds, while the model size vary between 1.7M and 6.9M parameters. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_26",
"text": " One minor implementation difference, with is that for multipliers less than one, we apply width multiplier to all layers except the very last convolutional layer. This improves performance for smaller models. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_27",
"text": " The inverted residual bottleneck layers allow a particularly memory efficient implementation which is very important for mobile applications. A standard efficient implementation of inference that uses for instance TensorFlow or Caffe , builds a directed acyclic compute hypergraph G𝐺G, consisting of edges representing the operations and nodes representing tensors of intermediate computation. The computation is scheduled in order to minimize the total number of tensors that needs to be stored in memory. In the most general case, it searches over all plausible computation orders Σ(G)Σ𝐺\\Sigma(G) and picks the one that minimizes M(G)=minπ∈Σ(G)maxi∈1..n(∑A∈R(i,π,G)|A|)+size(πi).M(G)=\\min_{\\pi\\in\\Sigma(G)}\\max_{i\\in 1..n}\\left(\\sum_{A\\in R(i,\\pi,G)}|A|\\right)+\\text{size}(\\pi_{i}). where R(i,π,G)𝑅𝑖𝜋𝐺R(i,\\pi,G) is the list of intermediate tensors that are connected to any of πi…πnsubscript𝜋𝑖…subscript𝜋𝑛\\pi_{i}\\dots\\pi_{n} nodes, |A|𝐴|A| represents the size of the tensor A𝐴A and size(i)𝑠𝑖𝑧𝑒𝑖size(i) is the total amount of memory needed for internal storage during operation i𝑖i. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_28",
"text": " For graphs that have only trivial parallel structure (such as residual connection), there is only one non-trivial feasible computation order, and thus the total amount and a bound on the memory needed for inference on compute graph G𝐺G can be simplified: M(G)=maxop∈G(∑A∈opinp|A|+∑B∈opout|B|+|op|)𝑀𝐺subscript𝑜𝑝𝐺subscript𝐴subscriptop𝑖𝑛𝑝𝐴subscript𝐵subscriptop𝑜𝑢𝑡𝐵𝑜𝑝M(G)=\\max_{op\\in G}\\left(\\sum_{A\\in\\text{op}_{inp}}|A|+\\sum_{B\\in\\text{op}_{out}}|B|+|op|\\right) (2) Or to restate, the amount of memory is simply the maximum total size of combined inputs and outputs across all operations. In what follows we show that if we treat a bottleneck residual block as a single operation (and treat inner convolution as a disposable tensor), the total amount of memory would be dominated by the size of bottleneck tensors, rather than the size of tensors that are internal to bottleneck (and much larger). ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
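For a purely sequential compute graph, the bound in Eq. (2) can be evaluated in a few lines of Python. The sketch below assumes each operation is described by the element counts of its input tensors, output tensors, and internal scratch space; the dictionary layout and example numbers are ours, not from any particular framework.

```python
def peak_inference_memory(ops):
    """ops: list of {"inputs": [...], "outputs": [...], "scratch": int} entries,
    with sizes in elements. Returns the bound of Eq. (2): the maximum over
    operations of combined input size + output size + internal storage."""
    return max(sum(op["inputs"]) + sum(op["outputs"]) + op.get("scratch", 0)
               for op in ops)

# Toy sequential graph: two bottleneck blocks on 56x56x24 tensors, where the
# expanded inner tensor is handled in t = 6 splits, so scratch is one split.
ops = [
    {"inputs": [56 * 56 * 24], "outputs": [56 * 56 * 24], "scratch": 56 * 56 * (24 * 6) // 6},
    {"inputs": [56 * 56 * 24], "outputs": [28 * 28 * 32], "scratch": 56 * 56 * (24 * 6) // 6},
]
print(peak_inference_memory(ops))
```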
{
"id": "1801.04381_all_29",
"text": " A bottleneck block operator ℱ(x)ℱ𝑥{\\cal F}(x) shown in Figure 3(b) can be expressed as a composition of three operators ℱ(x)=(A∘𝒩∘B)xℱ𝑥delimited-()𝐴𝒩𝐵𝑥{\\cal F}(x)=(A\\circ{\\cal N}\\circ B)x, where A𝐴A is a linear transformation A:ℛs×s×k→ℛs×s×n:𝐴→superscriptℛ𝑠𝑠𝑘superscriptℛ𝑠𝑠𝑛A:{\\cal R}^{s\\times s\\times k}\\rightarrow{\\cal R}^{s\\times s\\times n}, 𝒩𝒩{\\cal N} is a non-linear per-channel transformation: 𝒩:ℛs×s×n→ℛs′×s′×n:𝒩→superscriptℛ𝑠𝑠𝑛superscriptℛsuperscript𝑠′superscript𝑠′𝑛{\\cal N}:{\\cal R}^{s\\times s\\times n}\\rightarrow{\\cal R}^{s^{\\prime}\\times s^{\\prime}\\times n}, and B𝐵B is again a linear transformation to the output domain: B:ℛs′×s′×n→ℛs′×s′×k′:𝐵→superscriptℛsuperscript𝑠′superscript𝑠′𝑛superscriptℛsuperscript𝑠′superscript𝑠′superscript𝑘′B:{\\cal R}^{s^{\\prime}\\times s^{\\prime}\\times n}\\rightarrow{\\cal R}^{s^{\\prime}\\times s^{\\prime}\\times k^{\\prime}}. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_30",
"text": " For our networks 𝒩=ReLU6∘dwise∘ReLU6𝒩ReLU6dwiseReLU6{\\cal N}={\\operatorname{\\mathop{ReLU6}\\,}}\\circ{\\operatorname{\\mathop{dwise}\\,}}\\circ{\\operatorname{\\mathop{ReLU6}\\,}}, but the results apply to any per-channel transformation. Suppose the size of the input domain is |x|𝑥|x| and the size of the output domain is |y|𝑦|y|, then the memory required to compute F(X)𝐹𝑋F(X) can be as low as |s2k|+|s′2k′|+O(max(s2,s′2))superscript𝑠2𝑘superscript𝑠′2superscript𝑘′𝑂superscript𝑠2superscript𝑠′2|s^{2}k|+|s^{\\prime 2}k^{\\prime}|+O(\\max(s^{2},s^{\\prime 2})). ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_31",
"text": " The algorithm is based on the fact that the inner tensor ℐℐ\\cal I can be represented as concatenation of t𝑡t tensors, of size n/t𝑛𝑡n/t each and our function can then be represented as ℱ(x)=∑i=1t(Ai∘N∘Bi)(x)ℱ𝑥superscriptsubscript𝑖1𝑡subscript𝐴𝑖𝑁subscript𝐵𝑖𝑥{\\cal F}(x)=\\sum_{i=1}^{t}(A_{i}\\circ N\\circ B_{i})(x) by accumulating the sum, we only require one intermediate block of size n/t𝑛𝑡n/t to be kept in memory at all times. Using n=t𝑛𝑡n=t we end up having to keep only a single channel of the intermediate representation at all times. The two constraints that enabled us to use this trick is (a) the fact that the inner transformation (which includes non-linearity and depthwise) is per-channel, and (b) the consecutive non-per-channel operators have significant ratio of the input size to the output. For most of the traditional neural networks, such trick would not produce a significant improvement. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
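The t-way split can be mimicked in NumPy for a single "pixel". The sketch below substitutes a plain per-channel ReLU6 for the full ReLU6∘dwise∘ReLU6 transformation, which is enough to illustrate the memory argument since it only requires the inner transformation to be per-channel; all names and sizes are illustrative.

```python
import numpy as np

def bottleneck_split(x, A, B, t):
    """x: (k,) input pixel, A: (n, k) expansion, B: (k_out, n) projection.
    Computes B @ relu6(A @ x) while only materializing n // t inner values
    at a time, accumulating the partial projections into the output."""
    y = np.zeros(B.shape[0])
    for Ai, Bi in zip(np.array_split(A, t, axis=0), np.array_split(B, t, axis=1)):
        inner = np.clip(Ai @ x, 0.0, 6.0)   # per-channel ReLU6 on a slice of size ~n/t
        y += Bi @ inner                      # accumulate this split's contribution
    return y

x = np.random.randn(24)
A = np.random.randn(144, 24)
B = np.random.randn(24, 144)
assert np.allclose(bottleneck_split(x, A, B, t=6),
                   B @ np.clip(A @ x, 0.0, 6.0))
```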
{
"id": "1801.04381_all_32",
"text": " We note that, the number of multiply-adds operators needed to compute F(X)𝐹𝑋F(X) using t𝑡t-way split is independent of t𝑡t, however in existing implementations we find that replacing one matrix multiplication with several smaller ones hurts runtime performance due to increased cache misses. We find that this approach is the most helpful to be used with t𝑡t being a small constant between 222 and 555. It significantly reduces the memory requirement, but still allows one to utilize most of the efficiencies gained by using highly optimized matrix multiplication and convolution operators provided by deep learning frameworks. It remains to be seen if special framework level optimization may lead to further runtime improvements. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_33",
"text": " We train our models using TensorFlow. We use the standard RMSPropOptimizer with both decay and momentum set to 0.90.90.9. We use batch normalization after every layer, and the standard weight decay is set to 0.000040.000040.00004. Following MobileNetV1 setup we use initial learning rate of 0.0450.0450.045, and learning rate decay rate of 0.980.980.98 per epoch. We use 16 GPU asynchronous workers, and a batch size of 969696. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
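A hedged PyTorch approximation of the reported training configuration (the paper uses TensorFlow's RMSPropOptimizer; mapping its decay/momentum of 0.9 to the alpha/momentum arguments below is our interpretation, and the model here is only a stand-in).

```python
import torch

model = torch.nn.Linear(10, 10)  # stand-in for the MobileNetV2 model
optimizer = torch.optim.RMSprop(model.parameters(),
                                lr=0.045,        # initial learning rate
                                alpha=0.9,       # running-average "decay"
                                momentum=0.9,
                                weight_decay=4e-5)
# learning rate decays by a factor of 0.98 per epoch
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.98)

for epoch in range(2):  # training-loop skeleton; the paper uses batch size 96
    # ... forward / backward / optimizer.step() over the epoch ...
    scheduler.step()
```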
{
"id": "1801.04381_all_34",
"text": " We compare our networks against MobileNetV1, ShuffleNet and NASNet-A models. The statistics of a few selected models is shown in Table 4 with the full performance graph shown in Figure 5. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_35",
"text": " We evaluate and compare the performance of MobileNetV2 and MobileNetV1 as feature extractors for object detection with a modified version of the Single Shot Detector (SSD) on COCO dataset . We also compare to YOLOv2 and original SSD (with VGG-16 as base network) as baselines. We do not compare performance with other architectures such as Faster-RCNN and RFCN since our focus is on mobile/real-time models. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_36",
"text": " SSDLite: In this paper, we introduce a mobile friendly variant of regular SSD. We replace all the regular convolutions with separable convolutions (depthwise followed by 1×1111\\times 1 projection) in SSD prediction layers. This design is in line with the overall design of MobileNets and is seen to be much more computationally efficient. We call this modified version SSDLite. Compared to regular SSD, SSDLite dramatically reduces both parameter count and computational cost as shown in Table 5. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_37",
"text": " For MobileNetV1, we follow the setup in . For MobileNetV2, the first layer of SSDLite is attached to the expansion of layer 15 (with output stride of 16). The second and the rest of SSDLite layers are attached on top of the last layer (with output stride of 323232). This setup is consistent with MobileNetV1 as all layers are attached to the feature map of the same output strides. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_38",
"text": " Both MobileNet models are trained and evaluated with Open Source TensorFlow Object Detection API . The input resolution of both models is 320×320320320320\\times 320. We benchmark and compare both mAP (COCO challenge metrics), number of parameters and number of Multiply-Adds. The results are shown in Table 6. MobileNetV2 SSDLite is not only the most efficient model, but also the most accurate of the three. Notably, MobileNetV2 SSDLite is 20×20\\times more efficient and 10×10\\times smaller while still outperforms YOLOv2 on COCO dataset. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_39",
"text": " In this section, we compare MobileNetV1 and MobileNetV2 models used as feature extractors with DeepLabv3 for the task of mobile semantic segmentation. DeepLabv3 adopts atrous convolution (40, 41, 42), a powerful tool to explicitly control the resolution of computed feature maps, and builds five parallel heads including (a) Atrous Spatial Pyramid Pooling module (ASPP) containing three 3×3333\\times 3 convolutions with different atrous rates, (b) 1×1111\\times 1 convolution head, and (c) Image-level features . We denote by output_stride the ratio of input image spatial resolution to final output resolution, which is controlled by applying the atrous convolution properly. For semantic segmentation, we usually employ output_stride=16output_stride16\\emph{output\\_stride}=16 or 888 for denser feature maps. We conduct the experiments on the PASCAL VOC 2012 dataset , with extra annotated images from and evaluation metric mIOU. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_40",
"text": " To build a mobile model, we experimented with three design variations: (1) different feature extractors, (2) simplifying the DeepLabv3 heads for faster computation, and (3) different inference strategies for boosting the performance. Our results are summarized in Table 7. We have observed that: (a) the inference strategies, including multi-scale inputs and adding left-right flipped images, significantly increase the MAdds and thus are not suitable for on-device applications, (b) using output_stride=16output_stride16\\emph{output\\_stride}=16 is more efficient than output_stride=8output_stride8\\emph{output\\_stride}=8, (c) MobileNetV1 is already a powerful feature extractor and only requires about 4.9−5.74.95.74.9-5.7 times fewer MAdds than ResNet-101 (e.g., mIOU: 78.56 vs 82.70, and MAdds: 941.9B vs 4870.6B), (d) it is more efficient to build DeepLabv3 heads on top of the second last feature map of MobileNetV2 than on the original last-layer feature map, since the second to last feature map contains 320320320 channels instead of 128012801280, and by doing so, we attain similar performance, but require about 2.52.52.5 times fewer operations than the MobileNetV1 counterparts, and (e) DeepLabv3 heads are computationally expensive and removing the ASPP module significantly reduces the MAdds with only a slight performance degradation. In the end of the Table 7, we identify a potential candidate for on-device applications (in bold face), which attains 75.32%percent75.3275.32\\% mIOU and only requires 2.752.752.75B MAdds. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_41",
"text": " Inverted residual connections. The importance of residual connection has been studied extensively (8, 30, 46). The new result reported in this paper is that the shortcut connecting bottleneck perform better than shortcuts connecting the expanded layers (see Figure 6(b) for comparison). ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_42",
"text": " Importance of linear bottlenecks. The linear bottleneck models are strictly less powerful than models with non-linearities, because the activations can always operate in linear regime with appropriate changes to biases and scaling. However our experiments shown in Figure 6(a) indicate that linear bottlenecks improve performance, providing support that non-linearity destroys information in low-dimensional space. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_43",
"text": " We described a very simple network architecture that allowed us to build a family of highly efficient mobile models. Our basic building unit, has several properties that make it particularly suitable for mobile applications. It allows very memory-efficient inference and relies utilize standard operations present in all neural frameworks. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_44",
"text": " For the ImageNet dataset, our architecture improves the state of the art for wide range of performance points. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_45",
"text": " For object detection task, our network outperforms state-of-art realtime detectors on COCO dataset both in terms of accuracy and model complexity. Notably, our architecture combined with the SSDLite detection module is 20×20\\times less computation and 10×10\\times less parameters than YOLOv2. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_46",
"text": " On the theoretical side: the proposed convolutional block has a unique property that allows to separate the network expressiveness (encoded by expansion layers) from its capacity (encoded by bottleneck inputs). Exploring this is an important direction for future research. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
},
{
"id": "1801.04381_all_47",
"text": " We would like to thank Matt Streeter and Sergey Ioffe for their helpful feedback and discussion. ",
"title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks"
}
] |
What does it mean to be multimodal in the context of "multimodal inputs"?
|
It means that the system is able to handle multiple modalities of input data, such as audio and video, text and image data, and even RGB-D data; these are challenging tasks which require multiple modalities of information to perform well [114].
|
[
114
] |
[
{
"id": "1301.3592_all_0",
"text": " Robotic grasping is a challenging problem involving perception, planning, and control. Some recent works (54, 56, 28, 67) address the perception aspect of this problem by converting it into a detection problem in which, given a noisy, partial view of the object from a camera, the goal is to infer the top locations where a robotic gripper could be placed (see Figure 1). Unlike generic vision problems based on static images, such robotic perception problems are often used in closed loop with controllers, so there are stringent requirements on performance and computational speed. In the past, hand-designing features has been the most popular method for several robotic tasks (40, 32). However, this is cumbersome and time-consuming, especially when we must incorporate new input modalities such as RGB-D cameras. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_1",
"text": " Recent methods based on deep learning have demonstrated state-of-the-art performance in a wide variety of tasks, including visual recognition (35, 60), audio recognition (39, 41), and natural language processing . These techniques are especially powerful because they are capable of learning useful features directly from both unlabeled and labeled data, avoiding the need for hand-engineering. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_2",
"text": " However, most work in deep learning has been applied in the context of recognition. Grasping is inherently a detection problem, and previous applications of deep learning to detection have typically focused on specific vision applications such as face detection and pedestrian detection . Our goal is not only to infer a viable grasp, but to infer the optimal grasp for a given object that maximizes the chance of successfully grasping it, which differs significantly from the problem of object detection. Thus, the first major contribution of our work is to apply deep learning to the problem of robotic grasping, in a fashion which could generalize to similar detection problems. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_3",
"text": " The second major contribution of our work is to propose a new method for handling multimodal data in the context of feature learning. The use of RGB-D data, as opposed to simple 2D image data, has been shown to significantly improve grasp detection results (28, 14, 56). In this work, we present a multimodal feature learning algorithm which adds a structured regularization penalty to the objective function to be optimized during learning. As opposed to previous works in deep learning, which either ignore modality information at the first layer (i.e., encourage all features to use all modalities) or train separate first-layer features for each modality (43, 61), our approach allows for a middle-ground in which each feature is encouraged to use only a subset of the input modalities, but is not forced to use only particular ones. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_4",
"text": " We also propose a two-stage cascaded detection system based on deep learning. Here, we use fewer features for the first pass, providing faster, but only approximately accurate detections. The second pass uses more features, giving more accurate detections. In our experiments, we found that the first deep network, with fewer features, was better at avoiding overfitting but less accurate. We feed the top-ranked rectangles from the first layer into the second layer, leading to robust early rejection of false positives. Unlike manually designed two-step features as in , our method uses deep learning, which allows us to learn detectors that not only give higher performance, but are also computationally efficient. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_5",
"text": " We test our approach on a challenging dataset, where we show that our algorithm improves both recognition and detection performance for grasping rectangle data. We also show that our two-stage approach is not only able to match the performance of a single-stage system, but, in fact, improves results while significantly reducing the computational time needed for detection. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_6",
"text": " In summary, the contributions of this paper are: • We present a deep learning algorithm for detecting robotic grasps. To the best of our knowledge, this is the first work to do so. • In order to handle multimodal inputs, we present a new way to apply structured regularization to the weights to these inputs based on multimodal group regularization. • We present a multi-step cascaded system for detection, significantly reducing its computational cost. • Our method outperforms the state-of-the-art for rectangle-based grasp detection, as well as previous deep learning algorithms. • We implement our algorithm on both a Baxter and a PR2 robot, and show success rates of 84% and 89%, respectively, for executing grasps on a highly varied set of objects. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_7",
"text": " The rest of the paper is organized as follows: We discuss related work in Section II. We present our two-step cascaded detection system in Section III, and some additional details in Section IV. We then describe our feature learning algorithm and structured regularization method in Section V. We present our experiments in Section VI, and discuss results in Section VII. We then present experiments on both Baxter and PR2 robots in Section VIII. We present several interesting directions for future work in Section IX, then conclude in Section X. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_8",
"text": " In this section, we will focus on perception- and learning-based approaches for robotic grasping. For a more complete review of the field, we refer the reader to review papers by Bohg et al. , Sahbani et al. , Bicchi and Kumar and Shimoga . ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_9",
"text": " Most works define a “grasp” as an end-effector configuration which achieves partial or complete form- or force-closure of a given object. This is a challenging problem because it depends on the pose and configuration of the robotic gripper as well as the shape and physical properties of the object to be grasped, and typically requires a search over a large number of possible gripper configurations. Early works (34, 44, 49) focused on testing for form- and force-closure, and synthesizing grasps fulfilling these properties according to some hand-designed “quality score” . More recent works have refined these definitions . These works assumed full knowledge of object shape and physical properties. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_10",
"text": " Grasping Given 3D Model: Fast synthesis of grasps for known 3D models remains an active research topic (14, 20, 65), with recent methods using advanced physical simulation to find optimal grasps. Gallegos et al. performed optimization of grasps given both a 3D model of the object to be grasped and the desired contact points for the robotic gripper. Pokorny et al. define spaces of graspable objects, then map new objects to these spaces to discover grasps. However, these works are only applicable when the full 3D model of the object is exactly known, which may not be the case when a robot is interacting with a new environment. We note that some of these physics-based approaches might be combined with our approach in a multi-pass system, discussed further in Sec. IX. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_11",
"text": " Sensing for Grasping: In a real-world robotic setting, a robot will not have full knowledge of the 3D model and pose of an object to be grasped, but rather only incomplete information from some set of sensors such as color or depth cameras, tactile sensors, etc. This makes the problem of grasping significantly more challenging , as the algorithm must use more limited and potentially noisier information to detect a good grasp. While some works (10, 46) simply attempt to estimate the poses of known objects and then apply full-model grasping algorithms based on these results, others avoid this assumption, functioning on novel objects which the algorithm has not seen before. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_12",
"text": " Such works often made use of other simplifying assumptions, such as assuming that objects belong to one of a set of primitive shapes (47, 6), or are planar . Other works produced impressive results for specific cases, such as grasping the corners of towels . While such works escape the assumption of a fully-known object model, hand-coded grasping rules have a hard time dealing with the wide range of objects seen in real-world human environments, and are difficult and time-consuming to create. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_13",
"text": " Learning for Grasping: Machine learning methods have proven effective for a wide range of perception problems (64, 22, 38, 59, 3), allowing a perception system to learn a mapping from some feature set to various visual properties. Early work by Kamon et al. showed that learning approaches could also be applied to the problem of grasping from vision, introducing a learning component to grasp quality scores. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_14",
"text": " Recent works have employed richer features and learning methods, allowing robots to grasp known objects which might be partially occluded or in an unknown pose as well as fully novel objects which the system has not seen before . Here, we will address the latter case. Earlier work focused on detecting only a single grasping point from 2D partial-view data, using heuristic methods to determine a gripper pose based on this point. . The use of 3D data was shown to significantly improve these results thanks to giving direct physical information about the object in question. With the advent of low-cost RGB-D sensors such as the Kinect, the use of depth data for robotic grasping has become ubiquitous. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_15",
"text": " Several other works attempted to use the learning algorithm to more fully constrain the detected grasps. Ekvall and Kragic and Huebner and Kragic used shape-based approximations as bases for learning algorithms which directly gave an approach vector. Le et al. treated grasp detection as a ranking problem over sets of contact points in image space. Jiang et al. represented a grasp as a 2D oriented rectangle in image space, with two edges corresponding to the gripper plates, using surface normals to determine the grasp approach vector. These approaches allow the detection algorithm to detect more exactly the gripper pose which should be used for grasping. In this work, we will follow the rectangle-based method. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_16",
"text": " Learning-based approaches have shown impressive results in grasping novel objects, showing that learning some parameters of the detection system can outperform human tuning. However, these approaches still require a significant degree of hand-engineering in the form of designing good input features. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_17",
"text": " Other Applications with RGBD Data. Due to the availability of inexpensive depth sensors, RGB-D data has been a significant research focus in recent years for various robotics applications. For example, Jiang et al. consider robotic placement of objects, while Teuliere and Marchand used RGB-D data for visual servoing. Several works, including those of Endres et al. and Whelan et al. have extended and improved Simultaneous Localization and Mapping (SLAM) for RGB-D data. Object detection and recognition has been a major focus in research on RGB-D data (11, 33, 7). Most such works use hand-engineered features such as . The few works that perform feature learning for RGB-D data (59, 3) largely ignore the multimodal nature of the data, not distinguishing the color and depth channels. Here, we present a structured regularization approach which allows us to learn more robust features for RGB-D and other multimodal data. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_18",
"text": " Deep learning approaches have demonstrated the ability to learn useful features directly from data for a wide variety of tasks. Early work by Hinton and Salakhutdinov showed that a deep network trained on images of hand-written digits will learn features corresponding to pen-strokes. Later work using localized convolutional features showed that these networks learn features corresponding to object parts when trained on natural images. This demonstrates that even the basic features learned by these systems will adapt to the data given. In fact, these approaches are not restricted to the visual domain, but rather have been shown to learn useful features for a wide range of domains, such as audio (39, 41) and natural language data . ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_19",
"text": " Deep Learning for Detection: However, the vast majority of work in deep learning focuses on classification problems. Only a handful of previous works have applied these methods to detection problems (45, 37, 9). For example, Osadchy et al. and LeCun et al. applied a deep energy-based model to the problem of face detection, Sermanet et al. applied a convolutional neural network for pedestrian detection, and Coates et al. used a deep learning approach to detect text in images. Girshick et al. used learned convolutional features over image regions for object detection, while Szegedy et al. used a multi-scale approach based on deep networks for the same task. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_20",
"text": " All these approaches focused on object detection and similar problems, in which the goal is to find a bounding box which tightly contains the item to be detected, and for each item, all valid bounding boxes will be similar. However, in robotic grasp detection, there may be several valid grasps for an object in different regions, making it more important to select the one with the highest chance success. In addition, orientation matters much more to robotic grasp detection, as most grasps will only be viable for a small subset of the possible gripper orientations. Our approach to grasp detection will also generalize across object classes, and even to classes never seen before by the system, as opposed to the class-specific nature of object detection. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_21",
"text": " Multimodal Deep Learning: Recent works in deep learning have extended these methods to handle multiple modalities of input data, such as audio and video , text and image data , and even RGB-D data (59, 3). However, all of these approaches have fallen into two camps - either learning completely separate low-level features for each modality (43, 61), or simply concatenating the modalities (59, 3). The former approaches have proven effective for data where the basic modalities differ significantly, such as the aforementioned case of text and images, while the latter is more effective in cases where the modalities are more similar, such as RGB-D data. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_22",
"text": " For some new combinations of modalities and tasks, it may not be clear which of these approaches will give better performance. In fact, in the ideal feature set, different features may use different subsets of the modalities. In this work, we will give a structured regularization method which guides the learning algorithm to select such subsets, without imposing hard constraints on network structure. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_23",
"text": " Structured Learning and Structured Regularization: Several approaches have been proposed which attempt to use a specially-designed regularization function to impose structure on a set of learned parameters without directly enforcing it. Jalali et al. used a group regularization function in the multitask learning setting, where one set of features is used for multiple tasks. This function applies high-order regularization separately to particular groups of parameters. Their function regularized the number of features used for each task in a set of multi-class classification tasks solved by softmax regression. Intuitively, this encodes the belief that only some subset of the input features will be useful for each task, but this set of useful features might vary between tasks. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_24",
"text": " A few works have also explored the use of structured regularization in deep learning. The Topographic ICA algorithm is a feature-learning approach that applies a similar penalty term to feature activations, but not to the weights themselves. Coates and Ng investigate the problem of selecting receptive fields, i.e., subsets of the input features to be used together in a higher-level feature. The structure of the network is learned first, then fixed before learning the parameters of the network. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_25",
"text": " In this work, we will present an algorithm for robotic grasp detection from a single RGB-D view. Our approach will be based on machine learning, but distinguish itself from previous approaches by learning not only the weights used to rank prospective grasps, but also the features used to rank them, which were previously hand-engineered. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_26",
"text": " We will do this using deep learning methods, learning a set of RGB-D features which will be extracted from each candidate grasp, then used to score that grasp. Our approach will include a structured multimodal regularization method which improves the quality of the features learned from RGB-D data without constraining network structure. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_27",
"text": " In our system for robotic grasping, as shown in Fig. 2, the robot first obtains an RGB-D image of the scene containing objects to be grasped. A small deep network is used to score potential grasps in this image, and a small candidate set of the top-ranked grasps is provided to a larger deep network, which yields a single best-ranked grasp. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_28",
"text": " In this work, we will represent potential grasps using oriented rectangles in the image plane as seen on the left in Fig. 2, with one pair of parallel edges corresponding to the robotic gripper . Each rectangle is thus parameterized by the X and Y coordinates of its upper-left corner, its width, height, and orientation in the image plane, giving a five-dimensional search space for potential grasps. Grasps will be ranked based on features extracted from the RGB-D image region contained inside their corresponding rectangle, aligned to the gripper plates, as seen in the center of Fig. 2. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_29",
"text": " To translate a rectangle such as that shown on the right in Fig. 2 into a gripper pose for grasping we find the point with the minimum depth inside the central third (horizontally) of the rectangle. We then use the averaged surface normal around this point to determine the approach vector for the gripper. The orientation of the detected rectangle is translated to a rotation around this vector to orient the gripper. We use the X-Y coordinates of the rectangle center along with the depth of the closest point to determine a grasping point in the robot’s coordinate frame. We compute a pre-grasp position by shifting 10 cm back from the grasping point along this approach vector and position the gripper at this point. We then approach the object along the approach vector and grasp it. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_30",
"text": " Using a standard feature learning approach such as sparse auto-encoder , a deep network can be trained for the problem of grasping rectangle recognition (i.e., does a given rectangle in image space correspond to a valid robotic grasp?). However, in a real-world robotic setting, our system needs to perform detection (i.e., given an image containing an object, how should the robot grasp it?). This task is significantly more challenging than simple recognition. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_31",
"text": " Two-stage Cascaded Detection: In order to perform detection, one naive approach could be to consider each possible oriented rectangle in the image (perhaps discretized to some level), and evaluate each rectangle with a deep network trained for recognition. However, such near-exhaustive search of possible rectangles (based on positions, sizes, and orientations) can be quite expensive in practice for real-time robotic grasping. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_32",
"text": " Motivated by multi-step cascaded approaches in previous work (28, 64), we instead take a two-stage approach to detection: First, we use a reduced feature set to determine a set of top candidates. Then, we use a larger, more robust feature set to rank these candidates. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_33",
"text": " However, these approaches require the design of two separate sets of features. In particular, it can be difficult to manually design a small set of first-stage features which is both quick to compute and robust enough to produce a good set of candidate detections for the second stage. Using deep learning allows us to circumvent the costly manual design of features by simply training networks of two different sizes, using the smaller for the exhaustive first pass, and the larger to re-rank the candidate detection results. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_34",
"text": " Model: To detect robotic grasps from the rectangle representation, we model the probability of a rectangle G(t)superscript𝐺𝑡G^{(t)}, with features x(t)∈ℝNsuperscript𝑥𝑡superscriptℝ𝑁x^{(t)}\\in\\mathbb{R}^{N} being graspable, using a random variable y^(t)∈{0,1}superscript^𝑦𝑡01\\hat{y}^{(t)}\\in\\{0,1\\} which indicates whether or not we predict G(t)superscript𝐺𝑡G^{(t)} to be graspable. We use a deep network, as shown in Fig. 4-left, with two layers of sigmoidal hidden units hsuperscriptℎdelimited-()1h^{} and hsuperscriptℎdelimited-()2h^{}, with K1subscript𝐾1K_{1} and K2subscript𝐾2K_{2} units per layer, respectively. A logistic classifier over the outputs of the second-layer hidden units then predicts P(y^(t)|x(t);Θ)𝑃conditionalsuperscript^𝑦𝑡superscript𝑥𝑡ΘP(\\hat{y}^{(t)}|x^{(t)};\\Theta), so chosen because ground-truth graspability is represented as binary. Each layer ℓℓ\\ell will have a set of weights W(ℓ)superscript𝑊delimited-()ℓW^{(\\ell)} mapping from its inputs to its hidden units, so the parameters of our model are Θ={W,W,W}Θsuperscript𝑊delimited-()1superscript𝑊delimited-()2superscript𝑊delimited-()3\\Theta=\\{W^{},W^{},W^{}\\}. Each hidden unit forms output by a sigmoid σ(a)=1/(1+exp(−a))𝜎𝑎11𝑎\\sigma(a)=1/(1+\\exp(-a)) over its weighted input: hj(t)superscriptsubscriptℎ𝑗delimited-()1𝑡\\displaystyle h_{j}^{(t)} =σ(∑i=1Nxi(t)Wi,j)absent𝜎superscriptsubscript𝑖1𝑁superscriptsubscript𝑥𝑖𝑡subscriptsuperscript𝑊delimited-()1𝑖𝑗\\displaystyle=\\sigma\\left(\\sum_{i=1}^{N}x_{i}^{(t)}W^{}_{i,j}\\right) hj(t)superscriptsubscriptℎ𝑗delimited-()2𝑡\\displaystyle h_{j}^{(t)} =σ(∑i=1K1hi(t)Wi,j)absent𝜎superscriptsubscript𝑖1subscript𝐾1superscriptsubscriptℎ𝑖delimited-()1𝑡subscriptsuperscript𝑊delimited-()2𝑖𝑗\\displaystyle=\\sigma\\left(\\sum_{i=1}^{K_{1}}h_{i}^{(t)}W^{}_{i,j}\\right) P(y^(t)=1|x(t);Θ)𝑃superscript^𝑦𝑡conditional1superscript𝑥𝑡Θ\\displaystyle P(\\hat{y}^{(t)}=1|x^{(t)};\\Theta) =σ(∑i=1K2hi(t)Wi)absent𝜎superscriptsubscript𝑖1subscript𝐾2superscriptsubscriptℎ𝑖delimited-()2𝑡subscriptsuperscript𝑊delimited-()3𝑖\\displaystyle=\\sigma\\left(\\sum_{i=1}^{K_{2}}h_{i}^{(t)}W^{}_{i}\\right) (1) ",
"title": "Deep learning for detecting robotic grasps"
},
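Equation (1) is a plain two-hidden-layer network with sigmoid units and a logistic output. A minimal NumPy forward pass is sketched below; the hidden-layer sizes and random weights are placeholders, not values from the paper.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grasp_probability(x, W1, W2, W3):
    """x: (N,) input features; W1: (N, K1), W2: (K1, K2), W3: (K2,).
    Returns P(y_hat = 1 | x; Theta) as in Eq. (1)."""
    h1 = sigmoid(x @ W1)      # first hidden layer
    h2 = sigmoid(h1 @ W2)     # second hidden layer
    return sigmoid(h2 @ W3)   # logistic classifier over h2

x = np.random.randn(4032)                 # 24x24x7 input features (Sec. IV)
W1 = np.random.randn(4032, 200) * 0.01    # K1 = 200 chosen for illustration
W2 = np.random.randn(200, 50) * 0.01      # K2 = 50 chosen for illustration
W3 = np.random.randn(50) * 0.01
print(grasp_probability(x, W1, W2, W3))
```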
{
"id": "1301.3592_all_35",
"text": " During inference, our goal is to find the single grasping rectangle with the maximum probability of being graspable for some new object. With G𝐺G representing a particular grasping rectangle position, orientation, and size, we find this best rectangle as: G∗superscript𝐺\\displaystyle G^{*} =arg max 𝐺P(y^(t)=1|ϕ(G);Θ)absent𝐺arg max 𝑃superscript^𝑦𝑡conditional1italic-ϕ𝐺Θ\\displaystyle=\\underset{G}{\\mbox{arg max }}P(\\hat{y}^{(t)}=1|\\phi(G);\\Theta) (2) Here, the function ϕitalic-ϕ\\phi extracts the appropriate input representation for rectangle G𝐺G. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_36",
"text": " During learning, our goal is to learn the parameters ΘΘ\\Theta that optimize the recognition accuracy of our system. Here, input data is given as a set of pairs of features x(t)∈ℝNsuperscript𝑥𝑡superscriptℝ𝑁x^{(t)}\\in\\mathbb{R}^{N} and ground-truth labels y(t)∈{0,1}superscript𝑦𝑡01y^{(t)}\\in\\{0,1\\} for t=1,…,M𝑡1…𝑀t=1,\\ldots,M. As in most deep learning works, we use a two-phase learning approach. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_37",
"text": " In the first phase, we will use unsupervised feature learning to initialize the hidden-layer weights Wsuperscript𝑊delimited-()1W^{} and Wsuperscript𝑊delimited-()2W^{}. Pre-training weights this way is critical to avoid overfitting. We will use a variant of a sparse auto-encoder (SAE) , as illustrated in Fig. 4-right. We define g(h)𝑔ℎg(h) as a sparsity penalty function over hidden unit activations, with λ𝜆\\lambda controlling its weight. With f(W)𝑓𝑊f(W) as a regularization function, weighted by β𝛽\\beta, and x^(t)superscript^𝑥𝑡\\hat{x}^{(t)} as the reconstruction of x(t)superscript𝑥𝑡x^{(t)}, SAE solves the following to initialize hidden-layer weights: W∗superscript𝑊\\displaystyle W^{*} =arg min 𝑊∑t=1M(‖x^(t)−x(t)‖22+λ∑j=1Kg(hj(t)))+βf(W)absent𝑊arg min superscriptsubscript𝑡1𝑀superscriptsubscriptnormsuperscript^𝑥𝑡superscript𝑥𝑡22𝜆superscriptsubscript𝑗1𝐾𝑔superscriptsubscriptℎ𝑗𝑡𝛽𝑓𝑊\\displaystyle=\\underset{W}{\\mbox{arg min }}\\sum_{t=1}^{M}(||\\hat{x}^{(t)}-x^{(t)}||_{2}^{2}+\\lambda\\sum_{j=1}^{K}g(h_{j}^{(t)}))+\\beta f(W) hj(t)superscriptsubscriptℎ𝑗𝑡\\displaystyle h_{j}^{(t)} =σ(∑i=1Nxi(t)Wi,j)absent𝜎superscriptsubscript𝑖1𝑁superscriptsubscript𝑥𝑖𝑡subscript𝑊𝑖𝑗\\displaystyle=\\sigma(\\sum_{i=1}^{N}x_{i}^{(t)}W_{i,j}) x^i(t)superscriptsubscript^𝑥𝑖𝑡\\displaystyle\\hat{x}_{i}^{(t)} =∑j=1Khj(t)Wi,jabsentsuperscriptsubscript𝑗1𝐾superscriptsubscriptℎ𝑗𝑡subscript𝑊𝑖𝑗\\displaystyle=\\sum_{j=1}^{K}h_{j}^{(t)}W_{i,j} (3) We first use this algorithm to initialize Wsuperscript𝑊delimited-()1W^{} to reconstruct x𝑥x. We then fix Wsuperscript𝑊delimited-()1W^{} and learn Wsuperscript𝑊delimited-()2W^{} to reconstruct hsuperscriptℎdelimited-()1h^{}. ",
"title": "Deep learning for detecting robotic grasps"
},
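A NumPy sketch of the SAE objective in Eq. (3), using an L1 activation penalty for g and a squared-L2 weight penalty for f as stand-ins, since this excerpt does not pin down those choices; treat them and the example sizes as assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sae_objective(X, W, lam=1e-3, beta=1e-4):
    """X: (M, N) training cases, W: (N, K) tied encoder/decoder weights.
    Evaluates Eq. (3) with g = L1 sparsity and f = squared L2 weight norm
    (both chosen here for illustration)."""
    H = sigmoid(X @ W)            # hidden activations h^(t), shape (M, K)
    X_hat = H @ W.T               # reconstruction with tied weights
    recon = np.sum((X_hat - X) ** 2)
    sparsity = lam * np.sum(np.abs(H))
    weight_reg = beta * np.sum(W ** 2)
    return recon + sparsity + weight_reg

X = np.random.randn(32, 4032)
W = np.random.randn(4032, 200) * 0.01
print(sae_objective(X, W))
```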
{
"id": "1301.3592_all_38",
"text": " During the supervised phase of the learning algorithm, we then jointly learn classifier weights Wsuperscript𝑊delimited-()3W^{} and fine-tune hidden layer weights Wsuperscript𝑊delimited-()1W^{} and Wsuperscript𝑊delimited-()2W^{} for recognition. We maximize the log-likelihood of the data along with regularization penalties on hidden layer weights: Θ∗superscriptΘ\\displaystyle\\Theta^{*} =arg max Θ∑t=1MlogP(y^(t)=y(t)|x(t);Θ)absentΘarg max superscriptsubscript𝑡1𝑀𝑃superscript^𝑦𝑡conditionalsuperscript𝑦𝑡superscript𝑥𝑡Θ\\displaystyle=\\underset{\\Theta}{\\mbox{arg max }}\\sum_{t=1}^{M}\\log P(\\hat{y}^{(t)}=y^{(t)}|x^{(t)};\\Theta) −β1f(W)−β2f(W)subscript𝛽1𝑓superscript𝑊delimited-()1subscript𝛽2𝑓superscript𝑊delimited-()2\\displaystyle\\qquad\\qquad\\qquad-\\beta_{1}f(W^{})-\\beta_{2}f(W^{}) (4) ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_39",
"text": " Two-stage Detection Model: During inference for two-stage detection, we will first use a smaller network to produce a set of the top T𝑇T rectangles with the highest probability of being graspable according to network parameters Θ1subscriptΘ1\\Theta_{1}. We will then use a larger network with a separate set of parameters Θ2subscriptΘ2\\Theta_{2} to re-rank these T𝑇T rectangles and obtain a single best one. The only change to learning for the two-stage model is that these two sets of parameters are learned separately, using the same approach. ",
"title": "Deep learning for detecting robotic grasps"
},
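The two-stage cascade reduces to a short driver routine: score all candidates with the small network, keep the top T, and re-rank them with the large network. The `score_small` and `score_large` callables below are hypothetical wrappers around the two trained networks.

```python
def detect_grasp(candidate_rects, score_small, score_large, T=100):
    """Two-stage cascaded detection (Sec. III).
    score_small / score_large map a rectangle to P(graspable) under Theta_1 / Theta_2."""
    # Stage 1: cheap near-exhaustive scoring with the small network
    ranked = sorted(candidate_rects, key=score_small, reverse=True)
    top_T = ranked[:T]
    # Stage 2: re-rank the surviving candidates with the larger network
    return max(top_T, key=score_large)
```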
{
"id": "1301.3592_all_40",
"text": " In this section, we will define the set of raw features which our system will use, forming x𝑥x in the equations above, and how they are extracted from an RGB-D image. Some examples of these features are shown in Fig 2. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_41",
"text": " Our algorithm uses only local information - specifically, we extract the RGB-D sub-image contained within each rectangle, and use this to generate features for that rectangle. This image is rotated so that its left and right edges correspond to the gripper plates, and then re-scaled to fit inside the network’s receptive field. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_42",
"text": " From this 24x24 pixel image, seven channels’ worth of features are extracted, giving 24x24x7 = 4032 input features. The first three channels are the image in YUV color space, used because it represents image intensity and color separately. The next is simply the depth channel of the image. The last three are the X, Y, and Z components of surface normals computed based on the depth channel. These are computed after the image is aligned to the gripper so that they are always relative to the gripper plates. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_43",
"text": " Whitening data is critical for deep learning approaches to work well, especially in cases such as multimodal data where the statistics of the input data may vary greatly. While PCA-based approaches have been shown to be effective , they are difficult to apply in cases such as ours where large portions of the data may be masked out. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_44",
"text": " Depth data, in particular, can be difficult to whiten because the range of values may be very different for different patches in the image. Thus, we first whiten each depth patch individually, subtracting the patch-wise mean and dividing by the patch-wise standard deviation, down to some minimum. ",
"title": "Deep learning for detecting robotic grasps"
},
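A minimal sketch of the per-patch depth whitening described in the record above. It assumes missing Kinect returns are marked as NaN and uses an illustrative minimum standard deviation; neither detail is specified in the passage.

```python
import numpy as np

def whiten_depth_patch(patch, min_std=1e-3):
    """Whiten a single depth patch: subtract the patch-wise mean and divide
    by the patch-wise standard deviation, clamped from below so nearly flat
    patches are not amplified (min_std is an illustrative choice). Pixels
    with missing Kinect returns are assumed to be NaN and stay masked out."""
    patch = np.asarray(patch, dtype=float)
    valid = ~np.isnan(patch)
    mean = patch[valid].mean()
    std = max(patch[valid].std(), min_std)
    out = np.zeros_like(patch)
    out[valid] = (patch[valid] - mean) / std
    return out
```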
{
"id": "1301.3592_all_45",
"text": " For multimodal data, the statistics of the data for each modality should match as closely as possible, to avoid learning features which are biased towards or away from using particular modes. This is particularly important when regularizing each modality separately, as in our approach. Thus, we drop mean values for each feature separately, but scale the data for each channel by dividing by the standard deviation of all its features combined. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_46",
"text": " It is important for to preserve aspect ratio when feeding features into the network. This is because distorting image features may cause non-graspable rectangles to appear graspable, as shown in Fig. 5. However, padding with zeros can cause rectangles with less padding to receive higher graspability scores, as the network will have more nonzero inputs. It is important to account for this because in many cases the ideal grasp for an object might be represented by a thin rectangle which would thus contain many zero values in its receptive field from padding. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_47",
"text": " To address this problem, we scale up the magnitude of the available input for each rectangle based on the fraction of the rectangle which is masked out. In particular, we define a multiplicative scaling factor for the inputs from each modality, based on the fraction of each mode which is masked out, since each mode may have a different mask. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_48",
"text": " In the multimodal setting, we assume that the input data x𝑥x is known to come from R𝑅R distinct modalities, for example audio and video data, or depth and RGB data. We define the modality matrix S𝑆S as an R𝑅RxN𝑁N binary matrix, where each element Sr,isubscript𝑆𝑟𝑖S_{r,i} indicates membership of visible unit xisubscript𝑥𝑖x_{i} in a particular modality r𝑟r, such as depth or image intensity. The scaling factor for mode r𝑟r is then defined as: Ψr(t)=∑i=1NSr,i/(∑i=1NSr,iμi(t))superscriptsubscriptΨ𝑟𝑡superscriptsubscript𝑖1𝑁subscript𝑆𝑟𝑖superscriptsubscript𝑖1𝑁subscript𝑆𝑟𝑖superscriptsubscript𝜇𝑖𝑡\\Psi_{r}^{(t)}=\\sum_{i=1}^{N}S_{r,i}/\\left(\\sum_{i=1}^{N}S_{r,i}\\mu_{i}^{(t)}\\right), where μi(t)superscriptsubscript𝜇𝑖𝑡\\mu_{i}^{(t)} is 1 if xi(t)superscriptsubscript𝑥𝑖𝑡x_{i}^{(t)} is masked in, 0 otherwise. The scaling factor for case i𝑖i is: ψi(t)=∑r=1RSr,iΨr(t)superscriptsubscript𝜓𝑖𝑡superscriptsubscript𝑟1𝑅subscript𝑆𝑟𝑖superscriptsubscriptΨ𝑟𝑡\\psi_{i}^{(t)}=\\sum_{r=1}^{R}S_{r,i}\\Psi_{r}^{(t)}. ",
"title": "Deep learning for detecting robotic grasps"
},
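The per-mode factor Psi_r and per-unit factor psi_i defined above follow directly from the modality matrix S and the mask mu. Below is a small NumPy sketch; the cap value c (from the later passage on limiting the scaling factor) and the epsilon guard are illustrative choices of ours.

```python
import numpy as np

def scaling_factors(S, mu, c=4.0):
    """S:  R x N binary modality-membership matrix.
       mu: length-N mask, 1 if the visible unit is masked in, 0 otherwise.
       Returns psi, the per-unit scaling factor, with the per-mode factor
       Psi capped at c as described in the text (the value of c here is
       an assumption)."""
    Psi = S.sum(axis=1) / np.maximum(S @ mu, 1e-8)  # Psi_r = sum_i S_ri / sum_i S_ri mu_i
    Psi = np.minimum(Psi, c)                        # Psi'_r = min(Psi_r, c)
    psi = S.T @ Psi                                 # psi_i = sum_r S_ri Psi_r
    return psi
```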
{
"id": "1301.3592_all_49",
"text": " We could simply scale up each value of x𝑥x by its corresponding scale factor when training our model, as x′i(t)=ψi(t)xi(t)superscriptsubscriptsuperscript𝑥′𝑖𝑡superscriptsubscript𝜓𝑖𝑡superscriptsubscript𝑥𝑖𝑡{x^{\\prime}}_{i}^{(t)}=\\psi_{i}^{(t)}x_{i}^{(t)}. However, since our sparse autoencoder penalizes squared error, scaling x𝑥x linearly will scale the error for the corresponding cases quadratically, causing the learning algorithm to lend increased significance to cases where more data is masked out. Instead, we can use the scaled x′superscript𝑥′x^{\\prime} as input to the network, but penalize reconstruction based on the original x𝑥x, only scaling after the squared error has been computed: W∗superscript𝑊\\displaystyle W^{*} =arg min𝑊∑t=1M(∑i=1Nψi(t)(x^i(t)−xi(t))2+λ∑j=1Kg(hj(t)))absent𝑊arg minsuperscriptsubscript𝑡1𝑀superscriptsubscript𝑖1𝑁superscriptsubscript𝜓𝑖𝑡superscriptsuperscriptsubscript^𝑥𝑖𝑡superscriptsubscript𝑥𝑖𝑡2𝜆superscriptsubscript𝑗1𝐾𝑔superscriptsubscriptℎ𝑗𝑡\\displaystyle=\\underset{W}{\\mbox{arg min}}\\sum_{t=1}^{M}\\left(\\sum_{i=1}^{N}\\psi_{i}^{(t)}(\\hat{x}_{i}^{(t)}-x_{i}^{(t)})^{2}+\\lambda\\sum_{j=1}^{K}g(h_{j}^{(t)})\\right) (5) We redefine the hidden units to use the scaled visible input: hj(t)superscriptsubscriptℎ𝑗𝑡\\displaystyle h_{j}^{(t)} =σ(∑i=1Nx′i(t)Wi,j)absent𝜎superscriptsubscript𝑖1𝑁superscriptsubscriptsuperscript𝑥′𝑖𝑡subscript𝑊𝑖𝑗\\displaystyle=\\sigma\\left(\\sum_{i=1}^{N}{x^{\\prime}}_{i}^{(t)}W_{i,j}\\right) (6) This approach is equivalent to adding additional, potentially fractional, ‘virtual’ visible units to the model based on the scaling factor for each mode. In practice, we found it necessary to limit the scaling factor to a maximum of some value c𝑐c, as Ψ′r(t)=min(Ψr(t),c)superscriptsubscriptsuperscriptΨ′𝑟𝑡minsuperscriptsubscriptΨ𝑟𝑡𝑐{\\Psi^{\\prime}}_{r}^{(t)}=\\mbox{min}(\\Psi_{r}^{(t)},c). ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_50",
"text": " As shown in Table III our mask-based scaling technique at the visible layer improves grasping results by over 25% for both metrics. As seen in Figure 6, it removes the network’s inherent bias towards square rectangles, exhibiting a much wider range of aspect ratios that more closely matches that of the ground-truth data. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_51",
"text": " A naive way of applying feature learning to multimodal data is to simply take x𝑥x (as a concatenated vector) as input to the model described above, ignoring information about specific modalities, as seen on the lefthand side of Figure 7. This approach may either 1) prematurely learn features which include all modalities, which can lead to overfitting, or 2) fail to learn associations between modalities with very different underlying statistics. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_52",
"text": " Instead of concatenating multimodal input as a vector, Ngiam et al. proposed training a first layer representation for each modality separately, as shown in Figure 7-middle. This approach makes the assumption that the ideal low-level features for each modality are purely unimodal, while higher-layer features are purely multimodal. This approach may work better for some problems where the modalities have very different basic representations, such as the video and audio data (as used in ), so that separate first layer features may give better performance. However, for modalities such as RGB-D data, where the input modes represent different channels of an image, learning low-level correlations can lead to more robust features – our experiments in Section VI show that simply concatenating the input modalities significantly outperforms training separate first-layer features for robotic grasp detection from RGB-D data. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_53",
"text": " For many problems, it may be difficult to tell which of these approaches will perform better, and time-consuming to tune and comparatively evaluate multiple algorithms. In addition, the ideal feature set for some problems may contain features which use some, but not all, of the input modalities, a case which neither of these approaches are designed to handle. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_54",
"text": " To solve these problems, we propose a new algorithm for feature learning for multimodal data. Our approach incorporates a structured penalty term into the optimization problem to be solved during learning. This technique allows the model to learn correlated features between multiple input modalities, but regularizes the number of modalities used per feature (hidden unit), discouraging the model from learning weak correlations between modalities. With this regularization term, the algorithm can specify how mode-sparse or mode-dense the features should be, representing a continuum between the two extremes outlined above. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_55",
"text": " Regularization in Deep Learning: In a typical deep learning model, L2subscript𝐿2L_{2} regularization (i.e., f(W)=‖W‖22𝑓𝑊superscriptsubscriptnorm𝑊22f(W)=||W||_{2}^{2}) or L1subscript𝐿1L_{1} regularization (i.e., f(W)=‖W‖1𝑓𝑊subscriptnorm𝑊1f(W)=||W||_{1}) are commonly used in training (e.g., as specified in Equations (III-A) and (4)). These are often called a “weight cost” (or “weight decay”), and are left implicit in many works. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_56",
"text": " Applying regularization is well known to improve the generalization performance of feature learning algorithms. One might expect that a simple L1subscript𝐿1L_{1} penalty would eliminate weak correlations in multimodal features, leading to features which use only a subset of the modes each. However, we found that in practice, a value of β𝛽\\beta large enough to cause this also degraded the quality of features for the remaining modes and lead to decreased task performance. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_57",
"text": " Multimodal Regularization: Structured regularization, such as in , takes a set of groups of weights, and applies some regularization function (typically high-order) separately to each group. In our structured multimodal regularization algorithm, each modality will be used as a regularization group separately for each hidden unit. For example, a group-wise p-norm would be applied as: f(W)𝑓𝑊\\displaystyle f(W) =∑j=1K∑r=1R(∑i=1NSr,i|Wi,jp|)1/pabsentsuperscriptsubscript𝑗1𝐾superscriptsubscript𝑟1𝑅superscriptsuperscriptsubscript𝑖1𝑁subscript𝑆𝑟𝑖superscriptsubscript𝑊𝑖𝑗𝑝1𝑝\\displaystyle=\\sum_{j=1}^{K}\\sum_{r=1}^{R}\\left(\\sum_{i=1}^{N}S_{r,i}|W_{i,j}^{p}|\\right)^{1/p} (7) where Sr,isubscript𝑆𝑟𝑖S_{r,i} is 1 if feature i𝑖i belongs to group r𝑟r and 0 otherwise. Using a high value of p𝑝p allows us to penalize higher-valued weights from each mode to each feature more strongly than lower-valued ones. This also means that forming a high-valued weight in a group with other high-valued weights will accrue a lower additional penalty than doing so for a group with only low-valued weights. At the limit (p→∞→𝑝p\\rightarrow\\infty), this group regularization becomes equivalent to the infinity (or max) norm: f(W)𝑓𝑊\\displaystyle f(W) =∑j=1K∑r=1RmaxiSr,i|Wi,j|absentsuperscriptsubscript𝑗1𝐾superscriptsubscript𝑟1𝑅subscript𝑖subscript𝑆𝑟𝑖subscript𝑊𝑖𝑗\\displaystyle=\\sum_{j=1}^{K}\\sum_{r=1}^{R}\\max_{i}S_{r,i}|W_{i,j}| (8) which penalizes only the maximum weight from each mode to each feature. In practice, the infinity norm is not differentiable and therefore is difficult to apply gradient-based optimization methods; in this paper, we use the log-sum-exponential as a differentiable approximation to the max norm. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_58",
"text": " In experiments, this regularization function produces first-layer weights concentrated in fewer modes per feature. However, we found that at values of β𝛽\\beta sufficient to induce the desired mode-wise sparsity patterns, penalizing the maximum also had the undesirable side-effect of causing many of the weights for other modes to saturate at their mode’s maximum, suggesting that the features were overly constrained. In some cases, constraining the weights in this manner also caused the algorithm to learn duplicate (or redundant) features, in effect scaling up the feature’s contribution to reconstruction to compensate for its constrained maximum. This is obviously an undesirable effect, as it reduces the effective size (or diversity) of the learned feature set. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_59",
"text": " This suggests that the max-norm may be overly constraining. A more desirable sparsity function would penalize nonzero weight maxima for each mode for each feature without additional penalty for larger values of these maxima. We can achieve this effect by applying the L0subscript𝐿0L_{0} norm, which takes a value of 0 for an input of 0, and 1 otherwise, on top of the max-norm from above: f(W)𝑓𝑊\\displaystyle f(W) =∑j=1K∑r=1R𝕀{(maxiSr,i|Wi,j|)>0}absentsuperscriptsubscript𝑗1𝐾superscriptsubscript𝑟1𝑅𝕀subscript𝑖subscript𝑆𝑟𝑖subscript𝑊𝑖𝑗0\\displaystyle=\\sum_{j=1}^{K}\\sum_{r=1}^{R}\\mathbb{I}\\{(\\max_{i}S_{r,i}|W_{i,j}|)>0\\} (9) where 𝕀𝕀\\mathbb{I} is the indicator function, which takes a value of 1 if its argument is true, 0 otherwise. Again, for a gradient-based method, we used an approximation to the L0subscript𝐿0L_{0} norm, such as log(1+x2)1superscript𝑥2\\log(1+x^{2}). This regularization function now encodes a direct penalty on the number of modes used for each weight, without further constraining the weights of modes with nonzero maxima. ",
"title": "Deep learning for detecting robotic grasps"
},
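The two smooth surrogates mentioned in the passages above — log-sum-exp in place of the max of Eq. (8), and log(1 + x^2) in place of the L0 indicator of Eq. (9) — combine into a single differentiable penalty. A sketch follows; the sharpness constant t is our own assumption.

```python
import numpy as np

def multimodal_penalty(W, S, t=20.0):
    """Smooth surrogate of Eq. (9): for each hidden unit j and modality r,
    approximate max_i S_ri |W_ij| with a log-sum-exp of sharpness t (our
    choice), then apply the L0 surrogate log(1 + x^2) to that maximum.
    W: N x K weight matrix, S: R x N binary modality matrix."""
    penalty = 0.0
    absW = np.abs(W)                                    # N x K
    for r in range(S.shape[0]):
        rows = S[r].astype(bool)                        # visible units in modality r
        # soft maximum over the modality's weights, one value per hidden unit
        soft_max = np.log(np.sum(np.exp(t * absW[rows]), axis=0)) / t
        penalty += np.sum(np.log(1.0 + soft_max ** 2))  # L0 surrogate per (j, r)
    return penalty
```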
{
"id": "1301.3592_all_60",
"text": " Figure 8 shows features learned from the unsupervised stage of our group-regularized deep learning algorithm. We discuss these features, and their implications for robotic grasping, in Section VII. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_61",
"text": " We used the extended version of the Cornell grasping dataset for our experiments. This dataset, along with code for this paper, is available at http://pr.cs.cornell.edu/deepgrasping. We note that this is an updated version of the dataset used in , containing several more complex objects, and thus results for their algorithms will be different from those in . This dataset contains 1035 images of 280 graspable objects, several of which are shown in Fig. 9. Each image is annotated with several ground-truth positive and negative grasping rectangles. While the vast majority of possible rectangles for most objects will be non-graspable, the dataset contains roughly equal numbers of graspable and non-graspable rectangles. We will show that this is useful for an unsupervised learning algorithm, as it allows learning a good representation for graspable rectangles even from unlabeled data. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_62",
"text": " We performed five-fold cross-validation, and present results for splits on per image (i.e., the training set and the validation set do not share the same image) and per object (i.e., the training set and the validation set do not share any images from the same object) basis. Hyper-parameters were selected by validating performance on a separate set of 300 grasps not used in any of the cross-validation splits. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_63",
"text": " We take seven 24x24 pixel channels as described in Section IV as input, giving 4032 input features to each network. We trained a deep network with 200 hidden units each at the first and second layers using our learning algorithm as described in Sections III and V. Training this network took roughly 30 minutes. For trials involving our two-pass system, we trained a second network with 50 hidden units at each layer in the same manner. During inference we performed an exhaustive search using this network, then used the 200-unit network to re-rank the 100 highest-ranked rectangles found by the 50-unit network. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_64",
"text": " We compare our recognition results in the Cornell grasping dataset with the features from , as well as the combination of these features and Fast Point Feature Histogram (FPFH) features . We used a linear SVM for classification, which gave the best results among all other kernels. We also report chance performance, obtained by randomly selecting a label in the recognition case, and randomly assigning scores to rectangles in the detection case. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_65",
"text": " We also compare our algorithm to other deep learning approaches. We compare to a network trained only with standard L1 regularization, and a network trained in a manner similar to , where three separate sets of first layer features are learned for the depth channel, the combination of the Y, U, and V channels, and the combination of the X, Y, and Z surface normal components. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_66",
"text": " For detection, we compare the top-ranked rectangle for each method with the set of ground-truth rectangles for each image. We present results using two metrics, the “point” and “rectangle” metric. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_67",
"text": " For the point metric, similar to Saxena et al. , we compute the center point of the predicted rectangle, and consider the grasp a success if it is within some distance from at least one ground-truth rectangle center. We note that this metric ignores grasp orientation, and therefore might overestimate the performance of an algorithm for robotic applications. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_68",
"text": " For the rectangle metric, similar to Jiang et al. , let G𝐺G be the top-ranked grasping rectangle predicted by the algorithm, and G∗superscript𝐺G^{*} be a ground-truth rectangle. Any rectangles with an orientation error of more than 30osuperscript30𝑜30^{o} from G𝐺G are rejected. From the remaining set, we use the common bounding box evaluation metric of intersection divided by union - i.e. Area(G∩G∗)/Area(G∪G∗)𝐴𝑟𝑒𝑎𝐺superscript𝐺𝐴𝑟𝑒𝑎𝐺superscript𝐺Area(G\\cap G^{*})/Area(G\\cup G^{*}). Since a ground-truth rectangle can define a large space of graspable rectangles (e.g., covering the entire length of a pen), we consider a prediction to be correct if it scores at least 25% by this metric. ",
"title": "Deep learning for detecting robotic grasps"
},
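The rectangle metric described above combines a 30-degree orientation test with a 25% intersection-over-union threshold. Here is a hedged sketch using shapely for the overlap of rotated rectangles; the corner ordering and angle convention are assumptions for illustration.

```python
from shapely.geometry import Polygon

def rectangle_metric(pred_corners, pred_angle, gt_corners, gt_angle,
                     angle_thresh=30.0, iou_thresh=0.25):
    """pred_corners / gt_corners: four (x, y) corners of each rectangle, in order.
    pred_angle / gt_angle: gripper orientations in degrees.
    Returns True if the predicted grasp counts as correct under the metric."""
    # Reject predictions whose orientation differs by more than 30 degrees
    # (grasp orientation is symmetric modulo 180 degrees).
    diff = abs(pred_angle - gt_angle) % 180.0
    if min(diff, 180.0 - diff) > angle_thresh:
        return False
    G, G_star = Polygon(pred_corners), Polygon(gt_corners)
    iou = G.intersection(G_star).area / G.union(G_star).area
    return iou >= iou_thresh
```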
{
"id": "1301.3592_all_69",
"text": " Figure 8 shows the features learned by the unsupervised phase of our algorithm which have a high correlation to positive and negative grasping cases. Many of these features show non-zero weights to the depth channel, indicating that it learns the correlation of depths to graspability. We can see that weights to many of the modalities for these features have been eliminated by our structured regularization approach. In particular, many of these features lack weights to the U and V (3rdsuperscript3𝑟𝑑3^{rd} and 4thsuperscript4𝑡ℎ4^{th}) channels, which correspond to color, allowing the system to be more robust to different-colored objects. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_70",
"text": " Figure 10 shows 3D meshes for the depth channels of the four features with the strongest positive and negative correlations to valid grasps. Even without any supervised information, our algorithm was able to learn several features which correlate strongly to graspable cases and non-graspable cases. The first two positive-correlated features represent handles, or other cases with a raised region in the center, while the second two represent circular rims or handles. The negatively-correlated features represent obviously non-graspable cases, such as ridges perpendicular to the gripper plane and “valleys” between the gripper plates. From these features, we can see that even during unsupervised feature learning, our approach is able to learn a representation useful for the task at hand, thanks purely to the fact that the data used is composed of half graspable and half non-graspable cases. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_71",
"text": " From Table I, we see that the recognition performance is significantly improved with deep learning methods, improving 9% over the features from and 4.1% over those features combined with FPFH features. Both L1subscript𝐿1L_{1} and group regularization performed similarly for recognition, but training separate first layer features decreased performance slightly. This shows that learned features, in addition to avoiding hand-design, are able to improve performance significantly over the state of the art. It demonstrates that a deep network is able to learn the concept of “graspability” in a way that generalizes to new objects it hasn’t seen before. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_72",
"text": " Table II shows that even using any one of the three input modalities (RGB, depth, or surface normals), our algorithm is able to learn features which outperform hand-engineered ones for recognition. Depth gives the highest performance of any single-mode network. Combining depth and normal information improves results over either alone, indicating that they give non-redundant information. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_73",
"text": " The highest accuracy is still obtained by using all the input modalities. This shows that combining depth and color information leads to a system which is more robust than either modality alone. This is due to the fact that some graspable cases (rims of monochromatic objects, etc.) can only be detected using depth information, while in others, the depth channel may be extremely noisy, requiring the use of color information. From this, we can see that integrating multimodal information, a major focus of this work, is important in recognizing good robotic grasps. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_74",
"text": " Table III shows that the performance gains from deep learning for recognition carry over to detection, as well. Once mask-based scaling has been applied, all deep learning approaches except for training separate first-layer features outperform the hand-engineered features from by up to 13% for the point metric and 17% for the rectangle metric, while also avoiding the need to design task-specific features. Without mask-based scaling, the system performs poorly, due to the bias illustrated in Fig. 6. Separate first-layer features also give weak detection performance, indicating that the relative scores assigned by this form of network are less robust than those learned using our structured regularization approach. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_75",
"text": " Using structured multimodal regularization also improves results over standard L1subscript𝐿1L_{1}regularization by up to 1.8%, showing that our method also learns more robust features than standard approaches which ignore modality information. Even though using the first-pass network alone underperforms the second-pass network alone by up to 8.3%, integrating both in our two-pass system outperforms the solo second-pass network by up to 2.4%. This shows that the two-pass system improves not only efficiency, but accuracy as well. The performance gains from multimodal regularization and the two-pass system are discussed in detail below. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_76",
"text": " Our system outperforms all baseline approaches by all metrics except for the point metric in the object-wise split case. However, we can see that the chance performance is much higher for the point metric than for the rectangle metric. This shows that the point metric can overstate performance, and the rectangle metric is a better indicator of the accuracy of a grasp detection system. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_77",
"text": " Adaptability: One important advantage of our detection system is that we can flexibly specify the constraints of the gripper in our detection system. This is particularly important for a robot like Baxter, where different objects might require different gripper settings to grasp. We can constrain the detectors to handle this. Figure 11 shows detection scores for systems constrained based on two different settings of Baxter’s gripper, one wide and one thin. The implications of these results for other types of grippers will be discussed in Section IX. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_78",
"text": " Our group regularization term improves detection accuracy over simple L1subscript𝐿1L_{1} regularization. The improvement is more significant for the object-wise split than for the image-wise split because the group regularization helps the network to avoid overfitting, which will tend to occur more when the learning algorithm is evaluated on unseen objects. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_79",
"text": " Figure 12 shows typical cases where a network trained using our group regularization finds a valid grasp, but a network trained with L1subscript𝐿1L_{1} regularization does not. In these cases, the grasp chosen by the L1subscript𝐿1L_{1}-regularized network appears valid for some modalities – the depth channel for the sunglasses and nail polish bottle, and the RGB channels for the scissors. However, when all modalities are considered, the grasp is clearly invalid. The group-regularized network does a better job of combining information from all modalities and is more robust to noise and missing data in the depth channel, as seen in these cases. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_80",
"text": " Using our two-pass system enhanced both computational performance and accuracy. The number of rectangles the full-size network needed to evaluate was reduced by roughly a factor of 1000. Meanwhile, detection performance increased by up to 2.4% as compared to a single pass with the large-size network, even though using the small network alone significantly underperforms the larger network. In most cases, the top 100 rectangles from the first pass contained the top-ranked rectangle from an exhaustive search using the second-stage network, and thus results were unaffected. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_81",
"text": " Figure 13 shows some cases where the first-stage network pruned away rectangles corresponding to weak grasps which might otherwise be chosen by the second-stage network. In these cases, the grasp chosen by the single-stage system might be feasible for a robotic gripper, but the rectangle chosen by the two-stage system represents a grasp which would clearly be successful. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_82",
"text": " The two-stage system also significantly increases the computational efficiency of our detection system. Average inference time for a MATLAB implementation of the deep network was reduced from 24.6s/image for an exhaustive search using the larger network to 13.5s/image using the two-stage system. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_83",
"text": " In order to evaluate the performance of our algorithms in the real world, we ran an extensive series of robotic experiments. To explore the generalizability and effect of the robot on the success rate of our algorithms, we performed experiments on two different robotic platforms, a Baxter Research Robot (“Yogi”) and a PR2 (“Kodiak”). ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_84",
"text": " Baxter: The first platform used is our Baxter Research Robot, which we call “Yogi.” Baxter has two arms with seven degrees of freedom each and a maximum reach of 104 cm, although we used only the left arm for these experiments. The end-effector for this arm is a two-finger parallel gripper. We augmented the gripper tips using rubber bands for additional friction. Baxter’s grippers are interchangable, and we used two settings for these experiments - a “wide” setting with an open width of 8 cm and closed width of 4 cm, and a “thin” setting with an open width of 4 cm and a closed width of 0 cm (completely closed, gripper tips touching). ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_85",
"text": " To detect grasps, we mounted a Kinect sensor to Yogi’s head, approximately 1.75 m above the ground. angled downwards at roughly a 75osuperscript75𝑜75^{o} angle towards a table in front of it. The Kinect gives RGB-D images at a resolution of 640x480 pixels. We calibrated the transformation between the Kinect’s and Yogi’s coordinate frames by marking four points corresponding to a set of 3D axes, and obtaining the coordinates of these points in both Kinect’s and Yogi’s frames. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_86",
"text": " All control for Baxter was done by specifying an end-effector position and orientation, and using the inverse kinematics provided with Baxter to determine a set of joint angles for this pose. Baxter’s built-in control systems were used to drive the arm to these new joint angles. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_87",
"text": " PR2: Our second platform was our PR2 robot, “Kodiak.” Similar to Baxter, PR2 has two 7-DoF arms with approximately 1 m reach, and we used only the left for these experiments. PR2’s grippers open to a width of 8 cm, and are capable of closing completely from that span, so we did not need to use two settings as with Baxter. We augmented PR2’s gripper friction with gaffer tape on the fingertips. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_88",
"text": " For the experiments on PR2, we used the Kinect already mounted to Kodiak’s head, and used ROS’s built-in functionality to obtain 3D locations from that Kinect and transform these to Kodiak’s body frame for manipulation. Control was performed using the ee_cart stiffness controller with trajectories provided by our own custom MATLAB code. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_89",
"text": " Experimental Setup: For each experiment, we placed a single object within a 25 cm x 25 cm square on the table, approximately 1.2 m below the mounting point of the Kinect. This square was chosen to be well-contained within each robot’s workspace, allowing objects to be reached from most approach vectors. Object positions and orientations were varied between trials, although objects were always placed in configurations in which at least one viable grasp was visible and accessible to the robot. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_90",
"text": " When using Baxter, due to the limited stroke (span from open to closed) of its gripper, we pre-selected one of the two gripper settings discussed above for each object. We constrained the search space as illustrated in Fig. 11 to find grasps for that particular setting. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_91",
"text": " To detect grasps, we first took an RGB-D image from the Kinect with no objects in the scene as a background image. The depth channel of this image was used to segment objects from the scene, and to correct for the slant of the Kinect. Once an object was segmented, we used our algorithm, as described above, to obtain a single best-ranked grasping rectangle. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_92",
"text": " The search space for the first-pass network progressed in 15-degree increments from 15 to 180 degrees (angles larger than 180 being mirror-images of grasps already tested), searching over 10-pixel increments across the image for the X and Y coordinates of the upper-left corner of the rectangle. For the thin gripper setting, rectangle widths and heights from 10 to 40 pixels in 10-pixel increments were searched, while for the thick setting these ranged from 40 pixels to 100 pixels in 20-pixel increments. In both cases, rectangles taller than they were wide were ignored. Once a single best-scoring grasp was detected, we translated it to a robotic grasp consisting of a grasping point and an approach vector using the rectangle’s parameters and the surface normal at the rectangle’s center as described above. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_93",
"text": " To execute the grasp, we first positioned the gripper at a location 10 cm back from the grasping point along the approach vector. The gripper was oriented to the approach vector, and rotated around it based on the orientation of the detected grasping rectangle. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_94",
"text": " Since Baxter’s arms are highly compliant, slight imprecisions in end-effector positioning are to be expected – we found that errors of up to 2 cm were typical. Thus, we implemented a visual servoing system using its hand camera, which provides RGB images at a resolution of 320x200 pixels. We used color segmentation to separate the object from the background, and used its lateral position in image space to drive Yogi’s end-effector to center the object. We did not implement visual servoing for PR2 because its gripper positioning was found to be precise to within 0.5 cm. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_95",
"text": " After visual servoing was completed, we drove the gripper 14 cm forwards from its current position along the approach vector, so that the grasping point was well-contained within it. We then closed the gripper, grasping the object, and moved it 30 cm upwards. A grasp was determined to be successful if it was sufficient to lift the object and hold it for one second. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_96",
"text": " Objects to be Grasped: For our robotic experiments, we collected a diverse set of 35 objects within a size of .3 m x .3 m x .3 m and weighing at most 2.5 kg (although most were less than 1 kg) from our offices, homes, and lab. Many of them are shown in Fig. 14. Most of these objects were not present in the training dataset, and thus were completely new to the grasp detection algorithm. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_97",
"text": " Due to the physical limitations of the robots’ grippers, we found that five of these objects were not graspable even when given a hand-chosen grasp. The small pair of pliers was too low to the table to grip properly. The spray paint can was too smooth for the gripper to get enough friction to lift it. The weight of the hammer was too imbalanced, causing the hammer to rotate and slip out of the gripper when grasped. Similar problems were encountered with the bicycle U-lock. The bevel spatula’s handle was too close to the thin-set size of Baxter’s gripper, so that we could not position it precisely enough to grasp it reliably. We did not consider these objects for purposes of our experimental results, since our focus was on evaluating the performance of our grasp detection algorithm. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_98",
"text": " Results: Table LABEL:tbl:expResults shows the results of our robotic experiments on Baxter for the remaining 30 objects, a total of 100 trials. Using our algorithm, Yogi was able to successfully execute a grasp in 84% of the trials. Figure LABEL:fig:yogiGrasping shows Yogi executing several of these grasps. In 8% of the trials, our algorithm detected a valid grasp which was not executed correctly by Yogi. Thus, we were able to successfully detect a good grasp in 92% of the trials. Video of some of these trials is available at http://pr.cs.cornell.edu/deepgrasping. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_99",
"text": " PR2 yielded a higher success rate as seen in Table LABEL:tbl:pr2Results, succeeding in 89% of trials. This is largely due to the much wider span of PR2’s gripper from open to closed and its ability to fully close from its widest position, as well as PR2’s ability to apply a larger gripping force. Some specific instances where PR2 and Baxter’s performance differed are discussed below. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_100",
"text": " For comparison purposes, we ran a small set of control experiments for 16 of the objects in the dataset. The control algorithm simply returned a fixed-size rectangle centered at the object’s center of mass, as determined by depth segmentation from the background. The rectangle was aligned so that the gripper plates ran parallel to the object’s principal axis. This algorithm was only successful in 31% of cases, significantly underperforming our system. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_101",
"text": " On Baxter, our algorithm sometimes detected a grasp which was not realizable by the current setting of its gripper, but might be executable by others. For example, our algorithm detected grasps across the leg of the plush cat, and the region between the handle and body of the umbrella, both too thin for the wide setting of Baxter’s gripper to grasp since it has a minimum span of 4 cm. Since PR2’s gripper can close completely from any position, it did not encounter these issues and thus achieved a 100% success rate for both these objects. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_102",
"text": " The XBox controller proved to be a very difficult object for either robot to grasp. From a top-down angle, there is only a small space of viable grasps with a span of less than 8 cm, but many which have either a slightly larger span (making them non-realizable by either gripper), or are subtly non-viable (e.g. grasps across the two “handles,” which tend to slip off.) All viable grasps are very near to the 8 cm span of both grippers, meaning that even slight imprecision in positioning can lead to failure. Due to this, Baxter achieved a higher success rate for the XBox controller thanks to visual servoing, succeeding in 50% of cases as compared to the 25% success rate for PR2. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_103",
"text": " Our algorithm was able to consistently detect and execute valid grasps for a red cereal box, but had some failures on a white and yellow one. This is because the background for all objects in the dataset is white, leading the algorithm to learn features relating white areas at the edges of the gripper region to graspable cases. However, it was able to detect and execute correct grasps for an all-white ice cube tray, and so does not fail for all white objects. This could be remedied by extending the dataset to include cases with different background colors. Interestingly, even though the parameters of grasps detected for the white box were similar for PR2 and Baxter, PR2 was able to succeed in every case while Baxter succeeded only half the time. This is because PR2’s increased gripper strength allowed it to execute grasps across corners of the box, crushing it slightly in the process. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_104",
"text": " Other failures were due to the limitations of the Kinect sensor. We were never able to properly grasp the martini glass because its glossy finish prevented Kinect from returning any depth estimates for it. Even if a valid grasp were detected using color information only, there was no way to infer a proper grasping position without depth information. Grasps for the metal bookend failed for similar reasons, but it was not as glossy as the martini glass, and gave enough returns for some to succeed. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_105",
"text": " However, our algorithm also had many noteworthy successes. It was able to consistently detect and execute grasps for a crumpled cloth towel, a complex and irregular case which bore little resemblance to any object in the dataset. It was also able to find and grasp the rims of objects such as the plastic baseball cap and coffee mug, cases where there is little visual distinction between the rim and body of the object. These objects underscore the importance of the depth channel for robotic grasping, as none of these grasps would be detectable without depth information. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_106",
"text": " Our algorithm was also able to successfully detect and execute many grasps for which the approach vector was non-vertical. The grasps shown for the coffee mug, desk lamp, cereal box, RC car controller, and toy elephant shown in Fig. LABEL:fig:yogiGrasping were all executed by aligning the gripper to such an approach vector. Indeed, many of these grasps may have failed had the gripper been aligned vertically. This shows that our algorithm is not restricted to detecting top-down grasps, but rather encodes a more general notion of graspability which can be applied to grasps from many angles, albeit within the constraints of visibility from a single-view perspective. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_107",
"text": " While a few failures occurred, our algorithm still achieved a high rate of accuracy for other oddly-shaped objects such as the quad-rotor casing, RC car controller, and glue gun. For objects with clearly defined handles, such as the cheese grater, kitchen tongs, can opener, and knife, our algorithm was able to detect and execute successful grasps in every trial, showing that there is a wide range of objects which it can grasp extremely consistently. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_108",
"text": " Our algorithm focuses on the problem of grasp detection for a two-fingered parallel-plate style gripper. It would be directly applicable to other grippers with fixed configurations, simply requiring new training data labeled with grasps for the gripper in question. Our system would allow even the basic features used for grasp detection to adapt to the gripper. This might be useful in cases such as jamming grippers , or two-fingered grippers with differently-shaped contact surfaces, which might require different features to determine a graspable area. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_109",
"text": " Our detection algorithm does not directly address the problem of 3D orientation of the gripper – this orientation is determined only after an optimal rectangle has been detected, orienting the grasp based on the object’s surface normals. However, just as our approach here considers aligns a 2D feature window to the gripper, an extension of this work might align a 3D window – using voxels, rather than pixels, as its basic unit of representation for input features to the network. This would allow the system to search across the full 6-DoF 3D pose of the gripper, while still leveraging the power of feature learning. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_110",
"text": " Our system gives only a gripper pose as output, but multi-fingered reconfigurable hands also require a configuration of the fingers in order to grasp an object. In this case, our algorithm could be used as a heuristic to find one or more locations likely to be graspable (similar to the first pass in our two-pass system), greatly reducing the search space needed to find an optimal gripper configuration. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_111",
"text": " Our algorithm also depends only on local features to determine grasping locations. However, many household objects may have some areas which are strongly preferable to grasp over others - for example, a knife might be graspable by the blade, or a hot glue gun by the barrel, but both should actually be grasped by their respective handles. Since these regions are more likely to be labeled as graspable in the data, our system already weakly encodes this, but some may not be readily distinguishable using only local information. Adding a term modeling the probability of each region of the image being a semantically-appropriate area to grasp the object would allow us to incorporate this information. This term could be computed once for the entire image, then added to each local detection score, keeping detection efficient. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_112",
"text": " In this work, our visual-servoing algorithm was purely heuristic, simply attempting to center the segmented object underneath the hand camera. However, in future work, a similar feature-learning approach might be applied to hand camera images of graspable and non-graspable regions, improving the visual servoing system’s ability to fine-tune gripper position to ensure a good grasp. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_113",
"text": " Many robotics problems require the use of perceptual information, but can be difficult and time-consuming to engineer good features for, particularly when using RGB-D data. In future work, our approach could be extended to a wide range of such problems. Our system could easily be applied to other detection problems such as object detection or obstacle detection. However, it could also be adapted to other similar problems, such as object tracking and visual servoing. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_114",
"text": " Multimodal data has become extremely important for robotics, due both to the advent of new sensors such as the Kinect and the application of robots to more challenging tasks which require multiple modalities of information to perform well. However, it can be very difficult to design features which do a good job of integrating many modalities. While our work focuses on color, depth, and surface normals as input modes, our structured multimodal regularization algorithm might also be applied to others. This approach could improve performance while allowing roboticists to focus on other engineering challenges. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_115",
"text": " We presented a system for detecting robotic grasps from RGB-D data using a deep learning approach. Our method has several advantages over current state-of-the-art methods. First, using deep learning allows us to avoid hand-engineering features, learning them instead. Second, our results show that deep learning methods significantly outperform even well-designed hand-engineered features from previous work. ",
"title": "Deep learning for detecting robotic grasps"
},
{
"id": "1301.3592_all_116",
"text": " We also presented a novel feature learning algorithm for multimodal data based on group regularization. In extensive experiments, we demonstrated that this algorithm produces better features for robotic grasp detection than existing deep learning approaches to multimodal data. Our experiments and results, both offline and on real robotic platforms, show that our two-stage deep learning system with group regularization is capable of robustly detecting grasps for a wide range of objects, even those previously unseen by the system. ",
"title": "Deep learning for detecting robotic grasps"
}
] |
What happens if the author removes the linear supernet design and opts to use the conventional supernet design?
|
Using the conventional, constant depth method would drop the accuracy [65].
|
[
65
] |
[
{
"id": "2009.02009_all_0",
"text": " As there are growing needs of deep learning applications based on convolutional neural network(CNN) in embedded systems, improving the accuracy of CNNs under a given set of constraints on latency and energy consumption has brought keen interest to researchers as a challenging problem in various areas. A popular hardware solution is to develop a hardware accelerator, called neural processing unit (NPU), that achieves higher performance per watt than CPUs or GPUs. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_1",
"text": " For a given hardware platform, several software techniques have been proposed to accelerate CNNs by approximate computing since deep learning applications can tolerate a certain range of computation inaccuracy. Some examples in this software approach are filter pruning (Li et al., 2016), quantization (Park et al., 2017), low-rank approximation (Kim et al., 2015). Accelerating CNNs is helpful to improve the accuracy by running a more compute-intensive CNN with higher accuracy within a given time budget. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_2",
"text": " On the other hand, various algorithmic solutions have been proposed to improve the CNN architecture by introducing new operations, optimizing the hyper-parameters, or searching for better network architecture. New operations such as depth-wise convolution(DWConv) (Chollet, 2017) and mobile inverted bottleneck (MBConv) (Sandler et al., 2018) have been developed to replace the regular full convolution. Recently, automated neural architecture search (NAS) emerges as the default technique to find a CNN architecture with higher accuracy than manually-designed architectures, particularly image classification. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_3",
"text": " A NAS technique explores a predefined search space and estimates the performance for each candidate architecture to find an optimal one with the highest accuracy under a given latency constraint. Thus there are three factors that affect the performance of NAS, as shown in Figure 1: search space, search strategy, and performance estimation. The search space of a NAS technique is usually restricted by a supernet that defines the topology of the largest network to explore. Since the performance of a network depends on the hardware platform, the NAS technique needs to be customized to a given hardware platform. While numerous NAS techniques have been proposed with various search strategies recently, their assumed hardware platforms are mostly GPUs. In this paper, we present a customized NAS technique for an NPU, which produces a CNN architecture with a better accuracy-latency tradeoff than existing models. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_4",
"text": " One of the most closely related work is the recently proposed NAS technique tailored for Google’s Edge-TPU (Gupta and Akin, 2020). While MBConv is widely used for GPU-aware NAS techniques, they prefer to use a single full convolution by fusing expansion layer and DWConv layer in some parts of the network, observing that the Edge-TPU runs the fused full convolution faster even though the required number of MAC (multiply-accumulate) operations is much larger. It confirms that the number of MAC operations is not a proper measure of latency, and platform-specific performance estimation is required. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_5",
"text": " Since an NPU is much faster than a GPU, it enables us to explore the wider search space for NAS under a given latency constraint. Since there are many factors to define the search space, such as the number of layers, channels, kernel sizes, and so on, the search space grows exponentially as the allowed computation complexity grows. Hence, reducing the search space, as well as the search time, is very challenging for NPU-aware NAS techniques. While the aforementioned work for Google’s Edge TPU trains each architecture candidate from scratch to estimate the performance, it is not computationally efficient. In contrast, we adopt a fast differentiable hardware-aware One-Shot NAS, called Single-Path NAS (Stamoulis et al., 2019), in order to reduce the search time. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_6",
"text": " Figure 2 shows an overview of the proposed NAS methodology that consists of three steps. In the first step, we change the supernet structure of the Single-Path NAS, which has a hierarchical structure based on MobileNetV2 (Sandler et al., 2018): A supernet structure consists of a series of stages that contain a series of blocks containing an MBConv micro-architecture inside. Since the network accuracy depends on the supernet structure, we make two extensions on the supernet structure to widen the search space. First, we allow stages to have a different number of blocks, called depth of the stage, considering the effect of stage depth on the accuracy and the latency. Second, we add parallel layers with different kernel sizes in each block, adopting the idea of mixed depthwise convolution (Tan and Le, 2019b) (MixConv). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_7",
"text": " With the extended supernet structure, we apply the Single-Path NAS, which is also extended to support the extended supernet structure. In this step, we assume a shorter latency constraint than the required to reduce the search space and the search time. The last step is to scale up the baseline CNN adopting the compound scaling technique proposed in (Tan and Le, 2019a) until the latency constraint is met. The proposed NAS methodology is named as S3NAS since it consists of 3 steps: Supernet design, SinglePath NAS, and Scaling and post-processing. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_8",
"text": " For accurate latency estimation, an analytical latency estimator is devised, based on a cycle-level NPU simulator that runs an entire CNN considering the memory access overhead accurately. Since the NPU assumed in this paper can execute depth-wise separable convolution (DWConv), squeeze-and-excitation (SE), and h-swish activation function efficiently, the proposed supernet prefers DWConv to regular convolution. Observing that the accuracy is improved by around 1% if SE and h-swish activation function are used, we add a post-processing phase after a CNN network is found by NAS to add SE layers and to replace ReLU to h-swish activation function. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_9",
"text": " Experiments show that the proposed NAS technique could improve the accuracy-latency tradeoff over existing SoTA CNN models. Our best model achieves 82.72% top-1 accuracy on ImageNet with 11.66ms latency without any special data augmentation. Note that the latency is estimated by cycle-accurate simulation. For a fair comparison with the related work, the latency of each compared network is also estimated with the same simulator. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_10",
"text": " After an automated NAS technique based on reinforcement learning successfully found a better CNN architecture than manually-designed architectures (Zoph and Le, 2016), extensive research has been conducted to develop various NAS techniques based on reinforcement learning (Zoph et al., 2018; Tan et al., 2019). However, these NAS techniques are computationally intensive because they train each candidate architectures from scratch to estimate the goodness of it. Thus, one-shot neural architecture search approach (Pham et al., 2018) was introduced to reduce the search cost. In this approach, an over-parameterized super-model network is defined, and architecture search is performed by parameter optimization to reduce the complexity of the network. Gradient-based differentiable search has gained increasing popularity, and various NAS techniques have been proposed with different super-models and hyper-parameters (Pham et al., 2018; Guo et al., 2019; Chu et al., 2019; Liu et al., 2018; Cai et al., 2018). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_11",
"text": " Among diverse techniques to decrease the search cost, Single-Path NAS (Stamoulis et al., 2019) was recently proposed to find a good architecture faster than the existing differentiable NAS techniques. This technique is extended to broaden the search space by including the squeeze-and-excitation (SE) block in the search space (Stamoulis et al., 2020). Our work is grounded on the original Single-Path NAS technique. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_12",
"text": " Finding a hardware-friendly neural architecture has been facilitated as NAS algorithm improved. MNASNet (Tan et al., 2019) added a latency term in the objective function to discover better architectures with a given latency constraint on their target hardware platform. EfficientNet (Tan and Le, 2019a), whose search method is similar to MNASNet, introduced a novel scaling method, called compound scaling, to find more accurate networks as the latency constraint or FLOPS increases. Instead of finding a network directly for a given long latency constraint, they scale up the depth and the width of a small network with shorter latency and the input image size in a balanced way. They could achieve a set of networks with state-of-the-art performance over a range of latency constraints. They removed SE blocks and swish activation function from their search space for hardware platforms that do not support them efficiently to name the resultant network as EfficientNet-lite. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_13",
"text": " While EfficientNet searches a set of networks over a range of latency constraints by scaling up, Once-For-All (Cai et al., 2019) network takes an opposite approach, scaling down. They first train a super-graph architecture by a novel method called progressive shrinking and search a sub-graph network that achieves good accuracy for a given latency constraint without re-training but cheap fine-tuning. They claim that a scaled-down network from the super-graph gives better accuracy than a network that is trained from scratch. They could find more accurate networks than EfficientNet for small latency constraints. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_14",
"text": " To explore more efficient neural architectures on specific hardware, some NAS methods have proposed to define the design space of architecture exploration, tailored for the hardware platform. Gupta et al. (Gupta and Akin, 2020) devised a building block named fused inverted bottleneck convolution block and showed that this block is often more efficient than MBConv on their target NPU, Edge-TPU. They adopted compound scaling method to find high-performing architectures on Edge-TPU. Our work is closely related to this method. We devise a building block that consists of parallel DWConv layers with different kernel sizes, based on a preliminary experiment to find that it is better than the other alternative building blocks in terms of performance per latency (Tan and Le, 2019b). And we increase the search space by allowing stages to have a different number of blocks in the baseline supernet. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_15",
"text": " A neural network typically consists of multiple stages, a sequence of blocks with the same number of output channels (width). There are studies on how to assign the number of blocks (depth) to each stage. Meng et al. (Meng et al., 2020) observed that the way of assigning depth to each stage affects the accuracy. Moreover, they argued that the good depth assignment of each stage could be inherited from the shallow ones as the total depth is increased, and proposed a layer-growing NAS method that could significantly reduce the search space. Furthermore, Radosavovic et al. (Radosavovic et al., 2020) discovered that among neural architectures with similar computational complexity, the ones whose stage width and depth have a quantized linear relationship tend to have higher accuracy. Based on similar observations, we apply this design principle to change the structure of the conventional One-Shot NAS supernet. In addition, we argue that placing more blocks in a stage with a larger width is beneficial. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_16",
"text": " While the original DWConv block uses a single kernel size for depthwise convolution, mixing multiple kernel sizes for depthwise convolution was recently proposed, named as MixConv (Tan and Le, 2019b). Mixing multiple kernel sizes can be understood as having parallel branches inside a block. It is shown that MixConv is more efficient than ordinary DWConv (Tan and Le, 2019b). There exist some recent NAS methods (Mei et al., 2019; Chu et al., 2020) that also broaden their search space using DWConv with multiple kernel sizes to find better neural architectures. We adopt this approach in the supernet and formulate a differentiable latency model of this operation, enabling a latency-aware differentiable One-Shot NAS with MixConv. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_17",
"text": " In this section, we will briefly review the Single-Path NAS technique and our target NPU. Before going further, we define some terminologies used in this paper, as shown in Figure 3. A neural architecture consists of stages at the top level. A stage consists of a sequence of blocks whose output feature maps have the same dimension. In the proposed supernet, a block is defined as MBConv that typically starts with 1×1 conv (expansion layer) and ends with 1×1 conv. Adopting the MixConv approach, the depthwise convolution layer consists of parallel superkernels whose kernel size will be determined during the NAS process. The width of block denotes the number of channels in the final output feature map of the block, and the width of stage is the width of the final block in the stage. We will call the total number of blocks starting from the very first block in the network up to the last block in a specific stage S, as the cumulative depth up to stage S. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_18",
"text": " Differentiable NAS methods usually define architecture parameters to choose which convolution layer to use in the block, training each convolution layer independently. Single-Path NAS (Stamoulis et al., 2019) reduce the search cost by decreasing the number of trainable parameters by sharing the kernel weights between convolution layers. The key idea is designing an over-parameterized depthwise convolution kernel named superkernel, and letting each depthwise convolution kernel of candidate MBConvs directly inherit the weights of this superkernel. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_19",
"text": " Let 𝐰k,esubscript𝐰𝑘𝑒\\mathbf{w}_{k,e} denote the depthwise convolution kernel of candidate MBConv with kernel size k and expansion ratio e (MBConvk,e). First, they introduce a large 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6}, which is the DWConv kernel of MBConv5,6. Then, the inner core of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} can be considered as 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6}, a DWConv kernel of MBConv3,6. A superkernel containing these two kernel size options can be expressed as Figure 4: (1) 𝐰∗,6=𝐰3,6+𝟙(usekernelsize 5)⋅𝐰5\\3,6subscript𝐰6subscript𝐰36⋅1usekernelsize5subscript𝐰\\536\\mathbf{w}_{*,6}=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\rm{use\\leavevmode\\nobreak\\ kernel\\leavevmode\\nobreak\\ size\\leavevmode\\nobreak\\ 5})\\cdot\\mathbf{w}_{5\\backslash 3,6} where 𝐰5\\3,esubscript𝐰\\53𝑒\\mathbf{w}_{5\\backslash 3,e} means the outer part, 𝐰5,e−𝐰3,esubscript𝐰5𝑒subscript𝐰3𝑒\\mathbf{w}_{5,e}-\\mathbf{w}_{3,e}. Next, they formulate conditions to determine the kernel size. They define a certain threshold value t𝑡t and compare the norm of the kernel weights with the threshold. If the norm of a subset weight is larger than the threshold, it remains in the supernet. To this end, Eq. (1) is changed as follows: (2) 𝐰∗,6(tk=5)=𝐰3,6+𝟙(∥𝐰5\\3,6∥2>tk=5)⋅𝐰5\\3,6subscript𝐰6subscript𝑡𝑘5subscript𝐰36⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscript𝐰\\536\\mathbf{w}_{*,6}(t_{k=5})=\\mathbf{w}_{3,6}+\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot\\mathbf{w}_{5\\backslash 3,6} ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
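The record above walks through how Single-Path NAS composes a kernel-size-searchable superkernel (Eqs. (1)–(2)); the next record notes that the indicator is relaxed to a sigmoid when computing gradients. The snippet below is a minimal NumPy sketch of that idea. The weight shapes, threshold value, and function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical depthwise superkernel weights for one channel group.
w_5_6 = np.random.randn(5, 5)        # full 5x5 kernel (expansion ratio 6)
w_3_6 = w_5_6[1:4, 1:4].copy()       # inner 3x3 core shares the same weights
w_5diff3_6 = w_5_6.copy()
w_5diff3_6[1:4, 1:4] = 0.0           # outer ring, i.e. w_{5,6} - w_{3,6}

t_k5 = 0.5                           # trainable threshold (illustrative value)

def effective_kernel(hard=True):
    """Eq. (2): keep the outer ring only if its squared norm exceeds the threshold."""
    norm_sq = np.sum(w_5diff3_6 ** 2)
    if hard:
        gate = float(norm_sq > t_k5)      # indicator used in the forward pass
    else:
        gate = sigmoid(norm_sq - t_k5)    # relaxed gate used for gradients
    k = np.zeros((5, 5))
    k[1:4, 1:4] = w_3_6                   # inner 3x3 part is always used
    return k + gate * w_5diff3_6

print(effective_kernel(hard=True).shape)  # (5, 5)
```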
{
"id": "2009.02009_all_20",
"text": " The threshold value is also trainable to be automatically chosen during training. To enable back-propagation, they relax 𝟙(x>t)1𝑥𝑡\\mathbbm{1}(x>t) to σ(x−t)𝜎𝑥𝑡\\sigma(x-t) when computing gradients. In addition, they optimize kernel weights and threshold values simultaneously. For a given tight search time, this method is shown to be more effective than the other methods (Stamoulis et al., 2020). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_21",
"text": " Moreover, we can vary the number of channels by varying the expansion ratio of each block: we can use only the first half channels of 𝐰5,6subscript𝐰56\\mathbf{w}_{5,6} and 𝐰3,6subscript𝐰36\\mathbf{w}_{3,6} as 𝐰5,3subscript𝐰53\\mathbf{w}_{5,3} and 𝐰3,3subscript𝐰33\\mathbf{w}_{3,3}, respectively. By defining another set of trainable thresholds, the following formula is defined to determine the expansion ratio: (3) 𝐰∗,∗(te=3,te=6,tk=5)=𝟙(∥𝐰∗,3(tk=5)∥2>te=3)⋅𝐰∗,3(tk=5)+𝟙(∥𝐰∗,3(tk=5)∥2>te=3)⋅𝟙(∥𝐰∗,6\\3(tk=5)∥2>te=6)⋅𝐰∗,6\\3(tk=5)subscript𝐰subscript𝑡𝑒3subscript𝑡𝑒6subscript𝑡𝑘5⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒3subscript𝐰3subscript𝑡𝑘5⋅⋅1superscriptdelimited-∥∥subscript𝐰3subscript𝑡𝑘52subscript𝑡𝑒31superscriptdelimited-∥∥subscript𝐰\\63subscript𝑡𝑘52subscript𝑡𝑒6subscript𝐰\\63subscript𝑡𝑘5\\mathbf{w}_{*,*}(t_{e=3},t_{e=6},t_{k=5})=\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbf{w}_{*,3}(t_{k=5})+\\\\ \\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}(t_{k=5})\\rVert^{2}>t_{e=3})\\cdot\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}(t_{k=5})\\rVert^{2}>t_{e=6})\\cdot\\mathbf{w}_{*,6\\backslash 3}(t_{k=5}) where 𝐰k,6\\3subscript𝐰𝑘\\63\\mathbf{w}_{k,6\\backslash 3} means the remaining half of channels, 𝐰k,6−𝐰k,3subscript𝐰𝑘6subscript𝐰𝑘3\\mathbf{w}_{k,6}-\\mathbf{w}_{k,3}. Note that if te=3subscript𝑡𝑒3t_{e=3} is sufficiently large, all channels can be removed to make the block a plain skip connection. Thus, they replace the original depthwise convolution kernel of MBConv5,6 with 𝐰∗,∗subscript𝐰\\mathbf{w}_{*,*}, yielding a differentiable and searchable MBConv with respect to the kernel size and expansion ratio. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_22",
"text": " They also design a differentiable latency-aware loss function to consider hardware latency in the search algorithm. To this end, they define a function to estimate latency as follows: ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_23",
"text": " (4) Lel=𝟙(∥𝐰∗,3∥2>te=3)⋅(P5,3l+𝟙(∥𝐰∗,6\\3∥2>te=6)⋅(P5,6l−P5,3l))subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰32subscript𝑡𝑒3subscriptsuperscript𝑃𝑙53⋅1superscriptdelimited-∥∥subscript𝐰\\632subscript𝑡𝑒6subscriptsuperscript𝑃𝑙56subscriptsuperscript𝑃𝑙53\\begin{split}L^{l}_{e}=&\\mathbbm{1}(\\lVert\\mathbf{w}_{*,3}\\rVert^{2}>t_{e=3})\\cdot(P^{l}_{5,3}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{*,6\\backslash 3}\\rVert^{2}>t_{e=6})\\cdot(P^{l}_{5,6}-P^{l}_{5,3}))\\end{split} ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_24",
"text": " (5) Ll=P3,6l/P5,6l⋅Lel+𝟙(∥𝐰5\\3,6∥2>tk=5)⋅Lel⋅(1−P3,6l/P5,6l)superscript𝐿𝑙⋅subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56subscriptsuperscript𝐿𝑙𝑒⋅1superscriptdelimited-∥∥subscript𝐰\\5362subscript𝑡𝑘5subscriptsuperscript𝐿𝑙𝑒1subscriptsuperscript𝑃𝑙36subscriptsuperscript𝑃𝑙56\\begin{split}L^{l}=&P^{l}_{3,6}/P^{l}_{5,6}\\cdot L^{l}_{e}+\\\\ &\\mathbbm{1}(\\lVert\\mathbf{w}_{5\\backslash 3,6}\\rVert^{2}>t_{k=5})\\cdot L^{l}_{e}\\cdot(1-P^{l}_{3,6}/P^{l}_{5,6})\\end{split} where Pk,elsubscriptsuperscript𝑃𝑙𝑘𝑒P^{l}_{k,e} is a profiled latency value for MBConvk,e for the l𝑙lth block in the supernet. Note that they used P3,6lsubscriptsuperscript𝑃𝑙36P^{l}_{3,6}, P5,3lsubscriptsuperscript𝑃𝑙53P^{l}_{5,3}, and P5,6lsubscriptsuperscript𝑃𝑙56P^{l}_{5,6} only to formulate Llsuperscript𝐿𝑙L^{l}, and the latency for MBConv3,3 is approximated using these values. Here is the latency-aware loss function designed: ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_25",
"text": " (6) CE+λ⋅log(∑lLl)𝐶𝐸⋅𝜆𝑙𝑜𝑔subscript𝑙superscript𝐿𝑙CE+\\lambda\\cdot log(\\sum_{l}L^{l}) Finally, they search for a neural architecture in two phases. First, they train the supernet by randomly choosing one of the candidate subgraphs in each training step. In this phase, they use CrossEntropy loss only. Next, they enable latency-aware loss function and train the supernet with the loss function, to decide the threshold values. By doing this, they could get a high-quality neural architecture with only eight epochs of ImageNet training set.111In our implementation, we changed the probability of selecting each candidate MBConvs to be equal. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
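The loss in Eq. (6) and the two-phase search described above can be summarized in a few lines. Below is a hedged Python sketch; the lambda value and the function name are illustrative assumptions, not the paper's exact settings.

```python
import math

def single_path_nas_loss(cross_entropy, block_latencies, lam=0.1, latency_aware=True):
    """Sketch of Eq. (6): CE + lambda * log(sum_l L^l).

    block_latencies are the per-block latency estimates L^l; lam is an
    illustrative hyperparameter value, not one reported in the paper.
    """
    if not latency_aware:            # phase 1: accuracy-only supernet training
        return cross_entropy
    total_latency = sum(block_latencies)
    return cross_entropy + lam * math.log(total_latency)

# Example: CE loss 2.3, three blocks with estimated latencies in ms.
print(single_path_nas_loss(2.3, [0.12, 0.34, 0.08]))
```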
{
"id": "2009.02009_all_26",
"text": " Even though the proposed methodology can be applied to any type of NPU, the current implementation is made for an adder-tree type NPU, called MIDAP (Kang et al., 2019). It has a fully-pipelined micro-architecture that consists of separate hardware modules and memory modules for convolution, activation function, and various reduction operations. Since it enables us to make a fully static schedule of operations without resource contention in the data path, we can estimate the end-to-end latency of a CNN quite accurately analytically. Unexpected delay may incur from off-chip DRAM delay that is not fully hidden by double buffering. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_27",
"text": " Another good feature of MIDAP is that it efficiently supports the following operations that would lower the MAC (multiply-accumulate) utilization in other NPUs that have many MAC units: pooling, DWConv, and squeeze-and-excitation (SE). For DWConv operation, it does not use an adder tree but an alternative hardware logic that consists of a set of individual accumulators connected to the multiply units. For pooling and SE operations, reduction logic is included in the pipeline. Note that MIDAP has not been implemented as a real hardware chip yet but as a virtual prototype with a cycle-accurate simulator. Thanks to the cycle-accurate simulator that considers the DRAM access contention and parametrized DRAM access delay, we could build an accurate analytical model for end-to-end latency estimation, based on the profiling result with the simulator. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_28",
"text": " Inverted bottleneck with depth-wise convolution (MBConv) (Sandler et al., 2018) is a popular building block in recent mobile-friendly networks. However, it is not efficiently supported in existing NPUs that do not have specialized hardware units for DWConv (Gholami et al., 2018; Gupta and Akin, 2020). Thus Gupta et al. (Gupta and Akin, 2020) replaced an MBConv block with a fused building block that fuses an expansion layer and DWConv in MBConv into a single full convolution. Even though the fused block increases the number of multiplications significantly, it improves the MAC utilization larger so that the fused block is observed faster than MBConv on their target NPU, EdgeTPU. By adding this building block to their search space, they could successfully obtain different neural architectures for EdgeTPU from those for GPUs. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_29",
"text": " Since DWConv is efficiently supported in MIDAP, however, the improvement of MAC utilization by fusing does not outweigh the increased computation complexity, which is observed in preliminary experiments. The experiment setup is similar to main experiment setup that will be explained in section 5.2. The experimental result is shown in Table 1. The latency constraint for fused block experiment is set to 7.0ms, while others are set to 2.15ms. In the combined experiment, we use the fused block in the 1st and the 2nd stages, and MBConv for the remaining stages since the latency gap between two building blocks is too high. As shown in the table, MBConv block shows the best tradeoff between accuracy and latency. Hence we prefer MBConv to the fused building block as the basic building block in the supernet for MIDAP. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_30",
"text": " In this section, we explain the proposed S3NAS methodology that consists of three steps as displayed in Figure 2. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_31",
"text": " The number of blocks is one of the key parameters in neural networks. It is observed that the total number of blocks affects the accuracy of neural architecture (He et al., 2016; Tan and Le, 2019a). In conventional One-Shot NAS methods, each stage in the supernet has the same number of blocks (Cai et al., 2018; Stamoulis et al., 2019; Wu et al., 2019). On the other hand, some recent studies (Meng et al., 2020; Radosavovic et al., 2020) report that the way of assigning the number of blocks in each stage has a noticeable impact on the accuracy, even with the same number of blocks in total. Hence we allow stages in the supernet to have a different number of blocks. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_32",
"text": " We investigate the impact of assigning the number of blocks in the supernet with another preliminary experiment. We construct a network based on MobileNetV2, which has four blocks in every stage, and observe the change of accuracy as we reduce two blocks in a different stage in each experiment. Figure 5 shows that MBConvs with larger width has more impact on accuracy. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_33",
"text": " As the number of multiplications in a DWConv is W×H×C×K2𝑊𝐻𝐶superscript𝐾2W\\times H\\times C\\times K^{2}, the later stage of DWConv tends to have shorter latency since the reduction of H×W𝐻𝑊H\\times W is larger than the increase of C𝐶C. Thus the impact on the latency by increasing the number of blocks in a later stage is not significant as displayed in Figure 5. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
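As a quick worked example of the multiplication count W x H x C x K^2 discussed above, the snippet below compares a hypothetical early stage (high resolution, few channels) with a late stage (low resolution, many channels); the feature-map sizes are illustrative assumptions, not values from the paper.

```python
def dwconv_mults(w, h, c, k):
    """Number of multiplications in a depthwise convolution layer."""
    return w * h * c * k * k

# Early stage: large spatial size, few channels (illustrative numbers).
early = dwconv_mults(w=112, h=112, c=32, k=3)
# Late stage: small spatial size, many channels.
late = dwconv_mults(w=7, h=7, c=320, k=3)

print(early, late)   # the late stage needs far fewer multiplications
```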
{
"id": "2009.02009_all_34",
"text": " Thus, we place more blocks to stages with larger width in the supernet, making the cumulative depth up to a specific stage is proportional to the width of the stage, which is similar to PyramidNet (Han et al., 2017). A recent study (Radosavovic et al., 2020) also claims that neural architectures with a linear relationship between the cumulative depth and the width tend to have higher accuracy with a similar amount of computation complexity. Our experiment shows that our modification to supernet enhances the efficiency of the search result in terms of accuracy as well as latency (Table 4). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_35",
"text": " Another feature of the proposed supernet is to use mixed convolution (MixConv) that mixes different kernel sizes in the depth-wise convolution layer (Tan and Le, 2019b). Some recent NAS methods (Mei et al., 2019; Chu et al., 2020) also broaden their search space using DWConv with various kernel sizes and could find better neural architectures. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_36",
"text": " Figure 6 depicts our building block structure. This block starts and ends with 1×1 convolution, with N𝑁N searchable superkernels in the middle. Each searchable superkernel is designed similarly to Eq. (3), while we may use different threshold values in each superkernel. The kernel sizes and expansion ratios are selected among predetermined values. If the j𝑗j-th searchable superkernel chooses an expansion ratio ejsubscript𝑒𝑗e_{j}, the j𝑗j-th kernel has ejsubscript𝑒𝑗e_{j} times more channels than the first 1×1 convolution. Compared with the original MixConv suggested in (Tan and Le, 2019b), the proposed building block supports more diverse combinations of kernel sizes and expansion ratios. It enhances the efficiency of search results on our target NPU (Table 5). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_37",
"text": " We finish this subsection by highlighting the merit of Single-Path NAS on building a MixConv-based differentiable NAS. Conventional multi-path NAS methods would have difficulties when adding inverted bottleneck convolution with MixConv to their search space. Since the number of possible choices of such blocks grows proportionally to the partition number, multi-path NAS methods would introduce a significant increase in memory requirements and the search time. On the contrary, MixConv can be efficiently supported in Single-Path NAS, as explained below. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_38",
"text": " We use a different latency estimation model, and a loss formula from the original SinglePath NAS technique explained in section 3.1. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_39",
"text": " Suppose we concatenate N𝑁N searchable superkernels to build a MixConv-based building block, and let k→=(k1,⋯,kN),e→=(e1,⋯,eN)formulae-sequence→𝑘subscript𝑘1⋯subscript𝑘𝑁→𝑒subscript𝑒1⋯subscript𝑒𝑁\\vec{k}=(k_{1},\\cdots,k_{N}),\\vec{e}=(e_{1},\\cdots,e_{N}) where kj,ejsubscript𝑘𝑗subscript𝑒𝑗k_{j},e_{j} denote the kernel size and the expansion ratio of the j𝑗jth searchable superkernel. The estimated latency of a DWConv operation depends on the kernel size and the expansion ratio. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_40",
"text": " For latency formulation, we first define two condition variables, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} and Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}}, that denote whether the j𝑗jth searchable superkernel chooses the kernel size kjsubscript𝑘𝑗k_{j} and the expansion ratio ejsubscript𝑒𝑗e_{j}, respectively; For example, Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} is 1 if and only if the j𝑗jth searchable superkernel chooses kjsubscript𝑘𝑗k_{j}, and 0 otherwise. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_41",
"text": " Let κ1<⋯<κKsubscript𝜅1⋯subscript𝜅𝐾\\kappa_{1}<\\cdots<\\kappa_{K} be the candidate kernel sizes, and 0=ϵ1<⋯<ϵE0subscriptitalic-ϵ1⋯subscriptitalic-ϵ𝐸0=\\epsilon_{1}<\\cdots<\\epsilon_{E} denote the candidate expansion ratios of the j𝑗jth searchable superkernel, respectively. Suppose kj=κcsubscript𝑘𝑗subscript𝜅𝑐k_{j}=\\kappa_{c}, then Fj,kjsubscript𝐹𝑗subscript𝑘𝑗F_{j,k_{j}} can be formulated as follows: ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_42",
"text": " (7) Fj,kj=(∏2≤i≤c𝟙(∥𝐰j,κi\\κi−1,ϵE∥2>tj,κi))⋅fj,kj, wherefj,kj={𝟙(∥𝐰j,κc+1\\κc,ϵE∥2<tj,κc+1),if c<K1,if c=Ksubscript𝐹𝑗subscript𝑘𝑗⋅subscriptproduct2𝑖𝑐1superscriptdelimited-∥∥subscript𝐰𝑗\\subscript𝜅𝑖subscript𝜅𝑖1subscriptitalic-ϵ𝐸2subscript𝑡𝑗subscript𝜅𝑖subscript𝑓𝑗subscript𝑘𝑗, wheresubscript𝑓𝑗subscript𝑘𝑗cases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscript𝜅𝑐1subscript𝜅𝑐subscriptitalic-ϵ𝐸2subscript𝑡𝑗subscript𝜅𝑐1if 𝑐𝐾1if 𝑐𝐾\\begin{split}F_{j,k_{j}}&=\\left(\\prod_{2\\leq i\\leq c}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{i}\\backslash\\kappa_{i-1},\\epsilon_{E}}\\rVert^{2}>t_{j,\\kappa_{i}})\\right)\\cdot f_{j,k_{j}}\\text{, where}\\\\ f_{j,k_{j}}&=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,\\kappa_{c+1}\\backslash\\kappa_{c},\\epsilon_{E}}\\rVert^{2}<t_{j,\\kappa_{c+1}}),&\\text{if }c<K\\\\ 1,&\\text{if }c=K\\end{cases}\\end{split} ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_43",
"text": " Figure 7 depicts an example of this formula when the j𝑗jth searchable superkernel that has four candidate kernel sizes κ1<⋯<κ4subscript𝜅1⋯subscript𝜅4\\kappa_{1}<\\cdots<\\kappa_{4} chooses κ2subscript𝜅2\\kappa_{2} as the kernel size: kj=κ2subscript𝑘𝑗subscript𝜅2k_{j}=\\kappa_{2}. It means that weight 𝐰j,κ1,ϵEsubscript𝐰𝑗subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}} and 𝐰j,κ2\\κ1,ϵEsubscript𝐰𝑗\\subscript𝜅2subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}} are used, but the remaining weights starting from 𝐰j,κ3\\κ2,ϵEsubscript𝐰𝑗\\subscript𝜅3subscript𝜅2subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}} are not used. Since 𝐰j,κ1,ϵEsubscript𝐰𝑗subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{1},\\epsilon_{E}} is always used, it is not included in the formula. To use 𝐰j,κ2\\κ1,ϵEsubscript𝐰𝑗\\subscript𝜅2subscript𝜅1subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{2}\\backslash\\kappa_{1},\\epsilon_{E}}, the norm of it has to be larger than tj,κ2subscript𝑡𝑗subscript𝜅2t_{j,\\kappa_{2}} while the norm of 𝐰j,κ3\\κ2,ϵEsubscript𝐰𝑗\\subscript𝜅3subscript𝜅2subscriptitalic-ϵ𝐸\\mathbf{w}_{j,\\kappa_{3}\\backslash\\kappa_{2},\\epsilon_{E}} should not be larger than tj,κ3subscript𝑡𝑗subscript𝜅3t_{j,\\kappa_{3}} to avoid the use of larger kernel sizes. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_44",
"text": " We can formulate Gj,ejsubscript𝐺𝑗subscript𝑒𝑗G_{j,e_{j}} similarly: Gj,ejsubscript𝐺𝑗subscript𝑒𝑗\\displaystyle G_{j,e_{j}} =(∏2≤i≤d𝟙(∥𝐰j,∗,ϵi\\ϵi−1∥2>tj,ϵi))⋅gj,ej, whereabsent⋅subscriptproduct2𝑖𝑑1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑖subscriptitalic-ϵ𝑖12subscript𝑡𝑗subscriptitalic-ϵ𝑖subscript𝑔𝑗subscript𝑒𝑗, where\\displaystyle=\\left(\\prod_{2\\leq i\\leq d}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{i}\\backslash\\epsilon_{i-1}}\\rVert^{2}>t_{j,\\epsilon_{i}})\\right)\\cdot g_{j,e_{j}}\\text{, where} gj,ejsubscript𝑔𝑗subscript𝑒𝑗\\displaystyle g_{j,e_{j}} ={𝟙(∥𝐰j,∗,ϵd+1\\ϵd∥2<tj,ϵd+1),if d<E1,if d=Eabsentcases1superscriptdelimited-∥∥subscript𝐰𝑗\\subscriptitalic-ϵ𝑑1subscriptitalic-ϵ𝑑2subscript𝑡𝑗subscriptitalic-ϵ𝑑1if 𝑑𝐸1if 𝑑𝐸\\displaystyle=\\begin{cases}\\mathbbm{1}(\\lVert\\mathbf{w}_{j,*,\\epsilon_{d+1}\\backslash\\epsilon_{d}}\\rVert^{2}<t_{j,\\epsilon_{d+1}}),&\\text{if }d<E\\\\ 1,&\\text{if }d=E\\end{cases} when ej=ϵdsubscript𝑒𝑗subscriptitalic-ϵ𝑑e_{j}=\\epsilon_{d}. Then the condition for a MixConv-based building block to choose k→,e→→𝑘→𝑒\\vec{k},\\vec{e} can be expressed as ∏jNFj,kjGj,ejsuperscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_45",
"text": " Now, the estimated latency of a single block is formulated as follows: (8) L=∑k→,e→(P(k→,e→)∏jNFj,kjGj,ej)𝐿subscript→𝑘→𝑒𝑃→𝑘→𝑒superscriptsubscriptproduct𝑗𝑁subscript𝐹𝑗subscript𝑘𝑗subscript𝐺𝑗subscript𝑒𝑗L=\\sum_{\\vec{k},\\vec{e}}(P(\\vec{k},\\vec{e})\\prod_{j}^{N}F_{j,k_{j}}G_{j,e_{j}}) where P(k→,e→)𝑃→𝑘→𝑒P(\\vec{k},\\vec{e}) denotes the profiled latency value of a MixConv-based building block corresponding to k→,e→→𝑘→𝑒\\vec{k},\\vec{e}. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
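For a concrete architecture choice, Eq. (8) above reduces to a table lookup gated by the indicator products. The sketch below illustrates this with a dummy profiled-latency table for a block with two superkernels; the table values and the chosen configuration are assumptions for illustration only.

```python
import itertools

# Hypothetical profiled latency table P(k_vec, e_vec) for a block with N = 2 superkernels.
KERNEL_SIZES = (3, 5)
EXPANSION_RATIOS = (0, 2)
profiled = {cfg: 0.01 * (i + 1)
            for i, cfg in enumerate(itertools.product(
                itertools.product(KERNEL_SIZES, repeat=2),
                itertools.product(EXPANSION_RATIOS, repeat=2)))}

def estimated_block_latency(chosen_k, chosen_e):
    """Eq. (8): only the term whose indicators F and G are all 1 survives."""
    total = 0.0
    for (k_vec, e_vec), latency in profiled.items():
        indicator = 1.0
        for j in range(2):
            indicator *= float(k_vec[j] == chosen_k[j])   # F_{j,k_j}
            indicator *= float(e_vec[j] == chosen_e[j])   # G_{j,e_j}
        total += latency * indicator
    return total

print(estimated_block_latency(chosen_k=(3, 5), chosen_e=(2, 0)))
```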
{
"id": "2009.02009_all_46",
"text": " Unlike the original Single-Path NAS that approximates the latency in Eq. (5) in some cases, we use the profiled latency value in all cases. Note that an expansion ratio can be zero, and if only one superkernel has a nonzero expansion ratio, the MixConv block is reduced to a plain MBConv block. Finally, we can estimate the latency by summing up these estimated latencies for all superkernels in the block, ∑L𝐿\\sum L. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_47",
"text": " Since each superkernel is treated independently, some superkernels may have the same kernel size and expansion ratio. Then, even if two superkernel configurations express an equivalent block, as illustrated in Figure 8, they may have different estimated latency values, which is an artifact of the proposed profiling-based latency estimation method. To avoid this artifact, we enforce that there is only one kernel for each kernel size in the MixConv block. That is, we merge two kernels of the same size into one; For instance, the left MixConv is translated to the right MixConv in Figure 8 before latency estimation. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_48",
"text": " Figure 9 shows the estimated latency and simulated latency of randomly generated 100 models on our search space. It validates the accuracy of the proposed latency model, whose mean absolute percentage error(MAPE) is about 0.16%. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_49",
"text": " The existing hardware-aware differentiable NAS methods mostly define some hyperparameters to balance between accuracy and latency, including SinglePath NAS, whose loss function is defined as Eq. (6). Since there is no information on the target latency in the loss function, in case there is a strict latency constraint, they have to pay additional search costs for the hyperparameters to let the final architecture have no larger latency than the constraint. In addition, this process needs to be repeated whenever the target latency is changed. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_50",
"text": " We propose to modify the loss function to activate the latency-aware loss term only when the estimated latency is larger than the latency constraint as follows: (9) CE+λ1⋅log(1+λ2⋅ReLU((∑L)−T))𝐶𝐸⋅subscript𝜆1𝑙𝑜𝑔1⋅subscript𝜆2𝑅𝑒𝐿𝑈𝐿𝑇CE+\\lambda_{1}\\cdot log(1+\\lambda_{2}\\cdot ReLU((\\sum L)-T)) Although this is not a panacea, this modification significantly eases the search process, which will be discussed in section 5.2 with various experiments. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
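The modified loss in Eq. (9) penalizes latency only above the target T. Below is a minimal Python sketch; the lambda values follow those reported later in the experiments section, but treat the exact numbers and the function name as illustrative assumptions.

```python
import math

def s3nas_loss(cross_entropy, estimated_latency, target_latency,
               lam1=15.0, lam2=100.0):
    """Sketch of Eq. (9): the latency term activates only above the target T."""
    over = max(estimated_latency - target_latency, 0.0)      # ReLU
    return cross_entropy + lam1 * math.log(1.0 + lam2 * over)

print(s3nas_loss(2.3, estimated_latency=2.7, target_latency=2.5))   # penalized
print(s3nas_loss(2.3, estimated_latency=2.3, target_latency=2.5))   # CE only
```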
{
"id": "2009.02009_all_51",
"text": " In the second step, we intentionally use shorter latency to reduce the search space for the baseline network. After finding the baseline network with a shorter latency, we apply compound scaling to find an architecture with the final latency constraint. In this step, we conduct post-processing to add SE block and h-swish activation function if beneficial. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_52",
"text": " It is well known that increasing depth (He et al., 2016), width (Zagoruyko and Komodakis, 2016), or input image size improves accuracy while it increases latency. However, if only one of these three factors is increased, the accuracy improvement is quickly saturated. Observing this fact, Tan et al. (Tan and Le, 2019a) proposed a compound scaling method that increases all three factors together. A scaling coefficient is defined for each factor. By judiciously assigning the scaling coefficients in a balanced fashion, they could improve the accuracy much larger than scaling a single factor only. Adopting this approach, we apply the compound scaling to the baseline architecture obtained in the previous step. Based on the ratio between the true latency constraint and the assumed latency constraint in the second step, we find the scaling coefficients considering the estimated latency increment. To keep the linear relationship between the width and cumulative depth, we use the same scaling coefficient for width and depth, differently from (Tan and Le, 2019a). Note that how to realize scaling depends on the baseline architecture. While the baseline architecture assumed in (Tan and Le, 2019a) has a series of identical blocks in each stage, a stage consists of heterogeneous blocks in our baseline architecture. Thus depth scaling is not realized by merely adding new blocks in each stage. We need to choose what types of blocks to add in each stage. We increase the number of blocks with more parameters first. To compute how many blocks to add in a stage, we multiply the depth of the stage by depth coefficient and round the multiplication result. Width scaling is applied to all blocks equally. Finally, we consider latency when we scale. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
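One possible reading of the scaling rule described above (same coefficient for width and depth, per-stage rounding of block counts, and widths rounded to multiples of 16 as mentioned later in the experiments) is sketched below. The baseline widths and the coefficient are illustrative assumptions; the stage depths are the supernet depths reported later.

```python
import math

def scale_stage_depths(stage_depths, depth_coeff):
    """Multiply each stage depth by the depth coefficient and round (our reading
    of the scaling rule above)."""
    return [max(1, round(d * depth_coeff)) for d in stage_depths]

def scale_widths(stage_widths, width_coeff, multiple=16):
    """Scale widths uniformly and round up to a multiple of 16 for MAC utilization."""
    return [int(math.ceil(w * width_coeff / multiple)) * multiple for w in stage_widths]

baseline_depths = [3, 4, 7, 4, 11]          # supernet stage depths from the paper
baseline_widths = [24, 40, 80, 112, 192]    # illustrative widths, not from the paper

print(scale_stage_depths(baseline_depths, depth_coeff=1.2))
print(scale_widths(baseline_widths, width_coeff=1.2))
```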
{
"id": "2009.02009_all_53",
"text": " In addition to compound scaling, we add two components in the post-processing step: h-swish activation function and squeeze-and-excitation (SE) block. A recent study (Park and Yoo, 2020) reports that SE and the h-swish activation function are no hurdles for 8-bit quantization. They could quantize a network with SE and h-swish without noticeable accuracy loss. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_54",
"text": " Extensive studies have been conducted to find a better activation function than ReLU, and the swish activation function (Ramachandran et al., 2017) was found. Several neural networks (Tan and Le, 2019b; Mei et al., 2019; Tan and Le, 2019a) use swish activation function instead of ReLU to improve accuracy. Howard et al. (Howard et al., 2019) proposed a quantization-friendly version of the swish activation function called h-swish that has a similar impact on accuracy. So, we replace ReLU with h-swish (Howard et al., 2019) activation function. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_55",
"text": " Squeeze-and-Excitation(SE) is a lightweight operation which is shown to be beneficial to accuracy (Hu et al., 2018). Figure 10 depicts the structure of a SE block. For a given input feature map, it first computes the importance of the feature channels a representative value for global spatial information of each feature channel by global average pooling. After such squeeze operation generates channel-wise statistics, excitation operation captures channel-wise dependencies by two cascaded fully-connected layers to produce activation values, which represents the importance of each feature channel. Finally, channel-wise multiplication is performed between the activation values induced by the excitation operation and the input feature map for each channel. SE block is used in many recent architectures (Tan and Le, 2019a; Howard et al., 2019; Radosavovic et al., 2020). By adding SE blocks to the baseline network, we also observe the accuracy improvement. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
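The SE block described above (squeeze by global average pooling, excitation by two fully connected layers, channel-wise rescaling) can be written compactly. Below is a minimal PyTorch sketch; the reduction ratio and activation choices are common defaults rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Minimal SE block: squeeze (global average pool) -> excite (2 FC layers) -> rescale."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                       # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                  # squeeze: per-channel statistics
        a = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # excitation values
        return x * a[:, :, None, None]          # channel-wise rescaling

x = torch.randn(2, 32, 14, 14)
print(SqueezeExcite(32)(x).shape)               # torch.Size([2, 32, 14, 14])
```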
{
"id": "2009.02009_all_56",
"text": " Figure 11 depicts an example distribution of activation values produced by two different SE blocks for three different images. The authors of the original paper (Hu et al., 2018) conjectured that if such distribution from a SE block does not differ widely between image classes, the SE block is not important. Thus, after training, they obtained averaged activation values of a SE block over multiple images in the same class. They compared the distributions of the averaged values over different image classes. They observed that removing the SE blocks that have similar distributions over different image classes incurs only a marginal loss in accuracy. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_57",
"text": " Inspired by this observation, we propose to remove SE blocks selectively to minimize the additional computation cost caused by SE blocks. We obtain activation values from a SE block for each input image and measure how the distribution of activation values varies over different input images. For each channel c, we calculate the standard deviation σcsubscript𝜎𝑐\\sigma_{c} of activation values over different images. If σcsubscript𝜎𝑐\\sigma_{c} is small in most channels, the activation values from the SE block does not differ much over images. Conceptually, it implies that the SE block does not help to discriminate further which channel is more influential. From the engineering perspective, it means that channel-wise multiplication of a SE block is similar to constant multiplication, which can be handled by the following convolutional layer. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_58",
"text": " We define a metric as the average of standard deviation values σcsubscript𝜎𝑐\\sigma_{c} over all channels that represent the diverseness of the activation distribution over different images. If the metric value is small, we remove the SE block. For example, in Figure 11, our metric of the SE block on the left side has a value of 0.021, while the right side has a value of 0.118, more than 5x larger than the left side; The left side is a better candidate for SE block removal. When we remove SE blocks according to this metric, the accuracy is found to be similar, while the latency got shorter (Table 6). ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
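The SE-removal metric described above is just the channel-averaged standard deviation of SE activation values across images. The snippet below sketches it on dummy data; the threshold and the synthetic activation statistics are illustrative assumptions, not values from the paper.

```python
import numpy as np

def se_diverseness_metric(activations):
    """Average over channels of the per-channel standard deviation across images.

    activations has shape (num_images, num_channels) and holds the SE
    excitation values collected for each input image (dummy data below).
    """
    per_channel_std = activations.std(axis=0)    # sigma_c over images
    return float(per_channel_std.mean())

rng = np.random.default_rng(0)
flat_se   = rng.normal(0.5, 0.02, size=(1000, 64))   # behaves like a constant scale
useful_se = rng.normal(0.5, 0.12, size=(1000, 64))   # varies across images

threshold = 0.05                                      # illustrative cut-off
for name, acts in (("flat", flat_se), ("useful", useful_se)):
    m = se_diverseness_metric(acts)
    print(name, round(m, 3), "remove" if m < threshold else "keep")
```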
{
"id": "2009.02009_all_59",
"text": " We evaluate the proposed NAS technique for image classification with the ImageNet dataset. The current implementation is made for MIDAP (Kang et al., 2019) that can perform DWConv and SE operations efficiently so that MBConv is preferred to full 3-D convolution as the basic building block, as explained above. Latencies on the target NPU are obtained with the cycle-accurate simulator222https://github.com/cap-lab/MidapSim. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_60",
"text": " A superkernel has two parameters to search: expansion ratio and kernel size. To limit the search space, we choose the expansion ratio among 0, 2, 4, and 6, and the kernel size between 3 and 5 when MBConv or full convolution is used as the building block. In the case of the MixConv-based building block, we use N𝑁N=3 superkenels whose expansion ratio is 0 or 2; The sum of the expansion ratio of three superkernels has the same range as the expansion ratio of a single MBConv block. To allow three superkernels to have different kernel sizes, we let one of three superkernels be able to have 7 as the kernel size. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_61",
"text": " In the first phase of the neural architecture search, we train the supernet by randomly choosing one of the candidate subgraphs in each training step. We train the supernet for 8 epochs, with λ1=0subscript𝜆10\\lambda_{1}=0 in the loss function of Eq. 9, focusing only on the accuracy. We decrease the learning rate by 0.97 every 2.4 epochs, starting from 0.064. The other setting for network training is displayed in Table 4. Gradient clipping with a value of 10 is used in this phase. In the second phase, we set λ1=15,λ2=100formulae-sequencesubscript𝜆115subscript𝜆2100\\lambda_{1}=15,\\lambda_{2}=100 to consider latency in the loss function, and optimize the weights and threshold values of supernet for 2 epochs. After this second phase finishes, the final architecture topology is decided. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_62",
"text": " Next, we train the final architecture again to determine the filter weights for 350 epochs with the ImageNet again, using the same setting described in Table 4. Unlike the search phase, the learning rate is increased from 0 to 0.064 in the first 5 epochs, then decayed by 0.97 every 2.4 epochs. Since we observed that the batch size is critical to accuracy when using the EfficientNet training code, we use a large batch size. Both network architecture search and final training are conducted on Google Cloud TPUs. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_63",
"text": " In the proposed NAS technique, two major extensions are made to the supernet, compared with the original SinglePath NAS technique. Table 3 shows the proposed supernet architecture with configuration parameters, block types and depths. It starts with a 7x7 convolution layer, followed by 5 stages that have a different number of blocks for feature extraction and 2 fully-connected networks for classification. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_64",
"text": " The first extension is to allow stages to have a different number of blocks. To verify the goodness of this extension, we design two kinds of MBConv-based supernet with 20 blocks in total: a supernet with constant depth(baseline), a supernet with linear depth where the cumulative depth up to a specific stage is proportional to the width of the stage. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_65",
"text": " As shown in Table 4, a supernet with linear depth outperforms a supernet with constant depth in terms of accuracy with similar latency. It confirms that this simple change of block assignment in supernet gives notable accuracy boost with the same latency constraint, without any additional optimization techniques. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_66",
"text": " The second extension is to use multiple parallel superkernels in an MBConv block. To verify the benefit of it, we compare two different supernets with the same number of blocks in each stage. The accuracy and latency performance of the baseline supernet is the same as the previous experimental result shown in Table 4. Table 5 shows that the extended supernet with MixConv-based building blocks gives a better accuracy-latency tradeoff. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_67",
"text": " We apply the proposed NAS method with the supernet architecture described above. The depth of 5 stages is set to 3,4,7,4,113474113,4,7,4,11, respectively. The latency constraint is set to 2.5 ms that corresponds to the latency of EfficientNet-B1 on our target NPU, MIDAP. Table 6 compares our search results with the state-of-the-art models: EdgeTPU (Gupta and Akin, 2020), EfficientNet (Tan and Le, 2019a), Once-For-All (Cai et al., 2019). The latency of the other models is obtained by running the network on the MIDAP cycle-accurate simulator. We compare the accuracy without quantization, assuming that quantization effects will be similar to all models. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_68",
"text": " As shown in Table 6, the baseline model, ours-M, found by the proposed NAS technique has higher accuracy than the other models on our target NPU; ours-M achieves more than 1.7% higher top-1 accuracy than EfficientNet-lite2 with similar latency. Moreover, it is 0.5% higher than EfficientNet-B1, even without using SE and h-swish activation function. Note that the number of parameters and the number of FLOPS in ours-M is larger than EfficientNet-B1. It implies that the complexity of the network is not a direct indicator of the end-to-end latency of the network. The end-to-end latency depends on the NPU architecture, and the proposed NAS technique could find a larger network with shorter latency by adding the latency factor to the loss function directly. The main benefit comes from different block assignment to stages. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_69",
"text": " We improve the baseline network by adding the h-swish activation function and squeeze-and-excitation(SE) block to get the ours-M+ model. Figure 12 shows the topology of ours-M+ architecture in which the height of each block is proportional to the expansion ratio of the block. Compared with the baseline network, ours-M, we achieve around 1% accuracy boost with ours-M+, paying the cost of 16% latency increase. This model outperforms the other models, 0.5% higher accuracy and 14% faster than EfficientNet-B2. Since EfficientNet-B2 is too large to run with the default configuration on MIDAP, we increase the memory size for filter weights. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_70",
"text": " Next, we applied compound scaling (Tan and Le, 2019a) to ours-M+ to obtain ours-L+ and ours-XL+. When we determine scaling coefficients, we keep the linear relationship between the cumulative depth and width of each stage, and scale the input image size more aggressively than (Tan and Le, 2019a). We make the number of filters to be multiples of 16 to maximize the MAC unit utilization on MIDAP. When we train our scaled model, we set the dropout ratio to 0.4, similar to EfficientNet-B4 training. The accuracy of ours-L+ is higher than EfficientNet-B3 and EfficientNet-lite4, while the accuracy of ours-XL+ is similar to EfficientNet-B4. Note that the difference between the searched network and the EfficientNet decreases as the network size increases. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_71",
"text": " Finally, we selectively removed SE blocks from ours-XL+, resulting in ours-XL-rmSE+. We collected the activation values using randomly sampled 10K images from the training dataset and calculated the metric explained in Sec. 4.3.3. After removing SE blocks from ours-XL+ based on the metric, only about 60% of the blocks in the network have SE blocks. As a result, we could make the latency shorter, while the accuracy was slightly improved than ours-XL+. This model achieves 82.72% top-1 accuracy with only 11.66ms latency. It is much better than EfficientNet-EdgeTPU-L (Gupta and Akin, 2020) that achieves 80.62% FP32 top-1 accuracy with more than 20ms on EdgeTPU. Our architecture on MIDAP is about 2 times faster with 2.1% higher accuracy. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_72",
"text": " Finally, we compare the search time. Since the TPU is faster than GPU, we report the wall clock time and the estimated GPU time (in parenthesis) that is 10 times longer than the wall clock time in the last column of Table 6 Our method takes 3 hours, which is much faster than the other methods. Note that we compare the total time to get one architecture from scratch without trained weights. Once-For-All (Cai et al., 2019) would require only short fine-tuning time after a neural architecture is searched. In contrast, we need to train the network after a network architecture is found. It took 40 hours on TPUv3 to train ours-M+. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_73",
"text": " While most NAS techniques are not compared with a random search method, the authors (Li and Talwalkar, 2019) reported that a random search method is highly competitive. So we conducted an experiment to compare the proposed NAS technique with two random search methods, exploring the same search space defined by the supernet structure of ours-M. First, we designed a simple random search method that has the similar time complexity of the proposed technique. In this method, we randomly generate 15 models having a similar latency with ours-M, from the same search space. Then we train each of them for 1 epoch with cosine learning rate decay. After evaluating each of them, we choose the architecture with the topmost top-1 accuracy and fully train it. In the second method, called random selection, we randomly generate 20 models having a similar latency with ours-M and train them fully and take the architecture with the highest top-1 accuracy. Since the random selection method performs search and training simultaneously, it is slower than the proposed technique by the number of randomly generated models. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_74",
"text": " Comparison results are reported in Table 6. It is confirmed that both random selection and random search are quite competitive, but noticeably inferior to ours-M in terms of accuracy. In detail, the worst case of random selection showed 0.8% lower accuracy than ours-M. The best performance obtained from 20 randomly generated models is 79.19%, still lower than the accuracy of ours-M. Note that random search and random selection show similar performance that is no smaller than the other networks. It means that the search space defined by the supernet architecture has a more significant effect on the accuracy than the search method. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_75",
"text": " There are two methods to find an architecture with a loose latency constraint. One is to use compound scaling that scales a small network with shorter latency, and the other is to search a network directly. To compare these two methods, we first scaled ours-M using the same scaling coefficients that we used to scale ours-M+ to ours-L+ and trained it. When conducting a direct search, we scaled the depth and width of the supernet and the input image size first and applied the proposed NAS technique for the scaled supernet. We used batch size 512 instead of 1024 during the architecture search due to the memory limitation of TPU. The comparison result is shown in Table 7 in terms of top-1 accuracy(%) and the latency on the target NPU(ms). Two results were similar while direct search needed 10 hours on TPUv3; It means that compound scaling is an effective method to find a large network fast. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_76",
"text": " To examine how SE and h-swish impact accuracy individually, we compare four combinations as displayed in Table 8. The baseline is ours-M that does not use SE and h-swish activation function. Replacing ReLU with h-swish gives a marginal improvement on accuracy while adding SE blocks improves the accuracy noticeably. Adding both SE and h-swish activation function improves the accuracy by around 1%. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
},
{
"id": "2009.02009_all_77",
"text": " In this work, we propose a fast NPU-aware NAS methodology extending the Single-Path NAS technique (Stamoulis et al., 2019). We modify the supernet architecture by varying the number of blocks in stages and adding mixed depthwise convolution (Tan and Le, 2019b) to the search space. By modifying the loss function to directly include the target latency estimated by a cycle-accurate simulator of the target NPU, we could find a better baseline architecture with a shorter latency than the latency constraint. Using a tight latency constraint, we can reduce the search space to find the baseline network fast. Afterward, we apply compound scaling to find a larger network than the baseline network, and add SE blocks and h-swish activation functions in the post-processing step. Through the proposed NAS methodology, we could obtain a network with 82.72% accuracy with 11.66ms latency on our target NPU, without special data augmentation in training. It dominates the existing network models on the target NPU. It confirms the importance of supernet architecture design for a given NPU and effectiveness of the three-step approach in the proposed NAS methodology: supernet design, SinglePath NAS with a tighter latency constraint, and compound scaling and post-processing. ",
"title": "S3NAS: Fast NPU-aware Neural Architecture Search Methodology"
}
] |
For matching default boxes with ground truth ones, what metric was used?
|
Best Jaccard Overlap was used to match default boxes with ground truth ones [9].
|
[
9
] |
[
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work through the current leading results on PASCAL VOC, COCO, and ILSVRC detection all based on Faster R-CNN albeit with deeper features such as . While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_1",
"text": " This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf (4, 5)), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications—especially using multiple layers for prediction at different scales—we can achieve high-accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks . Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_2",
"text": " We summarize our contributions as follows: • We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN). • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. • To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio. • These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off. • Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_3",
"text": " This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_4",
"text": " The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features: ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_5",
"text": " Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat and YOLO that operate on a single scale feature map). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_6",
"text": " Convolutional predictors for detection Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m×n𝑚𝑛m\\times n with p𝑝p channels, the basic element for predicting parameters of a potential detection is a 3×3×p33𝑝3\\times 3\\times p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n𝑚𝑛m\\times n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location (cf the architecture of YOLO that uses an intermediate fully connected layer instead of a convolutional filter for this step). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_7",
"text": " Default boxes and aspect ratios We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k𝑘k at a given location, we compute c𝑐c class scores and the 444 offsets relative to the original default box shape. This results in a total of (c+4)k𝑐4𝑘(c+4)k filters that are applied around each location in the feature map, yielding (c+4)kmn𝑐4𝑘𝑚𝑛(c+4)kmn outputs for a m×n𝑚𝑛m\\times n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN , however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps let us efficiently discretize the space of possible output box shapes. ",
"title": "SSD: Single Shot MultiBox Detector"
},
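The following arithmetic sketch is not from the paper; the feature map size, number of default boxes k, and class count c are assumed values chosen only to illustrate how the $(c+4)kmn$ output count arises for one feature map.

```python
# Hypothetical illustration of the SSD predictor output count for one feature map.
# Assumed values: a 38x38 feature map, k=4 default boxes per location,
# c=21 class scores per box (20 VOC classes + background).
m, n = 38, 38          # spatial size of the feature map
k = 4                  # default boxes per feature map location
c = 21                 # class scores per box

num_filters = (c + 4) * k          # 3x3 conv filters applied at every location
num_outputs = num_filters * m * n  # total predictions produced by this feature map

print(f"filters per location: {num_filters}")   # 100
print(f"outputs for this map: {num_outputs}")   # 144400
```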
{
"id": "1512.02325_all_8",
"text": " The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO and for the region proposal stage of Faster R-CNN and MultiBox. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_9",
"text": " During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox ). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. ",
"title": "SSD: Single Shot MultiBox Detector"
},
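Below is an illustrative numpy sketch of this matching strategy; it is a hedged reconstruction, not the authors' code, and the box format and function names are assumptions.

```python
import numpy as np

def jaccard(a, b):
    """Pairwise IoU between boxes a (N,4) and b (M,4) in (xmin, ymin, xmax, ymax)."""
    lt = np.maximum(a[:, None, :2], b[None, :, :2])
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def match(defaults, gts, threshold=0.5):
    """Return, for each default box, the index of its matched ground truth, or -1."""
    if len(gts) == 0:
        return np.full(len(defaults), -1, dtype=int)
    overlaps = jaccard(defaults, gts)            # (num_defaults, num_gt)
    matches = np.full(len(defaults), -1, dtype=int)
    # 1) best default box for each ground truth (bipartite step, as in MultiBox)
    best_default = overlaps.argmax(axis=0)
    matches[best_default] = np.arange(len(gts))
    # 2) additionally match any default box whose best overlap exceeds the threshold
    best_gt = overlaps.argmax(axis=1)
    above = (overlaps.max(axis=1) >= threshold) & (matches == -1)
    matches[above] = best_gt[above]
    return matches
```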
{
"id": "1512.02325_all_10",
"text": " The SSD training objective is derived from the MultiBox objective (7, 8) but is extended to handle multiple object categories. Let xijp={1,0}superscriptsubscript𝑥𝑖𝑗𝑝10x_{ij}^{p}=\\{1,0\\} be an indicator for matching the i𝑖i-th default box to the j𝑗j-th ground truth box of category p𝑝p. In the matching strategy above, we can have ∑ixijp≥1subscript𝑖superscriptsubscript𝑥𝑖𝑗𝑝1\\sum_{i}x_{ij}^{p}\\geq 1. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf): L(x,c,l,g)=1N(Lconf(x,c)+αLloc(x,l,g))𝐿𝑥𝑐𝑙𝑔1𝑁subscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐𝛼subscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔L(x,c,l,g)=\\frac{1}{N}(L_{conf}(x,c)+\\alpha L_{loc}(x,l,g)) (1) where N is the number of matched default boxes. If N=0𝑁0N=0, wet set the loss to 0. The localization loss is a Smooth L1 loss between the predicted box (l𝑙l) and the ground truth box (g𝑔g) parameters. Similar to Faster R-CNN , we regress to offsets for the center (cx,cy𝑐𝑥𝑐𝑦cx,cy) of the default bounding box (d𝑑d) and for its width (w𝑤w) and height (hℎh). Lloc(x,l,g)=∑i∈PosN∑m∈{cx,cy,w,h}xijksmoothL1(lim−g^jm)g^jcx=(gjcx−dicx)/diwg^jcy=(gjcy−dicy)/dihg^jw=log(gjwdiw)g^jh=log(gjhdih)formulae-sequencesubscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔superscriptsubscript𝑖𝑃𝑜𝑠𝑁subscript𝑚𝑐𝑥𝑐𝑦𝑤ℎsuperscriptsubscript𝑥𝑖𝑗𝑘subscriptsmoothL1superscriptsubscript𝑙𝑖𝑚superscriptsubscript^𝑔𝑗𝑚superscriptsubscript^𝑔𝑗𝑐𝑥superscriptsubscript𝑔𝑗𝑐𝑥superscriptsubscript𝑑𝑖𝑐𝑥superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗𝑐𝑦superscriptsubscript𝑔𝑗𝑐𝑦superscriptsubscript𝑑𝑖𝑐𝑦superscriptsubscript𝑑𝑖ℎsuperscriptsubscript^𝑔𝑗𝑤superscriptsubscript𝑔𝑗𝑤superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗ℎsuperscriptsubscript𝑔𝑗ℎsuperscriptsubscript𝑑𝑖ℎ\\begin{split}L_{loc}(x,l,g)=\\sum_{i\\in Pos}^{N}\\sum_{m\\in\\{cx,cy,w,h\\}}&x_{ij}^{k}\\text{smooth}_{\\text{L1}}(l_{i}^{m}-\\hat{g}_{j}^{m})\\\\ \\hat{g}_{j}^{cx}=(g_{j}^{cx}-d_{i}^{cx})/d_{i}^{w}\\quad\\quad&\\hat{g}_{j}^{cy}=(g_{j}^{cy}-d_{i}^{cy})/d_{i}^{h}\\\\ \\hat{g}_{j}^{w}=\\log\\Big{(}\\frac{g_{j}^{w}}{d_{i}^{w}}\\Big{)}\\quad\\quad&\\hat{g}_{j}^{h}=\\log\\Big{(}\\frac{g_{j}^{h}}{d_{i}^{h}}\\Big{)}\\end{split} (2) The confidence loss is the softmax loss over multiple classes confidences (c𝑐c). Lconf(x,c)=−∑i∈PosNxijplog(c^ip)−∑i∈Neglog(c^i0)wherec^ip=exp(cip)∑pexp(cip)formulae-sequencesubscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐superscriptsubscript𝑖𝑃𝑜𝑠𝑁superscriptsubscript𝑥𝑖𝑗𝑝𝑙𝑜𝑔superscriptsubscript^𝑐𝑖𝑝subscript𝑖𝑁𝑒𝑔𝑙𝑜𝑔superscriptsubscript^𝑐𝑖0wheresuperscriptsubscript^𝑐𝑖𝑝superscriptsubscript𝑐𝑖𝑝subscript𝑝superscriptsubscript𝑐𝑖𝑝L_{conf}(x,c)=-\\sum_{i\\in Pos}^{N}x_{ij}^{p}log(\\hat{c}_{i}^{p})-\\sum_{i\\in Neg}log(\\hat{c}_{i}^{0})\\quad\\text{where}\\quad\\hat{c}_{i}^{p}=\\frac{\\exp(c_{i}^{p})}{\\sum_{p}\\exp(c_{i}^{p})} (3) and the weight term α𝛼\\alpha is set to 1 by cross validation. ",
"title": "SSD: Single Shot MultiBox Detector"
},
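A minimal numpy sketch of the overall objective follows. It assumes matching and offset encoding have already been done, uses the matched background label to cover the negative term, and omits hard negative mining; the argument names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def smooth_l1(x):
    """Elementwise Smooth L1, as used for the localization loss."""
    return np.where(np.abs(x) < 1.0, 0.5 * x**2, np.abs(x) - 0.5)

def ssd_loss(cls_logits, loc_preds, loc_targets, labels, alpha=1.0, background=0):
    """Weighted sum of confidence (softmax) and localization (Smooth L1) losses.

    cls_logits:  (D, C) raw class scores for D default boxes
    loc_preds:   (D, 4) predicted offsets (cx, cy, w, h)
    loc_targets: (D, 4) encoded ground-truth offsets g_hat, defined for positives
    labels:      (D,) matched class index per default box, `background` for negatives
    """
    pos = labels != background
    n = max(int(pos.sum()), 1)                 # number of matched default boxes

    # softmax confidence loss over all boxes (positives + background term)
    logits = cls_logits - cls_logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    l_conf = -log_probs[np.arange(len(labels)), labels].sum()

    # localization loss over positive boxes only
    l_loc = smooth_l1(loc_preds[pos] - loc_targets[pos]).sum()

    return (l_conf + alpha * l_loc) / n
```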
{
"id": "1512.02325_all_11",
"text": " To handle different object scales, some methods (4, 9) suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works (10, 11) have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8888\\times 8 and 4×4444\\times 4) which are used in the framework. In practice, we can use many more with small computational overhead. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_12",
"text": " Feature maps from different levels within a network are known to have different (empirical) receptive field sizes . Fortunately, within the SSD framework, the default boxes do not necessary need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m𝑚m feature maps for prediction. The scale of the default boxes for each feature map is computed as: sk=smin+smax−sminm−1(k−1),k∈(1,m)formulae-sequencesubscript𝑠𝑘subscript𝑠minsubscript𝑠maxsubscript𝑠min𝑚1𝑘1𝑘1𝑚s_{k}=s_{\\text{min}}+\\frac{s_{\\text{max}}-s_{\\text{min}}}{m-1}(k-1),\\quad k\\in(1,m) (4) where sminsubscript𝑠mins_{\\text{min}} is 0.2 and smaxsubscript𝑠maxs_{\\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as ar∈{1,2,3,12,13}subscript𝑎𝑟1231213a_{r}\\in\\{1,2,3,\\frac{1}{2},\\frac{1}{3}\\}. We can compute the width (wka=skarsuperscriptsubscript𝑤𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟w_{k}^{a}=s_{k}\\sqrt{a_{r}}) and height (hka=sk/arsuperscriptsubscriptℎ𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟h_{k}^{a}=s_{k}/\\sqrt{a_{r}}) for each default box. For the aspect ratio of 1, we also add a default box whose scale is sk′=sksk+1subscriptsuperscript𝑠′𝑘subscript𝑠𝑘subscript𝑠𝑘1s^{\\prime}_{k}=\\sqrt{s_{k}s_{k+1}}, resulting in 6 default boxes per feature map location. We set the center of each default box to (i+0.5|fk|,j+0.5|fk|)𝑖0.5subscript𝑓𝑘𝑗0.5subscript𝑓𝑘(\\frac{i+0.5}{|f_{k}|},\\frac{j+0.5}{|f_{k}|}), where |fk|subscript𝑓𝑘|f_{k}| is the size of the k𝑘k-th square feature map, i,j∈(0,|fk|)𝑖𝑗0subscript𝑓𝑘i,j\\in(0,|f_{k}|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
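An illustrative Python sketch of this default box tiling is given below (not the released code). The choice of 1.0 for the scale beyond the last map, used only by the extra aspect-ratio-1 box, and the example feature map sizes are assumptions.

```python
import math

def default_boxes(feature_map_sizes, s_min=0.2, s_max=0.9,
                  aspect_ratios=(1.0, 2.0, 3.0, 1.0 / 2, 1.0 / 3)):
    """Generate (cx, cy, w, h) default boxes, all in [0, 1] image coordinates."""
    m = len(feature_map_sizes)
    scales = [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]
    scales.append(1.0)  # assumed value for s_{m+1}, used only by the extra box
    boxes = []
    for k, fk in enumerate(feature_map_sizes):
        for i in range(fk):
            for j in range(fk):
                cx, cy = (j + 0.5) / fk, (i + 0.5) / fk
                for ar in aspect_ratios:
                    boxes.append((cx, cy, scales[k] * math.sqrt(ar),
                                  scales[k] / math.sqrt(ar)))
                # extra aspect-ratio-1 box with scale sqrt(s_k * s_{k+1})
                s_prime = math.sqrt(scales[k] * scales[k + 1])
                boxes.append((cx, cy, s_prime, s_prime))
    return boxes

# Example: six square feature maps, 6 default boxes per location.
boxes = default_boxes([38, 19, 10, 5, 3, 1])
print(len(boxes))  # 6 * (38^2 + 19^2 + 10^2 + 5^2 + 3^2 + 1^2) = 11640
```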
{
"id": "1512.02325_all_13",
"text": " By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4444\\times 4 feature map, but not to any default boxes in the 8×8888\\times 8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_14",
"text": " After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
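A possible numpy sketch of this hard negative mining step is shown below; it is illustrative only, and the argument names are assumptions.

```python
import numpy as np

def hard_negative_mining(conf_loss, labels, neg_pos_ratio=3, background=0):
    """Keep all positives and the highest-loss negatives, at most 3 negatives per positive.

    conf_loss: (D,) per-default-box confidence loss
    labels:    (D,) matched class per default box (`background` for negatives)
    Returns a boolean mask of the boxes to include in the confidence loss.
    """
    pos = labels != background
    num_neg = int(neg_pos_ratio * pos.sum())
    neg_loss = np.where(pos, -np.inf, conf_loss)   # exclude positives from the ranking
    order = np.argsort(-neg_loss)                  # negatives sorted by descending loss
    keep_neg = np.zeros_like(pos)
    keep_neg[order[:num_neg]] = True
    return pos | keep_neg
```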
{
"id": "1512.02325_all_15",
"text": " To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options: • Use the entire original input image. • Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9. • Randomly sample a patch. The size of each sampled patch is (0.1, 1) of the original image size, and the aspect ratio is between 1212\\frac{1}{2} and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in . ",
"title": "SSD: Single Shot MultiBox Detector"
},
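A rough Python sketch of these sampling options follows. It reflects one possible reading of the patch-size constraint (as an area fraction), is not the authors' implementation, and leaves the min-jaccard check against the ground truth boxes to the caller.

```python
import random

def sample_patch_option(img_w, img_h):
    """Pick one of the SSD-style sampling options (a hedged sketch).

    Returns (patch, min_iou): patch is (xmin, ymin, xmax, ymax) in pixels, or None
    for "use the entire original image"; min_iou is the required overlap, if any.
    """
    min_iou = random.choice([None, 0.1, 0.3, 0.5, 0.7, 0.9, "random"])
    if min_iou is None:
        return None, None                      # option 1: whole image
    if min_iou == "random":
        min_iou = 0.0                          # option 3: no overlap constraint
    scale = random.uniform(0.1, 1.0)           # assumed: patch area fraction
    ratio = random.uniform(0.5, 2.0)           # aspect ratio between 1/2 and 2
    w = min(img_w, int(round(img_w * (scale * ratio) ** 0.5)))
    h = min(img_h, int(round(img_h * (scale / ratio) ** 0.5)))
    x = random.randint(0, img_w - w)
    y = random.randint(0, img_h - h)
    return (x, y, x + w, y + h), min_iou
```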
{
"id": "1512.02325_all_16",
"text": " Our experiments are all based on VGG16 , which is pre-trained on the ILSVRC CLS-LOC dataset . Similar to DeepLab-LargeFOV , we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s222𝑠22\\times 2-s2 to 3×3−s133𝑠13\\times 3-s1, and use the à trous algorithm to fill the ”holes”. We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10−3superscript10310^{-3}, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe and is open source at: https://github.com/weiliu89/caffe/tree/ssd . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_17",
"text": " On this dataset, we compare against Fast R-CNN and Faster R-CNN on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_18",
"text": " Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3333For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.15, and 0.07 on conv4_3.. We initialize the parameters for all the newly added convolutional layers with the ”xavier” method . For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1313\\frac{1}{3} and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2.3. Since, as pointed out in , conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10−3superscript10310^{-3} learning rate for 40k iterations, then continue training for 10k iterations with 10−4superscript10410^{-4} and 10−5superscript10510^{-5}. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512512512512\\times 512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tuning them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_19",
"text": " To understand the performance of our two SSD models in more details, we used the detection analysis tool from . Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with “weak” (0.1 jaccard overlap) criteria. Compared to R-CNN , SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300300300300\\times 300 to 512×512512512512\\times 512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_20",
"text": " To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300300300300\\times 300), except for specified changes to the settings or component(s). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_21",
"text": " Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO . Table 2 shows that we can improve 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_22",
"text": " More default box shapes is better. As described in Sec. 2.2.3, by default we use 6 default boxes per location. If we remove the boxes with 1313\\frac{1}{3} and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with 1212\\frac{1}{2} and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_23",
"text": " Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV . If we use the full VGG16, keeping pool5 with 2×2−s222𝑠22\\times 2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_24",
"text": " We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with 10−3superscript10310^{-3} learning rate for 60k iterations, then 10−4superscript10410^{-4} for 20k iterations. Table 4 shows the results of our SSD300 and SSD512444\\ssmallhttp://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=4 model. We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512512512512\\times 512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_25",
"text": " To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2.3, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300×300300300300\\times 300 image)555For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.1, and 0.04 on conv4_3.. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_26",
"text": " We use the trainval35k for training. We first train the model with 10−3superscript10310^{-3} learning rate for 160k iterations, and then continue training for 40k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both [email protected] and mAP@(0.5:0.95). SSD300 has a similar [email protected] as ION and Faster R-CNN , but is worse in [email protected]. By increasing the image size to 512×512512512512\\times 512, our SSD512 is better than Faster R-CNN in both criteria. Interestingly, we observe that SSD512 is 5.3% better in [email protected], but is only 1.2% better in [email protected]. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects with SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 3.2, we show some detection examples on COCO test-dev with the SSD512 model. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_27",
"text": " We applied the same network architecture we used for COCO to the ILSVRC DET dataset . We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in . We first train the model with 10−3superscript10310^{-3} learning rate for 320k iterations, and then continue training for 80k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. We can achieve 43.4 mAP on the val2 set . Again, it validates that SSD is a general framework for high quality real-time detection. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_28",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_29",
"text": " Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a ”zoom in” operation and can generate many larger training examples. To implement a ”zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16×16\\times of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new ”expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 3.2 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_30",
"text": " An alternative way of improving SSD is to design a better tiling of default boxes so that its position and scale are better aligned with the receptive field of each position on a feature map. We leave this for future work. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_31",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_32",
"text": " Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon [email protected]. ",
"title": "SSD: Single Shot MultiBox Detector"
},
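An illustrative per-class NMS sketch matching the thresholds quoted above (confidence 0.01, jaccard 0.45, top 200) is given below. This is a plain numpy reconstruction, not the released Caffe code, and applying the top-k cap inside the per-class function is a simplification.

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.01, iou_thresh=0.45, top_k=200):
    """Confidence filtering + greedy NMS for one class.

    boxes:  (N, 4) as (xmin, ymin, xmax, ymax); scores: (N,) for one class.
    Returns indices of the kept boxes, highest score first.
    """
    idx = np.where(scores > conf_thresh)[0]
    order = idx[np.argsort(-scores[idx])]
    kept = []
    while len(order) > 0 and len(kept) < top_k:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        # IoU between the current box and the remaining candidates
        lt = np.maximum(boxes[i, :2], boxes[rest, :2])
        rb = np.minimum(boxes[i, 2:], boxes[rest, 2:])
        wh = np.clip(rb - lt, 0, None)
        inter = wh[:, 0] * wh[:, 1]
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        area_r = np.prod(boxes[rest, 2:] - boxes[rest, :2], axis=1)
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return kept
```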
{
"id": "1512.02325_all_33",
"text": " Table 7 shows the comparison between SSD, Faster R-CNN, and YOLO. Both our SSD300 and SSD512 method outperforms Faster R-CNN in both speed and accuracy. Although Fast YOLO can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_34",
"text": " There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches – Deformable Part Model (DPM) and Selective Search – had comparable performance. However, after the dramatic improvement brought on by R-CNN , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_35",
"text": " The original R-CNN approach has been improved in a variety of ways. The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox for learning objectness. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_36",
"text": " The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox (7, 8), the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate in other tasks. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_37",
"text": " Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat , a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories). Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have similar architecture to OverFeat ; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_38",
"text": " This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods (5, 7). We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN in terms of accuracy on PASCAL VOC and COCO, while being 3×3\\times faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO alternative, while producing markedly superior detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_39",
"text": " Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_40",
"text": " This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771. ",
"title": "SSD: Single Shot MultiBox Detector"
}
] |
Is the last 1x1 convolutional layer used because the task requires outputting a segmentation map?
|
Yes, the last 1x1 convolution layer maps the features to the desired number of classes for the segmentation map [8].
|
[
8
] |
[
{
"id": "1505.04597_all_0",
"text": " In the last two years, deep convolutional networks have outperformed the state of the art in many visual recognition tasks, e.g. (7, 3). While convolutional networks have already existed for a long time , their success was limited due to the size of the available training sets and the size of the considered networks. The breakthrough by Krizhevsky et al. was due to supervised training of a large network with 8 layers and millions of parameters on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_1",
"text": " The typical use of convolutional networks is on classification tasks, where the output to an image is a single class label. However, in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel. Moreover, thousands of training images are usually beyond reach in biomedical tasks. Hence, Ciresan et al. trained a network in a sliding-window setup to predict the class label of each pixel by providing a local region (patch) around that pixel as input. First, this network can localize. Secondly, the training data in terms of patches is much larger than the number of training images. The resulting network won the EM segmentation challenge at ISBI 2012 by a large margin. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_2",
"text": " Obviously, the strategy in Ciresan et al. has two drawbacks. First, it is quite slow because the network must be run separately for each patch, and there is a lot of redundancy due to overlapping patches. Secondly, there is a trade-off between localization accuracy and the use of context. Larger patches require more max-pooling layers that reduce the localization accuracy, while small patches allow the network to see only little context. More recent approaches (11, 4) proposed a classifier output that takes into account the features from multiple layers. Good localization and the use of context are possible at the same time. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_3",
"text": " In this paper, we build upon a more elegant architecture, the so-called “fully convolutional network” . We modify and extend this architecture such that it works with very few training images and yields more precise segmentations; see Figure 1. The main idea in is to supplement a usual contracting network by successive layers, where pooling operators are replaced by upsampling operators. Hence, these layers increase the resolution of the output. In order to localize, high resolution features from the contracting path are combined with the upsampled output. A successive convolution layer can then learn to assemble a more precise output based on this information. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_4",
"text": " One important modification in our architecture is that in the upsampling part we have also a large number of feature channels, which allow the network to propagate context information to higher resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting path, and yields a u-shaped architecture. The network does not have any fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels, for which the full context is available in the input image. This strategy allows the seamless segmentation of arbitrarily large images by an overlap-tile strategy (see Figure 2). To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important to apply the network to large images, since otherwise the resolution would be limited by the GPU memory. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_5",
"text": " As for our tasks there is very little training data available, we use excessive data augmentation by applying elastic deformations to the available training images. This allows the network to learn invariance to such deformations, without the need to see these transformations in the annotated image corpus. This is particularly important in biomedical segmentation, since deformation used to be the most common variation in tissue and realistic deformations can be simulated efficiently. The value of data augmentation for learning invariance has been shown in Dosovitskiy et al. in the scope of unsupervised feature learning. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_6",
"text": " Another challenge in many cell segmentation tasks is the separation of touching objects of the same class; see Figure 3. To this end, we propose the use of a weighted loss, where the separating background labels between touching cells obtain a large weight in the loss function. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_7",
"text": " The resulting network is applicable to various biomedical segmentation problems. In this paper, we show results on the segmentation of neuronal structures in EM stacks (an ongoing competition started at ISBI 2012), where we outperformed the network of Ciresan et al. . Furthermore, we show results for cell segmentation in light microscopy images from the ISBI cell tracking challenge 2015. Here we won with a large margin on the two most challenging 2D transmitted light datasets. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_8",
"text": " The network architecture is illustrated in Figure 1. It consists of a contracting path (left side) and an expansive path (right side). The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit (ReLU) and a 2x2 max pooling operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 convolution (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a 1x1 convolution is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
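For illustration, a heavily reduced two-level PyTorch sketch of this layout follows (the original is a 23-layer Caffe model; the depth, the tiny input size in the comment, and other details are assumptions). It shows valid 3x3 convolutions with ReLU, 2x2 max pooling, the 2x2 up-convolution, cropping plus concatenation of the contracting-path features, and the final 1x1 convolution to the number of classes.

```python
import torch
from torch import nn

def double_conv(in_ch, out_ch):
    # two 3x3 (unpadded) convolutions, each followed by a ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net sketch: contracting path, expansive path, 1x1 output conv."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.down1 = double_conv(in_ch, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)  # "up-convolution"
        self.up1 = double_conv(128, 64)
        self.out = nn.Conv2d(64, num_classes, 1)             # final 1x1 convolution

    @staticmethod
    def crop_to(x, target):
        # crop contracting-path features to the upsampled map (valid convs shrink maps)
        dh = (x.shape[2] - target.shape[2]) // 2
        dw = (x.shape[3] - target.shape[3]) // 2
        return x[:, :, dh:dh + target.shape[2], dw:dw + target.shape[3]]

    def forward(self, x):
        c1 = self.down1(x)
        b = self.bottom(self.pool(c1))
        u = self.up(b)
        u = torch.cat([self.crop_to(c1, u), u], dim=1)  # skip connection
        return self.out(self.up1(u))

# e.g. TinyUNet()(torch.randn(1, 1, 124, 124)) -> per-pixel class logits
```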
{
"id": "1505.04597_all_9",
"text": " To allow a seamless tiling of the output segmentation map (see Figure 2), it is important to select the input tile size such that all 2x2 max-pooling operations are applied to a layer with an even x- and y-size. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_10",
"text": " The input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of Caffe . Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize the overhead and make maximum use of the GPU memory, we favor large input tiles over a large batch size and hence reduce the batch to a single image. Accordingly we use a high momentum (0.99) such that a large number of the previously seen training samples determine the update in the current optimization step. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_11",
"text": " The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function. The soft-max is defined as pk(𝐱)=exp(ak(𝐱))/(∑k′=1Kexp(ak′(𝐱)))subscript𝑝𝑘𝐱subscript𝑎𝑘𝐱superscriptsubscriptsuperscript𝑘′1𝐾subscript𝑎superscript𝑘′𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}})=\\exp({a_{k}(\\boldsymbol{\\mathbf{x}})})/\\left(\\sum_{k^{\\prime}=1}^{K}\\exp(a_{k^{\\prime}}(\\boldsymbol{\\mathbf{x}}))\\right) where ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) denotes the activation in feature channel k𝑘k at the pixel position 𝐱∈Ω𝐱Ω\\boldsymbol{\\mathbf{x}}\\in\\Omega with Ω⊂ℤ2Ωsuperscriptℤ2\\Omega\\subset\\mathbb{Z}^{2}. K𝐾K is the number of classes and pk(𝐱)subscript𝑝𝑘𝐱{p}_{k}(\\boldsymbol{\\mathbf{x}}) is the approximated maximum-function. I.e. pk(𝐱)≈1subscript𝑝𝑘𝐱1{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 1 for the k𝑘k that has the maximum activation ak(𝐱)subscript𝑎𝑘𝐱a_{k}(\\boldsymbol{\\mathbf{x}}) and pk(𝐱)≈0subscript𝑝𝑘𝐱0{p}_{k}(\\boldsymbol{\\mathbf{x}})\\approx 0 for all other k𝑘k. The cross entropy then penalizes at each position the deviation of pℓ(𝐱)(𝐱)subscript𝑝ℓ𝐱𝐱{p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}}) from 1 using E=∑𝐱∈Ωw(𝐱)log(pℓ(𝐱)(𝐱))𝐸subscript𝐱Ω𝑤𝐱subscript𝑝ℓ𝐱𝐱E=\\sum_{\\boldsymbol{\\mathbf{x}}\\in\\Omega}w(\\boldsymbol{\\mathbf{x}})\\log({p}_{\\ell(\\boldsymbol{\\mathbf{x}})}(\\boldsymbol{\\mathbf{x}})) (1) where ℓ:Ω→{1,…,K}:ℓ→Ω1…𝐾\\ell:\\Omega\\rightarrow\\{1,\\dots,K\\} is the true label of each pixel and w:Ω→ℝ:𝑤→Ωℝw:\\Omega\\rightarrow\\mathds{R} is a weight map that we introduced to give some pixels more importance in the training. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
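A small numpy sketch of the pixel-wise soft-max with weighted cross entropy (Eq. 1) follows; the array shapes are assumptions, and the sign is flipped so the returned value is a loss to minimize.

```python
import numpy as np

def weighted_cross_entropy(activations, labels, weight_map):
    """Pixel-wise soft-max + weighted cross entropy.

    activations: (K, H, W) raw scores a_k(x); labels: (H, W) integer class map;
    weight_map:  (H, W) per-pixel weights w(x).
    """
    a = activations - activations.max(axis=0, keepdims=True)   # numerical stability
    log_p = a - np.log(np.exp(a).sum(axis=0, keepdims=True))   # log p_k(x)
    h, w = labels.shape
    log_p_true = log_p[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    # negative of Eq. (1), so that minimizing this value maximizes p at the true label
    return -(weight_map * log_p_true).sum()
```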
{
"id": "1505.04597_all_12",
"text": " We pre-compute the weight map for each ground truth segmentation to compensate the different frequency of pixels from a certain class in the training data set, and to force the network to learn the small separation borders that we introduce between touching cells (See Figure 3c and d). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_13",
"text": " The separation border is computed using morphological operations. The weight map is then computed as w(𝐱)=wc(𝐱)+w0⋅exp(−(d1(𝐱)+d2(𝐱))22σ2)𝑤𝐱subscript𝑤𝑐𝐱⋅subscript𝑤0superscriptsubscript𝑑1𝐱subscript𝑑2𝐱22superscript𝜎2w(\\boldsymbol{\\mathbf{x}})=w_{c}(\\boldsymbol{\\mathbf{x}})+w_{0}\\cdot\\exp\\left(-\\frac{(d_{1}(\\boldsymbol{\\mathbf{x}})+d_{2}(\\boldsymbol{\\mathbf{x}}))^{2}}{2\\sigma^{2}}\\right) (2) where wc:Ω→ℝ:subscript𝑤𝑐→Ωℝw_{c}:\\Omega\\rightarrow\\mathds{R} is the weight map to balance the class frequencies, d1:Ω→ℝ:subscript𝑑1→Ωℝd_{1}:\\Omega\\rightarrow\\mathds{R} denotes the distance to the border of the nearest cell and d2:Ω→ℝ:subscript𝑑2→Ωℝd_{2}:\\Omega\\rightarrow\\mathds{R} the distance to the border of the second nearest cell. In our experiments we set w0=10subscript𝑤010w_{0}=10 and σ≈5𝜎5\\sigma\\approx 5 pixels. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
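One possible numpy/scipy sketch of Eq. (2) is given below. It assumes an instance-labelled mask and a precomputed class-balancing map w_c; restricting the border term to background pixels is an interpretation of the text, not stated code.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def unet_weight_map(instance_labels, w_c, w0=10.0, sigma=5.0):
    """Per-pixel weight map from an instance-labelled mask.

    instance_labels: (H, W) int array, 0 = background, 1..C = individual cells.
    w_c:             (H, W) class-balancing weights, assumed precomputed elsewhere.
    """
    ids = [i for i in np.unique(instance_labels) if i != 0]
    if len(ids) < 2:
        return w_c.copy()   # the border term needs at least two cells
    # distance from every pixel to the nearest pixel of each cell
    dists = np.stack([distance_transform_edt(instance_labels != i) for i in ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]          # nearest and second-nearest cell
    border_term = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    return w_c + border_term * (instance_labels == 0)   # assumed: applied on background
```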
{
"id": "1505.04597_all_14",
"text": " In deep networks with many convolutional layers and different paths through the network, a good initialization of the weights is extremely important. Otherwise, parts of the network might give excessive activations, while other parts never contribute. Ideally the initial weights should be adapted such that each feature map in the network has approximately unit variance. For a network with our architecture (alternating convolution and ReLU layers) this can be achieved by drawing the initial weights from a Gaussian distribution with a standard deviation of 2/N2𝑁\\sqrt{2/N}, where N𝑁N denotes the number of incoming nodes of one neuron . E.g. for a 3x3 convolution and 64 feature channels in the previous layer N=9⋅64=576𝑁⋅964576N=9\\cdot 64=576. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
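A short sketch of this initialization rule; the layer shape is made up for illustration.

```python
import numpy as np

def he_normal_conv_weights(out_ch, in_ch, k=3, seed=None):
    """Draw conv weights from N(0, 2/N) with N = k*k*in_ch incoming nodes."""
    rng = np.random.default_rng(seed)
    n = k * k * in_ch                      # e.g. 3*3*64 = 576
    std = np.sqrt(2.0 / n)
    return rng.normal(0.0, std, size=(out_ch, in_ch, k, k))

w = he_normal_conv_weights(128, 64)        # std = sqrt(2/576), roughly 0.059
```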
{
"id": "1505.04597_all_15",
"text": " Data augmentation is essential to teach the network the desired invariance and robustness properties, when only few training samples are available. In case of microscopical images we primarily need shift and rotation invariance as well as robustness to deformations and gray value variations. Especially random elastic deformations of the training samples seem to be the key concept to train a segmentation network with very few annotated images. We generate smooth deformations using random displacement vectors on a coarse 3 by 3 grid. The displacements are sampled from a Gaussian distribution with 10 pixels standard deviation. Per-pixel displacements are then computed using bicubic interpolation. Drop-out layers at the end of the contracting path perform further implicit data augmentation. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
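An illustrative scipy sketch of such a deformation for a single-channel image follows; cubic spline upsampling stands in for the bicubic interpolation mentioned above, and the grid size and standard deviation follow the text.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def elastic_deform(image, grid=3, sigma=10.0, seed=None):
    """Smooth elastic deformation from a coarse grid of random displacements (a sketch).

    Displacements are drawn on a coarse grid x grid lattice from N(0, sigma^2) pixels,
    upsampled to a dense per-pixel field, and applied to a 2-D grayscale image.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # dense sampling coordinates into the coarse displacement lattice
    gy = np.linspace(0, grid - 1, h)
    gx = np.linspace(0, grid - 1, w)
    gyy, gxx = np.meshgrid(gy, gx, indexing="ij")

    def dense_field():
        coarse = rng.normal(0.0, sigma, (grid, grid))
        return map_coordinates(coarse, [gyy, gxx], order=3, mode="nearest")

    dy, dx = dense_field(), dense_field()
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="reflect")
```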
{
"id": "1505.04597_all_16",
"text": " We demonstrate the application of the u-net to three different segmentation tasks. The first task is the segmentation of neuronal structures in electron microscopic recordings. An example of the data set and our obtained segmentation is displayed in Figure 2. We provide the full result as Supplementary Material. The data set is provided by the EM segmentation challenge that was started at ISBI 2012 and is still open for new contributions. The training data is a set of 30 images (512x512 pixels) from serial section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground truth segmentation map for cells (white) and membranes (black). The test set is publicly available, but its segmentation maps are kept secret. An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation is done by thresholding the map at 10 different levels and computation of the “warping error”, the “Rand error” and the “pixel error” . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_17",
"text": " The u-net (averaged over 7 rotated versions of the input data) achieves without any further pre- or postprocessing a warping error of 0.0003529 (the new best score, see Table 1) and a rand-error of 0.0382. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_18",
"text": " This is significantly better than the sliding-window convolutional network result by Ciresan et al. , whose best submission had a warping error of 0.000420 and a rand error of 0.0504. In terms of rand error the only better performing algorithms on this data set use highly data set specific post-processing methods111The authors of this algorithm have submitted 78 different solutions to achieve this result. applied to the probability map of Ciresan et al. . ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_19",
"text": " We also applied the u-net to a cell segmentation task in light microscopic images. This segmenation task is part of the ISBI cell tracking challenge 2014 and 2015 (10, 13). The first data set “PhC-U373”222Data set provided by Dr. Sanjay Kumar. Department of Bioengineering University of California at Berkeley. Berkeley CA (USA) contains Glioblastoma-astrocytoma U373 cells on a polyacrylimide substrate recorded by phase contrast microscopy (see Figure 4a,b and Supp. Material). It contains 35 partially annotated training images. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_20",
"text": " Here we achieve an average IOU (“intersection over union”) of 92%, which is significantly better than the second best algorithm with 83% (see Table 2). ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_21",
"text": " The second data set “DIC-HeLa”333Data set provided by Dr. Gert van Cappellen Erasmus Medical Center. Rotterdam. The Netherlands are HeLa cells on a flat glass recorded by differential interference contrast (DIC) microscopy (see Figure 3, Figure 4c,d and Supp. Material). It contains 20 partially annotated training images. Here we achieve an average IOU of 77.5% which is significantly better than the second best algorithm with 46%. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
},
{
"id": "1505.04597_all_22",
"text": " The u-net architecture achieves very good performance on very different biomedical segmentation applications. Thanks to data augmentation with elastic deformations, it only needs very few annotated images and has a very reasonable training time of only 10 hours on a NVidia Titan GPU (6 GB). We provide the full Caffe-based implementation and the trained networks444U-net implementation, trained networks and supplementary material available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. We are sure that the u-net architecture can be applied easily to many more tasks. ",
"title": "U-Net: Convolutional Networks for Biomedical Image Segmentation"
}
] |
How can the DeepFool algorithm be adapted to find minimal adversarial perturbations for any Lp norm?
|
To adapt the algorithm to any Lp norm, only lines 10 and 11 of the algorithm need to be substituted with $\hat{l}\leftarrow\operatorname{arg\,min}_{k\neq\hat{k}(\bm{x}_{0})}\frac{|f^{\prime}_{k}|}{\|\bm{w}^{\prime}_{k}\|_{q}}$ and $\bm{r}_{i}\leftarrow\frac{|f^{\prime}_{\hat{l}}|}{\|\bm{w}^{\prime}_{\hat{l}}\|_{q}^{q}}|\bm{w}^{\prime}_{\hat{l}}|^{q-1}\odot\text{sign}(\bm{w}^{\prime}_{\hat{l}})$, where q = p/(p-1) [13].
|
[
13
] |
[
{
"id": "1511.04599_all_0",
"text": " Deep neural networks are powerful learning models that achieve state-of-the-art pattern recognition performance in many research areas such as bioinformatics (1, 16), speech (12, 6), and computer vision (10, 8). Though deep networks have exhibited very good performance in classification tasks, they have recently been shown to be particularly unstable to adversarial perturbations of the data . In fact, very small and often imperceptible perturbations of the data samples are sufficient to fool state-of-the-art classifiers and result in incorrect classification. (e.g., Figure 1). Formally, for a given classifier, we define an adversarial perturbation as the minimal perturbation 𝒓𝒓\\bm{r} that is sufficient to change the estimated label k^(𝒙)^𝑘𝒙\\hat{k}(\\bm{x}): Δ(𝒙;k^):=min𝒓‖𝒓‖2 subject to k^(𝒙+𝒓)≠k^(𝒙),assignΔ𝒙^𝑘subscript𝒓subscriptnorm𝒓2 subject to ^𝑘𝒙𝒓^𝑘𝒙\\displaystyle\\Delta(\\bm{x};\\hat{k}):=\\min_{\\bm{r}}\\|\\bm{r}\\|_{2}\\text{ subject to }\\hat{k}(\\bm{x}+\\bm{r})\\neq\\hat{k}(\\bm{x}), (1) where 𝒙𝒙\\bm{x} is an image and k^(𝒙)^𝑘𝒙\\hat{k}(\\bm{x}) is the estimated label. We call Δ(𝒙;k^)Δ𝒙^𝑘\\Delta(\\bm{x};\\hat{k}) the robustness of k^^𝑘\\hat{k} at point 𝒙𝒙\\bm{x}. The robustness of classifier k^^𝑘\\hat{k} is then defined as ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_1",
"text": " ρadv(k^)=𝔼𝒙Δ(𝒙;k^)‖𝒙‖2,subscript𝜌adv^𝑘subscript𝔼𝒙Δ𝒙^𝑘subscriptnorm𝒙2\\rho_{\\text{adv}}(\\hat{k})=\\mathbb{E}_{\\bm{x}}\\frac{\\Delta(\\bm{x};\\hat{k})}{\\|\\bm{x}\\|_{2}}, (2) where 𝔼𝒙subscript𝔼𝒙\\mathbb{E}_{\\bm{x}} is the expectation over the distribution of data. The study of adversarial perturbations helps us understand what features are used by a classifier. The existence of such examples is seemingly in contradiction with the generalization ability of the learning algorithms. While deep networks achieve state-of-the-art performance in image classification tasks, they are not robust at all to small adversarial perturbations and tend to misclassify minimally perturbed data that looks visually similar to clean samples. Though adversarial attacks are specific to the classifier, it seems that the adversarial perturbations are generalizable across different models . This can actually become a real concern from a security point of view. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_2",
"text": " An accurate method for finding the adversarial perturbations is thus necessary to study and compare the robustness of different classifiers to adversarial perturbations. It might be the key to a better understanding of the limits of current architectures and to design methods to increase robustness. Despite the importance of the vulnerability of state-of-the-art classifiers to adversarial instability, no well-founded method has been proposed to compute adversarial perturbations and we fill this gap in this paper. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_3",
"text": " Our main contributions are the following: • We propose a simple yet accurate method for computing and comparing the robustness of different classifiers to adversarial perturbations. • We perform an extensive experimental comparison, and show that 1) our method computes adversarial perturbations more reliably and efficiently than existing methods 2) augmenting training data with adversarial examples significantly increases the robustness to adversarial perturbations. • We show that using imprecise approaches for the computation of adversarial perturbations could lead to different and sometimes misleading conclusions about the robustness. Hence, our method provides a better understanding of this intriguing phenomenon and of its influence factors. We now review some of the relevant work. The phenomenon of adversarial instability was first introduced and studied in . The authors estimated adversarial examples by solving penalized optimization problems and presented an analysis showing that the high complexity of neural networks might be a reason explaining the presence of adversarial examples. Unfortunately, the optimization method employed in is time-consuming and therefore does not scale to large datasets. In , the authors showed that convolutional networks are not invariant to some sort of transformations based on the experiments done on Pascal3D+ annotations. Recently, Tsai et al. provided a software to misclassify a given image in a specified class, without necessarily finding the smallest perturbation. Nguyen et al. generated synthetic unrecognizable images, which are classified with high confidence. The authors of also studied a related problem of finding the minimal geometric transformation that fools image classifiers, and provided quantitative measure of the robustness of classifiers to geometric transformations. Closer to our work, the authors of introduced the “fast gradient sign” method, which computes the adversarial perturbations for a given classifier very efficiently. Despite its efficiency, this method provides only a coarse approximation of the optimal perturbation vectors. In fact, it performs a unique gradient step, which often leads to sub-optimal solutions. Then in an attempt to build more robust classifiers to adversarial perturbations, introduced a smoothness penalty in the training procedure that allows to boost the robustness of the classifier. Notably, the method in was applied in order to generate adversarial perturbations. We should finally mention that the phenomenon of adversarial instability also led to theoretical work in that studied the problem of adversarial perturbations on some families of classifiers, and provided upper bounds on the robustness of these classifiers. A deeper understanding of the phenomenon of adversarial instability for more complex classifiers is however needed; the method proposed in this work can be seen as a baseline to efficiently and accurately generate adversarial perturbations in order to better understand this phenomenon. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_4",
"text": " The rest of paper is organized as follows. In Section 2, we introduce an efficient algorithm to find adversarial perturbations in a binary classifier. The extension to the multiclass problem is provided in Section 3. In Section 4, we propose extensive experiments that confirm the accuracy of our method and outline its benefits in building more robust classifiers. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_5",
"text": " As a multiclass classifier can be viewed as aggregation of binary classifiers, we first propose the algorithm for binary classifiers. That is, we assume here k^(𝒙)=sign(f(𝒙))^𝑘𝒙sign𝑓𝒙\\hat{k}(\\bm{x})=\\text{sign}(f(\\bm{x})), where f𝑓f is an arbitrary scalar-valued image classification function f:ℝn→ℝ:𝑓→superscriptℝ𝑛ℝf:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}. We also denote by ℱ≜{𝒙:f(𝒙)=0}≜ℱconditional-set𝒙𝑓𝒙0\\mathscr{F}\\triangleq\\{\\bm{x}:f(\\bm{x})=0\\} the level set at zero of f𝑓f. We begin by analyzing the case where f𝑓f is an affine classifier f(𝒙)=𝒘T𝒙+b𝑓𝒙superscript𝒘𝑇𝒙𝑏f(\\bm{x})=\\bm{w}^{T}\\bm{x}+b, and then derive the general algorithm, which can be applied to any differentiable binary classifier f𝑓f. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_6",
"text": " In the case where the classifier f𝑓f is affine, it can easily be seen that the robustness of f𝑓f at point 𝒙0subscript𝒙0\\bm{x}_{0}, Δ(𝒙0;f)Δsubscript𝒙0𝑓\\Delta(\\bm{x}_{0};f)222From now on, we refer to a classifier either by f𝑓f or its corresponding discrete mapping k^^𝑘\\hat{k}. Therefore, ρadv(k^)=ρadv(f)subscript𝜌adv^𝑘subscript𝜌adv𝑓\\rho_{\\text{adv}}(\\hat{k})=\\rho_{\\text{adv}}(f) and Δ(𝒙;k^)=Δ(𝒙;f)Δ𝒙^𝑘Δ𝒙𝑓\\Delta(\\bm{x};\\hat{k})=\\Delta(\\bm{x};f)., is equal to the distance from 𝒙0subscript𝒙0\\bm{x}_{0} to the separating affine hyperplane ℱ={𝒙:𝒘T𝒙+b=0}ℱconditional-set𝒙superscript𝒘𝑇𝒙𝑏0\\mathscr{F}=\\{\\bm{x}:\\bm{w}^{T}\\bm{x}+b=0\\} (Figure 2). The minimal perturbation to change the classifier’s decision corresponds to the orthogonal projection of 𝒙0subscript𝒙0\\bm{x}_{0} onto ℱℱ\\mathscr{F}. It is given by the closed-form formula: 𝒓∗(𝒙0)subscript𝒓subscript𝒙0\\displaystyle\\bm{r}_{*}(\\bm{x}_{0}) :=argmin‖𝒓‖2assignabsentargminsubscriptnorm𝒓2\\displaystyle:=\\operatorname*{arg\\,min}\\|\\bm{r}\\|_{2} (3) subject to sign (f(𝒙0+𝒓))≠ sign(f(𝒙0))subject to sign 𝑓subscript𝒙0𝒓 sign𝑓subscript𝒙0\\displaystyle\\text{ subject to }\\text{ sign }(f(\\bm{x}_{0}+\\bm{r}))\\neq\\text{ sign}(f(\\bm{x}_{0})) =−f(𝒙0)‖𝒘‖22𝒘.absent𝑓subscript𝒙0superscriptsubscriptnorm𝒘22𝒘\\displaystyle=-\\frac{f(\\bm{x}_{0})}{\\|\\bm{w}\\|_{2}^{2}}\\bm{w}. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
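The closed-form projection of Eq. (3) is simple to express in code. The following minimal NumPy sketch uses hypothetical values for the weight vector `w`, bias `b`, and point `x0`; they are illustrative only and not taken from the paper.

```python
import numpy as np

def affine_binary_perturbation(w, b, x0):
    """Orthogonal projection of x0 onto the hyperplane w^T x + b = 0 (Eq. (3))."""
    f_x0 = w @ x0 + b
    return -(f_x0 / np.dot(w, w)) * w

# Toy example (hypothetical numbers).
w = np.array([1.0, -2.0, 0.5])
b = 0.3
x0 = np.array([0.2, 0.1, -0.4])

r_star = affine_binary_perturbation(w, b, x0)
print("minimal perturbation:", r_star)
print("value before:", w @ x0 + b, "value at the boundary:", w @ (x0 + r_star) + b)
```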
{
"id": "1511.04599_all_7",
"text": " Assuming now that f𝑓f is a general binary differentiable classifier, we adopt an iterative procedure to estimate the robustness Δ(𝒙0;f)Δsubscript𝒙0𝑓\\Delta(\\bm{x}_{0};f). Specifically, at each iteration, f𝑓f is linearized around the current point 𝒙isubscript𝒙𝑖\\bm{x}_{i} and the minimal perturbation of the linearized classifier is computed as argmin𝒓i‖𝒓i‖2 subject to f(𝒙i)+∇f(𝒙i)T𝒓i=0.subscriptargminsubscript𝒓𝑖subscriptnormsubscript𝒓𝑖2 subject to 𝑓subscript𝒙𝑖∇𝑓superscriptsubscript𝒙𝑖𝑇subscript𝒓𝑖0\\displaystyle\\operatorname*{arg\\,min}_{\\bm{r}_{i}}\\|\\bm{r}_{i}\\|_{2}\\text{ subject to }f(\\bm{x}_{i})+\\nabla f(\\bm{x}_{i})^{T}\\bm{r}_{i}=0. (4) The perturbation 𝒓isubscript𝒓𝑖\\bm{r}_{i} at iteration i𝑖i of the algorithm is computed using the closed form solution in Eq. (3), and the next iterate 𝒙i+1subscript𝒙𝑖1\\bm{x}_{i+1} is updated. The algorithm stops when 𝒙i+1subscript𝒙𝑖1\\bm{x}_{i+1} changes sign of the classifier. The DeepFool algorithm for binary classifiers is summarized in Algorithm 1 and a geometric illustration of the method is shown in Figure 3. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_8",
"text": " In practice, the above algorithm can often converge to a point on the zero level set ℱℱ\\mathscr{F}. In order to reach the other side of the classification boundary, the final perturbation vector 𝒓^^𝒓\\hat{\\bm{r}} is multiplied by a constant 1+η1𝜂1+\\eta, with η≪1much-less-than𝜂1\\eta\\ll 1. In our experiments, we have used η=0.02𝜂0.02\\eta=0.02. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
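Putting the iterative linearization (Algorithm 1) and the $1+\eta$ overshoot together gives the following sketch. It assumes user-supplied callables `f` and `grad_f` for the classifier value and its gradient; the logistic-style toy classifier at the bottom is a made-up example, not one of the networks evaluated in the paper.

```python
import numpy as np

def deepfool_binary(x0, f, grad_f, eta=0.02, max_iter=50):
    """Sketch of Algorithm 1: iterative linearization of a binary classifier f."""
    x = x0.astype(float)
    sign0 = np.sign(f(x0))
    r_total = np.zeros_like(x)
    for _ in range(max_iter):
        if np.sign(f(x0 + (1 + eta) * r_total)) != sign0:
            break
        g = grad_f(x)
        # Closed-form step of Eq. (4): project onto the linearized boundary.
        r_i = -(f(x) / np.dot(g, g)) * g
        r_total += r_i
        x = x + r_i
    return (1 + eta) * r_total  # small overshoot to cross the boundary

# Toy differentiable classifier (hypothetical): f(x) = tanh(w.x + b)
w, b = np.array([2.0, -1.0]), 0.5
f = lambda x: np.tanh(w @ x + b)
grad_f = lambda x: (1 - np.tanh(w @ x + b) ** 2) * w
x0 = np.array([1.0, 0.3])
r = deepfool_binary(x0, f, grad_f)
print("fooled:", np.sign(f(x0 + r)) != np.sign(f(x0)), "norm:", np.linalg.norm(r))
```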
{
"id": "1511.04599_all_9",
"text": " We now extend the DeepFool method to the multiclass case. The most common used scheme for multiclass classifiers is one-vs-all. Hence, we also propose our method based on this classification scheme. In this scheme, the classifier has c𝑐c outputs where c𝑐c is the number of classes. Therefore, a classifier can be defined as f:ℝn→ℝc:𝑓→superscriptℝ𝑛superscriptℝ𝑐f:\\mathbb{R}^{n}\\rightarrow\\mathbb{R}^{c} and the classification is done by the following mapping: k^(𝒙)=argmaxkfk(𝒙),^𝑘𝒙subscriptargmax𝑘subscript𝑓𝑘𝒙\\hat{k}(\\bm{x})=\\operatorname*{arg\\,max}_{k}f_{k}(\\bm{x}), (5) where fk(𝒙)subscript𝑓𝑘𝒙f_{k}(\\bm{x}) is the output of f(𝒙)𝑓𝒙f(\\bm{x}) that corresponds to the kthsuperscript𝑘thk^{\\text{th}} class. Similarly to the binary case, we first present the proposed approach for the linear case and then we generalize it to other classifiers. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_10",
"text": " Let f(𝒙)𝑓𝒙f(\\bm{x}) be an affine classifier, i.e., f(𝒙)=𝐖⊤𝒙+𝒃𝑓𝒙superscript𝐖top𝒙𝒃f(\\bm{x})=\\mathbf{W}^{\\top}\\bm{x}+\\bm{b} for a given 𝐖𝐖\\mathbf{W} and 𝒃𝒃\\bm{b}. Since the mapping k^^𝑘\\hat{k} is the outcome of a one-vs-all classification scheme, the minimal perturbation to fool the classifier can be rewritten as follows argmin𝒓‖𝒓‖2s.t. ∃k:𝒘k⊤(𝒙0+𝒓)+bk≥𝒘k^(𝒙0)⊤(𝒙0+𝒓)+bk^(𝒙0),:subscriptargmin𝒓subscriptdelimited-∥∥𝒓2s.t. 𝑘superscriptsubscript𝒘𝑘topsubscript𝒙0𝒓subscript𝑏𝑘superscriptsubscript𝒘^𝑘subscript𝒙0topsubscript𝒙0𝒓subscript𝑏^𝑘subscript𝒙0\\begin{split}&\\operatorname*{arg\\,min}_{\\bm{r}}\\|\\bm{r}\\|_{2}\\\\ &\\text{s.t. }\\exists k:\\bm{w}_{k}^{\\top}(\\bm{x}_{0}+\\bm{r})+b_{k}\\geq\\bm{w}_{\\hat{k}(\\bm{x}_{0})}^{\\top}(\\bm{x}_{0}+\\bm{r})+b_{\\hat{k}(\\bm{x}_{0})},\\end{split} (6) where 𝒘ksubscript𝒘𝑘\\bm{w}_{k} is the kthsuperscript𝑘thk^{\\text{th}} column of 𝐖𝐖\\mathbf{W}. Geometrically, the above problem corresponds to the computation of the distance between 𝒙0subscript𝒙0\\bm{x}_{0} and the complement of the convex polyhedron P𝑃P, P=⋂k=1c{𝒙:fk^(𝒙0)(𝒙)≥fk(𝒙)},𝑃superscriptsubscript𝑘1𝑐conditional-set𝒙subscript𝑓^𝑘subscript𝒙0𝒙subscript𝑓𝑘𝒙\\displaystyle P=\\bigcap_{k=1}^{c}\\{\\bm{x}:f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x})\\geq f_{k}(\\bm{x})\\}, (7) where 𝒙0subscript𝒙0\\bm{x}_{0} is located inside P𝑃P. We denote this distance by dist(𝒙0,Pc)distsubscript𝒙0superscript𝑃𝑐\\text{{dist}}(\\bm{x}_{0},P^{c}). The polyhedron P𝑃P defines the region of the space where f𝑓f outputs the label k^(𝒙0)^𝑘subscript𝒙0\\hat{k}(\\bm{x}_{0}). This setting is depicted in Figure 4. The solution to the problem in Eq. (6) can be computed in closed form as follows. Define l^(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}) to be the closest hyperplane of the boundary of P𝑃P (e.g. l^(𝒙0)=3^𝑙subscript𝒙03\\hat{l}(\\bm{x}_{0})=3 in Figure 4). Formally, l^(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}) can be computed as follows l^(𝒙0)=argmink≠k^(𝒙0)|fk(𝒙0)−fk^(𝒙0)(𝒙0)|‖𝒘k−𝒘k^(𝒙0)‖2.^𝑙subscript𝒙0subscriptargmin𝑘^𝑘subscript𝒙0subscript𝑓𝑘subscript𝒙0subscript𝑓^𝑘subscript𝒙0subscript𝒙0subscriptnormsubscript𝒘𝑘subscript𝒘^𝑘subscript𝒙02\\hat{l}(\\bm{x}_{0})=\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f_{k}(\\bm{x}_{0})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{0})\\right|}{\\|\\bm{w}_{k}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}\\|_{2}}. (8) The minimum perturbation 𝒓∗(𝒙0)subscript𝒓subscript𝒙0\\bm{r}_{*}(\\bm{x}_{0}) is the vector that projects 𝒙0subscript𝒙0\\bm{x}_{0} on the hyperplane indexed by l^(𝒙0)^𝑙subscript𝒙0\\hat{l}(\\bm{x}_{0}), i.e., 𝒓∗(𝒙0)=|fl^(𝒙0)(𝒙0)−fk^(𝒙0)(𝒙0)|‖𝒘l^(𝒙0)−𝒘k^(𝒙0)‖22(𝒘l^(𝒙0)−𝒘k^(𝒙0)).subscript𝒓subscript𝒙0subscript𝑓^𝑙subscript𝒙0subscript𝒙0subscript𝑓^𝑘subscript𝒙0subscript𝒙0superscriptsubscriptnormsubscript𝒘^𝑙subscript𝒙0subscript𝒘^𝑘subscript𝒙022subscript𝒘^𝑙subscript𝒙0subscript𝒘^𝑘subscript𝒙0\\bm{r}_{*}(\\bm{x}_{0})=\\frac{\\left|f_{\\hat{l}(\\bm{x}_{0})}(\\bm{x}_{0})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{0})\\right|}{\\|\\bm{w}_{\\hat{l}(\\bm{x}_{0})}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}\\|_{2}^{2}}(\\bm{w}_{\\hat{l}(\\bm{x}_{0})}-\\bm{w}_{\\hat{k}(\\bm{x}_{0})}). (9) In other words, we find the closest projection of 𝒙0subscript𝒙0\\bm{x}_{0} on faces of P𝑃P. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
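Eqs. (8)-(9) translate directly into a few lines of NumPy. The sketch below assumes an affine one-vs-all classifier with score matrix `W` and bias `b`; the 3-class, 2-dimensional example values are hypothetical.

```python
import numpy as np

def deepfool_linear_multiclass(x0, W, b):
    """Closed-form minimal perturbation for an affine one-vs-all classifier (Eqs. (8)-(9))."""
    f = W.T @ x0 + b                     # class scores, shape (c,)
    k_hat = int(np.argmax(f))
    best_dist, r_star = np.inf, None
    for k in range(len(f)):
        if k == k_hat:
            continue
        w_diff = W[:, k] - W[:, k_hat]
        f_diff = f[k] - f[k_hat]
        dist = abs(f_diff) / np.linalg.norm(w_diff)   # distance to hyperplane k (Eq. (8))
        if dist < best_dist:
            best_dist = dist
            r_star = (abs(f_diff) / np.dot(w_diff, w_diff)) * w_diff   # Eq. (9)
    return r_star

# Toy 3-class example in 2D (hypothetical weights).
W = np.array([[1.0, -0.5, 0.2],
              [0.3,  0.8, -1.0]])
b = np.array([0.0, 0.1, -0.2])
x0 = np.array([0.5, 0.5])
r = deepfool_linear_multiclass(x0, W, b)
print("label before:", np.argmax(W.T @ x0 + b),
      "after:", np.argmax(W.T @ (x0 + 1.02 * r) + b))
```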
{
"id": "1511.04599_all_11",
"text": " We now extend the DeepFool algorithm to the general case of multiclass differentiable classifiers. For general non-linear classifiers, the set P𝑃P in Eq. (7) that describes the region of the space where the classifier outputs label k^(𝒙0)^𝑘subscript𝒙0\\hat{k}(\\bm{x}_{0}) is no longer a polyhedron. Following the explained iterative linearization procedure in the binary case, we approximate the set P𝑃P at iteration i𝑖i by a polyhedron P~isubscript~𝑃𝑖\\tilde{P}_{i} P~i=⋂k=1c{\\displaystyle\\tilde{P}_{i}=\\bigcap_{k=1}^{c}\\Big{\\{} 𝒙:fk(𝒙i)−fk^(𝒙0)(𝒙i):𝒙subscript𝑓𝑘subscript𝒙𝑖subscript𝑓^𝑘subscript𝒙0subscript𝒙𝑖\\displaystyle\\bm{x}:f_{k}(\\bm{x}_{i})-f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{i}) (10) +∇fk(𝒙i)⊤𝒙−∇fk^(𝒙0)(𝒙i)⊤𝒙≤0}.\\displaystyle+\\nabla f_{k}(\\bm{x}_{i})^{\\top}\\bm{x}-\\nabla f_{\\hat{k}(\\bm{x}_{0})}(\\bm{x}_{i})^{\\top}\\bm{x}\\leq 0\\Big{\\}}. We then approximate, at iteration i𝑖i, the distance between 𝒙isubscript𝒙𝑖\\bm{x}_{i} and the complement of P𝑃P, dist(𝒙i,Pc)distsubscript𝒙𝑖superscript𝑃𝑐\\text{{dist}}(\\bm{x}_{i},P^{c}), by dist(𝒙i,P~ic)distsubscript𝒙𝑖superscriptsubscript~𝑃𝑖𝑐\\text{{dist}}(\\bm{x}_{i},\\tilde{P}_{i}^{c}). Specifically, at each iteration of the algorithm, the perturbation vector that reaches the boundary of the polyhedron P~isubscript~𝑃𝑖\\tilde{P}_{i} is computed, and the current estimate updated. The method is given in Algorithm 2. It should be noted that the proposed algorithm operates in a greedy way and is not guaranteed to converge to the optimal perturbation in (1). However, we have observed in practice that our algorithm yields very small perturbations which are believed to be good approximations of the minimal perturbation. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
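A sketch of this general multiclass procedure (Algorithm 2) follows. It assumes the caller provides `f(x)`, returning the score vector, and `jac(x)`, returning the matrix of per-class score gradients; the linear toy model reused at the bottom is only there to make the snippet runnable.

```python
import numpy as np

def deepfool_multiclass(x0, f, jac, eta=0.02, max_iter=50):
    """Sketch of Algorithm 2: iterative linearization of a multiclass classifier.
    f(x) returns the score vector (c,), jac(x) the (c, n) matrix of score gradients."""
    x = x0.astype(float)
    k0 = int(np.argmax(f(x0)))
    r_total = np.zeros_like(x)
    for _ in range(max_iter):
        scores, grads = f(x), jac(x)
        if int(np.argmax(scores)) != k0:
            break
        w = grads - grads[k0]           # w'_k = grad f_k - grad f_{k0}
        fdiff = scores - scores[k0]     # f'_k = f_k - f_{k0}
        dists = np.full(len(scores), np.inf)
        for k in range(len(scores)):
            if k != k0:
                dists[k] = abs(fdiff[k]) / (np.linalg.norm(w[k]) + 1e-12)
        l_hat = int(np.argmin(dists))
        r_i = (abs(fdiff[l_hat]) / (np.dot(w[l_hat], w[l_hat]) + 1e-12)) * w[l_hat]
        r_total += r_i
        x = x0 + (1 + eta) * r_total
    return (1 + eta) * r_total

# Linear toy model: scores W.T x + b, constant Jacobian W.T (hypothetical values).
W = np.array([[1.0, -0.5, 0.2], [0.3, 0.8, -1.0]])
b = np.array([0.0, 0.1, -0.2])
x0 = np.array([0.5, 0.5])
r = deepfool_multiclass(x0, lambda x: W.T @ x + b, lambda x: W.T)
print("new label:", np.argmax(W.T @ (x0 + r) + b))
```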
{
"id": "1511.04599_all_12",
"text": " It should be noted that the optimization strategy of DeepFool is strongly tied to existing optimization techniques. In the binary case, it can be seen as Newton’s iterative algorithm for finding roots of a nonlinear system of equations in the underdetermined case . This algorithm is known as the normal flow method. The convergence analysis of this optimization technique can be found for example in . Our algorithm in the binary case can alternatively be seen as a gradient descent algorithm with an adaptive step size that is automatically chosen at each iteration. The linearization in Algorithm 2 is also similar to a sequential convex programming where the constraints are linearized at each step. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_13",
"text": " In this paper, we have measured the perturbations using the ℓ2subscriptℓ2\\ell_{2} norm. Our framework is however not limited to this choice, and the proposed algorithm can simply be adapted to find minimal adversarial perturbations for any ℓpsubscriptℓ𝑝\\ell_{p} norm (p∈(1,∞)𝑝1p\\in(1,\\infty)). To do so, the update steps in line 10 and 11 in Algorithm 2 must be respectively substituted by the following updates l^^𝑙\\displaystyle\\hat{l} ←argmink≠k^(𝒙0)|fk′|‖𝒘k′‖q,←absentsubscriptargmin𝑘^𝑘subscript𝒙0subscriptsuperscript𝑓′𝑘subscriptnormsubscriptsuperscript𝒘′𝑘𝑞\\displaystyle\\leftarrow\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f^{\\prime}_{k}\\right|}{\\|\\bm{w}^{\\prime}_{k}\\|_{q}}, (11) 𝒓isubscript𝒓𝑖\\displaystyle\\bm{r}_{i} ←|fl^′|‖𝒘l^′‖qq|𝒘l^′|q−1⊙sign(𝒘l^′),←absentdirect-productsubscriptsuperscript𝑓′^𝑙superscriptsubscriptnormsubscriptsuperscript𝒘′^𝑙𝑞𝑞superscriptsubscriptsuperscript𝒘′^𝑙𝑞1signsubscriptsuperscript𝒘′^𝑙\\displaystyle\\leftarrow\\frac{|f^{\\prime}_{\\hat{l}}|}{\\|\\bm{w}^{\\prime}_{\\hat{l}}\\|_{q}^{q}}|\\bm{w}^{\\prime}_{\\hat{l}}|^{q-1}\\odot\\text{sign}(\\bm{w}^{\\prime}_{\\hat{l}}), (12) where ⊙direct-product\\odot is the pointwise product and q=pp−1𝑞𝑝𝑝1q=\\frac{p}{p-1}.333To see this, one can apply Holder’s inequality to obtain a lower bound on the ℓpsubscriptℓ𝑝\\ell_{p} norm of the perturbation. In particular, when p=∞𝑝p=\\infty (i.e., the supremum norm ℓ∞subscriptℓ\\ell_{\\infty}), these update steps become l^^𝑙\\displaystyle\\hat{l} ←argmink≠k^(𝒙0)|fk′|‖𝒘k′‖1,←absentsubscriptargmin𝑘^𝑘subscript𝒙0subscriptsuperscript𝑓′𝑘subscriptnormsubscriptsuperscript𝒘′𝑘1\\displaystyle\\leftarrow\\operatorname*{arg\\,min}_{k\\neq{\\hat{k}(\\bm{x}_{0})}}\\frac{\\left|f^{\\prime}_{k}\\right|}{\\|\\bm{w}^{\\prime}_{k}\\|_{1}}, (13) 𝒓isubscript𝒓𝑖\\displaystyle\\bm{r}_{i} ←|fl^′|‖𝒘l^′‖1sign(𝒘l^′).←absentsubscriptsuperscript𝑓′^𝑙subscriptnormsubscriptsuperscript𝒘′^𝑙1signsubscriptsuperscript𝒘′^𝑙\\displaystyle\\leftarrow\\frac{|f^{\\prime}_{\\hat{l}}|}{\\|\\bm{w}^{\\prime}_{\\hat{l}}\\|_{1}}\\text{sign}(\\bm{w}^{\\prime}_{\\hat{l}}). (14) ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
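As a small illustration of the $\ell_\infty$ variant, the following sketch implements the single update of Eqs. (13)-(14), given already-computed score differences `fdiff` and gradient differences `w` relative to the current label; the numbers are hypothetical.

```python
import numpy as np

def linf_step(fdiff, w, k_hat):
    """One l_infinity update (Eqs. (13)-(14)): fdiff[k] = f'_k, w[k] = w'_k, relative to k_hat."""
    ratios = [abs(fdiff[k]) / (np.abs(w[k]).sum() + 1e-12) if k != k_hat else np.inf
              for k in range(len(fdiff))]
    l_hat = int(np.argmin(ratios))
    r_i = (abs(fdiff[l_hat]) / (np.abs(w[l_hat]).sum() + 1e-12)) * np.sign(w[l_hat])
    return l_hat, r_i

# Hypothetical per-class differences for a 3-class problem (current label k_hat = 0).
fdiff = np.array([0.0, -0.4, -1.25])
w = np.array([[0.0, 0.0], [-1.5, 0.5], [-0.8, -1.3]])
print(linf_step(fdiff, w, k_hat=0))
```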
{
"id": "1511.04599_all_14",
"text": " We now test our DeepFool algorithm on deep convolutional neural networks architectures applied to MNIST, CIFAR-10, and ImageNet image classification datasets. We consider the following deep neural network architectures: • MNIST: A two-layer fully connected network, and a two-layer LeNet convoluational neural network architecture . Both networks are trained with SGD with momentum using the MatConvNet package. • CIFAR-10: We trained a three-layer LeNet architecture, as well as a Network In Network (NIN) architecture . • ILSVRC 2012: We used CaffeNet and GoogLeNet pre-trained models. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_15",
"text": " In order to evaluate the robustness to adversarial perturbations of a classifier f𝑓f, we compute the average robustness ρ^adv(f)subscript^𝜌adv𝑓\\hat{\\rho}_{\\text{adv}}(f), defined by ρ^adv(f)=1|𝒟|∑𝒙∈𝒟‖𝒓^(𝒙)‖2‖𝒙‖2,subscript^𝜌adv𝑓1𝒟subscript𝒙𝒟subscriptnorm^𝒓𝒙2subscriptnorm𝒙2\\hat{\\rho}_{\\text{adv}}(f)=\\frac{1}{|\\mathscr{D}|}\\sum_{\\bm{x}\\in\\mathscr{D}}\\frac{\\|\\hat{\\bm{r}}(\\bm{x})\\|_{2}}{\\|\\bm{x}\\|_{2}}, (15) where 𝒓^(𝒙)^𝒓𝒙\\hat{\\bm{r}}(\\bm{x}) is the estimated minimal perturbation obtained using DeepFool, and 𝒟𝒟\\mathscr{D} denotes the test set444For ILSVRC2012, we used the validation data.. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
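The average robustness of Eq. (15) is a one-liner once the perturbations are available; the sketch below assumes lists of test points and of their estimated minimal perturbations, with made-up values for illustration.

```python
import numpy as np

def average_robustness(X, r_hat):
    """rho_hat_adv of Eq. (15): mean of ||r_hat(x)||_2 / ||x||_2 over the test set."""
    return float(np.mean([np.linalg.norm(r) / np.linalg.norm(x) for x, r in zip(X, r_hat)]))

# Hypothetical test points and their estimated minimal perturbations.
X = [np.array([1.0, 2.0]), np.array([0.5, -0.5])]
r_hat = [np.array([0.01, -0.02]), np.array([0.005, 0.005])]
print(average_robustness(X, r_hat))
```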
{
"id": "1511.04599_all_16",
"text": " We compare the proposed DeepFool approach to state-of-the-art techniques to compute adversarial perturbations in and . The method in solves a series of penalized optimization problems to find the minimal perturbation, whereas estimates the minimal perturbation by taking the sign of the gradient 𝒓^(𝒙)=ϵsign(∇𝒙J(𝜽,𝒙,y)),^𝒓𝒙italic-ϵsignsubscript∇𝒙𝐽𝜽𝒙𝑦\\displaystyle\\hat{\\bm{r}}(\\bm{x})=\\epsilon\\,\\text{sign}\\left(\\nabla_{\\bm{x}}J(\\bm{\\theta},\\bm{x},y)\\right), with J𝐽J the cost used to train the neural network, 𝜽𝜽\\bm{\\theta} is the model parameters, and y𝑦y is the label of 𝒙𝒙\\bm{x}. The method is called fast gradient sign method. In practice, in the absence of general rules to choose the parameter ϵitalic-ϵ\\epsilon, we chose the smallest ϵitalic-ϵ\\epsilon such that 90%percent9090\\% of the data are misclassified after perturbation.555Using this method, we observed empirically that one cannot reach 100%percent100100\\% misclassification rate on some datasets. In fact, even by increasing ϵitalic-ϵ\\epsilon to be very large, this method can fail in misclassifying all samples. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
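The fast gradient sign baseline reduces to a single elementwise operation once the loss gradient is available. The sketch below assumes the gradient of the training loss w.r.t. the input has already been computed elsewhere; the values are hypothetical.

```python
import numpy as np

def fast_gradient_sign(grad_loss_x, epsilon):
    """r_hat(x) = epsilon * sign(grad_x J(theta, x, y)), with the gradient precomputed."""
    return epsilon * np.sign(grad_loss_x)

# Hypothetical gradient of the training loss w.r.t. a 4-pixel "image".
g = np.array([0.3, -0.1, 0.0, 2.5])
print(fast_gradient_sign(g, epsilon=0.1))
```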
{
"id": "1511.04599_all_17",
"text": " We report in Table 1 the accuracy and average robustness ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} of each classifier computed using different methods. We also show the running time required for each method to compute one adversarial sample. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_18",
"text": " It can be seen that DeepFool estimates smaller perturbations (hence closer to minimal perturbation defined in (1)) than the ones computed using the competitive approaches. For example, the average perturbation obtained using DeepFool is 555 times lower than the one estimated with . On the ILSVRC2012 challenge dataset, the average perturbation is one order of magnitude smaller compared to the fast gradient method. It should be noted moreover that the proposed approach also yields slightly smaller perturbation vectors than the method in . The proposed approach is hence more accurate in detecting directions that can potentially fool neural networks. As a result, DeepFool can be used as a valuable tool to accurately assess the robustness of classifiers. On the complexity aspect, the proposed approach is substantially faster than the standard method proposed in . In fact, while the approach involves a costly minimization of a series of objective functions, we observed empirically that DeepFool converges in a few iterations (i.e., less than 333) to a perturbation vector that fools the classifier. Hence, the proposed approach reaches a more accurate perturbation vector compared to state-of-the-art methods, while being computationally efficient. This makes it readily suitable to be used as a baseline method to estimate the robustness of very deep neural networks on large-scale datasets. In that context, we provide the first quantitative evaluation of the robustness of state-of-the-art classifiers on the large-scale ImageNet dataset. It can be seen that despite their very good test accuracy, these methods are extremely unstable to adversarial perturbations: a perturbation that is 100010001000 smaller in magnitude than the original image is sufficient to fool state-of-the-art deep neural networks. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_19",
"text": " We illustrate in Figure 1 perturbed images generated by the fast gradient sign and DeepFool. It can be observed that the proposed method generates adversarial perturbations which are hardly perceptible, while the fast gradient sign method outputs a perturbation image with higher norm. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_20",
"text": " It should be noted that, when perturbations are measured using the ℓ∞subscriptℓ\\ell_{\\infty} norm, the above conclusions remain unchanged: DeepFool yields adversarial perturbations that are smaller (hence closer to the optimum) compared to other methods for computing adversarial examples. Table 2 reports the ℓ∞subscriptℓ\\ell_{\\infty} robustness to adversarial perturbations measured by ρ^adv∞(f)=1|𝒟|∑𝒙∈𝒟‖𝒓^(𝒙)‖∞‖𝒙‖∞superscriptsubscript^𝜌adv𝑓1𝒟subscript𝒙𝒟subscriptnorm^𝒓𝒙subscriptnorm𝒙\\hat{\\rho}_{\\text{adv}}^{\\infty}(f)=\\frac{1}{|\\mathscr{D}|}\\sum_{\\bm{x}\\in\\mathscr{D}}\\frac{\\|\\hat{\\bm{r}}(\\bm{x})\\|_{\\infty}}{\\|\\bm{x}\\|_{\\infty}}, where 𝒓^(𝒙)^𝒓𝒙\\hat{\\bm{r}}(\\bm{x}) is computed respectively using DeepFool (with p=∞𝑝p=\\infty, see Section 3.3), and the Fast gradient sign method for MNIST and CIFAR-10 tasks. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_21",
"text": " Fine-tuning using adversarial examples ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_22",
"text": " In this section, we fine-tune the networks of Table 1 on adversarial examples to build more robust classifiers for the MNIST and CIFAR-10 tasks. Specifically, for each network, we performed two experiments: (i) Fine-tuning the network on DeepFool’s adversarial examples, (ii) Fine-tuning the network on the fast gradient sign adversarial examples. We fine-tune the networks by performing 5 additional epochs, with a 50%percent5050\\% decreased learning rate only on the perturbed training set. For each experiment, the same training data was used through all 555 extra epochs. For the sake of completeness, we also performed 555 extra epochs on the original data. The evolution of ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} for the different fine-tuning strategies is shown in Figures 6(a) to 6(d), where the robustness ρ^advsubscript^𝜌adv\\hat{\\rho}_{\\text{adv}} is estimated using DeepFool, since this is the most accurate method, as shown in Table 1. Observe that fine-tuning with DeepFool adversarial examples significantly increases the robustness of the networks to adversarial perturbations even after one extra epoch. For example, the robustness of the networks on MNIST is improved by 50% and NIN’s robustness is increased by about 40%. On the other hand, quite surprisingly, the method in can lead to a decreased robustness to adversarial perturbations of the network. We hypothesize that this behavior is due to the fact that perturbations estimated using the fast gradient sign method are much larger than minimal adversarial perturbations. Fine-tuning the network with overly perturbed images decreases the robustness of the networks to adversarial perturbations. To verify this hypothesis, we compare in Figure 7 the adversarial robustness of a network that is fine-tuned with the adversarial examples obtained using DeepFool, where norms of perturbations have been deliberately multiplied by α=1,2,3𝛼123\\alpha=1,2,3. Interestingly, we see that by magnifying the norms of the adversarial perturbations, the robustness of the fine-tuned network is decreased. This might explain why overly perturbed images decrease the robustness of MNIST networks: these perturbations can really change the class of the digits, hence fine-tuning based on these examples can lead to a drop of the robustness (for an illustration, see Figure 8). This lends credence to our hypothesis, and further shows the importance of designing accurate methods to compute minimal perturbations. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_23",
"text": " Table 3 lists the accuracies of the fine-tuned networks. It can be seen that fine-tuning with DeepFool can improve the accuracy of the networks. Conversely, fine-tuning with the approach in has led to a decrease of the test accuracy in all our experiments. This confirms the explanation that the fast gradient sign method outputs overly perturbed images that lead to images that are unlikely to occur in the test data. Hence, it decreases the performance of the method as it acts as a regularizer that does not represent the distribution of the original data. This effect is analogous to geometric data augmentation schemes, where large transformations of the original samples have a counter-productive effect on generalization.666While the authors of reported an increased generalization performance on the MNIST task (from 0.94%percent0.940.94\\% to 0.84%percent0.840.84\\%) using adversarial regularization, it should be noted that the their experimental setup is significantly different as trained the network based on a modified cost function, while we performed straightforward fine-tuning. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_24",
"text": " To emphasize the importance of a correct estimation of the minimal perturbation, we now show that using approximate methods can lead to wrong conclusions regarding the adversarial robustness of networks. We fine-tune the NIN classifier on the fast gradient sign adversarial examples. We follow the procedure described earlier but this time, we decreased the learning rate by 90%. We have evaluated the adversarial robustness of this network at different extra epochs using DeepFool and the fast gradient sign method. As one can see in Figure 9, the red plot exaggerates the effect of training on the adversarial examples. Moreover, it is not sensitive enough to demonstrate the loss of robustness at the first extra epoch. These observations confirm that using an accurate tool to measure the robustness of classifiers is crucial to derive conclusions about the robustness of networks. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_25",
"text": " In this work, we proposed an algorithm, DeepFool, to compute adversarial examples that fool state-of-the-art classifiers. It is based on an iterative linearization of the classifier to generate minimal perturbations that are sufficient to change classification labels. We provided extensive experimental evidence on three datasets and eight classifiers, showing the superiority of the proposed method over state-of-the-art methods to compute adversarial perturbations, as well as the efficiency of the proposed approach. Due to its accurate estimation of the adversarial perturbations, the proposed DeepFool algorithm provides an efficient and accurate way to evaluate the robustness of classifiers and to enhance their performance by proper fine-tuning. The proposed approach can therefore be used as a reliable tool to accurately estimate the minimal perturbation vectors, and build more robust classifiers. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
},
{
"id": "1511.04599_all_26",
"text": " This work has been partly supported by the Hasler Foundation, Switzerland, in the framework of the CORA project. ",
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
}
] |
What’s the effect of the gradient of the lower bound w.r.t. φ on the naïve Monte Carlo estimator?
|
The gradient of the lower bound w.r.t. \boldsymbol{\phi} [10] is a bit problematic [13]. The usual (naïve) Monte Carlo gradient estimator for this type of problem is impractical for our purposes [2], because this gradient estimator exhibits very high variance [21]. Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound [27].
|
[
10,
13,
2,
21,
27
] |
[
{
"id": "1312.6114_all_0",
"text": " How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_1",
"text": " For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational auto-encoder. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_2",
"text": " The strategy in this section can be used to derive a lower bound estimator (a stochastic objective function) for a variety of directed graphical models with continuous latent variables. We will restrict ourselves here to the common case where we have an i.i.d. dataset with latent variables per datapoint, and where we like to perform maximum likelihood (ML) or maximum a posteriori (MAP) inference on the (global) parameters, and variational inference on the latent variables. It is, for example, straightforward to extend this scenario to the case where we also perform variational inference on the global parameters; that algorithm is put in the appendix, but experiments with that case are left to future work. Note that our method can be applied to online, non-stationary settings, e.g. streaming data, but here we assume a fixed dataset for simplicity. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_3",
"text": " Let us consider some dataset 𝐗={𝐱(i)}i=1N𝐗superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑁\\mathbf{X}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{N} consisting of N𝑁N i.i.d. samples of some continuous or discrete variable 𝐱𝐱\\mathbf{x}. We assume that the data are generated by some random process, involving an unobserved continuous random variable 𝐳𝐳\\mathbf{z}. The process consists of two steps: (1) a value 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} is generated from some prior distribution p𝜽∗(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}); (2) a value 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is generated from some conditional distribution p𝜽∗(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}). We assume that the prior p𝜽∗(𝐳)subscript𝑝superscript𝜽𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{z}) and likelihood p𝜽∗(𝐱|𝐳)subscript𝑝superscript𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}^{*}}(\\mathbf{x}|\\mathbf{z}) come from parametric families of distributions p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) and p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), and that their PDFs are differentiable almost everywhere w.r.t. both 𝜽𝜽\\boldsymbol{\\theta} and 𝐳𝐳\\mathbf{z}. Unfortunately, a lot of this process is hidden from our view: the true parameters 𝜽∗superscript𝜽\\boldsymbol{\\theta}^{*} as well as the values of the latent variables 𝐳(i)superscript𝐳𝑖\\mathbf{z}^{(i)} are unknown to us. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_4",
"text": " Very importantly, we do not make the common simplifying assumptions about the marginal or posterior probabilities. Conversely, we are here interested in a general algorithm that even works efficiently in the case of: 1. Intractability: the case where the integral of the marginal likelihood p𝜽(𝐱)=∫p𝜽(𝐳)p𝜽(𝐱|𝐳)𝑑𝐳subscript𝑝𝜽𝐱subscript𝑝𝜽𝐳subscript𝑝𝜽conditional𝐱𝐳differential-d𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x})=\\int p_{\\boldsymbol{\\theta}}(\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})\\,d\\mathbf{z} is intractable (so we cannot evaluate or differentiate the marginal likelihood), where the true posterior density p𝜽(𝐳|𝐱)=p𝜽(𝐱|𝐳)p𝜽(𝐳)/p𝜽(𝐱)subscript𝑝𝜽conditional𝐳𝐱subscript𝑝𝜽conditional𝐱𝐳subscript𝑝𝜽𝐳subscript𝑝𝜽𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x})=p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z})p_{\\boldsymbol{\\theta}}(\\mathbf{z})/p_{\\boldsymbol{\\theta}}(\\mathbf{x}) is intractable (so the EM algorithm cannot be used), and where the required integrals for any reasonable mean-field VB algorithm are also intractable. These intractabilities are quite common and appear in cases of moderately complicated likelihood functions p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}), e.g. a neural network with a nonlinear hidden layer. 2. A large dataset: we have so much data that batch optimization is too costly; we would like to make parameter updates using small minibatches or even single datapoints. Sampling-based solutions, e.g. Monte Carlo EM, would in general be too slow, since it involves a typically expensive sampling loop per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_5",
"text": " We are interested in, and propose a solution to, three related problems in the above scenario: 1. Efficient approximate ML or MAP estimation for the parameters 𝜽𝜽\\boldsymbol{\\theta}. The parameters can be of interest themselves, e.g. if we are analyzing some natural process. They also allow us to mimic the hidden random process and generate artificial data that resembles the real data. 2. Efficient approximate posterior inference of the latent variable 𝐳𝐳\\mathbf{z} given an observed value 𝐱𝐱\\mathbf{x} for a choice of parameters 𝜽𝜽\\boldsymbol{\\theta}. This is useful for coding or data representation tasks. 3. Efficient approximate marginal inference of the variable 𝐱𝐱\\mathbf{x}. This allows us to perform all kinds of inference tasks where a prior over 𝐱𝐱\\mathbf{x} is required. Common applications in computer vision include image denoising, inpainting and super-resolution. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_6",
"text": " For the purpose of solving the above problems, let us introduce a recognition model qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}): an approximation to the intractable true posterior p𝜽(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}). Note that in contrast with the approximate posterior in mean-field variational inference, it is not necessarily factorial and its parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} are not computed from some closed-form expectation. Instead, we’ll introduce a method for learning the recognition model parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} jointly with the generative model parameters 𝜽𝜽\\boldsymbol{\\theta}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_7",
"text": " From a coding theory perspective, the unobserved variables 𝐳𝐳\\mathbf{z} have an interpretation as a latent representation or code. In this paper we will therefore also refer to the recognition model qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as a probabilistic encoder, since given a datapoint 𝐱𝐱\\mathbf{x} it produces a distribution (e.g. a Gaussian) over the possible values of the code 𝐳𝐳\\mathbf{z} from which the datapoint 𝐱𝐱\\mathbf{x} could have been generated. In a similar vein we will refer to p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) as a probabilistic decoder, since given a code 𝐳𝐳\\mathbf{z} it produces a distribution over the possible corresponding values of 𝐱𝐱\\mathbf{x}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_8",
"text": " The marginal likelihood is composed of a sum over the marginal likelihoods of individual datapoints logp𝜽(𝐱(1),⋯,𝐱(N))=∑i=1Nlogp𝜽(𝐱(i))subscript𝑝𝜽superscript𝐱1⋯superscript𝐱𝑁superscriptsubscript𝑖1𝑁subscript𝑝𝜽superscript𝐱𝑖\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(1)},\\cdots,\\mathbf{x}^{(N)})=\\sum_{i=1}^{N}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}), which can each be rewritten as: logp𝜽(𝐱(i))=DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳|𝐱(i)))+ℒ(𝜽,ϕ;𝐱(i))\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})=D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}^{(i)}))+\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (1) The first RHS term is the KL divergence of the approximate from the true posterior. Since this KL-divergence is non-negative, the second RHS term ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) is called the (variational) lower bound on the marginal likelihood of datapoint i𝑖i, and can be written as: logp𝜽(𝐱(i))≥ℒ(𝜽,ϕ;𝐱(i))subscript𝑝𝜽superscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)})\\geq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =𝔼qϕ(𝐳|𝐱)(−logqϕ(𝐳|𝐱)+logp𝜽(𝐱,𝐳))absentsubscript𝔼subscript𝑞bold-italic-ϕconditional𝐳𝐱delimited-()subscript𝑞bold-italic-ϕconditional𝐳𝐱subscript𝑝𝜽𝐱𝐳\\displaystyle=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})}\\left(-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})+\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})\\right) (2) which can also be written as: ℒ(𝜽,ϕ;𝐱(i))=−DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+𝔼qϕ(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) (3) We want to differentiate and optimize the lower bound ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) w.r.t. both the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and generative parameters 𝜽𝜽\\boldsymbol{\\theta}. However, the gradient of the lower bound w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi} is a bit problematic. The usual (naïve) Monte Carlo gradient estimator for this type of problem is: ∇ϕ𝔼qϕ(𝐳)(f(𝐳))=𝔼qϕ(𝐳)(f(𝐳)∇qϕ(𝐳)logqϕ(𝐳))≃1L∑l=1Lf(𝐳)∇qϕ(𝐳(l))logqϕ(𝐳(l))subscript∇bold-italic-ϕsubscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript𝔼subscript𝑞bold-italic-ϕ𝐳delimited-()𝑓𝐳subscript∇subscript𝑞bold-italic-ϕ𝐳subscript𝑞bold-italic-ϕ𝐳similar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝐳subscript∇subscript𝑞bold-italic-ϕsuperscript𝐳𝑙subscript𝑞bold-italic-ϕsuperscript𝐳𝑙\\nabla_{\\boldsymbol{\\phi}}\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\left(f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z})\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mathbf{z})\\nabla_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)})}\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(l)}) where 𝐳(l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). 
This gradient estimator exhibits exhibits very high variance (see e.g. (BJP12)) and is impractical for our purposes. ",
"title": "Auto-Encoding Variational Bayes"
},
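To see why the naïve (score-function) estimator is problematic, the following sketch estimates the gradient of $E_{z\sim N(\mu,1)}[z^2]$ w.r.t. $\mu$ using $f(z)\nabla_\mu \log q(z) = f(z)(z-\mu)$; the choice $f(z)=z^2$ and the Gaussian $q$ are illustrative assumptions, not taken from the paper, and the per-sample spread shows the high variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(mu, f, L=1000):
    """Naive Monte Carlo gradient of E_{z ~ N(mu,1)}[f(z)] w.r.t. mu,
    using f(z) * d/dmu log q(z) = f(z) * (z - mu)."""
    z = rng.normal(mu, 1.0, size=L)
    samples = f(z) * (z - mu)
    return samples.mean(), samples.std()

mu = 1.5
mean, std = score_function_grad(mu, lambda z: z ** 2)
print("estimate:", mean, "analytic:", 2 * mu, "per-sample std:", std)
```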
{
"id": "1312.6114_all_9",
"text": " In this section we introduce a practical estimator of the lower bound and its derivatives w.r.t. the parameters. We assume an approximate posterior in the form qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), but please note that the technique can be applied to the case qϕ(𝐳)subscript𝑞bold-italic-ϕ𝐳q_{\\boldsymbol{\\phi}}(\\mathbf{z}), i.e. where we do not condition on 𝐱𝐱\\mathbf{x}, as well. The fully variational Bayesian method for inferring a posterior over the parameters is given in the appendix. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_10",
"text": " Under certain mild conditions outlined in section 2.4 for a chosen approximate posterior qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) we can reparameterize the random variable 𝐳~∼qϕ(𝐳|𝐱)similar-to~𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\widetilde{\\mathbf{z}}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) using a differentiable transformation gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) of an (auxiliary) noise variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}: 𝐳~=gϕ(ϵ,𝐱) with ϵ∼p(ϵ)~𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱 with bold-italic-ϵsimilar-to𝑝bold-italic-ϵ\\displaystyle\\widetilde{\\mathbf{z}}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x})\\text{\\quad with \\quad}\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon}) (4) See section 2.4 for general strategies for chosing such an approriate distribution p(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}) and function gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}). We can now form Monte Carlo estimates of expectations of some function f(𝐳)𝑓𝐳f(\\mathbf{z}) w.r.t. qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) as follows: 𝔼qϕ(𝐳|𝐱(i))(f(𝐳))=𝔼p(ϵ)(f(gϕ(ϵ,𝐱(i))))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()𝑓𝐳subscript𝔼𝑝bold-italic-ϵdelimited-()𝑓subscript𝑔bold-italic-ϕbold-italic-ϵsuperscript𝐱𝑖\\displaystyle\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(f(\\mathbf{z})\\right)=\\mathbb{E}_{p(\\boldsymbol{\\epsilon})}\\left(f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}^{(i)}))\\right) ≃1L∑l=1Lf(gϕ(ϵ(l),𝐱(i))) where ϵ(l)∼p(ϵ)similar-to-or-equalsabsent1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖 where superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle\\simeq\\frac{1}{L}\\sum_{l=1}^{L}{f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}))}\\text{\\quad where \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (5) We apply this technique to the variational lower bound (eq. 
(2)), yielding our generic Stochastic Gradient Variational Bayes (SGVB) estimator ℒ~A(𝜽,ϕ;𝐱(i))≃ℒ(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}): ℒ~A(𝜽,ϕ;𝐱(i))superscript~ℒ𝐴𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{A}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =1L∑l=1Llogp𝜽(𝐱(i),𝐳(i,l))−logqϕ(𝐳(i,l)|𝐱(i))absent1𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽superscript𝐱𝑖superscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditionalsuperscript𝐳𝑖𝑙superscript𝐱𝑖\\displaystyle=\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)},\\mathbf{z}^{(i,l)})-\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}^{(i,l)}|\\mathbf{x}^{(i)}) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ(ϵ(i,l),𝐱(i)) and ϵ(l)∼p(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (6) Often, the KL-divergence DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z})) of eq. (3) can be integrated analytically (see appendix B), such that only the expected reconstruction error 𝔼qϕ(𝐳|𝐱(i))(logp𝜽(𝐱(i)|𝐳))subscript𝔼subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖delimited-()subscript𝑝𝜽conditionalsuperscript𝐱𝑖𝐳\\mathbb{E}_{q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})}\\left(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z})\\right) requires estimation by sampling. The KL-divergence term can then be interpreted as regularizing ϕbold-italic-ϕ\\boldsymbol{\\phi}, encouraging the approximate posterior to be close to the prior p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}). This yields a second version of the SGVB estimator ℒ~B(𝜽,ϕ;𝐱(i))≃ℒ(𝜽,ϕ;𝐱(i))similar-to-or-equalssuperscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)})\\simeq\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}), corresponding to eq. 
(3), which typically has less variance than the generic estimator: ℒ~B(𝜽,ϕ;𝐱(i))superscript~ℒ𝐵𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\widetilde{\\mathcal{L}}^{B}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) =−DKL(qϕ(𝐳|𝐱(i))||p𝜽(𝐳))+1L∑l=1L(logp𝜽(𝐱(i)|𝐳(i,l)))\\displaystyle=-D_{KL}(q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)})||p_{\\boldsymbol{\\theta}}(\\mathbf{z}))+\\frac{1}{L}\\sum_{l=1}^{L}(\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)})) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where \\quad}\\mathbf{z}^{(i,l)} =gϕ(ϵ(i,l),𝐱(i)) and ϵ(l)∼p(ϵ)absentsubscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑖𝑙superscript𝐱𝑖 and superscriptbold-italic-ϵ𝑙similar-to𝑝bold-italic-ϵ\\displaystyle=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(i,l)},\\mathbf{x}^{(i)})\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}) (7) Given multiple datapoints from a dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints, we can construct an estimator of the marginal likelihood lower bound of the full dataset, based on minibatches: ℒ(𝜽,ϕ;𝐗)≃ℒ~M(𝜽,ϕ;𝐗M)=NM∑i=1Mℒ~(𝜽,ϕ;𝐱(i))similar-to-or-equalsℒ𝜽bold-italic-ϕ𝐗superscript~ℒ𝑀𝜽bold-italic-ϕsuperscript𝐗𝑀𝑁𝑀superscriptsubscript𝑖1𝑀~ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X})\\simeq\\widetilde{\\mathcal{L}}^{M}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}^{M})=\\frac{N}{M}\\sum_{i=1}^{M}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) (8) where the minibatch 𝐗M={𝐱(i)}i=1Msuperscript𝐗𝑀superscriptsubscriptsuperscript𝐱𝑖𝑖1𝑀\\mathbf{X}^{M}=\\{\\mathbf{x}^{(i)}\\}_{i=1}^{M} is a randomly drawn sample of M𝑀M datapoints from the full dataset 𝐗𝐗\\mathbf{X} with N𝑁N datapoints. In our experiments we found that the number of samples L𝐿L per datapoint can be set to 111 as long as the minibatch size M𝑀M was large enough, e.g. M=100𝑀100M=100. Derivatives ∇𝜽,ϕℒ~(𝜽;𝐗M)subscript∇𝜽bold-italic-ϕ~ℒ𝜽superscript𝐗𝑀\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\widetilde{\\mathcal{L}}(\\boldsymbol{\\theta};\\mathbf{X}^{M}) can be taken, and the resulting gradients can be used in conjunction with stochastic optimization methods such as SGD or Adagrad (DHS10). See algorithm 1 for a basic approach to compute the stochastic gradients. ",
"title": "Auto-Encoding Variational Bayes"
},
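The estimator of Eq. (7) can be sketched in a few lines once the encoder outputs and a decoder log-likelihood are given. In the snippet below, `mu`, `sigma`, the toy datapoint, and the Gaussian decoder `log_p` are all hypothetical stand-ins for the outputs of real networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgvb_estimate(x, mu, sigma, log_p_x_given_z, L=1):
    """Sketch of the SGVB estimator of Eq. (7) for one datapoint, with the Gaussian
    KL(q(z|x) || N(0, I)) integrated analytically and z reparameterized as mu + sigma * eps."""
    neg_kl = 0.5 * np.sum(1.0 + np.log(sigma ** 2) - mu ** 2 - sigma ** 2)
    recon = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps
        recon += log_p_x_given_z(x, z)
    return neg_kl + recon / L

# Hypothetical encoder outputs and a toy Gaussian decoder p(x|z) = N(x; z, I).
x = np.array([0.5, -1.0])
mu, sigma = np.array([0.2, 0.1]), np.array([0.8, 1.1])
log_p = lambda x, z: -0.5 * np.sum((x - z) ** 2 + np.log(2 * np.pi))
print("lower-bound estimate:", sgvb_estimate(x, mu, sigma, log_p, L=5))
```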
{
"id": "1312.6114_all_11",
"text": " A connection with auto-encoders becomes clear when looking at the objective function given at eq. (7). The first term is (the KL divergence of the approximate posterior from the prior) acts as a regularizer, while the second term is a an expected negative reconstruction error. The function gϕ(.)g_{\\boldsymbol{\\phi}}(.) is chosen such that it maps a datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and a random noise vector ϵ(l)superscriptbold-italic-ϵ𝑙\\boldsymbol{\\epsilon}^{(l)} to a sample from the approximate posterior for that datapoint: 𝐳(i,l)=gϕ(ϵ(l),𝐱(i))superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscriptbold-italic-ϵ𝑙superscript𝐱𝑖\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon}^{(l)},\\mathbf{x}^{(i)}) where 𝐳(i,l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}). Subsequently, the sample 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)} is then input to function logp𝜽(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}), which equals the probability density (or mass) of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} under the generative model, given 𝐳(i,l)superscript𝐳𝑖𝑙\\mathbf{z}^{(i,l)}. This term is a negative reconstruction error in auto-encoder parlance. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_12",
"text": " In order to solve our problem we invoked an alternative method for generating samples from qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). The essential parameterization trick is quite simple. Let 𝐳𝐳\\mathbf{z} be a continuous random variable, and 𝐳∼qϕ(𝐳|𝐱)similar-to𝐳subscript𝑞bold-italic-ϕconditional𝐳𝐱\\mathbf{z}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) be some conditional distribution. It is then often possible to express the random variable 𝐳𝐳\\mathbf{z} as a deterministic variable 𝐳=gϕ(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}), where ϵbold-italic-ϵ\\boldsymbol{\\epsilon} is an auxiliary variable with independent marginal p(ϵ)𝑝bold-italic-ϵp(\\boldsymbol{\\epsilon}), and gϕ(.)g_{\\boldsymbol{\\phi}}(.) is some vector-valued function parameterized by ϕbold-italic-ϕ\\boldsymbol{\\phi}. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_13",
"text": " This reparameterization is useful for our case since it can be used to rewrite an expectation w.r.t qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) such that the Monte Carlo estimate of the expectation is differentiable w.r.t. ϕbold-italic-ϕ\\boldsymbol{\\phi}. A proof is as follows. Given the deterministic mapping 𝐳=gϕ(ϵ,𝐱)𝐳subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱\\mathbf{z}=g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) we know that qϕ(𝐳|𝐱)∏idzi=p(ϵ)∏idϵisubscript𝑞bold-italic-ϕconditional𝐳𝐱subscriptproduct𝑖𝑑subscript𝑧𝑖𝑝bold-italic-ϵsubscriptproduct𝑖𝑑subscriptitalic-ϵ𝑖q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})\\prod_{i}dz_{i}=p(\\boldsymbol{\\epsilon})\\prod_{i}d\\epsilon_{i}. Therefore111Note that for infinitesimals we use the notational convention d𝐳=∏idzi𝑑𝐳subscriptproduct𝑖𝑑subscript𝑧𝑖d\\mathbf{z}=\\prod_{i}dz_{i}, ∫qϕ(𝐳|𝐱)f(𝐳)𝑑𝐳=∫p(ϵ)f(𝐳)𝑑ϵ=∫p(ϵ)f(gϕ(ϵ,𝐱))𝑑ϵsubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳𝑝bold-italic-ϵ𝑓𝐳differential-dbold-italic-ϵ𝑝bold-italic-ϵ𝑓subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱differential-dbold-italic-ϵ\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}=\\int p(\\boldsymbol{\\epsilon})f(\\mathbf{z})\\,d\\boldsymbol{\\epsilon}=\\int p(\\boldsymbol{\\epsilon})f(g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}))\\,d\\boldsymbol{\\epsilon}. It follows that a differentiable estimator can be constructed: ∫qϕ(𝐳|𝐱)f(𝐳)𝑑𝐳≃1L∑l=1Lf(gϕ(𝐱,ϵ(l)))similar-to-or-equalssubscript𝑞bold-italic-ϕconditional𝐳𝐱𝑓𝐳differential-d𝐳1𝐿superscriptsubscript𝑙1𝐿𝑓subscript𝑔bold-italic-ϕ𝐱superscriptbold-italic-ϵ𝑙\\int q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x})f(\\mathbf{z})\\,d\\mathbf{z}\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(g_{\\boldsymbol{\\phi}}(\\mathbf{x},\\boldsymbol{\\epsilon}^{(l)})) where ϵ(l)∼p(ϵ)similar-tosuperscriptbold-italic-ϵ𝑙𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}^{(l)}\\sim p(\\boldsymbol{\\epsilon}). In section 2.3 we applied this trick to obtain a differentiable estimator of the variational lower bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_14",
"text": " Take, for example, the univariate Gaussian case: let z∼p(z|x)=𝒩(μ,σ2)similar-to𝑧𝑝conditional𝑧𝑥𝒩𝜇superscript𝜎2z\\sim p(z|x)=\\mathcal{N}(\\mu,\\sigma^{2}). In this case, a valid reparameterization is z=μ+σϵ𝑧𝜇𝜎italic-ϵz=\\mu+\\sigma\\epsilon, where ϵitalic-ϵ\\epsilon is an auxiliary noise variable ϵ∼𝒩(0,1)similar-toitalic-ϵ𝒩01\\epsilon\\sim\\mathcal{N}(0,1). Therefore, 𝔼𝒩(z;μ,σ2)(f(z))=𝔼𝒩(ϵ;0,1)(f(μ+σϵ))≃1L∑l=1Lf(μ+σϵ(l))subscript𝔼𝒩𝑧𝜇superscript𝜎2delimited-()𝑓𝑧subscript𝔼𝒩italic-ϵ01delimited-()𝑓𝜇𝜎italic-ϵsimilar-to-or-equals1𝐿superscriptsubscript𝑙1𝐿𝑓𝜇𝜎superscriptitalic-ϵ𝑙\\mathbb{E}_{\\mathcal{N}(z;\\mu,\\sigma^{2})}\\left(f(z)\\right)=\\mathbb{E}_{\\mathcal{N}(\\epsilon;0,1)}\\left(f(\\mu+\\sigma\\epsilon)\\right)\\simeq\\frac{1}{L}\\sum_{l=1}^{L}f(\\mu+\\sigma\\epsilon^{(l)}) where ϵ(l)∼𝒩(0,1)similar-tosuperscriptitalic-ϵ𝑙𝒩01\\epsilon^{(l)}\\sim\\mathcal{N}(0,1). ",
"title": "Auto-Encoding Variational Bayes"
},
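For comparison with the score-function sketch shown earlier, the reparameterized estimator for the same toy problem ($f(z)=z^2$, $z\sim\mathcal{N}(\mu,\sigma^2)$, both illustrative assumptions) differentiates through $z=\mu+\sigma\epsilon$ and typically shows a much smaller per-sample spread.

```python
import numpy as np

rng = np.random.default_rng(2)

def reparam_grad(mu, sigma, L=1000):
    """Gradient of E_{z ~ N(mu, sigma^2)}[z^2] w.r.t. mu via z = mu + sigma * eps,
    so d f(z) / d mu = 2 z for f(z) = z^2."""
    eps = rng.standard_normal(L)
    z = mu + sigma * eps
    samples = 2.0 * z
    return samples.mean(), samples.std()

mean, std = reparam_grad(mu=1.5, sigma=1.0)
print("estimate:", mean, "analytic:", 2 * 1.5, "per-sample std:", std)
```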
{
"id": "1312.6114_all_15",
"text": " For which qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) can we choose such a differentiable transformation gϕ(.)g_{\\boldsymbol{\\phi}}(.) and auxiliary variable ϵ∼p(ϵ)similar-tobold-italic-ϵ𝑝bold-italic-ϵ\\boldsymbol{\\epsilon}\\sim p(\\boldsymbol{\\epsilon})? Three basic approaches are: 1. Tractable inverse CDF. In this case, let ϵ∼𝒰(𝟎,𝐈)similar-tobold-italic-ϵ𝒰0𝐈\\boldsymbol{\\epsilon}\\sim\\mathcal{U}(\\mathbf{0},\\mathbf{I}), and let gϕ(ϵ,𝐱)subscript𝑔bold-italic-ϕbold-italic-ϵ𝐱g_{\\boldsymbol{\\phi}}(\\boldsymbol{\\epsilon},\\mathbf{x}) be the inverse CDF of qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}). Examples: Exponential, Cauchy, Logistic, Rayleigh, Pareto, Weibull, Reciprocal, Gompertz, Gumbel and Erlang distributions. 2. Analogous to the Gaussian example, for any ”location-scale” family of distributions we can choose the standard distribution (with location=0location0\\text{location}=0, scale=1scale1\\text{scale}=1) as the auxiliary variable ϵbold-italic-ϵ\\boldsymbol{\\epsilon}, and let g(.)=location+scale⋅ϵg(.)=\\text{location}+\\text{scale}\\cdot\\boldsymbol{\\epsilon}. Examples: Laplace, Elliptical, Student’s t, Logistic, Uniform, Triangular and Gaussian distributions. 3. Composition: It is often possible to express random variables as different transformations of auxiliary variables. Examples: Log-Normal (exponentiation of normally distributed variable), Gamma (a sum over exponentially distributed variables), Dirichlet (weighted sum of Gamma variates), Beta, Chi-Squared, and F distributions. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_16",
"text": " When all three approaches fail, good approximations to the inverse CDF exist requiring computations with time complexity comparable to the PDF (see e.g. (Dev86) for some methods). ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_17",
"text": " In this section we’ll give an example where we use a neural network for the probabilistic encoder qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) (the approximation to the posterior of the generative model p𝜽(𝐱,𝐳)subscript𝑝𝜽𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x},\\mathbf{z})) and where the parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} and 𝜽𝜽\\boldsymbol{\\theta} are optimized jointly with the AEVB algorithm. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_18",
"text": " Let the prior over the latent variables be the centered isotropic multivariate Gaussian p𝜽(𝐳)=𝒩(𝐳;𝟎,𝐈)subscript𝑝𝜽𝐳𝒩𝐳0𝐈p_{\\boldsymbol{\\theta}}(\\mathbf{z})=\\mathcal{N}(\\mathbf{z};\\mathbf{0},\\mathbf{I}). Note that in this case, the prior lacks parameters. We let p𝜽(𝐱|𝐳)subscript𝑝𝜽conditional𝐱𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{x}|\\mathbf{z}) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from 𝐳𝐳\\mathbf{z} with a MLP (a fully-connected neural network with a single hidden layer, see appendix C). Note the true posterior p𝜽(𝐳|𝐱)subscript𝑝𝜽conditional𝐳𝐱p_{\\boldsymbol{\\theta}}(\\mathbf{z}|\\mathbf{x}) is in this case intractable. While there is much freedom in the form qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}), we’ll assume the true (but intractable) posterior takes on a approximate Gaussian form with an approximately diagonal covariance. In this case, we can let the variational approximate posterior be a multivariate Gaussian with a diagonal covariance structure222Note that this is just a (simplifying) choice, and not a limitation of our method.: logqϕ(𝐳|𝐱(i))subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\displaystyle\\log q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) =log𝒩(𝐳;𝝁(i),𝝈2(i)𝐈)absent𝒩𝐳superscript𝝁𝑖superscript𝝈2𝑖𝐈\\displaystyle=\\log\\mathcal{N}(\\mathbf{z};\\boldsymbol{\\mu}^{(i)},\\boldsymbol{\\sigma}^{2(i)}\\mathbf{I}) (9) where the mean and s.d. of the approximate posterior, 𝝁(i)superscript𝝁𝑖\\boldsymbol{\\mu}^{(i)} and 𝝈(i)superscript𝝈𝑖\\boldsymbol{\\sigma}^{(i)}, are outputs of the encoding MLP, i.e. nonlinear functions of datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} and the variational parameters ϕbold-italic-ϕ\\boldsymbol{\\phi} (see appendix C). ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_19",
"text": " As explained in section 2.4, we sample from the posterior 𝐳(i,l)∼qϕ(𝐳|𝐱(i))similar-tosuperscript𝐳𝑖𝑙subscript𝑞bold-italic-ϕconditional𝐳superscript𝐱𝑖\\mathbf{z}^{(i,l)}\\sim q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}^{(i)}) using 𝐳(i,l)=gϕ(𝐱(i),ϵ(l))=𝝁(i)+𝝈(i)⊙ϵ(l)superscript𝐳𝑖𝑙subscript𝑔bold-italic-ϕsuperscript𝐱𝑖superscriptbold-italic-ϵ𝑙superscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙\\mathbf{z}^{(i,l)}=g_{\\boldsymbol{\\phi}}(\\mathbf{x}^{(i)},\\boldsymbol{\\epsilon}^{(l)})=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)} where ϵ(l)∼𝒩(𝟎,𝐈)similar-tosuperscriptbold-italic-ϵ𝑙𝒩0𝐈\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}). With ⊙direct-product\\odot we signify an element-wise product. In this model both p𝜽(𝐳)subscript𝑝𝜽𝐳p_{\\boldsymbol{\\theta}}(\\mathbf{z}) (the prior) and qϕ(𝐳|𝐱)subscript𝑞bold-italic-ϕconditional𝐳𝐱q_{\\boldsymbol{\\phi}}(\\mathbf{z}|\\mathbf{x}) are Gaussian; in this case, we can use the estimator of eq. (7) where the KL divergence can be computed and differentiated without estimation (see appendix B). The resulting estimator for this model and datapoint 𝐱(i)superscript𝐱𝑖\\mathbf{x}^{(i)} is: ℒ(𝜽,ϕ;𝐱(i))ℒ𝜽bold-italic-ϕsuperscript𝐱𝑖\\displaystyle\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{x}^{(i)}) ≃12∑j=1J(1+log((σj(i))2)−(μj(i))2−(σj(i))2)+1L∑l=1Llogp𝜽(𝐱(i)|𝐳(i,l))similar-to-or-equalsabsent12superscriptsubscript𝑗1𝐽1superscriptsuperscriptsubscript𝜎𝑗𝑖2superscriptsuperscriptsubscript𝜇𝑗𝑖2superscriptsuperscriptsubscript𝜎𝑗𝑖21𝐿superscriptsubscript𝑙1𝐿subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\displaystyle\\simeq\\frac{1}{2}\\sum_{j=1}^{J}\\left(1+\\log((\\sigma_{j}^{(i)})^{2})-(\\mu_{j}^{(i)})^{2}-(\\sigma_{j}^{(i)})^{2}\\right)+\\frac{1}{L}\\sum_{l=1}^{L}\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) where 𝐳(i,l)where superscript𝐳𝑖𝑙\\displaystyle\\text{where\\quad}\\mathbf{z}^{(i,l)} =𝝁(i)+𝝈(i)⊙ϵ(l) and ϵ(l)∼𝒩(0,𝐈)absentsuperscript𝝁𝑖direct-productsuperscript𝝈𝑖superscriptbold-italic-ϵ𝑙 and superscriptbold-italic-ϵ𝑙similar-to𝒩0𝐈\\displaystyle=\\boldsymbol{\\mu}^{(i)}+\\boldsymbol{\\sigma}^{(i)}\\odot\\boldsymbol{\\epsilon}^{(l)}\\text{\\quad and \\quad}\\boldsymbol{\\epsilon}^{(l)}\\sim\\mathcal{N}(0,\\mathbf{I}) (10) As explained above and in appendix C, the decoding term logp𝜽(𝐱(i)|𝐳(i,l))subscript𝑝𝜽conditionalsuperscript𝐱𝑖superscript𝐳𝑖𝑙\\log p_{\\boldsymbol{\\theta}}(\\mathbf{x}^{(i)}|\\mathbf{z}^{(i,l)}) is a Bernoulli or Gaussian MLP, depending on the type of data we are modelling. ",
"title": "Auto-Encoding Variational Bayes"
},
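A minimal NumPy sketch (an illustrative reading of eq. (10), not the authors' code) of the single-datapoint estimator for a diagonal-Gaussian encoder and a Bernoulli decoder: the analytic KL term plus a Monte Carlo reconstruction term drawn with the reparameterization trick. The stand-in `decode` function and all shapes are assumptions for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_sigma2, decode, L=1):
    """Single-datapoint estimate of eq. (10).

    x          : binary data vector            (D,)
    mu         : encoder mean                  (J,)
    log_sigma2 : encoder log-variance          (J,)
    decode     : maps z -> Bernoulli means over x (stand-in for the decoder MLP)
    """
    sigma = np.exp(0.5 * log_sigma2)
    # Analytic -KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior.
    neg_kl = 0.5 * np.sum(1.0 + log_sigma2 - mu**2 - np.exp(log_sigma2))
    # Monte Carlo reconstruction term with the reparameterization trick.
    rec = 0.0
    for _ in range(L):
        eps = rng.standard_normal(mu.shape)
        z = mu + sigma * eps                     # z = mu + sigma (element-wise) eps
        p = decode(z)                            # Bernoulli means
        rec += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return neg_kl + rec / L

# Toy usage with a fixed random "decoder" (a sigmoid layer, purely illustrative).
J, D = 3, 8
W = rng.standard_normal((D, J))
decode = lambda z: 1.0 / (1.0 + np.exp(-W @ z))
x = rng.integers(0, 2, size=D).astype(float)
print(elbo_estimate(x, mu=np.zeros(J), log_sigma2=np.zeros(J), decode=decode))
```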
{
"id": "1312.6114_all_20",
"text": " The wake-sleep algorithm (HDFN95) is, to the best of our knowledge, the only other on-line learning method in the literature that is applicable to the same general class of continuous latent variable models. Like our method, the wake-sleep algorithm employs a recognition model that approximates the true posterior. A drawback of the wake-sleep algorithm is that it requires a concurrent optimization of two objective functions, which together do not correspond to optimization of (a bound of) the marginal likelihood. An advantage of wake-sleep is that it also applies to models with discrete latent variables. Wake-Sleep has the same computational complexity as AEVB per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_21",
"text": " Stochastic variational inference (HBWP13) has recently received increasing interest. Recently, (BJP12) introduced a control variate schemes to reduce the high variance of the naïve gradient estimator discussed in section 2.1, and applied to exponential family approximations of the posterior. In (RGB13) some general methods, i.e. a control variate scheme, were introduced for reducing the variance of the original gradient estimator. In (SK13), a similar reparameterization as in this paper was used in an efficient version of a stochastic variational inference algorithm for learning the natural parameters of exponential-family approximating distributions. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_22",
"text": " The AEVB algorithm exposes a connection between directed probabilistic models (trained with a variational objective) and auto-encoders. A connection between linear auto-encoders and a certain class of generative linear-Gaussian models has long been known. In (Row98) it was shown that PCA corresponds to the maximum-likelihood (ML) solution of a special case of the linear-Gaussian model with a prior p(𝐳)=𝒩(0,𝐈)𝑝𝐳𝒩0𝐈p(\\mathbf{z})=\\mathcal{N}(0,\\mathbf{I}) and a conditional distribution p(𝐱|𝐳)=𝒩(𝐱;𝐖𝐳,ϵ𝐈)𝑝conditional𝐱𝐳𝒩𝐱𝐖𝐳italic-ϵ𝐈p(\\mathbf{x}|\\mathbf{z})=\\mathcal{N}(\\mathbf{x};\\mathbf{W}\\mathbf{z},\\epsilon\\mathbf{I}), specifically the case with infinitesimally small ϵitalic-ϵ\\epsilon. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_23",
"text": " In relevant recent work on autoencoders (VLL+10) it was shown that the training criterion of unregularized autoencoders corresponds to maximization of a lower bound (see the infomax principle (Lin89)) of the mutual information between input X𝑋X and latent representation Z𝑍Z. Maximizing (w.r.t. parameters) of the mutual information is equivalent to maximizing the conditional entropy, which is lower bounded by the expected loglikelihood of the data under the autoencoding model (VLL+10), i.e. the negative reconstrution error. However, it is well known that this reconstruction criterion is in itself not sufficient for learning useful representations (BCV13). Regularization techniques have been proposed to make autoencoders learn useful representations, such as denoising, contractive and sparse autoencoder variants (BCV13). The SGVB objective contains a regularization term dictated by the variational bound (e.g. eq. (10)), lacking the usual nuisance regularization hyperparameter required to learn useful representations. Related are also encoder-decoder architectures such as the predictive sparse decomposition (PSD) (KRL08), from which we drew some inspiration. Also relevant are the recently introduced Generative Stochastic Networks (BTL13) where noisy auto-encoders learn the transition operator of a Markov chain that samples from the data distribution. In (SL10) a recognition model was employed for efficient learning with Deep Boltzmann Machines. These methods are targeted at either unnormalized models (i.e. undirected models like Boltzmann machines) or limited to sparse coding models, in contrast to our proposed algorithm for learning a general class of directed probabilistic models. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_24",
"text": " The recently proposed DARN method (GMW13), also learns a directed probabilistic model using an auto-encoding structure, however their method applies to binary latent variables. Even more recently, (RMW14) also make the connection between auto-encoders, directed proabilistic models and stochastic variational inference using the reparameterization trick we describe in this paper. Their work was developed independently of ours and provides an additional perspective on AEVB. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_25",
"text": " We trained generative models of images from the MNIST and Frey Face datasets333Available at http://www.cs.nyu.edu/~roweis/data.html and compared learning algorithms in terms of the variational lower bound, and the estimated marginal likelihood. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_26",
"text": " The generative model (encoder) and variational approximation (decoder) from section 3 were used, where the described encoder and decoder have an equal number of hidden units. Since the Frey Face data are continuous, we used a decoder with Gaussian outputs, identical to the encoder, except that the means were constrained to the interval (0,1)01(0,1) using a sigmoidal activation function at the decoder output. Note that with hidden units we refer to the hidden layer of the neural networks of the encoder and decoder. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_27",
"text": " Parameters are updated using stochastic gradient ascent where gradients are computed by differentiating the lower bound estimator ∇𝜽,ϕℒ(𝜽,ϕ;𝐗)subscript∇𝜽bold-italic-ϕℒ𝜽bold-italic-ϕ𝐗\\nabla_{\\boldsymbol{\\theta},\\boldsymbol{\\phi}}\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\phi};\\mathbf{X}) (see algorithm 1), plus a small weight decay term corresponding to a prior p(𝜽)=𝒩(0,𝐈)𝑝𝜽𝒩0𝐈p(\\boldsymbol{\\theta})=\\mathcal{N}(0,\\mathbf{I}). Optimization of this objective is equivalent to approximate MAP estimation, where the likelihood gradient is approximated by the gradient of the lower bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_28",
"text": " We compared performance of AEVB to the wake-sleep algorithm (HDFN95). We employed the same encoder (also called recognition model) for the wake-sleep algorithm and the variational auto-encoder. All parameters, both variational and generative, were initialized by random sampling from 𝒩(0,0.01)𝒩00.01\\mathcal{N}(0,0.01), and were jointly stochastically optimized using the MAP criterion. Stepsizes were adapted with Adagrad (DHS10); the Adagrad global stepsize parameters were chosen from {0.01, 0.02, 0.1} based on performance on the training set in the first few iterations. Minibatches of size M=100𝑀100M=100 were used, with L=1𝐿1L=1 samples per datapoint. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_29",
"text": " We trained generative models (decoders) and corresponding encoders (a.k.a. recognition models) having 500500500 hidden units in case of MNIST, and 200200200 hidden units in case of the Frey Face dataset (to prevent overfitting, since it is a considerably smaller dataset). The chosen number of hidden units is based on prior literature on auto-encoders, and the relative performance of different algorithms was not very sensitive to these choices. Figure 2 shows the results when comparing the lower bounds. Interestingly, superfluous latent variables did not result in overfitting, which is explained by the regularizing nature of the variational bound. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_30",
"text": " For very low-dimensional latent space it is possible to estimate the marginal likelihood of the learned generative models using an MCMC estimator. More information about the marginal likelihood estimator is available in the appendix. For the encoder and decoder we again used neural networks, this time with 100 hidden units, and 3 latent variables; for higher dimensional latent space the estimates became unreliable. Again, the MNIST dataset was used. The AEVB and Wake-Sleep methods were compared to Monte Carlo EM (MCEM) with a Hybrid Monte Carlo (HMC) (DKPR87) sampler; details are in the appendix. We compared the convergence speed for the three algorithms, for a small and large training set size. Results are in figure 3. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_31",
"text": " If we choose a low-dimensional latent space (e.g. 2D), we can use the learned encoders (recognition model) to project high-dimensional data to a low-dimensional manifold. See appendix A for visualisations of the 2D latent manifolds for the MNIST and Frey Face datasets. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_32",
"text": " We have introduced a novel estimator of the variational lower bound, Stochastic Gradient VB (SGVB), for efficient approximate inference with continuous latent variables. The proposed estimator can be straightforwardly differentiated and optimized using standard stochastic gradient methods. For the case of i.i.d. datasets and continuous latent variables per datapoint we introduce an efficient algorithm for efficient inference and learning, Auto-Encoding VB (AEVB), that learns an approximate inference model using the SGVB estimator. The theoretical advantages are reflected in experimental results. ",
"title": "Auto-Encoding Variational Bayes"
},
{
"id": "1312.6114_all_33",
"text": " Since the SGVB estimator and the AEVB algorithm can be applied to almost any inference and learning problem with continuous latent variables, there are plenty of future directions: (i) learning hierarchical generative architectures with deep neural networks (e.g. convolutional networks) used for the encoders and decoders, trained jointly with AEVB; (ii) time-series models (i.e. dynamic Bayesian networks); (iii) application of SGVB to the global parameters; (iv) supervised models with latent variables, useful for learning complicated noise distributions. ",
"title": "Auto-Encoding Variational Bayes"
}
] |
Would it be better to use 1 prototype per class rather than multiple prototypes?
|
[If the number of prototypes per class is fixed and greater than 1, then this would require a partitioning scheme to further cluster the support points within a class [11].
|
[
11
] |
[
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_1",
"text": " Two recent approaches have made significant progress in few-shot learning. Vinyals et al. proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_2",
"text": " We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class’s prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class. Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_3",
"text": " In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_4",
"text": " In few-shot classification we are given a small support set of N𝑁N labeled examples S={(𝐱1,y1),…,(𝐱N,yN)}𝑆subscript𝐱1subscript𝑦1…subscript𝐱𝑁subscript𝑦𝑁S=\\{(\\mathbf{x}_{1},y_{1}),\\ldots,(\\mathbf{x}_{N},y_{N})\\} where each 𝐱i∈ℝDsubscript𝐱𝑖superscriptℝ𝐷\\mathbf{x}_{i}\\in\\mathbb{R}^{D} is the D𝐷D-dimensional feature vector of an example and yi∈{1,…,K}subscript𝑦𝑖1…𝐾y_{i}\\in\\{1,\\ldots,K\\} is the corresponding label. Sksubscript𝑆𝑘S_{k} denotes the set of examples labeled with class k𝑘k. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_5",
"text": " Prototypical networks compute an M𝑀M-dimensional representation 𝐜k∈ℝMsubscript𝐜𝑘superscriptℝ𝑀\\mathbf{c}_{k}\\in\\mathbb{R}^{M}, or prototype, of each class through an embedding function fϕ:ℝD→ℝM:subscript𝑓bold-italic-ϕ→superscriptℝ𝐷superscriptℝ𝑀f_{\\bm{\\phi}}:\\mathbb{R}^{D}\\rightarrow\\mathbb{R}^{M} with learnable parameters ϕbold-italic-ϕ\\bm{\\phi}. Each prototype is the mean vector of the embedded support points belonging to its class: 𝐜k=1|Sk|∑(𝐱i,yi)∈Skfϕ(𝐱i)subscript𝐜𝑘1subscript𝑆𝑘subscriptsubscript𝐱𝑖subscript𝑦𝑖subscript𝑆𝑘subscript𝑓bold-italic-ϕsubscript𝐱𝑖\\mathbf{c}_{k}=\\frac{1}{|S_{k}|}\\sum_{(\\mathbf{x}_{i},y_{i})\\in S_{k}}f_{\\bm{\\phi}}(\\mathbf{x}_{i}) (1) Given a distance function d:ℝM×ℝM→(0,+∞):𝑑→superscriptℝ𝑀superscriptℝ𝑀0d:\\mathbb{R}^{M}\\times\\mathbb{R}^{M}\\rightarrow(0,+\\infty), prototypical networks produce a distribution over classes for a query point 𝐱𝐱\\mathbf{x} based on a softmax over distances to the prototypes in the embedding space: pϕ(y=k|𝐱)=exp(−d(fϕ(𝐱),𝐜k))∑k′exp(−d(fϕ(𝐱),𝐜k′))subscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘subscriptsuperscript𝑘′𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜superscript𝑘′p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x})=\\frac{\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k}))}{\\sum_{k^{\\prime}}\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k^{\\prime}}))} (2) Learning proceeds by minimizing the negative log-probability J(ϕ)=−logpϕ(y=k|𝐱)𝐽bold-italic-ϕsubscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱J(\\bm{\\phi})=-\\log p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x}) of the true class k𝑘k via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss J(ϕ)𝐽bold-italic-ϕJ(\\bm{\\phi}) for a training episode is provided in Algorithm 1. ",
"title": "Prototypical Networks for Few-shot Learning"
},
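A minimal NumPy sketch (an illustrative reading of Equations (1)-(2), not the authors' released code) of computing class prototypes as means of embedded support points and classifying queries with a softmax over negative squared Euclidean distances. The identity embedding and all shapes below are assumptions for the toy example.

```python
import numpy as np

def prototypes(support_emb, support_y, num_classes):
    """c_k = mean of embedded support points of class k (Equation 1)."""
    return np.stack([support_emb[support_y == k].mean(axis=0)
                     for k in range(num_classes)])

def predict_proba(query_emb, protos):
    """Softmax over negative squared Euclidean distances (Equation 2)."""
    d2 = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # (Q, K)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Toy usage: here the "embedding" f_phi is the identity for brevity.
rng = np.random.default_rng(0)
support_emb = rng.standard_normal((10, 4))             # 2 classes x 5 shots
support_y = np.repeat([0, 1], 5)
query_emb = rng.standard_normal((3, 4))
protos = prototypes(support_emb, support_y, num_classes=2)
print(predict_proba(query_emb, protos))                # shape (3, 2)
```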
{
"id": "1703.05175_all_6",
"text": " For a particular class of distance functions, known as regular Bregman divergences , the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence dφsubscript𝑑𝜑d_{\\varphi} is defined as: dφ(𝐳,𝐳′)=φ(𝐳)−φ(𝐳′)−(𝐳−𝐳′)T∇φ(𝐳′),subscript𝑑𝜑𝐳superscript𝐳′𝜑𝐳𝜑superscript𝐳′superscript𝐳superscript𝐳′𝑇∇𝜑superscript𝐳′d_{\\varphi}(\\mathbf{z},\\mathbf{z}^{\\prime})=\\varphi(\\mathbf{z})-\\varphi(\\mathbf{z}^{\\prime})-(\\mathbf{z}-\\mathbf{z}^{\\prime})^{T}\\nabla\\varphi(\\mathbf{z}^{\\prime}), (3) where φ𝜑\\varphi is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance ‖𝐳−𝐳′‖2superscriptnorm𝐳superscript𝐳′2\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2} and Mahalanobis distance. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_7",
"text": " Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_8",
"text": " Moreover, any regular exponential family distribution pψ(𝐳|𝜽)subscript𝑝𝜓conditional𝐳𝜽p_{\\psi}(\\mathbf{z}|\\bm{\\theta}) with parameters 𝜽𝜽\\bm{\\theta} and cumulant function ψ𝜓\\psi can be written in terms of a uniquely determined regular Bregman divergence : pψ(𝐳|𝜽)=exp{𝐳T𝜽−ψ(𝜽)−gψ(𝐳)}=exp{−dφ(𝐳,𝝁(𝜽))−gφ(𝐳)}subscript𝑝𝜓conditional𝐳𝜽superscript𝐳𝑇𝜽𝜓𝜽subscript𝑔𝜓𝐳subscript𝑑𝜑𝐳𝝁𝜽subscript𝑔𝜑𝐳p_{\\psi}(\\mathbf{z}|\\bm{\\theta})=\\exp\\{\\mathbf{z}^{T}\\bm{\\theta}-\\psi(\\bm{\\theta})-g_{\\psi}(\\mathbf{z})\\}=\\exp\\{-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}))-g_{\\varphi}(\\mathbf{z})\\} (4) Consider now a regular exponential family mixture model with parameters 𝚪={𝜽k,πk}k=1K𝚪superscriptsubscriptsubscript𝜽𝑘subscript𝜋𝑘𝑘1𝐾\\bm{\\Gamma}=\\{\\bm{\\theta}_{k},\\pi_{k}\\}_{k=1}^{K}: p(𝐳|𝚪)=∑k=1Kπkpψ(𝐳|𝜽k)=∑k=1Kπkexp(−dφ(𝐳,𝝁(𝜽k))−gφ(𝐳))𝑝conditional𝐳𝚪superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑝𝜓conditional𝐳subscript𝜽𝑘superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscript𝑔𝜑𝐳p(\\mathbf{z}|\\bm{\\Gamma})=\\sum_{k=1}^{K}\\pi_{k}p_{\\psi}(\\mathbf{z}|\\bm{\\theta}_{k})=\\sum_{k=1}^{K}\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k}))-g_{\\varphi}(\\mathbf{z})) (5) Given 𝚪𝚪\\bm{\\Gamma}, inference of the cluster assignment y𝑦y for an unlabeled point 𝐳𝐳\\mathbf{z} becomes: p(y=k|𝐳)=πkexp(−dφ(𝐳,𝝁(𝜽k)))∑k′πk′exp(−dφ(𝐳,𝝁(𝜽k)))𝑝𝑦conditional𝑘𝐳subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscriptsuperscript𝑘′subscript𝜋superscript𝑘′subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘p(y=k|\\mathbf{z})=\\frac{\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))}{\\sum_{k^{\\prime}}\\pi_{k^{\\prime}}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))} (6) For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with fϕ(𝐱)=𝐳subscript𝑓italic-ϕ𝐱𝐳f_{\\phi}(\\mathbf{x})=\\mathbf{z} and 𝐜k=𝝁(𝜽k)subscript𝐜𝑘𝝁subscript𝜽𝑘\\mathbf{c}_{k}=\\bm{\\mu}(\\bm{\\theta}_{k}). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by dφsubscript𝑑𝜑d_{\\varphi}. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_9",
"text": " A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(𝐳,𝐳′)=‖𝐳−𝐳′‖2𝑑𝐳superscript𝐳′superscriptnorm𝐳superscript𝐳′2d(\\mathbf{z},\\mathbf{z^{\\prime}})=\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2}, then the model in Equation (2) is equivalent to a linear model with a particular parameterization . To see this, expand the term in the exponent: −‖fϕ(𝐱)−𝐜k‖2superscriptnormsubscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘2\\displaystyle-\\|f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}\\|^{2} =−fϕ(𝐱)⊤fϕ(𝐱)+2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜kabsentsubscript𝑓bold-italic-ϕsuperscript𝐱topsubscript𝑓bold-italic-ϕ𝐱2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘\\displaystyle=-f_{\\bm{\\phi}}(\\mathbf{x})^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (7) The first term in Equation (7) is constant with respect to the class k𝑘k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows: 2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜k=𝐰k⊤fϕ(𝐱)+bk, where 𝐰k=2𝐜k and bk=−𝐜k⊤𝐜k2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘superscriptsubscript𝐰𝑘topsubscript𝑓bold-italic-ϕ𝐱subscript𝑏𝑘, where subscript𝐰𝑘2subscript𝐜𝑘 and subscript𝑏𝑘superscriptsubscript𝐜𝑘topsubscript𝐜𝑘2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k}=\\mathbf{w}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+b_{k}\\mbox{, where }\\mathbf{w}_{k}=2\\mathbf{c}_{k}\\mbox{ and }b_{k}=-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (8) We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., (14, 28). ",
"title": "Prototypical Networks for Few-shot Learning"
},
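A quick NumPy check (a sketch, not from the paper) that the expansion in Equations (7)-(8) holds: the negative squared distance and the linear form w_k^T f(x) + b_k differ only by a term that does not depend on k, so they induce the same softmax.

```python
import numpy as np

rng = np.random.default_rng(0)
f_x = rng.standard_normal(4)          # embedded query f_phi(x)
C = rng.standard_normal((3, 4))       # prototypes c_k for K = 3 classes

neg_sq_dist = -((f_x - C) ** 2).sum(axis=1)
w = 2.0 * C                            # w_k = 2 c_k
b = -(C * C).sum(axis=1)               # b_k = -c_k^T c_k
linear = w @ f_x + b

# The difference is the class-independent constant -f(x)^T f(x).
print(np.allclose(neg_sq_dist - linear, -f_x @ f_x))   # True
```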
{
"id": "1703.05175_all_10",
"text": " Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, 𝐜k=𝐱ksubscript𝐜𝑘subscript𝐱𝑘\\mathbf{c}_{k}=\\mathbf{x}_{k} since there is only one support point per class, and matching networks and prototypical networks become equivalent. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_11",
"text": " A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 111, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. and Rippel et al. ; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_12",
"text": " Vinyals et al. propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_13",
"text": " Vinyals et al. and Ravi and Larochelle apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_14",
"text": " A straightforward way to construct episodes, used in Vinyals et al. and Ravi and Larochelle , is to choose Ncsubscript𝑁𝑐N_{c} classes and NSsubscript𝑁𝑆N_{S} support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 555-way classification and 111-shot learning, then training episodes could be comprised of Nc=5subscript𝑁𝑐5N_{c}=5, NS=1subscript𝑁𝑆1N_{S}=1. We have found, however, that it can be extremely beneficial to train with a higher Ncsubscript𝑁𝑐N_{c}, or “way”, than will be used at test-time. In our experiments, we tune the training Ncsubscript𝑁𝑐N_{c} on a held-out validation set. Another consideration is whether to match NSsubscript𝑁𝑆N_{S}, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. ",
"title": "Prototypical Networks for Few-shot Learning"
},
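A minimal sketch (label array, shapes, and helper names are assumptions, not from the paper) of sampling one N_c-way, N_S-shot training episode with additional query points, in NumPy.

```python
import numpy as np

def sample_episode(labels, n_way, n_shot, n_query, rng):
    """Return (support_idx, query_idx, episode_classes) for one episode."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))[:n_shot + n_query]
        support.append(idx[:n_shot])
        query.append(idx[n_shot:])
    return np.concatenate(support), np.concatenate(query), classes

# Toy usage: 100 examples over 10 classes; train with a higher "way" (here 5)
# than a 1-shot test scenario alone would suggest, as discussed above.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 10)
s_idx, q_idx, cls = sample_episode(labels, n_way=5, n_shot=1, n_query=5, rng=rng)
print(cls, s_idx.shape, q_idx.shape)    # 5 classes, 5 support points, 25 queries
```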
{
"id": "1703.05175_all_15",
"text": " Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector 𝐯ksubscript𝐯𝑘\\mathbf{v}_{k} for each class. These could be determined in advance, or they could be learned from e.g., raw text . Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define 𝐜k=gϑ(𝐯k)subscript𝐜𝑘subscript𝑔bold-italic-ϑsubscript𝐯𝑘\\mathbf{c}_{k}=g_{\\bm{\\vartheta}}(\\mathbf{v}_{k}) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g𝑔g to have unit length, however we do not constrain the query embedding f𝑓f. ",
"title": "Prototypical Networks for Few-shot Learning"
},
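A small NumPy sketch (the linear map, its dimensions, and normalization are illustrative assumptions) of zero-shot prototypes: each class prototype is an embedding of its meta-data vector, rescaled to unit length as described above.

```python
import numpy as np

def zero_shot_prototypes(meta, W):
    """c_k = g(v_k): a linear embedding of class meta-data, scaled to unit length."""
    protos = meta @ W                                    # (K, M)
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

rng = np.random.default_rng(0)
meta = rng.standard_normal((50, 312))    # e.g. 50 classes x 312 attributes
W = rng.standard_normal((312, 64))       # hypothetical linear map g_vartheta
protos = zero_shot_prototypes(meta, W)
print(np.linalg.norm(protos, axis=1)[:3])   # all 1.0: prototypes have unit length
```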
{
"id": "1703.05175_all_16",
"text": " For few-shot learning, we performed experiments on Omniglot and the miniImageNet version of ILSVRC-2012 with the splits proposed by Ravi and Larochelle . We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_17",
"text": " Omniglot is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. by resizing the grayscale images to 28 ×\\times 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. and is composed of four convolutional blocks. Each block comprises a 64-filter 3 ×\\times 3 convolution, batch normalization layer , a ReLU nonlinearity and a 2 ×\\times 2 max-pooling layer. When applied to the 28 ×\\times 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam . We used an initial learning rate of 10−3superscript10310^{-3} and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_18",
"text": " We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician and both the fine-tuned and non-fine-tuned versions of matching networks . We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_19",
"text": " The miniImageNet dataset, originally proposed by Vinyals et al. , is derived from the larger ILSVRC-12 dataset . The splits used by Vinyals et al. consist of 60,000 color images of size 84 ×\\times 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_20",
"text": " We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle , which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_21",
"text": " We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_22",
"text": " In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset . The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. in preparing the data. We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image222Features downloaded from https://github.com/reedscot/cvpr2016.. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_23",
"text": " We learned a simple linear mapping on top of both the 1024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10−4superscript10410^{-4} and weight decay of 10−5superscript10510^{-5}. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_24",
"text": " Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE , SJE , and DS-SJE/DA-SJE . We also compare to a recent clustering approach which trains an SVM on a learned feature space obtained by fine-tuning AlexNet . These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_25",
"text": " The literature on metric learning is vast (15, 5); we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_26",
"text": " Our approach is also similar to the nearest class mean approach , where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k𝑘k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_27",
"text": " Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle . The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_28",
"text": " Prototypical networks are also related to the neural statistician from the generative modeling literature, which extends the variational autoencoder (12, 24) to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_29",
"text": " With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither nor uses episodic training, which allows us to help speed up training and regularize the model. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_30",
"text": " We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_31",
"text": " We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. ",
"title": "Prototypical Networks for Few-shot Learning"
}
] |
How does this paper define a prototype?
|
[The paper learns a non-linear mapping of the input into an embedding space using a neural network and takes a class’s prototype to be the mean of its support set in the embedding space [2]. It learns the embedding of the meta-data into a shared space to serve as the prototype for each class [5].
|
[
2,
5
] |
[
{
"id": "1703.05175_all_0",
"text": " Few-shot classification (20, 16, 13) is a task in which a classifier must be adapted to accommodate new classes not seen in training, given only a few examples of each of these classes. A naive approach, such as re-training the model on the new data, would severely overfit. While the problem is quite difficult, it has been demonstrated that humans have the ability to perform even one-shot classification, where only a single example of each new class is given, with a high degree of accuracy . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_1",
"text": " Two recent approaches have made significant progress in few-shot learning. Vinyals et al. proposed matching networks, which uses an attention mechanism over a learned embedding of the labeled set of examples (the support set) to predict classes for the unlabeled points (the query set). Matching networks can be interpreted as a weighted nearest-neighbor classifier applied within an embedding space. Notably, this model utilizes sampled mini-batches called episodes during training, where each episode is designed to mimic the few-shot task by subsampling classes as well as data points. The use of episodes makes the training problem more faithful to the test environment and thereby improves generalization. Ravi and Larochelle take the episodic training idea further and propose a meta-learning approach to few-shot learning. Their approach involves training an LSTM to produce the updates to a classifier, given an episode, such that it will generalize well to a test-set. Here, rather than training a single model over multiple episodes, the LSTM meta-learner learns to train a custom model for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_2",
"text": " We attack the problem of few-shot learning by addressing the key issue of overfitting. Since data is severely limited, we work under the assumption that a classifier should have a very simple inductive bias. Our approach, prototypical networks, is based on the idea that there exists an embedding in which points cluster around a single prototype representation for each class. In order to do this, we learn a non-linear mapping of the input into an embedding space using a neural network and take a class’s prototype to be the mean of its support set in the embedding space. Classification is then performed for an embedded query point by simply finding the nearest class prototype. We follow the same approach to tackle zero-shot learning; here each class comes with meta-data giving a high-level description of the class rather than a small number of labeled examples. We therefore learn an embedding of the meta-data into a shared space to serve as the prototype for each class. Classification is performed, as in the few-shot scenario, by finding the nearest class prototype for an embedded query point. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_3",
"text": " In this paper, we formulate prototypical networks for both the few-shot and zero-shot settings. We draw connections to matching networks in the one-shot setting, and analyze the underlying distance function used in the model. In particular, we relate prototypical networks to clustering in order to justify the use of class means as prototypes when distances are computed with a Bregman divergence, such as squared Euclidean distance. We find empirically that the choice of distance is vital, as Euclidean distance greatly outperforms the more commonly used cosine similarity. On several benchmark tasks, we achieve state-of-the-art performance. Prototypical networks are simpler and more efficient than recent meta-learning algorithms, making them an appealing approach to few-shot and zero-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_4",
"text": " In few-shot classification we are given a small support set of N𝑁N labeled examples S={(𝐱1,y1),…,(𝐱N,yN)}𝑆subscript𝐱1subscript𝑦1…subscript𝐱𝑁subscript𝑦𝑁S=\\{(\\mathbf{x}_{1},y_{1}),\\ldots,(\\mathbf{x}_{N},y_{N})\\} where each 𝐱i∈ℝDsubscript𝐱𝑖superscriptℝ𝐷\\mathbf{x}_{i}\\in\\mathbb{R}^{D} is the D𝐷D-dimensional feature vector of an example and yi∈{1,…,K}subscript𝑦𝑖1…𝐾y_{i}\\in\\{1,\\ldots,K\\} is the corresponding label. Sksubscript𝑆𝑘S_{k} denotes the set of examples labeled with class k𝑘k. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_5",
"text": " Prototypical networks compute an M𝑀M-dimensional representation 𝐜k∈ℝMsubscript𝐜𝑘superscriptℝ𝑀\\mathbf{c}_{k}\\in\\mathbb{R}^{M}, or prototype, of each class through an embedding function fϕ:ℝD→ℝM:subscript𝑓bold-italic-ϕ→superscriptℝ𝐷superscriptℝ𝑀f_{\\bm{\\phi}}:\\mathbb{R}^{D}\\rightarrow\\mathbb{R}^{M} with learnable parameters ϕbold-italic-ϕ\\bm{\\phi}. Each prototype is the mean vector of the embedded support points belonging to its class: 𝐜k=1|Sk|∑(𝐱i,yi)∈Skfϕ(𝐱i)subscript𝐜𝑘1subscript𝑆𝑘subscriptsubscript𝐱𝑖subscript𝑦𝑖subscript𝑆𝑘subscript𝑓bold-italic-ϕsubscript𝐱𝑖\\mathbf{c}_{k}=\\frac{1}{|S_{k}|}\\sum_{(\\mathbf{x}_{i},y_{i})\\in S_{k}}f_{\\bm{\\phi}}(\\mathbf{x}_{i}) (1) Given a distance function d:ℝM×ℝM→(0,+∞):𝑑→superscriptℝ𝑀superscriptℝ𝑀0d:\\mathbb{R}^{M}\\times\\mathbb{R}^{M}\\rightarrow(0,+\\infty), prototypical networks produce a distribution over classes for a query point 𝐱𝐱\\mathbf{x} based on a softmax over distances to the prototypes in the embedding space: pϕ(y=k|𝐱)=exp(−d(fϕ(𝐱),𝐜k))∑k′exp(−d(fϕ(𝐱),𝐜k′))subscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘subscriptsuperscript𝑘′𝑑subscript𝑓bold-italic-ϕ𝐱subscript𝐜superscript𝑘′p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x})=\\frac{\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k}))}{\\sum_{k^{\\prime}}\\exp(-d(f_{\\bm{\\phi}}(\\mathbf{x}),\\mathbf{c}_{k^{\\prime}}))} (2) Learning proceeds by minimizing the negative log-probability J(ϕ)=−logpϕ(y=k|𝐱)𝐽bold-italic-ϕsubscript𝑝bold-italic-ϕ𝑦conditional𝑘𝐱J(\\bm{\\phi})=-\\log p_{\\bm{\\phi}}(y=k\\,|\\,\\mathbf{x}) of the true class k𝑘k via SGD. Training episodes are formed by randomly selecting a subset of classes from the training set, then choosing a subset of examples within each class to act as the support set and a subset of the remainder to serve as query points. Pseudocode to compute the loss J(ϕ)𝐽bold-italic-ϕJ(\\bm{\\phi}) for a training episode is provided in Algorithm 1. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_6",
"text": " For a particular class of distance functions, known as regular Bregman divergences , the prototypical networks algorithm is equivalent to performing mixture density estimation on the support set with an exponential family density. A regular Bregman divergence dφsubscript𝑑𝜑d_{\\varphi} is defined as: dφ(𝐳,𝐳′)=φ(𝐳)−φ(𝐳′)−(𝐳−𝐳′)T∇φ(𝐳′),subscript𝑑𝜑𝐳superscript𝐳′𝜑𝐳𝜑superscript𝐳′superscript𝐳superscript𝐳′𝑇∇𝜑superscript𝐳′d_{\\varphi}(\\mathbf{z},\\mathbf{z}^{\\prime})=\\varphi(\\mathbf{z})-\\varphi(\\mathbf{z}^{\\prime})-(\\mathbf{z}-\\mathbf{z}^{\\prime})^{T}\\nabla\\varphi(\\mathbf{z}^{\\prime}), (3) where φ𝜑\\varphi is a differentiable, strictly convex function of the Legendre type. Examples of Bregman divergences include squared Euclidean distance ‖𝐳−𝐳′‖2superscriptnorm𝐳superscript𝐳′2\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2} and Mahalanobis distance. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_7",
"text": " Prototype computation can be viewed in terms of hard clustering on the support set, with one cluster per class and each support point assigned to its corresponding class cluster. It has been shown for Bregman divergences that the cluster representative achieving minimal distance to its assigned points is the cluster mean. Thus the prototype computation in Equation (1) yields optimal cluster representatives given the support set labels when a Bregman divergence is used. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_8",
"text": " Moreover, any regular exponential family distribution pψ(𝐳|𝜽)subscript𝑝𝜓conditional𝐳𝜽p_{\\psi}(\\mathbf{z}|\\bm{\\theta}) with parameters 𝜽𝜽\\bm{\\theta} and cumulant function ψ𝜓\\psi can be written in terms of a uniquely determined regular Bregman divergence : pψ(𝐳|𝜽)=exp{𝐳T𝜽−ψ(𝜽)−gψ(𝐳)}=exp{−dφ(𝐳,𝝁(𝜽))−gφ(𝐳)}subscript𝑝𝜓conditional𝐳𝜽superscript𝐳𝑇𝜽𝜓𝜽subscript𝑔𝜓𝐳subscript𝑑𝜑𝐳𝝁𝜽subscript𝑔𝜑𝐳p_{\\psi}(\\mathbf{z}|\\bm{\\theta})=\\exp\\{\\mathbf{z}^{T}\\bm{\\theta}-\\psi(\\bm{\\theta})-g_{\\psi}(\\mathbf{z})\\}=\\exp\\{-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}))-g_{\\varphi}(\\mathbf{z})\\} (4) Consider now a regular exponential family mixture model with parameters 𝚪={𝜽k,πk}k=1K𝚪superscriptsubscriptsubscript𝜽𝑘subscript𝜋𝑘𝑘1𝐾\\bm{\\Gamma}=\\{\\bm{\\theta}_{k},\\pi_{k}\\}_{k=1}^{K}: p(𝐳|𝚪)=∑k=1Kπkpψ(𝐳|𝜽k)=∑k=1Kπkexp(−dφ(𝐳,𝝁(𝜽k))−gφ(𝐳))𝑝conditional𝐳𝚪superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑝𝜓conditional𝐳subscript𝜽𝑘superscriptsubscript𝑘1𝐾subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscript𝑔𝜑𝐳p(\\mathbf{z}|\\bm{\\Gamma})=\\sum_{k=1}^{K}\\pi_{k}p_{\\psi}(\\mathbf{z}|\\bm{\\theta}_{k})=\\sum_{k=1}^{K}\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k}))-g_{\\varphi}(\\mathbf{z})) (5) Given 𝚪𝚪\\bm{\\Gamma}, inference of the cluster assignment y𝑦y for an unlabeled point 𝐳𝐳\\mathbf{z} becomes: p(y=k|𝐳)=πkexp(−dφ(𝐳,𝝁(𝜽k)))∑k′πk′exp(−dφ(𝐳,𝝁(𝜽k)))𝑝𝑦conditional𝑘𝐳subscript𝜋𝑘subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘subscriptsuperscript𝑘′subscript𝜋superscript𝑘′subscript𝑑𝜑𝐳𝝁subscript𝜽𝑘p(y=k|\\mathbf{z})=\\frac{\\pi_{k}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))}{\\sum_{k^{\\prime}}\\pi_{k^{\\prime}}\\exp(-d_{\\varphi}(\\mathbf{z},\\bm{\\mu}(\\bm{\\theta}_{k})))} (6) For an equally-weighted mixture model with one cluster per class, cluster assignment inference (6) is equivalent to query class prediction (2) with fϕ(𝐱)=𝐳subscript𝑓italic-ϕ𝐱𝐳f_{\\phi}(\\mathbf{x})=\\mathbf{z} and 𝐜k=𝝁(𝜽k)subscript𝐜𝑘𝝁subscript𝜽𝑘\\mathbf{c}_{k}=\\bm{\\mu}(\\bm{\\theta}_{k}). In this case, prototypical networks are effectively performing mixture density estimation with an exponential family distribution determined by dφsubscript𝑑𝜑d_{\\varphi}. The choice of distance therefore specifies modeling assumptions about the class-conditional data distribution in the embedding space. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_9",
"text": " A simple analysis is useful in gaining insight into the nature of the learned classifier. When we use Euclidean distance d(𝐳,𝐳′)=‖𝐳−𝐳′‖2𝑑𝐳superscript𝐳′superscriptnorm𝐳superscript𝐳′2d(\\mathbf{z},\\mathbf{z^{\\prime}})=\\|\\mathbf{z}-\\mathbf{z}^{\\prime}\\|^{2}, then the model in Equation (2) is equivalent to a linear model with a particular parameterization . To see this, expand the term in the exponent: −‖fϕ(𝐱)−𝐜k‖2superscriptnormsubscript𝑓bold-italic-ϕ𝐱subscript𝐜𝑘2\\displaystyle-\\|f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}\\|^{2} =−fϕ(𝐱)⊤fϕ(𝐱)+2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜kabsentsubscript𝑓bold-italic-ϕsuperscript𝐱topsubscript𝑓bold-italic-ϕ𝐱2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘\\displaystyle=-f_{\\bm{\\phi}}(\\mathbf{x})^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (7) The first term in Equation (7) is constant with respect to the class k𝑘k, so it does not affect the softmax probabilities. We can write the remaining terms as a linear model as follows: 2𝐜k⊤fϕ(𝐱)−𝐜k⊤𝐜k=𝐰k⊤fϕ(𝐱)+bk, where 𝐰k=2𝐜k and bk=−𝐜k⊤𝐜k2superscriptsubscript𝐜𝑘topsubscript𝑓bold-italic-ϕ𝐱superscriptsubscript𝐜𝑘topsubscript𝐜𝑘superscriptsubscript𝐰𝑘topsubscript𝑓bold-italic-ϕ𝐱subscript𝑏𝑘, where subscript𝐰𝑘2subscript𝐜𝑘 and subscript𝑏𝑘superscriptsubscript𝐜𝑘topsubscript𝐜𝑘2\\mathbf{c}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k}=\\mathbf{w}_{k}^{\\top}f_{\\bm{\\phi}}(\\mathbf{x})+b_{k}\\mbox{, where }\\mathbf{w}_{k}=2\\mathbf{c}_{k}\\mbox{ and }b_{k}=-\\mathbf{c}_{k}^{\\top}\\mathbf{c}_{k} (8) We focus primarily on squared Euclidean distance (corresponding to spherical Gaussian densities) in this work. Our results indicate that Euclidean distance is an effective choice despite the equivalence to a linear model. We hypothesize this is because all of the required non-linearity can be learned within the embedding function. Indeed, this is the approach that modern neural network classification systems currently use, e.g., (14, 28). ",
"title": "Prototypical Networks for Few-shot Learning"
},
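The equivalence stated in Equations (7) and (8) can be checked numerically. The sketch below (NumPy, with illustrative names) confirms that a softmax over negative squared distances to the prototypes matches the softmax of the linear model with w_k = 2 c_k and b_k = -c_k^T c_k, since the two sets of logits differ only by a class-independent term.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

rng = np.random.default_rng(1)
c = rng.normal(size=(4, 8))          # class prototypes c_k
z = rng.normal(size=8)               # embedded query f_phi(x)

# distance-based logits: -||z - c_k||^2
dist_logits = -np.sum((c - z) ** 2, axis=1)

# linear-model logits: w_k^T z + b_k with w_k = 2 c_k, b_k = -c_k^T c_k
W, b = 2 * c, -np.sum(c * c, axis=1)
lin_logits = W @ z + b

# the logits differ only by the class-independent term -z^T z, so softmaxes agree
assert np.allclose(softmax(dist_logits), softmax(lin_logits))
```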
{
"id": "1703.05175_all_10",
"text": " Prototypical networks differ from matching networks in the few-shot case with equivalence in the one-shot scenario. Matching networks produce a weighted nearest neighbor classifier given the support set, while prototypical networks produce a linear classifier when squared Euclidean distance is used. In the case of one-shot learning, 𝐜k=𝐱ksubscript𝐜𝑘subscript𝐱𝑘\\mathbf{c}_{k}=\\mathbf{x}_{k} since there is only one support point per class, and matching networks and prototypical networks become equivalent. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_11",
"text": " A natural question is whether it makes sense to use multiple prototypes per class instead of just one. If the number of prototypes per class is fixed and greater than 111, then this would require a partitioning scheme to further cluster the support points within a class. This has been proposed in Mensink et al. and Rippel et al. ; however both methods require a separate partitioning phase that is decoupled from the weight updates, while our approach is simple to learn with ordinary gradient descent methods. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_12",
"text": " Vinyals et al. propose a number of extensions, including decoupling the embedding functions of the support and query points, and using a second-level, fully-conditional embedding (FCE) that takes into account specific points in each episode. These could likewise be incorporated into prototypical networks, however they increase the number of learnable parameters, and FCE imposes an arbitrary ordering on the support set using a bi-directional LSTM. Instead, we show that it is possible to achieve the same level of performance using simple design choices, which we outline next. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_13",
"text": " Vinyals et al. and Ravi and Larochelle apply matching networks using cosine distance. However for both prototypical and matching networks any distance is permissible, and we found that using squared Euclidean distance can greatly improve results for both. We conjecture this is primarily due to cosine distance not being a Bregman divergence, and thus the equivalence to mixture density estimation discussed in Section 2.3 does not hold. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_14",
"text": " A straightforward way to construct episodes, used in Vinyals et al. and Ravi and Larochelle , is to choose Ncsubscript𝑁𝑐N_{c} classes and NSsubscript𝑁𝑆N_{S} support points per class in order to match the expected situation at test-time. That is, if we expect at test-time to perform 555-way classification and 111-shot learning, then training episodes could be comprised of Nc=5subscript𝑁𝑐5N_{c}=5, NS=1subscript𝑁𝑆1N_{S}=1. We have found, however, that it can be extremely beneficial to train with a higher Ncsubscript𝑁𝑐N_{c}, or “way”, than will be used at test-time. In our experiments, we tune the training Ncsubscript𝑁𝑐N_{c} on a held-out validation set. Another consideration is whether to match NSsubscript𝑁𝑆N_{S}, or “shot”, at train and test-time. For prototypical networks, we found that it is usually best to train and test with the same “shot” number. ",
"title": "Prototypical Networks for Few-shot Learning"
},
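As a rough illustration of the episode construction described above, the following sketch samples a single N_c-way, N_S-shot episode with a fixed number of query points per class. The data layout (a dict mapping class labels to lists of examples) and the helper name are assumptions for illustration only.

```python
import random

def sample_episode(data_by_class, n_way, n_support, n_query):
    """data_by_class: dict {label: list of examples}. Returns support and query sets."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = {}, {}
    for c in classes:
        examples = random.sample(data_by_class[c], n_support + n_query)
        support[c] = examples[:n_support]
        query[c] = examples[n_support:]
    return support, query

# e.g. train with a higher "way" (20) than the 5-way setting used at test time:
# support, query = sample_episode(train_data, n_way=20, n_support=1, n_query=15)
```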
{
"id": "1703.05175_all_15",
"text": " Zero-shot learning differs from few-shot learning in that instead of being given a support set of training points, we are given a class meta-data vector 𝐯ksubscript𝐯𝑘\\mathbf{v}_{k} for each class. These could be determined in advance, or they could be learned from e.g., raw text . Modifying prototypical networks to deal with the zero-shot case is straightforward: we simply define 𝐜k=gϑ(𝐯k)subscript𝐜𝑘subscript𝑔bold-italic-ϑsubscript𝐯𝑘\\mathbf{c}_{k}=g_{\\bm{\\vartheta}}(\\mathbf{v}_{k}) to be a separate embedding of the meta-data vector. An illustration of the zero-shot procedure for prototypical networks as it relates to the few-shot procedure is shown in Figure 1. Since the meta-data vector and query point come from different input domains, we found it was helpful empirically to fix the prototype embedding g𝑔g to have unit length, however we do not constrain the query embedding f𝑓f. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_16",
"text": " For few-shot learning, we performed experiments on Omniglot and the miniImageNet version of ILSVRC-2012 with the splits proposed by Ravi and Larochelle . We perform zero-shot experiments on the 2011 version of the Caltech UCSD bird dataset (CUB-200 2011) . ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_17",
"text": " Omniglot is a dataset of 1623 handwritten characters collected from 50 alphabets. There are 20 examples associated with each character, where each example is drawn by a different human subject. We follow the procedure of Vinyals et al. by resizing the grayscale images to 28 ×\\times 28 and augmenting the character classes with rotations in multiples of 90 degrees. We use 1200 characters plus rotations for training (4,800 classes in total) and the remaining classes, including rotations, for test. Our embedding architecture mirrors that used by Vinyals et al. and is composed of four convolutional blocks. Each block comprises a 64-filter 3 ×\\times 3 convolution, batch normalization layer , a ReLU nonlinearity and a 2 ×\\times 2 max-pooling layer. When applied to the 28 ×\\times 28 Omniglot images this architecture results in a 64-dimensional output space. We use the same encoder for embedding both support and query points. All of our models were trained via SGD with Adam . We used an initial learning rate of 10−3superscript10310^{-3} and cut the learning rate in half every 2000 episodes. No regularization was used other than batch normalization. ",
"title": "Prototypical Networks for Few-shot Learning"
},
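A PyTorch sketch of the four-block embedding architecture described above (64-filter 3 × 3 convolution, batch normalization, ReLU, 2 × 2 max-pooling per block); padding and other minor details are assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class ProtoEncoder(nn.Module):
    """Four conv blocks; for 1x28x28 Omniglot inputs the output is 64-dimensional."""
    def __init__(self, in_channels=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_channels, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
        )

    def forward(self, x):
        return self.net(x).flatten(start_dim=1)

# 28 -> 14 -> 7 -> 3 -> 1 spatial size, so 64 features per image
emb = ProtoEncoder()(torch.randn(5, 1, 28, 28))
print(emb.shape)  # torch.Size([5, 64])
```

For the 84 × 84 miniImageNet images mentioned later, the same architecture yields a 5 × 5 spatial map and hence a 1600-dimensional embedding, consistent with the text.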
{
"id": "1703.05175_all_18",
"text": " We trained prototypical networks using Euclidean distance in the 1-shot and 5-shot scenarios with training episodes containing 60 classes and 5 query points per class. We found that it is advantageous to match the training-shot with the test-shot, and to use more classes (higher “way”) per training episode rather than fewer. We compare against various baselines, including the neural statistician and both the fine-tuned and non-fine-tuned versions of matching networks . We computed classification accuracy for our models averaged over 1000 randomly generated episodes from the test set. The results are shown in Table 1 and to our knowledge they represent the state-of-the-art on this dataset. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_19",
"text": " The miniImageNet dataset, originally proposed by Vinyals et al. , is derived from the larger ILSVRC-12 dataset . The splits used by Vinyals et al. consist of 60,000 color images of size 84 ×\\times 84 divided into 100 classes with 600 examples each. For our experiments, we use the splits introduced by Ravi and Larochelle in order to directly compare with state-of-the-art algorithms for few-shot learning. Their splits use a different set of 100 classes, divided into 64 training, 16 validation, and 20 test classes. We follow their procedure by training on the 64 training classes and using the 16 validation classes for monitoring generalization performance only. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_20",
"text": " We use the same four-block embedding architecture as in our Omniglot experiments, though here it results in a 1600-dimensional output space due to the increased size of the images. We also use the same learning rate schedule as in our Omniglot experiments and train until validation loss stops improving. We train using 30-way episodes for 1-shot classification and 20-way episodes for 5-shot classification. We match train shot to test shot and each class contains 15 query points per episode. We compare to the baselines as reported by Ravi and Larochelle , which include a simple nearest neighbor approach on top of features learned by a classification network on the 64 training classes. The other baselines are two non-fine-tuned variants of matching networks (both ordinary and FCE) and the Meta-Learner LSTM. As can be seen in Table 2, prototypical networks achieves state-of-the-art here by a wide margin. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_21",
"text": " We conducted further analysis, to determine the effect of distance metric and the number of training classes per episode on the performance of prototypical networks and matching networks. To make the methods comparable, we use our own implementation of matching networks that utilizes the same embedding architecture as our prototypical networks. In Figure 2 we compare cosine vs. Euclidean distance and 5-way vs. 20-way training episodes in the 1-shot and 5-shot scenarios, with 15 query points per class per episode. We note that 20-way achieves higher accuracy than 5-way and conjecture that the increased difficulty of 20-way classification helps the network to generalize better, because it forces the model to make more fine-grained decisions in the embedding space. Also, using Euclidean distance improves performance substantially over cosine distance. This effect is even more pronounced for prototypical networks, in which computing the class prototype as the mean of embedded support points is more naturally suited to Euclidean distances since cosine distance is not a Bregman divergence. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_22",
"text": " In order to assess the suitability of our approach for zero-shot learning, we also run experiments on the Caltech-UCSD Birds (CUB) 200-2011 dataset . The CUB dataset contains 11,788 images of 200 bird species. We closely follow the procedure of Reed et al. in preparing the data. We use their splits to divide the classes into 100 training, 50 validation, and 50 test. For images we use 1,024-dimensional features extracted by applying GoogLeNet to middle, upper left, upper right, lower left, and lower right crops of the original and horizontally-flipped image222Features downloaded from https://github.com/reedscot/cvpr2016.. At test time we use only the middle crop of the original image. For class meta-data we use the 312-dimensional continuous attribute vectors provided with the CUB dataset. These attributes encode various characteristics of the bird species such as their color, shape, and feather patterns. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_23",
"text": " We learned a simple linear mapping on top of both the 1024-dimensional image features and the 312-dimensional attribute vectors to produce a 1,024-dimensional output space. For this dataset we found it helpful to normalize the class prototypes (embedded attribute vectors) to be of unit length, since the attribute vectors come from a different domain than the images. Training episodes were constructed with 50 classes and 10 query images per class. The embeddings were optimized via SGD with Adam at a fixed learning rate of 10−4superscript10410^{-4} and weight decay of 10−5superscript10510^{-5}. Early stopping on validation loss was used to determine the optimal number of epochs for retraining on the training plus validation set. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_24",
"text": " Table 3 shows that we achieve state-of-the-art results by a large margin when compared to methods utilizing attributes as class meta-data. We compare our method to other embedding approaches, such as ALE , SJE , and DS-SJE/DA-SJE . We also compare to a recent clustering approach which trains an SVM on a learned feature space obtained by fine-tuning AlexNet . These zero-shot classification results demonstrate that our approach is general enough to be applied even when the data points (images) are from a different domain relative to the classes (attributes). ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_25",
"text": " The literature on metric learning is vast (15, 5); we summarize here the work most relevant to our proposed method. Neighborhood Components Analysis (NCA) learns a Mahalanobis distance to maximize K-nearest-neighbor’s (KNN) leave-one-out accuracy in the transformed space. Salakhutdinov and Hinton extend NCA by using a neural network to perform the transformation. Large margin nearest neighbor (LMNN) classification also attempts to optimize KNN accuracy but does so using a hinge loss that encourages the local neighborhood of a point to contain other points with the same label. The DNet-KNN is another margin-based method that improves upon LMNN by utilizing a neural network to perform the embedding instead of a simple linear transformation. Of these, our method is most similar to the non-linear extension of NCA because we use a neural network to perform the embedding and we optimize a softmax based on Euclidean distances in the transformed space, as opposed to a margin loss. A key distinction between our approach and non-linear NCA is that we form a softmax directly over classes, rather than individual points, computed from distances to each class’s prototype representation. This allows each class to have a concise representation independent of the number of data points and obviates the need to store the entire support set to make predictions. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_26",
"text": " Our approach is also similar to the nearest class mean approach , where each class is represented by the mean of its examples. This approach was developed to rapidly incorporate new classes into a classifier without retraining, however it relies on a linear embedding and was designed to handle the case where the novel classes come with a large number of examples. In contrast, our approach utilizes neural networks to non-linearly embed points and we couple this with episodic training in order to handle the few-shot scenario. Mensink et al. attempt to extend their approach to also perform non-linear classification, but they do so by allowing classes to have multiple prototypes. They find these prototypes in a pre-processing step by using k𝑘k-means on the input space and then perform a multi-modal variant of their linear embedding. Prototypical networks, on the other hand, learn a non-linear embedding in an end-to-end manner with no such pre-processing, producing a non-linear classifier that still only requires one prototype per class. In addition, our approach naturally generalizes to other distance functions, particularly Bregman divergences. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_27",
"text": " Another relevant few-shot learning method is the meta-learning approach proposed in Ravi and Larochelle . The key insight here is that LSTM dynamics and gradient descent can be written in effectively the same way. An LSTM can then be trained to itself train a model from a given episode, with the performance goal of generalizing well on the query points. Matching networks and prototypical networks can also be seen as forms of meta-learning, in the sense that they produce simple classifiers dynamically from new training episodes; however the core embeddings they rely on are fixed after training. The FCE extension to matching nets involves a secondary embedding that depends on the support set. However, in the few-shot scenario the amount of data is so small that a simple inductive bias seems to work well, without the need to learn a custom embedding for each episode. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_28",
"text": " Prototypical networks are also related to the neural statistician from the generative modeling literature, which extends the variational autoencoder (12, 24) to learn generative models of datasets rather than individual points. One component of the neural statistician is the “statistic network” which summarizes a set of data points into a statistic vector. It does this by encoding each point within a dataset, taking a sample mean, and applying a post-processing network to obtain an approximate posterior over the statistic vector. Edwards and Storkey test their model for one-shot classification on the Omniglot dataset by considering each character to be a separate dataset and making predictions based on the class whose approximate posterior over the statistic vector has minimal KL-divergence from the posterior inferred by the test point. Like the neural statistician, we also produce a summary statistic for each class. However, ours is a discriminative model, as befits our discriminative task of few-shot classification. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_29",
"text": " With respect to zero-shot learning, the use of embedded meta-data in prototypical networks resembles the method of in that both predict the weights of a linear classifier. The DS-SJE and DA-SJE approach of also learns deep multimodal embedding functions for images and class meta-data. Unlike ours, they learn using an empirical risk loss. Neither nor uses episodic training, which allows us to help speed up training and regularize the model. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_30",
"text": " We have proposed a simple method called prototypical networks for few-shot learning based on the idea that we can represent each class by the mean of its examples in a representation space learned by a neural network. We train these networks to specifically perform well in the few-shot setting by using episodic training. The approach is far simpler and more efficient than recent meta-learning approaches, and produces state-of-the-art results even without sophisticated extensions developed for matching networks (although these can be applied to prototypical nets as well). We show how performance can be greatly improved by carefully considering the chosen distance metric, and by modifying the episodic learning procedure. We further demonstrate how to generalize prototypical networks to the zero-shot setting, and achieve state-of-the-art results on the CUB-200 dataset. A natural direction for future work is to utilize Bregman divergences other than squared Euclidean distance, corresponding to class-conditional distributions beyond spherical Gaussians. We conducted preliminary explorations of this, including learning a variance per dimension for each class. This did not lead to any empirical gains, suggesting that the embedding network has enough flexibility on its own without requiring additional fitted parameters per class. Overall, the simplicity and effectiveness of prototypical networks makes it a promising approach for few-shot learning. ",
"title": "Prototypical Networks for Few-shot Learning"
},
{
"id": "1703.05175_all_31",
"text": " We would like to thank Marc Law, Sachin Ravi, Hugo Larochelle, Renjie Liao, and Oriol Vinyals for helpful discussions. This work was supported by the Samsung GRP project and the Canadian Institute for Advanced Research. ",
"title": "Prototypical Networks for Few-shot Learning"
}
] |
What metrics are used to compare the performance of ULMFiT against existing approaches?
|
For consistency, the authors reported all results as error rates where lower is better [40].
|
[
40
] |
[
{
"id": "1801.06146_all_0",
"text": " Inductive transfer learning has had a large impact on computer vision (CV). Applied CV models (including object detection, classification, and segmentation) are rarely trained from scratch, but instead are fine-tuned from models that have been pretrained on ImageNet, MS-COCO, and other datasets Sharif Razavian et al. (2014); Long et al. (2015a); He et al. (2016); Huang et al. (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_1",
"text": " Text classification is a category of Natural Language Processing (NLP) tasks with real-world applications such as spam, fraud, and bot detection Jindal and Liu (2007); Ngai et al. (2011); Chu et al. (2012), emergency response Caragea et al. (2011), and commercial document classification, such as for legal discovery Roitblat et al. (2010). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_2",
"text": " While Deep Learning models have achieved state-of-the-art on many NLP tasks, these models are trained from scratch, requiring large datasets, and days to converge. Research in NLP focused mostly on transductive transfer Blitzer et al. (2007). For inductive transfer, fine-tuning pretrained word embeddings Mikolov et al. (2013), a simple transfer technique that only targets a model’s first layer, has had a large impact in practice and is used in most state-of-the-art models. Recent approaches that concatenate embeddings derived from other tasks with the input at different layers Peters et al. (2017); McCann et al. (2017); Peters et al. (2018) still train the main task model from scratch and treat pretrained embeddings as fixed parameters, limiting their usefulness. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_3",
"text": " In light of the benefits of pretraining Erhan et al. (2010), we should be able to do better than randomly initializing the remaining parameters of our models. However, inductive transfer via fine-tuning has been unsuccessful for NLP Mou et al. (2016). Dai and Le (2015) first proposed fine-tuning a language model (LM) but require millions of in-domain documents to achieve good performance, which severely limits its applicability. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_4",
"text": " We show that not the idea of LM fine-tuning but our lack of knowledge of how to train them effectively has been hindering wider adoption. LMs overfit to small datasets and suffered catastrophic forgetting when fine-tuned with a classifier. Compared to CV, NLP models are typically more shallow and thus require different fine-tuning methods. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_5",
"text": " We propose a new method, Universal Language Model Fine-tuning (ULMFiT) that addresses these issues and enables robust inductive transfer learning for any NLP task, akin to fine-tuning ImageNet models: The same 3-layer LSTM architecture—with the same hyperparameters and no additions other than tuned dropout hyperparameters—outperforms highly engineered models and transfer learning approaches on six widely studied text classification tasks. On IMDb, with 100100100 labeled examples, ULMFiT matches the performance of training from scratch with 10×10\\times and—given 505050k unlabeled examples—with 100×100\\times more data. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_6",
"text": " Our contributions are the following: 1) We propose Universal Language Model Fine-tuning (ULMFiT), a method that can be used to achieve CV-like transfer learning for any task for NLP. 2) We propose discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing, novel techniques to retain previous knowledge and avoid catastrophic forgetting during fine-tuning. 3) We significantly outperform the state-of-the-art on six representative text classification datasets, with an error reduction of 18-24% on the majority of datasets. 4) We show that our method enables extremely sample-efficient transfer learning and perform an extensive ablation analysis. 5) We make the pretrained models and our code available to enable wider adoption. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_7",
"text": " Features in deep neural networks in CV have been observed to transition from general to task-specific from the first to the last layer Yosinski et al. (2014). For this reason, most work in CV focuses on transferring the first layers of the model Long et al. (2015b). Sharif Razavian et al. (2014) achieve state-of-the-art results using features of an ImageNet model as input to a simple classifier. In recent years, this approach has been superseded by fine-tuning either the last Donahue et al. (2014) or several of the last layers of a pretrained model and leaving the remaining layers frozen Long et al. (2015a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_8",
"text": " In NLP, only recently have methods been proposed that go beyond transferring word embeddings. The prevailing approach is to pretrain embeddings that capture additional context via other tasks. Embeddings at different levels are then used as features, concatenated either with the word embeddings or with the inputs at intermediate layers. This method is known as hypercolumns Hariharan et al. (2015) in CV333A hypercolumn at a pixel in CV is the vector of activations of all CNN units above that pixel. In analogy, a hypercolumn for a word or sentence in NLP is the concatenation of embeddings at different layers in a pretrained model. and is used by Peters et al. (2017), Peters et al. (2018), Wieting and Gimpel (2017), Conneau et al. (2017), and McCann et al. (2017) who use language modeling, paraphrasing, entailment, and Machine Translation (MT) respectively for pretraining. Specifically, Peters et al. (2018) require engineered custom architectures, while we show state-of-the-art performance with the same basic architecture across a range of tasks. In CV, hypercolumns have been nearly entirely superseded by end-to-end fine-tuning Long et al. (2015a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_9",
"text": " A related direction is multi-task learning (MTL) Caruana (1993). This is the approach taken by Rei (2017) and Liu et al. (2018) who add a language modeling objective to the model that is trained jointly with the main task model. MTL requires the tasks to be trained from scratch every time, which makes it inefficient and often requires careful weighting of the task-specific objective functions Chen et al. (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_10",
"text": " Fine-tuning has been used successfully to transfer between similar tasks, e.g. in QA Min et al. (2017), for distantly supervised sentiment analysis Severyn and Moschitti (2015), or MT domains Sennrich et al. (2015) but has been shown to fail between unrelated ones Mou et al. (2016). Dai and Le (2015) also fine-tune a language model, but overfit with 101010k labeled examples and require millions of in-domain documents for good performance. In contrast, ULMFiT leverages general-domain pretraining and novel fine-tuning techniques to prevent overfitting even with only 100100100 labeled examples and achieves state-of-the-art results also on small datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_11",
"text": " We are interested in the most general inductive transfer learning setting for NLP Pan and Yang (2010): Given a static source task 𝒯Ssubscript𝒯𝑆\\mathcal{T}_{S} and any target task 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T} with 𝒯S≠𝒯Tsubscript𝒯𝑆subscript𝒯𝑇\\mathcal{T}_{S}\\neq\\mathcal{T}_{T}, we would like to improve performance on 𝒯Tsubscript𝒯𝑇\\mathcal{T}_{T}. Language modeling can be seen as the ideal source task and a counterpart of ImageNet for NLP: It captures many facets of language relevant for downstream tasks, such as long-term dependencies Linzen et al. (2016), hierarchical relations Gulordava et al. (2018), and sentiment Radford et al. (2017). In contrast to tasks like MT McCann et al. (2017) and entailment Conneau et al. (2017), it provides data in near-unlimited quantities for most domains and languages. Additionally, a pretrained LM can be easily adapted to the idiosyncrasies of a target task, which we show significantly improves performance (see Section 5). Moreover, language modeling already is a key component of existing tasks such as MT and dialogue modeling. Formally, language modeling induces a hypothesis space ℋℋ\\mathcal{H} that should be useful for many other NLP tasks Vapnik and Kotz (1982); Baxter (2000). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_12",
"text": " We propose Universal Language Model Fine-tuning (ULMFiT), which pretrains a language model (LM) on a large general-domain corpus and fine-tunes it on the target task using novel techniques. The method is universal in the sense that it meets these practical criteria: 1) It works across tasks varying in document size, number, and label type; 2) it uses a single architecture and training process; 3) it requires no custom feature engineering or preprocessing; and 4) it does not require additional in-domain documents or labels. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_13",
"text": " In our experiments, we use the state-of-the-art language model AWD-LSTM Merity et al. (2017a), a regular LSTM (with no attention, short-cut connections, or other sophisticated additions) with various tuned dropout hyperparameters. Analogous to CV, we expect that downstream performance can be improved by using higher-performance language models in the future. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_14",
"text": " ULMFiT consists of the following steps, which we show in Figure 1: a) General-domain LM pretraining (§3.1); b) target task LM fine-tuning (§3.2); and c) target task classifier fine-tuning (§3.3). We discuss these in the following sections. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_15",
"text": " An ImageNet-like corpus for language should be large and capture general properties of language. We pretrain the language model on Wikitext-103 Merity et al. (2017b) consisting of 28,595 preprocessed Wikipedia articles and 103 million words. Pretraining is most beneficial for tasks with small datasets and enables generalization even with 100100100 labeled examples. We leave the exploration of more diverse pretraining corpora to future work, but expect that they would boost performance. While this stage is the most expensive, it only needs to be performed once and improves performance and convergence of downstream models. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_16",
"text": " No matter how diverse the general-domain data used for pretraining is, the data of the target task will likely come from a different distribution. We thus fine-tune the LM on data of the target task. Given a pretrained general-domain LM, this stage converges faster as it only needs to adapt to the idiosyncrasies of the target data, and it allows us to train a robust LM even for small datasets. We propose discriminative fine-tuning and slanted triangular learning rates for fine-tuning the LM, which we introduce in the following. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_17",
"text": " As different layers capture different types of information Yosinski et al. (2014), they should be fine-tuned to different extents. To this end, we propose a novel fine-tuning method, discriminative fine-tuning444 An unrelated method of the same name exists for deep Boltzmann machines Salakhutdinov and Hinton (2009).. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_18",
"text": " Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent (SGD) update of a model’s parameters θ𝜃\\theta at time step t𝑡t looks like the following Ruder (2016): θt=θt−1−η⋅∇θJ(θ)subscript𝜃𝑡subscript𝜃𝑡1⋅𝜂subscript∇𝜃𝐽𝜃\\theta_{t}=\\theta_{t-1}-\\eta\\cdot\\nabla_{\\theta}J(\\theta) (1) where η𝜂\\eta is the learning rate and ∇θJ(θ)subscript∇𝜃𝐽𝜃\\nabla_{\\theta}J(\\theta) is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters θ𝜃\\theta into {θ1,…,θL}superscript𝜃1…superscript𝜃𝐿\\{\\theta^{1},\\ldots,\\theta^{L}\\} where θlsuperscript𝜃𝑙\\theta^{l} contains the parameters of the model at the l𝑙l-th layer and L𝐿L is the number of layers of the model. Similarly, we obtain {η1,…,ηL}superscript𝜂1…superscript𝜂𝐿\\{\\eta^{1},\\ldots,\\eta^{L}\\} where ηlsuperscript𝜂𝑙\\eta^{l} is the learning rate of the l𝑙l-th layer. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_19",
"text": " The SGD update with discriminative fine-tuning is then the following: θtl=θt−1l−ηl⋅∇θlJ(θ)superscriptsubscript𝜃𝑡𝑙superscriptsubscript𝜃𝑡1𝑙⋅superscript𝜂𝑙subscript∇superscript𝜃𝑙𝐽𝜃\\theta_{t}^{l}=\\theta_{t-1}^{l}-\\eta^{l}\\cdot\\nabla_{\\theta^{l}}J(\\theta) (2) We empirically found it to work well to first choose the learning rate ηLsuperscript𝜂𝐿\\eta^{L} of the last layer by fine-tuning only the last layer and using ηl−1=ηl/2.6superscript𝜂𝑙1superscript𝜂𝑙2.6\\eta^{l-1}=\\eta^{l}/2.6 as the learning rate for lower layers. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
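A hedged PyTorch-style sketch of discriminative fine-tuning as described above: each layer receives its own learning rate, with the rate for lower layers divided by the suggested factor of 2.6. How layers are enumerated from a concrete model, and the toy model itself, are assumptions for illustration.

```python
import torch
import torch.nn as nn

def discriminative_param_groups(layers, lr_last, decay=2.6):
    """layers: modules ordered from first (lowest) to last (highest) layer.
    Returns optimizer param groups with eta^{l-1} = eta^l / decay."""
    groups, lr = [], lr_last
    for layer in reversed(layers):
        groups.append({"params": list(layer.parameters()), "lr": lr})
        lr /= decay
    return list(reversed(groups))

# toy 3-layer model: per-layer learning rates 0.01/2.6^2, 0.01/2.6, 0.01
layers = [nn.Linear(10, 10), nn.Linear(10, 10), nn.Linear(10, 2)]
opt = torch.optim.SGD(discriminative_param_groups(layers, lr_last=0.01), lr=0.01)
print([g["lr"] for g in opt.param_groups])
```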
{
"id": "1801.06146_all_20",
"text": " For adapting its parameters to task-specific features, we would like the model to quickly converge to a suitable region of the parameter space in the beginning of training and then refine its parameters. Using the same learning rate (LR) or an annealed learning rate throughout training is not the best way to achieve this behaviour. Instead, we propose slanted triangular learning rates (STLR), which first linearly increases the learning rate and then linearly decays it according to the following update schedule, which can be seen in Figure 2: cut=⌊T⋅cut_frac⌋p={t/cut,ift<cut1−t−cutcut⋅(1/cut_frac−1),otherwiseηt=ηmax⋅1+p⋅(ratio−1)ratio𝑐𝑢𝑡⋅𝑇𝑐𝑢𝑡_𝑓𝑟𝑎𝑐𝑝cases𝑡𝑐𝑢𝑡if𝑡𝑐𝑢𝑡1𝑡𝑐𝑢𝑡⋅𝑐𝑢𝑡1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐1otherwisesubscript𝜂𝑡⋅subscript𝜂𝑚𝑎𝑥1⋅𝑝𝑟𝑎𝑡𝑖𝑜1𝑟𝑎𝑡𝑖𝑜\\begin{split}cut&=\\lfloor T\\cdot cut\\_frac\\rfloor\\\\ p&=\\begin{cases}t/cut,&\\text{if}\\ t<cut\\\\ 1-\\frac{t-cut}{cut\\cdot(1/cut\\_frac-1)},&\\text{otherwise}\\end{cases}\\\\ \\eta_{t}&=\\eta_{max}\\cdot\\frac{1+p\\cdot(ratio-1)}{ratio}\\end{split} (3) where T𝑇T is the number of training iterations555In other words, the number of epochs times the number of updates per epoch., cut_frac𝑐𝑢𝑡_𝑓𝑟𝑎𝑐cut\\_frac is the fraction of iterations we increase the LR, cut𝑐𝑢𝑡cut is the iteration when we switch from increasing to decreasing the LR, p𝑝p is the fraction of the number of iterations we have increased or will decrease the LR respectively, ratio𝑟𝑎𝑡𝑖𝑜ratio specifies how much smaller the lowest LR is from the maximum LR ηmaxsubscript𝜂𝑚𝑎𝑥\\eta_{max}, and ηtsubscript𝜂𝑡\\eta_{t} is the learning rate at iteration t𝑡t. We generally use cut_frac=0.1𝑐𝑢𝑡_𝑓𝑟𝑎𝑐0.1cut\\_frac=0.1, ratio=32𝑟𝑎𝑡𝑖𝑜32ratio=32 and ηmax=0.01subscript𝜂𝑚𝑎𝑥0.01\\eta_{max}=0.01. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
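The STLR schedule in Equation (3) can be transcribed directly into a small Python function; the defaults below follow the values reported in the text.

```python
def slanted_triangular_lr(t, T, cut_frac=0.1, ratio=32, eta_max=0.01):
    """Learning rate at iteration t of T total iterations (Equation 3)."""
    cut = int(T * cut_frac)
    if t < cut:
        p = t / cut
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return eta_max * (1 + p * (ratio - 1)) / ratio

# short increase, long decay: peak at t = cut, lowest rate is eta_max / ratio
schedule = [slanted_triangular_lr(t, T=1000) for t in range(1000)]
assert max(schedule) <= 0.01 and schedule[0] < schedule[99] > schedule[999]
```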
{
"id": "1801.06146_all_21",
"text": " STLR modifies triangular learning rates Smith (2017) with a short increase and a long decay period, which we found key for good performance.666We also credit personal communication with the author. In Section 5, we compare against aggressive cosine annealing, a similar schedule that has recently been used to achieve state-of-the-art performance in CV Loshchilov and Hutter (2017).777While Loshchilov and Hutter (2017) use multiple annealing cycles, we generally found one cycle to work best. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_22",
"text": " Finally, for fine-tuning the classifier, we augment the pretrained language model with two additional linear blocks. Following standard practice for CV classifiers, each block uses batch normalization Ioffe and Szegedy (2015) and dropout, with ReLU activations for the intermediate layer and a softmax activation that outputs a probability distribution over target classes at the last layer. Note that the parameters in these task-specific classifier layers are the only ones that are learned from scratch. The first linear layer takes as the input the pooled last hidden layer states. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_23",
"text": " The signal in text classification tasks is often contained in a few words, which may occur anywhere in the document. As input documents can consist of hundreds of words, information may get lost if we only consider the last hidden state of the model. For this reason, we concatenate the hidden state at the last time step 𝐡Tsubscript𝐡𝑇\\mathbf{h}_{T} of the document with both the max-pooled and the mean-pooled representation of the hidden states over as many time steps as fit in GPU memory 𝐇={𝐡1,…,𝐡T}𝐇subscript𝐡1…subscript𝐡𝑇\\mathbf{H}=\\{\\mathbf{h}_{1},\\ldots,\\mathbf{h}_{T}\\}: 𝐡c=(𝐡T,𝚖𝚊𝚡𝚙𝚘𝚘𝚕(𝐇),𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕(𝐇))subscript𝐡𝑐subscript𝐡𝑇𝚖𝚊𝚡𝚙𝚘𝚘𝚕𝐇𝚖𝚎𝚊𝚗𝚙𝚘𝚘𝚕𝐇\\mathbf{h}_{c}=(\\mathbf{h}_{T},\\mathtt{maxpool}(\\mathbf{H}),\\mathtt{meanpool}(\\mathbf{H})) (4) where ()() is concatenation. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
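A short PyTorch sketch of the concat pooling in Equation (4), assuming `hidden_states` is a tensor of shape (batch, time, features) holding H = {h_1, ..., h_T}.

```python
import torch

def concat_pool(hidden_states):
    """hidden_states: (batch, T, d). Returns (batch, 3d): [h_T, maxpool(H), meanpool(H)]."""
    h_T = hidden_states[:, -1]                 # hidden state at the last time step
    h_max = hidden_states.max(dim=1).values    # element-wise max over time
    h_mean = hidden_states.mean(dim=1)         # element-wise mean over time
    return torch.cat([h_T, h_max, h_mean], dim=1)

print(concat_pool(torch.randn(4, 70, 400)).shape)  # torch.Size([4, 1200])
```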
{
"id": "1801.06146_all_24",
"text": " Fine-tuning the target classifier is the most critical part of the transfer learning method. Overly aggressive fine-tuning will cause catastrophic forgetting, eliminating the benefit of the information captured through language modeling; too cautious fine-tuning will lead to slow convergence (and resultant overfitting). Besides discriminative fine-tuning and triangular learning rates, we propose gradual unfreezing for fine-tuning the classifier. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_25",
"text": " Rather than fine-tuning all layers at once, which risks catastrophic forgetting, we propose to gradually unfreeze the model starting from the last layer as this contains the least general knowledge Yosinski et al. (2014): We first unfreeze the last layer and fine-tune all unfrozen layers for one epoch. We then unfreeze the next lower frozen layer and repeat, until we fine-tune all layers until convergence at the last iteration. This is similar to ‘chain-thaw’ Felbo et al. (2017), except that we add a layer at a time to the set of ‘thawed’ layers, rather than only training a single layer at a time. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
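A rough sketch of gradual unfreezing as described above: all layers start frozen, one additional layer (from the top) is unfrozen at each epoch, and all currently unfrozen layers are fine-tuned. The `train_one_epoch` callback and the flat layer list are assumptions for illustration, not part of the paper's released code.

```python
def gradual_unfreeze_training(layers, n_epochs, train_one_epoch):
    """layers: list of modules ordered from first to last layer."""
    # freeze everything first
    for layer in layers:
        for p in layer.parameters():
            p.requires_grad = False

    for epoch in range(n_epochs):
        # epoch 0 -> last layer only, epoch 1 -> last two layers, and so on
        n_unfrozen = min(epoch + 1, len(layers))
        for layer in layers[len(layers) - n_unfrozen:]:
            for p in layer.parameters():
                p.requires_grad = True
        train_one_epoch()   # fine-tune all currently unfrozen layers
```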
{
"id": "1801.06146_all_26",
"text": " While discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing all are beneficial on their own, we show in Section 5 that they complement each other and enable our method to perform well across diverse datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_27",
"text": " Language models are trained with backpropagation through time (BPTT) to enable gradient propagation for large input sequences. In order to make fine-tuning a classifier for large documents feasible, we propose BPTT for Text Classification (BPT3C): We divide the document into fixed-length batches of size b𝑏b. At the beginning of each batch, the model is initialized with the final state of the previous batch; we keep track of the hidden states for mean and max-pooling; gradients are back-propagated to the batches whose hidden states contributed to the final prediction. In practice, we use variable length backpropagation sequences Merity et al. (2017a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_28",
"text": " Similar to existing work Peters et al. (2017, 2018), we are not limited to fine-tuning a unidirectional language model. For all our experiments, we pretrain both a forward and a backward LM. We fine-tune a classifier for each LM independently using BPT3C and average the classifier predictions. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_29",
"text": " While our approach is equally applicable to sequence labeling tasks, we focus on text classification tasks in this work due to their important real-world applications. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_30",
"text": " We evaluate our method on six widely-studied datasets, with varying numbers of documents and varying document length, used by state-of-the-art text classification and transfer learning approaches Johnson and Zhang (2017); McCann et al. (2017) as instances of three common text classification tasks: sentiment analysis, question classification, and topic classification. We show the statistics for each dataset and task in Table 1. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_31",
"text": " For sentiment analysis, we evaluate our approach on the binary movie review IMDb dataset Maas et al. (2011) and on the binary and five-class version of the Yelp review dataset compiled by Zhang et al. (2015). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_32",
"text": " We use the six-class version of the small TREC dataset Voorhees and Tice (1999) dataset of open-domain, fact-based questions divided into broad semantic categories. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_33",
"text": " For topic classification, we evaluate on the large-scale AG news and DBpedia ontology datasets created by Zhang et al. (2015). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_34",
"text": " We use the same pre-processing as in earlier work Johnson and Zhang (2017); McCann et al. (2017). In addition, to allow the language model to capture aspects that might be relevant for classification, we add special tokens for upper-case words, elongation, and repetition. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_35",
"text": " We are interested in a model that performs robustly across a diverse set of tasks. To this end, if not mentioned otherwise, we use the same set of hyperparameters across tasks, which we tune on the IMDb validation set. We use the AWD-LSTM language model Merity et al. (2017a) with an embedding size of 400400400, 333 layers, 115011501150 hidden activations per layer, and a BPTT batch size of 707070. We apply dropout of 0.40.40.4 to layers, 0.30.30.3 to RNN layers, 0.40.40.4 to input embedding layers, 0.050.050.05 to embedding layers, and weight dropout of 0.50.50.5 to the RNN hidden-to-hidden matrix. The classifier has a hidden layer of size 505050. We use Adam with β1=0.7subscript𝛽10.7\\beta_{1}=0.7 instead of the default β1=0.9subscript𝛽10.9\\beta_{1}=0.9 and β2=0.99subscript𝛽20.99\\beta_{2}=0.99, similar to Dozat and Manning (2017). We use a batch size of 646464, a base learning rate of 0.0040.0040.004 and 0.010.010.01 for fine-tuning the LM and the classifier respectively, and tune the number of epochs on the validation set of each task888On small datasets such as TREC-6, we fine-tune the LM only for 151515 epochs without overfitting, while we can fine-tune longer on larger datasets. We found 505050 epochs to be a good default for fine-tuning the classifier.. We otherwise use the same practices used in Merity et al. (2017a). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_36",
"text": " For each task, we compare against the current state-of-the-art. For the IMDb and TREC-6 datasets, we compare against CoVe McCann et al. (2017), a state-of-the-art transfer learning method for NLP. For the AG, Yelp, and DBpedia datasets, we compare against the state-of-the-art text categorization method by Johnson and Zhang (2017). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_37",
"text": " For consistency, we report all results as error rates (lower is better). We show the test error rates on the IMDb and TREC-6 datasets used by McCann et al. (2017) in Table 2. Our method outperforms both CoVe, a state-of-the-art transfer learning method based on hypercolumns, as well as the state-of-the-art on both datasets. On IMDb, we reduce the error dramatically by 43.9% and 22% with regard to CoVe and the state-of-the-art respectively. This is promising as the existing state-of-the-art requires complex architectures Peters et al. (2018), multiple forms of attention McCann et al. (2017) and sophisticated embedding schemes Johnson and Zhang (2016), while our method employs a regular LSTM with dropout. We note that the language model fine-tuning approach of Dai and Le (2015) only achieves an error of 7.64 vs. 4.6 for our method on IMDb, demonstrating the benefit of transferring knowledge from a large ImageNet-like corpus using our fine-tuning techniques. IMDb in particular is reflective of real-world datasets: Its documents are generally a few paragraphs long—similar to emails (e.g for legal discovery) and online comments (e.g for community management); and sentiment analysis is similar to many commercial applications, e.g. product response tracking and support email routing. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_38",
"text": " On TREC-6, our improvement—similar as the improvements of state-of-the-art approaches—is not statistically significant, due to the small size of the 500-examples test set. Nevertheless, the competitive performance on TREC-6 demonstrates that our model performs well across different dataset sizes and can deal with examples that range from single sentences—in the case of TREC-6—to several paragraphs for IMDb. Note that despite pretraining on more than two orders of magnitude less data than the 7 million sentence pairs used by McCann et al. (2017), we consistently outperform their approach on both datasets. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_39",
"text": " We show the test error rates on the larger AG, DBpedia, Yelp-bi, and Yelp-full datasets in Table 3. Our method again outperforms the state-of-the-art significantly. On AG, we observe a similarly dramatic error reduction by 23.7% compared to the state-of-the-art. On DBpedia, Yelp-bi, and Yelp-full, we reduce the error by 4.8%, 18.2%, 2.0% respectively. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_40",
"text": " In order to assess the impact of each contribution, we perform a series of analyses and ablations. We run experiments on three corpora, IMDb, TREC-6, and AG that are representative of different tasks, genres, and sizes. For all experiments, we split off 10%percent1010\\% of the training set and report error rates on this validation set with unidirectional LMs. We fine-tune the classifier for 505050 epochs and train all methods but ULMFiT with early stopping. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_41",
"text": " One of the main benefits of transfer learning is being able to train a model for a task with a small number of labels. We evaluate ULMFiT on different numbers of labeled examples in two settings: only labeled examples are used for LM fine-tuning (‘supervised’); and all task data is available and can be used to fine-tune the LM (‘semi-supervised’). We compare ULMFiT to training from scratch—which is necessary for hypercolumn-based approaches. We split off balanced fractions of the training data, keep the validation set fixed, and use the same hyperparameters as before. We show the results in Figure 3. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_42",
"text": " On IMDb and AG, supervised ULMFiT with only 100100100 labeled examples matches the performance of training from scratch with 10×10\\times and 20×20\\times more data respectively, clearly demonstrating the benefit of general-domain LM pretraining. If we allow ULMFiT to also utilize unlabeled examples (505050k for IMDb, 100100100k for AG), at 100100100 labeled examples, we match the performance of training from scratch with 50×50\\times and 100×100\\times more data on AG and IMDb respectively. On TREC-6, ULMFiT significantly improves upon training from scratch; as examples are shorter and fewer, supervised and semi-supervised ULMFiT achieve similar results. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_43",
"text": " We compare using no pretraining with pretraining on WikiText-103 Merity et al. (2017b) in Table 4. Pretraining is most useful for small and medium-sized datasets, which are most common in commercial applications. However, even for large datasets, pretraining improves performance. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_44",
"text": " In order to gauge the importance of choosing an appropriate LM, we compare a vanilla LM with the same hyperparameters without any dropout999To avoid overfitting, we only train the vanilla LM classifier for 555 epochs and keep dropout of 0.40.40.4 in the classifier. with the AWD-LSTM LM with tuned dropout parameters in Table 5. Using our fine-tuning techniques, even a regular LM reaches surprisingly good performance on the larger datasets. On the smaller TREC-6, a vanilla LM without dropout runs the risk of overfitting, which decreases performance. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_45",
"text": " We compare no fine-tuning against fine-tuning the full model Erhan et al. (2010) (‘Full’), the most commonly used fine-tuning method, with and without discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’) in Table 6. Fine-tuning the LM is most beneficial for larger datasets. ‘Discr’ and ‘Stlr’ improve performance across all three datasets and are necessary on the smaller TREC-6, where regular fine-tuning is not beneficial. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_46",
"text": " We compare training from scratch, fine-tuning the full model (‘Full’), only fine-tuning the last layer (‘Last’) Donahue et al. (2014), ‘Chain-thaw’ Felbo et al. (2017), and gradual unfreezing (‘Freez’). We furthermore assess the importance of discriminative fine-tuning (‘Discr’) and slanted triangular learning rates (‘Stlr’). We compare the latter to an alternative, aggressive cosine annealing schedule (‘Cos’) Loshchilov and Hutter (2017). We use a learning rate ηL=0.01superscript𝜂𝐿0.01\\eta^{L}=0.01 for ‘Discr’, learning rates of 0.0010.0010.001 and 0.00010.00010.0001 for the last and all other layers respectively for ‘Chain-thaw’ as in Felbo et al. (2017), and a learning rate of 0.0010.0010.001 otherwise. We show the results in Table 7. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_47",
"text": " Fine-tuning the classifier significantly improves over training from scratch, particularly on the small TREC-6. ‘Last’, the standard fine-tuning method in CV, severely underfits and is never able to lower the training error to 00. ‘Chain-thaw’ achieves competitive performance on the smaller datasets, but is outperformed significantly on the large AG. ‘Freez’ provides similar performance as ‘Full’. ‘Discr’ consistently boosts the performance of ‘Full’ and ‘Freez’, except for the large AG. Cosine annealing is competitive with slanted triangular learning rates on large data, but under-performs on smaller datasets. Finally, full ULMFiT classifier fine-tuning (bottom row) achieves the best performance on IMDB and TREC-6 and competitive performance on AG. Importantly, ULMFiT is the only method that shows excellent performance across the board—and is therefore the only universal method. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_48",
"text": " While our results demonstrate that how we fine-tune the classifier makes a significant difference, fine-tuning for inductive transfer is currently under-explored in NLP as it mostly has been thought to be unhelpful Mou et al. (2016). To better understand the fine-tuning behavior of our model, we compare the validation error of the classifier fine-tuned with ULMFiT and ‘Full’ during training in Figure 4. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_49",
"text": " On all datasets, fine-tuning the full model leads to the lowest error comparatively early in training, e.g. already after the first epoch on IMDb. The error then increases as the model starts to overfit and knowledge captured through pretraining is lost. In contrast, ULMFiT is more stable and suffers from no such catastrophic forgetting; performance remains similar or improves until late epochs, which shows the positive effect of the learning rate schedule. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_50",
"text": " At the cost of training a second model, ensembling the predictions of a forward and backwards LM-classifier brings a performance boost of around 0.50.50.5–0.70.70.7. On IMDb we lower the test error from 5.305.305.30 of a single model to 4.584.584.58 for the bidirectional model. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_51",
"text": " While we have shown that ULMFiT can achieve state-of-the-art performance on widely used text classification tasks, we believe that language model fine-tuning will be particularly useful in the following settings compared to existing transfer learning approaches Conneau et al. (2017); McCann et al. (2017); Peters et al. (2018): a) NLP for non-English languages, where training data for supervised pretraining tasks is scarce; b) new NLP tasks where no state-of-the-art architecture exists; and c) tasks with limited amounts of labeled data (and some amounts of unlabeled data). ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_52",
"text": " Given that transfer learning and particularly fine-tuning for NLP is under-explored, many future directions are possible. One possible direction is to improve language model pretraining and fine-tuning and make them more scalable: for ImageNet, predicting far fewer classes only incurs a small performance drop Huh et al. (2016), while recent work shows that an alignment between source and target task label sets is important Mahajan et al. (2018)—focusing on predicting a subset of words such as the most frequent ones might retain most of the performance while speeding up training. Language modeling can also be augmented with additional tasks in a multi-task learning fashion Caruana (1993) or enriched with additional supervision, e.g. syntax-sensitive dependencies Linzen et al. (2016) to create a model that is more general or better suited for certain downstream tasks, ideally in a weakly-supervised manner to retain its universal properties. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_53",
"text": " Another direction is to apply the method to novel tasks and models. While an extension to sequence labeling is straightforward, other tasks with more complex interactions such as entailment or question answering may require novel ways to pretrain and fine-tune. Finally, while we have provided a series of analyses and ablations, more studies are required to better understand what knowledge a pretrained language model captures, how this changes during fine-tuning, and what information different tasks require. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
},
{
"id": "1801.06146_all_54",
"text": " We have proposed ULMFiT, an effective and extremely sample-efficient transfer learning method that can be applied to any NLP task. We have also proposed several novel fine-tuning techniques that in conjunction prevent catastrophic forgetting and enable robust learning across a diverse range of tasks. Our method significantly outperformed existing transfer learning techniques and the state-of-the-art on six representative text classification tasks. We hope that our results will catalyze new developments in transfer learning for NLP. ",
"title": "Universal Language Model Fine-tuning for Text Classification"
}
] |
What are the applications of dreambooth?
|
Applications of text-based image generation include recontextualization and manipulation of subjects, original art renditions, novel view synthesis, and much more [5].
|
[
5
] |
[
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requires synthesizing instances of specific subjects (e.g., objects, animals) in new contexts such that they naturally and seamlessly blend into the scene. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_1",
"text": " Recently developed large text-to-image models have shown unprecedented capabilities, by enabling high-quality and diverse synthesis of images based on a text prompt written in natural language (61, 54). One of the main advantages of such models is the strong semantic prior learned from a large collection of image-caption pairs. Such a prior learns, for instance, to bind the word “dog” with various instances of dogs that can appear in different poses and contexts in an image. While the synthesis capabilities of these models are unprecedented, they lack the ability to mimic the appearance of subjects in a given reference set, and synthesize novel renditions of the same subjects in different contexts. The main reason is that the expressiveness of their output domain is limited; even the most detailed textual description of an object may yield instances with different appearances. Furthermore, even models whose text embedding lies in a shared language-vision space cannot accurately reconstruct the appearance of given subjects but only create variations of the image content (Figure 2). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_2",
"text": " In this work, we present a new approach for “personalization” of text-to-image diffusion models (adapting them to user-specific image generation needs). Our goal is to expand the language-vision dictionary of the model such that it binds new words with specific subjects the user wants to generate. Once the new dictionary is embedded in the model, it can use these words to synthesize novel photorealistic images of the subject, contextualized in different scenes, while preserving their key identifying features. The effect is akin to a “magic photo booth”—once a few images of the subject are taken, the booth generates photos of the subject in different conditions and scenes, as guided by simple and intuitive text prompts (Figure 1). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_3",
"text": " More formally, given a few images of a subject (∼similar-to\\sim3-5), our objective is to implant the subject into the output domain of the model such that it can be synthesized with a unique identifier. To that end, we propose a technique to represent a given subject with rare token identifiers and fine-tune a pre-trained, diffusion-based text-to-image framework. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_4",
"text": " We fine-tune the text-to-image model with the input images and text prompts containing a unique identifier followed by the class name of the subject (e.g., “A (V) dog”). The latter enables the model to use its prior knowledge on the subject class while the class-specific instance is bound with the unique identifier. In order to prevent language drift (34, 40) that causes the model to associate the class name (e.g., “dog”) with the specific instance, we propose an autogenous, class-specific prior preservation loss, which leverages the semantic prior on the class that is embedded in the model, and encourages it to generate diverse instances of the same class as our subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_5",
"text": " We apply our approach to a myriad of text-based image generation applications including recontextualization of subjects, modification of their properties, original art renditions, and more, paving the way to a new stream of previously unassailable tasks. We highlight the contribution of each component in our method via ablation studies, and compare with alternative baselines and related work. We also conduct a user study to evaluate subject and prompt fidelity in our synthesized images, compared to alternative approaches. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_6",
"text": " To the best of our knowledge, ours is the first technique that tackles this new challenging problem of subject-driven generation, allowing users, from just a few casually captured images of a subject, synthesize novel renditions of the subject in different contexts while maintaining its distinctive features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_7",
"text": " To evaluate this new task, we also construct a new dataset that contains various subjects captured in different contexts, and propose a new evaluation protocol that measures the subject fidelity and prompt fidelity of the generated results. We make our dataset and evaluation protocol publicly available on the project webpage. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_8",
"text": " Image Composition. Image composition techniques (70, 13, 38) aim to clone a given subject into a new background such that the subject melds into the scene. To consider composition in novel poses, one may apply 3D reconstruction techniques (41, 6, 8, 68, 49) which usually works on rigid objects and require a larger number of views. Some drawbacks include scene integration (lighting, shadows, contact) and the inability to generate novel scenes. In contrast, our approach enable generation of subjects in novel poses and new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_9",
"text": " Text-to-Image Editing and Synthesis. Text-driven image manipulation has recently achieved significant progress using GANs (22, 9, 28, 29, 30) combined with image-text representations such as CLIP , yielding realistic manipulations using text (48, 21, 71, 2, 7, 43). These methods work well on structured scenarios (e.g. human face editing) and can struggle over diverse datasets where subjects are varied. Crowson et al. use VQ-GAN and train over more diverse data to alleviate this concern. Other works (4, 31) exploit the recent diffusion models (25, 63, 65, 25, 64, 58, 45, 66, 60, 62), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . While most works that require only text are limited to global editing (14, 33), Bar-Tal et al. proposed a text-based localized editing technique without using masks, showing impressive results. While most of these editing approaches allow modification of global properties or local editing of a given image, none enables generating novel renditions of a given subject in new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_10",
"text": " There also exists work on text-to-image synthesis (16, 24, 67, 35, 36, 50, 51, 55, 74, 14, 19, 58, 27). Recent large text-to-image models such as Imagen , DALL-E2 , Parti , CogView2 and Stable Diffusion demonstrated unprecedented semantic generation. These models do not provide fine-grained control over a generated image and use text guidance only. Specifically, it is challenging or impossible to preserve the identity of a subject consistently across synthesized images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_11",
"text": " Controllable Generative Models. There are various approaches to control generative models, where some of them might prove to be viable directions for subject-driven prompt-guided image synthesis. Liu et al. propose a diffusion-based technique allowing for image variations guided by reference image or text. To overcome subject modification, several works (44, 3) assume a user-provided mask to restrict the modified area. Inversion (12, 15, 54) can be used to preserve a subject while modifying context. Prompt-to-prompt allows for local and global editing without an input mask. These methods fall short of identity-preserving novel sample generation of a subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_12",
"text": " In the context of GANs, Pivotal Tuning allows for real image editing by finetuning the model with an inverted latent code anchor, and Nitzan et al. extended this work to GAN finetuning on faces to train a personalized prior, which requires around 100 images and are limited to the face domain. Casanova et al. propose an instance conditioned GAN that can generate variations of an instance, although it can struggle with unique subjects and does not preserve all subject details. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_13",
"text": " Finally, the concurrent work of Gal et al. proposes a method to represent visual concepts, like an object or a style, through new tokens in the embedding space of a frozen text-to-image model, resulting in small personalized token embeddings. While this method is limited by the expressiveness of the frozen diffusion model, our fine-tuning approach enables us to embed the subject within the model’s output domain, resulting in the generation of novel images of the subject which preserve its key visual features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_14",
"text": " Given only a few (typically 3-5) casually captured images of a specific subject, without any textual description, our objective is to generate new images of the subject with high detail fidelity and with variations guided by text prompts. Example variations include changing the subject location, changing subject properties such as color or shape, modifying the subject’s pose, viewpoint, and other semantic modifications. We do not impose any restrictions on input image capture settings and the subject image can have varying contexts. We next provide some background on text-to-image diffusion models (Sec. 3.1), then present our fine-tuning technique to bind a unique identifier with a subject described in a few images (Sec. 3.2), and finally propose a class-specific prior-preservation loss that enables us to overcome language drift in our fine-tuned model (Sec. 3.3). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_15",
"text": " Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, we are interested in a pre-trained text-to-image diffusion model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} that, given an initial noise map ϵ∼𝒩(𝟎,𝐈)similar-tobold-italic-ϵ𝒩0𝐈{\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and a conditioning vector 𝐜=Γ(𝐏)𝐜Γ𝐏\\mathbf{c}=\\Gamma(\\mathbf{P}) generated using a text encoder ΓΓ\\Gamma and a text prompt 𝐏𝐏\\mathbf{P}, generates an image 𝐱gen=𝐱^θ(ϵ,𝐜)subscript𝐱gensubscript^𝐱𝜃bold-italic-ϵ𝐜\\mathbf{x}_{\\text{gen}}=\\hat{\\mathbf{x}}_{\\theta}({\\bm{\\epsilon}},\\mathbf{c}). They are trained using a squared error loss to denoise a variably-noised image or latent code 𝐳t≔αt𝐱+σtϵ≔subscript𝐳𝑡subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ\\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows: 𝔼𝐱,𝐜,ϵ,t(wt‖𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱‖22)subscript𝔼𝐱𝐜bold-italic-ϵ𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptnormsubscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right) (1) where 𝐱𝐱\\mathbf{x} is the ground-truth image, 𝐜𝐜\\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), and αt,σt,wtsubscript𝛼𝑡subscript𝜎𝑡subscript𝑤𝑡\\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t∼𝒰((0,1))similar-to𝑡𝒰01t\\sim\\mathcal{U}((0,1)). A more detailed description is given in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
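The squared-error denoising objective in Eq. (1) above can be sketched in a few lines. The code below is an illustrative NumPy version under simplifying assumptions (a toy model callable and scalar noise-schedule terms alpha_t, sigma_t, w_t supplied by the caller); it is not the Imagen or Stable Diffusion training code.

```python
import numpy as np

def diffusion_denoising_loss(model, x, c, alpha_t, sigma_t, w_t, rng):
    """w_t * || model(alpha_t * x + sigma_t * eps, c) - x ||_2^2, a per-example form of Eq. (1).

    model: callable (noisy_x, c) -> predicted clean x (a stand-in for x_hat_theta).
    x: ground-truth image array; c: conditioning vector (e.g. from a text prompt).
    """
    eps = rng.normal(size=x.shape)        # eps ~ N(0, I)
    z_t = alpha_t * x + sigma_t * eps     # variably-noised input
    x_pred = model(z_t, c)
    return w_t * np.sum((x_pred - x) ** 2)

# Toy usage with an identity "model" that ignores the conditioning.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))
c = rng.normal(size=(16,))
print(diffusion_denoising_loss(lambda z, c: z, x, c, alpha_t=0.9, sigma_t=0.4, w_t=1.0, rng=rng))
```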
{
"id": "2208.12242_all_16",
"text": " Our first task is to implant the subject instance into the output domain of the model such that we can query the model for varied novel images of the subject. One natural idea is to fine-tune the model using the few-shot dataset of the subject. Careful care had to be taken when fine-tuning generative models such as GANs in a few-shot scenario as it can cause overfitting and mode-collapse - as well as not capturing the target distribution sufficiently well. There has been research on techniques to avoid these pitfalls (56, 47, 37, 42, 69), although, in contrast to our work, this line of work primarily seeks to generate images that resemble the target distribution but has no requirement of subject preservation. With regards to these pitfalls, we observe the peculiar finding that, given a careful fine-tuning setup using the diffusion loss from Eq 1, large text-to-image diffusion models seem to excel at integrating new information into their domain without forgetting the prior or overfitting to a small set of training images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_17",
"text": " Our goal is to “implant” a new (unique identifier, subject) pair into the diffusion model’s “dictionary” . In order to bypass the overhead of writing detailed image descriptions for a given image set we opt for a simpler approach and label all input images of the subject “a (identifier) (class noun)”, where (identifier) is a unique identifier linked to the subject and (class noun) is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). The class descriptor can be provided by the user or obtained using a classifier. We use a class descriptor in the sentence in order to tether the prior of the class to our unique subject and find that using a wrong class descriptor, or no class descriptor increases training time and language drift while decreasing performance. In essence, we seek to leverage the model’s prior of the specific class and entangle it with the embedding of our subject’s unique identifier so we can leverage the visual prior to generate new poses and articulations of the subject in different contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_18",
"text": " We generally find existing English words (e.g. “unique”, “special”) suboptimal since the model has to learn to disentangle them from their original meaning and to re-entangle them to reference our subject. This motivates the need for an identifier that has a weak prior in both the language model and the diffusion model. A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. We often find that these tokens incur the similar weaknesses as using common English words. Our approach is to find rare tokens in the vocabulary, and then invert these tokens into text space, in order to minimize the probability of the identifier having a strong prior. We perform a rare-token lookup in the vocabulary and obtain a sequence of rare token identifiers f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}), where f𝑓f is a tokenizer; a function that maps character sequences to tokens and 𝐕^^𝐕\\hat{\\mathbf{V}} is the decoded text stemming from the tokens f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}). The sequence can be of variable length k𝑘k, and find that relatively short sequences of k={1,…,3}𝑘1…3k=\\{1,...,3\\} work well. Then, by inverting the vocabulary using the de-tokenizer on f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}) we obtain a sequence of characters that define our unique identifier 𝐕^^𝐕\\hat{\\mathbf{V}}. For Imagen, we find that using uniform random sampling of tokens that correspond to 3 or fewer Unicode characters (without spaces) and using tokens in the T5-XXL tokenizer range of {5000,…,10000}5000…10000\\{5000,...,10000\\} works well. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
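The rare-token identifier selection described above (sample short, rare vocabulary tokens and de-tokenize them back into text) can be sketched as follows. The tokenizer interface and the filtering heuristic here are illustrative assumptions, not the paper's implementation; the paper reports using the T5-XXL tokenizer range {5000, ..., 10000} with Imagen.

```python
import random

def sample_rare_identifier(vocab, detokenize, k_max=3, token_range=(5000, 10000), seed=0):
    """Pick k <= k_max rare tokens by id and invert them into a text identifier.

    vocab: mapping token_id -> token string (illustrative stand-in for a real tokenizer vocab).
    detokenize: callable turning a list of token strings into text.
    """
    rng = random.Random(seed)
    lo, hi = token_range
    # Keep only short tokens (<= 3 characters, no spaces), mirroring the paper's heuristic.
    candidates = [t for t in range(lo, hi) if t in vocab and len(vocab[t]) <= 3 and " " not in vocab[t]]
    k = rng.randint(1, k_max)
    chosen = rng.sample(candidates, k)
    return detokenize([vocab[t] for t in chosen])

# Toy usage with a fake vocabulary; a real run would use the model's own tokenizer.
fake_vocab = {i: f"t{i % 97}" for i in range(5000, 10000)}
identifier = sample_rare_identifier(fake_vocab, detokenize="".join)
print(f"a {identifier} dog")  # prompt of the form "a (identifier) (class noun)"
```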
{
"id": "2208.12242_all_19",
"text": " In our experience, the best results for maximum subject fidelity are achieved by fine-tuning all layers of the model. This includes fine-tuning layers that are conditioned on the text embeddings, which gives rise to the problem of language drift. Language drift has been an observed problem in language models (34, 40), where a model that is pre-trained on a large text corpus and later fine-tuned for a specific task progressively loses syntactic and semantic knowledge of the language. To the best of our knowledge, we are the first to find a similar phenomenon affecting diffusion models, where to model slowly forgets how to generate subjects of the same class as the target subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_20",
"text": " Another problem is the possibility of reduced output diversity. Text-to-image diffusion models naturally posses high amounts of output diversity. When fine-tuning on a small set of images we would like to be able to generate the subject in novel viewpoints, poses and articulations. Yet, there is a risk of reducing the amount of variability in the output poses and views of the subject (e.g. snapping to the few-shot views). We observe that this is often the case, especially when the model is trained for too long. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_21",
"text": " To mitigate the two aforementioned issues, we propose an autogenous class-specific prior preservation loss that encourages diversity and counters language drift. In essence, our method is to supervise the model with its own generated samples, in order for it to retain the prior once the few-shot fine-tuning begins. This allows it to generate diverse images of the class prior, as well as retain knowledge about the class prior that it can use in conjunction with knowledge about the subject instance. Specifically, we generate data 𝐱pr=𝐱^(𝐳t1,𝐜pr)subscript𝐱pr^𝐱subscript𝐳subscript𝑡1subscript𝐜pr\\mathbf{x}_{\\text{pr}}=\\hat{\\mathbf{x}}(\\mathbf{z}_{t_{1}},\\mathbf{c}_{\\text{pr}}) by using the ancestral sampler on the frozen pre-trained diffusion model with random initial noise 𝐳t1∼𝒩(𝟎,𝐈)similar-tosubscript𝐳subscript𝑡1𝒩0𝐈\\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and conditioning vector 𝐜pr≔Γ(f(”a (class noun)”))≔subscript𝐜prΓ𝑓”a (class noun)”\\mathbf{c}_{\\text{pr}}\\coloneqq\\Gamma(f(\\text{\"a (class noun)\"})). The loss becomes: 𝔼𝐱,𝐜,ϵ,ϵ′,t(wt∥𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱∥22+λwt′∥𝐱^θ(αt′𝐱pr+σt′ϵ′,𝐜pr)−𝐱pr∥22),subscript𝔼𝐱𝐜bold-italic-ϵsuperscriptbold-italic-ϵ′𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22𝜆subscript𝑤superscript𝑡′subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼superscript𝑡′subscript𝐱prsubscript𝜎superscript𝑡′superscriptbold-italic-ϵ′subscript𝐜prsubscript𝐱pr22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},{\\bm{\\epsilon}}^{\\prime},t}(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}+\\\\ \\lambda w_{t^{\\prime}}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t^{\\prime}}\\mathbf{x}_{\\text{pr}}+\\sigma_{t^{\\prime}}{\\bm{\\epsilon}}^{\\prime},\\mathbf{c}_{\\text{pr}})-\\mathbf{x}_{\\text{pr}}\\|^{2}_{2}), (2) where the second term is the prior-preservation term that supervises the model with its own generated images, and λ𝜆\\lambda controls for the relative weight of this term. Figure 3 illustrates the model fine-tuning with the class-generated samples and prior-preservation loss. Despite being simple, we find this prior-preservation loss is effective in encouraging output diversity and in overcoming language-drift. We also find that we can train the model for more iterations without risking overfitting. We find that ∼similar-to\\sim 1000 iterations with λ=1𝜆1\\lambda=1 and learning rate 10−5superscript10510^{-5} for Imagen and 5×10−65superscript1065\\times 10^{-6} for Stable Diffusion , and with a subject dataset size of 3-5 images is enough to achieve good results. During this process, ∼1000similar-toabsent1000\\sim 1000 “a (class noun)” samples are generated - but less can be used. The training process takes about 5 minutes on one TPUv4 for Imagen, and 5 minutes on a NVIDIA A100 for Stable Diffusion. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
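The class-specific prior-preservation objective in Eq. (2) adds a second reconstruction term computed on the model's own class samples. Below is a minimal NumPy sketch under the same simplifying assumptions as the earlier loss sketch (toy model callable, scalar schedule terms supplied by the caller); lambda_weight corresponds to λ, and nothing here is the paper's actual training code.

```python
import numpy as np

def dreambooth_loss(model, x, c, x_pr, c_pr, alpha, sigma, w, alpha_p, sigma_p, w_p,
                    lambda_weight, rng):
    """Subject reconstruction term + lambda * prior-preservation term (per-example Eq. (2)).

    x, c: a subject image and its "a (V) (class noun)" conditioning.
    x_pr, c_pr: an image generated by the frozen model for "a (class noun)" and its conditioning.
    """
    eps, eps_p = rng.normal(size=x.shape), rng.normal(size=x_pr.shape)
    subject_term = w * np.sum((model(alpha * x + sigma * eps, c) - x) ** 2)
    prior_term = w_p * np.sum((model(alpha_p * x_pr + sigma_p * eps_p, c_pr) - x_pr) ** 2)
    return subject_term + lambda_weight * prior_term

# Toy usage; the paper reports lambda = 1 working well.
rng = np.random.default_rng(0)
x, x_pr = rng.normal(size=(8, 8, 3)), rng.normal(size=(8, 8, 3))
c, c_pr = rng.normal(size=(16,)), rng.normal(size=(16,))
print(dreambooth_loss(lambda z, c: z, x, c, x_pr, c_pr, 0.9, 0.4, 1.0, 0.9, 0.4, 1.0, 1.0, rng))
```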
{
"id": "2208.12242_all_22",
"text": " In this section, we show experiments and applications. Our method enables a large expanse of text-guided semantic modifications of our subject instances, including recontextualization, modification of subject properties such as material and species, art rendition, and viewpoint modification. Importantly, across all of these modifications, we are able to preserve the unique visual features that give the subject its identity and essence. If the task is recontextualization, then the subject features are unmodified, but appearance (e.g., pose) may change. If the task is a stronger semantic modification, such as crossing between our subject and another species/object, then the key features of the subject are preserved after modification. In this section, we reference the subject’s unique identifier using (V). We include specific Imagen and Stable Diffusion implementation details in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_23",
"text": " We collected a dataset of 30 subjects, including unique objects and pets such as backpacks, stuffed animals, dogs, cats, sunglasses, cartoons, etc. We separate each subject into two categories: objects and live subjects/pets. 21 of the 30 subjects are objects, and 9 are live subjects/pets. We provide one sample image for each of the subjects in Figure 5. Images for this dataset were collected by the authors or sourced from Unsplash . We also collected 25 prompts: 20 recontextualization prompts and 5 property modification prompts for objects; 10 recontextualization, 10 accessorization, and 5 property modification prompts for live subjects/pets. The full list of prompts can be found in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_24",
"text": " For the evaluation suite we generate four images per subject and per prompt, totaling 3,000 images. This allows us to robustly measure performances and generalization capabilities of a method. We make our dataset and evaluation protocol publicly available on the project webpage for future use in evaluating subject-driven generation. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_25",
"text": " One important aspect to evaluate is subject fidelity: the preservation of subject details in generated images. For this, we compute two metrics: CLIP-I and DINO . CLIP-I is the average pairwise cosine similarity between CLIP embeddings of generated and real images. Although this metric has been used in other work , it is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g. two different yellow clocks). Our proposed DINO metric is the average pairwise cosine similarity between the ViT-S/16 DINO embeddings of generated and real images. This is our preferred metric, since, by construction and in contrast to supervised networks, DINO is not trained to ignore differences between subjects of the same class. Instead, the self-supervised training objective encourages distinction of unique features of a subject or image. The second important aspect to evaluate is prompt fidelity, measured as the average cosine similarity between prompt and image CLIP embeddings. We denote this as CLIP-T. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
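Both subject-fidelity metrics above (CLIP-I and DINO) are average pairwise cosine similarities between embeddings of generated and real images, and CLIP-T is the cosine similarity between prompt and image embeddings. A minimal sketch of the pairwise part, assuming the embeddings have already been produced by whichever encoder is used (DINO ViT-S/16 or CLIP):

```python
import numpy as np

def avg_pairwise_cosine(gen_emb, real_emb):
    """Mean cosine similarity over all (generated, real) embedding pairs.

    gen_emb: (n_gen, d) array; real_emb: (n_real, d) array.
    """
    g = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    r = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    return float((g @ r.T).mean())

# Toy usage with random "embeddings"; real use would feed encoder outputs.
rng = np.random.default_rng(0)
print(avg_pairwise_cosine(rng.normal(size=(4, 384)), rng.normal(size=(5, 384))))
```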
{
"id": "2208.12242_all_26",
"text": " We compare our results with Textual Inversion, the recent concurrent work of Gal et al. , using the hyperparameters provided in their work. We find that this work is the only comparable work in the literature that is subject-driven, text-guided and generates novel images. We generate images for DreamBooth using Imagen, DreamBooth using Stable Diffusion and Textual Inversion using Stable Diffusion. We compute DINO and CLIP-I subject fidelity metrics and the CLIP-T prompt fidelity metric. In Table 1 we show sizeable gaps in both subject and prompt fidelity metrics for DreamBooth over Textual Inversion. We find that DreamBooth (Imagen) achieves higher scores for both subject and prompt fidelity than DreamBooth (Stable Diffusion), approaching the upper-bound of subject fidelity for real images. We believe that this is due to the larger expressive power and higher output quality of Imagen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_27",
"text": " Further, we compare Textual Inversion (Stable Diffusion) and DreamBooth (Stable Diffusion) by conducting a user study. For subject fidelity, we asked 72 users to answer questionnaires of 25 comparative questions (3 users per questionnaire), totaling 1800 answers. Samples are randomly selected from a large pool. Each question shows the set of real images for a subject, and one generated image of that subject by each method (with a random prompt). Users are asked to answer the question: “Which of the two images best reproduces the identity (e.g. item type and details) of the reference item?”, and we include a “Cannot Determine / Both Equally” option. Similarly for prompt fidelity, we ask “Which of the two images is best described by the reference text?”. We average results using majority voting and present them in Table 2. We find an overwhelming preference for DreamBooth for both subject fidelity and prompt fidelity. This shines a light on results in Table 1, where DINO differences of around 0.10.10.1 and CLIP-T differences of 0.050.050.05 are significant in terms of user preference. Finally, we show qualitative comparisons in Figure 4. We observe that DreamBooth better preserves subject identity, and is more faithful to prompts. We show samples of the user study in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_28",
"text": " We fine-tune Imagen on 15 subjects from our dataset, with and without our proposed prior preservation loss (PPL). The prior preservation loss seeks to combat language drift and preserve the prior. We compute a prior preservation metric (PRES) by computing the average pairwise DINO embeddings between generated images of random subjects of the prior class and real images of our specific subject. The higher this metric, the more similar random subjects of the class are to our specific subject, indicating collapse of the prior. We report results in Table 3 and observe that PPL substantially counteracts language drift and helps retain the ability to generate diverse images of the prior class. Additionally, we compute a diversity metric (DIV) using the average LPIPS cosine similarity between generated images of same subject with same prompt. We observe that our model trained with PPL achieves higher diversity (with slightly diminished subject fidelity), which can also be observed qualitatively in Figure 6, where our model trained with PPL overfits less to the environment of the reference images and can generate the dog in more diverse poses and articulations. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_29",
"text": " We finetune Imagen on a subset of our dataset subjects (5 subjects) with no class noun, a randomly sampled incorrect class noun, and the correct class noun. With the correct class noun for our subject, we are able to faithfully fit to the subject, take advantage of the class prior, allowing us to generate our subject in various contexts. When an incorrect class noun (e.g. “can” for a backpack) is used, we run into contention between our subject and and the class prior - sometimes obtaining cylindrical backpacks, or otherwise misshapen subjects. If we train with no class noun, the model does not leverage the class prior, has difficulty learning the subject and converging, and can generate erroneous samples. Subject fidelity results are shown in Table 4, with substantially higher subject fidelity for our proposed approach. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_30",
"text": " We can generate novel images for a specific subject in different contexts (Figure 7) with descriptive prompts (“a (V) (class noun) (context description)”). Importantly, we are able to generate the subject in new poses and articulations, with previously unseen scene structure and realistic integration of the subject in the scene (e.g. contact, shadows, reflections). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_31",
"text": " Given a prompt “a painting of a (V) (class noun) in the style of (famous painter)” or “a statue of a (V) (class noun) in the style of (famous sculptor)” we are able to generate artistic renditions of our subject. Unlike style transfer, where the source structure is preserved and only the style is transferred, we are able to generate meaningful, novel variations depending on the artistic style, while preserving subject identity. E.g, as shown in Figure 8, “Michelangelo”, we generated a pose that is novel and not seen in the input images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_32",
"text": " We are able to render the subject under novel viewpoints. In Figure 8, we generate new images of the input cat (with consistent complex fur patterns) under new viewpoints. We highlight that the model has not seen this specific cat from behind, below, or above - yet it is able to extrapolate knowledge from the class prior to generate these novel views given only 4 frontal images of the subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_33",
"text": " We are able to modify subject properties. For example, we show crosses between a specific Chow Chow dog and different animal species in the bottom row of Figure 8. We prompt the model with sentences of the following structure: “a cross of a (V) dog and a (target species)”. In particular, we can see in this example that the identity of the dog is well preserved even when the species changes - the face of the dog has certain unique features that are well preserved and melded with the target species. Other property modifications are possible, such as material modification (e.g. “a transparent (V) teapot” in Figure 7). Some are harder than others and depend on the prior of the base generation model. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_34",
"text": " We illustrate some failure models of our method in Figure 9. The first is related to not being able to accurately generate the prompted context. Possible reasons are a weak prior for these contexts, or difficulty in generating both the subject and specified concept together due to low probability of co-occurrence in the training set. The second is context-appearance entanglement, where the appearance of the subject changes due to the prompted context, exemplified in Figure 9 with color changes of the backpack. Third, we also observe overfitting to the real images that happen when the prompt is similar to the original setting in which the subject was seen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_35",
"text": " Other limitations are that some subjects are easier to learn than others (e.g. dogs and cats). Occasionally, with subjects that are rarer, the model is unable to support as many subject variations. Finally, there is also variability in the fidelity of the subject and some generated images might contain hallucinated subject features, depending on the strength of the model prior, and the complexity of the semantic modification. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_36",
"text": " We presented an approach for synthesizing novel renditions of a subject using a few images of the subject and the guidance of a text prompt. Our key idea is to embed a given subject instance in the output domain of a text-to-image diffusion model by binding the subject to a unique identifier. Remarkably - this fine-tuning process can work given only 3-5 subject images, making the technique particularly accessible. We demonstrated a variety of applications with animals and objects in generated photorealistic scenes, in most cases indistinguishable from real images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_37",
"text": " We thank Rinon Gal, Adi Zicher, Ron Mokady, Bill Freeman, Dilip Krishnan, Huiwen Chang and Daniel Cohen-Or for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained Imagen models. Finally, a special thanks to David Salesin for his feedback, advice and for his support for the project. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
}
] |
Can't we parallelize the RNN-layer approach in some way?
|
The hidden state at each input position depends on the previous hidden state, so RNNs cannot be parallelized across positions [1], whereas the Transformer is highly parallelizable thanks to its attention layers [36].
|
[
1,
36
] |
[
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5). Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures (38, 24, 15). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_1",
"text": " Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states htsubscriptℎ𝑡h_{t}, as a function of the previous hidden state ht−1subscriptℎ𝑡1h_{t-1} and the input for position t𝑡t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks and conditional computation , while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_2",
"text": " Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences (2, 19). In all but a few cases , however, such attention mechanisms are used in conjunction with a recurrent network. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_3",
"text": " In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_4",
"text": " The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU , ByteNet and ConvS2S , all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions . In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_5",
"text": " Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations (4, 27, 28, 22). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_6",
"text": " End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_7",
"text": " To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as (17, 18) and . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_8",
"text": " Most competitive neural sequence transduction models have an encoder-decoder structure (5, 2, 35). Here, the encoder maps an input sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to a sequence of continuous representations 𝐳=(z1,…,zn)𝐳subscript𝑧1…subscript𝑧𝑛\\mathbf{z}=(z_{1},...,z_{n}). Given 𝐳𝐳\\mathbf{z}, the decoder then generates an output sequence (y1,…,ym)subscript𝑦1…subscript𝑦𝑚(y_{1},...,y_{m}) of symbols one element at a time. At each step the model is auto-regressive , consuming the previously generated symbols as additional input when generating the next. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_9",
"text": " The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_10",
"text": " The encoder is composed of a stack of N=6𝑁6N=6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection around each of the two sub-layers, followed by layer normalization . That is, the output of each sub-layer is LayerNorm(x+Sublayer(x))LayerNorm𝑥Sublayer𝑥\\mathrm{LayerNorm}(x+\\mathrm{Sublayer}(x)), where Sublayer(x)Sublayer𝑥\\mathrm{Sublayer}(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel=512subscript𝑑model512d_{\\text{model}}=512. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_11",
"text": " The decoder is also composed of a stack of N=6𝑁6N=6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position i𝑖i can depend only on the known outputs at positions less than i𝑖i. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_12",
"text": " An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_13",
"text": " We call our particular attention \"Scaled Dot-Product Attention\" (Figure 2). The input consists of queries and keys of dimension dksubscript𝑑𝑘d_{k}, and values of dimension dvsubscript𝑑𝑣d_{v}. We compute the dot products of the query with all keys, divide each by dksubscript𝑑𝑘\\sqrt{d_{k}}, and apply a softmax function to obtain the weights on the values. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_14",
"text": " In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q𝑄Q. The keys and values are also packed together into matrices K𝐾K and V𝑉V. We compute the matrix of outputs as: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_15",
"text": " Attention(Q,K,V)=softmax(QKTdk)VAttention𝑄𝐾𝑉softmax𝑄superscript𝐾𝑇subscript𝑑𝑘𝑉\\mathrm{Attention}(Q,K,V)=\\mathrm{softmax}(\\frac{QK^{T}}{\\sqrt{d_{k}}})V (1) ",
"title": "Attention Is All You Need"
},
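The scaled dot-product attention of Eq. (1) above is straightforward to express in code. The sketch below is a minimal NumPy illustration, not the authors' implementation; the array shapes and the optional mask argument are assumptions for illustration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D arrays.

    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -- shapes are illustrative.
    mask: optional boolean array (n_q, n_k); True marks positions to block.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # compatibility of each query with each key
    if mask is not None:
        scores = np.where(mask, -1e9, scores)      # stands in for -inf before the softmax
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                             # weighted sum of the values

# Toy usage: 4 queries/keys of dimension 8, values of dimension 16.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```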
{
"id": "1706.03762_all_16",
"text": " The two most commonly used attention functions are additive attention , and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_17",
"text": " While for small values of dksubscript𝑑𝑘d_{k} the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dksubscript𝑑𝑘d_{k} . We suspect that for large values of dksubscript𝑑𝑘d_{k}, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients 111To illustrate why the dot products get large, assume that the components of q𝑞q and k𝑘k are independent random variables with mean 00 and variance 111. Then their dot product, q⋅k=∑i=1dkqiki⋅𝑞𝑘superscriptsubscript𝑖1subscript𝑑𝑘subscript𝑞𝑖subscript𝑘𝑖q\\cdot k=\\sum_{i=1}^{d_{k}}q_{i}k_{i}, has mean 00 and variance dksubscript𝑑𝑘d_{k}.. To counteract this effect, we scale the dot products by 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_18",
"text": " Instead of performing a single attention function with dmodelsubscript𝑑modeld_{\\text{model}}-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values hℎh times with different, learned linear projections to dksubscript𝑑𝑘d_{k}, dksubscript𝑑𝑘d_{k} and dvsubscript𝑑𝑣d_{v} dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dvsubscript𝑑𝑣d_{v}-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_19",
"text": " Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_20",
"text": " MultiHead(Q,K,V)MultiHead𝑄𝐾𝑉\\displaystyle\\mathrm{MultiHead}(Q,K,V) =Concat(head1,…,headh)WOabsentConcatsubscripthead1…subscriptheadhsuperscript𝑊𝑂\\displaystyle=\\mathrm{Concat}(\\mathrm{head_{1}},...,\\mathrm{head_{h}})W^{O} whereheadiwheresubscriptheadi\\displaystyle\\text{where}~{}\\mathrm{head_{i}} =Attention(QWiQ,KWiK,VWiV)absentAttention𝑄subscriptsuperscript𝑊𝑄𝑖𝐾subscriptsuperscript𝑊𝐾𝑖𝑉subscriptsuperscript𝑊𝑉𝑖\\displaystyle=\\mathrm{Attention}(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i}) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_21",
"text": " Where the projections are parameter matrices WiQ∈ℝdmodel×dksubscriptsuperscript𝑊𝑄𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑘W^{Q}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}, WiK∈ℝdmodel×dksubscriptsuperscript𝑊𝐾𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑘W^{K}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}, WiV∈ℝdmodel×dvsubscriptsuperscript𝑊𝑉𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑣W^{V}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{v}} and WO∈ℝhdv×dmodelsuperscript𝑊𝑂superscriptℝℎsubscript𝑑𝑣subscript𝑑modelW^{O}\\in\\mathbb{R}^{hd_{v}\\times d_{\\text{model}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_22",
"text": " In this work we employ h=8ℎ8h=8 parallel attention layers, or heads. For each of these we use dk=dv=dmodel/h=64subscript𝑑𝑘subscript𝑑𝑣subscript𝑑modelℎ64d_{k}=d_{v}=d_{\\text{model}}/h=64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. ",
"title": "Attention Is All You Need"
},
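Multi-head attention, as described in the passages above (h = 8, d_k = d_v = d_model/h = 64), amounts to h independently projected attentions whose outputs are concatenated and projected again. The NumPy sketch below follows that description; the random projection matrices and shapes are illustrative placeholders, not trained weights.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, h=8):
    """Self-attention over X (n, d_model) with h heads.

    Wq, Wk, Wv: lists of h projection matrices (d_model, d_k) / (d_model, d_v).
    Wo: output projection (h * d_v, d_model).
    """
    heads = []
    for i in range(h):
        Q, K, V = X @ Wq[i], X @ Wk[i], X @ Wv[i]
        d_k = Q.shape[-1]
        A = softmax(Q @ K.T / np.sqrt(d_k))       # per-head attention weights
        heads.append(A @ V)                       # (n, d_v) per head
    return np.concatenate(heads, axis=-1) @ Wo    # (n, d_model)

# Toy usage with d_model = 512, d_k = d_v = 64, h = 8, sequence length 5.
rng = np.random.default_rng(0)
n, d_model, h = 5, 512, 8
d_k = d_v = d_model // h
X = rng.normal(size=(n, d_model))
Wq = [rng.normal(size=(d_model, d_k)) * 0.02 for _ in range(h)]
Wk = [rng.normal(size=(d_model, d_k)) * 0.02 for _ in range(h)]
Wv = [rng.normal(size=(d_model, d_v)) * 0.02 for _ in range(h)]
Wo = rng.normal(size=(h * d_v, d_model)) * 0.02
print(multi_head_attention(X, Wq, Wk, Wv, Wo, h).shape)  # (5, 512)
```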
{
"id": "1706.03762_all_23",
"text": " The Transformer uses multi-head attention in three different ways: • In \"encoder-decoder attention\" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as (38, 2, 9). • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞-\\infty) all values in the input of the softmax which correspond to illegal connections. See Figure 2. ",
"title": "Attention Is All You Need"
},
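The decoder-side masking described above (blocking attention to subsequent positions by setting the corresponding logits to -inf before the softmax) can be expressed with a simple upper-triangular mask. A minimal sketch, assuming row i indexes queries and column j indexes keys:

```python
import numpy as np

def causal_mask(n):
    """Boolean (n, n) mask; True where position i must NOT attend to position j (j > i)."""
    return np.triu(np.ones((n, n), dtype=bool), k=1)

scores = np.random.default_rng(0).normal(size=(4, 4))
masked = np.where(causal_mask(4), -1e9, scores)  # -1e9 stands in for -inf before the softmax
print(masked)
```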
{
"id": "1706.03762_all_24",
"text": " In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_25",
"text": " FFN(x)=max(0,xW1+b1)W2+b2FFN𝑥0𝑥subscript𝑊1subscript𝑏1subscript𝑊2subscript𝑏2\\mathrm{FFN}(x)=\\max(0,xW_{1}+b_{1})W_{2}+b_{2} (2) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_26",
"text": " While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel=512subscript𝑑model512d_{\\text{model}}=512, and the inner-layer has dimensionality dff=2048subscript𝑑𝑓𝑓2048d_{ff}=2048. ",
"title": "Attention Is All You Need"
},
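The position-wise feed-forward network of Eq. (2) is two linear maps with a ReLU in between, applied identically at every position. Below is a minimal NumPy sketch with the stated sizes (d_model = 512, d_ff = 2048); the random weights are placeholders, not trained parameters.

```python
import numpy as np

def position_wise_ffn(X, W1, b1, W2, b2):
    """FFN(x) = max(0, x W1 + b1) W2 + b2, applied independently to each position."""
    return np.maximum(0.0, X @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
d_model, d_ff, n = 512, 2048, 10
X = rng.normal(size=(n, d_model))
W1, b1 = rng.normal(size=(d_model, d_ff)) * 0.02, np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)) * 0.02, np.zeros(d_model)
print(position_wise_ffn(X, W1, b1, W2, b2).shape)  # (10, 512)
```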
{
"id": "1706.03762_all_27",
"text": " Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodelsubscript𝑑modeld_{\\text{model}}. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to . In the embedding layers, we multiply those weights by dmodelsubscript𝑑model\\sqrt{d_{\\text{model}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_28",
"text": " Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add \"positional encodings\" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodelsubscript𝑑modeld_{\\text{model}} as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_29",
"text": " In this work, we use sine and cosine functions of different frequencies: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_30",
"text": " PE(pos,2i)=sin(pos/100002i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖𝑠𝑖𝑛𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i)}=sin(pos/10000^{2i/d_{\\text{model}}}) PE(pos,2i+1)=cos(pos/100002i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖1𝑐𝑜𝑠𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i+1)}=cos(pos/10000^{2i/d_{\\text{model}}}) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_31",
"text": " where pos𝑝𝑜𝑠pos is the position and i𝑖i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π2𝜋2\\pi to 10000⋅2π⋅100002𝜋10000\\cdot 2\\pi. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k𝑘k, PEpos+k𝑃subscript𝐸𝑝𝑜𝑠𝑘PE_{pos+k} can be represented as a linear function of PEpos𝑃subscript𝐸𝑝𝑜𝑠PE_{pos}. ",
"title": "Attention Is All You Need"
},
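The sinusoidal positional encodings above can be generated directly from the two formulas. A minimal sketch, assuming d_model is even:

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))."""
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # one frequency per sinusoid pair
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

print(sinusoidal_positional_encoding(max_len=50, d_model=512).shape)  # (50, 512)
```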
{
"id": "1706.03762_all_32",
"text": " We also experimented with using learned positional embeddings instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_33",
"text": " In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to another sequence of equal length (z1,…,zn)subscript𝑧1…subscript𝑧𝑛(z_{1},...,z_{n}), with xi,zi∈ℝdsubscript𝑥𝑖subscript𝑧𝑖superscriptℝ𝑑x_{i},z_{i}\\in\\mathbb{R}^{d}, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_34",
"text": " One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_35",
"text": " The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies . Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_36",
"text": " As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n)𝑂𝑛O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n𝑛n is smaller than the representation dimensionality d𝑑d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece and byte-pair representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r𝑟r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r)𝑂𝑛𝑟O(n/r). We plan to investigate this approach further in future work. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_37",
"text": " A single convolutional layer with kernel width k<n𝑘𝑛k<n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k)𝑂𝑛𝑘O(n/k) convolutional layers in the case of contiguous kernels, or O(logk(n))𝑂𝑙𝑜subscript𝑔𝑘𝑛O(log_{k}(n)) in the case of dilated convolutions , increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k𝑘k. Separable convolutions , however, decrease the complexity considerably, to O(k⋅n⋅d+n⋅d2)𝑂⋅𝑘𝑛𝑑⋅𝑛superscript𝑑2O(k\\cdot n\\cdot d+n\\cdot d^{2}). Even with k=n𝑘𝑛k=n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_38",
"text": " As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_39",
"text": " This section describes the training regime for our models. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_40",
"text": " We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding , which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary . Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens. ",
"title": "Attention Is All You Need"
},
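As a hedged illustration of the batching described above (sentence pairs grouped by approximate length, roughly 25000 source tokens per batch), one possible sketch; the exact batching code is not given in the passage and the function name is ours:

```python
def batch_by_tokens(pairs, max_tokens=25000):
    # pairs: list of (src_tokens, tgt_tokens); sort by source length, then
    # cut a new batch whenever the running source-token count would exceed
    # the cap, so each batch holds roughly max_tokens source tokens.
    pairs = sorted(pairs, key=lambda p: len(p[0]))
    batches, batch, n_tok = [], [], 0
    for src, tgt in pairs:
        if batch and n_tok + len(src) > max_tokens:
            batches.append(batch)
            batch, n_tok = [], 0
        batch.append((src, tgt))
        n_tok += len(src)
    if batch:
        batches.append(batch)
    return batches
```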
{
"id": "1706.03762_all_41",
"text": " We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models,(described on the bottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_42",
"text": " We used the Adam optimizer with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.98subscript𝛽20.98\\beta_{2}=0.98 and ϵ=10−9italic-ϵsuperscript109\\epsilon=10^{-9}. We varied the learning rate over the course of training, according to the formula: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_43",
"text": " lrate=dmodel−0.5⋅min(step_num−0.5,step_num⋅warmup_steps−1.5)𝑙𝑟𝑎𝑡𝑒⋅superscriptsubscript𝑑model0.5𝑠𝑡𝑒𝑝_𝑛𝑢superscript𝑚0.5⋅𝑠𝑡𝑒𝑝_𝑛𝑢𝑚𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝superscript𝑠1.5lrate=d_{\\text{model}}^{-0.5}\\cdot\\min({step\\_num}^{-0.5},{step\\_num}\\cdot{warmup\\_steps}^{-1.5}) (3) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_44",
"text": " This corresponds to increasing the learning rate linearly for the first warmup_steps𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠warmup\\_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps=4000𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠4000warmup\\_steps=4000. ",
"title": "Attention Is All You Need"
},
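Equation (3) above translates directly into code; the sketch below follows the formula as stated (warmup_steps = 4000), with only the function name being ours:

```python
def transformer_lrate(step: int, d_model: int = 512, warmup_steps: int = 4000) -> float:
    # Linear warm-up for the first warmup_steps steps, then 1/sqrt(step) decay.
    step = max(step, 1)  # guard against step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The peak learning rate is reached exactly at step == warmup_steps.
peak = transformer_lrate(4000)
```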
{
"id": "1706.03762_all_45",
"text": " We employ three types of regularization during training: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_46",
"text": " We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop=0.1subscript𝑃𝑑𝑟𝑜𝑝0.1P_{drop}=0.1. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_47",
"text": " During training, we employed label smoothing of value ϵls=0.1subscriptitalic-ϵ𝑙𝑠0.1\\epsilon_{ls}=0.1 . This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score. ",
"title": "Attention Is All You Need"
},
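A minimal sketch of label smoothing with ε_ls = 0.1 as described above, using the standard formulation (1 - ε on the gold token, ε spread over the remaining classes); this is not the authors' implementation and the function name is ours:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    # logits: (N, C), target: (N,) integer class ids.
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (n_classes - 1))   # mass on non-gold classes
    smooth.scatter_(-1, target.unsqueeze(-1), 1.0 - eps)          # mass on the gold class
    return -(smooth * log_probs).sum(dim=-1).mean()
```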
{
"id": "1706.03762_all_48",
"text": " On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.02.02.0 BLEU, establishing a new state-of-the-art BLEU score of 28.428.428.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.53.53.5 days on 888 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_49",
"text": " On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.041.041.0, outperforming all of the previously published single models, at less than 1/4141/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop=0.1subscript𝑃𝑑𝑟𝑜𝑝0.1P_{drop}=0.1, instead of 0.30.30.3. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_50",
"text": " For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 444 and length penalty α=0.6𝛼0.6\\alpha=0.6 . These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 505050, but terminate early when possible . ",
"title": "Attention Is All You Need"
},
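Checkpoint averaging, as described above (last 5 checkpoints for base, last 20 for big models), amounts to an element-wise mean of saved parameters. A sketch under the assumption that each checkpoint file is a plain PyTorch state_dict; names and file handling are ours:

```python
import torch

def average_checkpoints(paths):
    # Element-wise mean of the parameter tensors stored in the given checkpoints.
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}
```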
{
"id": "1706.03762_all_51",
"text": " Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU 222We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.. ",
"title": "Attention Is All You Need"
},
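The training-cost estimate described above is simple arithmetic; a sketch, where the example numbers (3.5 days, 8 GPUs, 9.5 TFLOPS for P100) come from the passages above and the ~2.3e19 figure is our own back-of-the-envelope result:

```python
def estimate_training_flops(train_seconds: float, n_gpus: int, tflops_per_gpu: float) -> float:
    # Wall-clock time x number of GPUs x assumed sustained throughput per GPU.
    return train_seconds * n_gpus * tflops_per_gpu * 1e12

# Big model: ~3.5 days on 8 P100s at an assumed 9.5 TFLOPS each -> ~2.3e19 FLOPs.
big_model_flops = estimate_training_flops(3.5 * 24 * 3600, 8, 9.5)
```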
{
"id": "1706.03762_all_52",
"text": " To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_53",
"text": " In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_54",
"text": " In Table 3 rows (B), we observe that reducing the attention key size dksubscript𝑑𝑘d_{k} hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings , and observe nearly identical results to the base model. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_55",
"text": " To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_56",
"text": " We trained a 4-layer transformer with dmodel=1024subscript𝑑𝑚𝑜𝑑𝑒𝑙1024d_{model}=1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank , about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkleyParser corpora from with approximately 17M sentences . We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_57",
"text": " We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model. During inference, we increased the maximum output length to input length + 300300300. We used a beam size of 212121 and α=0.3𝛼0.3\\alpha=0.3 for both WSJ only and the semi-supervised setting. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_58",
"text": " Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_59",
"text": " In contrast to RNN sequence-to-sequence models , the Transformer outperforms the BerkeleyParser even when training only on the WSJ training set of 40K sentences. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_60",
"text": " In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_61",
"text": " For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_62",
"text": " We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goals of ours. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_63",
"text": " The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_64",
"text": " We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration. ",
"title": "Attention Is All You Need"
}
] |
What can be the future work related to this paper?
|
In the future, our work can be extended to adapt our methods to further various multiple KGs with studies of appropriate scale for KG modularization [36].
|
[
36
] |
[
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap et al., 2019b), CommonsenseQA (Talmor et al., 2018), and PhysicalIQA (Bisk et al., 2020), each requiring different type of commonsense knowledge (e.g., social, taxonomic, causal, declarative, etc) to select the correct answer. While large-scale neural systems (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019b) have shown human-level accuracy on these benchmarks, recent studies (Mitra et al., 2019) also criticize that these models solve individual datasets, rather than learning how to perform general semantic reasoning. To this end, Ma et al. (2021) suggested zero-shot evaluation as a genuine measure for the reasoning capability of the machine. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_1",
"text": " Inspired by this new metric, in this work, we focus on building unsupervised zero-shot multiple-choice QA systems. That is, we target an arbitrary commonsense reasoning task where conventional approaches (that rely heavily on task-specific supervision) are not applicable to such zero-shot learning scenarios. To learn QA models without expensive annotation efforts, recent works (Ma et al., 2021; Banerjee and Baral, 2020; Malaviya et al., 2020) propose to generate a synthetic QA dataset using a commonsense KG such as ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017). Such an approach mostly focuses only on one specific type of reasoning relations (e.g., if-then relation, or declarative relation), neglecting the fact that real-world QA systems require simultaneously considering different types of reasoning abilities (e.g., declarative and social, or causal and physical reasoning; Ilievski et al., 2021; Chang et al., 2021). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_2",
"text": " To consider different types of reasoning, this paper extends ideas from the aforementioned zero-shot learning to the multi-source case such that it benefits from different types of commonsense knowledge on individual KGs. For example, ATOMIC (Sap et al., 2019a) focuses on social commonsense while ConceptNet (Speer et al., 2017) contains conceptual knowledge. A practical approach is multi-task learning (MTL; Caruana, 1997; Liu et al., 2019a), which learns a shared encoder for different synthetic QA datasets from multiple KGs. Despite its effectiveness, MTL scheme suffers from interference among different KGs, which results in forgetting previously learned knowledge when trained on new KG which has different kinds of knowledge (Pilault et al., 2021; Pfeiffer et al., 2021; Wang et al., 2021a; Wu et al., 2020). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_3",
"text": " To address these limitations, we propose a novel, modularized framework that aims to learn multiple expert models for KGs, then conduct zero-shot fusion to allow collaboration among KGs. For this purpose, we leverage AdapterFusion (Pfeiffer et al., 2021) where multiple tiny modules between Transformer blocks called adapters (Houlsby et al., 2019) can be combined after independent training, thus allowing a continual integration of the adapters without retraining the entire framework. Specifically, we treat the adapters as different KG-specific experts, and combine them using an attention-like fusion module. To improve the fusion of adapters, we suggest a KG-alignment adapter that guides to the apt expert adapters. Here, we use KGs in three different synthetic supervision training: (1) KG-specific QA datasets to train the KG-specific expert adapters, (2) a KG classification datasets to train the KG-alignment adapter, and (3) a balanced mixture of KG-specific QA datasets to train the fusion module. Our modularized method alleviates the interference between different KGs, which is the pitfall of MTL from our empirical observation, and thus combines multiple KGs into a synergetic zero-shot framework. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_4",
"text": " Our contributions are: (1) We suggest a simple, yet effective KG modularization strategy for the use of multiple KGs in commonsense reasoning. (2) We then explore the use of AdapterFusion (Pfeiffer et al., 2021) for better knowledge aggregation based on the KG modularization in zero-shot setting. We believe that such modularized transfer learning is critical to using different knowledge sources synergetically against interference between them. (3) In extensive experiments on various commonsense reasoning benchmarks, our framework achieves significant improvements over baselines using a single KG, even using multiple KGs, which implies the robustness in commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_5",
"text": " Many researchers have recently focused on building unsupervised models without any benchmark supervisions (i.e., zero-shot learning). In such zero-shot setting, KGs are often used as an external resource for improving model prior (e.g., continually learned from pre-trained language models) (Banerjee and Baral, 2020; Bosselut and Choi, 2019; Ma et al., 2021), especially for commonsense reasoning, as much existing work couples language models with neural/symbolic commonsense KGs. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_6",
"text": " However, most of existing work are either assuming the existence of the alignment information between tasks and KGs (Banerjee and Baral, 2020) or an integrated KG (Ma et al., 2021). For example, ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20} (Hwang et al., 2021), a commonsense KG which incorporates tuples from ConceptNet and ATOMIC with new relations and further crowdsourcing, combines multiple KGs into a new integrated KG, but as widely known (Ilievski et al., 2020; Hwang et al., 2021), heterogeneous schema between different KGs may limit triplets that can be integrated.111Only 172K tuples of the 3.4M tuples and 5 relations of 36 relations in ConceptNet are integrated into ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20}. Rather than such symbolic KG integration with the inevitable loss of knowledge, in this work, we explore the neural KG integration leveraging the multiple KGs without additional processing and alignment information between KG and task. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_7",
"text": " The idea of having specialized parameters, or so-called experts, has been widely studied to integrate multiple sources of knowledge via transfer learning. The adapter module (Rebuffi et al., 2017; Houlsby et al., 2019) has been explored as one of such approaches, introducing a small number of task-specific parameters at every layer of pre-trained language model (PLM) while sharing the parameters of underlying PLM which is fixed. To address the limitations of transfer learning due to high re-training cost, many works utilize the multiple adapter modules for individual tasks with different domains (Puigcerver et al., 2020; Bapna et al., 2019; Rücklé et al., 2020; Madotto et al., 2021) considering each adapter to be an expert of each domain. Similar to our work, K-Adapter (Wang et al., 2021a) encodes factual and linguistic knowledge to each adapter, but in this paper, we further explore how to mitigate catastrophic forgetting or interference among multiple adapters for better knowledge transfer in zero-shot setting. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_8",
"text": " MTL (Liu et al., 2019a; Zhang and Yang, 2017; Caruana, 1997) learns a shared representation while aggregating knowledge across multiple learning tasks, often leading to better generalization ability of a model. However, parametric aggregation of knowledge with MTL has following limitations: (1) retraining the full model when adding new tasks (Houlsby et al., 2019; Pfeiffer et al., 2021, 2020b) (2) catastrophic forgetting and interference between tasks leading to difficulties of solving each task equally well (Pilault et al., 2021; Wu et al., 2020; Yu et al., 2020) and (3) inconsistent effect (Lourie et al., 2021). To deal with these challenges, Mixture-of-Experts (MoE) is a parameterized generalization of ensembling techniques, which has been adapted for MTL with gating network trained to optimize each task (Ma et al., 2018). However, simple linear gating networks are too shallow and thus may destruct task knowledge for commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_9",
"text": " To address this problem, AdapterFusion (Pfeiffer et al., 2021) has been proposed to fuse task specific parameters called adapters for the given target task leveraging attention-like mechanism. AdapterFusion aggregates adapters, which is trained independently for each task, in a non-destructive manner mitigating aforementioned MTL problems such as forgetting and interference between tasks. Recently, it has been used for zero-shot cross-lingual transfer framework (Pfeiffer et al., 2020c; Wang et al., 2021b), which motivates our work to transfer multi-source knowledge with less interference for zero-shot commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_10",
"text": " In our setup, we repurpose synthetic QA generation (Ma et al., 2021) for the task of knowledge-driven zero-shot learning for commonsense reasoning, i.e., we transform a KG into multiple (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) pairs where Qisubscript𝑄𝑖Q_{i} is a natural language question and Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} is the set of options with m𝑚m answer candidates. Specifically, given a triple (ehead,r,etail)superscript𝑒ℎ𝑒𝑎𝑑𝑟superscript𝑒𝑡𝑎𝑖𝑙(e^{head},r,e^{tail}) in a KG, where eheadsuperscript𝑒ℎ𝑒𝑎𝑑e^{head}, etailsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and r𝑟r denote head/tail entity and relation respectively, we transform eheadsuperscript𝑒ℎ𝑒𝑎𝑑e^{head} and r𝑟r into a natural language question Qisubscript𝑄𝑖Q_{i} using templates. For the option set Aisubscript𝐴𝑖A_{i}, we use the combination of the correct answer etailsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and m−1𝑚1m-1 distractors which are tail entities from other triples sampled randomly (Ma et al., 2021). Details are described in Appendix B. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
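To make the synthetic QA construction above concrete, here is a hedged sketch; the template dictionary, option count, and field names are illustrative assumptions (the paper's actual per-relation templates are in its Appendix B):

```python
import random

def triple_to_qa(triple, all_tails, template, n_distractors=2):
    # triple: (head, relation, tail); template: hypothetical dict mapping a
    # relation to a question pattern containing "{head}".
    head, rel, tail = triple
    question = template[rel].format(head=head)
    distractors = random.sample([t for t in all_tails if t != tail], n_distractors)
    options = distractors + [tail]
    random.shuffle(options)
    return {"question": question, "options": options, "label": options.index(tail)}
```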
{
"id": "2206.03715_all_11",
"text": " First, we modularize the KGs to preserve their intrinsic knowledge. Considering the importance of using a suitable and well-aligned KG (Ma et al., 2019, 2021) on a downstream task, the subtle difference between each KG should be learned by the model without any interference from each other. Accordingly, we adopt the adapter module (Houlsby et al., 2019) which repurposes a pre-trained language model (PLM) to incorporate each KG as tiny modules in between Transformer blocks. Specifically, as illustrated in Figure 2 (except for green area), the adapter training strategy involves injecting new layers (parameterized by ΦΦ\\Phi) into the original PLM (parameterized by θ𝜃\\theta). The weights of the original PLM are untouched, while the new adapter layers are initialized at random. Formally, we call each adapter trained with 𝒟QAksubscriptsuperscript𝒟𝑘𝑄𝐴\\mbox{${\\cal D}$}^{k}_{QA} as an expert adapter for KG k𝑘k, parameterized by ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_12",
"text": " When a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) is given for dataset 𝒟QAksuperscriptsubscript𝒟𝑄𝐴𝑘\\mbox{${\\cal D}$}_{QA}^{k}, we first concatenate question Qisubscript𝑄𝑖Q_{i} and each answer option Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} to generate input sequences Ti={Ti,1,…,Ti,m}subscript𝑇𝑖subscript𝑇𝑖1…subscript𝑇𝑖𝑚T_{i}=\\{T_{i,1},...,T_{i,m}\\}. Then, we compute a score Si,jsubscript𝑆𝑖𝑗S_{i,j} (Ma et al., 2021) for the answer candidate Ai,jsubscript𝐴𝑖𝑗A_{i,j} is computed as follows: Si,j=−1|Ti,j|∑t=1|Ti,j|logP(wt|…wt−1,wt+1…;θ,Φ)subscript𝑆𝑖𝑗1subscript𝑇𝑖𝑗superscriptsubscript𝑡1subscript𝑇𝑖𝑗𝑙𝑜𝑔𝑃conditionalsubscript𝑤𝑡…subscript𝑤𝑡1subscript𝑤𝑡1…𝜃ΦS_{i,j}=-\\frac{1}{|T_{i,j}|}\\sum_{t=1}^{|T_{i,j}|}logP(w_{t}|...w_{t-1},w_{t+1}...;\\theta,\\Phi) (2) where wtsubscript𝑤𝑡w_{t} is a word token in the sequence Ti,jsubscript𝑇𝑖𝑗T_{i,j} and P𝑃P is the conditional probability from Transformer blocks parameterized by θ𝜃\\theta and ΦΦ\\Phi. To train the adapter ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, we use the marginal ranking loss (Ma et al., 2021) as follows: ℒQA=1m∑i=1Nk∑j=1j≠labelmmax(0,η−Si,label+Si,j)subscriptℒ𝑄𝐴1𝑚superscriptsubscript𝑖1subscript𝑁𝑘superscriptsubscript𝑗1𝑗𝑙𝑎𝑏𝑒𝑙𝑚𝑚𝑎𝑥0𝜂subscript𝑆𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑆𝑖𝑗\\mbox{${\\cal L}$}_{QA}=\\frac{1}{m}\\sum_{i=1}^{N_{k}}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq label\\end{subarray}}^{m}max(0,\\eta-S_{i,label}+S_{i,j}) (3) where η𝜂\\eta represents the margin. ΦQAk←argminΦℒQA(𝒟QAk;θ,Φ)←superscriptsubscriptΦ𝑄𝐴𝑘subscriptargminΦsubscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃Φ\\Phi_{QA}^{k}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\Phi) (4) where KG-invariant parameters θ𝜃\\theta are fixed and only KG-dependent parameters ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k} are learned, which enables to store the corresponding knowledge separately without any interference. Further, we can parallelize the training of adapter for all KGs. The efficiency of adapter training allows our modularization to be more scalable. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
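A minimal sketch of the marginal ranking loss of Eq. (3) as written above, applied to the per-option scores S_{i,j} of a single question; batching over the synthetic dataset and the score computation of Eq. (2) are omitted, and the function name is ours:

```python
import torch

def marginal_ranking_loss(scores: torch.Tensor, label: int, margin: float = 1.0) -> torch.Tensor:
    # scores: tensor of the m option scores S_{i,j}; implements Eq. (3) literally.
    hinge = torch.clamp(margin - scores[label] + scores, min=0.0)  # per-option hinge
    hinge[label] = 0.0                                             # the sum skips j == label
    return hinge.mean()                                            # 1/m factor

# usage: loss = marginal_ranking_loss(torch.tensor([2.3, 1.1, 2.9]), label=1)
```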
{
"id": "2206.03715_all_13",
"text": " Once the expert adapters are learned, we combine the knowledge from each expert adapter using an attention-like mechanism. We present a novel fusion strategy as shown in Figure 2, which is referred to as the zero-shot fusion. In contrast to AdapterFusion (Pfeiffer et al., 2021) where the focus is learning to transfer knowledge to a specific target task, our zero-shot fusion aims to generalize this transfer to any arbitrary target task. Specifically, the zero-shot fusion parameters ΨΨ\\Psi learn to combine fixed expert adapters which are parameterized by ΦQA1,…,ΦQAKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}. In each Transformer layer l𝑙l of PLM with the injected fusion layer, the zero-shot fusion parameters ΨQAsubscriptΨ𝑄𝐴\\Psi_{QA} consist of query, key, and value matrices, denoted by WlQsuperscriptsubscriptW𝑙𝑄\\textbf{W}_{l}^{Q}, WlKsuperscriptsubscriptW𝑙𝐾\\textbf{W}_{l}^{K}, and WlVsuperscriptsubscriptW𝑙𝑉\\textbf{W}_{l}^{V} respectively. These parameters are used to learn the balancing between the representation of each expert adapters through attention-like mechanism. While fixing both the parameters θ𝜃\\theta and all expert adapters ΦQA1,…,ΦQAKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}, the only trainable weights ΨQAsubscriptΨ𝑄𝐴\\Psi_{QA} on the fusion layer learns to combine the knowledge from different K𝐾K expert adapters by using the subset of {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K} by random sampling. Here, we balance the ratio between the K𝐾K knowledge-driven datasets as N𝑁N samples (details are in Appendix D). Formally, ΨQA←argminΨ∑k=1KℒQA(𝒟QAk;θ,{ΦQAk}k=1K,Ψ)←subscriptΨ𝑄𝐴subscriptargminΨsuperscriptsubscript𝑘1𝐾subscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃superscriptsubscriptsuperscriptsubscriptΦ𝑄𝐴𝑘𝑘1𝐾Ψ\\Psi_{QA}\\leftarrow\\operatorname*{argmin}_{\\Psi}\\sum_{k=1}^{K}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\{\\Phi_{QA}^{k}\\}_{k=1}^{K},\\Psi) (5) where ΨΨ\\Psi refers to the initialized zero-shot fusion parameters. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_14",
"text": " More specifically, in the l𝑙l-th Transformer layer, let hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} and hEk,lsuperscriptsubscriptℎ𝐸𝑘𝑙h_{E}^{k,l} be the representations of underlying PLM parameterized by θ𝜃\\theta and an expert adapter parameterized by ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, respectively. Then, using the hidden representation hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} of PLM as a query, the fusion layer performs the attention-like function as follows: Kl,VlsubscriptK𝑙subscriptV𝑙\\displaystyle\\textbf{K}_{l},\\textbf{V}_{l} =(hE1,l,…,hEK,l)absentsuperscriptsubscriptℎ𝐸1𝑙…superscriptsubscriptℎ𝐸𝐾𝑙\\displaystyle=(h_{E}^{1,l},...,h_{E}^{K,l}) (6) QlsubscriptQ𝑙\\displaystyle\\textbf{Q}_{l} =hPLMlabsentsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙\\displaystyle=h_{PLM}^{l} (7) zlsubscriptz𝑙\\displaystyle\\textbf{z}_{l} =Attention(QlWlQ,KlWlK,VlWlV)absentAttentionsubscriptQ𝑙superscriptsubscriptW𝑙𝑄subscriptK𝑙superscriptsubscriptW𝑙𝐾subscriptV𝑙superscriptsubscriptW𝑙𝑉\\displaystyle=\\text{Attention}(\\textbf{Q}_{l}\\textbf{W}_{l}^{Q},\\textbf{K}_{l}\\textbf{W}_{l}^{K},\\textbf{V}_{l}\\textbf{W}_{l}^{V}) (8) where zlsubscriptz𝑙\\textbf{z}_{l} is passed to the next Transformer layer. Given a sample, the zero-shot fusion learns the suitable balancing parameters between the expert adapters for zero-shot reasoning. Eventually, it learns to identify generalizability across commonsense reasoning tasks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
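A sketch of the attention-like fusion in Eqs. (6)-(8), written for a single token position with illustrative shapes (a query of size d and K expert outputs of size d each); the scaling by sqrt(d) is our own choice and this is not the authors' implementation:

```python
import torch

def fusion_layer(query, expert_outputs, W_q, W_k, W_v):
    # query: (d,) hidden state used as Q_l; expert_outputs: (K, d) adapter outputs
    # acting as keys and values; W_q, W_k, W_v: (d, d) learned fusion matrices.
    q = query @ W_q                                           # (d,)
    k = expert_outputs @ W_k                                  # (K, d)
    v = expert_outputs @ W_v                                  # (K, d)
    attn = torch.softmax(k @ q / q.shape[-1] ** 0.5, dim=0)   # (K,) weights over experts
    return attn @ v                                           # fused representation, (d,)
```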
{
"id": "2206.03715_all_15",
"text": " AdapterFusion uses the PLM hidden representation hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} as a query which is learned when training on a specific downstream task. In our zero-shot setting, however, we use a mixture of synthetic QA for fusion training, which is not exactly a training dataset for a downstream task. To compensate for this issue, we present KG-Classifier adapter, which is a KG alignment-aware adapter, which is motivated from the fact that the ability to find which KG has an alignment with the given sample can be helpful as a role of providing a guidance for better performance (Ma et al., 2019, 2021). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_16",
"text": " Specifically, we propose a novel training task for KG-Classifier adapter, which requires predicting the KG for the given sample of the task. For that, given {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}, we first transform a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) into a new KG classification sample (Qi;Ai,label)subscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙(Q_{i};A_{i,label}) where (;)(;) is the concatenation. Then, we obtain a new label yi∈{0,1}Ksubscript𝑦𝑖superscript01𝐾y_{i}\\in\\{0,1\\}^{K} indicating the corresponding KG source. The samples are in Appendix E. Formally, KG classification dataset 𝒟KGCsubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC} is defined as: 𝒟KGC={((Qi;Ai,label),yi)}i=1Msubscript𝒟𝐾𝐺𝐶superscriptsubscriptsubscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑦𝑖𝑖1𝑀\\mbox{${\\cal D}$}_{KGC}=\\{((Q_{i};A_{i,label}),y_{i})\\}_{i=1}^{M} (9) where M𝑀M is the total size of {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_17",
"text": " Based on 𝒟KGCsubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC}, we learn the KG-Classifier adapter parameterized by θ𝜃\\theta and ΦKGCsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC}. First, a classification sample i𝑖i is encoded into hCLS∈ℝHsubscriptℎ𝐶𝐿𝑆superscriptℝ𝐻h_{CLS}\\in\\mathbb{R}^{H} then scored as y^i∈ℝKsubscript^𝑦𝑖superscriptℝ𝐾\\hat{y}_{i}\\in\\mathbb{R}^{K} with a linear layer WKGC∈ℝK×Hsubscript𝑊𝐾𝐺𝐶superscriptℝ𝐾𝐻W_{KGC}\\in\\mathbb{R}^{K\\times H}, i.e., y^i=WKGChCLSsubscript^𝑦𝑖subscript𝑊𝐾𝐺𝐶subscriptℎ𝐶𝐿𝑆\\hat{y}_{i}=W_{KGC}h_{CLS}. Once y^isubscript^𝑦𝑖\\hat{y}_{i} is normalized by a softmax layer, the network is trained to minimize the cross-entropy loss ℒKGCsubscriptℒ𝐾𝐺𝐶\\mbox{${\\cal L}$}_{KGC} between the prediction y^isubscript^𝑦𝑖\\hat{y}_{i} and its ground truth yisubscript𝑦𝑖y_{i}: ΦKGC←argminΦ∑i=1MℒKGC(yi,y^i;θ,Φ)←subscriptΦ𝐾𝐺𝐶subscriptargminΦsuperscriptsubscript𝑖1𝑀subscriptℒ𝐾𝐺𝐶subscript𝑦𝑖subscript^𝑦𝑖𝜃Φ\\Phi_{KGC}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\sum_{i=1}^{M}\\mbox{${\\cal L}$}_{KGC}(y_{i},\\hat{y}_{i};\\theta,\\Phi) (10) ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
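A sketch of the KG-classification objective in Eqs. (9)-(10): a linear layer over the pooled representation of the concatenated (question; gold answer), trained with cross-entropy. Shapes and names are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def kg_classification_loss(h_cls: torch.Tensor, y_true: int, W_kgc: torch.Tensor) -> torch.Tensor:
    # h_cls: (H,) pooled encoding of "(question; gold answer)";
    # W_kgc: (K, H) linear layer over K knowledge graphs; y_true: source-KG index.
    logits = W_kgc @ h_cls                                    # (K,)
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([y_true]))
```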
{
"id": "2206.03715_all_18",
"text": " We propose to use the representation of KG-Classifier adapter as a query in attention-like mechanism, referred to as the zero-shot fusion with KG-Classifier adapter. That is, using the hidden representation hKGClsuperscriptsubscriptℎ𝐾𝐺𝐶𝑙h_{KGC}^{l} of a KG-Classifier adapter parameterized by ΦKGCsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC} as a query, we substitute QlsubscriptQ𝑙\\textbf{Q}_{l} in Eq. (11) as follows: Ql=hKGClsubscriptQ𝑙superscriptsubscriptℎ𝐾𝐺𝐶𝑙\\textbf{Q}_{l}=h_{KGC}^{l} (11) The overall zero-shot fusion architecture including KG-Classifier is illustrated in Figure 2. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_19",
"text": " In this section we evaluate the efficacy of our framework on five commonsense reasoning tasks. We denote KG-Classifier adapter by KG-C adapter. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_20",
"text": " All our experiments are conducted in a zero-shot setting, in which the models do not have access to the official training data or labels of the benchmark. For the evaluation, we use the validation set of each benchmark222Since the official test sets are not publicly available, however, the validation set of each benchmark can be role as an test set since it is not used for hyperparameter tuning or model selection. We use accuracy as a metric. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_21",
"text": " We evaluate our proposed framework on five question-answering benchmarks for commonsense reasoning: SocialIQA (SIQA) (Sap et al., 2019b), CommonsenseQA (CSQA) (Talmor et al., 2018), Abductive NLI (a-NLI) (Bhagavatula et al., 2020), PhysicalIQA (PIQA) (Bisk et al., 2020), and WinoGrande (WG) (Sakaguchi et al., 2020). Each commonsense benchmark evaluates a specific kind of knowledge: social commonsense for SIQA, concept-level commonsense for CSQA, abductive reasoning for a-NLI, physical commonsense for PIQA, and pronoun resolution ability for WG.333Some benchmarks have a strong alignment with a certain KG due to its construction strategy: SIQA-ATOMIC, and CSQA-ConceptNet. To make a direct comparison with Ma et al. (2021), we use the same KGs to generate data samples. The details are presented in Appendix G. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_22",
"text": " We compare our framework with the following baselines. First, to show the characteristics of each benchmark, we use the random or the most frequent label as Random and Majority baseline, respectively. RoBERTa-L and GPT2-L is the performance of each PLM without any fine-tuning. Also, as the baseline for the unsupervised learning model using KGs, we report the performance of Self-talk (Shwartz et al., 2020), COMET-DynaGen (Bosselut and Choi, 2019), SMLM (Banerjee and Baral, 2020) as presented in original papers. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_23",
"text": " For further analysis in §§\\S4.4 and §§\\S4.5, we set the following models that are pre-trained on the synthetic QA datasets from KGs as baselines: • Single-Task Learning (STL): The model is pre-trained on a synthetic QA dataset generated from a single KG. Specifically, we experiment two architectural choices: PLM (STL-PLM) and PLM with adapters (STL-Adapter). For each architecture, there are four STL models for each of synthetic QA datasets derived from ATOMIC, ConceptNet, WikiData, and WordNet. We note that the trained STL-Adapter is an expert adapter from a specific KG in our framework. The performance of each STL baseline is shown in Appendix I Table 9 and Table 10. • Multi-Task Learning (MTL): The model is pre-trained on multiple synthetic QA datasets, each of which is generated from a KG. We experiment with a PLM trained on all four aforementioned synthetic QA datasets. We note that the difference between STL-PLM and MTL is whether to use one synthetic QA dataset or multiple synthetic QA datasets for its training. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_24",
"text": " We employ RoBERTa-L (Liu et al., 2019b) from Hugging Face’s transformers toolkit for all experiments. We follow the default settings from Ma et al. (2021). Our implementation uses Adapter (Houlsby et al., 2019) and AdapterFusion (Pfeiffer et al., 2021) as a base model architecture from AdpaterHub (Pfeiffer et al., 2020a). We run our experiments with three different random seeds. The implementation details are described in Appendix H. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_25",
"text": " Table 2 shows the zero-shot evaluation results on five benchmark datasets. Generally, zero-shot fusion scores higher than the baselines across all benchmarks, and further, zero-shot fusion shows the best performance in all benchmarks except WG. We note that although Ma et al. (2021) uses the synthetic QA dataset after sample filtering, our method achieves comparable performance with the best performance in WG, even with the raw dataset. Also, the average score of all evaluation benchmarks (the last column of Table 2) shows that zero-shot fusion has generalisability in commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_26",
"text": " In addition, zero-shot fusion achieves consistent improvements over MTL. These results indicate that our proposed zero-shot fusion method attributes to fusing the knowledge of multiple KGs more synergetically regardless of the task. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_27",
"text": " Moreover, as an ablation, we compare the zero-shot fusion with and without KG-C adapter to explore the efficacy of the KG-C adapter. We can observe that zero-shot fusion with KG-C adapter improves the average accuracy by 0.4%, which implies that the use of KG-C adapter improves the overall performance and makes our method generalize better on most of the evaluation benchmarks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_28",
"text": " To assess the effects of the KG-C adapter itself, we visualize and compare the final layer (CLS) token representation between PLM and KG-C adapter. Figure 3 shows t-SNE (Van der Maaten and Hinton, 2008) plots of all representation of five benchmark datasets. In this figure, every sample is mapped into a 1024-dimensional feature space through RoBERTa-L model and projected back into a two-dimensional plane by t-SNE. We can observe that KG-C adapter can separate the samples of different benchmarks well despite being unseen data. It verifies that KG-awareness acquired with the KG classification task is beneficial to categorize the given sample. The KG-C adapter can thus generate a relevant KG-aware query for a given sample and help to fuse representations from suitable expert adapters in our proposed framework. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_29",
"text": " Further, we explore how the KG-C adapter affects zero-shot fusion which is based on an attention-like mechanism (Pfeiffer et al., 2021) compared to zero-shot fusion without KG-C adapter. Here, while zero-shot fusion without KG-C adapter simply uses the representation of PLM as a query, zero-shot fusion with KG-C adapter leverages the representation of KG-C adapter. To illustrate this strength, we visualize the attention probability of (CLS) token from each fusion layer as a representative in Figure 4. The column of the darker cell indicates the adapter that has the bigger influence on the fused representation. We can observe that zero-shot fusion with KG-C adapter fuses the knowledge from different experts with a subtle difference rather than focusing on a single expert severely. This implies that KG-C adapter enables the delicate balancing between multiple knowledge sources based on the KG-alignment awareness, which leads to performance improvements in commonsense reasoning tasks. Interestingly, both cases have the ability not to focus on the expert adapter based on WikiData, which can be seen as a redundant expert.444The zero-shot fusion with KG-C adapter using AT, CN, and WN shows the best average performance in Table 10. This observation would benefit from the further study that explores the optimal combination of KGs by expert selection or rejection. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_30",
"text": " In this experiment, we compare the amount of interference in the MTL and zero-shot fusion with KG-C adapter. We propose a novel evaluation metric, the interference ratio, which is the percentage of the incorrectly predicted samples by the multi-KG models among the correctly predicted samples from the STL models in common. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
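The interference ratio defined above can be sketched as set arithmetic over prediction outcomes; the function and argument names are ours:

```python
def interference_ratio(stl_correct_sets, multi_kg_correct):
    # stl_correct_sets: iterable of sets of sample ids each single-KG (STL) model
    # answers correctly; multi_kg_correct: set of ids the multi-KG model answers
    # correctly. Returns the fraction of commonly-correct samples the multi-KG
    # model gets wrong.
    common = set.intersection(*stl_correct_sets)
    if not common:
        return 0.0
    return len(common - multi_kg_correct) / len(common)
```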
{
"id": "2206.03715_all_31",
"text": " Using the interference ratio, we can precisely compare the negative effects of multi-KG models on knowledge aggregation since the only reason to get the correct samples wrong is the interference caused by learning with additional KGs. We present the interference ratio of the models on five benchmark datasets in Figure 5. This figure shows that MTL has the higher interference ratio than the competing models across all benchmarks. Our method achieves a substantially better ratio, especially when KG-C adapter is used. This demonstrates the efficacy of our framework in mitigating interference between knowledge, which is one of the major problems of MTL. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_32",
"text": " To verify the ability of our model to aggregate different types of KGs, we compare the relative performance gains of MTL and zero-shot fusion with KG-C adapter when increasing the number of KGs. The performance of all KG-combinations for each framework is presented in Table 9 and Table 10. We visualize the improvement of performance for five benchmark development sets, leveraging heatmaps in Figure 6. Here, for the sake of brevity, we denote our framework with KG-C adapter as our method. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_33",
"text": " For MTL in Figure 6 (a), the color of the cell denotes the relative improvement of MTL with the combination of KGs over the best performance among the STL-PLM of KGs. Also, for our method in Figure 6 (b), the relative improvement is measured based on the best performance among the STL-Adapter of KGs, considering the difference of the base architecture for MTL (i.e. PLM) and zero-shot fusion (i.e. PLM with adapter). The green and red colors denote the increase and decrease of performance, respectively, when using multiple KGs together. The greener color on the cells indicates that the approach benefits from an increasing number of KGs, which implies aggregating knowledge successfully. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_34",
"text": " In Figure 6, while the MTL tends to show the decrease of the performance when more KGs are utilized for training, our method obtains relative performance improvement across most of benchmarks. In both framework, the slightly degraded performance of the combination of KGs without ATOMIC could be due to the strong alignment between ATOMIC and SIQA. Except for the above case, we can observe that as more KGs are leveraged, the color of the cell gets greener, which implies that our method gains more advantages for better performance. This demonstrates that our method enables knowledge aggregation for multiple KGs synergetically. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_35",
"text": " Despite the existence of various types of commonsense KGs, utilizing multiple KGs has not been explored enough in the commonsense reasoning field. Motivated by this, this paper proposes a modularized transfer learning framework to fuse the knowledge from multiple KGs efficiently for zero-shot commonsense reasoning. Our framework consists of KG modularization for expert adapter, zero-shot fusion and KG-Classifier adapter. Extensive experiments show that our framework obtains strong improvements over MTL on five commonsense reasoning benchmarks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_36",
"text": " In the future, our work can be extended to adapt our methods to further various multiple KGs with studies of appropriate scale for KG modularization. In addition, based on our hypothesis that the existence of an optimal combination, we can explore the study for the optional use of modularized KG experts for the best transfer learning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
}
] |
How did adding the segment branch affect the results?
|
Adding a segment branch improves the AP; additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation [2].
|
[
2
] |
[
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation, respectively. These methods are conceptually intuitive and offer flexibility and robustness, together with fast training and inference time. Our goal in this work is to develop a comparably enabling framework for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_1",
"text": " Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance. It therefore combines elements from the classical computer vision tasks of object detection, where the goal is to classify individual objects and localize each using a bounding box, and semantic segmentation, where the goal is to classify each pixel into a fixed set of categories without differentiating object instances.111Following common terminology, we use object detection to denote detection via bounding boxes, not masks, and semantic segmentation to denote per-pixel classification without differentiating instances. Yet we note that instance segmentation is both semantic and a form of detection. Given this, one might expect a complex method is required to achieve good results. However, we show that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_2",
"text": " Our method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression (Figure 1). The mask branch is a small FCN applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to implement and train given the Faster R-CNN framework, which facilitates a wide range of flexible architecture designs. Additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_3",
"text": " In principle Mask R-CNN is an intuitive extension of Faster R-CNN, yet constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool (18, 12), the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, we propose a simple, quantization-free layer, called RoIAlign, that faithfully preserves exact spatial locations. Despite being a seemingly minor change, RoIAlign has a large impact: it improves mask accuracy by relative 10% to 50%, showing bigger gains under stricter localization metrics. Second, we found it essential to decouple mask and class prediction: we predict a binary mask for each class independently, without competition among classes, and rely on the network’s RoI classification branch to predict the category. In contrast, FCNs usually perform per-pixel multi-class categorization, which couples segmentation and classification, and based on our experiments works poorly for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_4",
"text": " Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO instance segmentation task , including the heavily-engineered entries from the 2016 competition winner. As a by-product, our method also excels on the COCO object detection task. In ablation experiments, we evaluate multiple basic instantiations, which allows us to demonstrate its robustness and analyze the effects of core factors. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_5",
"text": " Our models can run at about 200ms per frame on a GPU, and training on COCO takes one to two days on a single 8-GPU machine. We believe the fast train and test speeds, together with the framework’s flexibility and accuracy, will benefit and ease future research on instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_6",
"text": " Finally, we showcase the generality of our framework via the task of human pose estimation on the COCO keypoint dataset . By viewing each keypoint as a one-hot binary mask, with minimal modification Mask R-CNN can be applied to detect instance-specific poses. Mask R-CNN surpasses the winner of the 2016 COCO keypoint competition, and at the same time runs at 5 fps. Mask R-CNN, therefore, can be seen more broadly as a flexible framework for instance-level recognition and can be readily extended to more complex tasks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_7",
"text": " We have released code to facilitate future research. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_8",
"text": " The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions (42, 20) and evaluate convolutional networks (25, 24) independently on each RoI. R-CNN was extended (18, 12) to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements (e.g., (38, 27, 21)), and is the current leading framework in several benchmarks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_9",
"text": " Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods (13, 15, 16, 9) resorted to bottom-up segments (42, 2). DeepMask and following works (34, 8) learn to propose segment candidates, which are then classified by Fast R-CNN. In these methods, segmentation precedes recognition, which is slow and less accurate. Likewise, Dai et al. proposed a complex multiple-stage cascade that predicts segment proposals from bounding-box proposals, followed by classification. Instead, our method is based on parallel prediction of masks and class labels, which is simpler and more flexible. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_10",
"text": " Most recently, Li et al. combined the segment proposal system in and object detection system in for “fully convolutional instance segmentation” (FCIS). The common idea in (8, 11, 26) is to predict a set of position-sensitive output channels fully convolutionally. These channels simultaneously address object classes, boxes, and masks, making the system fast. But FCIS exhibits systematic errors on overlapping instances and creates spurious edges (Figure 6), showing that it is challenged by the fundamental difficulties of segmenting instances. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_11",
"text": " Another family of solutions (23, 4, 3, 29) to instance segmentation are driven by the success of semantic segmentation. Starting from per-pixel classification results (e.g., FCN outputs), these methods attempt to cut the pixels of the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. We expect a deeper incorporation of both strategies will be studied in the future. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_12",
"text": " Mask R-CNN is conceptually simple: Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset; to this we add a third branch that outputs the object mask. Mask R-CNN is thus a natural and intuitive idea. But the additional mask output is distinct from the class and box outputs, requiring extraction of much finer spatial layout of an object. Next, we introduce the key elements of Mask R-CNN, including pixel-to-pixel alignment, which is the main missing piece of Fast/Faster R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_13",
"text": " We begin by briefly reviewing the Faster R-CNN detector . Faster R-CNN consists of two stages. The first stage, called a Region Proposal Network (RPN), proposes candidate object bounding boxes. The second stage, which is in essence Fast R-CNN , extracts features using RoIPool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference. We refer readers to for latest, comprehensive comparisons between Faster R-CNN and other frameworks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_14",
"text": " Mask R-CNN adopts the same two-stage procedure, with an identical first stage (which is RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions (e.g. (33, 10, 26)). Our approach follows the spirit of Fast R-CNN that applies bounding-box classification and regression in parallel (which turned out to largely simplify the multi-stage pipeline of original R-CNN ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_15",
"text": " Formally, during training, we define a multi-task loss on each sampled RoI as L=Lcls+Lbox+Lmask𝐿subscript𝐿𝑐𝑙𝑠subscript𝐿𝑏𝑜𝑥subscript𝐿𝑚𝑎𝑠𝑘L=L_{cls}+L_{box}+L_{mask}. The classification loss Lclssubscript𝐿𝑐𝑙𝑠L_{cls} and bounding-box loss Lboxsubscript𝐿𝑏𝑜𝑥L_{box} are identical as those defined in . The mask branch has a Km2𝐾superscript𝑚2Km^{2}-dimensional output for each RoI, which encodes K𝐾K binary masks of resolution m×m𝑚𝑚m\\times m, one for each of the K𝐾K classes. To this we apply a per-pixel sigmoid, and define Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} as the average binary cross-entropy loss. For an RoI associated with ground-truth class k𝑘k, Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is only defined on the k𝑘k-th mask (other mask outputs do not contribute to the loss). ",
"title": "Mask R-CNN"
},
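To make the mask loss above concrete, here is a minimal NumPy sketch (not the authors' implementation; the helper name mask_loss, the array shapes, and the toy sizes are assumptions): a per-pixel sigmoid is applied to the K m×m mask logits of one RoI, and the binary cross-entropy is averaged only over the mask of the ground-truth class k.

```python
import numpy as np

def mask_loss(mask_logits, gt_mask, gt_class):
    """L_mask for one RoI (sketch).

    mask_logits: (K, m, m) raw outputs, one m x m mask per class.
    gt_mask:     (m, m) binary ground-truth mask for this RoI.
    gt_class:    index k of the ground-truth class; only the k-th
                 mask contributes to the loss.
    """
    logits = mask_logits[gt_class]                 # select the k-th mask only
    probs = 1.0 / (1.0 + np.exp(-logits))          # per-pixel sigmoid
    eps = 1e-7
    bce = -(gt_mask * np.log(probs + eps) +
            (1.0 - gt_mask) * np.log(1.0 - probs + eps))
    return bce.mean()                              # average binary cross-entropy

# toy usage with random numbers
K, m = 3, 28
loss = mask_loss(np.random.randn(K, m, m),
                 (np.random.rand(m, m) > 0.5).astype(float), gt_class=1)
```

Because only the k-th mask is supervised, masks of different classes never compete, which is the decoupling the next passage describes.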
{
"id": "1703.06870_all_16",
"text": " Our definition of Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} allows the network to generate masks for every class without competition among classes; we rely on the dedicated classification branch to predict the class label used to select the output mask. This decouples mask and class prediction. This is different from common practice when applying FCNs to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss. In that case, masks across classes compete; in our case, with a per-pixel sigmoid and a binary loss, they do not. We show by experiments that this formulation is key for good instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_17",
"text": " A mask encodes an input object’s spatial layout. Thus, unlike class labels or box offsets that are inevitably collapsed into short output vectors by fully-connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_18",
"text": " Specifically, we predict an m×m𝑚𝑚m\\times m mask from each RoI using an FCN . This allows each layer in the mask branch to maintain the explicit m×m𝑚𝑚m\\times m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous methods that resort to fc layers for mask prediction (33, 34, 10), our fully convolutional representation requires fewer parameters, and is more accurate as demonstrated by experiments. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_19",
"text": " This pixel-to-pixel behavior requires our RoI features, which themselves are small feature maps, to be well aligned to faithfully preserve the explicit per-pixel spatial correspondence. This motivated us to develop the following RoIAlign layer that plays a key role in mask prediction. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_20",
"text": " RoIPool is a standard operation for extracting a small feature map (e.g., 7×\\times7) from each RoI. RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map, this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Quantization is performed, e.g., on a continuous coordinate x𝑥x by computing (x/16)delimited-()𝑥16(x/16), where 16 is a feature map stride and (⋅)delimited-()⋅(\\cdot) is rounding; likewise, quantization is performed when dividing into bins (e.g., 7×\\times7). These quantizations introduce misalignments between the RoI and the extracted features. While this may not impact classification, which is robust to small translations, it has a large negative effect on predicting pixel-accurate masks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_21",
"text": " To address this, we propose an RoIAlign layer that removes the harsh quantization of RoIPool, properly aligning the extracted features with the input. Our proposed change is simple: we avoid any quantization of the RoI boundaries or bins (i.e., we use x/16𝑥16x/16 instead of (x/16)delimited-()𝑥16(x/16)). We use bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and aggregate the result (using max or average), see Figure 3 for details. We note that the results are not sensitive to the exact sampling locations, or how many points are sampled, as long as no quantization is performed. ",
"title": "Mask R-CNN"
},
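As a rough illustration of the bilinear sampling that RoIAlign relies on, the sketch below keeps every coordinate as a floating-point number, in contrast to RoIPool's rounding. It is an assumption-laden toy (single-channel feature map, one sample at each bin centre instead of the four samples used in the paper, illustrative stride and RoI values), not the released implementation.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate feat (H, W) at continuous coords (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feat.shape[0] - 1), min(x0 + 1, feat.shape[1] - 1)
    ly, lx = y - y0, x - x0
    return (feat[y0, x0] * (1 - ly) * (1 - lx) + feat[y0, x1] * (1 - ly) * lx +
            feat[y1, x0] * ly * (1 - lx) + feat[y1, x1] * ly * lx)

def roi_align(feat, roi, out_size=7, stride=16.0):
    """RoI given as (y1, x1, y2, x2) in image coords; no quantization anywhere."""
    y1, x1, y2, x2 = [c / stride for c in roi]   # keep floating point x/16
    bin_h = (y2 - y1) / out_size
    bin_w = (x2 - x1) / out_size
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # one sample at the centre of each bin (the paper uses 4 samples
            # per bin and then max- or average-pools them)
            cy = y1 + (i + 0.5) * bin_h
            cx = x1 + (j + 0.5) * bin_w
            out[i, j] = bilinear_sample(feat, cy, cx)
    return out

pooled = roi_align(np.random.rand(50, 50), roi=(32.7, 48.1, 240.5, 310.9))
```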
{
"id": "1703.06870_all_22",
"text": " RoIAlign leads to large improvements as we show in §4.2. We also compare to the RoIWarp operation proposed in . Unlike RoIAlign, RoIWarp overlooked the alignment issue and was implemented in as quantizing RoI just like RoIPool. So even though RoIWarp also adopts bilinear resampling motivated by , it performs on par with RoIPool as shown by experiments (more details in Table 2c), demonstrating the crucial role of alignment. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_23",
"text": " To demonstrate the generality of our approach, we instantiate Mask R-CNN with multiple architectures. For clarity, we differentiate between: (i) the convolutional backbone architecture used for feature extraction over an entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each RoI. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_24",
"text": " We denote the backbone architecture using the nomenclature network-depth-features. We evaluate ResNet and ResNeXt networks of depth 50 or 101 layers. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the 4-th stage, which we call C4. This backbone with ResNet-50, for example, is denoted by ResNet-50-C4. This is a common choice used in (19, 10, 21, 39). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_25",
"text": " We also explore another more effective backbone recently proposed by Lin et al. , called a Feature Pyramid Network (FPN). FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts RoI features from different levels of the feature pyramid according to their scale, but otherwise the rest of the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. For further details on FPN, we refer readers to . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_26",
"text": " For the network head we closely follow architectures presented in previous work to which we add a fully convolutional mask prediction branch. Specifically, we extend the Faster R-CNN box heads from the ResNet and FPN papers. Details are shown in Figure 4. The head on the ResNet-C4 backbone includes the 5-th stage of ResNet (namely, the 9-layer ‘res5’ ), which is compute-intensive. For FPN, the backbone already includes res5 and thus allows for a more efficient head that uses fewer filters. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_27",
"text": " We note that our mask branches have a straightforward structure. More complex designs have the potential to improve performance but are not the focus of this work. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_28",
"text": " We set hyper-parameters following existing Fast/Faster R-CNN work (12, 36, 27). Although these decisions were made for object detection in original papers (12, 36, 27), we found our instance segmentation system is robust to them. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_29",
"text": " As in Fast R-CNN, an RoI is considered positive if it has IoU with a ground-truth box of at least 0.5 and negative otherwise. The mask loss Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is defined only on positive RoIs. The mask target is the intersection between an RoI and its associated ground-truth mask. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_30",
"text": " We adopt image-centric training . Images are resized such that their scale (shorter edge) is 800 pixels . Each mini-batch has 2 images per GPU and each image has N𝑁N sampled RoIs, with a ratio of 1:3 of positive to negatives . N𝑁N is 64 for the C4 backbone (as in (12, 36)) and 512 for FPN (as in ). We train on 8 GPUs (so effective mini-batch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt , we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_31",
"text": " The RPN anchors span 5 scales and 3 aspect ratios, following . For convenient ablation, RPN is trained separately and does not share features with Mask R-CNN, unless specified. For every entry in this paper, RPN and Mask R-CNN have the same backbones and so they are shareable. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_32",
"text": " At test time, the proposal number is 300 for the C4 backbone (as in ) and 1000 for FPN (as in ). We run the box prediction branch on these proposals, followed by non-maximum suppression . The mask branch is then applied to the highest scoring 100 detection boxes. Although this differs from the parallel computation used in training, it speeds up inference and improves accuracy (due to the use of fewer, more accurate RoIs). The mask branch can predict K𝐾K masks per RoI, but we only use the k𝑘k-th mask, where k𝑘k is the predicted class by the classification branch. The m𝑚m×\\timesm𝑚m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5. ",
"title": "Mask R-CNN"
},
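A small sketch of the inference-time post-processing described above (the function name and shapes are assumptions, and resizing the m×m mask to the RoI size is omitted): for each detection, only the mask of the class predicted by the box branch is kept and binarized at 0.5.

```python
import numpy as np

def postprocess_masks(mask_probs, pred_classes, threshold=0.5):
    """Keep only the mask of the predicted class for each detection,
    then binarize it at the given threshold.

    mask_probs:   (num_dets, K, m, m) sigmoid outputs of the mask branch.
    pred_classes: (num_dets,) classes predicted by the box branch.
    """
    dets = np.arange(mask_probs.shape[0])
    selected = mask_probs[dets, pred_classes]        # (num_dets, m, m)
    return (selected >= threshold).astype(np.uint8)  # binary masks

# toy usage: 100 detections, 80 classes, 28x28 masks
binary = postprocess_masks(np.random.rand(100, 80, 28, 28),
                           np.random.randint(0, 80, size=100))
```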
{
"id": "1703.06870_all_33",
"text": " Note that since we only compute masks on the top 100 detection boxes, Mask R-CNN adds a small overhead to its Faster R-CNN counterpart (e.g., ∼similar-to\\scriptstyle\\sim20% on typical models). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_34",
"text": " We perform a thorough comparison of Mask R-CNN to the state of the art along with comprehensive ablations on the COCO dataset . We report the standard COCO metrics including AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales). Unless noted, AP is evaluating using mask IoU. As in previous work (5, 27), we train using the union of 80k train images and a 35k subset of val images (trainval35k), and report ablations on the remaining 5k val images (minival). We also report results on test-dev . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_35",
"text": " We compare Mask R-CNN to the state-of-the-art methods in instance segmentation in Table 1. All instantiations of our model outperform baseline variants of previous state-of-the-art models. This includes MNC and FCIS , the winners of the COCO 2015 and 2016 segmentation challenges, respectively. Without bells and whistles, Mask R-CNN with ResNet-101-FPN backbone outperforms FCIS+++ , which includes multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) . While outside the scope of this work, we expect many such improvements to be applicable to ours. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_36",
"text": " Mask R-CNN outputs are visualized in Figures 2 and 5. Mask R-CNN achieves good results even under challenging conditions. In Figure 6 we compare our Mask R-CNN baseline and FCIS+++ . FCIS+++ exhibits systematic artifacts on overlapping instances, suggesting that it is challenged by the fundamental difficulty of instance segmentation. Mask R-CNN shows no such artifacts. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_37",
"text": " We run a number of ablations to analyze Mask R-CNN. Results are shown in Table 2 and discussed in detail next. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_38",
"text": " Table 2a shows Mask R-CNN with various backbones. It benefits from deeper networks (50 vs. 101) and advanced designs including FPN and ResNeXt. We note that not all frameworks automatically benefit from deeper or advanced networks (see benchmarking in ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_39",
"text": " Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN ). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points). This suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_40",
"text": " Our default instantiation predicts class-specific masks, i.e., one m𝑚m×\\timesm𝑚m mask per class. Interestingly, Mask R-CNN with class-agnostic masks (i.e., predicting a single m𝑚m×\\timesm𝑚m output regardless of class) is nearly as effective: it has 29.7 mask AP vs. 30.3 for the class-specific counterpart on ResNet-50-C4. This further highlights the division of labor in our approach which largely decouples classification and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_41",
"text": " An evaluation of our proposed RoIAlign layer is shown in Table 2c. For this experiment we use the ResNet-50-C4 backbone, which has stride 16. RoIAlign improves AP by about 3 points over RoIPool, with much of the gain coming at high IoU (AP75). RoIAlign is insensitive to max/average pool; we use average in the rest of the paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_42",
"text": " Additionally, we compare with RoIWarp proposed in MNC that also adopt bilinear sampling. As discussed in §3, RoIWarp still quantizes the RoI, losing alignment with the input. As can be seen in Table 2c, RoIWarp performs on par with RoIPool and much worse than RoIAlign. This highlights that proper alignment is key. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_43",
"text": " We also evaluate RoIAlign with a ResNet-50-C5 backbone, which has an even larger stride of 32 pixels. We use the same head as in Figure 4 (right), as the res5 head is not applicable. Table 2d shows that RoIAlign improves mask AP by a massive 7.3 points, and mask AP75 by 10.5 points (50% relative improvement). Moreover, we note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_44",
"text": " Finally, RoIAlign shows a gain of 1.5 mask AP and 0.5 box AP when used with FPN, which has finer multi-level strides. For keypoint detection that requires finer alignment, RoIAlign shows large gains even with FPN (Table 6). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_45",
"text": " Segmentation is a pixel-to-pixel task and we exploit the spatial layout of masks by using an FCN. In Table 2e, we compare multi-layer perceptrons (MLP) and FCNs, using a ResNet-50-FPN backbone. Using FCNs gives a 2.1 mask AP gain over MLPs. We note that we choose this backbone so that the conv layers of the FCN head are not pre-trained, for a fair comparison with MLP. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_46",
"text": " We compare Mask R-CNN to the state-of-the-art COCO bounding-box object detection in Table 3. For this result, even though the full Mask R-CNN model is trained, only the classification and box outputs are used at inference (the mask output is ignored). Mask R-CNN using ResNet-101-FPN outperforms the base variants of all previous state-of-the-art models, including the single-model variant of G-RMI , the winner of the COCO 2016 Detection Challenge. Using ResNeXt-101-FPN, Mask R-CNN further improves results, with a margin of 3.0 points box AP over the best previous single model entry from (which used Inception-ResNet-v2-TDM). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_47",
"text": " As a further comparison, we trained a version of Mask R-CNN but without the mask branch, denoted by “Faster R-CNN, RoIAlign” in Table 3. This model performs better than the model presented in due to RoIAlign. On the other hand, it is 0.9 points box AP lower than Mask R-CNN. This gap of Mask R-CNN on box detection is therefore due solely to the benefits of multi-task training. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_48",
"text": " Lastly, we note that Mask R-CNN attains a small gap between its mask and box AP: e.g., 2.7 points between 37.1 (mask, Table 1) and 39.8 (box, Table 3). This indicates that our approach largely closes the gap between object detection and the more challenging instance segmentation task. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_49",
"text": " We train a ResNet-101-FPN model that shares features between the RPN and Mask R-CNN stages, following the 4-step training of Faster R-CNN . This model runs at 195ms per image on an Nvidia Tesla M40 GPU (plus 15ms CPU time resizing the outputs to the original resolution), and achieves statistically the same mask AP as the unshared one. We also report that the ResNet-101-C4 variant takes ∼similar-to\\scriptstyle\\sim400ms as it has a heavier box head (Figure 4), so we do not recommend using the C4 variant in practice. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_50",
"text": " Although Mask R-CNN is fast, we note that our design is not optimized for speed, and better speed/accuracy trade-offs could be achieved , e.g., by varying image sizes and proposal numbers, which is beyond the scope of this paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_51",
"text": " Mask R-CNN is also fast to train. Training with ResNet-50-FPN on COCO trainval35k takes 32 hours in our synchronized 8-GPU implementation (0.72s per 16-image mini-batch), and 44 hours with ResNet-101-FPN. In fact, fast prototyping can be completed in less than one day when training on the train set. We hope such rapid training will remove a major hurdle in this area and encourage more people to perform research on this challenging topic. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_52",
"text": " Our framework can easily be extended to human pose estimation. We model a keypoint’s location as a one-hot mask, and adopt Mask R-CNN to predict K𝐾K masks, one for each of K𝐾K keypoint types (e.g., left shoulder, right elbow). This task helps demonstrate the flexibility of Mask R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_53",
"text": " We note that minimal domain knowledge for human pose is exploited by our system, as the experiments are mainly to demonstrate the generality of the Mask R-CNN framework. We expect that domain knowledge (e.g., modeling structures ) will be complementary to our simple approach. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_54",
"text": " We make minor modifications to the segmentation system when adapting it for keypoints. For each of the K𝐾K keypoints of an instance, the training target is a one-hot m×m𝑚𝑚m\\times m binary mask where only a single pixel is labeled as foreground. During training, for each visible ground-truth keypoint, we minimize the cross-entropy loss over an m2superscript𝑚2m^{2}-way softmax output (which encourages a single point to be detected). We note that as in instance segmentation, the K𝐾K keypoints are still treated independently. ",
"title": "Mask R-CNN"
},
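The keypoint target described above can be illustrated with a short NumPy sketch (the function name, shapes, and the 17-keypoint toy example are assumptions): each visible keypoint contributes a cross-entropy term over an m²-way softmax whose target is the single foreground pixel.

```python
import numpy as np

def keypoint_loss(kp_logits, gt_positions, visible):
    """Cross-entropy over an m^2-way softmax, one per keypoint type (sketch).

    kp_logits:    (K, m, m) one heat-map of logits per keypoint type.
    gt_positions: (K, 2) ground-truth (row, col) of each keypoint.
    visible:      (K,) 1 if the keypoint is labeled/visible, else 0.
    """
    K, m, _ = kp_logits.shape
    losses = []
    for k in range(K):
        if not visible[k]:
            continue                                   # loss only on visible keypoints
        flat = kp_logits[k].reshape(-1)                # m^2-way classification
        flat = flat - flat.max()                       # numerically stable softmax
        log_probs = flat - np.log(np.exp(flat).sum())
        target = gt_positions[k, 0] * m + gt_positions[k, 1]
        losses.append(-log_probs[target])
    return float(np.mean(losses)) if losses else 0.0

# toy usage: 17 keypoints on a 56x56 output
loss = keypoint_loss(np.random.randn(17, 56, 56),
                     np.random.randint(0, 56, size=(17, 2)),
                     np.ones(17, dtype=int))
```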
{
"id": "1703.06870_all_55",
"text": " We adopt the ResNet-FPN variant, and the keypoint head architecture is similar to that in Figure 4 (right). The keypoint head consists of a stack of eight 3×\\times3 512-d conv layers, followed by a deconv layer and 2×\\times bilinear upscaling, producing an output resolution of 56×\\times56. We found that a relatively high resolution output (compared to masks) is required for keypoint-level localization accuracy. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_56",
"text": " Models are trained on all COCO trainval35k images that contain annotated keypoints. To reduce overfitting, as this training set is smaller, we train using image scales randomly sampled from (640, 800) pixels; inference is on a single scale of 800 pixels. We train for 90k iterations, starting from a learning rate of 0.02 and reducing it by 10 at 60k and 80k iterations. We use bounding-box NMS with a threshold of 0.5. Other details are identical as in §3.1. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_57",
"text": " We evaluate the person keypoint AP (APkpkp{}^{\\text{kp}}) and experiment with a ResNet-50-FPN backbone; more backbones will be studied in the appendix. Table 4 shows that our result (62.7 APkpkp{}^{\\text{kp}}) is 0.9 points higher than the COCO 2016 keypoint detection winner that uses a multi-stage processing pipeline (see caption of Table 4). Our method is considerably simpler and faster. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_58",
"text": " More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps. Adding a segment branch (for the person category) improves the APkpkp{}^{\\text{kp}} to 63.1 (Table 4) on test-dev. More ablations of multi-task learning on minival are in Table 5. Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks. However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multitask training, it does not in turn help the other tasks. Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 7). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_59",
"text": " We also investigate the effect of RoIAlign on keypoint detection (Table 6). Though this ResNet-50-FPN backbone has finer strides (e.g., 4 pixels on the finest level), RoIAlign still shows significant improvement over RoIPool and increases APkpkp{}^{\\text{kp}} by 4.4 points. This is because keypoint detections are more sensitive to localization accuracy. This again indicates that alignment is essential for pixel-level localization, including masks and keypoints. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_60",
"text": " Given the effectiveness of Mask R-CNN for extracting object bounding boxes, masks, and keypoints, we expect it be an effective framework for other instance-level tasks. ",
"title": "Mask R-CNN"
}
] |
What are some common methods used in facial recognition and how do they compare in terms of effectiveness and challenges?
|
There are broadly 4 kinds of methods used in FR [0]. Holistic methods were the first attempt to solve the FR problem [1], but they were too primitive and could not account for uncontrolled facial changes that did not fit their assumptions [3]. Then there are local-feature-based methods, which try to extract invariant properties with local filtering [67]. Although better than holistic methods, these still fall short of the complexity and capacity needed to address the vastness of facial appearances [85]. The first (shallow) learning-based methods also lacked the robustness to handle the non-linearity and complexity of FR [86].
|
[
0,
1,
3,
67,
85,
86
] |
[
{
"id": "1804.06655_all_0",
"text": " Face recognition (FR) has been the prominent biometric technique for identity authentication and has been widely used in many areas, such as military, finance, public security and daily life. FR has been a long-standing research topic in the CVPR community. In the early 1990s, the study of FR became popular following the introduction of the historical Eigenface approach . The milestones of feature-based FR over the past years are presented in Fig. 1, in which the times of four major technical streams are highlighted. The holistic approaches derive the low-dimensional representation through certain distribution assumptions, such as linear subspace , manifold , and sparse representation . This idea dominated the FR community in the 1990s and 2000s. However, a well-known problem is that these theoretically plausible holistic methods fail to address the uncontrolled facial changes that deviate from their prior assumptions. In the early 2000s, this problem gave rise to local-feature-based FR. Gabor and LBP , as well as their multilevel and high-dimensional extensions , achieved robust performance through some invariant properties of local filtering. Unfortunately, handcrafted features suffered from a lack of distinctiveness and compactness. In the early 2010s, learning-based local descriptors were introduced to the FR community , in which local filters are learned for better distinctiveness and the encoding codebook is learned for better compactness. However, these shallow representations still have an inevitable limitation on robustness against the complex nonlinear facial appearance variations. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_1",
"text": " In general, traditional methods attempted to recognize human face by one or two layer representations, such as filtering responses, histogram of the feature codes, or distribution of the dictionary atoms. The research community studied intensively to separately improve the preprocessing, local descriptors, and feature transformation, but these approaches improved FR accuracy slowly. What’s worse, most methods aimed to address one aspect of unconstrained facial changes only, such as lighting, pose, expression, or disguise. There was no any integrated technique to address these unconstrained challenges integrally. As a result, with continuous efforts of more than a decade, “shallow” methods only improved the accuracy of the LFW benchmark to about 95% , which indicates that “shallow” methods are insufficient to extract stable identity feature invariant to real-world changes. Due to the insufficiency of this technical, facial recognition systems were often reported with unstable performance or failures with countless false alarms in real-world applications. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_2",
"text": " But all that changed in 2012 when AlexNet won the ImageNet competition by a large margin using a technique called deep learning . Deep learning methods, such as convolutional neural networks, use a cascade of multiple layers of processing units for feature extraction and transformation. They learn multiple levels of representations that correspond to different levels of abstraction. The levels form a hierarchy of concepts, showing strong invariance to the face pose, lighting, and expression changes, as shown in Fig. 2. It can be seen from the figure that the first layer of the deep neural network is somewhat similar to the Gabor feature found by human scientists with years of experience. The second layer learns more complex texture features. The features of the third layer are more complex, and some simple structures have begun to appear, such as high-bridged nose and big eyes. In the fourth, the network output is enough to explain a certain facial attribute, which can make a special response to some clear abstract concepts such as smile, roar, and even blue eye. In conclusion, in deep convolutional neural networks (CNN), the lower layers automatically learn the features similar to Gabor and SIFT designed for years or even decades (such as initial layers in Fig. 2), and the higher layers further learn higher level abstraction. Finally, the combination of these higher level abstraction represents facial identity with unprecedented stability. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_3",
"text": " In 2014, DeepFace achieved the SOTA accuracy on the famous LFW benchmark , approaching human performance on the unconstrained condition for the first time (DeepFace: 97.35% vs. Human: 97.53%), by training a 9-layer model on 4 million facial images. Inspired by this work, research focus has shifted to deep-learning-based approaches, and the accuracy was dramatically boosted to above 99.80% in just three years. Deep learning technique has reshaped the research landscape of FR in almost all aspects such as algorithm designs, training/test datasets, application scenarios and even the evaluation protocols. Therefore, it is of great significance to review the breakthrough and rapid development process in recent years. There have been several surveys on FR (24, 25, 26, 27, 28) and its subdomains, and they mostly summarized and compared a diverse set of techniques related to a specific FR scene, such as illumination-invariant FR , 3D FR , pose-invariant FR . Unfortunately, due to their earlier publication dates, none of them covered the deep learning methodology that is most successful nowadays. This survey focuses only on recognition problem, and one can refer to Ranjan et al. for a brief review of a full deep FR pipeline with detection and alignment, or refer to Jin et al. for a survey of face alignment. Specifically, the major contributions of this survey are as follows: ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_4",
"text": " • A systematic review on the evolution of the network architectures and loss functions for deep FR is provided. Various loss functions are categorized into Euclidean-distance-based loss, angular/cosine-margin-based loss and softmax loss and its variations. Both the mainstream network architectures, such as Deepface , DeepID series (34, 35, 21, 36), VGGFace , FaceNet , and VGGFace2 , and other architectures designed for FR are covered. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_5",
"text": " • We categorize the new face processing methods based on deep learning, such as those used to handle recognition difficulty on pose changes, into two classes: “one-to-many augmentation” and “many-to-one normalization”, and discuss how emerging generative adversarial network (GAN) facilitates deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_6",
"text": " • We present a comparison and analysis on public available databases that are of vital importance for both model training and testing. Major FR benchmarks, such as LFW , IJB-A/B/C (41, 42, 43), Megaface , and MS-Celeb-1M , are reviewed and compared, in term of the four aspects: training methodology, evaluation tasks and metrics, and recognition scenes, which provides an useful reference for training and testing deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_7",
"text": " • Besides the general purpose tasks defined by the major databases, we summarize a dozen scenario-specific databases and solutions that are still challenging for deep learning, such as anti-attack, cross-pose FR, and cross-age FR. By reviewing specially designed methods for these unsolved problems, we attempt to reveal the important issues for future research on deep FR, such as adversarial samples, algorithm/data biases, and model interpretability. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_8",
"text": " The remainder of this survey is structured as follows. In Section II, we introduce some background concepts and terminologies, and then we briefly introduce each component of FR. In Section III, different network architectures and loss functions are presented. Then, we summarize the face processing algorithms and the datasets. In Section V, we briefly introduce several methods of deep FR used for different scenes. Finally, the conclusion of this paper and discussion of future works are presented in Section VI. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_9",
"text": " As mentioned in , there are three modules needed for FR system, as shown in Fig. 3. First, a face detector is used to localize faces in images or videos. Second, with the facial landmark detector, the faces are aligned to normalized canonical coordinates. Third, the FR module is implemented with these aligned face images. We only focus on the FR module throughout the remainder of this paper. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_10",
"text": " Before a face image is fed to an FR module, face anti-spoofing, which recognizes whether the face is live or spoofed, is applied to avoid different types of attacks. Then, recognition can be performed. As shown in Fig. 3(c), an FR module consists of face processing, deep feature extraction and face matching, and it can be described as follows: ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_11",
"text": " M(F(Pi(Ii)),F(Pj(Ij)))𝑀𝐹subscript𝑃𝑖subscript𝐼𝑖𝐹subscript𝑃𝑗subscript𝐼𝑗M(F(P_{i}(I_{i})),F(P_{j}(I_{j}))) (1) where Iisubscript𝐼𝑖I_{i} and Ijsubscript𝐼𝑗I_{j} are two face images, respectively. P𝑃P stands for face processing to handle intra-personal variations before training and testing, such as poses, illuminations, expressions and occlusions. F𝐹F denotes feature extraction, which encodes the identity information. The feature extractor is learned by loss functions when training, and is utilized to extract features of faces when testing. M𝑀M means a face matching algorithm used to compute similarity scores of features to determine the specific identity of faces. Different from object classification, the testing identities are usually disjoint from the training data in FR, which makes the learned classifier cannot be used to recognize testing faces. Therefore, face matching algorithm is an essential part in FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_12",
"text": " Although deep-learning-based approaches have been widely used, Mehdipour et al. proved that various conditions, such as poses, illuminations, expressions and occlusions, still affect the performance of deep FR. Accordingly, face processing is introduced to address this problem. The face processing methods are categorized as “one-to-many augmentation” and “many-to-one normalization”, as shown in Table I. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_13",
"text": " • “One-to-many augmentation”. These methods generate many patches or images of the pose variability from a single image to enable deep networks to learn pose-invariant representations. • “Many-to-one normalization”. These methods recover the canonical view of face images from one or many images of a nonfrontal view; then, FR can be performed as if it were under controlled conditions. Note that we mainly focus on deep face processing method designed for pose variations in this paper, since pose is widely regarded as a major challenge in automatic FR applications and other variations can be solved by the similar methods. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_14",
"text": " Network Architecture. The architectures can be categorized as backbone and assembled networks, as shown in Table II. Inspired by the extraordinary success on the ImageNet challenge, the typical CNN architectures, e.g. AlexNet, VGGNet, GoogleNet, ResNet and SENet (22, 75, 76, 77, 78), are introduced and widely used as the baseline models in FR (directly or slightly modified). In addition to the mainstream, some assembled networks, e.g. multi-task networks and multi-input networks, are utilized in FR. Hu et al. shows that accumulating the results of assembled networks provides an increase in performance compared with an individual network. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_15",
"text": " Loss Function. The softmax loss is commonly used as the supervision signal in object recognition, and it encourages the separability of features. However, the softmax loss is not sufficiently effective for FR because intra-variations could be larger than inter-differences and more discriminative features are required when recognizing different people. Many works focus on creating novel loss functions to make features not only more separable but also discriminative, as shown in Table III. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_16",
"text": " FR can be categorized as face verification and face identification. In either scenario, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. After the deep networks are trained on massive data with the supervision of an appropriate loss function, each of the test images is passed through the networks to obtain a deep feature representation. Using cosine distance or L2 distance, face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. In addition to these, other methods are introduced to postprocess the deep features such that the face matching is performed efficiently and accurately, such as metric learning, sparse-representation-based classifier (SRC), and so forth. ",
"title": "Deep Face Recognition"
},
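The matching step described above can be sketched as follows (a toy NumPy example; the 512-d embeddings, the threshold value, and the helper names are assumptions): verification is a one-to-one cosine-similarity test against a threshold, and identification is a one-to-many argmax over a gallery.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(feat_a, feat_b, threshold=0.5):
    """One-to-one matching: same identity if similarity exceeds a threshold."""
    return cosine_similarity(feat_a, feat_b) >= threshold

def identify(probe, gallery):
    """One-to-many matching: index and score of the most similar gallery face."""
    scores = [cosine_similarity(probe, g) for g in gallery]
    return int(np.argmax(scores)), max(scores)

# toy usage with random 512-d embeddings
gallery = [np.random.randn(512) for _ in range(10)]
idx, score = identify(np.random.randn(512), gallery)
```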
{
"id": "1804.06655_all_17",
"text": " To sum up, we present FR modules and their commonly-used methods in Fig. 4 to help readers to get a view of the whole FR. In deep FR, various training and testing face databases are constructed, and different architectures and losses of deep FR always follow those of deep object classification and are modified according to unique characteristics of FR. Moreover, in order to address unconstrained facial changes, face processing methods are further designed to handle poses, expressions and occlusions variations. Benefiting from these strategies, deep FR system significantly improves the SOTA and surpasses human performance. When the applications of FR becomes more and more mature in general scenario, recently, different solutions are driven for more difficult specific scenarios, such as cross-pose FR, cross-age FR, video FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_18",
"text": " For most applications, it is difficult to include the candidate faces during the training stage, which makes FR become a “zero-shot” learning task. Fortunately, since all human faces share a similar shape and texture, the representation learned from a small proportion of faces can generalize well to the rest. Based on this theory, a straightforward way to improve generalized performance is to include as many IDs as possible in the training set. For example, Internet giants such as Facebook and Google have reported their deep FR system trained by 106−107superscript106superscript10710^{6}-10^{7} IDs (38, 20). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_19",
"text": " Unfortunately, these personal datasets, as well as prerequisite GPU clusters for distributed model training, are not accessible for academic community. Currently, public available training databases for academic research consist of only 103−105superscript103superscript10510^{3}-10^{5} IDs. Instead, academic community makes effort to design effective loss functions and adopts efficient architectures to make deep features more discriminative using the relatively small training data sets. For instance, the accuracy of most popular LFW benchmark has been boosted from 97% to above 99.8% in the pasting four years, as enumerated in Table IV. In this section, we survey the research efforts on different loss functions and network architectures that have significantly improved deep FR methods. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_20",
"text": " Inheriting from the object classification network such as AlexNet, the initial Deepface and DeepID adopted cross-entropy based softmax loss for feature learning. After that, people realized that the softmax loss is not sufficient by itself to learn discriminative features, and more researchers began to explore novel loss functions for enhanced generalization ability. This becomes the hottest research topic in deep FR research, as illustrated in Fig. 5. Before 2017, Euclidean-distance-based loss played an important role; In 2017, angular/cosine-margin-based loss as well as feature and weight normalization became popular. It should be noted that, although some loss functions share the similar basic idea, the new one is usually designed to facilitate the training procedure by easier parameter or sample selection. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_21",
"text": " Euclidean-distance-based loss is a metric learning method (118, 119) that embeds images into Euclidean space in which intra-variance is reduced and inter-variance is enlarged. The contrastive loss and the triplet loss are the commonly used loss functions. The contrastive loss (35, 21, 36, 61, 120) requires face image pairs, and then pulls together positive pairs and pushes apart negative pairs. ℒ=yijmax(0,‖f(xi)−f(xj)‖2−ϵ+)+(1−yij)max(0,ϵ−−‖f(xi)−f(xj)‖2)ℒsubscript𝑦𝑖𝑗𝑚𝑎𝑥0subscriptdelimited-∥∥𝑓subscript𝑥𝑖𝑓subscript𝑥𝑗2superscriptitalic-ϵ1subscript𝑦𝑖𝑗𝑚𝑎𝑥0superscriptitalic-ϵsubscriptdelimited-∥∥𝑓subscript𝑥𝑖𝑓subscript𝑥𝑗2\\begin{split}\\mathcal{L}=&y_{ij}max\\left(0,\\left\\|f(x_{i})-f(x_{j})\\right\\|_{2}-\\epsilon^{+}\\right)\\\\ &+(1-y_{ij})max\\left(0,\\epsilon^{-}-\\left\\|f(x_{i})-f(x_{j})\\right\\|_{2}\\right)\\end{split} (2) where yij=1subscript𝑦𝑖𝑗1y_{ij}=1 means xisubscript𝑥𝑖x_{i} and xjsubscript𝑥𝑗x_{j} are matching samples and yij=0subscript𝑦𝑖𝑗0y_{ij}=0 means non-matching samples. f(⋅)𝑓⋅f(\\cdot) is the feature embedding, ϵ+superscriptitalic-ϵ\\epsilon^{+} and ϵ−superscriptitalic-ϵ\\epsilon^{-} control the margins of the matching and non-matching pairs respectively. DeepID2 combined the face identification (softmax) and verification (contrastive loss) supervisory signals to learn a discriminative representation, and joint Bayesian (JB) was applied to obtain a robust embedding space. Extending from DeepID2 , DeepID2+ increased the dimension of hidden representations and added supervision to early convolutional layers. DeepID3 further introduced VGGNet and GoogleNet to their work. However, the main problem with the contrastive loss is that the margin parameters are often difficult to choose. ",
"title": "Deep Face Recognition"
},
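A minimal sketch of Eq. (2) in NumPy (the margin values ε⁺ and ε⁻ and the function name are illustrative assumptions, not taken from any particular paper):

```python
import numpy as np

def contrastive_loss(f_i, f_j, same_identity, eps_pos=0.5, eps_neg=1.5):
    """Eq. (2): pull matching pairs within eps_pos, push non-matching
    pairs beyond eps_neg (margin values here are illustrative)."""
    d = np.linalg.norm(f_i - f_j)
    if same_identity:                       # y_ij = 1
        return max(0.0, d - eps_pos)
    return max(0.0, eps_neg - d)            # y_ij = 0

# toy usage with two random 128-d embeddings
loss = contrastive_loss(np.random.randn(128), np.random.randn(128),
                        same_identity=False)
```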
{
"id": "1804.06655_all_22",
"text": " Contrary to contrastive loss that considers the absolute distances of the matching pairs and non-matching pairs, triplet loss considers the relative difference of the distances between them. Along with FaceNet proposed by Google, Triplet loss (38, 37, 81, 80, 58, 60) was introduced into FR. It requires the face triplets, and then it minimizes the distance between an anchor and a positive sample of the same identity and maximizes the distance between the anchor and a negative sample of a different identity. FaceNet made ‖f(xia)−f(xip)‖22+α<−‖f(xia)−f(xin)‖22superscriptsubscriptnorm𝑓superscriptsubscript𝑥𝑖𝑎𝑓superscriptsubscript𝑥𝑖𝑝22𝛼superscriptsubscriptnorm𝑓superscriptsubscript𝑥𝑖𝑎𝑓superscriptsubscript𝑥𝑖𝑛22\\left\\|f(x_{i}^{a})-f(x_{i}^{p})\\right\\|_{2}^{2}+\\alpha<-\\left\\|f(x_{i}^{a})-f(x_{i}^{n})\\right\\|_{2}^{2} using hard triplet face samples, where xiasuperscriptsubscript𝑥𝑖𝑎x_{i}^{a}, xipsuperscriptsubscript𝑥𝑖𝑝x_{i}^{p} and xinsuperscriptsubscript𝑥𝑖𝑛x_{i}^{n} are the anchor, positive and negative samples, respectively, α𝛼\\alpha is a margin and f(⋅)𝑓⋅f(\\cdot) represents a nonlinear transformation embedding an image into a feature space. Inspired by FaceNet , TPE and TSE learned a linear projection W𝑊W to construct triplet loss. Other methods optimize deep models using both triplet loss and softmax loss (59, 58, 60, 121). They first train networks with softmax and then fine-tune them with triplet loss. ",
"title": "Deep Face Recognition"
},
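A minimal sketch of the triplet constraint in its usual hinge-loss form (NumPy; the margin value is illustrative and the function name is an assumption):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge form of the constraint
    ||f(a) - f(p)||^2 + alpha < ||f(a) - f(n)||^2."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive squared distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative squared distance
    return max(0.0, d_pos - d_neg + alpha)

# toy usage: three random 128-d embeddings (anchor, positive, negative)
loss = triplet_loss(*(np.random.randn(3, 128)))
```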
{
"id": "1804.06655_all_23",
"text": " However, the contrastive loss and triplet loss occasionally encounter training instability due to the selection of effective training samples, some paper begun to explore simple alternatives. Center loss and its variants (82, 116, 102) are good choices for reducing intra-variance. The center loss learned a center for each class and penalized the distances between the deep features and their corresponding class centers. This loss can be defined as follows: ℒC=12∑i=1m‖xi−cyi‖22subscriptℒ𝐶12superscriptsubscript𝑖1𝑚superscriptsubscriptnormsubscript𝑥𝑖subscript𝑐subscript𝑦𝑖22\\mathcal{L}_{C}=\\frac{1}{2}\\sum_{i=1}^{m}\\left\\|x_{i}-c_{y_{i}}\\right\\|_{2}^{2} (3) where xisubscript𝑥𝑖x_{i} denotes the i𝑖i-th deep feature belonging to the yisubscript𝑦𝑖y_{i}-th class and cyisubscript𝑐subscript𝑦𝑖c_{y_{i}} denotes the yisubscript𝑦𝑖y_{i}-th class center of deep features. To handle the long-tailed data, a range loss , which is a variant of center loss, is used to minimize k greatest range’s harmonic mean values in one class and maximize the shortest inter-class distance within one batch. Wu et al. proposed a center-invariant loss that penalizes the difference between each center of classes. Deng et al. selected the farthest intra-class samples and the nearest inter-class samples to compute a margin loss. However, the center loss and its variants suffer from massive GPU memory consumption on the classification layer, and prefer balanced and sufficient training data for each identity. ",
"title": "Deep Face Recognition"
},
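Eq. (3) can be sketched as follows (NumPy; in a real system the class centers are updated alongside the network during training, here they are fixed toy arrays):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Eq. (3): 0.5 * sum_i ||x_i - c_{y_i}||^2 over a mini-batch."""
    diffs = features - centers[labels]     # distance of each feature to its class center
    return 0.5 * np.sum(diffs ** 2)

# toy usage: batch of 32 features, 10 classes, 128-d embeddings
feats = np.random.randn(32, 128)
labels = np.random.randint(0, 10, size=32)
centers = np.random.randn(10, 128)
loss = center_loss(feats, labels, centers)
```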
{
"id": "1804.06655_all_24",
"text": " In 2017, people had a deeper understanding of loss function in deep FR and thought that samples should be separated more strictly to avoid misclassifying the difficult samples. Angular/cosine-margin-based loss (104, 84, 105, 106, 108) is proposed to make learned features potentially separable with a larger angular/cosine distance. The decision boundary in softmax loss is (W1−W2)x+b1−b2=0subscript𝑊1subscript𝑊2𝑥subscript𝑏1subscript𝑏20\\left(W_{1}-W_{2}\\right)x+b_{1}-b_{2}=0, where x𝑥x is feature vector, Wisubscript𝑊𝑖W_{i} and bisubscript𝑏𝑖b_{i} are weights and bias in softmax loss, respectively. Liu et al. reformulated the original softmax loss into a large-margin softmax (L-Softmax) loss. They constrain b1=b2=0subscript𝑏1subscript𝑏20b_{1}=b_{2}=0, so the decision boundaries for class 1 and class 2 become ‖x‖(‖W1‖cos(mθ1)−‖W2‖cos(θ2))=0norm𝑥normsubscript𝑊1𝑐𝑜𝑠𝑚subscript𝜃1normsubscript𝑊2𝑐𝑜𝑠subscript𝜃20\\left\\|x\\right\\|\\left(\\left\\|W_{1}\\right\\|cos\\left(m\\theta_{1}\\right)-\\left\\|W_{2}\\right\\|cos\\left(\\theta_{2}\\right)\\right)=0 and ‖x‖(‖W1‖‖W2‖cos(θ1)−cos(mθ2))=0norm𝑥normsubscript𝑊1normsubscript𝑊2𝑐𝑜𝑠subscript𝜃1𝑐𝑜𝑠𝑚subscript𝜃20\\left\\|x\\right\\|\\left(\\left\\|W_{1}\\right\\|\\left\\|W_{2}\\right\\|cos\\left(\\theta_{1}\\right)-cos\\left(m\\theta_{2}\\right)\\right)=0, respectively, where m𝑚m is a positive integer introducing an angular margin, and θisubscript𝜃𝑖\\theta_{i} is the angle between Wisubscript𝑊𝑖W_{i} and x𝑥x. Due to the non-monotonicity of the cosine function, a piece-wise function is applied in L-softmax to guarantee the monotonicity. The loss function is defined as follows: ℒi=−log(e‖Wyi‖‖xi‖φ(θyi)e‖Wyi‖‖xi‖φ(θyi)+∑j≠yie‖Wyi‖‖xi‖cos(θj))subscriptℒ𝑖𝑙𝑜𝑔superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃𝑦𝑖superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃𝑦𝑖subscript𝑗subscript𝑦𝑖superscript𝑒normsubscript𝑊𝑦𝑖normsubscript𝑥𝑖𝑐𝑜𝑠subscript𝜃𝑗\\mathcal{L}_{i}=-log\\left(\\frac{e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{yi})}}{e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{yi})+\\sum_{j\\neq y_{i}}e^{\\left\\|W_{yi}\\right\\|\\left\\|x_{i}\\right\\|cos(\\theta_{j})}}}\\right) (4) where φ(θ)=(−1)kcos(mθ)−2k,θ∈(kπm,(k+1)πm)formulae-sequence𝜑𝜃superscript1𝑘𝑐𝑜𝑠𝑚𝜃2𝑘𝜃𝑘𝜋𝑚𝑘1𝜋𝑚\\varphi(\\theta)=(-1)^{k}cos(m\\theta)-2k,\\theta\\in\\left(\\frac{k\\pi}{m},\\frac{(k+1)\\pi}{m}\\right) (5) Considering that L-Softmax is difficult to converge, it is always combined with softmax loss to facilitate and ensure the convergence. Therefore, the loss function is changed into: fyi=λ‖Wyi‖‖xi‖cos(θyi)+‖Wyi‖‖xi‖φ(θyi)1+λsubscript𝑓subscript𝑦𝑖𝜆normsubscript𝑊subscript𝑦𝑖normsubscript𝑥𝑖𝑐𝑜𝑠subscript𝜃subscript𝑦𝑖normsubscript𝑊subscript𝑦𝑖normsubscript𝑥𝑖𝜑subscript𝜃subscript𝑦𝑖1𝜆f_{y_{i}}=\\frac{\\lambda\\left\\|W_{y_{i}}\\right\\|\\left\\|x_{i}\\right\\|cos(\\theta_{y_{i}})+\\left\\|W_{y_{i}}\\right\\|\\left\\|x_{i}\\right\\|\\varphi(\\theta_{y_{i}})}{1+\\lambda}, where λ𝜆\\lambda is a dynamic hyper-parameter. Based on L-Softmax, A-Softmax loss further normalized the weight W𝑊W by L2 norm (‖W‖=1norm𝑊1\\left\\|W\\right\\|=1) such that the normalized vector will lie on a hypersphere, and then the discriminative face features can be learned on a hypersphere manifold with an angular margin (Fig. 6). Liu et al. introduced a deep hyperspherical convolution network (SphereNet) that adopts hyperspherical convolution as its basic convolution operator and is supervised by angular-margin-based loss. 
To overcome the optimization difficulty of L-Softmax and A-Softmax, which incorporate the angular margin in a multiplicative manner, ArcFace and CosFace , AMS loss respectively introduced an additive angular/cosine margin cos(θ+m)𝑐𝑜𝑠𝜃𝑚cos(\\theta+m) and cosθ−m𝑐𝑜𝑠𝜃𝑚cos\\theta-m. They are extremely easy to implement without tricky hyper-parameters λ𝜆\\lambda, and are more clear and able to converge without the softmax supervision. The decision boundaries under the binary classification case are given in Table V. Based on large margin, FairLoss and AdaptiveFace further proposed to adjust the margins for different classes adaptively to address the problem of unbalanced data. Compared to Euclidean-distance-based loss, angular/cosine-margin-based loss explicitly adds discriminative constraints on a hypershpere manifold, which intrinsically matches the prior that human face lies on a manifold. However, Wang et al. showed that angular/cosine-margin-based loss can achieve better results on a clean dataset, but is vulnerable to noise and becomes worse than center loss and softmax in the high-noise region as shown in Fig. 7. ",
"title": "Deep Face Recognition"
},
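A compact sketch of how an additive angular margin (ArcFace-style cos(θ + m)) or an additive cosine margin (CosFace/AMS-style cos(θ) − m) modifies the target-class logit after normalizing features and weights. This is NumPy pseudocode under stated assumptions (the scale s and margin values are illustrative), not the authors' released implementations:

```python
import numpy as np

def margin_logits(features, weights, labels, s=30.0, m_arc=0.5, m_cos=0.35,
                  kind="arc"):
    """Apply an additive margin to the target-class logit (sketch).

    features: (B, d) embeddings; weights: (C, d) last-layer weights.
    kind="arc" uses cos(theta + m); kind="cos" uses cos(theta) - m.
    """
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    W = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos_theta = np.clip(x @ W.T, -1.0, 1.0)            # (B, C) cosine logits
    logits = cos_theta.copy()
    rows = np.arange(len(labels))
    if kind == "arc":
        theta = np.arccos(cos_theta[rows, labels])
        logits[rows, labels] = np.cos(theta + m_arc)            # cos(theta + m)
    else:
        logits[rows, labels] = cos_theta[rows, labels] - m_cos  # cos(theta) - m
    return s * logits            # scaled logits, then fed to a regular softmax loss

# toy usage: batch of 8 embeddings, 100 identities
out = margin_logits(np.random.randn(8, 128), np.random.randn(100, 128),
                    np.random.randint(0, 100, size=8))
```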
{
"id": "1804.06655_all_25",
"text": " In 2017, in addition to reformulating softmax loss into an angular/cosine-margin-based loss as mentioned above, some works tries to normalize the features and weights in loss functions to improve the model performance, which can be written as follows: W^=W‖W‖,x^=αx‖x‖formulae-sequence^𝑊𝑊norm𝑊^𝑥𝛼𝑥norm𝑥\\hat{W}=\\frac{W}{\\left\\|W\\right\\|},\\hat{x}=\\alpha\\frac{x}{\\left\\|x\\right\\|} (6) where α𝛼\\alpha is a scaling parameter, x𝑥x is the learned feature vector, W𝑊W is weight of last fully connected layer. Scaling x𝑥x to a fixed radius α𝛼\\alpha is important, as Wang et al. proved that normalizing both features and weights to 1 will make the softmax loss become trapped at a very high value on the training set. After that, the loss function, e.g. softmax, can be performed using the normalized features and weights. ",
"title": "Deep Face Recognition"
},
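Eq. (6) itself is a one-liner; the sketch below applies it to a toy batch (NumPy; α = 64 is an illustrative radius, not a value prescribed by the survey):

```python
import numpy as np

def normalize(features, weights, alpha=64.0):
    """Eq. (6): W_hat = W / ||W||, x_hat = alpha * x / ||x||."""
    W_hat = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    x_hat = alpha * features / np.linalg.norm(features, axis=1, keepdims=True)
    return x_hat, W_hat

# toy usage: batch of 8 embeddings, 100-class weight matrix
x_hat, W_hat = normalize(np.random.randn(8, 128), np.random.randn(100, 128))
```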
{
"id": "1804.06655_all_26",
"text": " Some papers (84, 108) first normalized the weights only and then added angular/cosine margin into loss functions to make the learned features be discriminative. In contrast, some works, such as (109, 111), adopted feature normalization only to overcome the bias to the sample distribution of the softmax. Based on the observation of that the L2-norm of features learned using the softmax loss is informative of the quality of the face, L2-softmax enforced all the features to have the same L2-norm by feature normalization such that similar attention is given to good quality frontal faces and blurry faces with extreme pose. Rather than scaling x𝑥x to the parameter α𝛼\\alpha, Hasnat et al. normalized features with x^=x−μσ2^𝑥𝑥𝜇superscript𝜎2\\hat{x}=\\frac{x-\\mu}{\\sqrt{\\sigma^{2}}}, where μ𝜇\\mu and σ2superscript𝜎2\\sigma^{2} are the mean and variance. Ring loss encouraged the norm of samples being value R𝑅R (a learned parameter) rather than explicit enforcing through a hard normalization operation. Moreover, normalizing both features and weights (110, 112, 115, 105, 106) has become a common strategy. Wang et al. explained the necessity of this normalization operation from both analytic and geometric perspectives. After normalizing features and weights, CoCo loss optimized the cosine distance among data features, and Hasnat et al. used the von Mises-Fisher (vMF) mixture model as the theoretical basis to develop a novel vMF mixture loss and its corresponding vMF deep features. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_27",
"text": " Mainstream architectures. The commonly used network architectures of deep FR have always followed those of deep object classification and evolved from AlexNet to SENet rapidly. We present the most influential architectures of deep object classification and deep face recognition in chronological order 111The time we present is when the paper was published. in Fig. 8. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_28",
"text": " In 2012, AlexNet was reported to achieve the SOTA recognition accuracy in the ImageNet large-scale visual recognition competition (ILSVRC) 2012, exceeding the previous best results by a large margin. AlexNet consists of five convolutional layers and three fully connected layers, and it also integrates various techniques, such as rectified linear unit (ReLU), dropout, data augmentation, and so forth. ReLU was widely regarded as the most essential component for making deep learning possible. Then, in 2014, VGGNet proposed a standard network architecture that used very small 3×3333\\times 3 convolutional filters throughout and doubled the number of feature maps after the 2×\\times2 pooling. It increased the depth of the network to 16-19 weight layers, which further enhanced the flexibility to learn progressive nonlinear mappings by deep architectures. In 2015, the 22-layer GoogleNet introduced an “inception module” with the concatenation of hybrid feature maps, as well as two additional intermediate softmax supervised signals. It performs several convolutions with different receptive fields (1×1111\\times 1, 3×3333\\times 3 and 5×5555\\times 5) in parallel, and concatenates all feature maps to merge the multi-resolution information. In 2016, ResNet proposed to make layers learn a residual mapping with reference to the layer inputs ℱ(x):=ℋ(x)−xassignℱ𝑥ℋ𝑥𝑥\\mathcal{F}(x):=\\mathcal{H}(x)-x rather than directly learning a desired underlying mapping ℋ(x)ℋ𝑥\\mathcal{H}(x) to ease the training of very deep networks (up to 152 layers). The original mapping is recast into ℱ(x)+xℱ𝑥𝑥\\mathcal{F}(x)+x and can be realized by “shortcut connections”. As the champion of ILSVRC 2017, SENet introduced a “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. These blocks can be integrated with modern architectures, such as ResNet, and improves their representational power. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_29",
"text": " With the evolved architectures and advanced training techniques, such as batch normalization (BN), the network becomes deeper and the training becomes more controllable. Following these architectures in object classification, the networks in deep FR are also developed step by step, and the performance of deep FR is continually improving. We present these mainstream architectures of deep FR in Fig. 9. In 2014, DeepFace was the first to use a nine-layer CNN with several locally connected layers. With 3D alignment for face processing, it reaches an accuracy of 97.35% on LFW. In 2015, FaceNet used a large private dataset to train a GoogleNet. It adopted a triplet loss function based on triplets of roughly aligned matching/nonmatching face patches generated by a novel online triplet mining method and achieved good performance of 99.63%. In the same year, VGGface designed a procedure to collect a large-scale dataset from the Internet. It trained the VGGNet on this dataset and then fine-tuned the networks via a triplet loss function similar to FaceNet. VGGface obtains an accuracy of 98.95%. In 2017, SphereFace used a 64-layer ResNet architecture and proposed the angular softmax (A-Softmax) loss to learn discriminative face features with angular margin. It boosts the achieves to 99.42% on LFW. In the end of 2017, a new large-scale face dataset, namely VGGface2 , was introduced, which consists of large variations in pose, age, illumination, ethnicity and profession. Cao et al. first trained a SENet with MS-celeb-1M dataset and then fine-tuned the model with VGGface2 , and achieved the SOTA performance on the IJB-A and IJB-B . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_30",
"text": " Light-weight networks. Using deeper neural network with hundreds of layers and millions of parameters to achieve higher accuracy comes at cost. Powerful GPUs with larger memory size are needed, which makes the applications on many mobiles and embedded devices impractical. To address this problem, light-weight networks are proposed. Light CNN (85, 86) proposed a max-feature-map (MFM) activation function that introduces the concept of maxout in the fully connected layer to CNN. The MFM obtains a compact representation and reduces the computational cost. Sun et al. proposed to sparsify deep networks iteratively from the previously learned denser models based on a weight selection criterion. MobiFace adopted fast downsampling and bottleneck residual block with the expansion layers and achieved high performance with 99.7% on LFW database. Although some other light-weight CNNs, such as SqueezeNet, MobileNet, ShuffleNet and Xception (126, 127, 128, 129), are still not widely used in FR, they deserve more attention. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_31",
"text": " Adaptive-architecture networks. Considering that designing architectures manually by human experts are time-consuming and error-prone processes, there is growing interest in adaptive-architecture networks which can find well-performing architectures, e.g. the type of operation every layer executes (pooling, convolution, etc) and hyper-parameters associated with the operation (number of filters, kernel size and strides for a convolutional layer, etc), according to the specific requirements of training and testing data. Currently, neural architecture search (NAS) is one of the promising methodologies, which has outperformed manually designed architectures on some tasks such as image classification or semantic segmentation . Zhu et al. integrated NAS technology into face recognition. They used reinforcement learning algorithm (policy gradient) to guide the controller network to train the optimal child architecture. Besides NAS, there are some other explorations to learn optimal architectures adaptively. For example, conditional convolutional neural network (c-CNN) dynamically activated sets of kernels according to modalities of samples; Han et al. proposed a novel contrastive convolution consisted of a trunk CNN and a kernel generator, which is beneficial owing to its dynamistic generation of contrastive kernels based on the pair of faces being compared. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_32",
"text": " Joint alignment-recognition networks. Recently, an end-to-end system (91, 92, 93, 94) was proposed to jointly train FR with several modules (face detection, alignment, and so forth) together. Compared to the existing methods in which each module is generally optimized separately according to different objectives, this end-to-end system optimizes each module according to the recognition objective, leading to more adequate and robust inputs for the recognition model. For example, inspired by spatial transformer , Hayat et al. proposed a CNN-based data-driven approach that learns to simultaneously register and represent faces (Fig. 10), while Wu et al. designed a novel recursive spatial transformer (ReST) module for CNN allowing face alignment and recognition to be jointly optimized. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_33",
"text": " Multi-input networks. In “one-to-many augmentation”, multiple images with variety are generated from one image in order to augment training data. Taken these multiple images as input, multiple networks are also assembled together to extract and combine features of different type of inputs, which can outperform an individual network. In (58, 59, 60, 99, 34, 21, 35), assembled networks are built after different face patches are cropped, and then different types of patches are fed into different sub-networks for representation extraction. By combining the results of sub-networks, the performance can be improved. Other papers (96, 95, 98) used assembled networks to recognize images with different poses. For example, Masi et al. adjusted the pose to frontal (0∘superscript00^{\\circ}), half-profile (40∘superscript4040^{\\circ}) and full-profile views (75∘superscript7575^{\\circ}) and then addressed pose variation by assembled pose networks. A multi-view deep network (MvDN) consists of view-specific subnetworks and common subnetworks; the former removes view-specific variations, and the latter obtains common representations. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_34",
"text": " Multi-task networks. FR is intertwined with various factors, such as pose, illumination, and age. To solve this problem, multitask learning is introduced to transfer knowledge from other relevant tasks and to disentangle nuisance factors. In multi-task networks, identity classification is the main task and the side tasks are pose, illumination, and expression estimations, among others. The lower layers are shared among all the tasks, and the higher layers are disentangled into different sub-networks to generate the task-specific outputs. In , the task-specific sub-networks are branched out to learn face detection, face alignment, pose estimation, gender recognition, smile detection, age estimation and FR. Yin et al. proposed to automatically assign the dynamic loss weights for each side task. Peng et al. used a feature reconstruction metric learning to disentangle a CNN into sub-networks for jointly learning the identity and non-identity features as shown in Fig. 11. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_35",
"text": " During testing, the cosine distance and L2 distance are generally employed to measure the similarity between the deep features x1subscript𝑥1x_{1} and x2subscript𝑥2x_{2}; then, threshold comparison and the nearest neighbor (NN) classifier are used to make decision for verification and identification. In addition to these common methods, there are some other explorations. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_36",
"text": " Metric learning, which aims to find a new metric to make two classes more separable, can also be used for face matching based on extracted deep features. The JB model is a well-known metric learning method (35, 21, 36, 34, 120), and Hu et al. proved that it can improve the performance greatly. In the JB model, a face feature x𝑥x is modeled as x=μ+ε𝑥𝜇𝜀x=\\mu+\\varepsilon, where μ𝜇\\mu and ε𝜀\\varepsilon are identity and intra-personal variations, respectively. The similarity score r(x1,x2)𝑟subscript𝑥1subscript𝑥2r(x_{1},x_{2}) can be represented as follows: r(x1,x2)=logP(x1,x2|HI)P(x1,x2|HE)𝑟subscript𝑥1subscript𝑥2𝑙𝑜𝑔𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐼𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐸r(x_{1},x_{2})=log\\frac{P\\left(x_{1},x_{2}|H_{I}\\right)}{P\\left(x_{1},x_{2}|H_{E}\\right)} (7) where P(x1,x2|HI)𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐼P(x_{1},x_{2}|H_{I}) is the probability that two faces belong to the same identity and P(x1,x2|HE)𝑃subscript𝑥1conditionalsubscript𝑥2subscript𝐻𝐸P(x_{1},x_{2}|H_{E}) is the probability that two faces belong to different identities. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_37",
"text": " After cosine distance was computed, Cheng et al. proposed a heuristic voting strategy at the similarity score level to combine the results of multiple CNN models and won first place in Challenge 2 of MS-celeb-1M 2017. Yang et al. extracted the local adaptive convolution features from the local regions of the face image and used the extended SRC for FR with a single sample per person. Guo et al. combined deep features and the SVM classifier to perform recognition. Wang et al. first used product quantization (PQ) to directly retrieve the top-k most similar faces and re-ranked these faces by combining similarities from deep features and the COTS matcher . In addition, Softmax can be also used in face matching when the identities of training set and test set overlap. For example, in Challenge 2 of MS-celeb-1M, Ding et al. trained a 21,000-class softmax classifier to directly recognize faces of one-shot classes and normal classes after augmenting feature by a conditional GAN; Guo et al. trained the softmax classifier combined with underrepresented-classes promotion (UP) loss term to enhance the performance on one-shot classes. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_38",
"text": " When the distributions of training data and testing data are the same, the face matching methods mentioned above are effective. However, there is always a distribution change or domain shift between two data domains that can degrade the performance on test data. Transfer learning (144, 145) has recently been introduced into deep FR to address the problem of domain shift. It learns transferable features using a labeled source domain (training data) and an unlabeled target domain (testing data) such that domain discrepancy is reduced and models trained on source domain will also perform well on target domain. Sometimes, this technology is applied to face matching. For example, Crosswhite et al. and Xiong et al. adopted template adaptation to the set of media in a template by combining CNN features with template-specific linear SVMs. But most of the time, it is not enough to do transfer learning only at face matching stage. Transfer learning should be embedded in deep models to learn more transferable representations. Kan et al. proposed a bi-shifting autoencoder network (BAE) for domain adaptation across view angle, ethnicity, and imaging sensor; while Luo et al. utilized the multi-kernels maximum mean discrepancy (MMD) to reduce domain discrepancies. Sohn et al. used adversarial learning to transfer knowledge from still image FR to video FR. Moreover, fine-tuning the CNN parameters from a prelearned model using a target training dataset is a particular type of transfer learning, and is commonly employed by numerous methods (151, 152, 103). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_39",
"text": " We present the development of face processing methods in chronological order in Fig. 12. As we can see from the figure, most papers attempted to perform face processing by autoencoder model in 2014 and 2015; while 3D model played an important role in 2016. GAN has drawn substantial attention from the deep learning and computer vision community since it was first proposed by Goodfellow et al. It can be used in different fields and was also introduced into face processing in 2017. GAN can be used to perform “one-to-many augmentation” and “many-to-one normalization”, and it broke the limit that face synthesis should be done under supervised way. Although GAN has not been widely used in face processing for training and recognition, it has great latent capacity for preprocessing, for example, Dual-Agent GANs (DA-GAN) won the 1st places on verification and identification tracks in the NIST IJB-A 2017 FR competitions. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_40",
"text": " Collecting a large database is extremely expensive and time consuming. The methods of “one-to-many augmentation” can mitigate the challenges of data collection, and they can be used to augment not only training data but also the gallery of test data. we categorized them into four classes: data augmentation, 3D model, autoencoder model and GAN model. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_41",
"text": " Data augmentation. Common data augmentation methods consist of photometric transformations (75, 22) and geometric transformations, such as oversampling (multiple patches obtained by cropping at different scales) , mirroring , and rotating the images. Recently, data augmentation has been widely used in deep FR algorithms (58, 59, 60, 35, 21, 36, 61, 62). for example, Sun et al. cropped 400 face patches varying in positions, scales, and color channels and mirrored the images. Liu et al. generated seven overlapped image patches centered at different landmarks on the face region and trained them with seven CNNs with the same structure. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_42",
"text": " 3D model. 3D face reconstruction is also a way to enrich the diversity of training data. They utilize 3D structure information to model the transformation between poses. 3D models first use 3D face data to obtain morphable displacement fields and then apply them to obtain 2D face data in different pose angles. There is a large number of papers about this domain, but we only focus on the 3D face reconstruction using deep methods or used for deep FR. In , Masi et al. generated face images with new intra-class facial appearance variations, including pose, shape and expression, and then trained a 19-layer VGGNet with both real and augmented data. Masi et al. used generic 3D faces and rendered fixed views to reduce much of the computational effort. Richardson et al. employed an iterative 3D CNN by using a secondary input channel to represent the previous network’s output as an image for reconstructing a 3D face as shown in Fig. 13. Dou et al. used a multi-task CNN to divide 3D face reconstruction into neutral 3D reconstruction and expressive 3D reconstruction. Tran et al. directly regressed 3D morphable face model (3DMM) parameters from an input photo by a very deep CNN architecture. An et al. synthesized face images with various poses and expressions using the 3DMM method, then reduced the gap between synthesized data and real data with the help of MMD. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_43",
"text": " Autoencoder model. Rather than reconstructing 3D models from a 2D image and projecting it back into 2D images of different poses, autoencoder models can generate 2D target images directly. Taken a face image and a pose code encoding a target pose as input, an encoder first learns pose-invariant face representation, and then a decoder generates a face image with the same identity viewed at the target pose by using the pose-invariant representation and the pose code. For example, given the target pose codes, multi-view perceptron (MVP) trained some deterministic hidden neurons to learn pose-invariant face representations, and simultaneously trained some random hidden neurons to capture pose features, then a decoder generated the target images by combining pose-invariant representations with pose features. As shown in Fig. 14, Yim et al. and Qian et al. introduced an auxiliary CNN to generate better images viewed at the target poses. First, an autoencoder generated the desired pose image, then the auxiliary CNN reconstructed the original input image back from the generated target image, which guarantees that the generated image is identity-preserving. In , two groups of units are embedded between encoder and decoder. The identity units remain unchanged and the rotation of images is achieved by taking actions to pose units at each time step. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_44",
"text": " GAN model. In GAN models, a generator aims to fool a discriminator through generating images that resemble the real images, while the discriminator aims to discriminate the generated samples from the real ones. By this minimax game between generator and discriminator, GAN can successfully generate photo-realistic images with different poses. After using a 3D model to generate profile face images, DA-GAN refined the images by a GAN, which combines prior knowledge of the data distribution and knowledge of faces (pose and identity perception loss). CVAE-GAN combined a variational auto-encoder with a GAN for augmenting data, and took advantages of both statistic and pairwise feature matching to make the training process converge faster and more stably. In addition to synthesizing diverse faces from noise, some papers also explore to disentangle the identity and variation, and synthesize new faces by exchanging identity and variation from different people. In CG-GAN , a generator directly resolves each representation of input image into a variation code and an identity code and regroups these codes for cross-generating, simultaneously, a discriminator ensures the reality of generated images. Bao et al. extracted identity representation of one input image and attribute representation of any other input face image, then synthesized new faces by recombining these representations. This work shows superior performance in generating realistic and identity preserving face images, even for identities outside the training dataset. Unlike previous methods that treat classifier as a spectator, FaceID-GAN proposed a three-player GAN where the classifier cooperates together with the discriminator to compete with the generator from two different aspects, i.e. facial identity and image quality respectively. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_45",
"text": " In contrast to “one-to-many augmentation”, the methods of “many-to-one normalization” produce frontal faces and reduce appearance variability of test data to make faces align and compare easily. It can be categorized as autoencoder model, CNN model and GAN model. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_46",
"text": " Autoencoder model. Autoencoder can also be applied to “many-to-one normalization”. Different from the autoencoder model in “one-to-many augmentation” which generates the desired pose images with the help of pose codes, autoencoder model here learns pose-invariant face representation by an encoder and directly normalizes faces by a decoder without pose codes. Zhu et al. (66, 67) selected canonical-view images according to the face images’ symmetry and sharpness and then adopted an autoencoder to recover the frontal view images by minimizing the reconstruction loss error. The proposed stacked progressive autoencoders (SPAE) progressively map the nonfrontal face to the frontal face through a stack of several autoencoders. Each shallow autoencoders of SPAE is designed to convert the input face images at large poses to a virtual view at a smaller pose, so the pose variations are narrowed down gradually layer by layer along the pose manifold. Zhang et al. built a sparse many-to-one encoder to enhance the discriminant of the pose free feature by using multiple random faces as the target values for multiple encoders. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_47",
"text": " CNN model. CNN models usually directly learn the 2D mappings between non-frontal face images and frontal images, and utilize these mapping to normalize images in pixel space. The pixels in normalized images are either directly the pixels or the combinations of the pixels in non-frontal images. In LDF-Net , the displacement field network learns the shifting relationship of two pixels, and the translation layer transforms the input non-frontal face image into a frontal one with this displacement field. In GridFace shown in Fig. 15, first, the rectification network normalizes the images by warping pixels from the original image to the canonical one according to the computed homography matrix, then the normalized output is regularized by an implicit canonical view face prior, finally, with the normalized faces as input, the recognition network learns discriminative face representation via metric learning. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_48",
"text": " GAN model. Huang et al. proposed a two-pathway generative adversarial network (TP-GAN) that contains four landmark-located patch networks and a global encoder-decoder network. Through combining adversarial loss, symmetry loss and identity-preserving loss, TP-GAN generates a frontal view and simultaneously preserves global structures and local details as shown in Fig. 16. In a disentangled representation learning generative adversarial network (DR-GAN) , the generator serves as a face rotator, in which an encoder produces an identity representation, and a decoder synthesizes a face at the specified pose using this representation and a pose code. And the discriminator is trained to not only distinguish real vs. synthetic images, but also predict the identity and pose of a face. Yin et al. incorporated 3DMM into the GAN structure to provide shape and appearance priors to guide the generator to frontalization. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_49",
"text": " In the past three decades, many face databases have been constructed with a clear tendency from small-scale to large-scale, from single-source to diverse-sources, and from lab-controlled to real-world unconstrained condition, as shown in Fig. 17. As the performance of some simple databases become saturated, e.g. LFW , more and more complex databases were continually developed to facilitate the FR research. It can be said without exaggeration that the development process of the face databases largely leads the direction of FR research. In this section, we review the development of major training and testing academic databases for the deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_50",
"text": " The prerequisite of effective deep FR is a sufficiently large training dataset. Zhou et al. suggested that large amounts of data with deep learning improve the performance of FR. The results of Megaface Challenge also revealed that premier deep FR methods were typically trained on data larger than 0.5M images and 20K people. The early works of deep FR were usually trained on private training datasets. Facebook’s Deepface model was trained on 4M images of 4K people; Google’s FaceNet was trained on 200M images of 3M people; DeepID serial models (34, 35, 21, 36) were trained on 0.2M images of 10K people. Although they reported ground-breaking performance at this stage, researchers cannot accurately reproduce or compare their models without public training datasets. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_51",
"text": " To address this issue, CASIA-Webface provided the first widely-used public training dataset for the deep model training purpose, which consists of 0.5M images of 10K celebrities collected from the web. Given its moderate size and easy usage, it has become a great resource for fair comparisons for academic deep models. However, its relatively small data and ID size may not be sufficient to reflect the power of many advanced deep learning methods. Currently, there have been more databases providing public available large-scale training dataset (Table VI), especially three databases with over 1M images, namely MS-Celeb-1M , VGGface2 , and Megaface (44, 164), and we summary some interesting findings about these training sets, as shown in Fig. 18. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_52",
"text": " Depth v.s. breadth. These large training sets are expanded from depth or breadth. VGGface2 provides a large-scale training dataset of depth, which have limited number of subjects but many images for each subjects. The depth of dataset enforces the trained model to address a wide range intra-class variations, such as lighting, age, and pose. In contrast, MS-Celeb-1M and Mageface (Challenge 2) offers large-scale training datasets of breadth, which contains many subject but limited images for each subjects. The breadth of dataset ensures the trained model to cover the sufficiently variable appearance of various people. Cao et al. conducted a systematic studies on model training using VGGface2 and MS-Celeb-1M, and found an optimal model by first training on MS-Celeb-1M (breadth) and then fine-tuning on VGGface2 (depth). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_53",
"text": " Long tail distribution. The utilization of long tail distribution is different among datasets. For example, in Challenge 2 of MS-Celeb-1M, the novel set specially uses the tailed data to study low-shot learning; central part of the long tail distribution is used by the Challenge 1 of MS-Celeb-1M and images’ number is approximately limited to 100 for each celebrity; VGGface and VGGface2 only use the head part to construct deep databases; Megaface utilizes the whole distribution to contain as many images as possible, the minimal number of images is 3 per person and the maximum is 2469. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_54",
"text": " Data engineering. Several popular benchmarks, such as LFW unrestricted protocol, Megaface Challenge 1, MS-Celeb-1M Challenge 1&2, explicitly encourage researchers to collect and clean a large-scale data set for enhancing the capability of deep neural network. Although data engineering is a valuable problem to computer vision researchers, this protocol is more incline to the industry participants. As evidence, the leaderboards of these experiments are mostly occupied by the companies holding invincible hardwares and data scales. This phenomenon may not be beneficial for developments of new models in academic community. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_55",
"text": " Data noise. Owing to data source and collecting strategies, existing large-scale datasets invariably contain label noises. Wang et al. profiled the noise distribution in existing datasets in Fig. 19 and showed that the noise percentage increases dramatically along the scale of data. Moreover, they found that noise is more lethal on a 10,000-class problem of FR than on a 10-class problem of object classification and that label flip noise severely deteriorates the performance of a model, especially the model using A-softmax . Therefore, building a sufficiently large and clean dataset for academic research is very meaningful. Deng et al. found there are serious label noise in MS-Celeb-1M , and they cleaned the noise of MS-Celeb-1M, and made the refined dataset public available. Microsoft and Deepglint jointly released the largest public data set with cleaned labels, which includes 4M images cleaned from MS-Celeb-1M dataset and 2.8M aligned images of 100K Asian celebrities. Moreover, Zhan et al. shifted the focus from cleaning the datasets to leveraging more unlabeled data. Through automatically assigning pseudo labels to unlabeled data with the help of relational graphs, they obtained competitive or even better results over the fully-supervised counterpart. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_56",
"text": " Data bias. Large-scale training datasets, such as CASIA-WebFace , VGGFace2 and MS-Celeb-1M , are typically constructed by scraping websites like Google Images, and consist of celebrities on formal occasions: smiling, make-up, young, and beautiful. They are largely different from databases captured in the daily life (e.g. Megaface). The biases can be attributed to many exogenous factors in data collection, such as cameras, lightings, preferences over certain types of backgrounds, or annotator tendencies. Dataset biases adversely affect cross-dataset generalization; that is, the performance of the model trained on one dataset drops significantly when applied to another one. One persuasive evidence is presented by P.J. Phillips’ study which conducted a cross benchmark assessment of VGGFace model for face recognition. The VGGFace model achieves 98.95% on LFW and 97.30% on YTF , but only obtains 26%, 52% and 85% on Ugly, Bad and Good partition of GBU database . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_57",
"text": " Demographic bias (e.g., race/ethnicity, gender, age) in datasets is a universal but urgent issue to be solved in data bias field. In existing training and testing datasets, the male, White, and middle-aged cohorts always appear more frequently, as shown in Table VII, which inevitably causes deep learning models to replicate and even amplify these biases resulting in significantly different accuracies when deep models are applied to different demographic groups. Some researches (145, 171, 172) showed that the female, Black, and younger cohorts are usually more difficult to recognize in FR systems trained with commonly-used datasets. For example, Wang et al. proposed a Racial Faces in-the-Wild (RFW) database and proved that existing commercial APIs and the SOTA algorithms indeed work unequally for different races and the maximum difference in error rate between the best and worst groups is 12%, as shown in Table VIII. Hupont et al. showed that SphereFace has a TAR of 0.87 for White males which drops to 0.28 for Asian females, at a FAR of 1e−41𝑒41e-4. Such bias can result in mistreatment of certain demographic groups, by either exposing them to a higher risk of fraud, or by making access to services more difficult. Therefore, addressing data bias and enhancing fairness of FR systems in real life are urgent and necessary tasks. Collecting balanced data to train a fair model or designing some debiasing algorithms are effective way. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_58",
"text": " In terms of training protocol, FR can be categorized into subject-dependent and subject-independent settings, as illustrated in Fig. 20. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_59",
"text": " Subject-dependent protocol. For subject-dependent protocol, all testing identities are predefined in training set, it is natural to classify testing face images to the given identities. Therefore, subject-dependent FR can be well addressed as a classification problem, where features are expected to be separable. The protocol is mostly adopted by the early-stage (before 2010) FR studies on FERET , AR , and is suitable only for some small-scale applications. The Challenge 2 of MS-Celeb-1M is the only large-scale database using subject-dependent training protocol. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_60",
"text": " Subject-independent protocol. For subject-independent protocol, the testing identities are usually disjoint from the training set, which makes FR more challenging yet close to practice. Because it is impossible to classify faces to known identities in training set, generalized representation is essential. Due to the fact that human faces exhibit similar intra-subject variations, deep models can display transcendental generalization ability when training with a sufficiently large set of generic subjects, where the key is to learn discriminative large-margin deep features. This generalization ability makes subject-independent FR possible. Almost all major face-recognition benchmarks, such as LFW , PaSC , IJB-A/B/C (41, 42, 43) and Megaface (44, 164), require the tested models to be trained under subject-independent protocol. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_61",
"text": " In order to evaluate whether our deep models can solve the different problems of FR in real life, many testing datasets are designed to evaluate the models in different tasks, i.e. face verification, close-set face identification and open-set face identification. In either task, a set of known subjects is initially enrolled in the system (the gallery), and during testing, a new subject (the probe) is presented. Face verification computes one-to-one similarity between the gallery and probe to determine whether the two images are of the same subject, whereas face identification computes one-to-many similarity to determine the specific identity of a probe face. When the probe appears in the gallery identities, this is referred to as closed-set identification; when the probes include those who are not in the gallery, this is open-set identification. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_62",
"text": " Face verification is relevant to access control systems, re-identification, and application independent evaluations of FR algorithms. It is classically measured using the receiver operating characteristic (ROC) and estimated mean accuracy (Acc). At a given threshold (the independent variable), ROC analysis measures the true accept rate (TAR), which is the fraction of genuine comparisons that correctly exceed the threshold, and the false accept rate (FAR), which is the fraction of impostor comparisons that incorrectly exceed the threshold. And Acc is a simplified metric introduced by LFW , which represents the percentage of correct classifications. With the development of deep FR, more accurate recognitions are required. Customers concern more about the TAR when FAR is kept in a very low rate in most security certification scenario. PaSC reports TAR at a FAR of 10−2superscript10210^{-2}; IJB-A evaluates TAR at a FAR of 10−3superscript10310^{-3}; Megaface (44, 164) focuses on TAR@10−6superscript10610^{-6}FAR; especially, in MS-celeb-1M challenge 3 , TAR@10−9superscript10910^{-9}FAR is reported. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_63",
"text": " Close-set face identification is relevant to user driven searches (e.g., forensic identification), rank-N and cumulative match characteristic (CMC) is commonly used metrics in this scenario. Rank-N is based on what percentage of probe searches return the probe’s gallery mate within the top k𝑘k rank-ordered results. The CMC curve reports the percentage of probes identified within a given rank (the independent variable). IJB-A/B/C (41, 42, 43) concern on the rank-1 and rank-5 recognition rate. The MegaFace challenge (44, 164) systematically evaluates rank-1 recognition rate function of increasing number of gallery distractors (going from 10 to 1 Million), the results of the SOTA evaluated on MegaFace challenge are listed in Table IX. Rather than rank-N and CMC, MS-Celeb-1M further applies a precision-coverage curve to measure identification performance under a variable threshold t𝑡t. The probe is rejected when its confidence score is lower than t𝑡t. The algorithms are compared in term of what fraction of passed probes, i.e. coverage, with a high recognition precision, e.g. 95% or 99%, the results of the SOTA evaluated on MS-Celeb-1M challenge are listed in Table X. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_64",
"text": " Open-set face identification is relevant to high throughput face search systems (e.g., de-duplication, watch list identification), where the recognition system should reject unknown/unseen subjects (probes who do not present in gallery) at test time. At present, there are very few databases covering the task of open-set FR. IJB-A/B/C (41, 42, 43) benchmarks introduce a decision error tradeoff (DET) curve to characterize the the false negative identification rate (FNIR) as function of the false positive identification rate (FPIR). FPIR measures what fraction of comparisons between probe templates and non-mate gallery templates result in a match score exceeding T𝑇T. At the same time, FNIR measures what fraction of probe searches will fail to match a mated gallery template above a score of T𝑇T. The algorithms are compared in term of the FNIR at a low FPIR, e.g. 1% or 10%, the results of the SOTA evaluated on IJB-A dataset as listed in Table XI. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_65",
"text": " Public available training databases are mostly collected from the photos of celebrities due to privacy issue, it is far from images captured in the daily life with diverse scenes. In order to study different specific scenarios, more difficult and realistic datasets are constructed accordingly, as shown in Table XII. According to their characteristics, we divide these scenes into four categories: cross-factor FR, heterogenous FR, multiple (or single) media FR and FR in industry (Fig. 21). ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_66",
"text": " • Cross-factor FR. Due to the complex nonlinear facial appearance, some variations will be caused by people themselves, such as cross-pose, cross-age, make-up, and disguise. For example, CALFW , MORPH , CACD and FG-NET are commonly used datasets with different age range; CFP only focuses on frontal and profile face, CPLFW is extended from LFW and contains different poses. Disguised faces in the wild (DFW) evaluates face recognition across disguise. • Heterogenous FR. It refers to the problem of matching faces across different visual domains. The domain gap is mainly caused by sensory devices and cameras settings, e.g. visual light vs. near-infrared and photo vs. sketch. For example, CUFSF and CUFS are commonly used photo-sketch datasets and CUFSF dataset is harder due to lighting variation and shape exaggeration. • Multiple (or single) media FR. Ideally, in FR, many images of each subject are provided in training datasets and image-to-image recognitions are performed when testing. But the situation will be different in reality. Sometimes, the number of images per person in training set could be very small, such as MS-Celeb-1M challenge 2 . This challenge is often called low- shot or few-shot FR. Moreover, each subject face in test set may be enrolled with a set of images and videos and set-to-set recognition should be performed, such as IJB-A and PaSC . • FR in industry. Although deep FR has achieved beyond human performance on some standard benchmarks, but some other factors should be given more attention rather than accuracy when deep FR is adopted in industry, e.g. anti-attack (CASIA-FASD ) and 3D FR (Bosphorus , BU-3DFE and FRGCv2 ). Compared to publicly available 2D face databases, 3D scans are hard to acquire, and the number of scans and subjects in public 3D face databases is still limited, which hinders the development of 3D deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_67",
"text": " Despite the high accuracy in the LFW and Megaface (44, 164) benchmarks, the performance of FR models still hardly meets the requirements in real-world application. A conjecture in industry is made that results of generic deep models can be improved simply by collecting big datasets of the target scene. However, this holds only to a certain degree. More and more concerns on privacy may make the collection and human-annotation of face data become illegal in the future. Therefore, significant efforts have been paid to design excellent algorithms to address the specific problems with limited data in these realistic scenes. In this section, we present several special algorithms of FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_68",
"text": " As shows that many existing algorithms suffer a decrease of over 10% from frontal-frontal to frontal-profile verification, cross-pose FR is still an extremely challenging scene. In addition to the aforementioned methods, including “one-to-many augmentation”, “many-to-one normalization” and assembled networks (Section 4 and 3.2.2), there are some other algorithms designed for cross-pose FR. Considering the extra burden of above methods, Cao et al. attempted to perform frontalization in the deep feature space rather than the image space. A deep residual equivariant mapping (DREAM) block dynamically added residuals to an input representation to transform a profile face to a frontal image. Chen et al. proposed to combine feature extraction with multi-view subspace learning to simultaneously make features be more pose-robust and discriminative. Pose Invariant Model (PIM) jointly performed face frontalization and learned pose invariant representations end-to-end to allow them to mutually boost each other, and further introduced unsupervised cross-domain adversarial training and a learning to learn strategy to provide high-fidelity frontal reference face images. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_69",
"text": " Cross-age FR is extremely challenging due to the changes in facial appearance by the aging process over time. One direct approach is to synthesize the desired image with target age such that the recognition can be performed in the same age group. A generative probabilistic model was used by to model the facial aging process at each short-term stage. The identity-preserved conditional generative adversarial networks (IPCGANs) framework utilized a conditional-GAN to generate a face in which an identity-preserved module preserved the identity information and an age classifier forced the generated face with the target age. Antipov et al. proposed to age faces by GAN, but the synthetic faces cannot be directly used for face verification due to its imperfect preservation of identities. Then, they used a local manifold adaptation (LMA) approach to solve the problem of . In , high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales to generate more lifelike facial details. An alternative to address the cross-age problem is to decompose aging and identity components separately and extract age-invariant representations. Wen et al. developed a latent identity analysis (LIA) layer to separate these two components, as shown in Fig. 22. In , age-invariant features were obtained by subtracting age-specific factors from the representations with the help of the age estimation task. In , face features are decomposed in the spherical coordinate system, in which the identity-related components are represented with angular coordinates and the age-related information is encoded with radial coordinate. Additionally, there are other methods designed for cross-age FR. For example, Bianco ett al. and El et al. fine-tuned the CNN to transfer knowledge across age. Wang et al. proposed a siamese deep network to perform multi-task learning of FR and age estimation. Li et al. integrated feature extraction and metric learning via a deep CNN. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_70",
"text": " Makeup is widely used by the public today, but it also brings challenges for FR due to significant facial appearance changes. The research on matching makeup and nonmakeup face images is receiving increasing attention. Li et al. generated nonmakeup images from makeup ones by a bi-level adversarial network (BLAN) and then used the synthesized nonmakeup images for verification as shown in Fig. 23. Sun et al. pretrained a triplet network on videos and fine-tuned it on a small makeup datasets. Specially, facial disguise (214, 228, 229) is a challenging research topic in makeup face recognition. By using disguise accessories such as wigs, beard, hats, mustache, and heavy makeup, disguise introduces two variations: (i) when a person wants to obfuscate his/her own identity, and (ii) another individual impersonates someone else’s identity. Obfuscation increases intra-class variations whereas impersonation reduces the inter-class dissimilarity, thereby affecting face recognition/verification task. To address this issue, a variety of methods are proposed. Zhang et al. first trained two DCNNs for generic face recognition and then used Principal Components Analysis (PCA) to find the transformation matrix for disguised face recognition adaptation. Kohli et al. finetuned models using disguised faces. Smirnov et al. proposed a hard example mining method benefitted from class-wise (Doppelganger Mining ) and example-wise mining to learn useful deep embeddings for disguised face recognition. Suri et al. learned the representations of images in terms of colors, shapes, and textures (COST) using an unsupervised dictionary learning method, and utilized the combination of COST features and CNN features to perform recognition. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_71",
"text": " Due to the excellent performance of the near-infrared spectrum (NIS) images under low-light scenarios, NIS images are widely applied in surveillance systems. Because most enrolled databases consist of visible light (VIS) spectrum images, how to recognize a NIR face from a gallery of VIS images has been a hot topic. Saxena et al. and Liu et al. transferred the VIS deep networks to the NIR domain by fine-tuning. Lezama et al. used a VIS CNN to recognize NIR faces by transforming NIR images to VIS faces through cross-spectral hallucination and restoring a low-rank structure for features through low-rank embedding. Reale et al. trained a VISNet (for visible images) and a NIRNet (for near-infrared images), and coupled their output features by creating a siamese network. He et al. (238, 239) divided the high layer of the network into a NIR layer, a VIS layer and a NIR-VIS shared layer, then, a modality-invariant feature can be learned by the NIR-VIS shared layer. Song et al. embedded cross-spectral face hallucination and discriminative feature learning into an end-to-end adversarial network. In , the low-rank relevance and cross-modal ranking were used to alleviate the semantic gap. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_72",
"text": " Although deep networks are robust to low resolution to a great extent, there are still a few studies focused on promoting the performance of low-resolution FR. For example, Zangeneh et al. proposed a CNN with a two-branch architecture (a super-resolution network and a feature extraction network) to map the high- and low-resolution face images into a common space where the intra-person distance is smaller than the inter-person distance. Shen et al. exploited the face semantic information and local structural constraints to better restore the shape and detail of face images. In addition, they optimized the network with perceptual and adversarial losses to produce photo-realistic results. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_73",
"text": " The photo-sketch FR may help law enforcement to quickly identify suspects. The commonly used methods can be categorized as two classes. One is to utilize transfer learning to directly match photos to sketches. Deep networks are first trained using a large face database of photos and are then fine-tuned using small sketch database (243, 244). The other is to use the image-to-image translation, where the photo can be transformed to a sketch or the sketch to a photo; then, FR can be performed in one domain. Zhang et al. developed a fully convolutional network with generative loss and a discriminative regularizer to transform photos to sketches. Zhang et al. utilized a branched fully convolutional neural network (BFCN) to generate a structure-preserved sketch and a texture-preserved sketch, and then they fused them together via a probabilistic method. Recently, GANs have achieved impressive results in image generation. Yi et al. , Kim et al. and Zhu et al. used two generators, GAsubscript𝐺𝐴G_{A} and GBsubscript𝐺𝐵G_{B}, to generate sketches from photos and photos from sketches, respectively (Fig. 24). Based on , Wang et al. proposed a multi-adversarial network to avoid artifacts by leveraging the implicit presence of feature maps of different resolutions in the generator subnetwork. Similar to photo-sketch FR, photo-caricature FR is one kind of heterogenous FR scenes which is challenging and important to understanding of face perception. Huo et al. built a large dataset of caricatures and photos, and provided several evaluation protocols and their baseline performances for comparison. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_74",
"text": " For many practical applications, such as surveillance and security, the FR system should recognize persons with a very limited number of training samples or even with only one sample. The methods of low-shot learning can be categorized as 1) synthesizing training data and 2) learning more powerful features. Hong et al. generated images in various poses using a 3D face model and adopted deep domain adaptation to handle other variations, such as blur, occlusion, and expression (Fig. 25). Choe et al. used data augmentation methods and a GAN for pose transition and attribute boosting to increase the size of the training dataset. Wu et al. proposed a framework with hybrid classifiers using a CNN and a nearest neighbor (NN) model. Guo et al. made the norms of the weight vectors of the one-shot classes and the normal classes aligned to address the data imbalance problem. Cheng et al. proposed an enforced softmax that contains optimal dropout, selective attenuation, L2 normalization and model-level optimization. Yin et al. augmented feature space of low-shot classes by transferring the principal components from regular to low-shot classes to encourage the variance of low-shot classes to mimic that of regular classes. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_75",
"text": " Different from traditional image-to-image recognition, set-to-set recognition takes a set (heterogeneous contents containing both images and videos) as the smallest unit of representation. This kind of setting does reflect the real-world biometric scenarios, thereby attracting a lot of attention. After learning face representations of media in each set, two strategies are generally adopted to perform set-to-set matching. One is to use these representations to perform pair-wise similarity comparison of two sets and aggregate the results into a single and final score by max score pooling , average score pooling and its variations (253, 254). The other strategy is feature pooling (96, 103, 81) which first aggregates face representations into a single representation for each set and then performs a comparison between two sets. In addition to the commonly used strategies, there are also some novel methods proposed for set/template-based FR. For example, Hayat et al. proposed a deep heterogeneous feature fusion network to exploit the features’ complementary information generated by different CNNs. Liu et al. introduced the actor-critic reinforcement learning for set-based FR. They casted the inner-set dependency modeling to a Markov decision process in the latent space, and trained a dependency-aware attention control agent to make attention control for each image in each step. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_76",
"text": " There are two key issues in video FR: one is to integrate the information across different frames together to build a representation of the video face, and the other is to handle video frames with severe blur, pose variations, and occlusions. For frame aggregation, Yang et al. proposed a neural aggregation network (NAN) in which the aggregation module, consisting of two attention blocks driven by a memory, produces a 128-dimensional vector representation (Fig. 26). Rao et al. aggregated raw video frames directly by combining the idea of metric learning and adversarial learning. For dealing with bad frames, Rao et al. discarded the bad frames by treating this operation as a Markov decision process and trained the attention model through a deep reinforcement learning framework. Ding et al. artificially blurred clear images for training to learn blur-robust face representations. Parchami et al. used a CNN to reconstruct a lower-quality video into a high-quality face. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_77",
"text": " 3D FR has inherent advantages over 2D methods, but 3D deep FR is not well developed due to the lack of large annotated 3D data. To enlarge 3D training datasets, most works use the methods of “one-to-many augmentation” to synthesize 3D faces. However, the effective methods for extracting deep features of 3D faces remain to be explored. Kim et al. fine-tuned a 2D CNN with a small amount of 3D scans for 3D FR. Zulqarnain et al. used a three-channel (corresponding to depth, azimuth and elevation angles of the normal vector) image as input and minimized the average prediction log-loss. Zhang et al. first selected 30 feature points from the Candide-3 face model to characterize faces, then conducted the unsupervised pretraining of face depth data, and finally performed the supervised fine-tuning. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_78",
"text": " Partial FR, in which only arbitrary-size face patches are presented, has become an emerging problem with increasing requirements of identification from CCTV cameras and embedded vision systems in mobile devices, robots and smart home facilities. He et al. divided the aligned face image into several multi-scale patches, and the dissimilarity between two partial face images is calculated as the weighted L2 distance between corresponding patches. Dynamic feature matching (DFM) utilized a sliding window of the same size as the probe feature maps to decompose the gallery feature maps into several gallery sub-feature maps, and the similarity-guided constraint imposed on sparse representation classification (SRC) provides an alignment-free matching. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_79",
"text": " With the emergence of mobile phones, tablets and augmented reality, FR has been applied in mobile devices. Due to computational limitations, the recognition tasks in these devices need to be carried out in a light but timely fashion. MobiFace required efficient memory and low cost operators by adopting fast downsampling and bottleneck residual block, and achieves 99.7% on LFW database and 91.3% on Megaface database. Tadmor et al. proposed a multibatch method that first generates signatures for a minibatch of k𝑘k face images and then constructs an unbiased estimate of the full gradient by relying on all k2−ksuperscript𝑘2𝑘k^{2}-k pairs from the minibatch. As mentioned in Section 3.2.1, light-weight deep networks (126, 127, 128, 129) perform excellently in the fundamental tasks of image classification and deserve further attention in FR tasks. Moreover, some well-known compressed networks such as Pruning (264, 265, 266), BinaryNets (267, 268, 269, 270), Mimic Networks (271, 272), also have potential to be introduced into FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_80",
"text": " With the success of FR techniques, various types of attacks, such as face spoofing and adversarial perturbations, are becoming large threats. Face spoofing involves presenting a fake face to the biometric sensor using a printed photograph, worn mask, or even an image displayed on another electronic device. In order to defense this type of attack, several methods are proposed (211, 273, 274, 275, 276, 277, 278, 279). Atoum et al. proposed a novel two-stream CNN in which the local features discriminate the spoof patches that are independent of the spatial face areas, and holistic depth maps ensure that the input live sample has a face-like depth. Yang et al. trained a CNN using both a single frame and multiple frames with five scales as input, and using the live/spoof label as the output. Taken the sequence of video frames as input, Xu et al. applied LSTM units on top of CNN to obtain end-to-end features to recognize spoofing faces which leveraged the local and dense property from convolution operation and learned the temporal structure using LSTM units. Li et al. and Patel et al. fine-tuned their networks from a pretrained model by training sets of real and fake images. Jourabloo et al. proposed to inversely decompose a spoof face into the live face and the spoof noise pattern. Adversarial perturbation is the other type of attack which can be defined as the addition of a minimal vector r𝑟r such that with addition of this vector into the input image x𝑥x, i.e. (x+r)𝑥𝑟(x+r), the deep learning models misclassifies the input while people will not. Recently, more and more work has begun to focus on solving this perturbation of FR. Goswami et al. proposed to detect adversarial samples by characterizing abnormal filter response behavior in the hidden layers and increase the network’s robustness by removing the most problematic filters. Goel et al. provided an open source implementation of adversarial detection and mitigation algorithms. Despite of progresses of anti-attack algorithms, attack methods are updated as well and remind us the need to further increase security and robustness in FR systems, for example, Mai et al. proposed a neighborly de-convolutional neural network (NbNet) to reconstruct a fake face using the stolen deep templates. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_81",
"text": " As described in Section 5.1, existing datasets are highly biased in terms of the distribution of demographic cohorts, which may dramatically impact the fairness of deep models. To address this issue, there are some works that seek to introduce fairness into face recognition and mitigate demographic bias, e,g. unbalanced-training , attribute removal (284, 285, 286) and domain adaptation (173, 287, 147). 1) Unbalanced-training methods mitigate the bias via model regularization, taking into consideration of the fairness goal in the overall model objective function. For example, RL-RBN formulated the process of finding the optimal margins for non-Caucasians as a Markov decision process and employed deep Q-learning to learn policies based on large margin loss. 2) Attribute removal methods confound or remove demographic information of faces to learn attribute-invariant representations. For example, Alvi et al. applied a confusion loss to make a classifier fail to distinguish attributes of examples so that multiple spurious variations are removed from the feature representation. SensitiveNets proposed to introduce sensitive information into triplet loss. They minimized the sensitive information, while maintaining distances between positive and negative embeddings. 3) Domain adaptation methods propose to investigate data bias problem from a domain adaptation point of view and attempt to design domain-invariant feature representations to mitigate bias across domains. IMAN simultaneously aligned global distribution to decrease race gap at domain-level, and learned the discriminative target representations at cluster level. Kan directly converted the Caucasian data to non-Caucasian domain in the image space with the help of sparse reconstruction coefficients learnt in the common subspace. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_82",
"text": " In this paper, we provide a comprehensive survey of deep FR from both data and algorithm aspects. For algorithms, mainstream and special network architectures are presented. Meanwhile, we categorize loss functions into Euclidean-distance-based loss, angular/cosine-margin-based loss and variable softmax loss. For data, we summarize some commonly used datasets. Moreover, the methods of face processing are introduced and categorized as “one-to-many augmentation” and “many-to-one normalization”. Finally, the special scenes of deep FR, including video FR, 3D FR and cross-age FR, are briefly introduced. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_83",
"text": " Taking advantage of big annotated data and revolutionary deep learning techniques, deep FR has dramatically improved the SOTA performance and fostered successful real-world applications. With the practical and commercial use of this technology, many ideal assumptions of academic research were broken, and more and more real-world issues are emerging. To the best our knowledge, major technical challenges include the following aspects. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_84",
"text": " • Security issues. Presentation attack , adversarial attack (280, 281, 290), template attack and digital manipulation attack (292, 293) are developing to threaten the security of deep face recognition systems. 1) Presentation attack with 3D silicone mask, which exhibits skin-like appearance and facial motion, challenges current anti-sproofing methods . 2) Although adversarial perturbation detection and mitigation methods are recently proposed , the root cause of adversarial vulnerability is unclear and thus new types of adversarial attacks are still upgraded continuously (295, 296). 3) The stolen deep feature template can be used to recover its facial appearance, and how to generate cancelable template without loss of accuracy is another important issue. 4) Digital manipulation attack, made feasible by GANs, can generate entirely or partially modified photorealistic faces by expression swap, identity swap, attribute manipulation and entire face synthesis, which remains a main challenge for the security of deep FR. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_85",
"text": " • Privacy-preserving face recognition. With the leakage of biological data, privacy concerns are raising nowadays. Facial images can predict not only demographic information such as gender, age, or race, but even the genetic information . Recently, the pioneer works such as Semi-Adversarial Networks (298, 299, 285) have explored to generate a recognizable biometric templates that can hidden some of the private information presented in the facial images. Further research on the principles of visual cryptography, signal mixing and image perturbation to protect users’ privacy on stored face templates are essential for addressing public concern on privacy. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_86",
"text": " • Understanding deep face recognition. Deep face recognition systems are now believed to surpass human performance in most scenarios . There are also some interesting attempts to apply deep models to assist human operators for face verification . Despite this progress, many fundamental questions are still open, such as what is the “identity capacity” of a deep representation ? Why deep neural networks, rather than humans, are easily fooled by adversarial samples? While bigger and bigger training dataset by itself cannot solve this problem, deeper understanding on these questions may help us to build robust applications in real world. Recently, a new benchmark called TALFW has been proposed to explore this issue . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_87",
"text": " • Remaining challenges defined by non-saturated benchmark datasets. Three current major datasets, namely, MegaFace (44, 164) , MS-Celeb-1M and IJB-A/B/C (41, 42, 43), are corresponding to large-scale FR with a very large number of candidates, low/one-shot FR and large pose-variance FR which will be the focus of research in the future. Although the SOTA algorithms can be over 99.9 percent accurate on LFW and Megaface (44, 164) databases, fundamental challenges such as matching faces cross ages , poses , sensors, or styles still remain. For both datasets and algorithms, it is necessary to measure and address the racial/gender/age biases of deep FR in future research. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_88",
"text": " • Ubiquitous face recognition across applications and scenes. Deep face recognition has been successfully applied on many user-cooperated applications, but the ubiquitous recognition applications in everywhere are still an ambitious goal. In practice, it is difficult to collect and label sufficient samples for innumerable scenes in real world. One promising solution is to first learn a general model and then transfer it to an application-specific scene. While deep domain adaptation has recently been applied to reduce the algorithm bias on different scenes , different races , general solution to transfer face recognition is largely open. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_89",
"text": " • Pursuit of extreme accuracy and efficiency. Many killer-applications, such as watch-list surveillance or financial identity verification, require high matching accuracy at very low alarm rate, e.g. 10−9superscript10910^{-9}. It is still a big challenge even with deep learning on massive training data. Meanwhile, deploying deep face recognition on mobile devices pursues the minimum size of feature representation and compressed deep network. It is of great significance for both industry and academic to explore this extreme face-recognition performance beyond human imagination. It is also exciting to constantly push the performance limits of the algorithm after it has already surpassed human. ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_90",
"text": " • Fusion issues. Face recognition by itself is far from sufficient to solve all biometric and forensic tasks, such as distinguishing identical twins and matching faces before and after surgery . A reliable solution is to consolidate multiple sources of biometric evidence . These sources of information may correspond to different biometric traits (e.g., face + hand ), sensors (e.g., 2D + 3D face cameras), feature extraction and matching techniques, or instances (e.g., a face sequence of various poses). It is beneficial for face biometric and forensic applications to perform information fusion at the data level, feature level, score level, rank level, and decision level . ",
"title": "Deep Face Recognition"
},
{
"id": "1804.06655_all_91",
"text": " This work was partially supported by National Key R&D Program of China (2019YFB1406504) and BUPT Excellent Ph.D. Students Foundation CX2020207. ",
"title": "Deep Face Recognition"
}
] |
What is an example of model compression approaches?
|
Different examples exist; one is applying SVD to a pretrained CNN model, which lets us keep only the most effective parameters, i.e. the components associated with the largest singular values of the factorization [3].
|
[
3
] |
[
{
"id": "1602.07360_all_0",
"text": " Much of the recent research on deep convolutional neural networks (CNNs) has focused on increasing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages: ∙∙\\bullet More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, communication overhead is directly proportional to the number of parameters in the model Iandola et al. (2016). In short, small models train faster due to requiring less communication. ∙∙\\bullet Less overhead when exporting new models to clients. For autonomous driving, companies such as Tesla periodically copy new models from their servers to customers’ cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla’s Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates Consumer Reports (2016). However, over-the-air updates of today’s typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. ∙∙\\bullet Feasible FPGA and embedded deployment. FPGAs often have less than 10MB111For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory. of on-chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory bandwidth Qiu et al. (2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASIC to fit on a smaller die. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_1",
"text": " As you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent accuracy compared to a well-known model. We have discovered such an architecture, which we call SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the design space for novel CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_2",
"text": " The rest of the paper is organized as follows. In Section 2 we review the related work. Then, in Sections 3 and 4 we describe and evaluate the SqueezeNet architecture. After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5, we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section 6, we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section 7. In short, Sections 3 and 4 are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_3",
"text": " The overarching goal of our work is to identify a model that has very few parameters while preserving accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Denton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model Denton et al. (2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN Han et al. (2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression Han et al. (2015a), and further designed a hardware accelerator called EIE Han et al. (2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
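To make the SVD-based compression idea above concrete, here is a minimal sketch (a hypothetical helper, not code from Denton et al.) that factorizes a single weight matrix and keeps only the components tied to the largest singular values:

```python
import numpy as np

def low_rank_compress(W, rank):
    # Truncated SVD: W (m x n) is approximated by A @ B with
    # A (m x rank) and B (rank x n), i.e. far fewer parameters when rank is small.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(512, 1024)
A, B = low_rank_compress(W, rank=64)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```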
{
"id": "1602.07360_all_4",
"text": " Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helped to popularize CNNs for digit recognition applications in the late 1980s LeCun et al. (1989). In neural networks, convolution filters are typically 3D, with height, width, and channels as the key dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer Lisubscript𝐿𝑖L_{i} the filters have the same number of channels as Li−1subscript𝐿𝑖1L_{i-1} has filters. The early work by LeCun et al. LeCun et al. (1989) uses 5x5xChannels222From now on, we will simply abbreviate HxWxChannels to HxW. filters, and the recent VGG Simonyan & Zisserman (2014) architectures extensively use 3x3 filters. Models such as Network-in-Network Lin et al. (2013) and the GoogLeNet family of architectures Szegedy et al. (2014); Ioffe & Szegedy (2015); Szegedy et al. (2015; 2016) use 1x1 filters in some layers. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_5",
"text": " With the trend of designing very deep CNNs, it becomes cumbersome to manually select filter dimensions for each layer. To address this, various higher level building blocks, or modules, comprised of multiple convolution layers with a specific fixed organization have been proposed. For example, the GoogLeNet papers propose Inception modules, which are comprised of a number of different dimensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 Szegedy et al. (2014) and sometimes 1x3 and 3x1 Szegedy et al. (2015). Many such modules are then combined, perhaps with additional ad-hoc layers, to form a complete network. We use the term CNN microarchitecture to refer to the particular organization and dimensions of the individual modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_6",
"text": " While the CNN microarchitecture refers to individual layers and modules, we define the CNN macroarchitecture as the system-level organization of multiple modules into an end-to-end CNN architecture. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_7",
"text": " Perhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simoyan and Zisserman proposed the VGG Simonyan & Zisserman (2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset Deng et al. (2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy He et al. (2015a). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_8",
"text": " The choice of connections across multiple layers or modules is an emerging area of CNN macroarchitectural research. Residual Networks (ResNet) He et al. (2015b) and Highway Networks Srivastava et al. (2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_9",
"text": " Neural networks (including deep and convolutional NNs) have a large design space, with numerous options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seems natural that the community would want to gain intuition about how these factors impact a NN’s accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE) of NNs has focused on developing automated approaches for finding NN architectures that deliver higher accuracy. These automated DSE approaches include bayesian optimization Snoek et al. (2012), simulated annealing Ludermir et al. (2006), randomized search Bergstra & Bengio (2012), and genetic algorithms Stanley & Miikkulainen (2002). To their credit, each of these papers provides a case in which the proposed DSE approach produces a NN architecture that achieves higher accuracy compared to a representative baseline. However, these papers make no attempt to provide intuition about the shape of the NN design space. Later in this paper, we eschew automated approaches – instead, we refactor CNNs in such a way that we can do principled A/B comparisons to investigate how CNN architectural decisions influence model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_10",
"text": " In the following sections, we first propose and evaluate the SqueezeNet architecture with and without model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_11",
"text": " In this section, we begin by outlining our design strategies for CNN architectures with few parameters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised mainly of Fire modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_12",
"text": " Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures: ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_13",
"text": " Strategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_14",
"text": " Strategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but also to decrease the number of input channels to the 3x3 filters. We decrease the number of input channels to 3x3 filters using squeeze layers, which we describe in the next section. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_15",
"text": " Strategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2) the choice of layers in which to downsample in the CNN architecture. Most commonly, downsampling is engineered into CNN architectures by setting the (stride >> 1) in some of the convolution or pooling layers (e.g. Szegedy et al. (2014); Simonyan & Zisserman (2014); Krizhevsky et al. (2012)). If early333In our terminology, an “early” layer is close to the input data. layers in the network have large strides, then most layers will have small activation maps. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are concentrated toward the end444In our terminology, the “end” of the network is the classifier. of the network, then many layers in the network will have large activation maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed downsampling to four different CNN architectures, and in each case delayed downsampling led to higher classification accuracy He & Sun (2015). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_16",
"text": " Strategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_17",
"text": " We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure 1. The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1. We expose three tunable dimensions (hyperparameters) in a Fire module: s1x1subscript𝑠1𝑥1s_{1x1}, e1x1subscript𝑒1𝑥1e_{1x1}, and e3x3subscript𝑒3𝑥3e_{3x3}. In a Fire module, s1x1subscript𝑠1𝑥1s_{1x1} is the number of filters in the squeeze layer (all 1x1), e1x1subscript𝑒1𝑥1e_{1x1} is the number of 1x1 filters in the expand layer, and e3x3subscript𝑒3𝑥3e_{3x3} is the number of 3x3 filters in the expand layer. When we use Fire modules we set s1x1subscript𝑠1𝑥1s_{1x1} to be less than (e1x1subscript𝑒1𝑥1e_{1x1} + e3x3subscript𝑒3𝑥3e_{3x3}), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section 3.1. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
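A minimal PyTorch sketch of the Fire module as described above (a 1x1 squeeze layer feeding an expand layer that concatenates 1x1 and 3x3 outputs); the class and argument names are illustrative, and details such as initialization follow PyTorch defaults rather than the released Caffe configuration:

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_channels, s1x1, e1x1, e3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, s1x1, kernel_size=1)        # s_1x1 filters
        self.expand1x1 = nn.Conv2d(s1x1, e1x1, kernel_size=1)             # e_1x1 filters
        self.expand3x3 = nn.Conv2d(s1x1, e3x3, kernel_size=3, padding=1)  # e_3x3 filters
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # concatenate the two expand branches along the channel dimension
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# e.g. a fire2-like module: 96 input channels, s1x1=16, e1x1=64, e3x3=64 -> 128 output channels
y = Fire(96, 16, 64, 64)(torch.randn(1, 96, 55, 55))
```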
{
"id": "1602.07360_all_18",
"text": " We now describe the SqueezeNet CNN architecture. We illustrate in Figure 2 that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section 3.1. We present the full SqueezeNet architecture in Table 1. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_19",
"text": " For brevity, we have omitted number of details and design choices about SqueezeNet from Table 1 and Figure 2. We provide these design choices in the following. The intuition behind these choices may be found in the papers cited below. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_20",
"text": " ∙∙\\bullet So that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ∙∙\\bullet ReLU Nair & Hinton (2010) is applied to activations from squeeze and expand layers. ∙∙\\bullet Dropout Srivastava et al. (2014) with a ratio of 50% is applied after the fire9 module. ∙∙\\bullet Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN Lin et al. (2013) architecture. ∙∙\\bullet When training SqueezeNet, we begin with a learning rate of 0.04, and we linearly decrease the learning rate throughout training, as described in Mishkin et al. (2016). For details on the training protocol (e.g. batch size, learning rate, parameter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet. ∙∙\\bullet The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) Jia et al. (2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_21",
"text": " We released the SqueezeNet configuration files in the format defined by the Caffe CNN framework. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet Chen et al. (2015a), Chainer Tokui et al. (2015), Keras Chollet (2016), and Torch Collobert et al. (2011). Each of these has its own native format for representing a CNN architecture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN Chetlur et al. (2014) and MKL-DNN Das et al. (2016). The research community has ported the SqueezeNet CNN architecture for compatibility with a number of other CNN software frameworks: • MXNet Chen et al. (2015a) port of SqueezeNet: Haria (2016) • Chainer Tokui et al. (2015) port of SqueezeNet: Bell (2016) • Keras Chollet (2016) port of SqueezeNet: DT42 (2016) • Torch Collobert et al. (2011) port of SqueezeNet’s Fire Modules: Waghmare (2016) ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_22",
"text": " We now turn our attention to evaluating SqueezeNet. In each of the CNN model compression papers reviewed in Section 2.1, the goal was to compress an AlexNet Krizhevsky et al. (2012) model that was trained to classify images using the ImageNet Deng et al. (2009) (ILSVRC 2012) dataset. Therefore, we use AlexNet555Our baseline is bvlc_alexnet from the Caffe codebase Jia et al. (2014). and the associated model compression results as a basis for comparison when evaluating SqueezeNet. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_23",
"text": " In Table 2, we review SqueezeNet in the context of recent model compression results. The SVD-based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% Denton et al. (2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet Han et al. (2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level Han et al. (2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_24",
"text": " It appears that we have surpassed the state-of-the-art results from the model compression community: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4×1.4\\times smaller model size than the best efforts from the model compression community while maintaining or exceeding the baseline accuracy. Until now, an open question has been: are small models amenable to compression, or do small models “need” all of the representational power afforded by dense floating-point values? To find out, we applied Deep Compression Han et al. (2015a) to SqueezeNet, using 33% sparsity666Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3×3\\times decrease in model size. and 8-bit quantization. This yields a 0.66 MB model (363×363\\times smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compression with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510×510\\times smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_25",
"text": " In addition, these results demonstrate that Deep Compression Han et al. (2015a) not only works well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able to compress the already compact, fully convolutional SqueezeNet architecture. Deep Compression compressed SqueezeNet by 10×10\\times while preserving the baseline accuracy. In summary: by combining CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep Compression), we achieved a 510×510\\times reduction in model size with no decrease in accuracy compared to the baseline. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_26",
"text": " Finally, note that Deep Compression Han et al. (2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 328=4x3284𝑥\\frac{32}{8}=4x with 8-bit quantization or 326=5.3x3265.3𝑥\\frac{32}{6}=5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware – Efficient Inference Engine (EIE) – that can compute codebook-quantized CNNs more efficiently Han et al. (2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits Gysel (2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_27",
"text": " So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5 and 6, we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers). ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_28",
"text": " In this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposed in Section 3.1. Note that our goal here is not to maximize accuracy in every experiment, but rather to understand the impact of CNN architectural choices on model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_29",
"text": " In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Section 3.2: s1x1subscript𝑠1𝑥1s_{1x1}, e1x1subscript𝑒1𝑥1e_{1x1}, and e3x3subscript𝑒3𝑥3e_{3x3}. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define basee𝑏𝑎𝑠subscript𝑒𝑒base_{e} as the number of expand filters in the first Fire module in a CNN. After every freq𝑓𝑟𝑒𝑞freq Fire modules, we increase the number of expand filters by incre𝑖𝑛𝑐subscript𝑟𝑒incr_{e}. In other words, for Fire module i𝑖i, the number of expand filters is ei=basee+(incre∗⌊ifreq⌋e_{i}=base_{e}+(incr_{e}*{\\left\\lfloor{\\frac{i}{freq}}\\right\\rfloor}). In the expand layer of a Fire module, some filters are 1x1 and some are 3x3; we define ei=ei,1x1+ei,3x3subscript𝑒𝑖subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3e_{i}=e_{i,{1x1}}+e_{i,{3x3}} with pct3x3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} (in the range (0,1)01(0,1), shared over all Fire modules) as the percentage of expand filters that are 3x3. In other words, ei,3x3=ei∗pct3x3subscript𝑒𝑖3𝑥3subscript𝑒𝑖𝑝𝑐subscript𝑡3𝑥3e_{i,{3x3}}=e_{i}*pct_{3x3}, and ei,1x1=ei∗(1−pct3x3)subscript𝑒𝑖1𝑥1subscript𝑒𝑖1𝑝𝑐subscript𝑡3𝑥3e_{i,{1x1}}=e_{i}*(1-pct_{3x3}). Finally, we define the number of filters in the squeeze layer of a Fire module using a metaparameter called the squeeze ratio (SR) (again, in the range (0,1)01(0,1), shared by all Fire modules): si,1x1=SR∗eisubscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖s_{i,{1x1}}=SR*e_{i} (or equivalently si,1x1=SR∗(ei,1x1+ei,3x3)subscript𝑠𝑖1𝑥1𝑆𝑅subscript𝑒𝑖1𝑥1subscript𝑒𝑖3𝑥3s_{i,{1x1}}=SR*(e_{i,{1x1}}+e_{i,{3x3}})). SqueezeNet (Table 1) is an example architecture that we generated with the aforementioned set of metaparameters. Specifically, SqueezeNet has the following metaparameters: basee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, incre=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, pct3x3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, freq=2𝑓𝑟𝑒𝑞2freq=2, and SR=0.125𝑆𝑅0.125SR=0.125. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
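As a worked example of the metaparameters defined above, the short (hypothetical) helper below computes each Fire module's dimensions from base_e, incr_e, freq, pct_3x3 and SR; with SqueezeNet's own settings it reproduces the first Fire module's dimensions (s_1x1=16, e_1x1=64, e_3x3=64):

```python
def fire_dims(i, base_e=128, incr_e=128, freq=2, pct_3x3=0.5, sr=0.125):
    # i is the Fire-module index starting at 0; the expand width grows every `freq` modules.
    e_i = base_e + incr_e * (i // freq)
    e_3x3 = int(e_i * pct_3x3)
    e_1x1 = e_i - e_3x3
    s_1x1 = int(sr * e_i)
    return s_1x1, e_1x1, e_3x3

print(fire_dims(0))  # (16, 64, 64) for the first Fire module
print(fire_dims(7))  # (64, 256, 256) for the last (8th) Fire module
```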
{
"id": "1602.07360_all_30",
"text": " In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_31",
"text": " In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: basee=128𝑏𝑎𝑠subscript𝑒𝑒128base_{e}=128, incre=128𝑖𝑛𝑐subscript𝑟𝑒128incr_{e}=128, pct3x3=0.5𝑝𝑐subscript𝑡3𝑥30.5pct_{3x3}=0.5, and freq=2𝑓𝑟𝑒𝑞2freq=2. We train multiple models, where each model has a different squeeze ratio (SR)777Note that, for a given model, all Fire layers share the same squeeze ratio. in the range (0.125, 1.0). In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR=0.125 point in this figure.888Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers. From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR=0.75 (a 19MB model), and setting SR=1.0 further increases model size without improving accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_32",
"text": " In Section 3.1, we proposed decreasing the number of parameters in a CNN by replacing some 3x3 filters with 1x1 filters. An open question is, how important is spatial resolution in CNN filters? The VGG Simonyan & Zisserman (2014) architectures have 3x3 spatial resolution in most layers’ filters; GoogLeNet Szegedy et al. (2014) and Network-in-Network (NiN) Lin et al. (2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.999To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3. Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_33",
"text": " We use the following metaparameters in this experiment: basee=incre=128𝑏𝑎𝑠subscript𝑒𝑒𝑖𝑛𝑐subscript𝑟𝑒128base_{e}=incr_{e}=128, freq=2𝑓𝑟𝑒𝑞2freq=2, SR=0.500𝑆𝑅0.500SR=0.500, and we vary pct3x3𝑝𝑐subscript𝑡3𝑥3pct_{3x3} from 1% to 99%. In other words, each Fire module’s expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from “mostly 1x1” to “mostly 3x3”. As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR=0.500𝑆𝑅0.500SR=0.500 and pct3x3=50%𝑝𝑐subscript𝑡3𝑥3percent50pct_{3x3}=50\\%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_34",
"text": " So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet He et al. (2015b), we explored three different architectures: ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_35",
"text": " ∙∙\\bullet Vanilla SqueezeNet (as per the prior sections). ∙∙\\bullet SqueezeNet with simple bypass connections between some Fire modules. (Inspired by Srivastava et al. (2015); He et al. (2015b).) ∙∙\\bullet SqueezeNet with complex bypass connections between the remaining Fire modules. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_36",
"text": " We illustrate these three variants of SqueezeNet in Figure 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_37",
"text": " Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_38",
"text": " One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Fig 2. When the “same number of channels” requirement can’t be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is “just a wire,” we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
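As a hedged illustration of the two bypass variants described above (names and channel handling are assumptions, not the paper's code), a simple bypass is plain elementwise addition, while a complex bypass routes the shortcut through a 1x1 convolution so the channel counts match:

```python
import torch.nn as nn

def simple_bypass(fire, x):
    # valid only when fire(x) and x have the same number of channels
    return fire(x) + x

class ComplexBypass(nn.Module):
    def __init__(self, fire, in_channels, out_channels):
        super().__init__()
        self.fire = fire
        # 1x1 convolution on the shortcut adapts the channel count (adds parameters)
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.fire(x) + self.proj(x)
```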
{
"id": "1602.07360_all_39",
"text": " In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_40",
"text": " We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy accuracy improvement than complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_41",
"text": " In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50×50\\times fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510×510\\times smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression. Using a new approach called Dense-Sparse-Dense (DSD) Han et al. (2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_42",
"text": " We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA Gschwend (2016). As we anticipated, Gschwend was able to able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_43",
"text": " In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition Zhang et al. (2013); Donahue et al. (2013), logo identification in images Iandola et al. (2015), and generating sentences about images Fang et al. (2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images Iandola et al. (2014); Girshick et al. (2015); Ashraf et al. (2016) and videos Chen et al. (2015b), as well as segmenting the shape of the road Badrinarayanan et al. (2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
},
{
"id": "1602.07360_all_44",
"text": " SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner. ",
"title": "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and<0.5MB model size"
}
] |
Is it crucial to use 6 layers in the encoder? If it is free to change, does increasing the number of layers require more data to avoid overfitting, or would it just take longer to converge?
|
For translation tasks, the results show that 6 layers is the optimal number of layers [10].
|
[
10
] |
[
{
"id": "1706.03762_all_0",
"text": " Recurrent neural networks, long short-term memory and gated recurrent neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation (35, 2, 5). Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures (38, 24, 15). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_1",
"text": " Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states htsubscriptℎ𝑡h_{t}, as a function of the previous hidden state ht−1subscriptℎ𝑡1h_{t-1} and the input for position t𝑡t. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks and conditional computation , while also improving model performance in case of the latter. The fundamental constraint of sequential computation, however, remains. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_2",
"text": " Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences (2, 19). In all but a few cases , however, such attention mechanisms are used in conjunction with a recurrent network. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_3",
"text": " In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_4",
"text": " The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU , ByteNet and ConvS2S , all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions . In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_5",
"text": " Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations (4, 27, 28, 22). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_6",
"text": " End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_7",
"text": " To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as (17, 18) and . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_8",
"text": " Most competitive neural sequence transduction models have an encoder-decoder structure (5, 2, 35). Here, the encoder maps an input sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to a sequence of continuous representations 𝐳=(z1,…,zn)𝐳subscript𝑧1…subscript𝑧𝑛\\mathbf{z}=(z_{1},...,z_{n}). Given 𝐳𝐳\\mathbf{z}, the decoder then generates an output sequence (y1,…,ym)subscript𝑦1…subscript𝑦𝑚(y_{1},...,y_{m}) of symbols one element at a time. At each step the model is auto-regressive , consuming the previously generated symbols as additional input when generating the next. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_9",
"text": " The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_10",
"text": " The encoder is composed of a stack of N=6𝑁6N=6 identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection around each of the two sub-layers, followed by layer normalization . That is, the output of each sub-layer is LayerNorm(x+Sublayer(x))LayerNorm𝑥Sublayer𝑥\\mathrm{LayerNorm}(x+\\mathrm{Sublayer}(x)), where Sublayer(x)Sublayer𝑥\\mathrm{Sublayer}(x) is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension dmodel=512subscript𝑑model512d_{\\text{model}}=512. ",
"title": "Attention Is All You Need"
},
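A minimal PyTorch sketch of the LayerNorm(x + Sublayer(x)) pattern described above; the class name is an assumption, and the dropout that the original model also applies to the sub-layer output is omitted for brevity:

```python
import torch.nn as nn

class ResidualNorm(nn.Module):
    # Wraps a sub-layer (self-attention or feed-forward, any module mapping
    # (..., d_model) -> (..., d_model)) with a residual connection followed by
    # layer normalization: LayerNorm(x + Sublayer(x)).
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))
```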
{
"id": "1706.03762_all_11",
"text": " The decoder is also composed of a stack of N=6𝑁6N=6 identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with fact that the output embeddings are offset by one position, ensures that the predictions for position i𝑖i can depend only on the known outputs at positions less than i𝑖i. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_12",
"text": " An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_13",
"text": " We call our particular attention \"Scaled Dot-Product Attention\" (Figure 2). The input consists of queries and keys of dimension dksubscript𝑑𝑘d_{k}, and values of dimension dvsubscript𝑑𝑣d_{v}. We compute the dot products of the query with all keys, divide each by dksubscript𝑑𝑘\\sqrt{d_{k}}, and apply a softmax function to obtain the weights on the values. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_14",
"text": " In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q𝑄Q. The keys and values are also packed together into matrices K𝐾K and V𝑉V. We compute the matrix of outputs as: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_15",
"text": " Attention(Q,K,V)=softmax(QKTdk)VAttention𝑄𝐾𝑉softmax𝑄superscript𝐾𝑇subscript𝑑𝑘𝑉\\mathrm{Attention}(Q,K,V)=\\mathrm{softmax}(\\frac{QK^{T}}{\\sqrt{d_{k}}})V (1) ",
"title": "Attention Is All You Need"
},
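A minimal NumPy sketch of Equation (1) above; the function name and shapes are illustrative, and masking and dropout are omitted:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # scaled query-key compatibility
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values
```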
{
"id": "1706.03762_all_16",
"text": " The two most commonly used attention functions are additive attention , and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_17",
"text": " While for small values of dksubscript𝑑𝑘d_{k} the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of dksubscript𝑑𝑘d_{k} . We suspect that for large values of dksubscript𝑑𝑘d_{k}, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients 111To illustrate why the dot products get large, assume that the components of q𝑞q and k𝑘k are independent random variables with mean 00 and variance 111. Then their dot product, q⋅k=∑i=1dkqiki⋅𝑞𝑘superscriptsubscript𝑖1subscript𝑑𝑘subscript𝑞𝑖subscript𝑘𝑖q\\cdot k=\\sum_{i=1}^{d_{k}}q_{i}k_{i}, has mean 00 and variance dksubscript𝑑𝑘d_{k}.. To counteract this effect, we scale the dot products by 1dk1subscript𝑑𝑘\\frac{1}{\\sqrt{d_{k}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_18",
"text": " Instead of performing a single attention function with dmodelsubscript𝑑modeld_{\\text{model}}-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values hℎh times with different, learned linear projections to dksubscript𝑑𝑘d_{k}, dksubscript𝑑𝑘d_{k} and dvsubscript𝑑𝑣d_{v} dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding dvsubscript𝑑𝑣d_{v}-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_19",
"text": " Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_20",
"text": " MultiHead(Q,K,V)MultiHead𝑄𝐾𝑉\\displaystyle\\mathrm{MultiHead}(Q,K,V) =Concat(head1,…,headh)WOabsentConcatsubscripthead1…subscriptheadhsuperscript𝑊𝑂\\displaystyle=\\mathrm{Concat}(\\mathrm{head_{1}},...,\\mathrm{head_{h}})W^{O} whereheadiwheresubscriptheadi\\displaystyle\\text{where}~{}\\mathrm{head_{i}} =Attention(QWiQ,KWiK,VWiV)absentAttention𝑄subscriptsuperscript𝑊𝑄𝑖𝐾subscriptsuperscript𝑊𝐾𝑖𝑉subscriptsuperscript𝑊𝑉𝑖\\displaystyle=\\mathrm{Attention}(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i}) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_21",
"text": " Where the projections are parameter matrices WiQ∈ℝdmodel×dksubscriptsuperscript𝑊𝑄𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑘W^{Q}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}, WiK∈ℝdmodel×dksubscriptsuperscript𝑊𝐾𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑘W^{K}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{k}}, WiV∈ℝdmodel×dvsubscriptsuperscript𝑊𝑉𝑖superscriptℝsubscript𝑑modelsubscript𝑑𝑣W^{V}_{i}\\in\\mathbb{R}^{d_{\\text{model}}\\times d_{v}} and WO∈ℝhdv×dmodelsuperscript𝑊𝑂superscriptℝℎsubscript𝑑𝑣subscript𝑑modelW^{O}\\in\\mathbb{R}^{hd_{v}\\times d_{\\text{model}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_22",
"text": " In this work we employ h=8ℎ8h=8 parallel attention layers, or heads. For each of these we use dk=dv=dmodel/h=64subscript𝑑𝑘subscript𝑑𝑣subscript𝑑modelℎ64d_{k}=d_{v}=d_{\\text{model}}/h=64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality. ",
"title": "Attention Is All You Need"
},
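A minimal sketch of the multi-head computation described above, assuming the stated h = 8 and d_model = 512 (so d_k = d_v = 64); the random projection matrices merely stand in for learned parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(Q, K, V, WQ, WK, WV, WO):
    heads = []
    for WQi, WKi, WVi in zip(WQ, WK, WV):          # one projection triple per head
        q, k, v = Q @ WQi, K @ WKi, V @ WVi        # project to d_k / d_v
        d_k = q.shape[-1]
        attn = softmax(q @ k.T / np.sqrt(d_k))
        heads.append(attn @ v)                     # (n, d_v) per head
    return np.concatenate(heads, axis=-1) @ WO     # concat, then final projection

n, d_model, h = 5, 512, 8
d_k = d_v = d_model // h                           # 64, as in the paper
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d_model))                  # self-attention: Q = K = V = X
WQ = rng.normal(size=(h, d_model, d_k)); WK = rng.normal(size=(h, d_model, d_k))
WV = rng.normal(size=(h, d_model, d_v)); WO = rng.normal(size=(h * d_v, d_model))
print(multi_head_attention(X, X, X, WQ, WK, WV, WO).shape)  # (5, 512)
```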
{
"id": "1706.03762_all_23",
"text": " The Transformer uses multi-head attention in three different ways: • In \"encoder-decoder attention\" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as (38, 2, 9). • The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder. • Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞-\\infty) all values in the input of the softmax which correspond to illegal connections. See Figure 2. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_24",
"text": " In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_25",
"text": " FFN(x)=max(0,xW1+b1)W2+b2FFN𝑥0𝑥subscript𝑊1subscript𝑏1subscript𝑊2subscript𝑏2\\mathrm{FFN}(x)=\\max(0,xW_{1}+b_{1})W_{2}+b_{2} (2) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_26",
"text": " While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is dmodel=512subscript𝑑model512d_{\\text{model}}=512, and the inner-layer has dimensionality dff=2048subscript𝑑𝑓𝑓2048d_{ff}=2048. ",
"title": "Attention Is All You Need"
},
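Eq. (2) is two affine maps with a ReLU in between, applied independently at each position; below is a small sketch with the stated d_model = 512 and d_ff = 2048 (the weights are random placeholders, not trained parameters).

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Eq. (2): FFN(x) = max(0, x W1 + b1) W2 + b2, applied per position."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

d_model, d_ff, n_positions = 512, 2048, 10
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
x = rng.normal(size=(n_positions, d_model))
print(position_wise_ffn(x, W1, b1, W2, b2).shape)  # (10, 512)
```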
{
"id": "1706.03762_all_27",
"text": " Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension dmodelsubscript𝑑modeld_{\\text{model}}. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to . In the embedding layers, we multiply those weights by dmodelsubscript𝑑model\\sqrt{d_{\\text{model}}}. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_28",
"text": " Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add \"positional encodings\" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension dmodelsubscript𝑑modeld_{\\text{model}} as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_29",
"text": " In this work, we use sine and cosine functions of different frequencies: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_30",
"text": " PE(pos,2i)=sin(pos/100002i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖𝑠𝑖𝑛𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i)}=sin(pos/10000^{2i/d_{\\text{model}}}) PE(pos,2i+1)=cos(pos/100002i/dmodel)𝑃subscript𝐸𝑝𝑜𝑠2𝑖1𝑐𝑜𝑠𝑝𝑜𝑠superscript100002𝑖subscript𝑑model\\displaystyle PE_{(pos,2i+1)}=cos(pos/10000^{2i/d_{\\text{model}}}) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_31",
"text": " where pos𝑝𝑜𝑠pos is the position and i𝑖i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π2𝜋2\\pi to 10000⋅2π⋅100002𝜋10000\\cdot 2\\pi. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k𝑘k, PEpos+k𝑃subscript𝐸𝑝𝑜𝑠𝑘PE_{pos+k} can be represented as a linear function of PEpos𝑃subscript𝐸𝑝𝑜𝑠PE_{pos}. ",
"title": "Attention Is All You Need"
},
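A short sketch of how such a sinusoidal table could be built; the max_len value is an arbitrary choice for illustration, and the even/odd interleaving follows the formulas quoted above.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(max_len)[:, None]                  # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)                        # even dimensions
    pe[:, 1::2] = np.cos(angle)                        # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512); added to the token embeddings before the first layer
```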
{
"id": "1706.03762_all_32",
"text": " We also experimented with using learned positional embeddings instead, and found that the two versions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_33",
"text": " In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x1,…,xn)subscript𝑥1…subscript𝑥𝑛(x_{1},...,x_{n}) to another sequence of equal length (z1,…,zn)subscript𝑧1…subscript𝑧𝑛(z_{1},...,z_{n}), with xi,zi∈ℝdsubscript𝑥𝑖subscript𝑧𝑖superscriptℝ𝑑x_{i},z_{i}\\in\\mathbb{R}^{d}, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_34",
"text": " One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_35",
"text": " The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies . Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_36",
"text": " As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n)𝑂𝑛O(n) sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n𝑛n is smaller than the representation dimensionality d𝑑d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece and byte-pair representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r𝑟r in the input sequence centered around the respective output position. This would increase the maximum path length to O(n/r)𝑂𝑛𝑟O(n/r). We plan to investigate this approach further in future work. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_37",
"text": " A single convolutional layer with kernel width k<n𝑘𝑛k<n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k)𝑂𝑛𝑘O(n/k) convolutional layers in the case of contiguous kernels, or O(logk(n))𝑂𝑙𝑜subscript𝑔𝑘𝑛O(log_{k}(n)) in the case of dilated convolutions , increasing the length of the longest paths between any two positions in the network. Convolutional layers are generally more expensive than recurrent layers, by a factor of k𝑘k. Separable convolutions , however, decrease the complexity considerably, to O(k⋅n⋅d+n⋅d2)𝑂⋅𝑘𝑛𝑑⋅𝑛superscript𝑑2O(k\\cdot n\\cdot d+n\\cdot d^{2}). Even with k=n𝑘𝑛k=n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_38",
"text": " As side benefit, self-attention could yield more interpretable models. We inspect attention distributions from our models and present and discuss examples in the appendix. Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_39",
"text": " This section describes the training regime for our models. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_40",
"text": " We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding , which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary . Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_41",
"text": " We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models,(described on the bottom line of table 3), step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days). ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_42",
"text": " We used the Adam optimizer with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.98subscript𝛽20.98\\beta_{2}=0.98 and ϵ=10−9italic-ϵsuperscript109\\epsilon=10^{-9}. We varied the learning rate over the course of training, according to the formula: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_43",
"text": " lrate=dmodel−0.5⋅min(step_num−0.5,step_num⋅warmup_steps−1.5)𝑙𝑟𝑎𝑡𝑒⋅superscriptsubscript𝑑model0.5𝑠𝑡𝑒𝑝_𝑛𝑢superscript𝑚0.5⋅𝑠𝑡𝑒𝑝_𝑛𝑢𝑚𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝superscript𝑠1.5lrate=d_{\\text{model}}^{-0.5}\\cdot\\min({step\\_num}^{-0.5},{step\\_num}\\cdot{warmup\\_steps}^{-1.5}) (3) ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_44",
"text": " This corresponds to increasing the learning rate linearly for the first warmup_steps𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠warmup\\_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps=4000𝑤𝑎𝑟𝑚𝑢𝑝_𝑠𝑡𝑒𝑝𝑠4000warmup\\_steps=4000. ",
"title": "Attention Is All You Need"
},
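A small sketch of schedule (3), assuming the base-model d_model = 512 and the warmup_steps = 4000 quoted above; only the formula itself is taken from the text.

```python
def transformer_lrate(step, d_model=512, warmup_steps=4000):
    """Eq. (3): linear warmup for warmup_steps steps, then 1/sqrt(step) decay."""
    step = max(step, 1)  # avoid step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

for s in (100, 4000, 100000):
    print(s, round(transformer_lrate(s), 6))
# the rate peaks at step == warmup_steps (about 7.0e-4 here) and decays afterwards
```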
{
"id": "1706.03762_all_45",
"text": " We employ three types of regularization during training: ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_46",
"text": " We apply dropout to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of Pdrop=0.1subscript𝑃𝑑𝑟𝑜𝑝0.1P_{drop}=0.1. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_47",
"text": " During training, we employed label smoothing of value ϵls=0.1subscriptitalic-ϵ𝑙𝑠0.1\\epsilon_{ls}=0.1 . This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score. ",
"title": "Attention Is All You Need"
},
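Label smoothing with epsilon = 0.1 replaces one-hot targets with a softened distribution; the sketch below uses one common convention (mass 1 − epsilon on the gold token, epsilon spread over the remaining classes). The exact mixing convention is an assumption, not taken from the text.

```python
import numpy as np

def smoothed_targets(labels, vocab_size, eps=0.1):
    """Put 1 - eps on the gold token and spread eps uniformly over the others."""
    t = np.full((len(labels), vocab_size), eps / (vocab_size - 1))
    t[np.arange(len(labels)), labels] = 1.0 - eps
    return t

print(smoothed_targets(np.array([2, 0]), vocab_size=5))  # each row sums to 1.0
```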
{
"id": "1706.03762_all_48",
"text": " On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.02.02.0 BLEU, establishing a new state-of-the-art BLEU score of 28.428.428.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.53.53.5 days on 888 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_49",
"text": " On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.041.041.0, outperforming all of the previously published single models, at less than 1/4141/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop=0.1subscript𝑃𝑑𝑟𝑜𝑝0.1P_{drop}=0.1, instead of 0.30.30.3. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_50",
"text": " For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals. For the big models, we averaged the last 20 checkpoints. We used beam search with a beam size of 444 and length penalty α=0.6𝛼0.6\\alpha=0.6 . These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 505050, but terminate early when possible . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_51",
"text": " Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature. We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU 222We used values of 2.8, 3.7, 6.0 and 9.5 TFLOPS for K80, K40, M40 and P100, respectively.. ",
"title": "Attention Is All You Need"
},
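As a worked example of that estimate, the arithmetic below multiplies the big-model training time, GPU count and the assumed sustained P100 throughput quoted above; treat it as back-of-the-envelope accounting rather than a measured figure.

```python
# Training FLOPs estimate = training time (s) x number of GPUs x sustained FLOPS per GPU.
seconds = 3.5 * 24 * 3600          # big model: 3.5 days of training
gpus = 8                           # 8 x P100
flops_per_gpu = 9.5e12             # assumed sustained single-precision rate for P100
print(f"{seconds * gpus * flops_per_gpu:.1e}")   # ~2.3e+19 FLOPs
```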
{
"id": "1706.03762_all_52",
"text": " To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013. We used beam search as described in the previous section, but no checkpoint averaging. We present these results in Table 3. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_53",
"text": " In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2. While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_54",
"text": " In Table 3 rows (B), we observe that reducing the attention key size dksubscript𝑑𝑘d_{k} hurts model quality. This suggests that determining compatibility is not easy and that a more sophisticated compatibility function than dot product may be beneficial. We further observe in rows (C) and (D) that, as expected, bigger models are better, and dropout is very helpful in avoiding over-fitting. In row (E) we replace our sinusoidal positional encoding with learned positional embeddings , and observe nearly identical results to the base model. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_55",
"text": " To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing. This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input. Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_56",
"text": " We trained a 4-layer transformer with dmodel=1024subscript𝑑𝑚𝑜𝑑𝑒𝑙1024d_{model}=1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank , about 40K training sentences. We also trained it in a semi-supervised setting, using the larger high-confidence and BerkleyParser corpora from with approximately 17M sentences . We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_57",
"text": " We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model. During inference, we increased the maximum output length to input length + 300300300. We used a beam size of 212121 and α=0.3𝛼0.3\\alpha=0.3 for both WSJ only and the semi-supervised setting. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_58",
"text": " Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar . ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_59",
"text": " In contrast to RNN sequence-to-sequence models , the Transformer outperforms the BerkeleyParser even when training only on the WSJ training set of 40K sentences. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_60",
"text": " In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_61",
"text": " For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_62",
"text": " We are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video. Making generation less sequential is another research goals of ours. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_63",
"text": " The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor. ",
"title": "Attention Is All You Need"
},
{
"id": "1706.03762_all_64",
"text": " We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration. ",
"title": "Attention Is All You Need"
}
] |
How were the hyperparameters chosen for the baseline methods, and what were the chosen values for the experiments presented?
|
For baseline pooling methods, we perform grid search following previous work, and present best results [25].
|
[
25
] |
[
{
"id": "2209.02939_all_0",
"text": " Graph Neural Networks (GNNs) learn representations of individual nodes based on the connectivity structure of an input graph. For graph-level prediction tasks, the standard procedure globally pools all the node features into a single graph representation without weight difference, then feeds the representation to a final prediction layer. This process implies that information only propagates through node-to-node edges, rendering the model unable to hierarchically aggregate information efficiently beyond local convolution. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_1",
"text": " However, a hierarchical structure can encode the global topology of graphs that is useful for effective learning of long range interactions. Therefore, designing a pooling architecture which respects the graph structure is crucial for downstream tasks such as social network analyses https://doi.org/10.48550/arxiv.1609.02907 ; https://doi.org/10.48550/arxiv.1706.02216 and molecule property predictions https://doi.org/10.48550/arxiv.1509.09292 ; https://doi.org/10.48550/arxiv.1606.09375 ; https://doi.org/10.48550/arxiv.1312.6203 ; doi:10.1021/acs.jcim.6b00601 ; 4700287 ; https://doi.org/10.48550/arxiv.1812.01070 . ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_2",
"text": " As an alternative to global pooling, DiffPool first proposed an end-to-end differentiable pooling by soft-classifying each node into a smaller number of clusters ying2018 . Later gPool gao2019 and SAGpool lee2019 incorporated the attention mechanism into pooling, while MinCutPool proposed grouping the nodes into clusters by minimizing the relaxed K𝐾K-way normalized minimum cut objective bianchi2019 . ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_3",
"text": " In most inductive settings, there is no single number of clusters that is suitable across all graphs in the dataset. Particularly in molecular graphs, the number of functional groups often determines useful characteristics and chemical behaviors, while varying significantly across different molecules. Nonetheless, existing pooling methods require the number of clusters as a hyperparameter, then operates under the assumption that all graphs share the same number of clusters ranjan2020asap . This is often undesirable as it not only requires additional hyperparameter tuning, but also imposes a strong inductive bias that deteriorates downstream performance. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_4",
"text": " To overcome this challenge, we propose GMPool, a general pooling framework that does not require an universal number of clusters as a user hyperparameter. Figure 1 depicts the overall framework of GMPool. The core intuition is that the product of a pooling matrix with itself forms a grouping matrix, where each (i,j)𝑖𝑗(i,j)-th entry indicates the pairwise clustering similarity: whether the nodes i𝑖i and j𝑗j are pooled to the same clusters. For each graph, GMPool parameterizes the clustering similarities in its grouping matrix via a classification layer. Finally, we perform SVD on the grouping matrix to obtain the pooling matrix such that the overall rank represents the suitable number of clusters. We also test a single-pooling variant NGMPool that does not perform any decomposition, but rather uses the grouping matrix as is. In real-world molecular property prediction tasks, we show that our approach outperforms previous baselines, while successfully learning suitable clusters. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_5",
"text": " The main contributions of this paper are as follows: • We design a grouping matrix-based pooling operator that does not require users to specify the number of clusters a priori. • We propose GMPool and NGMPool. GMPool performs SVD on the grouping matrix to obtain the pooling matrix, whereas NGMPool utilizes the grouping matrix as is. • We demonstrate the power of our methods both quantitatively and qualitatively on a wide range of real molecular property prediction tasks. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_6",
"text": " GNN architectures have shown great performance in various fields such as social network data, authorship/citation networks, and molecular data that can naturally be interpreted as graphs. For graph convolution, several work have utilized the graph Laplacian in the spectral domain. However, sheer convolution in the spectral domain suffers from the non-locality problem, and various approaches have been introduced to overcome this limitation. https://doi.org/10.48550/arxiv.1706.02216 ; https://doi.org/10.48550/arxiv.1810.00826 ; https://doi.org/10.48550/arxiv.1710.10903 ; https://doi.org/10.48550/arxiv.1704.01212 One stream of work has embedded the attention architecture into GNN, inferring the interaction between nodes without using a diffusion-like picture. https://doi.org/10.48550/arxiv.1710.10903 Another line of work considered message passing networks, which ensures the signal to be localized and non-linearly weighted. https://doi.org/10.48550/arxiv.1704.01212 This architecture has been proven to be highly effective in molecular property prediction fields. doi:10.1021/acs.jcim.9b00237 ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_7",
"text": " Graph pooling aims to utilize the hierarchical nature of graphs. Early work mainly focused on fixed axiomatic pooling methods such as minimum cut, k-means, and spectral clustering without any gradient-based optimization. https://doi.org/10.48550/arxiv.1312.6203 ; NIPS2011_6c1da886 ; 10.1016/j.patcog.2006.04.007 ; https://doi.org/10.48550/arxiv.0711.0189 ; 4302760 Although these pooling methods are effective on graphs without noise, the same heuristic often fails to work well on real datasets and tasks, especially due to a lack of differentiability that prohibits training under supervised signals. Since node representations and pooling strategies mutually affect each other during the training process, simultaneous optimization of whole components is crucial for avoiding local minima. Among many solutions, Diffpoolying2018 is the first to propose an end-to-end learnable pooling mechanism that learns an assignment matrix in which each entry represents the probability of a node being assigned to a cluster. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_8",
"text": " gPool gao2019 and SAGPool lee2019 are ranking-based pooling methods that coarsen the input graph by ranking and downsampling a small subset of nodes. MinCutPool bianchi2019 leverages a continuous relaxation of the minimum-cut objective, enabling spectral clustering under full differentiability. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_9",
"text": " However, the pooling methods above all share a common limitation: the number of clusters must be predefined for each layer as hyperparameters. This limitation is especially detrimental in inductive settings such as molecular property prediction, where each graph can have varying numbers of useful sub-structures. https://doi.org/10.1111/cbdd.12952 ; doi:10.1021/acs.jmedchem.0c00754 ; GUVENCH20161928 Allowing the model to pool towards varying number of clusters based on data is expected to enhance performance, and our proposed GMPool allows such variation through the rank of the grouping matrix. To the best of our knowledge, GMPool is the first to achieve high performance without the need to manually adjust the number of clusters through additional hyperparameter tuning. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_10",
"text": " In this section, we propose a novel differentiable pooling layer, GMPool, which obtains the pooling matrix by first building a grouping matrix that contains clustering similarities of pairwise nodes and then decomposing the matrix into its square-root form. We start the section with preliminary information, then outline the details of GMPool in later sections. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_11",
"text": " We assume an inductive graph-level prediction setting where our aim is to learn a function fθ:𝒢→𝒴:subscript𝑓𝜃→𝒢𝒴f_{\\theta}:\\mathcal{G}\\to\\mathcal{Y} that maps a graph G∈𝒢𝐺𝒢G\\in\\mathcal{G} to a property label y∈𝒴𝑦𝒴y\\in\\mathcal{Y}. Each graph G𝐺G with n𝑛n nodes is represented as a triplet G=(A,X,E)𝐺𝐴𝑋𝐸G=(A,X,E) with graph adjacency A∈{0,1}n×n𝐴superscript01𝑛𝑛A\\in\\{0,1\\}^{n\\times n}, node features X∈ℝn×dn𝑋superscriptℝ𝑛subscript𝑑𝑛X\\in\\mathbb{R}^{n\\times d_{n}}, and edge features E∈ℝn×n×de𝐸superscriptℝ𝑛𝑛subscript𝑑𝑒E\\in\\mathbb{R}^{n\\times n\\times d_{e}}. We use Xisubscript𝑋𝑖X_{i} and Eijsubscript𝐸𝑖𝑗E_{ij} to denote the features of node i𝑖i and edge (i,j)𝑖𝑗(i,j), respectively. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_12",
"text": " As our backbone GNN, we adopt the Directed Message Passing Neural Network (DMPNN) doi:10.1021/acs.jcim.9b00237 which aggregates messages through directed edges. Note that while we chose DMPNN due to its superior performance over GNN architectures, our pooling layer is module-agnostic and can be combined with any GNN as long as node representations are returned as output. Given a graph, DMPNN first initializes the hidden state of each edge (i,j)𝑖𝑗(i,j) based on its feature Eijsubscript𝐸𝑖𝑗E_{ij} and the source-node’s feature Xisubscript𝑋𝑖X_{i}. At each timestep t𝑡t, each directional edge gathers hidden states from incident edges into a message mijt+1superscriptsubscript𝑚𝑖𝑗𝑡1m_{ij}^{t+1} and updates its own hidden state to hijt+1superscriptsubscriptℎ𝑖𝑗𝑡1h_{ij}^{t+1} as follows mijt+1=∑k∈𝒩(i)∖jhkitsuperscriptsubscript𝑚𝑖𝑗𝑡1subscript𝑘𝒩𝑖𝑗superscriptsubscriptℎ𝑘𝑖𝑡\\displaystyle m_{ij}^{t+1}=\\sum_{k\\in\\mathcal{N}(i)\\setminus j}h_{ki}^{t} (1) hijt+1=ReLU(hij0+Wemijt+1)superscriptsubscriptℎ𝑖𝑗𝑡1ReLUsuperscriptsubscriptℎ𝑖𝑗0subscript𝑊𝑒superscriptsubscript𝑚𝑖𝑗𝑡1\\displaystyle h_{ij}^{t+1}=\\texttt{ReLU}(h_{ij}^{0}+W_{e}m_{ij}^{t+1}) (2) Here, 𝒩(i)𝒩𝑖\\mathcal{N}(i) denotes the set of neighboring nodes of node i𝑖i and Wesubscript𝑊𝑒W_{e} a learnable weight. The hidden states of nodes are updated by aggregating the hidden states of incident edges into message mit+1superscriptsubscript𝑚𝑖𝑡1m_{i}^{t+1}, and passing its concatenation with the node feature Xisubscript𝑋𝑖X_{i} into a linear layer followed by ReLU non-linearity mit+1=∑j∈𝒩(i)hijtsuperscriptsubscript𝑚𝑖𝑡1subscript𝑗𝒩𝑖superscriptsubscriptℎ𝑖𝑗𝑡\\displaystyle m_{i}^{t+1}=\\sum_{j\\in\\mathcal{N}(i)}h_{ij}^{t} (3) hit+1=ReLU(Wnconcat(Xi,mit+1))superscriptsubscriptℎ𝑖𝑡1ReLUsubscript𝑊𝑛concatsubscript𝑋𝑖superscriptsubscript𝑚𝑖𝑡1\\displaystyle h_{i}^{t+1}=\\texttt{ReLU}(W_{n}\\texttt{concat}(X_{i},m_{i}^{t+1})) (4) Similarly, Wnsubscript𝑊𝑛W_{n} denotes a learnable weight. Assuming DMPNN runs for T𝑇T timesteps, we use (Xout,Eout)=GNN(A,X,E)subscript𝑋𝑜𝑢𝑡subscript𝐸𝑜𝑢𝑡GNN𝐴𝑋𝐸(X_{out},E_{out})=\\texttt{GNN}(A,X,E) to denote the output representation matrices containing hidden states of all nodes and edges, respectively (i.e., Xout,i=hiTsubscript𝑋𝑜𝑢𝑡𝑖superscriptsubscriptℎ𝑖𝑇X_{out,i}=h_{i}^{T} and Eout,ij=hijTsubscript𝐸𝑜𝑢𝑡𝑖𝑗superscriptsubscriptℎ𝑖𝑗𝑇E_{out,ij}=h_{ij}^{T}). ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
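A rough NumPy sketch of the directed-edge updates in Eqs. (1)-(2) above; the toy path graph, hidden size and random weights are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dmpnn_edge_update(h_edge, h_edge0, A, W_e):
    """One directed-edge update: m_ij = sum over k in N(i) except j of h_ki,
    then h_ij = ReLU(h_ij^0 + W_e m_ij)."""
    n, d = A.shape[0], h_edge.shape[-1]
    h_new = np.zeros_like(h_edge)
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                m_ij = sum((h_edge[k, i] for k in range(n) if A[k, i] and k != j),
                           np.zeros(d))
                h_new[i, j] = np.maximum(0.0, h_edge0[i, j] + W_e @ m_ij)
    return h_new

# Toy graph: a 4-node path with directed-edge hidden states of size 16.
n, d = 4, 16
A = np.zeros((n, n), dtype=bool)
for a, b in [(0, 1), (1, 2), (2, 3)]:
    A[a, b] = A[b, a] = True
rng = np.random.default_rng(3)
h0 = rng.normal(size=(n, n, d)) * A[..., None]   # zero where no edge exists
W_e = 0.1 * rng.normal(size=(d, d))
h1 = dmpnn_edge_update(h0, h0, A, W_e)           # one message-passing timestep
print(h1.shape)                                  # (4, 4, 16)
# A node readout in the spirit of Eq. (4) would follow:
# h_i = ReLU(W_n concat(X_i, sum_j h_ij)).
```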
{
"id": "2209.02939_all_13",
"text": " For graph-level prediction, the node representations after the final GNN layer are typically sum-pooled to obtain a single graph representation hG=∑ihisubscriptℎ𝐺subscript𝑖subscriptℎ𝑖h_{G}=\\sum_{i}h_{i}, which is then passed to a FFN prediction layer. Note that this approach only allows features to propagate locally and is hence unable to learn long-range dependencies and hierarchical structures within graphs. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_14",
"text": " Our goal is to learn a pooling operator to coarsen the input graph after the GNN in each hierarchical layer. In each hierarchical layer, the GNN constructs node representations and then the pooling layer forms a coarsened graph, which is used as input to the next hierarchical layer. More formally, given the representations from the l𝑙l-th layer as (Xout(l),Eout(l))=GNN(A(l),X(l),E(l))subscriptsuperscript𝑋𝑙𝑜𝑢𝑡subscriptsuperscript𝐸𝑙𝑜𝑢𝑡GNNsuperscript𝐴𝑙superscript𝑋𝑙superscript𝐸𝑙(X^{(l)}_{out},E^{(l)}_{out})=\\texttt{GNN}(A^{(l)},X^{(l)},E^{(l)}), the pooling layer yields an assignment matrix S(l)∈ℝnl×nl+1superscript𝑆𝑙superscriptℝsubscript𝑛𝑙subscript𝑛𝑙1S^{(l)}\\in\\mathbb{R}^{n_{l}\\times n_{l+1}} pooling nlsubscript𝑛𝑙n_{l} nodes into nl+1subscript𝑛𝑙1n_{l+1} clusters. Then, the graph G(l)=(A(l),X(l),E(l))superscript𝐺𝑙superscript𝐴𝑙superscript𝑋𝑙superscript𝐸𝑙G^{(l)}=(A^{(l)},X^{(l)},E^{(l)}) is coarsened into G(l+1)=(A(l+1),X(l+1),E(l+1))=(S(l)TA(l)S(l),S(l)TXout(l),S(l)TEout(l)S(l))superscript𝐺𝑙1superscript𝐴𝑙1superscript𝑋𝑙1superscript𝐸𝑙1superscript𝑆superscript𝑙𝑇superscript𝐴𝑙superscript𝑆𝑙superscript𝑆superscript𝑙𝑇subscriptsuperscript𝑋𝑙𝑜𝑢𝑡superscript𝑆superscript𝑙𝑇subscriptsuperscript𝐸𝑙𝑜𝑢𝑡superscript𝑆𝑙G^{(l+1)}=(A^{(l+1)},X^{(l+1)},E^{(l+1)})=(S^{(l)^{T}}A^{(l)}S^{(l)},S^{(l)^{T}}X^{(l)}_{out},S^{(l)^{T}}E^{(l)}_{out}S^{(l)}). This hierarchical process can be utilized iteratively depending on the task at hand. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_15",
"text": " When looking into the relation between pairs of nodes, the grouping task becomes rather simple. While most previous models focus on classifying each node to a predefined number of clusters, our idea simplifies the task into classifying whether each pair of nodes is in the same group. Thus, setting the number of clusters a priori becomes unnecessary. This classification will go through every pair of combinations of nodes to ensure permutation invariance. Mij(l)=Softmax(Clf(f(Xi,Xj)))∀i,j∈# of Nodesformulae-sequencesubscriptsuperscript𝑀𝑙𝑖𝑗SoftmaxClf𝑓subscript𝑋𝑖subscript𝑋𝑗for-all𝑖𝑗# of NodesM^{(l)}_{ij}=\\textrm{Softmax}(\\textrm{Clf}(f(X_{i},X_{j})))\\qquad\\forall\\,\\,i,j\\in\\textrm{\\# of Nodes} (5) where M(l)∈ℝN×Nsuperscript𝑀𝑙superscriptℝ𝑁𝑁M^{(l)}\\in\\mathbb{R}^{N\\times N} and f𝑓f is a commutative function f:X⊕X→YwhereX,Y∈ℝN:𝑓formulae-sequence→direct-sum𝑋𝑋𝑌where𝑋𝑌superscriptℝ𝑁f:X\\oplus X\\rightarrow Y\\qquad\\textrm{where}\\,\\,X,Y\\in\\mathbb{R}^{N} (6) that maps two input vectors into one output vector. While there exist many available choices for f𝑓f, we use Euclidean distance between input vectors to simplify the classification task. Each matrix index corresponds to the node number and each element contains probability values for each pair of nodes whether they are in the same group. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
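A minimal sketch of the pairwise grouping-matrix construction in Eq. (5), assuming a sigmoid classifier over a squared-difference pairwise feature (one commutative choice consistent with the Euclidean-distance remark above); the weights are untrained placeholders, not the learned Clf layer.

```python
import numpy as np

def grouping_matrix(X, w, b):
    """M_ij = sigmoid(w^T f(X_i, X_j) + b), with f the element-wise squared
    difference (commutative, so M is symmetric with entries in (0, 1))."""
    n = X.shape[0]
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            feat = (X[i] - X[j]) ** 2          # commutative pairwise feature
            M[i, j] = 1.0 / (1.0 + np.exp(-(feat @ w + b)))
    return M

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 200))                  # 6 nodes, hidden size 200
w, b = 0.01 * rng.normal(size=200), 0.0        # training would push similar pairs toward 1
M = grouping_matrix(X, w, b)
print(M.shape, np.allclose(M, M.T))            # (6, 6) True
```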
{
"id": "2209.02939_all_16",
"text": " As an illustrative example, consider a set of disjoint clusters with no overlapping nodes. In such case, the grouping matrix not only contains 0,1010,1 as its elements, but also can be reformed into a block diagonal form. The number of blocks corresponds to the number of groups after pooling and nodes assigned to the same blocks corresponds to a same group. For instance, if there are three different groups and each group size are k1,k2,k3subscript𝑘1subscript𝑘2subscript𝑘3k_{1},k_{2},k_{3}, M(l)=(1k1×k10001k2×k20001k3×k3)superscript𝑀𝑙matrixsubscript1subscript𝑘1subscript𝑘1000subscript1subscript𝑘2subscript𝑘2000subscript1subscript𝑘3subscript𝑘3M^{(l)}=\\begin{bmatrix}\\framebox{$1_{k_{1}\\times k_{1}}$}&0&0\\\\ 0&\\framebox{$1_{k_{2}\\times k_{2}}$}&0\\\\ 0&0&\\framebox{$1_{k_{3}\\times k_{3}}$}\\\\ \\end{bmatrix} (7) One can easily see that the corresponding pooling operator is as follows S(l)=(1k1×100⋯001k2×10⋯0001k3×1⋯0)superscript𝑆𝑙matrixsubscript1subscript𝑘1100⋯00subscript1subscript𝑘210⋯000subscript1subscript𝑘31⋯0S^{(l)}=\\begin{bmatrix}\\framebox{$1_{k_{1}\\times 1}$}&0&0\\quad&\\cdots&\\quad 0\\\\ 0&\\framebox{$1_{k_{2}\\times 1}$}&0\\quad&\\cdots&\\quad 0\\\\ 0&0&\\framebox{$1_{k_{3}\\times 1}$}\\quad&\\cdots&\\quad 0\\\\ \\end{bmatrix} (8) In general, each element of the grouping matrix (in eq. 26) is a continuous number within (0,1)01(0,1), which allows soft-clustering with overlapping nodes. For detailed computation, see appendix. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_17",
"text": " However, the grouping matrix itself has a limited role in pooling operations. Therefore, extracting pooling operators from the grouping matrix is crucial. Our strategy to form a pooling operator is rather simple. It can be acquired by decomposing a grouping matrix into square-root form. There are numerous known methods which can be utilized, yet we will introduce two representative methods in the following subsection. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_18",
"text": " While the grouping matrix cannot be used for pooling as is, it encodes how similarly each pair of nodes are pooled as it equals the product of the pooling operator with its transpose. The (i,j)𝑖𝑗(i,j)-th entry of the grouping matrix equals ⟨Si(l),Sj(l)⟩=1superscriptsubscript𝑆𝑖𝑙superscriptsubscript𝑆𝑗𝑙1\\langle S_{i}^{(l)},S_{j}^{(l)}\\rangle=1 if the nodes are exactly pooled to the same clusters, ⟨Si(l),Sj(l)⟩=0superscriptsubscript𝑆𝑖𝑙superscriptsubscript𝑆𝑗𝑙0\\langle S_{i}^{(l)},S_{j}^{(l)}\\rangle=0 if they are pooled orthogonally to different clusters. Therefore, if we can decompose the grouping matrix into square-root form, it can be interpreted as a pooling operator for the model. S(l)S(l)T=M(l)superscript𝑆𝑙superscript𝑆𝑙𝑇superscript𝑀𝑙S^{(l)}S^{(l)T}=M^{(l)} (9) The pooling operator S∈ℝnl×nl+1𝑆superscriptℝsubscript𝑛𝑙subscript𝑛𝑙1S\\in\\mathbb{R}^{n_{l}\\times n_{l+1}} is a matrix where nl+1≤nlsubscript𝑛𝑙1subscript𝑛𝑙n_{l+1}\\leq n_{l}. Note that by multiplying pooling operator S𝑆S in reverse order, the degree matrix D∈ℝnl+1×nl+1𝐷superscriptℝsubscript𝑛𝑙1subscript𝑛𝑙1D\\in\\mathbb{R}^{n_{l+1}\\times n_{l+1}} of pooling space can be obtained. S(l)TS(l)=D(l)superscript𝑆𝑙𝑇superscript𝑆𝑙superscript𝐷𝑙S^{(l)T}S^{(l)}=D^{(l)} (10) From eq. 9, it is obvious that the pooling operator completely reconstructs grouping matrix by interacting pooling indices. Moreover, S𝑆S can be interpreted as a weighted matrix for each node to form appropriate sub-structures. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_19",
"text": " Eigen decomposition is one of the basic decomposition schemes one can consider. It is widely used to decompose a given matrix into orthonormal basis O∈ℝnl×nl𝑂superscriptℝsubscript𝑛𝑙subscript𝑛𝑙O\\in\\mathbb{R}^{n_{l}\\times n_{l}} and eigen value Λ∈ℝnl×nlΛsuperscriptℝsubscript𝑛𝑙subscript𝑛𝑙\\Lambda\\in\\mathbb{R}^{n_{l}\\times n_{l}}. M(l)=OΛOTsuperscript𝑀𝑙𝑂Λsuperscript𝑂𝑇M^{(l)}=O\\Lambda O^{T} (11) This particular decomposition scheme always works unless the determinant of a given matrix is equal to 0. From eq. 11, one can rearrange RHS of the equation to become a square form of pooling operator if we set nl+1=nlsubscript𝑛𝑙1subscript𝑛𝑙n_{l+1}=n_{l}. M(l)=OΛΛOT≡S(l)S(l)Tsuperscript𝑀𝑙𝑂ΛΛsuperscript𝑂𝑇superscript𝑆𝑙superscript𝑆𝑙𝑇M^{(l)}=O\\sqrt{\\Lambda}\\sqrt{\\Lambda}O^{T}\\equiv S^{(l)}S^{(l)T} (12) The pooling operator S𝑆S is a square matrix with size of nl×nlsubscript𝑛𝑙subscript𝑛𝑙n_{l}\\times n_{l}, yet the eigen value ΛΛ\\Lambda suppresses useless ranks in the matrix by multiplying 00 to each column of orthonormal basis. Also, eigen decomposition works for any matrix with non-zero determinants, and so it performs perfectly fine in real world situations. Furthermore, any symmetric and real matrix are guaranteed to have real eigen values as well as vectors. Therefore, the square-root of the grouping matrix is ensured to be interpreted as a transformation operator forming sub-groups from nodes. These continuous real valued elements have the advantage that nodes can be soft-clustered to sub-groups. In conventional clustering, it is hard to cluster these structures properly. However, since soft clustering is naturally embedded in the algorithm, linker structures can be dealt with ease. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
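Under the assumption that the grouping matrix is symmetric, Eqs. (11)-(12) amount to taking a matrix square root; the sketch below applies NumPy's eigendecomposition to a toy block-diagonal grouping matrix, with eigenvalue clipping added as a numerical guard that the text does not spell out.

```python
import numpy as np

def pooling_from_grouping(M, tol=1e-6):
    """Eq. (12): M = O sqrt(L) sqrt(L) O^T, so S = O sqrt(L) satisfies S S^T = M."""
    eigval, eigvec = np.linalg.eigh(M)        # symmetric M -> real spectrum
    eigval = np.clip(eigval, 0.0, None)       # guard against tiny negative round-off
    S = eigvec @ np.diag(np.sqrt(eigval))     # square pooling operator
    rank = int((eigval > tol).sum())          # effective number of clusters
    return S, rank

# Block-diagonal toy grouping matrix: two clusters of sizes 3 and 2 (cf. Eq. (7)).
M = np.zeros((5, 5))
M[:3, :3] = 1.0
M[3:, 3:] = 1.0
S, rank = pooling_from_grouping(M)
print(rank, np.allclose(S @ S.T, M))          # 2 True
X = np.random.default_rng(5).normal(size=(5, 8))
X_pooled = S.T @ X                            # coarsened features; only ~rank rows are non-zero
print(X_pooled.shape)                         # (5, 8)
```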
{
"id": "2209.02939_all_20",
"text": " After acquiring the pooling operator, the pooling process becomes obvious. Nodes are in fundamental representation while edge features and adjacency matrix are in adjoint representation. Which leads to the following transformation rules. Xi(l+1)=S(l)Xi(l)superscriptsubscript𝑋𝑖𝑙1superscript𝑆𝑙superscriptsubscript𝑋𝑖𝑙\\displaystyle X_{i}^{(l+1)}=S^{(l)}X_{i}^{(l)} (13) Eij(l+1)=S(l)Eij(l)S(l)Tsuperscriptsubscript𝐸𝑖𝑗𝑙1superscript𝑆𝑙superscriptsubscript𝐸𝑖𝑗𝑙superscript𝑆𝑙𝑇\\displaystyle E_{ij}^{(l+1)}=S^{(l)}E_{ij}^{(l)}S^{(l)T} (14) Aij(l+1)=S(l)Aij(l)S(l)Tsuperscriptsubscript𝐴𝑖𝑗𝑙1superscript𝑆𝑙superscriptsubscript𝐴𝑖𝑗𝑙superscript𝑆𝑙𝑇\\displaystyle A_{ij}^{(l+1)}=S^{(l)}A_{ij}^{(l)}S^{(l)T} (15) If grouping is properly done, 00 (or close to 00) components will appear in the decomposed eigen value matrix. These zero eigenvalues arise naturally and play a role in disregarding group information; those are ineffective towards prediction. However, zero elements in the eigen values causes a major problem in the decomposition process since the matrix might carry a singular determinant. Eigen decomposition is based on an iterative approximation algorithm which includes unbounded terms if any two eigen values are small or close. One can see clearly about this matter in DBLP:journals/corr/IonescuVS15 . (∂l∂A)=U(KT⊙(UT∂l∂U)+(∂l∂Λ)diag)(UT)𝑙𝐴𝑈direct-productsuperscript𝐾𝑇superscript𝑈𝑇𝑙𝑈subscript𝑙Λdiagsuperscript𝑈𝑇\\Big{(}\\frac{\\partial{l}}{\\partial{A}}\\Big{)}=U\\big{(}K^{T}\\odot(U^{T}\\frac{\\partial{l}}{\\partial{U}})+(\\frac{\\partial{l}}{\\partial{\\Lambda}})_{\\textrm{diag}})(U^{T}) (16) Here, ⊙direct-product\\odot denotes element-wise product. Off-diagonal components of K=1/(λi−λj)𝐾1subscript𝜆𝑖subscript𝜆𝑗K=1/(\\lambda_{i}-\\lambda_{j}) causes the problem, since the value blows up to the infinity if any two eigen values are close or very small. However, there are some solutions for this matter by approximating gradient in different ways DBLP:journals/corr/abs-1906-09023 ; 9400752 ; DBLP:journals/corr/abs-2105-02498 . Those methods are developed further to achieve higher speed in the calculation DBLP:journals/corr/abs-2201-08663 . They claim that the method is noticeably faster, over 888 times, than the standard SVD which has the time complexity 𝒪(n3)𝒪superscript𝑛3\\mathcal{O}(n^{3}). Thus, we utilized this method in our work to stabilize and accelerate the learning process. However, since the algorithm achieves the higher speed by approximating gradients, the error compared to standard SVD grows bigger as the size of the matrix grows. Therefore, this method might not be valid with large sized graph data. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_21",
"text": " Another decomposition scheme we are introducing has a rather different approach. Since computing the square root of a given matrix is not an easy task, here we focus on the square of the pooling operator, which is nothing but the grouping matrix itself, and formulate a pooling-like effect by multiplying the grouping matrix. The key idea is to retain pooling depth to one and use a weighted aggregation vector in pooling space as an aggregation basis. The weighted aggregation vector is transformed Euclidean one vector by acting a pooling matrix obtained by decomposing the grouping matrix. 1i(l+1)=S(l)1i(l)superscriptsubscript1𝑖𝑙1superscript𝑆𝑙superscriptsubscript1𝑖𝑙\\displaystyle 1_{i}^{(l+1)}=S^{(l)}1_{i}^{(l)} (17) The final form of the transformation can be expressed as follows. Xi(l+1)∼M(l)Xi(l)similar-tosuperscriptsubscript𝑋𝑖𝑙1superscript𝑀𝑙superscriptsubscript𝑋𝑖𝑙\\displaystyle X_{i}^{(l+1)}\\sim M^{(l)}X_{i}^{(l)} (18) Eij(l+1)∼M(l)Eij(l)M(l)similar-tosuperscriptsubscript𝐸𝑖𝑗𝑙1superscript𝑀𝑙superscriptsubscript𝐸𝑖𝑗𝑙superscript𝑀𝑙\\displaystyle E_{ij}^{(l+1)}\\sim M^{(l)}E_{ij}^{(l)}M^{(l)} (19) This pooling scheme is simpler to use and more scalable (with 𝒪(n2)𝒪superscript𝑛2\\mathcal{O}(n^{2}) cost) than GMPool since the method circumvents SVD computation. Yet there are two mathematical ambiguities. One is that it is only valid for single depth pooling cases. If one tries to perform multiple sequential pooling operations, the pooling operators are no more available to be reduced into the grouping matrix, since two different pooling operators are not Abelian. The other ambiguity is that most activation functions commonly used are not equivariant with pooling operators. However, since many of them are based on element-wise operations with monotonic functions, we can presume that the anomaly are not dominant in most cases. We find that this approach performs comparably to GMPool for small sized molecules where a single pooling depth suffice. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_22",
"text": " We arrange a total of five datasets to test our algorithms: two are open datasets collected from MoleculeNet Ramsundar-et-al-2019 and Binding DB 10.2174/1386207013330670 ; 10.1093/bioinformatics/18.1.130 ; 10.1002/bip.10076 ; 10.1093/nar/gkl999 ; 10.1093/nar/gkv1072 , three are manually collected and arranged from different literatures including scientific articles and patents. • PLQY includes experimentally measured values of photoluminescence quantum yield (PLQY) for fluorescence molecules. • λmaxsubscript𝜆𝑚𝑎𝑥\\lambda_{max} Solvents contains measured λmaxsubscript𝜆𝑚𝑎𝑥\\lambda_{max}, wavelength that shows maximum intensity for emission of a fluorescence molecule, under the solvent condition. • λmaxsubscript𝜆𝑚𝑎𝑥\\lambda_{max} Films consists of λmaxsubscript𝜆𝑚𝑎𝑥\\lambda_{max} values measured after spin coating of fluorescence molecules on films doped with host materials. • pIC50 contains the negative log of the IC50 values for ATP receptor. IC50 implies minimum concentration of certain molecule needed for inhibiting half of activity of the target proteins. The IC50 values are optained from the BindingDB (https://www.bindingdb.org/bind/index.jsp). • Tox21 consists of results of 12 types of toxicity screening tests. We labeled a molecule ‘toxic’ if the molecule failed in any of screening type. Data were originated from Tox21 challenge (2014). Since there are molecules without graph structure information in the dataset, we selected 7,83178317,831 molecules that have the graph structure information. For proper evaluation of pooling approaches, each graph in the data must have at least two or more effective groups. However, Tox21 and pIC50 data contains molecules too small to contain multiple groups and thus we drop molecules with less than 20 nodes from the datasets. In addition, we drop molecules with more than 40 nodes from Tox21 and pIC50 datasets to accelerate the whole training process under dense matrix computations: the largest molecule in each respective dataset has 86 and 132 nodes, but the ratio of molecules with size over 40 in the dataset is only 3.4%percent3.43.4\\% and 3.6%percent3.63.6\\%. Especially for pIC50 dataset, the proportion of molecules with less than 20 nodes are 0.4%percent0.40.4\\%. Lastly, the Tox21 task has been simplified to a single classification task by setting a positive label if any of the 12 tasks are positive in the original dataset. Details can be found in Table 1 and appendix section. Every experiments are tested under five-fold settings with uniform sampling and 10% of dedicated test set to secure the results, and single RTX 3090 is used for the experiments. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_23",
"text": " For empirical evaluation, we compare the performance of GMPool and NGMPool against that of five other pooling approaches. We run all experiments via a pipeline with a fixed DMPNN backbone, while exchanging the pooling layers only. Here we provide brief descriptions of each baselines used: Top-k gao2019 and SAGPool lee2019 retain nodes with the highest scoring based on the projections of node features and self-attention scores, respectively. DiffPool ying2018 uses an additional GNN to learn soft-assignment matrices that mix nodes into clusters. ASAPool ranjan2020asap clusters local subgraphs together through scoring and selection of clusters. MemPool mempool incorporates memory layers that jointly coarsen and transform input node representations. Note that we reimplemented the DMPNN backbone, Top-k pooling, and DiffPool. Implementations of other pooling baselines are borrowed from the pytorch-geometric library. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_24",
"text": " For the backbone of the model, DMPNN, we use the same hidden size of 200 across all three independent layers: the initial edge features with dimension desubscript𝑑𝑒d_{e} and node features with dimension dnsubscript𝑑𝑛d_{n} are passed through layers of dimension de×200subscript𝑑𝑒200d_{e}\\times 200 and dn×200subscript𝑑𝑛200d_{n}\\times 200, respectively with ReLU activation. The initial node and edge embeddings are determined by features generated in RDKit. The message passing module passes node embeddings through a linear layer with dimension 200×200200200200\\times 200, followed by ReLU activation and 0.150.150.15 dropout layer. For graph representation we use a global average pooling scheme. GMPool and NGMPool construct the grouping matrix via a 200×12001200\\times 1 linear layer and sigmoid activation without any parameters related to cluster numbers or thresholds. We use a batch size of 808080 and Adam optimizer for all model training. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_25",
"text": " For baseline pooling methods that require the cluster size as a hyperparameter, we perform grid search across candidates following previous work, and present best results. However, we fix the final pooling size to 10 as the average size of most common 404040 functional groups in bioactive molecules is 4.254.254.25 ertl2020most , indicating that molecules under concern (statistics shown in Table 1) can have up to 101010 clusters. The specific hyperparameter setups used for pooling baselines can be found in appendix. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_26",
"text": " The grouping matrix starts from randomized initial state and is optimized to gather effective functional groups in the molecules (Figures 2(b) and 2(c)). Furthermore, since our algorithm fully enjoys the soft clustering concept, the result shows continuous weights for each group. This characteristic ensures the model can gather information from distant nodes if necessary. However, sometimes the grouping matrix shows unfamiliar forms, since the effective functional groups should vary due to the downstream task itself. For instance, for some simple tasks such as PLQY prediction, the grouping is rather intuitive as shown in Figure 2(b), yet for complicated tasks like λmaxsubscript𝜆max\\lambda_{\\textrm{max}} prediction, the effective functional groups are also complicated as in Figure 2(c). ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_27",
"text": " We tested various combinations of models and dataset to check the validity of our algorithm. We selected GCN, DMPNN, Top-k, SAGPool, DiffPool, ASAPool and MemPool algorithm to set a benchmark score to compare with. As it is shown in the table 2, majority of the cases, our models outperform conventional methods. However, for some tasks (i.e. λmaxsubscript𝜆max\\lambda_{\\textrm{max}} datasets), our model is gaining only a small margin of the performance. This is caused by the underlying mechanism of the chemical effect. Since some tasks are strongly related to the effective groups of the molecule, yet others are not. In those cases, sub-structures are not intuitive and might appear in very complicated forms, as shown in Figure 2(c). If the grouping becomes complicated, the rank of the pooling matrix should be larger to cover all degrees of freedom for the data. However, conventional models, which shared predefined numbers as universal grouping numbers, force to collect groups and reduce it to the low-rank form, which might not have enough degree of freedom. This will cause information loss or blend which compromises the prediction result. Therefore, one can check that in λmaxsubscript𝜆max\\lambda_{\\textrm{max}} prediction test, conventional pooling algorithms show inferior result than simple massage passing scheme. Yet our model is not designed to reduce the physical rank of the matrix during the pooling process, and there is always enough degree of freedom to carry the information throughout learning. Hence, even for the λmaxsubscript𝜆max\\lambda_{\\textrm{max}} case, our model outperforms the others. Furthermore, for other tasks, it is clear that our model improves performance by 5∼10%psimilar-to5percent10𝑝5\\sim 10\\%p. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_28",
"text": " One crucial hyperparameter to be explored in pooling models is the number of clusters. Even though our model does not require to fix the number of clusters in the first place, one can set the parameter by force. One can easily see in the Figure 3(a) that the number of clusters can be set to the number of nodes without compromising performance of the model. Further, Figure 3(b) shows that our model outperforms Top k algorithms with various cluster numbers and original DMPNN as well. This is one of the powerful features of our model, since the model automatically splits groups and determines the appropriate number of sub-structures for each individual graph. One can also force the number of clusters and share through all graphs in an equal manner; however, it is not effective for the following reasons. In real world data, one can not esteem the exact number of clusters for individual graphs. This might be problematic if one sets the number of clusters less than it requires, the models’ performance will be compromised due to the information loss. Another problem is caused by the mathematical structure of the decomposition scheme. Using SVD method will cause ambiguity since collecting only top k eigen values from the decomposed matrix might not reconstruct the original grouping matrix due to lack of information. It is even worse in the initial stage of the learning as the weight is almost in the random state and the top k eigen values are not precisely representing the appropriate clusters. Thus, as it is depicted in the above figure, it is best to leave the cluster number to be determined automatically by the model itself. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
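The following is a small illustrative sketch (not the paper's GMPool implementation; the embeddings, threshold, and tolerance are made up) of the idea described above: build a pairwise grouping matrix from node embeddings and let the data determine the effective number of clusters via the numerical rank obtained from an SVD, instead of fixing a cluster count in advance.

```python
import numpy as np

def grouping_matrix(Z, threshold=0.5):
    """Pairwise grouping matrix: M[i, j] ~ whether nodes i and j share a cluster."""
    logits = Z @ Z.T                       # pairwise similarity scores
    probs = 1.0 / (1.0 + np.exp(-logits))  # squash to (0, 1)
    return (probs > threshold).astype(float)

def effective_num_clusters(M, tol=1e-6):
    """Estimate the cluster count as the numerical rank of the grouping matrix."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

# toy node embeddings forming two well-separated groups of nodes
Z = np.array([[1.0, -1.0], [0.9, -1.1], [-1.0, 1.0], [-1.1, 0.9]]) * 3.0
M = grouping_matrix(Z)
print(M)                                                   # block pattern: {0, 1} and {2, 3}
print("estimated clusters:", effective_num_clusters(M))    # 2
```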
{
"id": "2209.02939_all_29",
"text": " We have introduced a novel pooling architecture with adaptive number of clusters based on a second order pooling operator, namely the grouping matrix. The grouping matrix is based on clustering similarities between every possible pairs of nodes, ensuring permutation invariance. We have shown that our model is valid for chemical property prediction and outperforms conventional methods in real-world datasets. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_30",
"text": " While our model is useful and effective, there is still room for improvement. First of all, despite leveraging a method to decompose the grouping matrix with stable gradient computations, there exist corner cases with a small eigengap at which the model fails to converge. This event seldom happens (about 0.00018%percent0.000180.00018\\% in our experiments), but can be non-negligible when one needs to learn with a large number of data points. Hence, one future direction would be to impose proper constraints on the loss to avoid such gradient blowup in the grouping matrix. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_31",
"text": " Another future direction would be to enhance scalability of our methods to improve applicability to large-scale graphs. Since the grouping matrix decomposition step via SVD is the main computational bottleneck of GMPool, incorporating faster decomposition modules such as randomized approximation halko2011finding ; DBLP:journals/corr/abs-1710-02812 methods can lead to faster inference. However, this is likely to incur loss in predictive performance, and as the focus of this work lies in allowing variation in the number of clusters in small molecular graphs where scalability is not an issue, we defer improving the scalability to future work. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_32",
"text": " Lastly, generalizing the second order grouping matrix towards higher-order grouping tensors can allow further expressive power. We have introduced a pairwise structure; yet it is not obliged to be fixed into the pairwise form. If we consider higher-order form of node combinations, i.e. k-form where k<N𝑘𝑁k<N and N𝑁N is total node number, the grouping matrix can be generalized into the higher rank tensor. Based on the tensor-form, the transformation rule can be written as ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
{
"id": "2209.02939_all_33",
"text": " M~μ1⋯μk=Sμ1ν1⋯SμkνkMν1⋯νksubscript~𝑀subscript𝜇1⋯subscript𝜇𝑘superscriptsubscript𝑆subscript𝜇1subscript𝜈1⋯superscriptsubscript𝑆subscript𝜇𝑘subscript𝜈𝑘subscript𝑀subscript𝜈1⋯subscript𝜈𝑘\\tilde{M}_{\\mu_{1}\\cdots\\mu_{k}}=S_{\\mu_{1}}^{\\phantom{\\mu_{1}}\\nu_{1}}\\cdots S_{\\mu_{k}}^{\\phantom{\\mu_{k}}\\nu_{k}}M_{\\nu_{1}\\cdots\\nu_{k}} (20) ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
},
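A minimal numerical sketch of the transformation rule in Eq. (20), assuming nothing beyond NumPy: for k = 2 it reduces to the familiar congruence M~ = S M S^T, and the same einsum pattern extends to a hypothetical rank-3 grouping tensor, with one factor of S per index.

```python
import numpy as np

# Eq. (20) for k = 2 reduces to the matrix congruence M~ = S M S^T.
def transform_rank2(M, S):
    return np.einsum("am,bn,mn->ab", S, S, M)

# The same pattern for a rank-3 grouping tensor: contract each index with S.
def transform_rank3(M, S):
    return np.einsum("am,bn,co,mno->abc", S, S, S, M)

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 6))          # transformation acting on each index
M2 = rng.standard_normal((6, 6))         # pairwise grouping matrix
M3 = rng.standard_normal((6, 6, 6))      # hypothetical third-order grouping tensor

assert np.allclose(transform_rank2(M2, S), S @ M2 @ S.T)
print(transform_rank3(M3, S).shape)      # (4, 4, 4)
```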
{
"id": "2209.02939_all_34",
"text": " Note that to satisfy the above transformation rule, the following conditions are required. One is that selecting nodes combination should be as same as selecting nodes set and size of the set should fixed into the number of nodes in a group. The other is that the classification result of the node set should be retained the same for any subset in the node set. This concept may have a connection to hypergraph configurations. However if we raise the nodes numbers above 222, required computation power increases by a huge amount, since the combination number grows exponentially until the number of nodes hits N/2𝑁2N/2. Therefore, practically it is a difficult task to test the higher rank version of our algorithm, yet it could be useful for learning datasets with higher order connections. ",
"title": "Grouping-matrix based Graph Pooling with Adaptive Number of Clusters"
}
] |
What type of scenes were used for training?
|
The proposed model is trained on the Cityscapes dataset and then fine-tuned on KITTI scenes [24]. The training split used is from [7] [28].
|
[
24,
28
] |
[
{
"id": "1704.07813_all_0",
"text": " Humans are remarkably capable of inferring ego-motion and the 3D structure of a scene even over short timescales. For instance, in navigating along a street, we can easily locate obstacles and react quickly to avoid them. Years of research in geometric computer vision has failed to recreate similar modeling capabilities for real-world scenes (e.g., where non-rigidity, occlusion and lack of texture are present). So why do humans excel at this task? One hypothesis is that we develop a rich, structural understanding of the world through our past visual experience that has largely consisted of moving around and observing vast numbers of scenes and developing consistent modeling of our observations. From millions of such observations, we have learned about the regularities of the world—roads are flat, buildings are straight, cars are supported by roads etc., and we can apply this knowledge when perceiving a new scene, even from a single monocular image. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_1",
"text": " In this work, we mimic this approach by training a model that observes sequences of images and aims to explain its observations by predicting likely camera motion and the scene structure (as shown in Fig. 1). We take an end-to-end approach in allowing the model to map directly from input pixels to an estimate of ego-motion (parameterized as 6-DoF transformation matrices) and the underlying scene structure (parameterized as per-pixel depth maps under a reference view). We are particularly inspired by prior work that has suggested view synthesis as a metric and recent work that tackles the calibrated, multi-view 3D case in an end-to-end framework . Our method is unsupervised, and can be trained simply using sequences of images with no manual labeling or even camera motion information. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_2",
"text": " Our approach builds upon the insight that a geometric view synthesis system only performs consistently well when its intermediate predictions of the scene geometry and the camera poses correspond to the physical ground-truth. While imperfect geometry and/or pose estimation can cheat with reasonable synthesized views for certain types of scenes (e.g., textureless), the same model would fail miserably when presented with another set of scenes with more diverse layout and appearance structures. Thus, our goal is to formulate the entire view synthesis pipeline as the inference procedure of a convolutional neural network, so that by training the network on large-scale video data for the ‘meta’-task of view synthesis the network is forced to learn about intermediate tasks of depth and camera pose estimation in order to come up with a consistent explanation of the visual world. Empirical evaluation on the KITTI benchmark demonstrates the effectiveness of our approach on both single-view depth and camera pose estimation. Our code will be made available at https://github.com/tinghuiz/SfMLearner. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_3",
"text": " The simultaneous estimation of structure and motion is a well studied problem with an established toolchain of techniques (12, 50, 38). Whilst the traditional toolchain is effective and efficient in many cases, its reliance on accurate image correspondence can cause problems in areas of low texture, complex geometry/photometry, thin structures, and occlusions. To address these issues, several of the pipeline stages have been recently tackled using deep learning, e.g., feature matching , pose estimation , and stereo (10, 27, 53). These learning-based techniques are attractive in that they are able to leverage external supervision during training, and potentially overcome the above issues when applied to test data. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_4",
"text": " One important application of geometric scene understanding is the task of novel view synthesis, where the goal is to synthesize the appearance of the scene seen from novel camera viewpoints. A classic paradigm for view synthesis is to first either estimate the underlying 3D geometry explicitly or establish pixel correspondence among input views, and then synthesize the novel views by compositing image patches from the input views (e.g., (4, 55, 43, 6, 9)). Recently, end-to-end learning has been applied to reconstruct novel views by transforming the input based on depth or flow, e.g., DeepStereo , Deep3D and Appearance Flows . In these methods, the underlying geometry is represented by quantized depth planes (DeepStereo), probabilistic disparity maps (Deep3D) and view-dependent flow fields (Appearance Flows), respectively. Unlike methods that directly map from input views to the target view (e.g., ), warping-based methods are forced to learn intermediate predictions of geometry and/or correspondence. In this work, we aim to distill such geometric reasoning capability from CNNs trained to perform warping-based view synthesis. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_5",
"text": " Our work is closely related to a line of recent research on learning single-view 3D inference from registered 2D observations. Garg et al. propose to learn a single-view depth estimation CNN using projection errors to a calibrated stereo twin for supervision. Concurrently, Deep3D predicts a second stereo viewpoint from an input image using stereoscopic film footage as training data. A similar approach was taken by Godard et al. , with the addition of a left-right consistency constraint, and a better architecture design that led to impressive performance. Like our approach, these techniques only learn from image observations of the world, unlike methods that require explicit depth for training, e.g., (20, 42, 7, 27, 30). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_6",
"text": " These techniques bear some resemblance to direct methods for structure and motion estimation , where the camera parameters and scene depth are adjusted to minimize a pixel-based error function. However, rather than directly minimizing the error to obtain the estimation, the CNN-based methods only take a gradient step for each batch of input instances, which allows the network to learn an implicit prior from a large corpus of related imagery. Several authors have explored building differentiable rendering operations into their models that are trained in this way, e.g., (19, 29, 34). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_7",
"text": " While most of the above techniques (including ours) are mainly focused on inferring depth maps as the scene geometry output, recent work (e.g., (13, 41, 46, 52)) has also shown success in learning 3D volumetric representations from 2D observations based on similar principles of projective geometry. Fouhey et al. further show that it is even possible to learn 3D inference without 3D labels (or registered 2D views) by utilizing scene regularity. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_8",
"text": " Another line of related work to ours is visual representation learning from video, where the general goal is to design pretext tasks for learning generic visual features from video data that can later be re-purposed for other vision tasks such as object detection and semantic segmentation. Such pretext tasks include ego-motion estimation (2, 24), tracking , temporal coherence , temporal order verification , and object motion mask prediction . While we focus on inferring the explicit scene geometry and ego-motion in this work, intuitively, the internal representation learned by the deep network (especially the single-view depth CNN) should capture some level of semantics that could generalize to other tasks as well. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_9",
"text": " Concurrent to our work, Vijayanarasimhan et al. independently propose a framework for joint training of depth, camera motion and scene motion from videos. While both methods are conceptually similar, ours is focused on the unsupervised aspect, whereas their framework adds the capability to incorporate supervision (e.g., depth, camera motion or scene motion). There are significant differences in how scene dynamics are modeled during training, in which they explicitly solve for object motion whereas our explainability mask discounts regions undergoing motion, occlusion and other factors. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_10",
"text": " Here we propose a framework for jointly training a single-view depth CNN and a camera pose estimation CNN from unlabeled video sequences. Despite being jointly trained, the depth model and the pose estimation model can be used independently during test-time inference. Training examples to our model consist of short image sequences of scenes captured by a moving camera. While our training procedure is robust to some degree of scene motion, we assume that the scenes we are interested in are mostly rigid, i.e., the scene appearance change across different frames is dominated by the camera motion. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_11",
"text": " The key supervision signal for our depth and pose prediction CNNs comes from the task of novel view synthesis: given one input view of a scene, synthesize a new image of the scene seen from a different camera pose. We can synthesize a target view given a per-pixel depth in that image, plus the pose and visibility in a nearby view. As we will show next, this synthesis process can be implemented in a fully differentiable manner with CNNs as the geometry and pose estimation modules. Visibility can be handled, along with non-rigidity and other non-modeled factors, using an “explanability” mask, which we discuss later (Sec. 3.3). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_12",
"text": " Let us denote <I1,…,IN><I_{1},\\ldots,I_{N}> as a training image sequence with one of the frames Itsubscript𝐼𝑡I_{t} being the target view and the rest being the source views Is(1≤s≤N,s≠t)I_{s}(1\\leq s\\leq N,s\\neq t). The view synthesis objective can be formulated as ℒvs=∑s∑p|It(p)−I^s(p)|,subscriptℒ𝑣𝑠subscript𝑠subscript𝑝subscript𝐼𝑡𝑝subscript^𝐼𝑠𝑝\\mathcal{L}_{vs}=\\sum_{s}\\sum_{p}|I_{t}(p)-\\hat{I}_{s}(p)|~{}, (1) where p𝑝p indexes over pixel coordinates, and I^ssubscript^𝐼𝑠\\hat{I}_{s} is the source view Issubscript𝐼𝑠I_{s} warped to the target coordinate frame based on a depth image-based rendering module (described in Sec. 3.2), taking the predicted depth D^tsubscript^𝐷𝑡\\hat{D}_{t}, the predicted 4×4444\\times 4 camera transformation matrix111In practice, the CNN estimates the Euler angles and the 3D translation vector, which are then converted to the transformation matrix. T^t→ssubscript^𝑇→𝑡𝑠\\hat{T}_{t\\rightarrow s} and the source view Issubscript𝐼𝑠I_{s} as input. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
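A minimal numeric sketch of the view synthesis objective in Eq. (1), assuming the inverse-warped source views have already been computed; the toy images are made up.

```python
import numpy as np

def view_synthesis_loss(target, warped_sources):
    """Eq. (1): sum over source views and pixels of |I_t(p) - I_hat_s(p)|."""
    return sum(np.abs(target - w).sum() for w in warped_sources)

# toy 2x2 grayscale example with two warped source views
I_t = np.array([[0.2, 0.4], [0.6, 0.8]])
warped = [I_t + 0.1, I_t - 0.05]          # stand-ins for inverse-warped sources
print(view_synthesis_loss(I_t, warped))   # 0.1*4 + 0.05*4 = 0.6
```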
{
"id": "1704.07813_all_13",
"text": " Note that the idea of view synthesis as supervision has also been recently explored for learning single-view depth estimation (14, 16) and multi-view stereo . However, to the best of our knowledge, all previous work requires posed image sets during training (and testing too in the case of DeepStereo), while our framework can be applied to standard videos without pose information. Furthermore, it predicts the poses as part of the learning framework. See Figure 2 for an illustration of our learning pipeline for depth and pose estimation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_14",
"text": " As indicated in Eq. 1, a key component of our learning framework is a differentiable depth image-based renderer that reconstructs the target view Itsubscript𝐼𝑡I_{t} by sampling pixels from a source view Issubscript𝐼𝑠I_{s} based on the predicted depth map D^tsubscript^𝐷𝑡\\hat{D}_{t} and the relative pose T^t→ssubscript^𝑇→𝑡𝑠\\hat{T}_{t\\rightarrow s}. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_15",
"text": " Let ptsubscript𝑝𝑡p_{t} denote the homogeneous coordinates of a pixel in the target view, and K𝐾K denote the camera intrinsics matrix. We can obtain ptsubscript𝑝𝑡p_{t}’s projected coordinates onto the source view pssubscript𝑝𝑠p_{s} by222For notation simplicity, we omit showing the necessary conversion to homogeneous coordinates along the steps of matrix multiplication. ps∼KT^t→sD^t(pt)K−1ptsimilar-tosubscript𝑝𝑠𝐾subscript^𝑇→𝑡𝑠subscript^𝐷𝑡subscript𝑝𝑡superscript𝐾1subscript𝑝𝑡p_{s}\\sim K\\hat{T}_{t\\rightarrow s}\\hat{D}_{t}(p_{t})K^{-1}p_{t} (2) Notice that the projected coordinates pssubscript𝑝𝑠p_{s} are continuous values. To obtain Is(ps)subscript𝐼𝑠subscript𝑝𝑠I_{s}(p_{s}) for populating the value of I^s(pt)subscript^𝐼𝑠subscript𝑝𝑡\\hat{I}_{s}(p_{t}) (see Figure 3), we then use the differentiable bilinear sampling mechanism proposed in the spatial transformer networks that linearly interpolates the values of the 444-pixel neighbors (top-left, top-right, bottom-left, and bottom-right) of pssubscript𝑝𝑠p_{s} to approximate Is(ps)subscript𝐼𝑠subscript𝑝𝑠I_{s}(p_{s}), i.e. I^s(pt)=Is(ps)=∑i∈{t,b},j∈{l,r}wijIs(psij),subscript^𝐼𝑠subscript𝑝𝑡subscript𝐼𝑠subscript𝑝𝑠subscriptformulae-sequence𝑖𝑡𝑏𝑗𝑙𝑟superscript𝑤𝑖𝑗subscript𝐼𝑠superscriptsubscript𝑝𝑠𝑖𝑗\\hat{I}_{s}(p_{t})=I_{s}(p_{s})=\\sum_{i\\in\\{t,b\\},j\\in\\{l,r\\}}w^{ij}I_{s}(p_{s}^{ij}), where wijsuperscript𝑤𝑖𝑗w^{ij} is linearly proportional to the spatial proximity between pssubscript𝑝𝑠p_{s} and psijsuperscriptsubscript𝑝𝑠𝑖𝑗p_{s}^{ij} , and ∑i,jwij=1subscript𝑖𝑗superscript𝑤𝑖𝑗1\\sum_{i,j}w^{ij}=1. A similar strategy is used in for learning to directly warp between different views, while here the coordinates for pixel warping are obtained through projective geometry that enables the factorization of depth and camera pose. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
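A single-pixel sketch of Eq. (2) and the bilinear sampling step, under assumed toy intrinsics and pose; a real implementation would do this densely and differentiably over the whole image.

```python
import numpy as np

def project(pt_xy, depth, K, T_ts):
    """Eq. (2): p_s ~ K T_{t->s} D_t(p_t) K^{-1} p_t for one target pixel."""
    pt_h = np.array([pt_xy[0], pt_xy[1], 1.0])
    cam = depth * (np.linalg.inv(K) @ pt_h)          # back-project to 3D
    cam_s = T_ts @ np.append(cam, 1.0)               # move to the source frame
    pix = K @ cam_s[:3]
    return pix[:2] / pix[2]                          # continuous source coordinates

def bilinear_sample(img, xy):
    """Interpolate img at continuous (x, y) from its four pixel neighbours."""
    x, y = xy
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

K = np.array([[100.0, 0, 64], [0, 100.0, 48], [0, 0, 1]])   # toy intrinsics
T = np.eye(4); T[0, 3] = 0.1                                # small x-translation
p_s = project((70, 50), depth=5.0, K=K, T_ts=T)
img_s = np.random.default_rng(0).random((96, 128))
print(p_s, bilinear_sample(img_s, p_s))
```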
{
"id": "1704.07813_all_16",
"text": " Note that when applied to monocular videos the above view synthesis formulation implicitly assumes 1) the scene is static without moving objects; 2) there is no occlusion/disocclusion between the target view and the source views; 3) the surface is Lambertian so that the photo-consistency error is meaningful. If any of these assumptions are violated in a training sequence, the gradients could be corrupted and potentially inhibit training. To improve the robustness of our learning pipeline to these factors, we additionally train a explainability prediction network (jointly and simultaneously with the depth and pose networks) that outputs a per-pixel soft mask E^ssubscript^𝐸𝑠\\hat{E}_{s} for each target-source pair, indicating the network’s belief in where direct view synthesis will be successfully modeled for each target pixel. Based on the predicted E^ssubscript^𝐸𝑠\\hat{E}_{s}, the view synthesis objective is weighted correspondingly by ℒvs=∑<I1,…,IN>∈𝒮∑pE^s(p)|It(p)−I^s(p)|.subscriptℒ𝑣𝑠subscriptabsentsubscript𝐼1…subscript𝐼𝑁absent𝒮subscript𝑝subscript^𝐸𝑠𝑝subscript𝐼𝑡𝑝subscript^𝐼𝑠𝑝\\mathcal{L}_{vs}=\\sum_{<I_{1},\\ldots,I_{N}>\\in\\mathcal{S}}\\sum_{p}\\hat{E}_{s}(p)|I_{t}(p)-\\hat{I}_{s}(p)|~{}. (3) Since we do not have direct supervision for E^ssubscript^𝐸𝑠\\hat{E}_{s}, training with the above loss would result in a trivial solution of the network always predicting E^ssubscript^𝐸𝑠\\hat{E}_{s} to be zero, which perfectly minimizes the loss. To resolve this, we add a regularization term ℒreg(E^s)subscriptℒ𝑟𝑒𝑔subscript^𝐸𝑠\\mathcal{L}_{reg}(\\hat{E}_{s}) that encourages nonzero predictions by minimizing the cross-entropy loss with constant label 111 at each pixel location. In other words, the network is encouraged to minimize the view synthesis objective, but allowed a certain amount of slack for discounting the factors not considered by the model. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
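A small sketch of the explainability-weighted loss in Eq. (3) together with the regularizer; the cross-entropy against a constant label of 1 is written in its simplified positive-label form, and the example mask and images are made up.

```python
import numpy as np

def masked_view_synthesis_loss(target, warped, mask, lam_e=0.2):
    """Eq. (3) plus the regularizer that keeps the mask from collapsing to zero."""
    photometric = (mask * np.abs(target - warped)).sum()
    # cross-entropy against a constant label of 1 encourages nonzero mask values
    reg = -np.log(np.clip(mask, 1e-6, 1.0)).mean()
    return photometric + lam_e * reg

I_t = np.array([[0.2, 0.4], [0.6, 0.8]])
I_hat = I_t + 0.1
E_s = np.array([[1.0, 1.0], [0.2, 1.0]])   # low confidence at one pixel
print(masked_view_synthesis_loss(I_t, I_hat, E_s))
```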
{
"id": "1704.07813_all_17",
"text": " One remaining issue with the above learning pipeline is that the gradients are mainly derived from the pixel intensity difference between I(pt)𝐼subscript𝑝𝑡I(p_{t}) and the four neighbors of I(ps)𝐼subscript𝑝𝑠I(p_{s}), which would inhibit training if the correct pssubscript𝑝𝑠p_{s} (projected using the ground-truth depth and pose) is located in a low-texture region or far from the current estimation. This is a well known issue in motion estimation . Empirically, we found two strategies to be effective for overcoming this issue: 1) using a convolutional encoder-decoder architecture with a small bottleneck for the depth network that implicitly constrains the output to be globally smooth and facilitates gradients to propagate from meaningful regions to nearby regions; 2) explicit multi-scale and smoothness loss (e.g., as in (14, 16)) that allows gradients to be derived from larger spatial regions directly. We adopt the second strategy in this work as it is less sensitive to architectural choices. For smoothness, we minimize the L1subscript𝐿1L_{1} norm of the second-order gradients for the predicted depth maps (similar to ). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
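A sketch of the second-order depth smoothness term described above, assuming a plain NumPy depth map; a linear depth ramp incurs zero penalty while a noisy map does not.

```python
import numpy as np

def depth_smoothness_loss(depth):
    """L1 norm of second-order spatial gradients of a predicted depth map."""
    ddx = depth[:, 2:] - 2.0 * depth[:, 1:-1] + depth[:, :-2]   # d^2/dx^2
    ddy = depth[2:, :] - 2.0 * depth[1:-1, :] + depth[:-2, :]   # d^2/dy^2
    return np.abs(ddx).mean() + np.abs(ddy).mean()

ramp = np.tile(np.arange(5.0), (5, 1))       # linear ramp: zero second derivative
print(depth_smoothness_loss(ramp))           # 0.0
print(depth_smoothness_loss(np.random.default_rng(0).random((5, 5))))  # > 0
```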
{
"id": "1704.07813_all_18",
"text": " Our final objective becomes ℒfinal=∑lℒvsl+λsℒsmoothl+λe∑sℒreg(E^sl),subscriptℒ𝑓𝑖𝑛𝑎𝑙subscript𝑙superscriptsubscriptℒ𝑣𝑠𝑙subscript𝜆𝑠subscriptsuperscriptℒ𝑙𝑠𝑚𝑜𝑜𝑡ℎsubscript𝜆𝑒subscript𝑠subscriptℒ𝑟𝑒𝑔superscriptsubscript^𝐸𝑠𝑙\\mathcal{L}_{final}=\\sum_{l}\\mathcal{L}_{vs}^{l}+\\lambda_{s}\\mathcal{L}^{l}_{smooth}+\\lambda_{e}\\sum_{s}\\mathcal{L}_{reg}(\\hat{E}_{s}^{l})~{}, (4) where l𝑙l indexes over different image scales, s𝑠s indexes over source images, and λssubscript𝜆𝑠\\lambda_{s} and λesubscript𝜆𝑒\\lambda_{e} are the weighting for the depth smoothness loss and the explainability regularization, respectively. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_19",
"text": " For single-view depth prediction, we adopt the DispNet architecture proposed in that is mainly based on an encoder-decoder design with skip connections and multi-scale side predictions (see Figure 4). All conv layers are followed by ReLU activation except for the prediction layers, where we use 1/(α∗sigmoid(x)+β)1𝛼𝑠𝑖𝑔𝑚𝑜𝑖𝑑𝑥𝛽1/(\\alpha*sigmoid(x)+\\beta) with α=10𝛼10\\alpha=10 and β=0.01𝛽0.01\\beta=0.01 to constrain the predicted depth to be always positive within a reasonable range. We also experimented with using multiple views as input to the depth network, but did not find this to improve the results. This is in line with the observations in , where optical flow constraints need to be enforced to utilize multiple views effectively. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
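A sketch of the prediction-layer activation 1/(α·sigmoid(x)+β) with α=10 and β=0.01, showing that it keeps the predicted depth positive and bounded (roughly between 0.1 and 100).

```python
import numpy as np

def depth_activation(x, alpha=10.0, beta=0.01):
    """Map an unconstrained network output x to a positive, bounded depth."""
    return 1.0 / (alpha / (1.0 + np.exp(-x)) + beta)

x = np.array([-10.0, 0.0, 10.0])
print(depth_activation(x))   # roughly [95.7, 0.20, 0.10]; depth stays in (0.0999, 100)
```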
{
"id": "1704.07813_all_20",
"text": " The input to the pose estimation network is the target view concatenated with all the source views (along the color channels), and the outputs are the relative poses between the target view and each of the source views. The network consists of 777 stride-2 convolutions followed by a 1×1111\\times 1 convolution with 6∗(N−1)6𝑁16*(N-1) output channels (corresponding to 333 Euler angles and 333-D translation for each source view). Finally, global average pooling is applied to aggregate predictions at all spatial locations. All conv layers are followed by ReLU except for the last layer where no nonlinear activation is applied. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_21",
"text": " The explainability prediction network shares the first five feature encoding layers with the pose network, followed by 555 deconvolution layers with multi-scale side predictions. All conv/deconv layers are followed by ReLU except for the prediction layers with no nonlinear activation. The number of output channels for each prediction layer is 2∗(N−1)2𝑁12*(N-1), with every two channels normalized by softmax to obtain the explainability prediction for the corresponding source-target pair (the second channel after normalization is E^ssubscript^𝐸𝑠\\hat{E}_{s} and used in computing the loss in Eq. 3). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_22",
"text": " Here we evaluate the performance of our system, and compare with prior approaches on single-view depth as well as ego-motion estimation. We mainly use the KITTI dataset for benchmarking, but also use the Make3D dataset for evaluating cross-dataset generalization ability. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_23",
"text": " We implemented the system using the publicly available TensorFlow framework. For all the experiments, we set λs=0.5/lsubscript𝜆𝑠0.5𝑙\\lambda_{s}=0.5/l (l𝑙l is the downscaling factor for the corresponding scale) and λe=0.2subscript𝜆𝑒0.2\\lambda_{e}=0.2. During training, we used batch normalization for all the layers except for the output layers, and the Adam optimizer with β1=0.9subscript𝛽10.9\\beta_{1}=0.9, β2=0.999subscript𝛽20.999\\beta_{2}=0.999, learning rate of 0.00020.00020.0002 and mini-batch size of 444. The training typically converges after about 150K150𝐾150K iterations. All the experiments are performed with image sequences captured with a monocular camera. We resize the images to 128×416128416128\\times 416 during training, but both the depth and pose networks can be run fully-convolutionally for images of arbitrary size at test time. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_24",
"text": " We train our system on the split provided by , and exclude all the frames from the testing scenes as well as static sequences with mean optical flow magnitude less than 111 pixel for training. We fix the length of image sequences to be 333 frames, and treat the central frame as the target view and the ±1plus-or-minus1\\pm 1 frames as the source views. We use images captured by both color cameras, but treated them independently when forming training sequences. This results in a total of 44,5404454044,540 sequences, out of which we use 40,1094010940,109 for training and 4,43144314,431 for validation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_25",
"text": " To the best of our knowledge, no previous systems exist that learn single-view depth estimation in an unsupervised manner from monocular videos. Nonetheless, here we provide comparison with prior methods with depth supervision and recent methods that use calibrated stereo images (i.e. with pose supervision) for training (14, 16). Since the depth predicted by our method is defined up to a scale factor, for evaluation we multiply the predicted depth maps by a scalar s^^𝑠\\hat{s} that matches the median with the ground-truth, i.e. s^=median(Dgt)/median(Dpred)^𝑠𝑚𝑒𝑑𝑖𝑎𝑛subscript𝐷𝑔𝑡𝑚𝑒𝑑𝑖𝑎𝑛subscript𝐷𝑝𝑟𝑒𝑑\\hat{s}=median(D_{gt})/median(D_{pred}). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
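A sketch of the median scaling used for evaluation, s = median(D_gt)/median(D_pred); the toy depths are made up.

```python
import numpy as np

def median_scale(d_pred, d_gt):
    """Scale-align a prediction that is only defined up to scale."""
    s = np.median(d_gt) / np.median(d_pred)
    return s * d_pred

d_gt = np.array([2.0, 4.0, 8.0, 16.0])
d_pred = 0.1 * d_gt                      # correct structure, wrong global scale
print(median_scale(d_pred, d_gt))        # recovers [2., 4., 8., 16.]
```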
{
"id": "1704.07813_all_26",
"text": " Similar to , we also experimented with first pre-training the system on the larger Cityscapes dataset (sample predictions are shown in Figure 5), and then fine-tune on KITTI, which results in slight performance improvement. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_27",
"text": " Here we evaluate the single-view depth performance on the 697697697 images from the test split of . As shown in Table 1, our unsupervised method performs comparably with several supervised methods (e.g. Eigen et al. and Garg et al. ), but falls short of concurrent work by Godard et al. that uses calibrated stereo images (i.e. with pose supervision) with left-right cycle consistency loss for training. For future work, it would be interesting to see if incorporating the similar cycle consistency loss into our framework could further improve the results. Figure 6 provides examples of visual comparison between our results and some supervised baselines over a variety of examples. One can see that although trained in an unsupervised manner, our results are comparable to that of the supervised baselines, and sometimes preserve the depth boundaries and thin structures such as trees and street lights better. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_28",
"text": " We show sample predictions made by our initial Cityscapes model and the final model (pre-trained on Cityscapes and then fine-tuned on KITTI) in Figure 7. Due to the domain gap between the two datasets, our Cityscapes model sometimes has difficulty in recovering the complete shape of the car/bushes, and mistakes them with distant objects. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_29",
"text": " We also performed an ablation study of the explainability modeling (see Table 1), which turns out only offering a modest performance boost. This is likely because 1) most of the KITTI scenes are static without significant scene motions, and 2) the occlusion/visibility effects only occur in small regions in sequences across a short time span (333-frames), which make the explainability modeling less essential to the success of training. Nonetheless, our explainability prediction network does seem to capture the factors like scene motion and visibility well (see Sec. 4.3), and could potentially be more important for other more challenging datasets. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_30",
"text": " To evaluate the generalization ability of our single-view depth model, we directly apply our model trained on Cityscapes + KITTI to the Make3D dataset unseen during training. While there still remains a significant performance gap between our method and others supervised using Make3D ground-truth depth (see Table 2), our predictions are able to capture the global scene layout reasonably well without any training on the Make3D images (see Figure 8). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_31",
"text": " To evaluate the performance of our pose estimation network, we applied our system to the official KITTI odometry split (containing 111111 driving sequences with ground truth odometry obtained through the IMU/GPS readings, which we use for evaluation purpose only), and used sequences 000000-080808 for training and 090909-101010 for testing. In this experiment, we fix the length of input image sequences to our system to 555 frames. We compare our ego-motion estimation with two variants of monocular ORB-SLAM (a well-established SLAM system): 1) ORB-SLAM (full), which recovers odometry using all frames of the driving sequence (i.e. allowing loop closure and re-localization), and 2) ORB-SLAM (short), which runs on 555-frame snippets (same as our input setting). Another baseline we compare with is the dataset mean of car motion (using ground-truth odometry) for 555-frame snippets. To resolve scale ambiguity during evaluation, we first optimize the scaling factor for the predictions made by each method to best align with the ground truth, and then measure the Absolute Trajectory Error (ATE) as the metric. ATE is computed on 555-frame snippets and averaged over the full sequence.333For evaluating ORB-SLAM (full) we break down the trajectory of the full sequence into 555-frame snippets with the reference coordinate frame adjusted to the central frame of each snippet. As shown in Table 3 and Fig. 9, our method outperforms both baselines (mean odometry and ORB-SLAM (short)) that share the same input setting as ours, but falls short of ORB-SLAM (full), which leverages whole sequences (159115911591 for seq. 090909 and 120112011201 for seq. 101010) for loop closure and re-localization. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
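A sketch of the scale-aligned ATE evaluation on a snippet: fit a single least-squares scale to the predicted trajectory, then average the per-frame position errors. The trajectories here are made up, and the exact alignment used in the paper's evaluation may differ in detail.

```python
import numpy as np

def ate_with_scale_alignment(pred_xyz, gt_xyz):
    """ATE on a snippet after fitting the single scale that best aligns pred to gt."""
    # least-squares optimal scale: argmin_s || s * pred - gt ||^2
    s = (pred_xyz * gt_xyz).sum() / (pred_xyz * pred_xyz).sum()
    return np.sqrt(((s * pred_xyz - gt_xyz) ** 2).sum(axis=1)).mean()

gt = np.array([[0, 0, 0], [0, 0, 1.0], [0, 0, 2.1], [0, 0, 3.0], [0, 0, 4.2]])
pred = 0.5 * gt + 0.01                    # wrong scale plus a small error
print(ate_with_scale_alignment(pred, gt))
```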
{
"id": "1704.07813_all_32",
"text": " For better understanding of our pose estimation results, we show in Figure 9 the ATE curve with varying amount of side-rotation by the car between the beginning and the end of a sequence. Figure 9 suggests that our method is significantly better than ORB-SLAM (short) when the side-rotation is small (i.e. car mostly driving forward), and comparable to ORB-SLAM (full) across the entire spectrum. The large performance gap between ours and ORB-SLAM (short) suggests that our learned ego-motion could potentially be used as an alternative to the local estimation modules in monocular SLAM systems. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_33",
"text": " We visualize example explainability masks predicted by our network in Figure 10. The first three rows suggest that the network has learned to identify dynamic objects in the scene as unexplainable by our model, and similarly, rows 4–5 are examples of objects that disappear from the frame in subsequent views. The last two rows demonstrate the potential downside of explainability-weighted loss: the depth CNN has low confidence in predicting thin structures well, and tends to mask them as unexplainable. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_34",
"text": " We have presented an end-to-end learning pipeline that utilizes the task of view synthesis for supervision of single-view depth and camera pose estimation. The system is trained on unlabeled videos, and yet performs comparably with approaches that require ground-truth depth or pose for training. Despite good performance on the benchmark evaluation, our method is by no means close to solving the general problem of unsupervised learning of 3D scene structure inference. A number of major challenges are yet to be addressed: 1) our current framework does not explicitly estimate scene dynamics and occlusions (although they are implicitly taken into account by the explainability masks), both of which are critical factors in 3D scene understanding. Direct modeling of scene dynamics through motion segmentation (e.g. (48, 40)) could be a potential solution; 2) our framework assumes the camera intrinsics are given, which forbids the use of random Internet videos with unknown camera types/calibration – we plan to address this in future work; 3) depth maps are a simplified representation of the underlying 3D scene. It would be interesting to extend our framework to learn full 3D volumetric representations (e.g. ). ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_35",
"text": " Another interesting area for future work would be to investigate in more detail the representation learned by our system. In particular, the pose network likely uses some form of image correspondence in estimating the camera motion, whereas the depth estimation network likely recognizes common structural features of scenes and objects. It would be interesting to probe these, and investigate the extent to which our network already performs, or could be re-purposed to perform, tasks such as object detection and semantic segmentation. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
},
{
"id": "1704.07813_all_36",
"text": " We thank our colleagues, Sudheendra Vijayanarasimhan, Susanna Ricco, Cordelia Schmid, Rahul Sukthankar, and Katerina Fragkiadaki for their help. We also thank the anonymous reviewers for their valuable comments. TZ would like to thank Shubham Tulsiani for helpful discussions, and Clement Godard for sharing the evaluation code. This work is also partially funded by Intel/NSF VEC award IIS-1539099. ",
"title": "Unsupervised Learning of Depth and Ego-Motion from Video"
}
] |
Why does negative transfer occur when learning with auxiliary tasks?
|
Negative transfer happens when the learning of an auxiliary task negatively impacts the performance of the primary task [1]. In the case of graph-based tasks, it can happen because the graph structure, such as the number of nodes, edges, and diameter, can be vastly different between domains [15].
|
[
1,
15
] |
[
{
"id": "2007.08294_all_0",
"text": " Graph neural networks have been proven effective to learn representations for various tasks such as node classification , link prediction , and graph classification . The powerful representation yields state-of-the-art performance in a variety of applications including social network analysis , citation network analysis , visual understanding (6, 7), recommender systems , physics , and drug discovery . Despite the wide operating range of graph neural networks, employing auxiliary (pre-text) tasks has been less explored for further improving graph representation learning. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_1",
"text": " Pre-training with an auxiliary task is a common technique for deep neural networks. Indeed, it is the de facto standard step in natural language processing and computer vision to learn a powerful backbone networks such as BERT and ResNet leveraging large datasets such as BooksCorpus , English Wikipedia, and ImageNet . The models trained on the auxiliary task are often beneficial for the primary (target) task of interest. Despite the success of pre-training, few approaches have been generalized to graph-structured data due to their fundamental challenges. First, graph structure (e.g., the number of nodes/edges, and diameter) and its meaning can significantly differ between domains. So the model trained on an auxiliary task can harm generalization on the primary task, i.e., negative transfer . Also, many graph neural networks are transductive approaches. This often makes transfer learning between datasets inherently infeasible. So, pre-training on the target dataset has been proposed using auxiliary tasks: graph kernel , graph reconstruction , and attribute masking . These assume that the auxiliary tasks for pre-training are carefully selected with substantial domain knowledge and expertise in graph characteristics to assist the primary task. Since most graph neural networks operate on homogeneous graphs, which have a single type of nodes and edges, the previous pre-training/auxiliary tasks are not specifically designed for heterogeneous graphs, which have multiple types of nodes and edges. Heterogeneous graphs commonly occur in real-world applications, for instance, a music dataset has multiple types of nodes (e.g., user, song, artist) and multiple types of relations (e.g., user-artist, song-film, song-instrument). ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_2",
"text": " In this paper, we proposed a framework to train a graph neural networks with automatically selected auxiliary self-supervised tasks which assist the target task without additional data and labels. Our approach first generates meta-paths from heterogeneous graphs without manual labeling and train a model with meta-path prediction to assist the primary task such as link prediction and node classification. This can be formulated as a meta-learning problem. Furthermore, our method can be adopted to existing GNNs in a plug-in manner, enhancing the model performance. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_3",
"text": " Our contribution is threefold: (i) We propose a self-supervised learning method on a heterogeneous graph via meta-path prediction without additional data. (ii) Our framework automatically selects meta-paths (auxiliary tasks) to assist the primary task via meta-learning. (iii) We develop Hint Network that helps the learner network to benefit from challenging auxiliary tasks. To the best of our knowledge, this is the first auxiliary task with meta-paths specifically designed for leveraging heterogeneous graph structure. Our experiment shows that meta-path prediction improves the representational power and the gain can be further improved to explicitly optimize the auxiliary tasks for the primary task via meta-learning and the Hint Network, built on various state-of-the-art GNNs. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_4",
"text": " Graph Neural Networks have provided promising results for various tasks (2, 5, 6, 7, 8, 9, 10). Bruna et al. proposed a neural network that performs convolution on the graph domain using the Fourier basis from spectral graph theory. In contrast, non-spectral (spatial) approaches have been developed (2, 20, 21, 22, 23, 24, 25). Inspired by self-supervised learning (26, 27, 28, 29) and pre-training (11, 30) in computer vision and natural language processing, pre-training for GNNs has been recently proposed (16, 18). Recent works show promising results that self-supervised learning can be effective for GNNs (16, 17, 18, 31). Hu et al. have introduced several strategies for pre-training GNNs such as attribute masking and context prediction. Separated from the pre-training and fine-tuning strategy, has studied multi-task learning and analyzed why the pretext tasks are useful for GNNs. However, one problem with both pre-training and multi-task learning strategies is that all the auxiliary tasks are not beneficial for the downstream applications. So, we studied auxiliary learning for GNNs that explicitly focuses on the primary task. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_5",
"text": " Auxiliary Learning is a learning strategy to employ auxiliary tasks to assist the primary task. It is similar to multi-task learning, but auxiliary learning cares only the performance of the primary task. A number of auxiliary learning methods are proposed in a wide range of tasks (32, 33, 34). AC-GAN proposed an auxiliary classifier for generative models. Recently, Meta-Auxiliary Learning proposes an elegant solution to generate new auxiliary tasks by collapsing existing classes. However, it cannot be applicable to some tasks such as link prediction which has only one positive class. Our approach generates meta-paths on heterogeneous graphs to make new labels and trains models to predict meta-paths as auxiliary tasks. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_6",
"text": " Meta-learning aims at learning to learn models efficiently and effectively, and generalizes the learning strategy to new tasks. Meta-learning includes black-box methods to approximate gradients without any information about models (37, 38), optimization-based methods to learn an optimal initialization for adapting new tasks (39, 40, 41), learning loss functions (40, 42) and metric-learning or non-parametric methods for few-shot learning (43, 44, 45). In contrast to classical learning algorithms that generalize across samples, meta-learning generalizes across tasks. In this paper, we use meta-learning to learn a concept across tasks and transfer the knowledge from auxiliary tasks to the primary task. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_7",
"text": " The goal of our framework is to learn with multiple auxiliary tasks to improve the performance of the primary task. In this work, we demonstrate our framework with meta-path predictions as auxiliary tasks. But our framework could be extended to include other auxiliary tasks. The meta-paths capture diverse and meaningful relations between nodes on heterogeneous graphs . However, learning with auxiliary tasks has multiple challenges: identifying useful auxiliary tasks, balancing the auxiliary tasks with the primary task, and converting challenging auxiliary tasks into solvable (and relevant) tasks. To address the challenges, we propose SELf-supervised Auxiliary LeaRning (SELAR). Our framework consists of two main components: 1) learning weight functions to softly select auxiliary tasks and balance them with the primary task via meta-learning, and 2) learning Hint Networks to convert challenging auxiliary tasks into more relevant and solvable tasks to the primary task learner. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_8",
"text": " Most existing graph neural networks have been studied focusing on homogeneous graphs that have a single type of nodes and edges. However, in real-world applications, heterogeneous graphs , which have multiple types of nodes and edges, commonly occur. Learning models on the heterogeneous graphs requires different considerations to effectively represent their node and edge heterogeneity. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_9",
"text": " Heterogeneous graph . Let G=(V,E)𝐺𝑉𝐸G=(V,E) be a graph with a set of nodes V𝑉V and edges E𝐸E. A heterogeneous graph is a graph equipped with a node type mapping function fv:V→𝒯v:subscript𝑓𝑣→𝑉superscript𝒯𝑣f_{v}:V\\rightarrow\\mathcal{T}^{v} and an edge type mapping function fe:E→𝒯e:subscript𝑓𝑒→𝐸superscript𝒯𝑒f_{e}:E\\rightarrow\\mathcal{T}^{e}, where 𝒯vsuperscript𝒯𝑣\\mathcal{T}^{v} is a set of node types and 𝒯esuperscript𝒯𝑒\\mathcal{T}^{e} is a set of edge types. Each node vi∈Vsubscript𝑣𝑖𝑉v_{i}\\in V (and edge eij∈Esubscript𝑒𝑖𝑗𝐸e_{ij}\\in E resp.) has one node type, i.e., fv(vi)∈𝒯vsubscript𝑓𝑣subscript𝑣𝑖superscript𝒯𝑣f_{v}(v_{i})\\in\\mathcal{T}^{v}, (and one edge type fe(eij)∈𝒯esubscript𝑓𝑒subscript𝑒𝑖𝑗superscript𝒯𝑒f_{e}(e_{ij})\\in\\mathcal{T}^{e} resp.). In this paper, we consider the heterogeneous graphs with |𝒯e|>1superscript𝒯𝑒1|\\mathcal{T}^{e}|>1 or |𝒯v|>1superscript𝒯𝑣1|\\mathcal{T}^{v}|>1. When |𝒯e|=1superscript𝒯𝑒1|\\mathcal{T}^{e}|=1 and |𝒯v|=1superscript𝒯𝑣1|\\mathcal{T}^{v}|=1, it becomes a homogeneous graph. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_10",
"text": " Meta-Path (46, 49) is a path on a heterogeneous graph G𝐺G that a sequence of nodes connected with heterogeneous edges, i.e., v1→t1v2→t2…→tlvl+1subscript𝑡1→subscript𝑣1subscript𝑣2subscript𝑡2→…subscript𝑡𝑙→subscript𝑣𝑙1{v}_{1}\\xrightarrow{t_{1}}{v}_{2}\\xrightarrow{t_{2}}\\ldots\\xrightarrow{t_{l}}{v}_{l+1}, where tl∈𝒯esubscript𝑡𝑙superscript𝒯𝑒t_{l}\\in\\mathcal{T}^{e} denotes an l𝑙l-th edge type of the meta-path. The meta-path can be viewed as a composite relation R=t1∘t2…∘tl𝑅subscript𝑡1subscript𝑡2…subscript𝑡𝑙R=t_{1}\\circ t_{2}\\ldots\\circ t_{l} between node v1subscript𝑣1{v}_{1} and vl+1subscript𝑣𝑙1{v}_{l+1}, where R1∘R2subscript𝑅1subscript𝑅2R_{1}\\circ R_{2} denotes the composition of relation R1subscript𝑅1R_{1} and R2subscript𝑅2R_{2}. The definition of meta-path generalizes multi-hop connections and is shown to be useful to analyze heterogeneous graphs. For instance, in Book-Crossing dataset, ‘user-item-written.series-item-user’ indicates that a meta-path that connects users who like the same book series. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_11",
"text": " We introduce meta-path prediction as a self-supervised auxiliary task to improve the representational power of graph neural networks. To our knowledge, the meta-path prediction has not been studied in the context of self-supervised learning for graph neural networks in the literature. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_12",
"text": " Meta-path prediction is similar to link prediction but meta-paths allow heterogeneous composite relations. The meta-path prediction can be achieved in the same manner as link prediction. If two nodes u𝑢u and v𝑣v are connected by a meta-path p𝑝p with the heterogeneous edges (t1,t2,…tℓ)subscript𝑡1subscript𝑡2…subscript𝑡ℓ(t_{1},t_{2},\\ldots t_{\\ell}), then yu,vp=1superscriptsubscript𝑦𝑢𝑣𝑝1y_{u,v}^{p}=1, otherwise yu,vp=0superscriptsubscript𝑦𝑢𝑣𝑝0y_{u,v}^{p}=0. The labels can be generated from a heterogeneous graph without any manual labeling. They can be obtained by Ap=Atl…At2At1subscript𝐴𝑝subscript𝐴subscript𝑡𝑙…subscript𝐴subscript𝑡2subscript𝐴subscript𝑡1A_{p}=A_{t_{l}}\\ldots A_{t_{2}}A_{t_{1}}, where Atsubscript𝐴𝑡A_{t} is the adjacency matrix of edge type t𝑡t. The binarized value at (u,v)𝑢𝑣(u,v) in Apsubscript𝐴𝑝A_{p} indicates whether u𝑢u and v𝑣v are connected with the meta-path p𝑝p. In this paper, we use meta-path prediction as a self-supervised auxiliary task. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
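A sketch of generating meta-path labels from adjacency matrices as described above. The multiplication order here assumes a row-as-source adjacency convention and composes relations left to right, which is an assumption of this sketch; the toy user-item graph is made up.

```python
import numpy as np

def metapath_labels(adjs):
    """Binary labels y^p_{u,v}: is u connected to v by the meta-path (t_1, ..., t_l)?
    Composes the per-edge-type adjacency matrices and binarizes the product."""
    A_p = adjs[0]
    for A_t in adjs[1:]:
        A_p = A_p @ A_t                  # compose relations (row = source node)
    return (A_p > 0).astype(int)

# toy heterogeneous graph: nodes 0-1 are users, nodes 2-3 are items
A_user_item = np.array([[0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]])
A_item_user = A_user_item.T
# meta-path user -> item -> user: users connected through a shared item
print(metapath_labels([A_user_item, A_item_user]))   # users 0 and 1 share item 2
```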
{
"id": "2007.08294_all_13",
"text": " Let 𝐗∈R|V|×d𝐗superscriptR𝑉𝑑\\mathbf{X}\\in\\textbf{R}^{|V|\\times d} and 𝐙∈R|V|×d′𝐙superscriptR𝑉superscript𝑑′\\mathbf{Z}\\in\\textbf{R}^{|V|\\times d^{\\prime}} be input features and their hidden representations learnt by GNN f𝑓f, i.e., 𝐙=f(𝑿;𝐰,𝑨)𝐙𝑓𝑿𝐰𝑨\\mathbf{Z}=f(\\boldsymbol{X};\\mathbf{w},\\boldsymbol{A}), where 𝐰𝐰\\mathbf{w} is the parameter for f𝑓f, and 𝐀∈R|V|×|V|𝐀superscriptR𝑉𝑉\\mathbf{A}\\in\\textbf{R}^{|V|\\times|V|} is the adjacency matrix. Then link prediction and meta-path prediction are obtained by a simple operation as y^u,vt=σ(Φt(zu)⊤Φt(zv)),superscriptsubscript^𝑦𝑢𝑣𝑡𝜎subscriptΦ𝑡superscriptsubscript𝑧𝑢topsubscriptΦ𝑡subscript𝑧𝑣\\displaystyle\\hat{y}_{u,v}^{t}=\\sigma(\\Phi_{t}(z_{u})^{\\top}\\Phi_{t}(z_{v})), (1) where ΦtsubscriptΦ𝑡\\Phi_{t} is the task-specific network for task t∈𝒯𝑡𝒯t\\in\\mathcal{T} and zusubscript𝑧𝑢z_{u} and zvsubscript𝑧𝑣z_{v} are the node embeddings of node u𝑢u and v𝑣v. e.g., Φ0subscriptΦ0\\Phi_{0} (and Φ1subscriptΦ1\\Phi_{1} resp.) for link prediction (and the first type of meta-path prediction resp.). ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_14",
"text": " The architecture is shown in Fig. 1. To optimize the model, as the link prediction, cross entropy is used. The graph neural network f𝑓f is shared by the link prediction and meta-path predictions. As any auxiliary learning methods, the meta-paths (auxiliary tasks) should be carefully chosen and properly weighted so that the meta-path prediction does not compete with link prediction especially when the capacity of GNNs is limited. To address these issues, we propose our framework that automatically selects meta-paths and balances them with the link prediction via meta-learning. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_15",
"text": " Our framework SELAR is learning to learn a primary task with multiple auxiliary tasks to assist the primary task. This can be formally written as min𝐰,Θ𝔼(ℒpr(𝐰∗(Θ)))(x,y)∼Dpr s.t. 𝐰∗(Θ)=argmin𝐰𝔼(ℒpr+au(𝐰;Θ))(x,y)∼Dpr+au,\\displaystyle\\min_{\\mathbf{w},\\Theta}\\;\\;\\underset{(x,y)\\sim D^{pr}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}{\\text{\\large$\\mathbb{E}$}\\;\\;\\left(\\;\\;\\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta))\\;\\;\\right)}\\;\\;\\text{ s.t. }\\;\\;\\mathbf{w}^{\\ast}(\\Theta)=\\operatorname*{\\arg\\!\\min}_{\\mathbf{w}}\\underset{(x,y)\\sim D^{pr+au}\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}{\\;\\;\\mathbb{E}\\;\\;\\left(\\;\\;\\mathcal{L}^{pr+au}(\\mathbf{w};\\Theta)\\;\\;\\right)}, (2) where ℒpr(⋅)superscriptℒ𝑝𝑟⋅\\mathcal{L}^{pr}(\\cdot) is the primary task loss function to evaluate the trained model f(x;𝐰∗(Θ))𝑓𝑥superscript𝐰∗Θf(x;\\mathbf{w}^{\\ast}(\\Theta)) on meta-data (a validation for meta-learning ) Dprsuperscript𝐷𝑝𝑟D^{pr} and ℒpr+ausuperscriptℒ𝑝𝑟𝑎𝑢\\mathcal{L}^{pr+au} is the loss function to train a model on training data Dpr+ausuperscript𝐷𝑝𝑟𝑎𝑢D^{pr+au} with the primary and auxiliary tasks. To avoid cluttered notation, f𝑓f, x𝑥x, and y𝑦y are omitted. Each task 𝒯tsubscript𝒯𝑡\\mathcal{T}_{t} has Ntsubscript𝑁𝑡N_{t} samples and 𝒯0subscript𝒯0\\mathcal{T}_{0} and {𝒯t}t=1Tsuperscriptsubscriptsubscript𝒯𝑡𝑡1𝑇\\{\\mathcal{T}_{t}\\}_{t=1}^{T} denote the primary and auxiliary tasks respectively. The proposed formulation in Eq. (2) learns how to assist the primary task by optimizing ΘΘ\\Theta via meta-learning. The nested optimization problem given ΘΘ\\Theta is a regular training with properly adjusted loss functions to balance the primary and auxiliary tasks. The formulation can be more specifically written as min𝐰,Θsubscript𝐰Θ\\displaystyle\\min_{\\mathbf{w},\\Theta} ∑i=1M01M0ℓ0(yi(0,meta),f(xi(0,meta);𝐰∗(Θ))\\displaystyle\\sum_{i=1}^{M_{0}}\\frac{1}{M_{0}}\\ell^{0}(y_{i}^{(0,meta)},f(x_{i}^{(0,meta)};\\mathbf{w}^{\\ast}(\\Theta)) (3) s.t. 𝐰∗(Θ)=argmin𝐰∑t=0T∑i=1Nt1Nt𝒱(ξi(t,train);Θ)ℓt(yi(t,train),ft(xi(t,train);𝐰)),superscript𝐰∗Θsubscript𝐰superscriptsubscript𝑡0𝑇superscriptsubscript𝑖1subscript𝑁𝑡1subscript𝑁𝑡𝒱subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖Θsuperscriptℓ𝑡superscriptsubscript𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛superscript𝑓𝑡superscriptsubscript𝑥𝑖𝑡𝑡𝑟𝑎𝑖𝑛𝐰\\displaystyle\\mathbf{w}^{\\ast}(\\Theta)=\\operatorname*{\\arg\\!\\min}_{\\mathbf{w}}\\sum_{t=0}^{T}\\sum_{i=1}^{N_{t}}\\frac{1}{N_{t}}\\mathcal{V}(\\xi^{(t,train)}_{i};\\Theta)\\ell^{t}(y_{i}^{(t,train)},f^{t}(x_{i}^{(t,train)};\\mathbf{w})), (4) where ℓtsuperscriptℓ𝑡\\ell^{t} and ftsuperscript𝑓𝑡f^{t} denote the loss function and the model for task t𝑡t. We overload ℓtsuperscriptℓ𝑡\\ell^{t} with its function value, i.e., ℓt=ℓt(yi(t,train),ft(xi(t,train);𝐰))superscriptℓ𝑡superscriptℓ𝑡superscriptsubscript𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛superscript𝑓𝑡superscriptsubscript𝑥𝑖𝑡𝑡𝑟𝑎𝑖𝑛𝐰\\ell^{t}=\\ell^{t}(y_{i}^{(t,train)},f^{t}(x_{i}^{(t,train)};\\mathbf{w})). ξi(t,train)subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖\\xi^{(t,train)}_{i} is the embedding vector of ithsubscript𝑖𝑡ℎi_{th} sample for task t𝑡t. It is the concatenation of one-hot representation of task types, the label of the sample (positive/negative), and its loss value, i.e., ξi(t,train)=(ℓt;et;yi(t,train))∈RT+2subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖superscriptℓ𝑡subscript𝑒𝑡superscriptsubscript𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛superscriptR𝑇2\\xi^{(t,train)}_{i}=\\left(\\ell^{t};e_{t};y_{i}^{(t,train)}\\right)\\in\\textbf{R}^{T+2}. 
To derive our learning algorithm, we first shorten the objective function in Eq. (3) and Eq. (4) as ℒpr(𝐰∗(Θ))superscriptℒ𝑝𝑟superscript𝐰∗Θ\\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) and ℒpr+au(𝐰;Θ)superscriptℒ𝑝𝑟𝑎𝑢𝐰Θ\\mathcal{L}^{pr+au}(\\mathbf{w};\\Theta). This is equivalent to Eq. (2) without expectation. Then, our formulation is given as min𝐰,Θℒpr(𝐰∗(Θ)) s.t. 𝐰∗(Θ)=argmin𝐰ℒpr+au(𝐰;Θ),subscript𝐰Θsuperscriptℒ𝑝𝑟superscript𝐰∗Θ s.t. superscript𝐰∗Θsubscript𝐰superscriptℒ𝑝𝑟𝑎𝑢𝐰Θ\\min_{\\mathbf{w},\\Theta}\\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta))\\;\\;\\text{ s.t. }\\mathbf{w}^{\\ast}(\\Theta)=\\operatorname*{\\arg\\!\\min}_{\\mathbf{w}}\\mathcal{L}^{pr+au}(\\mathbf{w};\\Theta), (5) To circumvent the difficulty of the bi-level optimization, as previous works (39, 40) in meta-learning we approximate it with the updated parameters 𝐰^^𝐰\\hat{\\mathbf{w}} using the gradient descent update as 𝐰∗(Θ)≈𝐰^k(Θk)=𝐰k−α∇𝐰ℒpr+au(𝐰k;Θk),superscript𝐰∗Θsuperscript^𝐰𝑘superscriptΘ𝑘superscript𝐰𝑘𝛼subscript∇𝐰superscriptℒ𝑝𝑟𝑎𝑢superscript𝐰𝑘superscriptΘ𝑘\\displaystyle\\mathbf{w}^{\\ast}(\\Theta)\\approx\\hat{\\mathbf{w}}^{k}(\\Theta^{k})=\\mathbf{w}^{k}-\\alpha\\nabla_{\\mathbf{w}}\\mathcal{L}^{pr+au}(\\mathbf{w}^{k};\\Theta^{k}), (6) where α𝛼\\alpha is the learning rate for 𝐰𝐰\\mathbf{w}. We do not numerically evaluate 𝐰^k(Θ)superscript^𝐰𝑘Θ\\hat{\\mathbf{w}}^{k}(\\Theta) instead we plug the computational graph of 𝐰^ksuperscript^𝐰𝑘\\hat{\\mathbf{w}}^{k} in ℒpr(𝐰∗(Θ))superscriptℒ𝑝𝑟superscript𝐰∗Θ\\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta)) to optimize ΘΘ\\Theta. Let ∇Θℒpr(𝐰∗(Θk))subscript∇Θsuperscriptℒ𝑝𝑟superscript𝐰∗superscriptΘ𝑘\\nabla_{\\Theta}\\mathcal{L}^{pr}(\\mathbf{w}^{\\ast}(\\Theta^{k})) be the gradient evaluated at ΘksuperscriptΘ𝑘\\Theta^{k}. Then updating parameters ΘΘ\\Theta is given as Θk+1=Θk−β∇Θℒpr(𝐰^k(Θk)),superscriptΘ𝑘1superscriptΘ𝑘𝛽subscript∇Θsuperscriptℒ𝑝𝑟superscript^𝐰𝑘superscriptΘ𝑘\\displaystyle\\Theta^{k+1}=\\Theta^{k}-\\beta\\nabla_{\\Theta}\\mathcal{L}^{pr}(\\hat{\\mathbf{w}}^{k}(\\Theta^{k})), (7) where β𝛽\\beta is the learning rate for ΘΘ\\Theta. This update allows softly selecting useful auxiliary tasks (meta-paths) and balance them with the primary task to improve the performance of the primary task. Without balancing tasks with the weighting function 𝒱(⋅;Θ)𝒱⋅Θ\\mathcal{V}(\\cdot;\\Theta), auxiliary tasks can dominate training and degrade the performance of the primary task. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_16",
"text": " The model parameters 𝐰ksuperscript𝐰𝑘\\mathbf{w}^{k} for tasks can be updated with optimized Θk+1superscriptΘ𝑘1\\Theta^{k+1} in (7) as 𝐰k+1=𝐰k−α∇𝐰ℒpr+au(𝐰k;Θk+1).superscript𝐰𝑘1superscript𝐰𝑘𝛼subscript∇𝐰superscriptℒ𝑝𝑟𝑎𝑢superscript𝐰𝑘superscriptΘ𝑘1\\displaystyle\\mathbf{w}^{k+1}=\\mathbf{w}^{k}-\\alpha\\nabla_{\\mathbf{w}}\\mathcal{L}^{pr+au}(\\mathbf{w}^{k};\\Theta^{k+1}). (8) Remarks. The proposed formulation can suffer from the meta-overfitting (50, 51) meaning that the parameters ΘΘ\\Theta to learn weights for softly selecting meta-paths and balancing the tasks with the primary task can overfit to the small meta-dataset. In our experiment, we found that the overfitting can be alleviated by meta-validation sets . To learn ΘΘ\\Theta that is generalizable across meta-training sets, we optimize ΘΘ\\Theta across k𝑘k different meta-datasets like k𝑘k-fold cross validation using the following equation: Θk+1=Θk−β𝔼(∇Θℒpr(𝐰^k(Θk))),Dpr(meta)∼CV\\displaystyle\\Theta^{k+1}\\;=\\;\\underset{D^{pr(meta)}\\sim CV\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;}{\\Theta^{k}\\;-\\;\\;\\beta\\;\\;\\mathbb{E}\\left(\\;\\nabla_{\\Theta}\\mathcal{L}^{pr}(\\hat{\\mathbf{w}}^{k}(\\Theta^{k}))\\;\\right),} (9) where Dpr(meta)∼CVsimilar-tosuperscript𝐷𝑝𝑟𝑚𝑒𝑡𝑎𝐶𝑉D^{pr(meta)}\\sim CV is a meta-dataset from cross validation. We used 3-fold cross validation and the gradients of ΘΘ\\Theta w.r.t different meta-datasets are averaged to update ΘksuperscriptΘ𝑘\\Theta^{k}, see Algorithm 1. The cross validation is crucial to alleviate meta-overfitting and more discussion is Section 4.3. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_17",
"text": " Meta-path prediction is generally more challenging than link prediction and node classification since it requires the understanding of long-range relations across heterogeneous nodes. The meta-path prediction gets more difficult when mini-batch training is inevitable due to the size of datasets or models. Within a mini-batch, important nodes and edges for meta-paths are not available. Also, a small learner network, e.g., two-layer GNNs, with a limited receptive field, inherently cannot capture long-range relations. The challenges can hinder representation learning and damage the generalization of the primary task. We proposed a Hint Network (HintNet) which makes the challenge tasks more solvable by correcting the answer with more information at the learner’s need. Specifically, in our experiments, the HintNet corrects the answer of the learner with its own answer from the augmented graph with hub nodes, see Fig. 2. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_18",
"text": " The amount of help (correction) by HintNet is optimized maximizing the learner’s gain. Let 𝒱H(⋅)subscript𝒱𝐻⋅\\mathcal{V}_{H}(\\cdot) and ΘHsubscriptΘ𝐻\\Theta_{H} be a weight function to determine the amount of hint and its parameters which are optimized by meta-learning. Then, our formulation with HintNet is given as min𝐰,Θ∑i=1M01M0ℓ0(yi(0,meta),f(xi(0,meta);𝐰∗(Θ,ΘH)))subscript𝐰Θsuperscriptsubscript𝑖1subscript𝑀01subscript𝑀0superscriptℓ0superscriptsubscript𝑦𝑖0𝑚𝑒𝑡𝑎𝑓superscriptsubscript𝑥𝑖0𝑚𝑒𝑡𝑎superscript𝐰∗ΘsubscriptΘ𝐻\\displaystyle\\min_{\\mathbf{w},\\Theta}\\sum_{i=1}^{M_{0}}\\frac{1}{M_{0}}\\ell^{0}(y_{i}^{(0,meta)},f(x_{i}^{(0,meta)};\\mathbf{w}^{\\ast}(\\Theta,\\Theta_{H}))) (10) s.t. 𝐰∗(Θ)=argmin𝐰∑t=0T∑i=1Nt1Nt𝒱(ξi(t,train),ℓt;Θ)ℓt(yi(t,train),y^i(t,train)(ΘH)),s.t. superscript𝐰∗Θsubscript𝐰superscriptsubscript𝑡0𝑇superscriptsubscript𝑖1subscript𝑁𝑡1subscript𝑁𝑡𝒱subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖superscriptℓ𝑡Θsuperscriptℓ𝑡superscriptsubscript𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛superscriptsubscript^𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛subscriptΘ𝐻\\displaystyle\\text{s.t. }\\mathbf{w}^{\\ast}(\\Theta)=\\operatorname*{\\arg\\!\\min}_{\\mathbf{w}}\\sum_{t=0}^{T}\\sum_{i=1}^{N_{t}}\\frac{1}{N_{t}}\\mathcal{V}(\\xi^{(t,train)}_{i},\\ell^{t};\\Theta)\\ell^{t}(y_{i}^{(t,train)},\\hat{y}_{i}^{(t,train)}(\\Theta_{H})), (11) where y^i(t,train)(ΘH)superscriptsubscript^𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛subscriptΘ𝐻\\hat{y}_{i}^{(t,train)}(\\Theta_{H}) denotes the convex combination of the learner’s answer and HintNet’s answer, i.e., 𝒱H(ξi(t,train);ΘH)ft(xi(t,train);𝐰)+(1−𝒱H(ξi(t,train);ΘH))fHt(xi(t,train);𝐰)subscript𝒱𝐻subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖subscriptΘ𝐻superscript𝑓𝑡superscriptsubscript𝑥𝑖𝑡𝑡𝑟𝑎𝑖𝑛𝐰1subscript𝒱𝐻subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖subscriptΘ𝐻superscriptsubscript𝑓𝐻𝑡superscriptsubscript𝑥𝑖𝑡𝑡𝑟𝑎𝑖𝑛𝐰\\mathcal{V}_{H}(\\xi^{(t,train)}_{i};\\Theta_{H})f^{t}(x_{i}^{(t,train)};\\mathbf{w})+(1-\\mathcal{V}_{H}(\\xi^{(t,train)}_{i};\\Theta_{H}))f_{H}^{t}(x_{i}^{(t,train)};\\mathbf{w}). The sample embedding is ξi(t,train)=(ℓt;ℓHt;et;yi(t,train))∈RT+3subscriptsuperscript𝜉𝑡𝑡𝑟𝑎𝑖𝑛𝑖superscriptℓ𝑡subscriptsuperscriptℓ𝑡𝐻subscript𝑒𝑡superscriptsubscript𝑦𝑖𝑡𝑡𝑟𝑎𝑖𝑛superscriptR𝑇3\\xi^{(t,train)}_{i}=\\left(\\ell^{t};\\ell^{t}_{H};e_{t};y_{i}^{(t,train)}\\right)\\in\\textbf{R}^{T+3}. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_19",
"text": " We evaluate our proposed methods on four public benchmark datasets on heterogeneous graphs. Our experiments answer the following research questions: Q1. Is meta-path prediction effective for representation learning on heterogeneous graphs? Q2. Can the meta-path prediction be further improved by the proposed methods (e.g., SELAR, HintNet)? Q3. Why are the proposed methods effective, any relation with hard negative mining? ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_20",
"text": " Datasets. We use two public benchmark datasets from different domains for link prediction: Music dataset Last-FM and Book dataset Book-Crossing, released by KGNN-LS , RippleNet . We use two datasets for node classification: citation network datasets ACM and Movie dataset IMDB, used by HAN for node classification tasks. ACM has three types nodes (Paper(P), Author(A), Subject(S)), four types of edges (PA, AP, PS, SP) and labels (categories of papers). IMDB contains three types of nodes (Movie (M), Actor (A), Director (D)), four types (MA, AM, MD, DM) of edges and labels (genres of movies). ACM and IMDB have node features, which are bag-of-words of keywords and plots. Dataset details are in the supplement. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_21",
"text": " Baselines. We evaluate our methods with five graph neural networks : GCN , GAT , GIN , SGConv and GTN . Our methods can be applied to both homogeneous graphs and heterogeneous graphs. We compare four learning strategies: Vanilla, standard training of base models only with the primary task samples; w/o meta-path, learning a primary task with sample weighting function 𝒱(ξ;Θ)𝒱𝜉Θ\\mathcal{V}(\\xi;\\Theta); w/ meta-path, training with the primary task and auxiliary tasks (meta-path prediction) with a standard loss function; SELAR proposed in Section 3.2, learning the primary task with optimized auxiliary tasks by meta-learning; SELAR+Hint introduced in Section 3.3. In all the experiments, we report the mean performance of three independent runs. Implementation details are in the supplement. Our experiments were mainly performed based on NAVER Smart Machine Learning platform (NSML) (54, 55). ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_22",
"text": " We used five types of meta-paths of length 2 to 4 for auxiliary tasks. Table 1 shows that our methods consistently improve link prediction performance for all the GNNs, compared to the Vanilla and the method using Meta-Weight-Net only without meta-paths (denoted as w/o meta-path). Overall, a standard training with meta-paths shows 1.1% improvement on average on both Last-FM and Book-Crossing whereas meta-learning that learns sample weights degrades on average on Last-FM and improves only 0.6% on average on Book-Crossing, e.g., GCN, SGC and GTN on Last-FM and GCN and SGC on Book-Crossing, show degradation 0.2% compared to the standard training (Vanilla). As we expected, SELAR and SELAR with HintNet provide more optimized auxiliary learning resulting in 1.9% and 2.0% absolute improvement on Last-FM and 2.6% and 2.7% on the Book-Crossing dataset. Further, in particular, GIN on Book-crossing, SELAR and SELAR+Hint provide ∼similar-to\\sim5.5% and ∼similar-to\\sim5.3% absolute improvement compared to the vanilla algorithm. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_23",
"text": " Similar to link prediction above, our SELAR consistently enhances node classification performance of all the GNN models and the improvements are more significant on IMDB which is larger than the ACM dataset. We believe that ACM dataset is already saturated and the room for improvement is limited. However, our methods still show small yet consistent improvement over all the architecture on ACM. We conjecture that the efficacy of our proposed methods differs depending on graph structures. However, it is worth noting that introducing meta-path prediction as auxiliary tasks remarkably improves the performance of primary tasks such as link and node prediction with consistency compared to the existing methods. “w/o meta-path”, the meta-learning to learn sample weight function on a primary task shows marginal degradation in five out of eight settings. Remarkably, SELAR improved the F1-score of GAT on the IMDB by (4.46%) compared to the vanilla learning scheme. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_24",
"text": " The effectiveness of meta-path prediction and the proposed learning strategies are answered above. To address the last research question Q3. why the proposed method is effective, we provide analysis on the weighting function 𝒱(ξ;Θ)𝒱𝜉Θ\\mathcal{V}(\\xi;\\Theta) learned by our framework. Also, we show the evidence that meta-overfitting occurs and can be addressed by cross-validation as in Algorithm 1. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_25",
"text": " Weighting function. Our proposed methods can automatically balance multiple auxiliary tasks to improve the primary task. To understand the ability of our method, we analyze the weighting function and the adjusted loss function by the weighting function, i.e.,𝒱(ξ;Θ)𝒱𝜉Θ\\mathcal{V}(\\xi;\\Theta), 𝒱(ξ;Θ)ℓt(y,y^)𝒱𝜉Θsuperscriptℓ𝑡𝑦^𝑦\\mathcal{V}(\\xi;\\Theta)\\ell^{t}(y,\\hat{y}). The positive and negative samples are solid and dash lines respectively. We present the weighting function learnt by SELAR+HintNet for GAT which is the best-performing construction on Last-FM. The weighting function is from the epoch with the best validation performance. Fig. 3 shows that the learnt weighting function attends to hard examples more than easy ones with a small loss range from 0 to 1. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_26",
"text": " Also, the primary task-positive samples are relatively less down weighted than auxiliary tasks even when the samples are easy (i.e., the loss is ranged from 0 to 1). Our adjusted loss 𝒱(ξ;Θ)ℓt(y,y^)𝒱𝜉Θsuperscriptℓ𝑡𝑦^𝑦\\mathcal{V}(\\xi;\\Theta)\\ell^{t}(y,\\hat{y}) is closely related to the focal loss, −(1−pt)γlog(pt)superscript1subscript𝑝𝑡𝛾subscript𝑝𝑡-(1-p_{t})^{\\gamma}\\log(p_{t}). When ℓtsuperscriptℓ𝑡\\ell^{t} is the cross-entropy, it becomes 𝒱(ξ;Θ)log(pt)𝒱𝜉Θsubscript𝑝𝑡\\mathcal{V}(\\xi;\\Theta)\\log(p_{t}), where p𝑝p is the model’s prediction for the correct class and ptsubscript𝑝𝑡p_{t} is defined as p𝑝p if y=1𝑦1y=1, otherwise 1−p1𝑝1-p as . The weighting function differentially evolves over iterations. At the early stage of training, it often focuses on easy examples first and then changes its focus over time. Also, the adjusted loss values by the weighting function learnt by our method differ across tasks. To analyze the contribution of each task, we calculate the average of the task-specific weighted loss on the Last-FM and Book-Crossing datasets. Especially, on the Book-Crossing, our method has more attention to ’user-item’ (primary task) and ‘user-item-literary.series.item-user’ (auxiliary task) which is a meta-path that connects users who like a book series. This implies that two users who like a book series likely have a similar preference. More results and discussion are available in the supplement. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_27",
"text": " Meta cross-validation, i.e., cross-validation for meta-learning, helps to keep weighting function from over-fitting on meta data. Table 3 evidence that our algorithms as other meta-learning methods can overfit to meta-data. As in Algorithm 1, our proposed methods, both SELAR and SELAR with HintNet, with cross-validation denoted as ‘3-fold’ alleviates the meta-overfitting problem and provides a significant performance gain, whereas without meta cross-validation denoted as ‘1-fold’ the proposed method can underperform the vanilla training strategy. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_28",
"text": " We proposed meta-path prediction as self-supervised auxiliary tasks on heterogeneous graphs. Our experiments show that the representation learning on heterogeneous graphs can benefit from meta-path prediction which encourages to capture rich semantic information. The auxiliary tasks can be further improved by our proposed method SELAR, which automatically balances auxiliary tasks to assist the primary task via a form of meta-learning. The learnt weighting function identifies more beneficial meta-paths for the primary tasks. Within a task, the weighting function can adjust the cross entropy like the focal loss, which focuses on hard examples by decreasing weights for easy samples. Moreover, when it comes to challenging and remotely relevant auxiliary tasks, our HintNet helps the learner by correcting the learner’s answer dynamically and further improves the gain from auxiliary tasks. Our framework based on meta-learning provides learning strategies to balance primary task and auxiliary tasks, and easy/hard (and positive/negative) samples. Interesting future directions include applying our framework to other domains and various auxiliary tasks. Our code is publicly available at https://github.com/mlvlab/SELAR. ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
},
{
"id": "2007.08294_all_29",
"text": " Acknowledgements. This work was partly supported by NAVER Corp. and Institute for Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT): the Regional Strategic Industry Convergence Security Core Talent Training Business (No.2019-0-01343) and the ICT Creative Consilience Program (IITP-2020-0-01819). ",
"title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs"
}
] |
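The following is a minimal, illustrative sketch (not the authors' code) of the SELAR update scheme described in the contexts above, Eqs. (6)-(8): one differentiable gradient step on the weighted primary-plus-auxiliary loss approximates w*(Theta), the primary loss on meta-data is then differentiated with respect to Theta through that step, and finally the model is updated with the re-balanced losses. It uses JAX so the higher-order gradient is handled by autodiff; for brevity the weighting function V(.;Theta) is reduced to one softplus scalar per task (the paper conditions on a per-sample embedding of loss value, task one-hot, and label via a small network), the shared GNN is replaced by a linear scorer over random toy features, and all names, data, and hyperparameters are hypothetical.

import jax
import jax.numpy as jnp

def bce(logits, labels):
    # numerically stable binary cross-entropy on logits
    return jnp.mean(jnp.clip(logits, 0) - logits * labels +
                    jnp.log1p(jnp.exp(-jnp.abs(logits))))

def task_losses(w, tasks):
    # tasks: list of (features, binary labels); task 0 is the primary task,
    # the others stand in for meta-path prediction auxiliary tasks
    return jnp.stack([bce(x @ w, y) for x, y in tasks])

def weighted_loss(w, theta, tasks):
    weights = jax.nn.softplus(theta)          # simplified V(.; Theta): one weight per task
    return jnp.sum(weights * task_losses(w, tasks))

def inner_step(w, theta, tasks, alpha):
    # Eq. (6): one SGD step approximating w*(Theta), kept differentiable w.r.t. theta
    return w - alpha * jax.grad(weighted_loss)(w, theta, tasks)

def meta_objective(theta, w, tasks, meta_x, meta_y, alpha):
    w_hat = inner_step(w, theta, tasks, alpha)
    return bce(meta_x @ w_hat, meta_y)        # primary loss on meta-data

# toy random data: 1 primary + 2 auxiliary tasks over d-dimensional embeddings
key = jax.random.PRNGKey(0)
d, n = 16, 128
keys = jax.random.split(key, 8)
tasks = [(jax.random.normal(keys[i], (n, d)),
          jax.random.bernoulli(keys[i + 3], 0.5, (n,)).astype(jnp.float32))
         for i in range(3)]
meta_x = jax.random.normal(keys[6], (n, d))
meta_y = jax.random.bernoulli(keys[7], 0.5, (n,)).astype(jnp.float32)

w, theta = jnp.zeros(d), jnp.zeros(3)
alpha, beta = 0.1, 0.05
for step in range(200):
    # Eq. (7): update Theta through the one-step approximation of w*(Theta)
    theta = theta - beta * jax.grad(meta_objective)(theta, w, tasks, meta_x, meta_y, alpha)
    # Eq. (8): update the model with the freshly balanced losses
    w = w - alpha * jax.grad(weighted_loss)(w, theta, tasks)

The data here is random noise, so the run only exercises the update mechanics; the 3-fold meta cross-validation of Eq. (9) would correspond to averaging the theta gradient over several meta splits before applying it.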
What type of parameter would be considered a 'good' initial parameter?
|
A good initial parameter is one that gives good performance on many tasks after only a little fine-tuning [2]. This means that the loss functions of new tasks are sensitive to the parameters, so small gradient-based updates lead to large improvements in the task loss [23].
|
[
2,
23
] |
[
{
"id": "1703.03400_all_0",
"text": " Learning quickly is a hallmark of human intelligence, whether it involves recognizing objects from a few examples or quickly learning new skills after just minutes of experience. Our artificial agents should be able to do the same, learning and adapting quickly from only a few examples, and continuing to adapt as more data becomes available. This kind of fast and flexible learning is challenging, since the agent must integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data. Furthermore, the form of prior experience and new data will depend on the task. As such, for the greatest applicability, the mechanism for learning to learn (or meta-learning) should be general to the task and the form of computation required to complete the task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_1",
"text": " In this work, we propose a meta-learning algorithm that is general and model-agnostic, in the sense that it can be directly applied to any learning problem and model that is trained with a gradient descent procedure. Our focus is on deep neural network models, but we illustrate how our approach can easily handle different architectures and different problem settings, including classification, regression, and policy gradient reinforcement learning, with minimal modification. In meta-learning, the goal of the trained model is to quickly learn a new task from a small amount of new data, and the model is trained by the meta-learner to be able to learn on a large number of different tasks. The key idea underlying our method is to train the model’s initial parameters such that the model has maximal performance on a new task after the parameters have been updated through one or more gradient steps computed with a small amount of data from that new task. Unlike prior meta-learning methods that learn an update function or learning rule (Schmidhuber, 1987; Bengio et al., 1992; Andrychowicz et al., 2016; Ravi & Larochelle, 2017), our algorithm does not expand the number of learned parameters nor place constraints on the model architecture (e.g. by requiring a recurrent model (Santoro et al., 2016) or a Siamese network (Koch, 2015)), and it can be readily combined with fully connected, convolutional, or recurrent neural networks. It can also be used with a variety of loss functions, including differentiable supervised losses and non-differentiable reinforcement learning objectives. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_2",
"text": " The process of training a model’s parameters such that a few gradient steps, or even a single gradient step, can produce good results on a new task can be viewed from a feature learning standpoint as building an internal representation that is broadly suitable for many tasks. If the internal representation is suitable to many tasks, simply fine-tuning the parameters slightly (e.g. by primarily modifying the top layer weights in a feedforward model) can produce good results. In effect, our procedure optimizes for models that are easy and fast to fine-tune, allowing the adaptation to happen in the right space for fast learning. From a dynamical systems standpoint, our learning process can be viewed as maximizing the sensitivity of the loss functions of new tasks with respect to the parameters: when the sensitivity is high, small local changes to the parameters can lead to large improvements in the task loss. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_3",
"text": " The primary contribution of this work is a simple model- and task-agnostic algorithm for meta-learning that trains a model’s parameters such that a small number of gradient updates will lead to fast learning on a new task. We demonstrate the algorithm on different model types, including fully connected and convolutional networks, and in several distinct domains, including few-shot regression, image classification, and reinforcement learning. Our evaluation shows that our meta-learning algorithm compares favorably to state-of-the-art one-shot learning methods designed specifically for supervised classification, while using fewer parameters, but that it can also be readily applied to regression and can accelerate reinforcement learning in the presence of task variability, substantially outperforming direct pretraining as initialization. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_4",
"text": " We aim to train models that can achieve rapid adaptation, a problem setting that is often formalized as few-shot learning. In this section, we will define the problem setup and present the general form of our algorithm. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_5",
"text": " The goal of few-shot meta-learning is to train a model that can quickly adapt to a new task using only a few datapoints and training iterations. To accomplish this, the model or learner is trained during a meta-learning phase on a set of tasks, such that the trained model can quickly adapt to new tasks using only a small number of examples or trials. In effect, the meta-learning problem treats entire tasks as training examples. In this section, we formalize this meta-learning problem setting in a general manner, including brief examples of different learning domains. We will discuss two different learning domains in detail in Section 3. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_6",
"text": " We consider a model, denoted f𝑓f, that maps observations 𝐱𝐱\\mathbf{x} to outputs 𝐚𝐚\\mathbf{a}. During meta-learning, the model is trained to be able to adapt to a large or infinite number of tasks. Since we would like to apply our framework to a variety of learning problems, from classification to reinforcement learning, we introduce a generic notion of a learning task below. Formally, each task 𝒯={ℒ(𝐱1,𝐚1,…,𝐱H,𝐚H),q(𝐱1),q(𝐱t+1|𝐱t,𝐚t),H}𝒯ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻𝑞subscript𝐱1𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡𝐻\\mathcal{T}=\\{\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H}),q(\\mathbf{x}_{1}),q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}),H\\} consists of a loss function ℒℒ\\mathcal{L}, a distribution over initial observations q(𝐱1)𝑞subscript𝐱1q(\\mathbf{x}_{1}), a transition distribution q(𝐱t+1|𝐱t,𝐚t)𝑞conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and an episode length H𝐻H. In i.i.d. supervised learning problems, the length H=1𝐻1H\\!=\\!1. The model may generate samples of length H𝐻H by choosing an output 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each time t𝑡t. The loss ℒ(𝐱1,𝐚1,…,𝐱H,𝐚H)→ℝ→ℒsubscript𝐱1subscript𝐚1…subscript𝐱𝐻subscript𝐚𝐻ℝ\\mathcal{L}(\\mathbf{x}_{1},\\mathbf{a}_{1},\\dots,\\mathbf{x}_{H},\\mathbf{a}_{H})\\rightarrow\\mathbb{R}, provides task-specific feedback, which might be in the form of a misclassification loss or a cost function in a Markov decision process. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_7",
"text": " In our meta-learning scenario, we consider a distribution over tasks p(𝒯)𝑝𝒯p(\\mathcal{T}) that we want our model to be able to adapt to. In the K𝐾K-shot learning setting, the model is trained to learn a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}) from only K𝐾K samples drawn from qisubscript𝑞𝑖q_{i} and feedback ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} generated by 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. During meta-training, a task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} is sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}), the model is trained with K𝐾K samples and feedback from the corresponding loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, and then tested on new samples from 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. The model f𝑓f is then improved by considering how the test error on new data from qisubscript𝑞𝑖q_{i} changes with respect to the parameters. In effect, the test error on sampled tasks 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} serves as the training error of the meta-learning process. At the end of meta-training, new tasks are sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}), and meta-performance is measured by the model’s performance after learning from K𝐾K samples. Generally, tasks used for meta-testing are held out during meta-training. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_8",
"text": " In contrast to prior work, which has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016b) or feature embeddings that can be combined with nonparametric methods at test time (Vinyals et al., 2016; Koch, 2015), we propose a method that can learn the parameters of any standard model via meta-learning in such a way as to prepare that model for fast adaptation. The intuition behind this approach is that some internal representations are more transferrable than others. For example, a neural network might learn internal features that are broadly applicable to all tasks in p(𝒯)𝑝𝒯p(\\mathcal{T}), rather than a single individual task. How can we encourage the emergence of such general-purpose representations? We take an explicit approach to this problem: since the model will be fine-tuned using a gradient-based learning rule on a new task, we will aim to learn a model in such a way that this gradient-based learning rule can make rapid progress on new tasks drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}), without overfitting. In effect, we will aim to find model parameters that are sensitive to changes in the task, such that small changes in the parameters will produce large improvements on the loss function of any task drawn from p(𝒯)𝑝𝒯p(\\mathcal{T}), when altered in the direction of the gradient of that loss (see Figure 1). We make no assumption on the form of the model, other than to assume that it is parametrized by some parameter vector θ𝜃\\theta, and that the loss function is smooth enough in θ𝜃\\theta that we can use gradient-based learning techniques. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_9",
"text": " Formally, we consider a model represented by a parametrized function fθsubscript𝑓𝜃f_{\\theta} with parameters θ𝜃\\theta. When adapting to a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, the model’s parameters θ𝜃\\theta become θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime}. In our method, the updated parameter vector θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime} is computed using one or more gradient descent updates on task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. For example, when using one gradient update, θi′=θ−α∇θℒ𝒯i(fθ).superscriptsubscript𝜃𝑖′𝜃𝛼subscript∇𝜃subscriptℒsubscript𝒯𝑖subscript𝑓𝜃\\vspace{-0.15cm}\\theta_{i}^{\\prime}=\\theta-\\alpha\\nabla_{\\theta}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta}). The step size α𝛼\\alpha may be fixed as a hyperparameter or meta-learned. For simplicity of notation, we will consider one gradient update for the rest of this section, but using multiple gradient updates is a straightforward extension. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_10",
"text": " The model parameters are trained by optimizing for the performance of fθi′subscript𝑓superscriptsubscript𝜃𝑖′f_{\\theta_{i}^{\\prime}} with respect to θ𝜃\\theta across tasks sampled from p(𝒯)𝑝𝒯p(\\mathcal{T}). More concretely, the meta-objective is as follows: minθ∑𝒯i∼p(𝒯)ℒ𝒯i(fθi′)=∑𝒯i∼p(𝒯)ℒ𝒯i(fθ−α∇θℒ𝒯i(fθ))subscript𝜃subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓superscriptsubscript𝜃𝑖′subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓𝜃𝛼subscript∇𝜃subscriptℒsubscript𝒯𝑖subscript𝑓𝜃\\displaystyle\\vspace{-0.2cm}\\min_{\\theta}\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta_{i}^{\\prime}})=\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta-\\alpha\\nabla_{\\theta}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta})}) Note that the meta-optimization is performed over the model parameters θ𝜃\\theta, whereas the objective is computed using the updated model parameters θ′superscript𝜃′\\theta^{\\prime}. In effect, our proposed method aims to optimize the model parameters such that one or a small number of gradient steps on a new task will produce maximally effective behavior on that task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_11",
"text": " The meta-optimization across tasks is performed via stochastic gradient descent (SGD), such that the model parameters θ𝜃\\theta are updated as follows: θ←θ−β∇θ∑𝒯i∼p(𝒯)ℒ𝒯i(fθi′)←𝜃𝜃𝛽subscript∇𝜃subscriptsimilar-tosubscript𝒯𝑖𝑝𝒯subscriptℒsubscript𝒯𝑖subscript𝑓superscriptsubscript𝜃𝑖′\\vspace{-0.2cm}\\theta\\leftarrow\\theta-\\beta\\nabla_{\\theta}\\sum_{\\mathcal{T}_{i}\\sim p(\\mathcal{T})}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\theta_{i}^{\\prime}}) (1) where β𝛽\\beta is the meta step size. The full algorithm, in the general case, is outlined in Algorithm 1. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_12",
"text": " The MAML meta-gradient update involves a gradient through a gradient. Computationally, this requires an additional backward pass through f𝑓f to compute Hessian-vector products, which is supported by standard deep learning libraries such as TensorFlow (Abadi et al., 2016). In our experiments, we also include a comparison to dropping this backward pass and using a first-order approximation, which we discuss in Section 5.2. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_13",
"text": " In this section, we discuss specific instantiations of our meta-learning algorithm for supervised learning and reinforcement learning. The domains differ in the form of loss function and in how data is generated by the task and presented to the model, but the same basic adaptation mechanism can be applied in both cases. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_14",
"text": " Few-shot learning is well-studied in the domain of supervised tasks, where the goal is to learn a new function from only a few input/output pairs for that task, using prior data from similar tasks for meta-learning. For example, the goal might be to classify images of a Segway after seeing only one or a few examples of a Segway, with a model that has previously seen many other types of objects. Likewise, in few-shot regression, the goal is to predict the outputs of a continuous-valued function from only a few datapoints sampled from that function, after training on many functions with similar statistical properties. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_15",
"text": " To formalize the supervised regression and classification problems in the context of the meta-learning definitions in Section 2.1, we can define the horizon H=1𝐻1H=1 and drop the timestep subscript on 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t}, since the model accepts a single input and produces a single output, rather than a sequence of inputs and outputs. The task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} generates K𝐾K i.i.d. observations 𝐱𝐱\\mathbf{x} from qisubscript𝑞𝑖q_{i}, and the task loss is represented by the error between the model’s output for 𝐱𝐱\\mathbf{x} and the corresponding target values 𝐲𝐲\\mathbf{y} for that observation and task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_16",
"text": " Two common loss functions used for supervised classification and regression are cross-entropy and mean-squared error (MSE), which we will describe below; though, other supervised loss functions may be used as well. For regression tasks using mean-squared error, the loss takes the form: ℒ𝒯i(fϕ)=∑𝐱(j),𝐲(j)∼𝒯i∥fϕ(𝐱(j))−𝐲(j)∥22,subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖superscriptsubscriptdelimited-∥∥subscript𝑓italic-ϕsuperscript𝐱𝑗superscript𝐲𝑗22\\displaystyle\\vspace{-0.2cm}\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}}\\lVert f_{\\phi}(\\mathbf{x}^{(j)})-\\mathbf{y}^{(j)}\\rVert_{2}^{2}, (2) where 𝐱(j),𝐲(j)superscript𝐱𝑗superscript𝐲𝑗\\mathbf{x}^{(j)},\\mathbf{y}^{(j)} are an input/output pair sampled from task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. In K𝐾K-shot regression tasks, K𝐾K input/output pairs are provided for learning for each task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_17",
"text": " Similarly, for discrete classification tasks with a cross-entropy loss, the loss takes the form: ℒ𝒯i(fϕ)=∑𝐱(j),𝐲(j)∼𝒯isubscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscriptsimilar-tosuperscript𝐱𝑗superscript𝐲𝑗subscript𝒯𝑖\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=\\!\\!\\!\\!\\!\\!\\sum_{\\mathbf{x}^{(j)},\\mathbf{y}^{(j)}\\sim\\mathcal{T}_{i}} 𝐲(j)logfϕ(𝐱(j))superscript𝐲𝑗subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle\\mathbf{y}^{(j)}\\log f_{\\phi}(\\mathbf{x}^{(j)}) (3) +(1−𝐲(j))log(1−fϕ(𝐱(j)))1superscript𝐲𝑗1subscript𝑓italic-ϕsuperscript𝐱𝑗\\displaystyle+(1-\\mathbf{y}^{(j)})\\log(1-f_{\\phi}(\\mathbf{x}^{(j)})) According to the conventional terminology, K𝐾K-shot classification tasks use K𝐾K input/output pairs from each class, for a total of NK𝑁𝐾NK data points for N𝑁N-way classification. Given a distribution over tasks p(𝒯i)𝑝subscript𝒯𝑖p(\\mathcal{T}_{i}), these loss functions can be directly inserted into the equations in Section 2.2 to perform meta-learning, as detailed in Algorithm 2. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_18",
"text": " In reinforcement learning (RL), the goal of few-shot meta-learning is to enable an agent to quickly acquire a policy for a new test task using only a small amount of experience in the test setting. A new task might involve achieving a new goal or succeeding on a previously trained goal in a new environment. For example, an agent might learn to quickly figure out how to navigate mazes so that, when faced with a new maze, it can determine how to reliably reach the exit with only a few samples. In this section, we will discuss how MAML can be applied to meta-learning for RL. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_19",
"text": " Each RL task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} contains an initial state distribution qi(𝐱1)subscript𝑞𝑖subscript𝐱1q_{i}(\\mathbf{x}_{1}) and a transition distribution qi(𝐱t+1|𝐱t,𝐚t)subscript𝑞𝑖conditionalsubscript𝐱𝑡1subscript𝐱𝑡subscript𝐚𝑡q_{i}(\\mathbf{x}_{t+1}|\\mathbf{x}_{t},\\mathbf{a}_{t}), and the loss ℒ𝒯isubscriptℒsubscript𝒯𝑖\\mathcal{L}_{\\mathcal{T}_{i}} corresponds to the (negative) reward function R𝑅R. The entire task is therefore a Markov decision process (MDP) with horizon H𝐻H, where the learner is allowed to query a limited number of sample trajectories for few-shot learning. Any aspect of the MDP may change across tasks in p(𝒯)𝑝𝒯p(\\mathcal{T}). The model being learned, fθsubscript𝑓𝜃f_{\\theta}, is a policy that maps from states 𝐱tsubscript𝐱𝑡\\mathbf{x}_{t} to a distribution over actions 𝐚tsubscript𝐚𝑡\\mathbf{a}_{t} at each timestep t∈{1,…,H}𝑡1…𝐻t\\in\\{1,...,H\\}. The loss for task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i} and model fϕsubscript𝑓italic-ϕf_{\\phi} takes the form ℒ𝒯i(fϕ)=−𝔼𝐱t,𝐚t∼fϕ,q𝒯i(∑t=1HRi(𝐱t,𝐚t)).subscriptℒsubscript𝒯𝑖subscript𝑓italic-ϕsubscript𝔼formulae-sequencesimilar-tosubscript𝐱𝑡subscript𝐚𝑡subscript𝑓italic-ϕsubscript𝑞subscript𝒯𝑖delimited-()superscriptsubscript𝑡1𝐻subscript𝑅𝑖subscript𝐱𝑡subscript𝐚𝑡\\displaystyle\\mathcal{L}_{\\mathcal{T}_{i}}(f_{\\phi})=-\\mathbb{E}_{\\mathbf{x}_{t},\\mathbf{a}_{t}\\sim f_{\\phi},q_{\\mathcal{T}_{i}}}\\left(\\sum_{t=1}^{H}R_{i}(\\mathbf{x}_{t},\\mathbf{a}_{t})\\right). (4) In K𝐾K-shot reinforcement learning, K𝐾K rollouts from fθsubscript𝑓𝜃f_{\\theta} and task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}, (𝐱1,𝐚1,…𝐱H)subscript𝐱1subscript𝐚1…subscript𝐱𝐻(\\mathbf{x}_{1},\\mathbf{a}_{1},...\\mathbf{x}_{H}), and the corresponding rewards R(𝐱t,𝐚t)𝑅subscript𝐱𝑡subscript𝐚𝑡R(\\mathbf{x}_{t},\\mathbf{a}_{t}), may be used for adaptation on a new task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Since the expected reward is generally not differentiable due to unknown dynamics, we use policy gradient methods to estimate the gradient both for the model gradient update(s) and the meta-optimization. Since policy gradients are an on-policy algorithm, each additional gradient step during the adaptation of fθsubscript𝑓𝜃f_{\\theta} requires new samples from the current policy fθi′subscript𝑓subscript𝜃superscript𝑖′f_{\\theta_{i^{\\prime}}}. We detail the algorithm in Algorithm 3. This algorithm has the same structure as Algorithm 2, with the principal difference being that steps 5 and 8 require sampling trajectories from the environment corresponding to task 𝒯isubscript𝒯𝑖\\mathcal{T}_{i}. Practical implementations of this method may also use a variety of improvements recently proposed for policy gradient algorithms, including state or action-dependent baselines and trust regions (Schulman et al., 2015). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_20",
"text": " The method that we propose in this paper addresses the general problem of meta-learning (Thrun & Pratt, 1998; Schmidhuber, 1987; Naik & Mammone, 1992), which includes few-shot learning. A popular approach for meta-learning is to train a meta-learner that learns how to update the parameters of the learner’s model (Bengio et al., 1992; Schmidhuber, 1992; Bengio et al., 1990). This approach has been applied to learning to optimize deep networks (Hochreiter et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2017), as well as for learning dynamically changing recurrent networks (Ha et al., 2017). One recent approach learns both the weight initialization and the optimizer, for few-shot image recognition (Ravi & Larochelle, 2017). Unlike these methods, the MAML learner’s weights are updated using the gradient, rather than a learned update; our method does not introduce additional parameters for meta-learning nor require a particular learner architecture. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_21",
"text": " Few-shot learning methods have also been developed for specific tasks such as generative modeling (Edwards & Storkey, 2017; Rezende et al., 2016) and image recognition (Vinyals et al., 2016). One successful approach for few-shot classification is to learn to compare new examples in a learned metric space using e.g. Siamese networks (Koch, 2015) or recurrence with attention mechanisms (Vinyals et al., 2016; Shyam et al., 2017; Snell et al., 2017). These approaches have generated some of the most successful results, but are difficult to directly extend to other problems, such as reinforcement learning. Our method, in contrast, is agnostic to the form of the model and to the particular learning task. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_22",
"text": " Another approach to meta-learning is to train memory-augmented models on many tasks, where the recurrent learner is trained to adapt to new tasks as it is rolled out. Such networks have been applied to few-shot image recognition (Santoro et al., 2016; Munkhdalai & Yu, 2017) and learning “fast” reinforcement learning agents (Duan et al., 2016b; Wang et al., 2016). Our experiments show that our method outperforms the recurrent approach on few-shot classification. Furthermore, unlike these methods, our approach simply provides a good weight initialization and uses the same gradient descent update for both the learner and meta-update. As a result, it is straightforward to finetune the learner for additional gradient steps. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_23",
"text": " Our approach is also related to methods for initialization of deep networks. In computer vision, models pretrained on large-scale image classification have been shown to learn effective features for a range of problems (Donahue et al., 2014). In contrast, our method explicitly optimizes the model for fast adaptability, allowing it to adapt to new tasks with only a few examples. Our method can also be viewed as explicitly maximizing sensitivity of new task losses to the model parameters. A number of prior works have explored sensitivity in deep networks, often in the context of initialization (Saxe et al., 2014; Kirkpatrick et al., 2016). Most of these works have considered good random initializations, though a number of papers have addressed data-dependent initializers (Krähenbühl et al., 2016; Salimans & Kingma, 2016), including learned initializations (Husken & Goerick, 2000; Maclaurin et al., 2015). In contrast, our method explicitly trains the parameters for sensitivity on a given task distribution, allowing for extremely efficient adaptation for problems such as K𝐾K-shot learning and rapid reinforcement learning in only one or a few gradient steps. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_24",
"text": " The goal of our experimental evaluation is to answer the following questions: (1) Can MAML enable fast learning of new tasks? (2) Can MAML be used for meta-learning in multiple different domains, including supervised regression, classification, and reinforcement learning? (3) Can a model learned with MAML continue to improve with additional gradient updates and/or examples? ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_25",
"text": " All of the meta-learning problems that we consider require some amount of adaptation to new tasks at test-time. When possible, we compare our results to an oracle that receives the identity of the task (which is a problem-dependent representation) as an additional input, as an upper bound on the performance of the model. All of the experiments were performed using TensorFlow (Abadi et al., 2016), which allows for automatic differentiation through the gradient update(s) during meta-learning. The code is available online111Code for the regression and supervised experiments is at github.com/cbfinn/maml and code for the RL experiments is at github.com/cbfinn/maml_rl. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_26",
"text": " We start with a simple regression problem that illustrates the basic principles of MAML. Each task involves regressing from the input to the output of a sine wave, where the amplitude and phase of the sinusoid are varied between tasks. Thus, p(𝒯)𝑝𝒯p(\\mathcal{T}) is continuous, where the amplitude varies within (0.1,5.0)0.15.0(0.1,5.0) and the phase varies within (0,π)0𝜋(0,\\pi), and the input and output both have a dimensionality of 111. During training and testing, datapoints 𝐱𝐱\\mathbf{x} are sampled uniformly from (−5.0,5.0)5.05.0(-5.0,5.0). The loss is the mean-squared error between the prediction f(𝐱)𝑓𝐱f(\\mathbf{x}) and true value. The regressor is a neural network model with 222 hidden layers of size 404040 with ReLU nonlinearities. When training with MAML, we use one gradient update with K=10𝐾10K=10 examples with a fixed step size α=0.01𝛼0.01\\alpha=0.01, and use Adam as the meta-optimizer (Kingma & Ba, 2015). The baselines are likewise trained with Adam. To evaluate performance, we fine-tune a single meta-learned model on varying numbers of K𝐾K examples, and compare performance to two baselines: (a) pretraining on all of the tasks, which entails training a network to regress to random sinusoid functions and then, at test-time, fine-tuning with gradient descent on the K𝐾K provided points, using an automatically tuned step size, and (b) an oracle which receives the true amplitude and phase as input. In Appendix C, we show comparisons to additional multi-task and adaptation methods. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_27",
"text": " We evaluate performance by fine-tuning the model learned by MAML and the pretrained model on K={5,10,20}𝐾51020K=\\{5,10,20\\} datapoints. During fine-tuning, each gradient step is computed using the same K𝐾K datapoints. The qualitative results, shown in Figure 2 and further expanded on in Appendix B show that the learned model is able to quickly adapt with only 555 datapoints, shown as purple triangles, whereas the model that is pretrained using standard supervised learning on all tasks is unable to adequately adapt with so few datapoints without catastrophic overfitting. Crucially, when the K𝐾K datapoints are all in one half of the input range, the model trained with MAML can still infer the amplitude and phase in the other half of the range, demonstrating that the MAML trained model f𝑓f has learned to model the periodic nature of the sine wave. Furthermore, we observe both in the qualitative and quantitative results (Figure 3 and Appendix B) that the model learned with MAML continues to improve with additional gradient steps, despite being trained for maximal performance after one gradient step. This improvement suggests that MAML optimizes the parameters such that they lie in a region that is amenable to fast adaptation and is sensitive to loss functions from p(𝒯)𝑝𝒯p(\\mathcal{T}), as discussed in Section 2.2, rather than overfitting to parameters θ𝜃\\theta that only improve after one step. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_28",
"text": " To evaluate MAML in comparison to prior meta-learning and few-shot learning algorithms, we applied our method to few-shot image recognition on the Omniglot (Lake et al., 2011) and MiniImagenet datasets. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. Each instance was drawn by a different person. The MiniImagenet dataset was proposed by Ravi & Larochelle (2017), and involves 64 training classes, 12 validation classes, and 24 test classes. The Omniglot and MiniImagenet image recognition tasks are the most common recently used few-shot learning benchmarks (Vinyals et al., 2016; Santoro et al., 2016; Ravi & Larochelle, 2017). We follow the experimental protocol proposed by Vinyals et al. (2016), which involves fast learning of N𝑁N-way classification with 1 or 5 shots. The problem of N𝑁N-way classification is set up as follows: select N𝑁N unseen classes, provide the model with K𝐾K different instances of each of the N𝑁N classes, and evaluate the model’s ability to classify new instances within the N𝑁N classes. For Omniglot, we randomly select 120012001200 characters for training, irrespective of alphabet, and use the remaining for testing. The Omniglot dataset is augmented with rotations by multiples of 909090 degrees, as proposed by Santoro et al. (2016). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_29",
"text": " Our model follows the same architecture as the embedding function used by Vinyals et al. (2016), which has 4 modules with a 3×3333\\times 3 convolutions and 646464 filters, followed by batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2×2222\\times 2 max-pooling. The Omniglot images are downsampled to 28×28282828\\times 28, so the dimensionality of the last hidden layer is 646464. As in the baseline classifier used by Vinyals et al. (2016), the last layer is fed into a softmax. For Omniglot, we used strided convolutions instead of max-pooling. For MiniImagenet, we used 323232 filters per layer to reduce overfitting, as done by (Ravi & Larochelle, 2017). In order to also provide a fair comparison against memory-augmented neural networks (Santoro et al., 2016) and to test the flexibility of MAML, we also provide results for a non-convolutional network. For this, we use a network with 444 hidden layers with sizes 256256256, 128128128, 646464, 646464, each including batch normalization and ReLU nonlinearities, followed by a linear layer and softmax. For all models, the loss function is the cross-entropy error between the predicted and true class. Additional hyperparameter details are included in Appendix A.1. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_30",
"text": " We present the results in Table 1. The convolutional model learned by MAML compares well to the state-of-the-art results on this task, narrowly outperforming the prior methods. Some of these existing methods, such as matching networks, Siamese networks, and memory models are designed with few-shot classification in mind, and are not readily applicable to domains such as reinforcement learning. Additionally, the model learned with MAML uses fewer overall parameters compared to matching networks and the meta-learner LSTM, since the algorithm does not introduce any additional parameters beyond the weights of the classifier itself. Compared to these prior methods, memory-augmented neural networks (Santoro et al., 2016) specifically, and recurrent meta-learning models in general, represent a more broadly applicable class of methods that, like MAML, can be used for other tasks such as reinforcement learning (Duan et al., 2016b; Wang et al., 2016). However, as shown in the comparison, MAML significantly outperforms memory-augmented networks and the meta-learner LSTM on 5-way Omniglot and MiniImagenet classification, both in the 111-shot and 555-shot case. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_31",
"text": " A significant computational expense in MAML comes from the use of second derivatives when backpropagating the meta-gradient through the gradient operator in the meta-objective (see Equation (1)). On MiniImagenet, we show a comparison to a first-order approximation of MAML, where these second derivatives are omitted. Note that the resulting method still computes the meta-gradient at the post-update parameter values θi′superscriptsubscript𝜃𝑖′\\theta_{i}^{\\prime}, which provides for effective meta-learning. Surprisingly however, the performance of this method is nearly the same as that obtained with full second derivatives, suggesting that most of the improvement in MAML comes from the gradients of the objective at the post-update parameter values, rather than the second order updates from differentiating through the gradient update. Past work has observed that ReLU neural networks are locally almost linear (Goodfellow et al., 2015), which suggests that second derivatives may be close to zero in most cases, partially explaining the good performance of the first-order approximation. This approximation removes the need for computing Hessian-vector products in an additional backward pass, which we found led to roughly 33%percent3333\\% speed-up in network computation. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_32",
"text": " To evaluate MAML on reinforcement learning problems, we constructed several sets of tasks based off of the simulated continuous control environments in the rllab benchmark suite (Duan et al., 2016a). We discuss the individual domains below. In all of the domains, the model trained by MAML is a neural network policy with two hidden layers of size 100100100, with ReLU nonlinearities. The gradient updates are computed using vanilla policy gradient (REINFORCE) (Williams, 1992), and we use trust-region policy optimization (TRPO) as the meta-optimizer (Schulman et al., 2015). In order to avoid computing third derivatives, we use finite differences to compute the Hessian-vector products for TRPO. For both learning and meta-learning updates, we use the standard linear feature baseline proposed by Duan et al. (2016a), which is fitted separately at each iteration for each sampled task in the batch. We compare to three baseline models: (a) pretraining one policy on all of the tasks and then fine-tuning, (b) training a policy from randomly initialized weights, and (c) an oracle policy which receives the parameters of the task as input, which for the tasks below corresponds to a goal position, goal direction, or goal velocity for the agent. The baseline models of (a) and (b) are fine-tuned with gradient descent with a manually tuned step size. Videos of the learned policies can be viewed at sites.google.com/view/maml ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_33",
"text": " 2D Navigation. In our first meta-RL experiment, we study a set of tasks where a point agent must move to different goal positions in 2D, randomly chosen for each task within a unit square. The observation is the current 2D position, and actions correspond to velocity commands clipped to be in the range (−0.1,0.1)0.10.1(-0.1,0.1). The reward is the negative squared distance to the goal, and episodes terminate when the agent is within 0.010.010.01 of the goal or at the horizon of H=100𝐻100H=100. The policy was trained with MAML to maximize performance after 111 policy gradient update using 202020 trajectories. Additional hyperparameter settings for this problem and the following RL problems are in Appendix A.2. In our evaluation, we compare adaptation to a new task with up to 4 gradient updates, each with 404040 samples. The results in Figure 4 show the adaptation performance of models that are initialized with MAML, conventional pretraining on the same set of tasks, random initialization, and an oracle policy that receives the goal position as input. The results show that MAML can learn a model that adapts much more quickly in a single gradient update, and furthermore continues to improve with additional updates. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_34",
"text": " Locomotion. To study how well MAML can scale to more complex deep RL problems, we also study adaptation on high-dimensional locomotion tasks with the MuJoCo simulator (Todorov et al., 2012). The tasks require two simulated robots – a planar cheetah and a 3D quadruped (the “ant”) – to run in a particular direction or at a particular velocity. In the goal velocity experiments, the reward is the negative absolute value between the current velocity of the agent and a goal, which is chosen uniformly at random between 0.00.00.0 and 2.02.02.0 for the cheetah and between 0.00.00.0 and 3.03.03.0 for the ant. In the goal direction experiments, the reward is the magnitude of the velocity in either the forward or backward direction, chosen at random for each task in p(𝒯)𝑝𝒯p(\\mathcal{T}). The horizon is H=200𝐻200H=200, with 202020 rollouts per gradient step for all problems except the ant forward/backward task, which used 404040 rollouts per step. The results in Figure 5 show that MAML learns a model that can quickly adapt its velocity and direction with even just a single gradient update, and continues to improve with more gradient steps. The results also show that, on these challenging tasks, the MAML initialization substantially outperforms random initialization and pretraining. In fact, pretraining is in some cases worse than random initialization, a fact observed in prior RL work (Parisotto et al., 2016). ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_35",
"text": " We introduced a meta-learning method based on learning easily adaptable model parameters through gradient descent. Our approach has a number of benefits. It is simple and does not introduce any learned parameters for meta-learning. It can be combined with any model representation that is amenable to gradient-based training, and any differentiable objective, including classification, regression, and reinforcement learning. Lastly, since our method merely produces a weight initialization, adaptation can be performed with any amount of data and any number of gradient steps, though we demonstrate state-of-the-art results on classification with only one or five examples per class. We also show that our method can adapt an RL agent using policy gradients and a very modest amount of experience. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
},
{
"id": "1703.03400_all_36",
"text": " Reusing knowledge from past tasks may be a crucial ingredient in making high-capacity scalable models, such as deep neural networks, amenable to fast training with small datasets. We believe that this work is one step toward a simple and general-purpose meta-learning technique that can be applied to any problem and any model. Further research in this area can make multitask initialization a standard ingredient in deep learning and reinforcement learning. ",
"title": "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks"
}
] |
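The 2D navigation setup quoted above ("1703.03400_all_33") is concrete enough to sketch directly. Below is a minimal, illustrative Python implementation of that task, not the authors' code: the start position at the origin, the unit square being [0, 1]^2, and the use of Euclidean distance for the termination test are assumptions, while the clipped velocity actions, negative squared-distance reward, and horizon H = 100 follow the quoted text.

```python
import numpy as np

class PointNavigation2D:
    """Point-agent 2D navigation task: observe the 2D position, act with
    velocity commands clipped to (-0.1, 0.1), receive the negative squared
    distance to a per-task goal, and terminate within 0.01 of the goal or
    at horizon H = 100."""

    def __init__(self, horizon=100, seed=None):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)
        self.goal = None
        self.pos = None
        self.t = 0

    def sample_task(self):
        # Each task is a goal drawn uniformly from the unit square (assumed [0, 1]^2).
        self.goal = self.rng.uniform(0.0, 1.0, size=2)
        return self.goal

    def reset(self):
        self.pos = np.zeros(2)  # assumed start at the origin
        self.t = 0
        return self.pos.copy()

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=float), -0.1, 0.1)
        self.pos = self.pos + action
        self.t += 1
        sq_dist = float(np.sum((self.pos - self.goal) ** 2))
        reward = -sq_dist
        done = (np.sqrt(sq_dist) < 0.01) or (self.t >= self.horizon)
        return self.pos.copy(), reward, done
```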
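The concluding MAML passage above describes the method as learning an initialization whose post-adaptation loss is optimized by gradient descent. The toy sketch below shows only that two-level structure, using an analytically differentiable quadratic task loss (goal-matching in R^2) so no autodiff library is needed; it is not the policy-gradient variant used in the RL experiments, and the hyperparameter values are arbitrary placeholders.

```python
import numpy as np

def maml_toy(num_meta_iters=500, tasks_per_batch=20, alpha=0.1, beta=0.05, seed=0):
    """Two-level MAML-style update on a toy objective. Each 'task' is a goal g
    in R^2 with task loss L_g(theta) = 0.5 * ||theta - g||^2, so both the
    inner-loop gradient and the meta-gradient have closed forms."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=2)                        # meta-learned initialization
    for _ in range(num_meta_iters):
        goals = rng.uniform(0.0, 1.0, size=(tasks_per_batch, 2))
        meta_grad = np.zeros_like(theta)
        for g in goals:
            inner_grad = theta - g                    # grad of 0.5 * ||theta - g||^2
            theta_prime = theta - alpha * inner_grad  # one inner adaptation step
            # For this loss, d L_g(theta') / d theta = (1 - alpha) * (theta' - g).
            meta_grad += (1.0 - alpha) * (theta_prime - g)
        theta -= beta * meta_grad / tasks_per_batch   # outer (meta) update on the init
    return theta
```

The key point the sketch illustrates is that the outer update differentiates the loss measured after the inner adaptation step with respect to the initialization itself.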
How well does RoBERTa perform at language modeling on Wiki-40B?
|
RoBERTa performs at about 26 BPC on the MLM task with the Wiki-40B dataset [16]. RoBERTa performs better than BERT [17].
|
[
16,
17
] |
[
{
"id": "2204.08110_all_0",
"text": " Pretrained language models have become an integral part of NLP systems. They come in two flavors: monolingual, where the model is trained on text from a single language, and multilingual, where the model is jointly trained on data from many different languages. Monolingual pretrained models are generally applied to tasks in the same language, whereas multilingual ones are used for cross-lingual tasks or transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_1",
"text": " Recent work has claimed that monolingual pretrained models are also surprisingly good at transferring between languages, despite ostensibly having never seen the target language before (Gogoulou et al., 2021; Li et al., 2021, inter alia). However, because of the large scale of pretraining data and because many pretraining corpora are not publicly available, it is currently unknown how much foreign language data exists in monolingual pretraining corpora. In this paper, we show that (1) these data are almost certainly contaminated with very small percentages of text from other languages and that (2) cross-lingual transfer is possible from such data leakage in the pretraining corpus. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_2",
"text": " More specifically, we quantify how multilingual English pretrained models are in two steps. First, we analyze common English pretraining corpora with a large-scale automatic evaluation to estimate their language composition, as well as a smaller-scale manual analysis. Second, we perform experiments across fifty languages on masked language modeling and part-of-speech (POS) tagging to measure how well the models trained on these pretraining corpora perform outside of English. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_3",
"text": " Our analysis finds that these corpora include very small percentages that amount to overall significant amounts of non-English text (Figure 1), particularly those derived from web-crawled data. Furthermore, the models trained on this data perform surprisingly well on other languages; this transfer is strongly correlated with the amount of target language data seen during pretraining. Notably, we find that the English T5 outperforms mBERT on POS tagging in multiple languages with no finetuning. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_4",
"text": " Overall, these results indicate that the considered models are actually multilingual and that their ability to transfer across languages is not zero-shot, despite what has been recently claimed. Given the effort required to fully remove all non-English data, we question whether it is practically possible to train truly monolingual models at scale. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_5",
"text": " We first measure how much non-English text exists in commonly used English pretraining corpora with two analyses: an automatic language identification to estimate the amount of foreign language data in these corpora, and a manual qualitative analysis of the text classified as non-English. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_6",
"text": " We consider the following pretraining datasets: English Wikipedia (11.8GB); BookCorpus (Zhu et al. 2015, 4.2GB); Stories (Trinh and Le 2018, 31GB); OpenWebText (Gokaslan and Cohen 2019, 38GB), which is an open-source version of WebText Radford et al. (2019); CC-NEWS (Liu et al. 2019, 76 GB); and C4.En (Raffel et al. 2020, 305GB), as provided by Dodge et al. (2021). We use the versions of Wikipedia, BookCorpus, and CC-NEWS used to pretrain RoBERTa. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_7",
"text": " We use the FastText language identification model Joulin et al. (2017) to label every line in each corpus and keep lines as non-English if they score above a set confidence threshold (0.6). Due to the large size of C4, we subsample the first 50M examples (or 14%); we classify the entirety of all other datasets. Since language detection is imperfect, particularly for low-resource languages Caswell et al. (2021), we present the results of this analysis as an estimate of the non-English data in each dataset and perform a qualitative analysis of potential errors in the following section. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_8",
"text": " A summary of the language identification experiments is presented in Figure 1.111Full results of this evaluation are detailed in Appendix C. We see that every corpus contains notable quantities of non-English data, with our estimates ranging between 300k to 406M tokens. An obvious factor that affects the amount of non-English data in each corpus is the overall size of the dataset; however, even when controlling for size by looking at the percentage of non-English data, we still see that the smaller corpora (Wikipedia, BookCorpus, and Stories) have relatively less non-English data. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_9",
"text": " Indeed, a major factor of language leakage is the method in which the data was collected: the datasets derived from web crawls contain higher percentages of non-English text (OpenWebText andCCNews). This is true even for C4, where the dataset was filtered with a classifier to exclude non-English text Raffel et al. (2020). Since automatic methods for language identification are imperfect, the datasets with more manual filtering (such as Wikipedia, which has human editors curating its content) are less prone to non-English data than those relying on classifiers. Due to these challenges, it is likely impossible to fully remove non-English text from a web-crawled dataset at scale. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_10",
"text": " We also see that non-English text makes up small percentages of the overall data, though this still leads to millions of tokens in large datasets. The largest individual languages after English only make up 0.01%, 0.15%, and 0.05% of the BERT, RoBERTa, and T5 training data, respectively. Multilingual pretraining work has shown that models generalize to new languages from varying amounts of data Delvin (2019); Lample and Conneau (2019); Conneau et al. (2020); however, these approaches intentionally select data across languages, and most upsample low-resource languages during training. Without these considerations, it is an open question how well the models trained on these relatively small amounts of non-English data generalize. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_11",
"text": " We also perform a closer analysis on a random subset (200 per corpus) of non-English lines predicted by the language classifier (Table 2.1). Each example is manually coded into one of six categories. The first set covers various kinds of foreign language data: NE, where the line contains only non-English language text; BiL, or bilingual, where the line contains both English and non-English text; Trans., in which the English and non-English data that are translations of each other; and Ent., where the line is primarily English but contains non-English entities. The last two codes pertain to errors made by the language classifier: En., where the line only contains English text, and XX, which refers to lines that contain no natural language. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_12",
"text": " The majority of lines across datasets consist only of non-English text. The next most common type of non-English data is BiL; this contains many subtypes of data, such as codeswitching and foreign language dialogue within English text. These datasets also include parallel data at both the sentence- and word-level.222e.g., ”大学 【だい・がく】– college”, OpenWebText We note that all observed translations are between English and another language. Finally, some of the examples classified as non-English are actually English texts with non-English phrases. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_13",
"text": " Our analysis also shows that the language classifier performs worse on the non-web crawled data. For example, it misclassified a quarter of the sampled lines from Stories as non-English when they in fact only contain English text; many of these lines stem from snippets of dialogue in the dataset. We generally observe that lines coded as En tend to be shorter than the correctly labeled lines and often contain non-standard English. The language classifier also struggles to handle noisy lines, for which it has no appropriate language label. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_14",
"text": " We now ask: how well do models pretrained on these putatively English corpora perform on non-English tasks? While the English data is more multilingual than previously thought, there are many differences between monolingual and multilingual pretraining; non-English data are often tokenized into more subword units333For example, the Basque UD treebank requires on average 1.78, 2.59, and 2.66 tokens per word to be encoded by XLMR, RoBERTa, and BERT, respectively. and are much less frequently observed during monolingual training. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_15",
"text": " We evaluate popular English pretrained models on tasks in more than 50 languages: (masked) language modeling, POS probing, and finetuned POS tagging. We compare the performance of monolingual BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), and T5 Raffel et al. (2020) against multilingual mBERT Delvin (2019) and XLM-R Conneau et al. (2020). We report average performance across five runs with different random seeds for the POS evaluations. The full results and all languages can be found in Appendix D. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_16",
"text": " We first measure the perplexity of English pretrained MLMs in other languages. We use Wiki-40B, a multilingual language modeling dataset that covers 41 languages Guo et al. (2020). Following the Wiki-40B paper, we report bits per character (BPC) to allow comparison between models with different tokenizations of the text. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_17",
"text": " We find that both BERT models perform notably worse on modeling other languages; however, RoBERTa, reduces the gap with the multilingual models from 2.51 BPC to 0.87 BPC (Figure 2(a)). This finding is consistent with Tran (2020), who also found RoBERTa transfers well cross-lingually. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_18",
"text": " Next, we evaluate how well monolingual English models perform on non-English downstream tasks, using part-of-speech (POS) tagging as a case study. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_19",
"text": " We first consider the performance of the encoders when probed for POS knowledge (Figure 2(b)).444For T5, this means that we evaluate the output of the encoder and discard the decoder. Unsurprisingly, on average all of the English models underperform the multilingual models. Similar to MLM, we find that RoBERTa performs better than BERT when probed for POS features on other languages; surprisingly, it also strongly outperforms T5, despite C4 containing more absolute non-English data than the RoBERTa corpus. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_20",
"text": " This difference is likely due to two factors. First, in terms of relative percentages, RoBERTa is exposed to more non-English text than T5 (0.78% compared to only 0.22%). Secondly, RoBERTa’s subword vocabulary is robust to unexpected inputs and does not substitute an UNK token any input tokens; in contrast, T5 and BERT have high rates of UNK tokens for some non-Latin languages (Appendix B).555UNK tokens refer to placeholder tokens used when the model receives an input not covered by its vocabulary. However, for many high-resource languages the English models perform competitively, with T5 outperforming mBERT on German and Portuguese, among others. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_21",
"text": " To test if the effects of foreign language data carry through after finetuning, we also finetune a subset of the models (BERTbase, RoBERTabase, mBERT, XLMRbase) for non-English POS tagging (Figure 2(c)). After finetuning, the gap between the mono- and multilingual models is much smaller: RoBERTa only averages 2.65 points worse than XLM-R, compared to 12.5 points when probing. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_22",
"text": " We then investigate the correlation between potential transfer causes and model performance (Table 2). Specifically, we consider the quantity of target language data found in the model’s pretraining corpus and the language similarity to English as potential causes of cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_23",
"text": " We find that across tasks, RoBERTa task performance is most strongly correlated with the amount of target language data seen during pretraining. BERT and T5 task performance are less correlated with observed pretrained data, likely due to tokenization artifacts (Appendix B). Indeed, when we control for languages not written with Latin script on T5, the correlation between performance and the amount of target pretraining data increases to ρ=𝜌absent\\rho= 0.313. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_24",
"text": " We also consider the effect of language similarity on task performance, which is often hypothesized to facilitate cross-lingual transfer. We use the syntactic distance of languages calculated by Malaviya et al. (2017); more similar languages score lower. However, we generally find that this is less correlated with performance than the quantity of target text, particularly for RoBERTa. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_25",
"text": " In this paper, we demonstrate that English pretrained models are exposed to a considerable amount of non-English data during pretraining, particularly in the case of more recent models that are trained on larger corpora derived from web crawls. We also find that this non-English text acts as a significant source of signal for cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_26",
"text": " Other recent work has focused on documenting the composition of pretraining corpora Dodge et al. (2021); Gururangan et al. (2022). Caswell et al. (2021) manually audit a variety of multilingual datasets, finding data quality issues that are worse for low-resource languages and, similarly to our work, that texts for many languages are misclassified. In contrast, our focus is on the presence of foreign language data in primarily English corpora. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_27",
"text": " Prior work has also shown the ability of monolingual models to transfer to other languages across a wide range of tasks Gogoulou et al. (2021); Li et al. (2021); Tran (2020); Artetxe et al. (2020); Chi et al. (2020), but these works do not consider the effect of foreign language data leakage as a source of signal. Notably, de Souza et al. (2021) mention the presence of foreign language data in their corpora but assume the small amounts observed will not affect model performance. However, our findings demonstrate that the amount of foreign language data directly correlates with cross-lingual transfer. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_28",
"text": " An obvious follow-up to our findings would be to retrain the models with text that is verified to only contain English data; this would confirm the effect the leaked non-English data has on the models. We reiterate that the standard method for filtering these datasets, automatic language classifiers, is imperfect. This, and the infeasibility of manual filtering due to the scale of the data, means that controlling for the language the model is pretrained on is nearly impossible. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_29",
"text": " However, the presence of foreign language data in pretraining corpora is not inherently problematic. Models trained on these datasets perform exceedingly well on their target languages and generalize to other languages much better than expected. Rather, it is important to remember that these models are not performing zero-shot transfer when used in other languages, given the scale and data with which they were pretrained. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_30",
"text": " Our work has a number of limitations. First, we measure the quantities of non-English data using a language classifier. The amounts of foreign language data we report are estimates for each dataset, as the classifier likely misclassified some examples. We manually audit the types of mistakes made by the language classifier in Section 2. Additionally, we evaluate downstream performance via POS tagging, and it is possible that the models would exhibit different behavior on other NLP tasks. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
},
{
"id": "2204.08110_all_31",
"text": " We also only consider the effect of foreign language contamination for English pretrained models. It is unclear to what extent this phenomenon affects monolingual models for other languages; however, since many of the resources evaluated in this work are also used to pretrain non-English monolingual models (e.g., Wikipedia), similar effects would likely be observed. ",
"title": "Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models"
}
] |
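The language-identification filter described above ("2204.08110_all_7") — label every line with FastText and keep it as non-English only above a 0.6 confidence threshold — can be sketched as follows. This is an illustrative reconstruction, not the authors' script; it assumes the publicly released lid.176.bin FastText language-ID model is available locally, and that per-line filtering is sufficient.

```python
import fasttext  # pip install fasttext; uses the released lid.176.bin LID model (assumption)

lid = fasttext.load_model("lid.176.bin")  # local path to the FastText LID model

def non_english_lines(lines, threshold=0.6):
    """Yield (line, language, confidence) for lines whose top predicted language
    is not English and whose confidence exceeds the threshold, mirroring the
    filtering rule described in the passage above."""
    for line in lines:
        text = line.strip().replace("\n", " ")  # fasttext.predict rejects newlines
        if not text:
            continue
        labels, probs = lid.predict(text, k=1)
        lang = labels[0].replace("__label__", "")
        conf = float(probs[0])
        if lang != "en" and conf >= threshold:
            yield line, lang, conf
```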
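The Wiki-40B evaluation above reports bits per character (BPC) so that models with different tokenizations remain comparable. The helper below shows the standard conversion from summed token log-likelihoods to BPC; how the per-token log-probabilities are obtained for a masked language model (e.g., via a pseudo-likelihood over masked positions) is model-specific and not specified in the quoted text, so that part is left abstract.

```python
import math

def bits_per_character(token_log_probs, text):
    """Convert per-token natural-log probabilities of `text` into bits per
    character (BPC): total negative log-likelihood in bits divided by the
    number of characters. Tokenization-agnostic by construction."""
    total_nats = -sum(token_log_probs)        # negative log-likelihood in nats
    total_bits = total_nats / math.log(2.0)   # nats -> bits
    return total_bits / max(len(text), 1)     # normalize by character count
```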
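The correlation analysis above reports a rank correlation ρ between the amount of target-language pretraining data and downstream performance. A Spearman correlation is one reasonable reading of ρ (the exact statistic used is an assumption here), and the numbers below are purely hypothetical placeholders, not values from the paper.

```python
from scipy.stats import spearmanr

# Hypothetical inputs: per-language counts of target-language tokens seen during
# pretraining and the corresponding downstream scores (e.g., POS accuracy).
pretrain_tokens = [1.2e6, 3.4e5, 8.9e4, 5.0e3, 2.1e6]
task_scores     = [81.3,  74.5,  60.2,  41.7,  85.0]

rho, pvalue = spearmanr(pretrain_tokens, task_scores)
print(f"Spearman rho = {rho:.3f} (p = {pvalue:.3g})")
```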
Why did the authors use multi-scale feature maps for detection?
|
The authors used multi-scale feature maps for detection because they allow predictions of detections at multiple scales [5].
|
[
5
] |
[
{
"id": "1512.02325_all_0",
"text": " Current state-of-the-art object detection systems are variants of the following approach: hypothesize bounding boxes, resample pixels or features for each box, and apply a high-quality classifier. This pipeline has prevailed on detection benchmarks since the Selective Search work through the current leading results on PASCAL VOC, COCO, and ILSVRC detection all based on Faster R-CNN albeit with deeper features such as . While accurate, these approaches have been too computationally intensive for embedded systems and, even with high-end hardware, too slow for real-time applications. Often detection speed for these approaches is measured in seconds per frame (SPF), and even the fastest high-accuracy detector, Faster R-CNN, operates at only 7 frames per second (FPS). There have been many attempts to build faster detectors by attacking each stage of the detection pipeline (see related work in Sec. 4), but so far, significantly increased speed comes only at the cost of significantly decreased detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_1",
"text": " This paper presents the first deep network based object detector that does not resample pixels or features for bounding box hypotheses and and is as accurate as approaches that do. This results in a significant improvement in speed for high-accuracy detection (59 FPS with mAP 74.3% on VOC2007 test, vs. Faster R-CNN 7 FPS with mAP 73.2% or YOLO 45 FPS with mAP 63.4%). The fundamental improvement in speed comes from eliminating bounding box proposals and the subsequent pixel or feature resampling stage. We are not the first to do this (cf (4, 5)), but by adding a series of improvements, we manage to increase the accuracy significantly over previous attempts. Our improvements include using a small convolutional filter to predict object categories and offsets in bounding box locations, using separate predictors (filters) for different aspect ratio detections, and applying these filters to multiple feature maps from the later stages of a network in order to perform detection at multiple scales. With these modifications—especially using multiple layers for prediction at different scales—we can achieve high-accuracy using relatively low resolution input, further increasing detection speed. While these contributions may seem small independently, we note that the resulting system improves accuracy on real-time detection for PASCAL VOC from 63.4% mAP for YOLO to 74.3% mAP for our SSD. This is a larger relative improvement in detection accuracy than that from the recent, very high-profile work on residual networks . Furthermore, significantly improving the speed of high-quality detection can broaden the range of settings where computer vision is useful. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_2",
"text": " We summarize our contributions as follows: • We introduce SSD, a single-shot detector for multiple categories that is faster than the previous state-of-the-art for single shot detectors (YOLO), and significantly more accurate, in fact as accurate as slower techniques that perform explicit region proposals and pooling (including Faster R-CNN). • The core of SSD is predicting category scores and box offsets for a fixed set of default bounding boxes using small convolutional filters applied to feature maps. • To achieve high detection accuracy we produce predictions of different scales from feature maps of different scales, and explicitly separate predictions by aspect ratio. • These design features lead to simple end-to-end training and high accuracy, even on low resolution input images, further improving the speed vs accuracy trade-off. • Experiments include timing and accuracy analysis on models with varying input size evaluated on PASCAL VOC, COCO, and ILSVRC and are compared to a range of recent state-of-the-art approaches. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_3",
"text": " This section describes our proposed SSD framework for detection (Sec. 2.1) and the associated training methodology (Sec. 2.2). Afterwards, Sec. 3 presents dataset-specific model details and experimental results. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_4",
"text": " The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification (truncated before any classification layers), which we will call the base network222We use the VGG-16 network as a base, but other networks should also produce good results.. We then add auxiliary structure to the network to produce detections with the following key features: ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_5",
"text": " Multi-scale feature maps for detection We add convolutional feature layers to the end of the truncated base network. These layers decrease in size progressively and allow predictions of detections at multiple scales. The convolutional model for predicting detections is different for each feature layer (cf Overfeat and YOLO that operate on a single scale feature map). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_6",
"text": " Convolutional predictors for detection Each added feature layer (or optionally an existing feature layer from the base network) can produce a fixed set of detection predictions using a set of convolutional filters. These are indicated on top of the SSD network architecture in Fig. 2. For a feature layer of size m×n𝑚𝑛m\\times n with p𝑝p channels, the basic element for predicting parameters of a potential detection is a 3×3×p33𝑝3\\times 3\\times p small kernel that produces either a score for a category, or a shape offset relative to the default box coordinates. At each of the m×n𝑚𝑛m\\times n locations where the kernel is applied, it produces an output value. The bounding box offset output values are measured relative to a default box position relative to each feature map location (cf the architecture of YOLO that uses an intermediate fully connected layer instead of a convolutional filter for this step). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_7",
"text": " Default boxes and aspect ratios We associate a set of default bounding boxes with each feature map cell, for multiple feature maps at the top of the network. The default boxes tile the feature map in a convolutional manner, so that the position of each box relative to its corresponding cell is fixed. At each feature map cell, we predict the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each box out of k𝑘k at a given location, we compute c𝑐c class scores and the 444 offsets relative to the original default box shape. This results in a total of (c+4)k𝑐4𝑘(c+4)k filters that are applied around each location in the feature map, yielding (c+4)kmn𝑐4𝑘𝑚𝑛(c+4)kmn outputs for a m×n𝑚𝑛m\\times n feature map. For an illustration of default boxes, please refer to Fig. 1. Our default boxes are similar to the anchor boxes used in Faster R-CNN , however we apply them to several feature maps of different resolutions. Allowing different default box shapes in several feature maps let us efficiently discretize the space of possible output box shapes. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_8",
"text": " The key difference between training SSD and training a typical detector that uses region proposals, is that ground truth information needs to be assigned to specific outputs in the fixed set of detector outputs. Some version of this is also required for training in YOLO and for the region proposal stage of Faster R-CNN and MultiBox. Once this assignment is determined, the loss function and back propagation are applied end-to-end. Training also involves choosing the set of default boxes and scales for detection as well as the hard negative mining and data augmentation strategies. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_9",
"text": " During training we need to determine which default boxes correspond to a ground truth detection and train the network accordingly. For each ground truth box we are selecting from default boxes that vary over location, aspect ratio, and scale. We begin by matching each ground truth box to the default box with the best jaccard overlap (as in MultiBox ). Unlike MultiBox, we then match default boxes to any ground truth with jaccard overlap higher than a threshold (0.5). This simplifies the learning problem, allowing the network to predict high scores for multiple overlapping default boxes rather than requiring it to pick only the one with maximum overlap. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_10",
"text": " The SSD training objective is derived from the MultiBox objective (7, 8) but is extended to handle multiple object categories. Let xijp={1,0}superscriptsubscript𝑥𝑖𝑗𝑝10x_{ij}^{p}=\\{1,0\\} be an indicator for matching the i𝑖i-th default box to the j𝑗j-th ground truth box of category p𝑝p. In the matching strategy above, we can have ∑ixijp≥1subscript𝑖superscriptsubscript𝑥𝑖𝑗𝑝1\\sum_{i}x_{ij}^{p}\\geq 1. The overall objective loss function is a weighted sum of the localization loss (loc) and the confidence loss (conf): L(x,c,l,g)=1N(Lconf(x,c)+αLloc(x,l,g))𝐿𝑥𝑐𝑙𝑔1𝑁subscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐𝛼subscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔L(x,c,l,g)=\\frac{1}{N}(L_{conf}(x,c)+\\alpha L_{loc}(x,l,g)) (1) where N is the number of matched default boxes. If N=0𝑁0N=0, wet set the loss to 0. The localization loss is a Smooth L1 loss between the predicted box (l𝑙l) and the ground truth box (g𝑔g) parameters. Similar to Faster R-CNN , we regress to offsets for the center (cx,cy𝑐𝑥𝑐𝑦cx,cy) of the default bounding box (d𝑑d) and for its width (w𝑤w) and height (hℎh). Lloc(x,l,g)=∑i∈PosN∑m∈{cx,cy,w,h}xijksmoothL1(lim−g^jm)g^jcx=(gjcx−dicx)/diwg^jcy=(gjcy−dicy)/dihg^jw=log(gjwdiw)g^jh=log(gjhdih)formulae-sequencesubscript𝐿𝑙𝑜𝑐𝑥𝑙𝑔superscriptsubscript𝑖𝑃𝑜𝑠𝑁subscript𝑚𝑐𝑥𝑐𝑦𝑤ℎsuperscriptsubscript𝑥𝑖𝑗𝑘subscriptsmoothL1superscriptsubscript𝑙𝑖𝑚superscriptsubscript^𝑔𝑗𝑚superscriptsubscript^𝑔𝑗𝑐𝑥superscriptsubscript𝑔𝑗𝑐𝑥superscriptsubscript𝑑𝑖𝑐𝑥superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗𝑐𝑦superscriptsubscript𝑔𝑗𝑐𝑦superscriptsubscript𝑑𝑖𝑐𝑦superscriptsubscript𝑑𝑖ℎsuperscriptsubscript^𝑔𝑗𝑤superscriptsubscript𝑔𝑗𝑤superscriptsubscript𝑑𝑖𝑤superscriptsubscript^𝑔𝑗ℎsuperscriptsubscript𝑔𝑗ℎsuperscriptsubscript𝑑𝑖ℎ\\begin{split}L_{loc}(x,l,g)=\\sum_{i\\in Pos}^{N}\\sum_{m\\in\\{cx,cy,w,h\\}}&x_{ij}^{k}\\text{smooth}_{\\text{L1}}(l_{i}^{m}-\\hat{g}_{j}^{m})\\\\ \\hat{g}_{j}^{cx}=(g_{j}^{cx}-d_{i}^{cx})/d_{i}^{w}\\quad\\quad&\\hat{g}_{j}^{cy}=(g_{j}^{cy}-d_{i}^{cy})/d_{i}^{h}\\\\ \\hat{g}_{j}^{w}=\\log\\Big{(}\\frac{g_{j}^{w}}{d_{i}^{w}}\\Big{)}\\quad\\quad&\\hat{g}_{j}^{h}=\\log\\Big{(}\\frac{g_{j}^{h}}{d_{i}^{h}}\\Big{)}\\end{split} (2) The confidence loss is the softmax loss over multiple classes confidences (c𝑐c). Lconf(x,c)=−∑i∈PosNxijplog(c^ip)−∑i∈Neglog(c^i0)wherec^ip=exp(cip)∑pexp(cip)formulae-sequencesubscript𝐿𝑐𝑜𝑛𝑓𝑥𝑐superscriptsubscript𝑖𝑃𝑜𝑠𝑁superscriptsubscript𝑥𝑖𝑗𝑝𝑙𝑜𝑔superscriptsubscript^𝑐𝑖𝑝subscript𝑖𝑁𝑒𝑔𝑙𝑜𝑔superscriptsubscript^𝑐𝑖0wheresuperscriptsubscript^𝑐𝑖𝑝superscriptsubscript𝑐𝑖𝑝subscript𝑝superscriptsubscript𝑐𝑖𝑝L_{conf}(x,c)=-\\sum_{i\\in Pos}^{N}x_{ij}^{p}log(\\hat{c}_{i}^{p})-\\sum_{i\\in Neg}log(\\hat{c}_{i}^{0})\\quad\\text{where}\\quad\\hat{c}_{i}^{p}=\\frac{\\exp(c_{i}^{p})}{\\sum_{p}\\exp(c_{i}^{p})} (3) and the weight term α𝛼\\alpha is set to 1 by cross validation. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_11",
"text": " To handle different object scales, some methods (4, 9) suggest processing the image at different sizes and combining the results afterwards. However, by utilizing feature maps from several different layers in a single network for prediction we can mimic the same effect, while also sharing parameters across all object scales. Previous works (10, 11) have shown that using feature maps from the lower layers can improve semantic segmentation quality because the lower layers capture more fine details of the input objects. Similarly, showed that adding global context pooled from a feature map can help smooth the segmentation results. Motivated by these methods, we use both the lower and upper feature maps for detection. Figure 1 shows two exemplar feature maps (8×8888\\times 8 and 4×4444\\times 4) which are used in the framework. In practice, we can use many more with small computational overhead. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_12",
"text": " Feature maps from different levels within a network are known to have different (empirical) receptive field sizes . Fortunately, within the SSD framework, the default boxes do not necessary need to correspond to the actual receptive fields of each layer. We design the tiling of default boxes so that specific feature maps learn to be responsive to particular scales of the objects. Suppose we want to use m𝑚m feature maps for prediction. The scale of the default boxes for each feature map is computed as: sk=smin+smax−sminm−1(k−1),k∈(1,m)formulae-sequencesubscript𝑠𝑘subscript𝑠minsubscript𝑠maxsubscript𝑠min𝑚1𝑘1𝑘1𝑚s_{k}=s_{\\text{min}}+\\frac{s_{\\text{max}}-s_{\\text{min}}}{m-1}(k-1),\\quad k\\in(1,m) (4) where sminsubscript𝑠mins_{\\text{min}} is 0.2 and smaxsubscript𝑠maxs_{\\text{max}} is 0.9, meaning the lowest layer has a scale of 0.2 and the highest layer has a scale of 0.9, and all layers in between are regularly spaced. We impose different aspect ratios for the default boxes, and denote them as ar∈{1,2,3,12,13}subscript𝑎𝑟1231213a_{r}\\in\\{1,2,3,\\frac{1}{2},\\frac{1}{3}\\}. We can compute the width (wka=skarsuperscriptsubscript𝑤𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟w_{k}^{a}=s_{k}\\sqrt{a_{r}}) and height (hka=sk/arsuperscriptsubscriptℎ𝑘𝑎subscript𝑠𝑘subscript𝑎𝑟h_{k}^{a}=s_{k}/\\sqrt{a_{r}}) for each default box. For the aspect ratio of 1, we also add a default box whose scale is sk′=sksk+1subscriptsuperscript𝑠′𝑘subscript𝑠𝑘subscript𝑠𝑘1s^{\\prime}_{k}=\\sqrt{s_{k}s_{k+1}}, resulting in 6 default boxes per feature map location. We set the center of each default box to (i+0.5|fk|,j+0.5|fk|)𝑖0.5subscript𝑓𝑘𝑗0.5subscript𝑓𝑘(\\frac{i+0.5}{|f_{k}|},\\frac{j+0.5}{|f_{k}|}), where |fk|subscript𝑓𝑘|f_{k}| is the size of the k𝑘k-th square feature map, i,j∈(0,|fk|)𝑖𝑗0subscript𝑓𝑘i,j\\in(0,|f_{k}|). In practice, one can also design a distribution of default boxes to best fit a specific dataset. How to design the optimal tiling is an open question as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_13",
"text": " By combining predictions for all default boxes with different scales and aspect ratios from all locations of many feature maps, we have a diverse set of predictions, covering various input object sizes and shapes. For example, in Fig. 1, the dog is matched to a default box in the 4×4444\\times 4 feature map, but not to any default boxes in the 8×8888\\times 8 feature map. This is because those boxes have different scales and do not match the dog box, and therefore are considered as negatives during training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_14",
"text": " After the matching step, most of the default boxes are negatives, especially when the number of possible default boxes is large. This introduces a significant imbalance between the positive and negative training examples. Instead of using all the negative examples, we sort them using the highest confidence loss for each default box and pick the top ones so that the ratio between the negatives and positives is at most 3:1. We found that this leads to faster optimization and a more stable training. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_15",
"text": " To make the model more robust to various input object sizes and shapes, each training image is randomly sampled by one of the following options: • Use the entire original input image. • Sample a patch so that the minimum jaccard overlap with the objects is 0.1, 0.3, 0.5, 0.7, or 0.9. • Randomly sample a patch. The size of each sampled patch is (0.1, 1) of the original image size, and the aspect ratio is between 1212\\frac{1}{2} and 2. We keep the overlapped part of the ground truth box if the center of it is in the sampled patch. After the aforementioned sampling step, each sampled patch is resized to fixed size and is horizontally flipped with probability of 0.5, in addition to applying some photo-metric distortions similar to those described in . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_16",
"text": " Our experiments are all based on VGG16 , which is pre-trained on the ILSVRC CLS-LOC dataset . Similar to DeepLab-LargeFOV , we convert fc6 and fc7 to convolutional layers, subsample parameters from fc6 and fc7, change pool5 from 2×2−s222𝑠22\\times 2-s2 to 3×3−s133𝑠13\\times 3-s1, and use the à trous algorithm to fill the ”holes”. We remove all the dropout layers and the fc8 layer. We fine-tune the resulting model using SGD with initial learning rate 10−3superscript10310^{-3}, 0.9 momentum, 0.0005 weight decay, and batch size 32. The learning rate decay policy is slightly different for each dataset, and we will describe details later. The full training and testing code is built on Caffe and is open source at: https://github.com/weiliu89/caffe/tree/ssd . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_17",
"text": " On this dataset, we compare against Fast R-CNN and Faster R-CNN on VOC2007 test (4952 images). All methods fine-tune on the same pre-trained VGG16 network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_18",
"text": " Figure 2 shows the architecture details of the SSD300 model. We use conv4_3, conv7 (fc7), conv8_2, conv9_2, conv10_2, and conv11_2 to predict both location and confidences. We set default box with scale 0.1 on conv4_3333For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.15, and 0.07 on conv4_3.. We initialize the parameters for all the newly added convolutional layers with the ”xavier” method . For conv4_3, conv10_2 and conv11_2, we only associate 4 default boxes at each feature map location – omitting aspect ratios of 1313\\frac{1}{3} and 3. For all other layers, we put 6 default boxes as described in Sec. 2.2.3. Since, as pointed out in , conv4_3 has a different feature scale compared to the other layers, we use the L2 normalization technique introduced in to scale the feature norm at each location in the feature map to 20 and learn the scale during back propagation. We use the 10−3superscript10310^{-3} learning rate for 40k iterations, then continue training for 10k iterations with 10−4superscript10410^{-4} and 10−5superscript10510^{-5}. When training on VOC2007 trainval, Table 1 shows that our low resolution SSD300 model is already more accurate than Fast R-CNN. When we train SSD on a larger 512×512512512512\\times 512 input image, it is even more accurate, surpassing Faster R-CNN by 1.7% mAP. If we train SSD with more (i.e. 07+12) data, we see that SSD300 is already better than Faster R-CNN by 1.1% and that SSD512 is 3.6% better. If we take models trained on COCO trainval35k as described in Sec. 3.4 and fine-tuning them on the 07+12 dataset with SSD512, we achieve the best results: 81.6% mAP. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_19",
"text": " To understand the performance of our two SSD models in more details, we used the detection analysis tool from . Figure 3 shows that SSD can detect various object categories with high quality (large white area). The majority of its confident detections are correct. The recall is around 85-90%, and is much higher with “weak” (0.1 jaccard overlap) criteria. Compared to R-CNN , SSD has less localization error, indicating that SSD can localize objects better because it directly learns to regress the object shape and classify object categories instead of using two decoupled steps. However, SSD has more confusions with similar object categories (especially for animals), partly because we share locations for multiple categories. Figure 4 shows that SSD is very sensitive to the bounding box size. In other words, it has much worse performance on smaller objects than bigger objects. This is not surprising because those small objects may not even have any information at the very top layers. Increasing the input size (e.g. from 300×300300300300\\times 300 to 512×512512512512\\times 512) can help improve detecting small objects, but there is still a lot of room to improve. On the positive side, we can clearly see that SSD performs really well on large objects. And it is very robust to different object aspect ratios because we use default boxes of various aspect ratios per feature map location. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_20",
"text": " To understand SSD better, we carried out controlled experiments to examine how each component affects performance. For all the experiments, we use the same settings and input size (300×300300300300\\times 300), except for specified changes to the settings or component(s). ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_21",
"text": " Data augmentation is crucial. Fast and Faster R-CNN use the original image and the horizontal flip to train. We use a more extensive sampling strategy, similar to YOLO . Table 2 shows that we can improve 8.8% mAP with this sampling strategy. We do not know how much our sampling strategy will benefit Fast and Faster R-CNN, but they are likely to benefit less because they use a feature pooling step during classification that is relatively robust to object translation by design. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_22",
"text": " More default box shapes is better. As described in Sec. 2.2.3, by default we use 6 default boxes per location. If we remove the boxes with 1313\\frac{1}{3} and 3 aspect ratios, the performance drops by 0.6%. By further removing the boxes with 1212\\frac{1}{2} and 2 aspect ratios, the performance drops another 2.1%. Using a variety of default box shapes seems to make the task of predicting boxes easier for the network. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_23",
"text": " Atrous is faster. As described in Sec. 3, we used the atrous version of a subsampled VGG16, following DeepLab-LargeFOV . If we use the full VGG16, keeping pool5 with 2×2−s222𝑠22\\times 2-s2 and not subsampling parameters from fc6 and fc7, and add conv5_3 for prediction, the result is about the same while the speed is about 20% slower. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_24",
"text": " We use the same settings as those used for our basic VOC2007 experiments above, except that we use VOC2012 trainval and VOC2007 trainval and test (21503 images) for training, and test on VOC2012 test (10991 images). We train the models with 10−3superscript10310^{-3} learning rate for 60k iterations, then 10−4superscript10410^{-4} for 20k iterations. Table 4 shows the results of our SSD300 and SSD512444\\ssmallhttp://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=4 model. We see the same performance trend as we observed on VOC2007 test. Our SSD300 improves accuracy over Fast/Faster R-CNN. By increasing the training and testing image size to 512×512512512512\\times 512, we are 4.5% more accurate than Faster R-CNN. Compared to YOLO, SSD is significantly more accurate, likely due to the use of convolutional default boxes from multiple feature maps and our matching strategy during training. When fine-tuned from models trained on COCO, our SSD512 achieves 80.0% mAP, which is 4.1% higher than Faster R-CNN. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_25",
"text": " To further validate the SSD framework, we trained our SSD300 and SSD512 architectures on the COCO dataset. Since objects in COCO tend to be smaller than PASCAL VOC, we use smaller default boxes for all layers. We follow the strategy mentioned in Sec. 2.2.3, but now our smallest default box has a scale of 0.15 instead of 0.2, and the scale of the default box on conv4_3 is 0.07 (e.g. 21 pixels for a 300×300300300300\\times 300 image)555For SSD512 model, we add extra conv12_2 for prediction, set sminsubscript𝑠mins_{\\text{min}} to 0.1, and 0.04 on conv4_3.. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_26",
"text": " We use the trainval35k for training. We first train the model with 10−3superscript10310^{-3} learning rate for 160k iterations, and then continue training for 40k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. Table 5 shows the results on test-dev2015. Similar to what we observed on the PASCAL VOC dataset, SSD300 is better than Fast R-CNN in both [email protected] and mAP@(0.5:0.95). SSD300 has a similar [email protected] as ION and Faster R-CNN , but is worse in [email protected]. By increasing the image size to 512×512512512512\\times 512, our SSD512 is better than Faster R-CNN in both criteria. Interestingly, we observe that SSD512 is 5.3% better in [email protected], but is only 1.2% better in [email protected]. We also observe that it has much better AP (4.8%) and AR (4.6%) for large objects, but has relatively less improvement in AP (1.3%) and AR (2.0%) for small objects. Compared to ION, the improvement in AR for large and small objects is more similar (5.4% vs. 3.9%). We conjecture that Faster R-CNN is more competitive on smaller objects with SSD because it performs two box refinement steps, in both the RPN part and in the Fast R-CNN part. In Fig. 3.2, we show some detection examples on COCO test-dev with the SSD512 model. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_27",
"text": " We applied the same network architecture we used for COCO to the ILSVRC DET dataset . We train a SSD300 model using the ILSVRC2014 DET train and val1 as used in . We first train the model with 10−3superscript10310^{-3} learning rate for 320k iterations, and then continue training for 80k iterations with 10−4superscript10410^{-4} and 40k iterations with 10−5superscript10510^{-5}. We can achieve 43.4 mAP on the val2 set . Again, it validates that SSD is a general framework for high quality real-time detection. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_28",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_29",
"text": " Without a follow-up feature resampling step as in Faster R-CNN, the classification task for small objects is relatively hard for SSD, as demonstrated in our analysis (see Fig. 4). The data augmentation strategy described in Sec. 2.2 helps to improve the performance dramatically, especially on small datasets such as PASCAL VOC. The random crops generated by the strategy can be thought of as a ”zoom in” operation and can generate many larger training examples. To implement a ”zoom out” operation that creates more small training examples, we first randomly place an image on a canvas of 16×16\\times of the original image size filled with mean values before we do any random crop operation. Because we have more training images by introducing this new ”expansion” data augmentation trick, we have to double the training iterations. We have seen a consistent increase of 2%-3% mAP across multiple datasets, as shown in Table 6. In specific, Figure 3.2 shows that the new augmentation trick significantly improves the performance on small objects. This result underscores the importance of the data augmentation strategy for the final model accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_30",
"text": " An alternative way of improving SSD is to design a better tiling of default boxes so that its position and scale are better aligned with the receptive field of each position on a feature map. We leave this for future work. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_31",
"text": " ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_32",
"text": " Considering the large number of boxes generated from our method, it is essential to perform non-maximum suppression (nms) efficiently during inference. By using a confidence threshold of 0.01, we can filter out most boxes. We then apply nms with jaccard overlap of 0.45 per class and keep the top 200 detections per image. This step costs about 1.7 msec per image for SSD300 and 20 VOC classes, which is close to the total time (2.4 msec) spent on all newly added layers. We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon [email protected]. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_33",
"text": " Table 7 shows the comparison between SSD, Faster R-CNN, and YOLO. Both our SSD300 and SSD512 method outperforms Faster R-CNN in both speed and accuracy. Although Fast YOLO can run at 155 FPS, it has lower accuracy by almost 22% mAP. To the best of our knowledge, SSD300 is the first real-time method to achieve above 70% mAP. Note that about 80% of the forward time is spent on the base network (VGG16 in our case). Therefore, using a faster base network could even further improve the speed, which can possibly make the SSD512 model real-time as well. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_34",
"text": " There are two established classes of methods for object detection in images, one based on sliding windows and the other based on region proposal classification. Before the advent of convolutional neural networks, the state of the art for those two approaches – Deformable Part Model (DPM) and Selective Search – had comparable performance. However, after the dramatic improvement brought on by R-CNN , which combines selective search region proposals and convolutional network based post-classification, region proposal object detection methods became prevalent. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_35",
"text": " The original R-CNN approach has been improved in a variety of ways. The first set of approaches improve the quality and speed of post-classification, since it requires the classification of thousands of image crops, which is expensive and time-consuming. SPPnet speeds up the original R-CNN approach significantly. It introduces a spatial pyramid pooling layer that is more robust to region size and scale and allows the classification layers to reuse features computed over feature maps generated at several image resolutions. Fast R-CNN extends SPPnet so that it can fine-tune all layers end-to-end by minimizing a loss for both confidences and bounding box regression, which was first introduced in MultiBox for learning objectness. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_36",
"text": " The second set of approaches improve the quality of proposal generation using deep neural networks. In the most recent works like MultiBox (7, 8), the Selective Search region proposals, which are based on low-level image features, are replaced by proposals generated directly from a separate deep neural network. This further improves the detection accuracy but results in a somewhat complex setup, requiring the training of two neural networks with a dependency between them. Faster R-CNN replaces selective search proposals by ones learned from a region proposal network (RPN), and introduces a method to integrate the RPN with Fast R-CNN by alternating between fine-tuning shared convolutional layers and prediction layers for these two networks. This way region proposals are used to pool mid-level features and the final classification step is less expensive. Our SSD is very similar to the region proposal network (RPN) in Faster R-CNN in that we also use a fixed set of (default) boxes for prediction, similar to the anchor boxes in the RPN. But instead of using these to pool features and evaluate another classifier, we simultaneously produce a score for each object category in each box. Thus, our approach avoids the complication of merging RPN with Fast R-CNN and is easier to train, faster, and straightforward to integrate in other tasks. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_37",
"text": " Another set of methods, which are directly related to our approach, skip the proposal step altogether and predict bounding boxes and confidences for multiple categories directly. OverFeat , a deep version of the sliding window method, predicts a bounding box directly from each location of the topmost feature map after knowing the confidences of the underlying object categories. YOLO uses the whole topmost feature map to predict both confidences for multiple categories and bounding boxes (which are shared for these categories). Our SSD method falls in this category because we do not have the proposal step but use the default boxes. However, our approach is more flexible than the existing methods because we can use default boxes of different aspect ratios on each feature location from multiple feature maps at different scales. If we only use one default box per location from the topmost feature map, our SSD would have similar architecture to OverFeat ; if we use the whole topmost feature map and add a fully connected layer for predictions instead of our convolutional predictors, and do not explicitly consider multiple aspect ratios, we can approximately reproduce YOLO . ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_38",
"text": " This paper introduces SSD, a fast single-shot object detector for multiple categories. A key feature of our model is the use of multi-scale convolutional bounding box outputs attached to multiple feature maps at the top of the network. This representation allows us to efficiently model the space of possible box shapes. We experimentally validate that given appropriate training strategies, a larger number of carefully chosen default bounding boxes results in improved performance. We build SSD models with at least an order of magnitude more box predictions sampling location, scale, and aspect ratio, than existing methods (5, 7). We demonstrate that given the same VGG-16 base architecture, SSD compares favorably to its state-of-the-art object detector counterparts in terms of both accuracy and speed. Our SSD512 model significantly outperforms the state-of-the-art Faster R-CNN in terms of accuracy on PASCAL VOC and COCO, while being 3×3\\times faster. Our real time SSD300 model runs at 59 FPS, which is faster than the current real time YOLO alternative, while producing markedly superior detection accuracy. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_39",
"text": " Apart from its standalone utility, we believe that our monolithic and relatively simple SSD model provides a useful building block for larger systems that employ an object detection component. A promising future direction is to explore its use as part of a system using recurrent neural networks to detect and track objects in video simultaneously. ",
"title": "SSD: Single Shot MultiBox Detector"
},
{
"id": "1512.02325_all_40",
"text": " This work was started as an internship project at Google and continued at UNC. We would like to thank Alex Toshev for helpful discussions and are indebted to the Image Understanding and DistBelief teams at Google. We also thank Philip Ammirato and Patrick Poirson for helpful comments. We thank NVIDIA for providing GPUs and acknowledge support from NSF 1452851, 1446631, 1526367, 1533771. ",
"title": "SSD: Single Shot MultiBox Detector"
}
] |
Which pretrained large text-to-image models have the authors used?
|
The authors used the pre-trained Imagen text-to-image diffusion model [4].
|
[
4
] |
[
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requires synthesizing instances of specific subjects (e.g., objects, animals) in new contexts such that they naturally and seamlessly blend into the scene. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_1",
"text": " Recently developed large text-to-image models have shown unprecedented capabilities, by enabling high-quality and diverse synthesis of images based on a text prompt written in natural language (61, 54). One of the main advantages of such models is the strong semantic prior learned from a large collection of image-caption pairs. Such a prior learns, for instance, to bind the word “dog” with various instances of dogs that can appear in different poses and contexts in an image. While the synthesis capabilities of these models are unprecedented, they lack the ability to mimic the appearance of subjects in a given reference set, and synthesize novel renditions of the same subjects in different contexts. The main reason is that the expressiveness of their output domain is limited; even the most detailed textual description of an object may yield instances with different appearances. Furthermore, even models whose text embedding lies in a shared language-vision space cannot accurately reconstruct the appearance of given subjects but only create variations of the image content (Figure 2). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_2",
"text": " In this work, we present a new approach for “personalization” of text-to-image diffusion models (adapting them to user-specific image generation needs). Our goal is to expand the language-vision dictionary of the model such that it binds new words with specific subjects the user wants to generate. Once the new dictionary is embedded in the model, it can use these words to synthesize novel photorealistic images of the subject, contextualized in different scenes, while preserving their key identifying features. The effect is akin to a “magic photo booth”—once a few images of the subject are taken, the booth generates photos of the subject in different conditions and scenes, as guided by simple and intuitive text prompts (Figure 1). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_3",
"text": " More formally, given a few images of a subject (∼similar-to\\sim3-5), our objective is to implant the subject into the output domain of the model such that it can be synthesized with a unique identifier. To that end, we propose a technique to represent a given subject with rare token identifiers and fine-tune a pre-trained, diffusion-based text-to-image framework. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_4",
"text": " We fine-tune the text-to-image model with the input images and text prompts containing a unique identifier followed by the class name of the subject (e.g., “A (V) dog”). The latter enables the model to use its prior knowledge on the subject class while the class-specific instance is bound with the unique identifier. In order to prevent language drift (34, 40) that causes the model to associate the class name (e.g., “dog”) with the specific instance, we propose an autogenous, class-specific prior preservation loss, which leverages the semantic prior on the class that is embedded in the model, and encourages it to generate diverse instances of the same class as our subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_5",
"text": " We apply our approach to a myriad of text-based image generation applications including recontextualization of subjects, modification of their properties, original art renditions, and more, paving the way to a new stream of previously unassailable tasks. We highlight the contribution of each component in our method via ablation studies, and compare with alternative baselines and related work. We also conduct a user study to evaluate subject and prompt fidelity in our synthesized images, compared to alternative approaches. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_6",
"text": " To the best of our knowledge, ours is the first technique that tackles this new challenging problem of subject-driven generation, allowing users, from just a few casually captured images of a subject, synthesize novel renditions of the subject in different contexts while maintaining its distinctive features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_7",
"text": " To evaluate this new task, we also construct a new dataset that contains various subjects captured in different contexts, and propose a new evaluation protocol that measures the subject fidelity and prompt fidelity of the generated results. We make our dataset and evaluation protocol publicly available on the project webpage. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_8",
"text": " Image Composition. Image composition techniques (70, 13, 38) aim to clone a given subject into a new background such that the subject melds into the scene. To consider composition in novel poses, one may apply 3D reconstruction techniques (41, 6, 8, 68, 49) which usually works on rigid objects and require a larger number of views. Some drawbacks include scene integration (lighting, shadows, contact) and the inability to generate novel scenes. In contrast, our approach enable generation of subjects in novel poses and new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_9",
"text": " Text-to-Image Editing and Synthesis. Text-driven image manipulation has recently achieved significant progress using GANs (22, 9, 28, 29, 30) combined with image-text representations such as CLIP , yielding realistic manipulations using text (48, 21, 71, 2, 7, 43). These methods work well on structured scenarios (e.g. human face editing) and can struggle over diverse datasets where subjects are varied. Crowson et al. use VQ-GAN and train over more diverse data to alleviate this concern. Other works (4, 31) exploit the recent diffusion models (25, 63, 65, 25, 64, 58, 45, 66, 60, 62), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . While most works that require only text are limited to global editing (14, 33), Bar-Tal et al. proposed a text-based localized editing technique without using masks, showing impressive results. While most of these editing approaches allow modification of global properties or local editing of a given image, none enables generating novel renditions of a given subject in new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_10",
"text": " There also exists work on text-to-image synthesis (16, 24, 67, 35, 36, 50, 51, 55, 74, 14, 19, 58, 27). Recent large text-to-image models such as Imagen , DALL-E2 , Parti , CogView2 and Stable Diffusion demonstrated unprecedented semantic generation. These models do not provide fine-grained control over a generated image and use text guidance only. Specifically, it is challenging or impossible to preserve the identity of a subject consistently across synthesized images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_11",
"text": " Controllable Generative Models. There are various approaches to control generative models, where some of them might prove to be viable directions for subject-driven prompt-guided image synthesis. Liu et al. propose a diffusion-based technique allowing for image variations guided by reference image or text. To overcome subject modification, several works (44, 3) assume a user-provided mask to restrict the modified area. Inversion (12, 15, 54) can be used to preserve a subject while modifying context. Prompt-to-prompt allows for local and global editing without an input mask. These methods fall short of identity-preserving novel sample generation of a subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_12",
"text": " In the context of GANs, Pivotal Tuning allows for real image editing by finetuning the model with an inverted latent code anchor, and Nitzan et al. extended this work to GAN finetuning on faces to train a personalized prior, which requires around 100 images and are limited to the face domain. Casanova et al. propose an instance conditioned GAN that can generate variations of an instance, although it can struggle with unique subjects and does not preserve all subject details. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_13",
"text": " Finally, the concurrent work of Gal et al. proposes a method to represent visual concepts, like an object or a style, through new tokens in the embedding space of a frozen text-to-image model, resulting in small personalized token embeddings. While this method is limited by the expressiveness of the frozen diffusion model, our fine-tuning approach enables us to embed the subject within the model’s output domain, resulting in the generation of novel images of the subject which preserve its key visual features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_14",
"text": " Given only a few (typically 3-5) casually captured images of a specific subject, without any textual description, our objective is to generate new images of the subject with high detail fidelity and with variations guided by text prompts. Example variations include changing the subject location, changing subject properties such as color or shape, modifying the subject’s pose, viewpoint, and other semantic modifications. We do not impose any restrictions on input image capture settings and the subject image can have varying contexts. We next provide some background on text-to-image diffusion models (Sec. 3.1), then present our fine-tuning technique to bind a unique identifier with a subject described in a few images (Sec. 3.2), and finally propose a class-specific prior-preservation loss that enables us to overcome language drift in our fine-tuned model (Sec. 3.3). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_15",
"text": " Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, we are interested in a pre-trained text-to-image diffusion model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} that, given an initial noise map ϵ∼𝒩(𝟎,𝐈)similar-tobold-italic-ϵ𝒩0𝐈{\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and a conditioning vector 𝐜=Γ(𝐏)𝐜Γ𝐏\\mathbf{c}=\\Gamma(\\mathbf{P}) generated using a text encoder ΓΓ\\Gamma and a text prompt 𝐏𝐏\\mathbf{P}, generates an image 𝐱gen=𝐱^θ(ϵ,𝐜)subscript𝐱gensubscript^𝐱𝜃bold-italic-ϵ𝐜\\mathbf{x}_{\\text{gen}}=\\hat{\\mathbf{x}}_{\\theta}({\\bm{\\epsilon}},\\mathbf{c}). They are trained using a squared error loss to denoise a variably-noised image or latent code 𝐳t≔αt𝐱+σtϵ≔subscript𝐳𝑡subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ\\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows: 𝔼𝐱,𝐜,ϵ,t(wt‖𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱‖22)subscript𝔼𝐱𝐜bold-italic-ϵ𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptnormsubscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right) (1) where 𝐱𝐱\\mathbf{x} is the ground-truth image, 𝐜𝐜\\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), and αt,σt,wtsubscript𝛼𝑡subscript𝜎𝑡subscript𝑤𝑡\\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t∼𝒰((0,1))similar-to𝑡𝒰01t\\sim\\mathcal{U}((0,1)). A more detailed description is given in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
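As a concrete reading of Eq. (1) quoted above, the following is a minimal sketch (our own illustration, not DreamBooth's code) of the weighted x-prediction denoising loss. The model interface and the noise-schedule tensors are stand-in assumptions; `alphas`, `sigmas`, and `weights` are 1-D tensors indexed by a discretized timestep.

```python
import torch

def xpred_diffusion_loss(model, x, c, alphas, sigmas, weights):
    """Eq. (1) sketch: noise the clean image x to z_t = a_t*x + s_t*eps, feed it to the
    conditional model together with c, and penalize the weighted squared error to x."""
    b = x.shape[0]
    t = torch.randint(0, alphas.shape[0], (b,), device=x.device)  # discretized t ~ U[0, 1]
    a_t = alphas[t].view(b, 1, 1, 1)
    s_t = sigmas[t].view(b, 1, 1, 1)
    w_t = weights[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x)
    z_t = a_t * x + s_t * eps
    x_hat = model(z_t, t, c)  # x-prediction network, standing in for \hat{x}_theta
    return (w_t * (x_hat - x) ** 2).flatten(1).sum(dim=1).mean()
```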
{
"id": "2208.12242_all_16",
"text": " Our first task is to implant the subject instance into the output domain of the model such that we can query the model for varied novel images of the subject. One natural idea is to fine-tune the model using the few-shot dataset of the subject. Careful care had to be taken when fine-tuning generative models such as GANs in a few-shot scenario as it can cause overfitting and mode-collapse - as well as not capturing the target distribution sufficiently well. There has been research on techniques to avoid these pitfalls (56, 47, 37, 42, 69), although, in contrast to our work, this line of work primarily seeks to generate images that resemble the target distribution but has no requirement of subject preservation. With regards to these pitfalls, we observe the peculiar finding that, given a careful fine-tuning setup using the diffusion loss from Eq 1, large text-to-image diffusion models seem to excel at integrating new information into their domain without forgetting the prior or overfitting to a small set of training images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_17",
"text": " Our goal is to “implant” a new (unique identifier, subject) pair into the diffusion model’s “dictionary” . In order to bypass the overhead of writing detailed image descriptions for a given image set we opt for a simpler approach and label all input images of the subject “a (identifier) (class noun)”, where (identifier) is a unique identifier linked to the subject and (class noun) is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). The class descriptor can be provided by the user or obtained using a classifier. We use a class descriptor in the sentence in order to tether the prior of the class to our unique subject and find that using a wrong class descriptor, or no class descriptor increases training time and language drift while decreasing performance. In essence, we seek to leverage the model’s prior of the specific class and entangle it with the embedding of our subject’s unique identifier so we can leverage the visual prior to generate new poses and articulations of the subject in different contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_18",
"text": " We generally find existing English words (e.g. “unique”, “special”) suboptimal since the model has to learn to disentangle them from their original meaning and to re-entangle them to reference our subject. This motivates the need for an identifier that has a weak prior in both the language model and the diffusion model. A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. We often find that these tokens incur the similar weaknesses as using common English words. Our approach is to find rare tokens in the vocabulary, and then invert these tokens into text space, in order to minimize the probability of the identifier having a strong prior. We perform a rare-token lookup in the vocabulary and obtain a sequence of rare token identifiers f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}), where f𝑓f is a tokenizer; a function that maps character sequences to tokens and 𝐕^^𝐕\\hat{\\mathbf{V}} is the decoded text stemming from the tokens f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}). The sequence can be of variable length k𝑘k, and find that relatively short sequences of k={1,…,3}𝑘1…3k=\\{1,...,3\\} work well. Then, by inverting the vocabulary using the de-tokenizer on f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}) we obtain a sequence of characters that define our unique identifier 𝐕^^𝐕\\hat{\\mathbf{V}}. For Imagen, we find that using uniform random sampling of tokens that correspond to 3 or fewer Unicode characters (without spaces) and using tokens in the T5-XXL tokenizer range of {5000,…,10000}5000…10000\\{5000,...,10000\\} works well. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
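The rare-token identifier search described above can be sketched as plain sampling plus de-tokenization. This is a hypothetical illustration: the `detokenize` callable and the toy vocabulary stand in for a real tokenizer (e.g. a T5 tokenizer's decode method), and the id range simply mirrors the numbers quoted in the passage.

```python
import random

def sample_rare_identifier(detokenize, id_range=(5000, 10000), max_len=3, seed=None):
    """Uniformly sample a short sequence of token ids from a low-frequency region of the
    vocabulary and invert it back to text, yielding an identifier with a weak prior."""
    rng = random.Random(seed)
    k = rng.randint(1, max_len)                            # sequence length k in {1, ..., 3}
    token_ids = [rng.randint(*id_range) for _ in range(k)]
    return detokenize(token_ids), token_ids

# Toy usage; in practice detokenize would be the text encoder's tokenizer decoder.
toy_vocab = {i: f"t{i}" for i in range(5000, 10001)}
identifier, ids = sample_rare_identifier(lambda ids: "".join(toy_vocab[i] for i in ids), seed=0)
print(identifier, ids)
```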
{
"id": "2208.12242_all_19",
"text": " In our experience, the best results for maximum subject fidelity are achieved by fine-tuning all layers of the model. This includes fine-tuning layers that are conditioned on the text embeddings, which gives rise to the problem of language drift. Language drift has been an observed problem in language models (34, 40), where a model that is pre-trained on a large text corpus and later fine-tuned for a specific task progressively loses syntactic and semantic knowledge of the language. To the best of our knowledge, we are the first to find a similar phenomenon affecting diffusion models, where to model slowly forgets how to generate subjects of the same class as the target subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_20",
"text": " Another problem is the possibility of reduced output diversity. Text-to-image diffusion models naturally posses high amounts of output diversity. When fine-tuning on a small set of images we would like to be able to generate the subject in novel viewpoints, poses and articulations. Yet, there is a risk of reducing the amount of variability in the output poses and views of the subject (e.g. snapping to the few-shot views). We observe that this is often the case, especially when the model is trained for too long. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_21",
"text": " To mitigate the two aforementioned issues, we propose an autogenous class-specific prior preservation loss that encourages diversity and counters language drift. In essence, our method is to supervise the model with its own generated samples, in order for it to retain the prior once the few-shot fine-tuning begins. This allows it to generate diverse images of the class prior, as well as retain knowledge about the class prior that it can use in conjunction with knowledge about the subject instance. Specifically, we generate data 𝐱pr=𝐱^(𝐳t1,𝐜pr)subscript𝐱pr^𝐱subscript𝐳subscript𝑡1subscript𝐜pr\\mathbf{x}_{\\text{pr}}=\\hat{\\mathbf{x}}(\\mathbf{z}_{t_{1}},\\mathbf{c}_{\\text{pr}}) by using the ancestral sampler on the frozen pre-trained diffusion model with random initial noise 𝐳t1∼𝒩(𝟎,𝐈)similar-tosubscript𝐳subscript𝑡1𝒩0𝐈\\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and conditioning vector 𝐜pr≔Γ(f(”a (class noun)”))≔subscript𝐜prΓ𝑓”a (class noun)”\\mathbf{c}_{\\text{pr}}\\coloneqq\\Gamma(f(\\text{\"a (class noun)\"})). The loss becomes: 𝔼𝐱,𝐜,ϵ,ϵ′,t(wt∥𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱∥22+λwt′∥𝐱^θ(αt′𝐱pr+σt′ϵ′,𝐜pr)−𝐱pr∥22),subscript𝔼𝐱𝐜bold-italic-ϵsuperscriptbold-italic-ϵ′𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22𝜆subscript𝑤superscript𝑡′subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼superscript𝑡′subscript𝐱prsubscript𝜎superscript𝑡′superscriptbold-italic-ϵ′subscript𝐜prsubscript𝐱pr22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},{\\bm{\\epsilon}}^{\\prime},t}(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}+\\\\ \\lambda w_{t^{\\prime}}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t^{\\prime}}\\mathbf{x}_{\\text{pr}}+\\sigma_{t^{\\prime}}{\\bm{\\epsilon}}^{\\prime},\\mathbf{c}_{\\text{pr}})-\\mathbf{x}_{\\text{pr}}\\|^{2}_{2}), (2) where the second term is the prior-preservation term that supervises the model with its own generated images, and λ𝜆\\lambda controls for the relative weight of this term. Figure 3 illustrates the model fine-tuning with the class-generated samples and prior-preservation loss. Despite being simple, we find this prior-preservation loss is effective in encouraging output diversity and in overcoming language-drift. We also find that we can train the model for more iterations without risking overfitting. We find that ∼similar-to\\sim 1000 iterations with λ=1𝜆1\\lambda=1 and learning rate 10−5superscript10510^{-5} for Imagen and 5×10−65superscript1065\\times 10^{-6} for Stable Diffusion , and with a subject dataset size of 3-5 images is enough to achieve good results. During this process, ∼1000similar-toabsent1000\\sim 1000 “a (class noun)” samples are generated - but less can be used. The training process takes about 5 minutes on one TPUv4 for Imagen, and 5 minutes on a NVIDIA A100 for Stable Diffusion. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
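A minimal sketch of the two-term objective in Eq. (2) above, assuming the same stand-in noise-schedule tensors as before: one denoising term on the subject images and one on the class-prior images the frozen model generated for “a (class noun)”. This is an illustrative approximation, not the authors' implementation.

```python
import torch

def prior_preservation_loss(model, x, c, x_pr, c_pr, alphas, sigmas, weights, lam=1.0):
    """Subject reconstruction term + lambda-weighted class-prior preservation term."""
    def denoise_term(images, cond):
        b = images.shape[0]
        t = torch.randint(0, alphas.shape[0], (b,), device=images.device)
        a, s, w = (v[t].view(b, 1, 1, 1) for v in (alphas, sigmas, weights))
        eps = torch.randn_like(images)
        pred = model(a * images + s * eps, t, cond)
        return (w * (pred - images) ** 2).flatten(1).sum(dim=1).mean()

    return denoise_term(x, c) + lam * denoise_term(x_pr, c_pr)
```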
{
"id": "2208.12242_all_22",
"text": " In this section, we show experiments and applications. Our method enables a large expanse of text-guided semantic modifications of our subject instances, including recontextualization, modification of subject properties such as material and species, art rendition, and viewpoint modification. Importantly, across all of these modifications, we are able to preserve the unique visual features that give the subject its identity and essence. If the task is recontextualization, then the subject features are unmodified, but appearance (e.g., pose) may change. If the task is a stronger semantic modification, such as crossing between our subject and another species/object, then the key features of the subject are preserved after modification. In this section, we reference the subject’s unique identifier using (V). We include specific Imagen and Stable Diffusion implementation details in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_23",
"text": " We collected a dataset of 30 subjects, including unique objects and pets such as backpacks, stuffed animals, dogs, cats, sunglasses, cartoons, etc. We separate each subject into two categories: objects and live subjects/pets. 21 of the 30 subjects are objects, and 9 are live subjects/pets. We provide one sample image for each of the subjects in Figure 5. Images for this dataset were collected by the authors or sourced from Unsplash . We also collected 25 prompts: 20 recontextualization prompts and 5 property modification prompts for objects; 10 recontextualization, 10 accessorization, and 5 property modification prompts for live subjects/pets. The full list of prompts can be found in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_24",
"text": " For the evaluation suite we generate four images per subject and per prompt, totaling 3,000 images. This allows us to robustly measure performances and generalization capabilities of a method. We make our dataset and evaluation protocol publicly available on the project webpage for future use in evaluating subject-driven generation. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_25",
"text": " One important aspect to evaluate is subject fidelity: the preservation of subject details in generated images. For this, we compute two metrics: CLIP-I and DINO . CLIP-I is the average pairwise cosine similarity between CLIP embeddings of generated and real images. Although this metric has been used in other work , it is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g. two different yellow clocks). Our proposed DINO metric is the average pairwise cosine similarity between the ViT-S/16 DINO embeddings of generated and real images. This is our preferred metric, since, by construction and in contrast to supervised networks, DINO is not trained to ignore differences between subjects of the same class. Instead, the self-supervised training objective encourages distinction of unique features of a subject or image. The second important aspect to evaluate is prompt fidelity, measured as the average cosine similarity between prompt and image CLIP embeddings. We denote this as CLIP-T. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
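The CLIP-I and DINO scores described above reduce to an average pairwise cosine similarity between two embedding sets; a small sketch follows (the embeddings here are random stand-ins for CLIP or DINO ViT-S/16 features):

```python
import torch
import torch.nn.functional as F

def avg_pairwise_cosine(gen_emb: torch.Tensor, real_emb: torch.Tensor) -> float:
    """Mean cosine similarity over all (generated, real) embedding pairs."""
    gen = F.normalize(gen_emb, dim=-1)
    real = F.normalize(real_emb, dim=-1)
    return (gen @ real.T).mean().item()

print(avg_pairwise_cosine(torch.randn(4, 384), torch.randn(5, 384)))
```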
{
"id": "2208.12242_all_26",
"text": " We compare our results with Textual Inversion, the recent concurrent work of Gal et al. , using the hyperparameters provided in their work. We find that this work is the only comparable work in the literature that is subject-driven, text-guided and generates novel images. We generate images for DreamBooth using Imagen, DreamBooth using Stable Diffusion and Textual Inversion using Stable Diffusion. We compute DINO and CLIP-I subject fidelity metrics and the CLIP-T prompt fidelity metric. In Table 1 we show sizeable gaps in both subject and prompt fidelity metrics for DreamBooth over Textual Inversion. We find that DreamBooth (Imagen) achieves higher scores for both subject and prompt fidelity than DreamBooth (Stable Diffusion), approaching the upper-bound of subject fidelity for real images. We believe that this is due to the larger expressive power and higher output quality of Imagen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_27",
"text": " Further, we compare Textual Inversion (Stable Diffusion) and DreamBooth (Stable Diffusion) by conducting a user study. For subject fidelity, we asked 72 users to answer questionnaires of 25 comparative questions (3 users per questionnaire), totaling 1800 answers. Samples are randomly selected from a large pool. Each question shows the set of real images for a subject, and one generated image of that subject by each method (with a random prompt). Users are asked to answer the question: “Which of the two images best reproduces the identity (e.g. item type and details) of the reference item?”, and we include a “Cannot Determine / Both Equally” option. Similarly for prompt fidelity, we ask “Which of the two images is best described by the reference text?”. We average results using majority voting and present them in Table 2. We find an overwhelming preference for DreamBooth for both subject fidelity and prompt fidelity. This shines a light on results in Table 1, where DINO differences of around 0.10.10.1 and CLIP-T differences of 0.050.050.05 are significant in terms of user preference. Finally, we show qualitative comparisons in Figure 4. We observe that DreamBooth better preserves subject identity, and is more faithful to prompts. We show samples of the user study in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_28",
"text": " We fine-tune Imagen on 15 subjects from our dataset, with and without our proposed prior preservation loss (PPL). The prior preservation loss seeks to combat language drift and preserve the prior. We compute a prior preservation metric (PRES) by computing the average pairwise DINO embeddings between generated images of random subjects of the prior class and real images of our specific subject. The higher this metric, the more similar random subjects of the class are to our specific subject, indicating collapse of the prior. We report results in Table 3 and observe that PPL substantially counteracts language drift and helps retain the ability to generate diverse images of the prior class. Additionally, we compute a diversity metric (DIV) using the average LPIPS cosine similarity between generated images of same subject with same prompt. We observe that our model trained with PPL achieves higher diversity (with slightly diminished subject fidelity), which can also be observed qualitatively in Figure 6, where our model trained with PPL overfits less to the environment of the reference images and can generate the dog in more diverse poses and articulations. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_29",
"text": " We finetune Imagen on a subset of our dataset subjects (5 subjects) with no class noun, a randomly sampled incorrect class noun, and the correct class noun. With the correct class noun for our subject, we are able to faithfully fit to the subject, take advantage of the class prior, allowing us to generate our subject in various contexts. When an incorrect class noun (e.g. “can” for a backpack) is used, we run into contention between our subject and and the class prior - sometimes obtaining cylindrical backpacks, or otherwise misshapen subjects. If we train with no class noun, the model does not leverage the class prior, has difficulty learning the subject and converging, and can generate erroneous samples. Subject fidelity results are shown in Table 4, with substantially higher subject fidelity for our proposed approach. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_30",
"text": " We can generate novel images for a specific subject in different contexts (Figure 7) with descriptive prompts (“a (V) (class noun) (context description)”). Importantly, we are able to generate the subject in new poses and articulations, with previously unseen scene structure and realistic integration of the subject in the scene (e.g. contact, shadows, reflections). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_31",
"text": " Given a prompt “a painting of a (V) (class noun) in the style of (famous painter)” or “a statue of a (V) (class noun) in the style of (famous sculptor)” we are able to generate artistic renditions of our subject. Unlike style transfer, where the source structure is preserved and only the style is transferred, we are able to generate meaningful, novel variations depending on the artistic style, while preserving subject identity. E.g, as shown in Figure 8, “Michelangelo”, we generated a pose that is novel and not seen in the input images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_32",
"text": " We are able to render the subject under novel viewpoints. In Figure 8, we generate new images of the input cat (with consistent complex fur patterns) under new viewpoints. We highlight that the model has not seen this specific cat from behind, below, or above - yet it is able to extrapolate knowledge from the class prior to generate these novel views given only 4 frontal images of the subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_33",
"text": " We are able to modify subject properties. For example, we show crosses between a specific Chow Chow dog and different animal species in the bottom row of Figure 8. We prompt the model with sentences of the following structure: “a cross of a (V) dog and a (target species)”. In particular, we can see in this example that the identity of the dog is well preserved even when the species changes - the face of the dog has certain unique features that are well preserved and melded with the target species. Other property modifications are possible, such as material modification (e.g. “a transparent (V) teapot” in Figure 7). Some are harder than others and depend on the prior of the base generation model. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_34",
"text": " We illustrate some failure models of our method in Figure 9. The first is related to not being able to accurately generate the prompted context. Possible reasons are a weak prior for these contexts, or difficulty in generating both the subject and specified concept together due to low probability of co-occurrence in the training set. The second is context-appearance entanglement, where the appearance of the subject changes due to the prompted context, exemplified in Figure 9 with color changes of the backpack. Third, we also observe overfitting to the real images that happen when the prompt is similar to the original setting in which the subject was seen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_35",
"text": " Other limitations are that some subjects are easier to learn than others (e.g. dogs and cats). Occasionally, with subjects that are rarer, the model is unable to support as many subject variations. Finally, there is also variability in the fidelity of the subject and some generated images might contain hallucinated subject features, depending on the strength of the model prior, and the complexity of the semantic modification. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_36",
"text": " We presented an approach for synthesizing novel renditions of a subject using a few images of the subject and the guidance of a text prompt. Our key idea is to embed a given subject instance in the output domain of a text-to-image diffusion model by binding the subject to a unique identifier. Remarkably - this fine-tuning process can work given only 3-5 subject images, making the technique particularly accessible. We demonstrated a variety of applications with animals and objects in generated photorealistic scenes, in most cases indistinguishable from real images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_37",
"text": " We thank Rinon Gal, Adi Zicher, Ron Mokady, Bill Freeman, Dilip Krishnan, Huiwen Chang and Daniel Cohen-Or for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained Imagen models. Finally, a special thanks to David Salesin for his feedback, advice and for his support for the project. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
}
] |
What does using CLIP-based codes mean, and why is this a limitation? Why is it not applicable to other methods? What do they mean by “other methods” here?
|
Neither the definition of CLIP-based codes nor its limitations can be found in this paper [15].
|
[
15
] |
[
{
"id": "2208.01618_all_0",
"text": " In a famous scene from the motion picture “Titanic”, Rose makes a request of Jack: “…draw me like one of your French girls”. Albeit simple, this request contains a wealth of information. It indicates that Jack should produce a drawing; It suggests that its style and composition should match those of a subset of Jack’s prior work; Finally, through a single word, “me”, Rose indicates that this drawing should portray a specific, unique subject: Rose herself. In making her request, Rose relies on Jack’s ability to reason over these concepts — both broad and specific — and bring them to life in a new creation. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_1",
"text": " Recently, large-scale text-to-image models (Rombach et al., 2021; Ramesh et al., 2021, 2022; Nichol et al., 2021; Yu et al., 2022; Saharia et al., 2022) have demonstrated an unprecedented capability to reason over natural language descriptions. They allow users to synthesize novel scenes with unseen compositions and produce vivid pictures in a myriad of styles. These tools have been used for artistic creation, as sources of inspiration, and even to design new, physical products (Yacoubian, 2022). Their use, however, is constrained by the user’s ability to describe the desired target through text. Turning back to Rose, one could then ask: How might she frame her request if she were to approach one of these models? How could we, as users, ask text-to-image models to craft a novel scene containing a cherished childhood toy? Or to pull our child’s drawing from its place on the fridge, and turn it into an artistic showpiece? ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_2",
"text": " Introducing new concepts into large scale models is often difficult. Re-training a model with an expanded dataset for each new concept is prohibitively expensive, and fine-tuning on few examples typically leads to catastrophic forgetting (Ding et al., 2022; Li et al., 2022). More measured approaches freeze the model and train transformation modules to adapt its output when faced with new concepts (Zhou et al., 2021; Gao et al., 2021; Skantze & Willemsen, 2022). However, these approaches are still prone to forgetting prior knowledge, or face difficulties in accessing it concurrently with newly learned concepts (Kumar et al., 2022; Cohen et al., 2022). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_3",
"text": " We propose to overcome these challenges by finding new words in the textual embedding space of pre-trained text-to-image models. We consider the first stage of the text encoding process (Figure 2). Here, an input string is first converted to a set of tokens. Each token is then replaced with its own embedding vector, and these vectors are fed through the downstream model. Our goal is to find new embedding vectors that represent new, specific concepts. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_4",
"text": " We represent a new embedding vector with a new pseudo-word (Rathvon, 2004) which we denote by S∗subscript𝑆S_{*}. This pseudo-word is then treated like any other word, and can be used to compose novel textual queries for the generative models. One can therefore ask for “a photograph of S∗subscript𝑆S_{*} on the beach”, “an oil painting of a S∗subscript𝑆S_{*} hanging on the wall”, or even compose two concepts, such as “a drawing of S∗1subscriptsuperscript𝑆1S^{1}_{*} in the style of S∗2subscriptsuperscript𝑆2S^{2}_{*}”. Importantly, this process leaves the generative model untouched. In doing so, we retain the rich textual understanding and generalization capabilities that are typically lost when fine-tuning vision and language models on new tasks. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_5",
"text": " To find these pseudo-words, we frame the task as one of inversion. We are given a fixed, pre-trained text-to-image model and a small (3-5) image set depicting the concept. We aim to find a single word embedding, such that sentences of the form “A photo of S∗subscript𝑆S_{*}” will lead to the reconstruction of images from our small set. This embedding is found through an optimization process, which we refer to as “Textual Inversion”. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_6",
"text": " We further investigate a series of extensions based on tools typically used in Generative Adversarial Network (GAN) inversion. Our analysis reveals that, while some core principles remain, applying the prior art in a naïve way is either unhelpful or actively harmful. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_7",
"text": " We demonstrate the effectiveness of our approach over a wide range of concepts and prompts, showing that it can inject unique objects into new scenes, transform them across different styles, transfer poses, diminish biases, and even imagine new products. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_8",
"text": " In summary, our contributions are as follows: • We introduce the task of personalized text-to-image generation, where we synthesize novel scenes of user-provided concepts guided by natural language instruction. • We present the idea of “Textual Inversions” in the context of generative models. Here the goal is to find new pseudo-words in the embedding space of a text encoder that can capture both high-level semantics and fine visual details. • We analyze the embedding space in light of GAN-inspired inversion techniques and demonstrate that it also exhibits a tradeoff between distortion and editability. We show that our approach resides on an appealing point on the tradeoff curve. • We evaluate our method against images generated using user-provided captions of the concepts and demonstrate that our embeddings provide higher visual fidelity, and also enable more robust editing. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_9",
"text": " Text-guided image synthesis has been widely studied in the context of GANs (Goodfellow et al., 2014). Typically, a conditional model is trained to reproduce samples from given paired image-caption datasets (Zhu et al., 2019; Tao et al., 2020), leveraging attention mechanisms (Xu et al., 2018) or cross-modal contrastive approaches (Zhang et al., 2021; Ye et al., 2021). More recently, impressive visual results were achieved by leveraging large scale auto-regressive (Ramesh et al., 2021; Yu et al., 2022) or diffusion models (Ramesh et al., 2022; Saharia et al., 2022; Nichol et al., 2021; Rombach et al., 2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_10",
"text": " Rather than training conditional models, several approaches employ test-time optimization to explore the latent spaces of a pre-trained generator (Crowson et al., 2022; Murdock, 2021; Crowson, 2021). These models typically guide the optimization to minimize a text-to-image similarity score derived from an auxiliary model such as CLIP (Radford et al., 2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_11",
"text": " Moving beyond pure image generation, a large body of work explores the use of text-based interfaces for image editing (Patashnik et al., 2021; Abdal et al., 2021; Avrahami et al., 2022b), generator domain adaptation (Gal et al., 2021; Kim et al., 2022), video manipulation (Tzaban et al., 2022; Bar-Tal et al., 2022), motion synthesis (Tevet et al., 2022; Petrovich et al., 2022), style transfer (Kwon & Ye, 2021; Liu et al., 2022) and even texture synthesis for 3D objects (Michel et al., 2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_12",
"text": " Our approach builds on the open-ended, conditional synthesis models. Rather than training a new model from scratch, we show that we can expand a frozen model’s vocabulary and introduce new pseudo-words that describe specific concepts. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_13",
"text": " Manipulating images with generative networks often requires one to find a corresponding latent representation of the given image, a process referred to as inversion (Zhu et al., 2016; Xia et al., 2021). In the GAN literature, this inversion is done through either an optimization-based technique (Abdal et al., 2019, 2020; Zhu et al., 2020b; Gu et al., 2020) or by using an encoder (Richardson et al., 2020; Zhu et al., 2020a; Pidhorskyi et al., 2020; Tov et al., 2021). Optimization methods directly optimize a latent vector, such that feeding it through the GAN will re-create a target image. Encoders leverage a large image set to train a network that maps images to their latent representations. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_14",
"text": " In our work, we follow the optimization approach, as it can better adapt to unseen concepts. Encoders face harsher generalization requirements, and would likely need to be trained on web-scale data to offer the same freedom. We further analyze our embedding space in light of the GAN-inversion literature, outlining the core principles that remain and those that do not. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_15",
"text": " In the realm of diffusion models, inversion can be performed naïvely by adding noise to an image and then de-noising it through the network. However, this process tends to change the image content significantly. Choi et al. (2021) improve inversion by conditioning the denoising process on noised low-pass filter data from the target image. (Dhariwal & Nichol, 2021) demonstrate that the DDIM (Song et al., 2020) sampling process can be inverted in a closed-form manner, extracting a latent noise map that will produce a given real image. In DALL-E 2 (Ramesh et al., 2022), they build on this method and demonstrate that it can be used to induce changes in the image, such as cross-image interpolations or semantic editing. The later relies on their use of CLIP-based codes to condition the model, and may not be applicable to other methods. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_16",
"text": " Whereas the above works invert a given image into the model’s latent space, we invert a user-provided concept. Moreover, we represent this concept as a new pseudo-word in the model’s vocabulary, allowing for more general and intuitive editing. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_17",
"text": " Adapting models to a specific individual or object is a long-standing goal in machine learning research. Personalized models are typically found in the realms of recommendation systems (Benhamdi et al., 2017; Amat et al., 2018; Martinez et al., 2009; Cho et al., 2002) or in federated learning (Mansour et al., 2020; Jiang et al., 2019; Fallah et al., 2020; Shamsian et al., 2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_18",
"text": " More recently, personalization efforts can also be found in vision and graphics. There it is typical to apply a delicate tuning of a generative model to better reconstruct specific faces or scenes (Bau et al., 2019; Roich et al., 2021; Alaluf et al., 2021; Dinh et al., 2022; Cao et al., 2022; Nitzan et al., 2022). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_19",
"text": " Most relevant to our work is PALAVRA (Cohen et al., 2022), which leverages a pre-trained CLIP model for retrieval and segmentation of personalized objects. PALAVRA identifies pseudo-words in the textual embedding space of CLIP that refer to a specific object. These are then used to describe images for retrieval, or in order to segment specific objects in a scene. However, their task and losses are both discriminative, aiming to separate the object from other candidates. As we later show (Figure 5), their approach fails to capture the details required for plausible reconstructions or synthesis in new scenes. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_20",
"text": " Our goal is to enable language-guided generation of new, user-specified concepts. To do so, we aim to encode these concepts into an intermediate representation of a pre-trained text-to-image model. Ideally, this should be done in a manner that would allow us to leverage the rich semantic and visual prior represented by such a model, and use it to guide intuitive visual transformations of the concepts. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_21",
"text": " It is natural to search for candidates for such a representation in the word-embedding stage of the text encoders typically employed by text-to-image models. There, the discrete input text is first converted into a continuous vector representation that is amenable to direct optimization. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_22",
"text": " Prior work has shown that this embedding space is expressive enough to capture basic image semantics (Cohen et al., 2022; Tsimpoukelli et al., 2021). However, these approaches leveraged contrastive or language-completion objectives, neither of which require an in-depth visual understanding of the image. As we demonstrate in Section 4, those methods fail to accurately capture the appearance of the concept, and attempting to employ them for synthesis leads to considerable visual corruption. Our goal is to find pseudo-words that can guide generation, which is a visual task. As such, we propose to find them through a visual reconstruction objective. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_23",
"text": " Below, we outline the core details of applying our approach to a specific class of generative models — Latent Diffusion Models (Rombach et al., 2021). In Section 5, we then analyze a set of extensions to this approach, motivated by GAN-inversion literature. However, as we later show, these additional complexities fail to improve upon the initial representation, presented here. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_24",
"text": " We implement our method over Latent Diffusion Models (LDMs) (Rombach et al., 2021), a recently introduced class of Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020) that operate in the latent space of an autoencoder. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_25",
"text": " LDMs consist of two core components. First, an autoencoder is pre-trained on a large collection of images. An encoder ℰℰ\\mathcal{E} learns to map images x∈𝒟x𝑥subscript𝒟𝑥x\\in\\mathcal{D}_{x} into a spatial latent code z=ℰ(x)𝑧ℰ𝑥z=\\mathcal{E}(x), regularized through either a KL-divergence loss or through vector quantization (Van Den Oord et al., 2017; Agustsson et al., 2017). The decoder D𝐷D learns to map such latents back to images, such that D(ℰ(x))≈x𝐷ℰ𝑥𝑥D\\left(\\mathcal{E}(x)\\right)\\approx x. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_26",
"text": " The second component, a diffusion model, is trained to produce codes within the learned latent space. This diffusion model can be conditioned on class labels, segmentation masks, or even on the output of a jointly trained text-embedding model. Let cθ(y)subscript𝑐𝜃𝑦c_{\\theta}(y) be a model that maps a conditioning input y𝑦y into a conditioning vector. The LDM loss is then given by: LLDM:=𝔼z∼ℰ(x),y,ϵ∼𝒩(0,1),t(‖ϵ−ϵθ(zt,t,cθ(y))‖22),assignsubscript𝐿𝐿𝐷𝑀subscript𝔼formulae-sequencesimilar-to𝑧ℰ𝑥𝑦similar-toitalic-ϵ𝒩01𝑡delimited-()superscriptsubscriptnormitalic-ϵsubscriptitalic-ϵ𝜃subscript𝑧𝑡𝑡subscript𝑐𝜃𝑦22L_{LDM}:=\\mathbb{E}_{z\\sim\\mathcal{E}(x),y,\\epsilon\\sim\\mathcal{N}(0,1),t}\\Big{(}\\|\\epsilon-\\epsilon_{\\theta}(z_{t},t,c_{\\theta}(y))\\|_{2}^{2}\\Big{)}\\,, (1) where t𝑡t is the time step, ztsubscript𝑧𝑡z_{t} is the latent noised to time t𝑡t, ϵitalic-ϵ\\epsilon is the unscaled noise sample, and ϵθsubscriptitalic-ϵ𝜃\\epsilon_{\\theta} is the denoising network. Intuitively, the objective here is to correctly remove the noise added to a latent representation of an image. While training, cθsubscript𝑐𝜃c_{\\theta} and ϵθsubscriptitalic-ϵ𝜃\\epsilon_{\\theta} are jointly optimized to minimize the LDM loss. At inference time, a random noise tensor is sampled and iteratively denoised to produce a new image latent, z0subscript𝑧0z_{0}. Finally, this latent code is transformed into an image through the pre-trained decoder x′=D(z0)superscript𝑥′𝐷subscript𝑧0x^{\\prime}=D(z_{0}). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
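To unpack the LDM objective above, here is a compact sketch of one training-loss evaluation: encode the image, noise the latent to a random step with a standard DDPM-style cumulative schedule (an assumption made for the example), and regress the added noise given the conditioning. It is not the authors' code.

```python
import torch

def ldm_loss(eps_model, cond_model, encoder, x, y, alphas_cumprod):
    """epsilon-prediction loss on latents z = E(x), conditioned on c_theta(y)."""
    z = encoder(x)
    b = z.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    eps = torch.randn_like(z)
    z_t = a_bar.sqrt() * z + (1.0 - a_bar).sqrt() * eps   # DDPM-style forward noising
    eps_hat = eps_model(z_t, t, cond_model(y))
    return ((eps_hat - eps) ** 2).flatten(1).sum(dim=1).mean()
```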
{
"id": "2208.01618_all_27",
"text": " We employ the publicly available 1.4 billion parameter text-to-image model of Rombach et al. (2021), which was pre-trained on the LAION-400M dataset (Schuhmann et al., 2021). Here, cθsubscript𝑐𝜃c_{\\theta} is realized through a BERT (Devlin et al., 2018) text encoder, with y𝑦y being a text prompt. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_28",
"text": " We next review the early stages of such a text encoder, and our choice of inversion space. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_29",
"text": " Typical text encoder models, such as BERT, begin with a text processing step (Figure 2, left). First, each word or sub-word in an input string is converted to a token, which is an index in some pre-defined dictionary. Each token is then linked to a unique embedding vector that can be retrieved through an index-based lookup. These embedding vectors are typically learned as part of the text encoder cθsubscript𝑐𝜃c_{\\theta}. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_30",
"text": " In our work, we choose this embedding space as the target for inversion. Specifically, we designate a placeholder string, S∗subscript𝑆S_{*}, to represent the new concept we wish to learn. We intervene in the embedding process and replace the vector associated with the tokenized string with a new, learned embedding v∗subscript𝑣v_{*}, in essence “injecting” the concept into our vocabulary. In doing so, we can then compose new sentences containing the concept, just as we would with any other word. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_31",
"text": " To find these new embeddings, we use a small set of images (typically 333-555), which depicts our target concept across multiple settings such as varied backgrounds or poses. We find v∗subscript𝑣v_{*} through direct optimization, by minimizing the LDM loss of Equation 1 over images sampled from the small set. To condition the generation, we randomly sample neutral context texts, derived from the CLIP ImageNet templates (Radford et al., 2021). These contain prompts of the form “A photo of S∗subscript𝑆S_{*}”, “A rendition of S∗subscript𝑆S_{*}”, etc. The full list of templates is provided in the supplementary materials. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_32",
"text": " Our optimization goal can then be defined as: v∗=argminv𝔼z∼ℰ(x),y,ϵ∼𝒩(0,1),t(‖ϵ−ϵθ(zt,t,cθ(y))‖22),subscript𝑣subscriptargmin𝑣subscript𝔼formulae-sequencesimilar-to𝑧ℰ𝑥𝑦similar-toitalic-ϵ𝒩01𝑡delimited-()superscriptsubscriptnormitalic-ϵsubscriptitalic-ϵ𝜃subscript𝑧𝑡𝑡subscript𝑐𝜃𝑦22v_{*}=\\operatorname*{arg\\,min}_{v}\\mathbb{E}_{z\\sim\\mathcal{E}(x),y,\\epsilon\\sim\\mathcal{N}(0,1),t}\\Big{(}\\|\\epsilon-\\epsilon_{\\theta}(z_{t},t,c_{\\theta}(y))\\|_{2}^{2}\\Big{)}\\,, (2) and is realized by re-using the same training scheme as the original LDM model, while keeping both cθsubscript𝑐𝜃c_{\\theta} and ϵθsubscriptitalic-ϵ𝜃\\epsilon_{\\theta} fixed. Notably, this is a reconstruction task. As such, we expect it to motivate the learned embedding to capture fine visual details unique to the concept. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
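A hedged sketch of the optimization in Eq. (2): only the single embedding v* is updated, while the text encoder and denoiser stay frozen. ldm_loss is an assumed helper that evaluates the frozen LDM loss with v* substituted for the placeholder token; it is not part of the released implementation.

import torch

def invert_concept(ldm_loss, embedding_table, init_token_id, images, templates,
                   steps=5000, lr=5e-3):
    # Initialize v* from the embedding of a coarse descriptor (e.g. "cat").
    v_star = embedding_table[init_token_id].clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([v_star], lr=lr)
    for step in range(steps):
        image = images[step % len(images)]
        prompt = templates[step % len(templates)]  # e.g. "A photo of S*"
        # Everything except v* is frozen; gradients flow only into v*.
        loss = ldm_loss(image, prompt, v_star)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return v_star.detach()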
{
"id": "2208.01618_all_33",
"text": " Unless otherwise noted, we retain the original hyper-parameter choices of LDM (Rombach et al., 2021). Word embeddings were initialized with the embeddings of a single-word coarse descriptor of the object (e.g. “sculpture” and “cat” for the two concepts in Figure 1). Our experiments were conducted using 2×2\\timesV100 GPUs with a batch size of 4. The base learning rate was set to 0.0050.0050.005. Following LDM, we further scale the base learning rate by the number of GPUs and the batch size, for an effective rate of 0.040.040.04. All results were produced using 5,00050005,000 optimization steps. We find that these parameters work well for most cases. However, we note that for some concepts, better results can be achieved with fewer steps or with an increased learning rate. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
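The learning-rate scaling described above is just a product over the stated settings; a quick arithmetic check:

base_lr = 0.005
num_gpus = 2
batch_size = 4
effective_lr = base_lr * num_gpus * batch_size
print(effective_lr)  # 0.04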
{
"id": "2208.01618_all_34",
"text": " In the following section, we demonstrate a range of applications enabled through Textual Inversions, and provide visual comparisons to the state-of-the-art and human-captioning baselines. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_35",
"text": " We begin by demonstrating our ability to capture and recreate variations of an object using a single pseudo-word. In Figure 3 we compare our method to two baselines: LDM guided by a human caption and DALLE-2 guided by either a human caption or an image prompt. Captions were collected using Mechanical Turk. Annotators were provided with four images of a concept and asked to describe it in a manner that could allow an artist to recreate it. We asked for both a short (≤12absent12\\leq 12 words) and a long (≤30absent30\\leq 30 words) caption. In total, we collected 101010 captions per concept — five short and five long. Figure 3 shows multiple results generated with a randomly chosen caption for each setup. Additional large-scale galleries showing our uncurated reconstructions are provided in the supplementary. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_36",
"text": " As our results demonstrate, our method better captures the unique details of the concept. Human captioning typically captures the most prominent features of an object, but provides insufficient detail to reconstruct finer features like color patterns (e.g. of the teapot). In some cases (e.g. the skull mug) the object itself may be exceedingly difficult to describe through natural language. When provided with an image, DALLE-2 is able to recreate more appealing samples, particularly for well-known objects with limited detail (Aladdin’s lamp). However, it still struggles with unique details of personalized objects that the image encoder (CLIP) is unlikely to have seen (mug, teapot). In contrast, our method can successfully capture these finer details, and it does so using only a single word embedding. However, note that while our creations are more similar to the source objects, they are still variations that may differ from the source. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_37",
"text": " In Figures 4 and 1 we show our ability to compose novel scenes by incorporating the learned pseudo-words into new conditioning texts. For each concept, we show exemplars from our training set, along with an array of generated images and their conditioning texts. As our results demonstrate, the frozen text-to-image model is able to jointly reason over both the new concepts and its large body of prior knowledge, bringing them together in a new creation. Importantly, despite the fact that our training goal was generative in nature, our pseudo-words still encapsulate semantic concepts that the model can then leverage. For example, observe the bowl’s ability (row four) to contain other objects like food, or the ability to preserve the Furby’s bird-like head and crown while adapting his palette to better match a prompt (album cover, row three). Additional concepts and texts are provided in the supplementary materials. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_38",
"text": " To better evaluate our ability to compose objects into new scenes, we compare our method to several personalization baselines (Figure 5). In particular, we consider the recent PALAVRA (Cohen et al., 2022), which is most similar to our own work. PALAVRA encodes object sets into the textual embedding space of CLIP, using a mix of contrastive learning and cyclic consistency goals. We find a new pseudo-word using their approach and use it to synthesize new images by leveraging VQGAN-CLIP (Crowson et al., 2022) and CLIP-Guided Diffusion (Crowson, 2021). As a second baseline, we apply the CLIP-guided models of Crowson et al. while trying to jointly minimize the CLIP-based distances to both the training set images and to the target text (VQGAN-CLIP) or by initializing the optimization with an input image from our set (Guided Diffusion). For the latter, we chose image-based initializations as we observed that they outperform the use of images in the optimization loss. Similar observations were reported in Disco Diffusion (Letts et al., 2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_39",
"text": " The images produced by PALAVRA (rows 222, 333) typically contain elements from the target prompt (e.g. a beach, a moon) but they fail to accurately capture the concept and display considerable visual corruption. This is unsurprising, as PALAVRA was trained with a discriminative goal. In their case, the model needs to only encode enough information to distinguish between two typical concepts (e.g. it may be sufficient to remember the mug was black-and-white with text-like symbols). Moreover, their word-discovery process had no need to remain in regions of the embedding space that contain embedding vectors that can be mapped to outputs on the natural image manifold. In the case of the text-and-image guided synthesis methods (rows 444, 555), results appear more natural and closer to the source image, but they fail to generalize to new texts. Moreover, as our method builds upon pre-trained, large-scale text-to-image synthesis models, we can optimize a single pseudo-word and re-use it for a multitude of new generations. The baseline models, meanwhile, use CLIP for test-time optimization and thus require expensive optimization for every new creation. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_40",
"text": " A typical use-case for text-guided synthesis is in artistic circles, where users aim to draw upon the unique style of a specific artist and apply it to new creations. Here, we show that our model can also find pseudo-words representing a specific, unknown style. To find such pseudo-words, we simply provide the model with a small set of images with a shared style, and replace the training texts with prompts of the form: “A painting in the style of S∗subscript𝑆S_{*}”. Results are shown in Figure 6. They serve as further demonstration that our ability to capture concepts extends beyond simple object reconstructions and into more abstract ideas. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_41",
"text": " Note that this differs from traditional style transfer, as we do not necessarily wish to maintain the content of some input image. Instead, we offer the network the freedom to decide how to depict the subject, and merely ask for an appropriate style. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_42",
"text": " In Figure 7 we demonstrate compositional synthesis, where the guiding text contains multiple learned concepts. We observe that the model can concurrently reason over multiple novel pseudo-words at the same time. However, it struggles with relations between them (e.g. it fails to place two concepts side-by-side). We hypothesize that this limitation arises because our training considers only single concept scenes, where the concept is at the core of the image. Training on multi-object scenes may alleviate this shortcoming. However, we leave such investigation to future work. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_43",
"text": " A common limitation of text-to-image models is that they inherit the biases found in the internet-scale data used to train them. These biases then manifest in the generated samples. For example, the DALLE-2 system card (Mishkin et al., 2022) reports that their baseline model tends to produce images of people that are white-passing and male-passing when provided with the prompt “A CEO”. Similarly, results for “wedding”, tend to assume Western wedding traditions, and default to heterosexual couples. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_44",
"text": " Here, we demonstrate that we can utilize a small, curated dataset in order to learn a new “fairer” word for a biased concept, which can then be used in place of the original to drive a more inclusive generation. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_45",
"text": " Specifically, in Figure 8 we highlight the bias encoded in the word “Doctor”, and show that this bias can be reduced (i.e. we increase perceived gender and ethnic diversity) by learning a new embedding from a small, more diverse set. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_46",
"text": " Finally, we demonstrate that our pseudo-words can be used in downstream models that build on the same initial LDM model. Specifically, we consider the recent Blended Latent Diffusion (Avrahami et al., 2022a) which enables localized text-based editing of images via a mask-based blending process in the latent space of an LDM. In Figure 9 we demonstrate that this localized synthesis process can also be conditioned on our learned pseudo-words, without requiring any additional modifications of the original model. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_47",
"text": " Unless otherwise noted, results in this section are partially curated. For each prompt, we generated 161616 candidates (or six for DALLE-2) and manually selected the best result. We note that similar curation processes with larger batches are typically employed in text-conditioned generation works (Avrahami et al., 2022b; Ramesh et al., 2021; Yu et al., 2022), and that one can automate this selection process by using CLIP to rank images. In the supplementary materials, we provide large-scale, uncurated galleries of generated results, including failure cases. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_48",
"text": " Inversion into an uncharted latent space provides us with a wide range of possible design choices. Here, we examine these choices in light of the GAN inversion literature and discover that many core premises (such as a distortion-editability tradeoff (Tov et al., 2021; Zhu et al., 2020b)) also exist in the textual embedding space. However, our analysis reveals that many of the solutions typically used in GAN inversion fail to generalize to this space, and are often unhelpful or actively harmful. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_49",
"text": " To analyze the quality of latent space embeddings, we consider two fronts: reconstruction and editability. First, we wish to gauge our ability to replicate the target concept. As our method produces variations on the concept and not a specific image, we measure similarity by considering semantic CLIP-space distances. Specifically, for each concept, we generate a 646464 of images using the prompt: “A photo of S∗subscript𝑆S_{*}”. Our reconstruction score is then the average pair-wise CLIP-space cosine-similarity between the generated images and the images of the concept-specific training set. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
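A possible sketch of the reconstruction score defined above (average pair-wise CLIP-space cosine similarity between generated and training images); clip_image_embed is an assumed helper that returns CLIP image embeddings and is not specified by the paper.

import torch
import torch.nn.functional as F

def reconstruction_score(clip_image_embed, generated_images, training_images):
    # Normalize the embeddings so that dot products equal cosine similarities.
    g = F.normalize(clip_image_embed(generated_images), dim=-1)
    t = F.normalize(clip_image_embed(training_images), dim=-1)
    # Mean cosine similarity over all generated/training pairs.
    return (g @ t.T).mean().item()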
{
"id": "2208.01618_all_50",
"text": " Second, we want to evaluate our ability to modify the concepts using textual prompts. To this end, we produce a set of images using prompts of varying difficulty and settings. These range from background modifications (“A photo of S∗subscript𝑆S_{*} on the moon”), to style changes (“An oil painting of S∗subscript𝑆S_{*}”), and a compositional prompt (“Elmo holding a S∗subscript𝑆S_{*}”). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_51",
"text": " For each prompt, we synthesize 646464 samples using 505050 DDIM steps, calculate the average CLIP-space embedding of the samples, and compute their cosine similarity with the CLIP-space embedding of the textual prompts, where we omit the placeholder S∗subscript𝑆S_{*} (i.e. “A photo of on the moon”). Here, a higher score indicates better editing capability and more faithfulness to the prompt itself. Note that our method does not involve the direct optimization of the CLIP-based objective score and, as such, is not sensitive to the adversarial scoring flaws outlined by Nichol et al. (2021). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
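Similarly, the text-faithfulness (editability) score described above could be sketched as follows, again with assumed CLIP helper functions:

import torch
import torch.nn.functional as F

def editability_score(clip_image_embed, clip_text_embed, samples, prompt_without_placeholder):
    img = F.normalize(clip_image_embed(samples), dim=-1)                     # (N, d)
    txt = F.normalize(clip_text_embed(prompt_without_placeholder), dim=-1)   # (1, d)
    # Cosine similarity between the average sample embedding and the prompt embedding.
    avg = F.normalize(img.mean(dim=0, keepdim=True), dim=-1)
    return (avg @ txt.T).item()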
{
"id": "2208.01618_all_52",
"text": " We evaluate the embedding space using a set of experimental setups inspired by GAN inversion: ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_53",
"text": " Following Abdal et al. (2019), we consider an extended, multi-vector latent space. In this space, S∗subscript𝑆S_{*} is embedded into multiple learned embeddings, an approach that is equivalent to describing the concept through multiple learned pseudo-words. We consider an extension to two and three pseudo-words (denoted 2−word2𝑤𝑜𝑟𝑑2-word and 3−word3𝑤𝑜𝑟𝑑3-word, respectively). This setup aims to alleviate the potential bottleneck of a single embedding vector to enable more accurate reconstructions. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_54",
"text": " We follow Tov et al. (2021) and consider a progressive multi-vector setup. Here, we begin training with a single embedding vector, introduce a second vector following 2,00020002,000 training steps, and a third vector after 4,00040004,000 steps. In this scenario, we expect the network to focus on the core details first, and then leverage the additional pseudo-words to capture finer details. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_55",
"text": " Tov et al. (2021) observed that latent codes in the space of a GAN have increased editability when they lie closer to the code distribution which was observed during training. Here, we investigate a similar scenario by introducing a regularization term that aims to keep the learned embedding close to existing words. In practice, we minimize the L2 distance of the learned embedding to the embedding of a coarse descriptor of the object (e.g. “sculpture” and “cat” for the images in Figure 1). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_56",
"text": " Moving beyond GAN-based approaches, we investigate a novel scheme where we introduce unique, per-image tokens into our inversion approach. Let {xi}i=1nsuperscriptsubscriptsubscript𝑥𝑖𝑖1𝑛\\{x_{i}\\}_{i=1}^{n} be the set of input images. Rather than optimizing a single word vector shared across all images, we introduce both a universal placeholder, S∗subscript𝑆S_{*}, and an additional placeholder unique to each image, {Si}i=1nsuperscriptsubscriptsubscript𝑆𝑖𝑖1𝑛\\{S_{i}\\}_{i=1}^{n}, associated with a unique embedding visubscript𝑣𝑖v_{i}. We then compose sentences of the form “A photo of S∗subscript𝑆S_{*} with Sisubscript𝑆𝑖S_{i}”, where every image is matched to sentences containing its own, unique string. We jointly optimize over both S∗subscript𝑆S_{*} and {Si}i=1nsuperscriptsubscriptsubscript𝑆𝑖𝑖1𝑛\\{S_{i}\\}_{i=1}^{n}, using Equation 2. The intuition here is that the model should prefer to encode the shared information (i.e. the concept) in the shared code S∗subscript𝑆S_{*} while relegating per-image details such as the background to Sisubscript𝑆𝑖S_{i}. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_57",
"text": " In addition to the learned-embedding setups, we compare to human-level performance using the captions outlined in Section 4.1. Here, we simply replace the placeholder strings S∗subscript𝑆S_{*} with the human captions, using both the short and long-caption setups. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_58",
"text": " To provide intuition for the scale of the results, we add two reference baselines. First, we consider the expected behavior from a model that always produces copies of the training set, regardless of the prompt. For that, we simply use the training set itself as the “generated sample”. Second, we consider a model that always aligns with the text prompt but ignores the personalized concept. We do so by synthesizing images using the evaluation prompts but without the pseudo-word. We denote these setups as “Image Only” and “Prompt Only”, respectively. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_59",
"text": " Finally, we consider our own setup, as outlined in Section 3. We further evaluate our model with an increased learning rate (2e−22𝑒22e-2, “High-LR”) and a decreased learning rate (1e−41𝑒41e-4, “Low-LR”). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_60",
"text": " In the supplementary, we consider two additional setups for inversion: a pivotal tuning approach (Roich et al., 2021; Bau et al., 2020), where the model itself is optimized to improve reconstruction, and DALLE-2 (Ramesh et al., 2022)’s bipartite inversion process. We further analyze the effect of the image-set size on reconstruction and editability. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_61",
"text": " Our evaluation results are summarized in Figure 10(a). We highlight four observations of particular interest: First, the semantic reconstruction quality of our method and many of the baselines is comparable to simply sampling random images from the training set. Second, the single-word method achieves comparable reconstruction quality, and considerably improved editability over all multi-word baselines. These points outline the impressive flexibility of the textual embedding space, showing that it can serve to capture new concepts with a high degree of accuracy while using only a single pseudo-word. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_62",
"text": " Third, we observe that our baselines outline a distortion-editability trade-off curve, where embeddings that lie closer to the true word distribution (e.g. due to regularization, fewer pseudo-words, or a lower learning rate) can be more easily modified, but fail to capture the details of the target. In contrast, deviating far from the word distribution enables improved reconstruction at the cost of severely diminished editing capabilities. Notably, our single-embedding model can be moved along this curve by simply changing the learning rate, offering a user a degree of control over this trade-off. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_63",
"text": " As a fourth observation, we note that the use of human descriptions for the concepts not only fails to capture their likeness, but also leads to diminished editability. We hypothesize that this is tied to the selective-similarity property outlined in Paiss et al. (2022), where vision-and-language models tend to focus on a subset of the semantically meaningful tokens. By using long captions, we increase the chance of the model ignoring our desired setting, focusing only on the object description itself. Our model, meanwhile, uses only a single token and thus minimizes this risk. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_64",
"text": " Finally, we note that while our reconstruction scores are on par with those of randomly sampled, real images, these results should be taken with a grain of salt. Our metrics compare semantic similarity using CLIP, which is less sensitive to shape-preservation. On this front, there remains more to be done. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_65",
"text": " We further evaluate our models using a user study. Here, we created two questionnaires. In the first, users were provided with four images from a concept’s training set, and asked to rank the results produced by five models according to their similarity to these images. In the second questionnaire, users were provided with a text describing an image context (“A photo on the beach”) and asked to rank the results produced by the same models according to their similarity to the text. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_66",
"text": " We used the same target concepts and prompts as the CLIP-based evaluation and collected a total of 600600600 responses to each questionnaire, for a total of 1,20012001,200 responses. Results are shown in Figure 10(b). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_67",
"text": " The user-study results align with the CLIP-based metrics and demonstrate a similar reconstruction-editability tradeoff. Moreover, they outline the same limitations of human-based captioning when attempting to reproduce a concept, as well as when editing it. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_68",
"text": " While our method offers increased freedom, it may still struggle with learning precise shapes, instead incorporating the “semantic” essence of a concept. For artistic creations, this is often enough. In the future, we hope to achieve better control over the accuracy of the reconstructed concepts, enabling users to leverage our method for tasks that require greater precision. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_69",
"text": " Another limitation of our approach is in the lengthy optimization times. Using our setup, learning a single concept requires roughly two hours. These times could likely be shortened by training an encoder to directly map a set of images to their textual embedding. We aim to explore this line of work in the future. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_70",
"text": " Text-to-image models can be used to generate misleading content and promote disinformation. Personalized creation could allow a user to forge more convincing images of non-public individuals. However, our model does not currently preserve identity to the extent where this is a concern. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_71",
"text": " These models are further susceptible to the biases found in the training data. Examples include gender biases when portraying “doctors” and “nurses”, racial biases when requesting images of scientists, and more subtle biases such as an over-representation of heterosexual couples and western traditions when prompting for a “wedding” (Mishkin et al., 2022). As we build on such models, our own work may similarly exhibit biases. However, as demonstrated in Figure 8, our ability to more precisely describe specific concepts can also serve as a means for reducing these biases. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_72",
"text": " Finally, the ability to learn artistic styles may be misused for copyright infringement. Rather than paying an artist for their work, a user could train on their images without consent, and produce images in a similar style. While generated artwork is still easy to identify, in the future such infringement could be difficult to detect or legally pursue. However, we hope that such shortcomings are offset by the new opportunities that these tools could offer an artist, such as the ability to license out their unique style, or the ability to quickly create early prototypes for new work. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_73",
"text": " We introduced the task of personalized, language-guided generation, where a text-to-image model is leveraged to create images of specific concepts in novel settings and scenes. Our approach, “Textual Inversions”, operates by inverting the concepts into new pseudo-words within the textual embedding space of a pre-trained text-to-image model. These pseudo-words can be injected into new scenes using simple natural language descriptions, allowing for simple and intuitive modifications. In a sense, our method allows a user to leverage multi-modal information — using a text-driven interface for ease of editing, but providing visual cues when approaching the limits of natural language. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_74",
"text": " Our approach was implemented over LDM (Rombach et al., 2021), the largest publicly available text-to-image model. However, it does not rely on any architectural details unique to their approach. As such, we believe Textual Inversions to be easily applicable to additional, larger-scale text-to-image models. There, text-to-image alignment, shape preservation, and image generation fidelity may be further improved. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_75",
"text": " We hope our approach paves the way for future personalized generation works. These could be core to a multitude of downstream applications, from providing artistic inspiration to product design. ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
},
{
"id": "2208.01618_all_76",
"text": " We thank Yael Vinker, Roni Paiss and Haggai Maron for reviewing early drafts and helpful suggestions. Tom Bagshaw for discussions regarding artist rights and social impacts, and Omri Avrahami for providing us with early access to Blended Latent Diffusion. This work was partially supported by Len Blavatnik and the Blavatnik family foundation, BSF (grant 2020280) and ISF (grants 2492/20 and 3441/21). ",
"title": "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion"
}
] |
Is the increase in receptive field of the features being computed in subsequent network layers due to the downsampling mentioned by the authors, or is it the result of subsequent convolutions as the network goes deeper?
|
The increase in receptive field of the features being computed in subsequent network layers is the result of convolutional layers as the network goes deeper [0].
|
[
0
] |
[
{
"id": "1606.04797_all_0",
"text": " Recent research in computer vision and pattern recognition has highlighted the capabilities of Convolutional Neural Networks (CNNs) to solve challenging tasks such as classification, segmentation and object detection, achieving state-of-the-art performances. This success has been attributed to the ability of CNNs to learn a hierarchical representation of raw input data, without relying on handcrafted features. As the inputs are processed through the network layers, the level of abstraction of the resulting features increases. Shallower layers grasp local information while deeper layers use filters whose receptive fields are much broader that therefore capture global information . ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_1",
"text": " Segmentation is a highly relevant task in medical image analysis. Automatic delineation of organs and structures of interest is often necessary to perform tasks such as visual augmentation , computer assisted diagnosis , interventions and extraction of quantitative indices from images . In particular, since diagnostic and interventional imagery often consists of 3D images, being able to perform volumetric segmentations by taking into account the whole volume content at once, has a particular relevance. In this work, we aim to segment prostate MRI volumes. This is a challenging task due to the wide range of appearance the prostate can assume in different scans due to deformations and variations of the intensity distribution. Moreover, MRI volumes are often affected by artefacts and distortions due to field inhomogeneity. Prostate segmentation is nevertheless an important task having clinical relevance both during diagnosis, where the volume of the prostate needs to be assessed , and during treatment planning, where the estimate of the anatomical boundary needs to be accurate (4, 20). ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_2",
"text": " CNNs have been recently used for medical image segmentation. Early approaches obtain anatomy delineation in images or volumes by performing patch-wise image classification. Such segmentations are obtained by only considering local context and therefore are prone to failure, especially in challenging modalities such as ultrasound, where a high number of mis-classified voxel are to be expected. Post-processing approaches such as connected components analysis normally yield no improvement and therefore, more recent works, propose to use the network predictions in combination with Markov random fields , voting strategies or more traditional approaches such as level-sets . Patch-wise approaches also suffer from efficiency issues. When densely extracted patches are processed in a CNN, a high number of computations is redundant and therefore the total algorithm runtime is high. In this case, more efficient computational schemes can be adopted. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_3",
"text": " Fully convolutional network trained end-to-end were so far applied only to 2D images both in computer vision (11, 8) and microscopy image analysis . These models, which served as an inspiration for our work, employed different network architectures and were trained to predict a segmentation mask, delineating the structures of interest, for the whole image. In a pre-trained VGG network architecture was used in conjunction with its mirrored, de-convolutional, equivalent to segment RGB images by leveraging the descriptive power of the features extracted by the innermost layer. In three fully convolutional deep neural networks, pre-trained on a classification task, were refined to produce segmentations while in a brand new CNN model, especially tailored to tackle biomedical image analysis problems in 2D, was proposed. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_4",
"text": " In this work we present our approach to medical image segmentation that leverages the power of a fully convolutional neural networks, trained end-to-end, to process MRI volumes. Differently from other recent approaches we refrain from processing the input volumes slice-wise and we propose to use volumetric convolutions instead. We propose a novel objective function based on Dice coefficient maximisation, that we optimise during training. We demonstrate fast and accurate results on prostate MRI test volumes and we provide direct comparison with other methods which were evaluated on the same test data 111Detailed results available on http://promise12.grand-challenge.org/results/. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_5",
"text": " In Figure 2 we provide a schematic representation of our convolutional neural network. We perform convolutions aiming to both extract features from the data and, at the end of each stage, to reduce its resolution by using appropriate stride. The left part of the network consists of a compression path, while the right part decompresses the signal until its original size is reached. Convolutions are all applied with appropriate padding. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_6",
"text": " The left side of the network is divided in different stages that operate at different resolutions. Each stage comprises one to three convolutional layers. Similarly to the approach presented in , we formulate each stage such that it learns a residual function: the input of each stage is (a) used in the convolutional layers and processed through the non-linearities and (b) added to the output of the last convolutional layer of that stage in order to enable learning a residual function. As confirmed by our empirical observations, this architecture ensures convergence in a fraction of the time required by a similar network that does not learn residual functions. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_7",
"text": " The convolutions performed in each stage use volumetric kernels having size 5×5×55555\\times 5\\times 5 voxels. As the data proceeds through different stages along the compression path, its resolution is reduced. This is performed through convolution with 2×2×22222\\times 2\\times 2 voxels wide kernels applied with stride 222 (Figure 3). Since the second operation extracts features by considering only non overlapping 2×2×22222\\times 2\\times 2 volume patches, the size of the resulting feature maps is halved. This strategy serves a similar purpose as pooling layers that, motivated by and other works discouraging the use of max-pooling operations in CNNs, have been replaced in our approach by convolutional ones. Moreover, since the number of feature channels doubles at each stage of the compression path of the V-Net, and due to the formulation of the model as a residual network, we resort to these convolution operations to double the number of feature maps as we reduce their resolution. PReLu non linearities are applied throughout the network. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_8",
"text": " Replacing pooling operations with convolutional ones results also to networks that, depending on the specific implementation, can have a smaller memory footprint during training, due to the fact that no switches mapping the output of pooling layers back to their inputs are needed for back-propagation, and that can be better understood and analysed by applying only de-convolutions instead of un-pooling operations. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_9",
"text": " Downsampling allows us to reduce the size of the signal presented as input and to increase the receptive field of the features being computed in subsequent network layers. Each of the stages of the left part of the network, computes a number of features which is two times higher than the one of the previous layer. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
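To connect this passage with the question at the top of this row: both stacking convolutions and downsampling enlarge the receptive field, and striding makes later layers grow it faster. Below is a small, generic receptive-field calculator; the layer lists are illustrative examples, not the exact V-Net configuration.

def receptive_field(layers):
    # layers: list of (kernel_size, stride) pairs applied in order.
    r, j = 1, 1  # receptive field and cumulative stride ("jump") at the input
    for k, s in layers:
        r = r + (k - 1) * j  # every convolution enlarges the receptive field...
        j = j * s            # ...and striding accelerates the growth downstream
    return r

plain = [(5, 1)] * 4                      # four 5x5x5 convolutions, no downsampling
down = [(5, 1), (2, 2), (5, 1), (5, 1)]   # same depth, one stride-2 downsampling step
print(receptive_field(plain))  # 17
print(receptive_field(down))   # 22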
{
"id": "1606.04797_all_10",
"text": " The right portion of the network extracts features and expands the spatial support of the lower resolution feature maps in order to gather and assemble the necessary information to output a two channel volumetric segmentation. The two features maps computed by the very last convolutional layer, having 1×1×11111\\times 1\\times 1 kernel size and producing outputs of the same size as the input volume, are converted to probabilistic segmentations of the foreground and background regions by applying soft-max voxelwise. After each stage of the right portion of the CNN, a de-convolution operation is employed in order increase the size of the inputs (Figure 3) followed by one to three convolutional layers involving half the number of 5×5×55555\\times 5\\times 5 kernels employed in the previous layer. Similar to the left part of the network, also in this case we resort to learn residual functions in the convolutional stages. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_11",
"text": " Similarly to , we forward the features extracted from early stages of the left part of the CNN to the right part. This is schematically represented in Figure 2 by horizontal connections. In this way we gather fine grained detail that would be otherwise lost in the compression path and we improve the quality of the final contour prediction. We also observed that when these connections improve the convergence time of the model. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_12",
"text": " We report in Table 1 the receptive fields of each network layer, showing the fact that the innermost portion of our CNN already captures the content of the whole input volume. We believe that this characteristic is important during segmentation of poorly visible anatomy: the features computed in the deepest layer perceive the whole anatomy of interest at once, since they are computed from data having a spatial support much larger than the typical size of the anatomy we seek to delineate, and therefore impose global constraints. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_13",
"text": " The network predictions, which consist of two volumes having the same resolution as the original input data, are processed through a soft-max layer which outputs the probability of each voxel to belong to foreground and to background. In medical volumes such as the ones we are processing in this work, it is not uncommon that the anatomy of interest occupies only a very small region of the scan. This often causes the learning process to get trapped in local minima of the loss function yielding a network whose predictions are strongly biased towards background. As a result the foreground region is often missing or only partially detected. Several previous approaches resorted to loss functions based on sample re-weighting where foreground regions are given more importance than background ones during learning. In this work we propose a novel objective function based on dice coefficient, which is a quantity ranging between 00 and 111 which we aim to maximise. The dice coefficient D𝐷D between two binary volumes can be written as D=2∑iNpigi∑iNpi2+∑iNgi2𝐷2superscriptsubscript𝑖𝑁subscript𝑝𝑖subscript𝑔𝑖superscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖2D=\\frac{2\\sum_{i}^{N}p_{i}g_{i}}{\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}} ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_14",
"text": " where the sums run over the N𝑁N voxels, of the predicted binary segmentation volume pi∈Psubscript𝑝𝑖𝑃p_{i}\\in{P} and the ground truth binary volume gi∈Gsubscript𝑔𝑖𝐺g_{i}\\in{G}. This formulation of Dice can be differentiated yielding the gradient ∂D∂pj=2(gj(∑iNpi2+∑iNgi2)−2pj(∑iNpigi)(∑iNpi2+∑iNgi2)2)𝐷subscript𝑝𝑗2delimited-()subscript𝑔𝑗superscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖22subscript𝑝𝑗superscriptsubscript𝑖𝑁subscript𝑝𝑖subscript𝑔𝑖superscriptsuperscriptsubscript𝑖𝑁superscriptsubscript𝑝𝑖2superscriptsubscript𝑖𝑁superscriptsubscript𝑔𝑖22\\frac{\\partial D}{\\partial p_{j}}=2\\left(\\frac{g_{j}\\left(\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}\\right)-2p_{j}\\left(\\sum_{i}^{N}p_{i}g_{i}\\right)}{\\left(\\sum_{i}^{N}p_{i}^{2}+\\sum_{i}^{N}g_{i}^{2}\\right)^{2}}\\right) computed with respect to the j𝑗j-th voxel of the prediction. Using this formulation we do not need to assign weights to samples of different classes to establish the right balance between foreground and background voxels, and we obtain results that we experimentally observed are much better than the ones computed through the same network trained optimising a multinomial logistic loss with sample re-weighting (Fig. 6). ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
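A minimal, generic re-implementation of the Dice objective above as a differentiable loss (autograd then supplies the gradient written out above); this is an illustrative sketch, not the authors' Caffe layer.

import torch

def soft_dice_loss(pred, target, eps=1e-7):
    # pred: foreground probabilities in [0, 1], shape (B, ...); target: binary ground truth.
    p = pred.flatten(1)
    g = target.flatten(1).float()
    numerator = 2.0 * (p * g).sum(dim=1)
    denominator = (p * p).sum(dim=1) + (g * g).sum(dim=1) + eps
    dice = numerator / denominator
    return 1.0 - dice.mean()  # maximising Dice is minimising (1 - Dice)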
{
"id": "1606.04797_all_15",
"text": " Our CNN is trained end-to-end on a dataset of prostate scans in MRI. An example of the typical content of such volumes is shown in Figure 1. All the volumes processed by the network have fixed size of 128×128×6412812864128\\times 128\\times 64 voxels and a spatial resolution of 1×1×1.5111.51\\times 1\\times 1.5 millimeters. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_16",
"text": " Annotated medical volumes are not easy to obtain due to the fact that one or more experts are required to manually trace a reliable ground truth annotation and that there is a cost associated with their acquisition. In this work we found necessary to augment the original training dataset in order to obtain robustness and increased precision on the test dataset. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_17",
"text": " During every training iteration, we fed as input to the network randomly deformed versions of the training images by using a dense deformation field obtained through a 2×2×22222\\times 2\\times 2 grid of control-points and B-spline interpolation. This augmentation has been performed ”on-the-fly”, prior to each optimisation iteration, in order to alleviate the otherwise excessive storage requirements. Additionally we vary the intensity distribution of the data by adapting, using histogram matching, the intensity distributions of the training volumes used in each iteration, to the ones of other randomly chosen scans belonging to the dataset. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_18",
"text": " A Previously unseen MRI volume can be segmented by processing it in a feed-forward manner through the network. The output of the last convolutional layer, after soft-max, consists of a probability map for background and foreground. The voxels having higher probability (>0.5absent0.5>0.5) to belong to the foreground than to the background are considered part of the anatomy. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
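The inference-time rule described above reduces to a softmax over the two output channels followed by a 0.5 threshold on the foreground probability; a minimal sketch:

import torch

def segment(logits):
    # logits: (2, D, H, W) raw outputs for the background/foreground channels.
    probs = torch.softmax(logits, dim=0)
    return probs[1] > 0.5  # boolean mask of voxels assigned to the anatomy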
{
"id": "1606.04797_all_19",
"text": " We trained our method on 505050 MRI volumes, and the relative manual ground truth annotation, obtained from the ”PROMISE2012” challenge dataset . This dataset contains medical data acquired in different hospitals, using different equipment and different acquisition protocols. The data in this dataset is representative of the clinical variability and challenges encountered in clinical settings. As previously stated we massively augmented this dataset through random transformation performed in each training iteration, for each mini-batch fed to the network. The mini-batches used in our implementation contained two volumes each, mainly due to the high memory requirement of the model during training. We used a momentum of 0.990.990.99 and a initial learning rate of 0.00010.00010.0001 which decreases by one order of magnitude every 252525K iterations. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_20",
"text": " We tested V-Net on 303030 MRI volumes depicting prostate whose ground truth annotation was secret. All the results reported in this section of the paper were obtained directly from the organisers of the challenge after submitting the segmentation obtained through our approach. The test set was representative of the clinical variability encountered in prostate scans in real clinical settings . ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_21",
"text": " We evaluated the approach performance in terms of Dice coefficient, Hausdorff distance of the predicted delineation to the ground truth annotation and in terms of score obtained on the challenge data as computed by the organisers of ”PROMISE 2012” . The results are shown in Table 2 and Fig. 5. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_22",
"text": " Our implementation222Implementation available at https://github.com/faustomilletari/VNet was realised in python, using a custom version of the Caffe333Implementation available at https://github.com/faustomilletari/3D-Caffe framework which was enabled to perform volumetric convolutions via CuDNN v3. All the trainings and experiments were ran on a standard workstation equipped with 646464 GB of memory, an Intel(R) Core(TM) i7-5820K CPU working at 3.30GHz, and a NVidia GTX 1080 with 888 GB of video memory. We let our model train for 484848 hours, or 303030K iterations circa, and we were able to segment a previously unseen volume in circa 111 second. The datasets were first normalised using the N4 bias filed correction function of the ANTs framework and then resampled to a common resolution of 1×1×1.5111.51\\times 1\\times 1.5 mm. We applied random deformations to the scans used for training by varying the position of the control points with random quantities obtained from gaussian distribution with zero mean and 151515 voxels standard deviation. Qualitative results can be seen in Fig. 4. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_23",
"text": " We presented and approach based on a volumetric convolutional neural network that performs segmentation of MRI prostate volumes in a fast and accurate manner. We introduced a novel objective function that we optimise during training based on the Dice overlap coefficient between the predicted segmentation and the ground truth annotation. Our Dice loss layer does not need sample re-weighting when the amount of background and foreground pixels is strongly unbalanced and is indicated for binary segmentation tasks. Although we inspired our architecture to the one proposed in , we divided it into stages that learn residuals and, as empirically observed, improve both results and convergence time. Future works will aim at segmenting volumes containing multiple regions in other modalities such as ultrasound and at higher resolutions by splitting the network over multiple GPUs. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
},
{
"id": "1606.04797_all_24",
"text": " We would like to acknowledge NVidia corporation, that donated a Tesla K40 GPU to our group enabling this research, Dr. Geert Litjens who dedicated some of his time to evaluate our results against the ground truth of the PROMISE 2012 dataset and Ms. Iro Laina for her support to this project. ",
"title": "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation"
}
] |
Why didn't the authors use other metrics to evaluate/compare the performance of the architectures?
|
The authors compare with previous works w.r.t. classification accuracy [12], in particular the average instance accuracy and average class accuracy [39].
|
[
12,
39
] |
[
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techniques and crowd-sourcing of virtual 3D models have additionally facilitated the availability of 3D data. (29, 34, 31, 33, 2). This development has encouraged the lifting of 2D to 3D for deep learning, opening up new opportunities with the additional information of 3D data; e.g., aligning models is easier in 3D Euclidean space. In this paper, we specifically focus on the object classification task on 3D data obtained from both CAD models and commodity RGB-D sensors. In addition, we demonstrate retrieval results in the supplemental material. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_1",
"text": " While the extension of 2D convolutional neural networks to 3D seems natural, the additional computational complexity (volumetric domain) and data sparsity introduces significant challenges; for instance, in an image, every pixel contains observed information, whereas in 3D, a shape is only defined on its surface. Seminal work by Wu et al. propose volumetric CNN architectures on volumetric grids for object classification and retrieval. While these approaches achieve good results, it turns out that training a CNN on multiple 2D views achieves a significantly higher performance, as shown by Su et al. , who augment their 2D CNN with pre-training from ImageNet RGB data . These results indicate that existing 3D CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this work, we analyze these observations and evaluate the design choices. Moreover, we show how to reduce the gap between volumetric CNNs and multi-view CNNs by efficiently augmenting training data, introducing new CNN architectures in 3D. Finally, we examine multi-view CNNs; our experiments show that we are able to improve upon state of the art with improved training data augmentation and a new multi-resolution component. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_2",
"text": " We consider volumetric representations of 3D point clouds or meshes as input to the 3D object classification problem. This is primarily inspired by recent advances in real-time scanning technology, which use volumetric data representations. We further assume that the input data is already pre-segmented by 3D bounding boxes. In practice, these bounding boxes can be extracted using the sliding windows, object proposals, or background subtraction. The output of the method is the category label of the volumetric data instance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_3",
"text": " We provide a detailed analysis over factors that influence the performance of volumetric CNNs, including network architecture and volumn resolution. Based upon our analysis, we strive to improve the performance of volumetric CNNs. We propose two volumetric CNN network architectures that signficantly improve state-of-the-art of volumetric CNNs on 3D shape classification. This result has also closed the gap between volumetric CNNs and multi-view CNNs, when they are provided with 3D input discretized at 30×30×3030303030\\times 30\\times 30 3D resolution. The first network introduces auxiliary learning tasks by classifying part of an object, which help to scrutize details of 3D objects more deeply. The second network uses long anisotropic kernels to probe for long-distance interactions. Combining data augmentation with a multi-orientation pooling, we observe significant performance improvement for both networks. We also conduct extensive experiments to study the influence of volume resolution, which sheds light on future directions of improving volumetric CNNs. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_4",
"text": " Furthermore, we introduce a new multi-resolution component to multi-view CNNs, which improves their already compelling performance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_5",
"text": " In addition to providing extensive experiments on 3D CAD model datasets, we also introduce a dataset of real-world 3D data, constructed using dense 3D reconstruction taken with . Experiments show that our networks can better adapt from synthetic data to this real-world data than previous methods. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_6",
"text": " A large variety of shape descriptors has been developed in the computer vision and graphics community. For instance, shapes can be represented as histograms or bag-of-feature models which are constructed from surface normals and curvatures . Alternatives include models based on distances, angles, triangle areas, or tetrahedra volumes , local shape diameters measured at densely-sampled surface points , Heat kernel signatures (1, 19), or extensions of SIFT and SURF feature descriptors to 3D voxel grids . The spherical harmonic descriptor (SPH) and the Light Field descriptor (LFD) are other popular descriptors. LFD extracts geometric and Fourier descriptors from object silhouettes rendered from several different viewpoints, and can be directly applied to the shape classification task. In contrast to recently developed feature learning techniques, these features are hand-crafted and do not generalize well across different domains. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_7",
"text": " Convolutional Neural Networks (CNNs) have been successfully used in different areas of computer vision and beyond. In particular, significant progress has been made in the context of learning features. It turns out that training from large RGB image datasets (e.g., ImageNet ) is able to learn general purpose image descriptors that outperform hand-crafted features for a number of vision tasks, including object detection, scene recognition, texture recognition and classification (7, 10, 27, 5, 12). This significant improvement in performance on these tasks has decidedly moved the field forward. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_8",
"text": " With the introduction of commodity range sensors, the depth channel became available to provide additional information that could be incorporated into common CNN architectures. A very first approach combines convolutional and recursive neural networks for learning features and classifying RGB-D images . Impressive performance for object detection from RGB-D images has been achieved using a geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity . Recently, a CNN architecture has been proposed where the RGB and depth data are processed in two separate streams; in the end, the two streams are combined with a late fusion network . All these descriptors operate on single RGB-D images, thus processing 2.5D data. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_9",
"text": " Wu et al. lift 2.5D to 3D with their 3DShapeNets approach by categorizing each voxel as free space, surface or occluded, depending on whether it is in front of, on, or behind the visible surface (i.e., the depth value) from the depth map. The resulting representation is a 3D binary voxel grid, which is the input to a CNN with 3D filter banks. Their method is particularly relevant in the context of this work, as they are the first to apply CNNs on a 3D representation. A similar approach is VoxNet , which also uses binary voxel grids and a corresponding 3D CNN architecture. The advantage of these approaches is that it can process different sources of 3D data, including LiDAR point clouds, RGB-D point clouds, and CAD models; we likewise follow this direction. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_10",
"text": " An alternative direction is to exploit established 2D CNN architectures; to this end, 2D data is extracted from the 3D representation. In this context, DeepPano converts 3D shapes into panoramic views; i.e., a cylinder projection around its principle axis. Current state-of-the-art uses multiple rendered views, and trains a CNN that can process all views jointly . This multi-view CNN (MVCNN) is pre-trained on ImageNet and uses view-point pooling to combine all streams obtained from each view. A similar idea on stereo views has been proposed earlier . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_11",
"text": " Two representations of generic 3D shapes are popularly used for object classification, volumetric and multi-view (Fig 1). The volumetric representation encodes a 3D shape as a 3D tensor of binary or real values. The multi-view representation encodes a 3D shape as a collection of renderings from multiple viewpoints. Stored as tensors, both representations can easily be used to train convolutional neural networks, i.e., volumetric CNNs and multi-view CNNs. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_12",
"text": " Intuitively, a volumetric representation should encode as much information, if not more, than its multi-view counterpart. However, experiments indicate that multi-view CNNs produce superior performance in object classification. Fig 2 reports the classification accuracy on the ModelNet40 dataset by state-of-the-art volumetric/multi-view architectures111We train models by replicating the architecture of for volumetric CNNs and for multi-view CNNs. All networks are trained in an end-to-end fashion. All methods are trained/tested on the same split for fair comparison. The reported numbers are average instance accuracy. See Sec 6 for details.. A volumetric CNN based on voxel occupancy (green) is 7.3%percent7.37.3\\% worse than a multi-view CNN (yellow). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_13",
"text": " We investigate this performance gap in order to ascertain how to improve volumetric CNNs. The gap seems to be caused by two factors: input resolution and network architecture differences. The multi-view CNN down-samples each rendered view to 227×227227227227\\times 227 pixels (Multi-view Standard Rendering in Fig 1); to maintain a similar computational cost, the volumetric CNN uses a 30×30×3030303030\\times 30\\times 30 occupancy grid (Volumetric Occupancy Grid in Fig 1)222Note that 30×30×30≈227×22730303022722730\\times 30\\times 30\\approx 227\\times 227.. As shown in Fig 1, the input to the multi-view CNN captures more detail. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_14",
"text": " However, the difference in input resolution is not the primary reason for this performance gap, as evidenced by further experiments. We compare the two networks by providing them with data containing similar level of detail. To this end, we feed the multi-view CNN with renderings of the 30×30×3030303030\\times 30\\times 30 occupancy grid using sphere rendering333It is computationally prohibitive to match the volumetric CNN resolution to multi-view CNN, which would be 227×227×227227227227227\\times 227\\times 227., i.e., for each occupied voxel, a ball is placed at its center, with radius equal to the edge length of a voxel (Multi-View Sphere Rendering in Fig 1). We train the multi-view CNN from scratch using these sphere renderings. The accuracy of this multi-view CNN is reported in blue. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_15",
"text": " As shown in Fig 2, even with similar level of object detail, the volumetric CNN (green) is 4.8%percent4.84.8\\% worse than the multi-view CNN (blue). That is, there is still significant room to improve the architecture of volumetric CNNs. This discovery motivates our efforts in Sec 4 to improve volumetric CNNs. Additionally, low-frequency information in 3D seems to be quite discriminative for object classification—it is possible to achieve 89.5%percent89.589.5\\% accuracy (blue) at a resolution of only 30×30×3030303030\\times 30\\times 30. This discovery motivates our efforts in Sec 5 to improve multi-view CNNs with a 3D multi-resolution approach. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_16",
"text": " We improve volumetric CNNs through three separate means: 1) introducing new network structures; 2) data augmentation; 3) feature pooling. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_17",
"text": " We propose two network variations that significantly improve state-of-the-art CNNs on 3D volumetric data. The first network is designed to mitigate overfitting by introducing auxiliary training tasks, which are themselves challenging. These auxiliary tasks encourage the network to predict object class labels from partial subvolumes. Therefore, no additional annotation efforts are needed. The second network is designed to mimic multi-view CNNs, as they are strong in 3D shape classification. Instead of using rendering routines from computer graphics, our network projects a 3D shape to 2D by convolving its 3D volume with an anisotropic probing kernel. This kernel is capable of encoding long-range interactions between points. An image CNN is then appended to classify the 2D projection. Note that the training of the projection module and the image classification module is end-to-end. This emulation of multi-view CNNs achieves similar performance to them, using only standard layers in CNN. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_18",
"text": " In order to mitigate overfitting from too many parameters, we adopt the mlpconv layer from as our basic building block in both network variations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_19",
"text": " Compared with 2D image datasets, currently available 3D shape datasets are limited in scale and variation. To fully exploit the design of our networks, we augment the training data with different azimuth and elevation rotations. This allows the first network to cover local regions at different orientations, and the second network to relate distant points at different relative angles. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_20",
"text": " Both of our new networks are sensitive to shape orientation, i.e., they capture different information at different orientations. To capture a more holistic sense of a 3D object, we add an orientation pooling stage that aggregates information from different orientations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_21",
"text": " We observe significant overfitting when we train the volumetric CNN proposed by in an end-to-end fashion (see supplementary). When the volumetric CNN overfits to the training data, it has no incentive to continue learning. We thus introduce auxiliary tasks that are closely correlated with the main task but are difficult to overfit, so that learning continues even if our main task is overfitted. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_22",
"text": " These auxiliary training tasks also predict the same object labels, but the predictions are made solely on a local subvolume of the input. Without complete knowledge of the object, the auxiliary tasks are more challenging, and can thus better exploit the discriminative power of local regions. This design is different from the classic multi-task learning setting of hetergenous auxiliary tasks, which inevitably requires collecting additional annotations (e.g., conducting both object classification and detection ). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_23",
"text": " We implement this design through an architecture shown in Fig 3. The first three layers are mlpconv (multilayer perceptron convolution) layers, a 3D extension of the 2D mlpconv proposed by . The input and output of our mlpconv layers are both 4D tensors. Compared with the standard combination of linear convolutional layers and max pooling layers, mlpconv has a three-layer structure and is thus a universal function approximator if enough neurons are provided in its intermediate layers. Therefore, mlpconv is a powerful filter for feature extraction of local patches, enhancing approximation of more abstract representations. In addition, mlpconv has been validated to be more discriminative with fewer parameters than ordinary convolution with pooling . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_24",
"text": " At the fourth layer, the network branches into two. The lower branch takes the whole object as input for traditional classification. The upper branch is a novel branch for auxiliary tasks. It slices the 512×2×2×2512222512\\times 2\\times 2\\times 2 4D tensor (222 grids along x𝑥x, y𝑦y, z𝑧z axes and 512512512 channels) into 2×2×2=822282\\times 2\\times 2=8 vectors of dimension 512512512. We set up a classification task for each vector. A fully connected layer and a softmax layer are then appended independently to each vector to construct classification losses. Simple calculation shows that the receptive field of each task is 22×22×2222222222\\times 22\\times 22, covering roughly 2/3232/3 of the entire volume. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_25",
"text": " The success of multi-view CNNs is intriguing. multi-view CNNs first project 3D objects to 2D and then make use of well-developed 2D image CNNs for classification. Inspired by its success, we design a neural network architecture that is also composed of the two stages. However, while multi-view CNNs use external rendering pipelines from computer graphics, we achieve the 3D-to-2D projection using network layers in a manner similar to ‘X-ray scanning’. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_26",
"text": " Key to this network is the use of an elongated anisotropic kernel which helps capture the global structure of the 3D volume. As illustrated in Fig 4, the neural network has two modules: an anisotropic probing module and a network in network module. The anisotropic probing module contains three convolutional layers of elongated kernels, each followed by a nonlinear ReLU layer. Note that both the input and output of each layer are 3D tensors. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_27",
"text": " In contrast to traditional isotropic kernels, an anisotropic probing module has the advantage of aggregating long-range interactions in the early feature learning stage with fewer parameters. As a comparison, with traditional neural networks constructed from isotropic kernels, introducing long-range interactions at an early stage can only be achieved through large kernels, which inevitably introduce many more parameters. After anisotropic probing, we use an adapted NIN network to address the classification problem. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_28",
"text": " Our anistropic probing network is capable of capturing internal structures of objects through its X-ray like projection mechanism. This is an ability not offered by standard rendering. Combined with multi-orientation pooling (introduced below), it is possible for this probing mechanism to capture any 3D structure, due to its relationship with the Radon transform. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_29",
"text": " In addition, this architecture is scalable to higher resolutions, since all its layers can be viewed as 2D. While 3D convolution involves computation at locations of cubic resolution, we maintain quadratic compute. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_30",
"text": " The two networks proposed above are both sensitive to model orientation. In the subvolume supervision method, different model orientations define different local subvolumes; in the anisotropic probing method, only voxels of the same height and along the probing direction can have interaction in the early feature extraction stage. Thus it is helpful to augment the training data by varying object orientation and combining predictions through orientation pooling. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_31",
"text": " Similar to Su-MVCNN which aggregates information from multiple view inputs through a view-pooling layer and follow-on fully connected layers, we sample 3D input from different orientations and aggregate them in a multi-orientation volumetric CNN (MO-VCNN) as shown in Fig 5. At training time, we generate different rotations of the 3D model by changing both azimuth and elevation angles, sampled randomly. A volumetric CNN is firstly trained on single rotations. Then we decompose the network to CNN1subscriptCNN1\\text{CNN}_{1} (lower layers) and CNN2subscriptCNN2\\text{CNN}_{2} (higher layers) to construct a multi-orientation version. The MO-VCNN’s weights are initialized by a previously trained volumetric CNN with CNN1subscriptCNN1\\text{CNN}_{1}’s weights fixed during fine-tuning. While a common practice is to extract the highest level features (features before the last classification linear layer) of multiple orientations, average/max/concatenate them, and train a linear SVM on the combined feature, this is just a special case of the MO-VCNN. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_32",
"text": " Compared to 3DShapeNets which only augments data by rotating around vertical axis, our experiment shows that orientation pooling combined with elevation rotation can greatly increase performance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_33",
"text": " The multi-view CNN proposed by is a strong alternative to volumetric representations. This multi-view representation is constructed in three steps: first, a 3D shape is rendered into multiple images using varying camera extrinsics; then image features (e.g. conv5 feature in VGG or AlexNet) are extracted for each view; lastly features are combined across views through a pooling layer, followed by fully connected layers. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_34",
"text": " Although the multi-view CNN presented by produces compelling results, we are able to improve its performance through a multi-resolution extension with improved data augmentation. We introduce multi-resolution 3D filtering to capture information at multiple scales. We perform sphere rendering (see Sec 3) at different volume resolutions. Note that we use spheres for this discretization as they are view-invariant. In particular, this helps regularize out potential noise or irregularities in real-world scanned data (relative to synthetic training data), enabling robust performance on real-world scans. Note that our 3D multi-resolution filtering is different from classical 2D multi-resolution approaches, since the 3D filtering respects the distance in 3D. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_35",
"text": " Additionally, we also augment training data with variations in both azimuth and elevation, as opposed to azimuth only. We use AlexNet instead of VGG for efficiency. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_36",
"text": " We evaluate our volumetric CNNs and multi-view CNNs along with current state of the art on the ModelNet dataset and a new dataset of real-world reconstructions of 3D objects. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_37",
"text": " For convenience in following discussions, we define 3D resolution to be the discretization resolution of a 3D shape. That is, a 30×30×3030303030\\times 30\\times 30 volume has 3D resolution 303030. The sphere rendering from this volume also has 3D resolution 303030, though it may have higher 2D image resolution. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_38",
"text": " We use ModelNet for our training and testing datasets. ModelNet currently contains 127,915127915127,915 3D CAD models from 662662662 categories. ModelNet40, a subset including 12,3111231112,311 models from 404040 categories, is well annotated and can be downloaded from the web. The authors also provide a training and testing split on the website, in which there are 9,84398439,843 training and 2,46824682,468 test models444VoxNet uses the train/test split provided on the website and report average class accuracy on the 2,46824682,468 test split. 3DShapeNets and MVCNN use another train/test split comprising the first 80 shapes of each category in the “train” folder (or all shapes if there are fewer than 80) and the first 20 shapes of each category in the “test” folder, respectively.. We use this train/test split for our experiments. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_39",
"text": " By default, we report classification accuracy on all models in the test set (average instance accuracy). For comparisons with previous work we also report average class accuracy. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_40",
"text": " We provide a new real-world scanning dataset benchmark, comprising 243 objects of 12 categories; the geometry is captured with an ASUS Xtion Pro and a dense reconstruction is obtained using the publicly-available VoxelHashing framework . For each scan, we have performed a coarse, manual segmentation of the object of interest. In addition, each scan is aligned with the world-up vector. While there are existing datasets captured with commodity range sensors – e.g., (29, 34, 31) – this is the first containing hundreds of annotated models from dense 3D reconstructions. The goal of this dataset is to provide an example of modern real-time 3D reconstructions; i.e., structured representations more complete than a single RGB-D frame but still with many occlusions. This dataset is used as a test set. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_41",
"text": " We compare our methods with state of the art for shape classification on the ModelNet40 dataset. In the following, we discuss the results within volumetric CNN methods and within multi-view CNN methods. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_42",
"text": " Fig 7 summarizes the performance of volumetric CNNs. Ours-MO-SubvolumeSup is the subvolume supervision network in Sec 4.2 and Ours-MO-AniProbing is the anistropic probing network in Sec 4.3. Data augmentation is applied as described in Sec 6.4 (azimuth and elevation rotations). For clarity, we use MO- to denote that both networks are trained with an additional multi-orientation pooling step (202020 orientations in practice). For reference of multi-view CNN performance at the same 3D resolution, we also include Ours-MVCNN-Sphere-30, the result of our multi-view CNN with sphere rendering at 3D resolution 303030. More details of setup can be found in the supplementary. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_43",
"text": " As can be seen, both of our proposed volumetric CNNs significantly outperform state-of-the-art volumetric CNNs. Moreover, they both match the performance of our multi-view CNN under the same 3D resolution. That is, the gap between volumetric CNNs and multi-view CNNs is closed under 3D resolution 303030 on ModelNet40 dataset, an issue that motivates our study (Sec 3). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_44",
"text": " Fig 8 summarizes the performance of multi-view CNNs. Ours-MVCNN-MultiRes is the result by training an SVM over the concatenation of fc7 features from Ours-MVCNN-Sphere-30, 60, and Ours-MVCNN. HoGPyramid-LFD is the result by training an SVM over a concatenation of HoG features at three 2D resolutions. Here LFD (lightfield descriptor) simply refers to extracting features from renderings. Ours-MVCNN-MultiRes achieves state-of-the-art. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_45",
"text": " Sec 6.2 shows that our volumetric CNN and multi-view CNN performs comparably at 3D resolution 303030. Here we study the effect of 3D resolution for both types of networks. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_46",
"text": " Fig 9 shows the performance of our volumetric CNN and multi-view CNN at different 3D resolutions (defined at the beginning of Sec 6). Due to computational cost, we only test our volumetric CNN at 3D resolutions 101010 and 303030. The observations are: first, the performance of our volumetric CNN and multi-view CNN is on par at tested 3D resolutions; second, the performance of multi-view CNN increases as the 3D resolution grows up. To further improve the performance of volumetric CNN, this experiment suggests that it is worth exploring how to scale volumetric CNN to higher 3D resolutions. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_47",
"text": " We use the same volumetric CNN model, the end-to-end learning verion of 3DShapeNets , to train and test on three variations of augmented data (Table 1). Similar trend is observed for other volumetric CNN variations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_48",
"text": " When combined with multi-orientation pooling, applying both azimuth rotation (AZ) and elevation rotation (EL) augmentations is extremely effective. Using only azimuth augmentation (randomly sampled from 0∘superscript00^{\\circ} to 360∘superscript360360^{\\circ}) with orientation pooling, the classification performance is increased by 86.1%−84.7%=1.4%percent86.1percent84.7percent1.486.1\\%-84.7\\%=1.4\\%; combined with elevation augmentation (randomly sampled from −45∘superscript45-45^{\\circ} to 45∘superscript4545^{\\circ}), the improvement becomes more significant – increasing by 87.8%−83.0%=4.8%percent87.8percent83.0percent4.887.8\\%-83.0\\%=4.8\\%. On the other hand, translation jittering (randomly sampled shift from 00 to 666 voxels in each direction) provides only marginal influence. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_49",
"text": " The architectures in comparison include VoxNet , E2E- (the end-to-end learning variation of implemented in Caffe by ourselves), 3D-NIN (a 3D variation of Network in Network designed by ourselves as in Fig 3 without the “Prediction by partial object” branch), SubvolumeSup (Sec 4.2) and AniProbing (Sec 4.3). Data augmentation of AZ+EL (Sec 6.4) are applied. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_50",
"text": " From Table 2, first, the two volumetric CNNs we propose, SubvolumeSup and AniProbing networks, both show superior performance, indicating the effectiveness of our design; second, multi-orientation pooling increases performance for all network variations. This is especially significant for the anisotropic probing network, since each orientation usually only carries partial information of the object. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_51",
"text": " We compare different methods that are based on multi-view representations in Table 3. Methods in the second group are trained on the full ModelNet40 train set. Methods in the first group, SPH, LFD, FV, and Su-MVCNN, are trained on a subset of ModelNet40 containing 3,183 training samples. They are provided for reference. Also note that the MVCNNs in the second group are our implementations in Caffe with AlexNet instead of VGG as in Su-MVCNN . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_52",
"text": " We observe that MVCNNs are superior to methods by SVMs on hand-crafted features. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_53",
"text": " We further assess the performance of volumetric CNNs and multi-view CNNs on real-world reconstructions in Table 4. All methods are trained on CAD models in ModelNet40 but tested on real data, which may be highly partial, noisy, or oversmoothed (Fig 6). Our networks continue to outperform state-of-the-art results. In particular, our 3D multi-resolution filtering is quite effective on real-world data, possibly because the low 3D resolution component filters out spurious and noisy micro-structures. Example results for object retrieval can be found in supplementary. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_54",
"text": " In this paper, we have addressed the task of object classification on 3D data using volumetric CNNs and multi-view CNNs. We have analyzed the performance gap between volumetric CNNs and multi-view CNNs from perspectives of network architecture and 3D resolution. The analysis motivates us to propose two new architectures of volumetric CNNs, which outperform state-of-the-art volumetric CNNs, achieving comparable performance to multi-view CNNs at the same 3D resolution of 30×30×3030303030\\times 30\\times 30. Further evalution over the influence of 3D resolution indicates that 3D resolution is likely to be the bottleneck for the performance of volumetric CNNs. Therefore, it is worth exploring the design of efficient volumetric CNN architectures that scale up to higher resolutions. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_55",
"text": " The authors gratefully acknowledge the support of Stanford Graduate Fellowship, NSF grants IIS-1528025 and DMS-1546206, ONR MURI grant N00014-13-1-0341, a Google Focused Research award, the Max Planck Center for Visual Computing and Communications and hardware donations by NVIDIA. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
}
] |
What does SNLI mean? Is it a model?
|
SNLI is a benchmark dataset published in 2015, not a model [26].
|
[
26
] |
[
{
"id": "2210.12302_all_0",
"text": " Pretrained Language Models (LMs) have shown singular succcess on a range of natural language understandings tasks, to the extent that they have become foundational for contemporary NLP systems. Several works have investigated why pretraining works so well Warstadt et al. (2019); Zhao et al. (2020). In particular, studies have shown that the pretrained LMs like BERT capture linguistic knowledge about syntax Lin et al. (2019); Wu et al. (2020), semantics Vulić et al. (2020b, a) and morphology Hofmann et al. (2020, 2021). In fact, Tenney et al. (2019) demonstrated that learned representations in pretrained LMs even internally reflect the classical NLP pipeline. Since most NLP benchmarks such as SuperGLUE Wang et al. (2019) naturally are focused on tasks such as textual entailment and reading comprehension that require linguistic knowledge and reasoning, it is unsurprising that LMs have achieved strong results on these tasks. On the other hand, little work so far has explored the abilities of pretrained LMs for learning non-linguistic tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_1",
"text": " In this paper, we explore whether pretraining on text is inherently about learning language, or if pretraining also imbues LMs with skills for symbolic manipulation and non-linguistic reasoning (for example, performing quantitative computation such as finding the median of a set of numbers, recognizing regular expressions, or identifying whether a string is a palindrome, as shown in Figure 1). In other words, we investigate whether and how pretraining develops helpful inductive biases for non-linguistic reasoning. For this analysis, we create a set of 19 tasks from three categories of task paradigms: quantitative computation (§3.1), recognizing regular expressions (§3.2), and string reasoning (§3.3). Figure 1 shows an example for each category, and the full list of tasks is described in the table 1. We experiment with transformer and RNN based LMs (§4) for learning these tasks, and perform a comparative analysis with (non-pretrained) neural model variants from the perspective of learning metrics such as accuracy and sample efficiency. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_2",
"text": " Our experiments (§5) reveal that pretrained models overall perform substantially better and are more sample efficient on most tasks. However, there are significant differences and patterns in performance between task types, as well as variance between different LM architectures. Since non-pretrained models do not have the benefit of regularization that comes from pretraining, a plausible reason for the discrepancy between them and pretrained LMs might be underfitting of the non-pretrained models when trained on comparatively small dataset sizes. To account for this, we also comprehensively explore the effect of model size (§6) of non-pretrained models for both transformer and RNN architectures. We find that the discrepancy in performance remains even for smaller neural models, indicating that the differences are not simply due to a mismatch in model and data sizes. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_3",
"text": " Finally, we investigate the role that pretraining data plays in influencing task performance on non-linguistic tasks (§7). We experiment with pretraining on different domains of text, pretraining on perturbed representations of natural language text (such as shuffled word order), pretraining on text of computer programs (no linguistic properties of natural languages), pretraining on multi-lingual and non-English text, and pretraining with synthetic text (data sampled from synthetic distributions). Our analysis reveals that the advantages of pretraining surprisingly persist with various degrees across these variations, suggesting hithertho unexplored connections between pretraining and the learning abilities of language models. Our contributions are: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_4",
"text": " • We compare a range of pretrained LMs and non-pretrained models on a carefully designed suite of 19 classifications tasks that require non-linguistic reasoning. • We comprehensively explore the role of the pretraining data by experimenting with models pretrained from texts with different provenances. • We establish that the positive effects of pretraining are not simply due to better model regularization by experimenting with neural models with different complexities and architectures. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_5",
"text": " A body of work has investigated contextual word embeddings to determine whether they capture aspects of mathematical meaning for numbers Naik et al. (2019). Wallace et al. (2019) probed numerical supremacy on token embeddings of contextual language models such as ELMO and BERT. Thawani et al. (2021) surveyed numerical understanding in NLP models using 7 sub-tasks such as measurement estimation and word problems. Our work diverges from these in exploring a richer set of tasks including harder tasks such as set operations. Further, previous methods explore mathematical reasoning tasks posed as language problems, which conflates the problems of language and mathematical learning and also makes the datasets susceptible to biases due to data collection. Our analysis circumvents both these issues by design. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_6",
"text": " Some previous works have explored the ability of RNN and Transformer architectures for learning regular languages Weiss et al. (2018); Sennhauser and Berwick (2018); Suzgun et al. (2019b); Bhattamishra et al. (2020), closing brackets Skachkova et al. (2018), and dynamic counting Suzgun et al. (2019a). However, they focus on the learnability of these tasks with specific architectures, and do not look at pretrained LMs, which are our focus here. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_7",
"text": " Finally, in our discussion, we conceptually stretch the notion of inductive bias. The idea of inductive bias is usually associated with specific model types McCoy et al. (2020); Kharitonov and Chaabouni (2021), architectures Xu et al. (2021); Brutzkus and Globerson (2021) and regularization approaches Helmbold and Long (2015). We believe that extending this to refer to learning tasks with pretrained LMs is both reasonable and useful. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_8",
"text": " In this section, we describe the tasks used for our analysis, which we refer to as NILM (measuring Non-linguistic Inductive bias in Language Models). The tasks correspond to three task paradigms: (1) quantitative computation, (2) regular expressions, and (3) string reasoning. Each task in NILM is posed as a classification task. The descriptions for all the tasks with input and output examples, class labels and the input range are shown in Table 1. Each task has a synthetically generated dataset with train/dev/test splits222The training set size for all tasks is 10K, dev set size is 1K and test set size is 1K, except for tasks on recognizing regular expressions, where the test set size is 2K following previous work Bhattamishra et al. (2020).. To avoid biases in the datasets, relevant numbers and strings in individual examples are uniformly sampled from the appropriate ranges. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_9",
"text": " This task paradigm focuses on tasks involving arithmetic and set statistics. Odd classification. Classify if a number is odd. Even classification. Classify if a number is even. Odd even classification. For a given number N𝑁N and a string “even” or “odd”, classify if the number satisfies the string condition. Decimal operation. Subtract or divide two numbers. Operands are represented in decimal notation. Decimal & word operation. Subtract or divide two numbers. Operands are represented in decimal or word notation. Mean. Given a set of numbers, output the mean. Median. Given a set, output the median. Mode. Given a set of numbers, output the mode. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_10",
"text": " This task paradigm focuses on recognizing regular expressions. The training data consists of positive and negative examples of strings matching a regular expression Bhattamishra et al. (2020). Recognize {0,1,2}*02*. Recognize if a pattern matches {0,1,2}*02*. The maximum length of the patterns is 20. Recognize AA*BB*CC*DD*EE*. Recognize if a pattern matches AA*BB*CC*DD*EE*. The maximum length of the patterns is 30. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_11",
"text": " This task paradigm focuses on reasoning tasks over individual strings or pairs of strings. Palindrome classification. A string is a palindrome if it reads the same forward and backward. The task is to classify whether a given string is a palindrome. The string length ranges from 1 to 15. Anagram classification. Two strings are anagrams if one is formed by rearranging letters from the other. The task is to classify if a pair of strings are anagrams. The string length ranges from 2 to 15. Isogram classification. A string is an isogram if it has no repeating characters. The task is to classify whether a given string is an isogram. The string length ranges from 1 to 52. Tautonym classification. A tautonym is a word which can be broken down into two identical parts, with the same spelling. The task is to classify whether a given string is a tautonym. The string length ranges from 1 to 10. Length of a string. Output the length of a given string. The string length ranges from 1 to 10. Count of unique characters. Given a string, count the number of unique characters in it. The string lengths ranges from 10 to 30. Parity check. Given a binary string, output if the counts of ones and zeros are the same. The maximum length of the binary string is 20. Vowels classification. Given a string, classify if the string contains only vowel characters. The string length ranges from 3 to 10. Maximum frequent character. Given a string, output the character with the maximum frequency. The string length ranges from 5 to 30. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_12",
"text": " Next, we describe the LMs and their variants used in NILM. We experiment with four language models, based on both Transformer and RNN architectures. BERT small. This is the bert-base-uncased model with 12 transformer encoder layers and the dimension of the representations is 768. BERT tokenizer is based on the WordPiece model Wu et al. (2016). BERT large. This is the bert-large-uncased model which has 24 transformer encoders and representations have 1024 dimensions. DeBERTa. This is a transformer based language model and its tokenizer is built using Byte Pair Encoding Sennrich et al. (2016). We consider the DeBERTa base model. It has 12 transformer encoder layers and representations have 768 dimensions. ELMO. This is an LSTM based language model Peters et al. (2018). It has 3 layers and the output representations have 1024 dimensions. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_13",
"text": " Our experiments are based on pretrained and non-pretrained variants of these architectures. For pretrained variants, the weights are initialized with the pretrained weights. The tokenization on the training data is performed using the pre-built vocabulary. For the non-pretrained neural models, the weights are initialized randomly and updated during training. The tokenizer used is the same as in the pretrained variant. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_14",
"text": " All the models are trained with varying training data of sizes 10, 20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 6000, 7000, 8000, 9000 and 10000. For training set sizes of less than 1000 samples, we report the average of 10 runs. For training set sizes greater than 1000, all reported numbers are averages of 5 runs. In the next section, we present a comparative analysis of pretrained and non-pretrained models. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_15",
"text": " Next, we compare the performance of pretrained and non-pretrained models on tasks in NILM 333Details, including statistical significance results with the paired t-value test, are included in Appendix 6. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_16",
"text": " Quantitative computation: Figure 2 shows results on odd classification, even classification, odd even classification and decimal operation tasks. We find that pretrained LMs outperformed non-pretrained model for all of these tasks. Further, Transformer-based LMs outperformed the RNN-based ELMO models in all the tasks444We will focus on BERT small as representative of transformer models. Results for BERT large and DeBERTa follow similar trends, and are included in the supplementary material. We note that for the relatively easy tasks such as odd and even classifications, the pretrained LMs show more stable training. However, for harder tasks such as Decimal operations (where the baseline performance is around 10%), no models are able to learn the task well even with 10K labeled examples. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_17",
"text": " Figure 3 shows results on median, mean, mode and decimal & word operation tasks. The median task requires complex reasoning (sorting numbers and computing the middle element), and shows significantly lower performance than the mean and mode tasks for the non-pretrained models even with the maximum training set size. The pretrained LM models show little eventual difference in performance between these three tasks. On the other hand, for the easiest of these tasks (mode), non-pretrained models actually show higher performance than pretrained LMs in the low data regime. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_18",
"text": " Recognizing regular expressions: Figure 4 shows the comparative performance of pretrained LMs on non-pretrained models on the two tasks involving recognizing regular expressions. For both tasks, we note that the pretrained LMs can perfectly learn the tasks with many fewer labeled examples compared to the non-pretrained models. In both cases, the non-pretrained Transformer-based models eventually reach optimal performance as well. However, curiously the ELMO based non-pretrained models struggle with learning both tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_19",
"text": " String reasoning: Figures 6 show the results on Palindrome, Anagram, Isogram and Tautonym classification. These tasks require character comparison within the string or with another string. Again, the pretrained variants consistently outperformed non-pretrained models variants in all of these tasks. In particular, the non-pretrained models completely fail to learn the Anagram and Palindrome tasks even for the largest training set size. Again, Transformer based LMs outperform LSTM based LMs. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_20",
"text": " Figure 7 shows the results on vowels classification, maximum frequent character, length of a string and parity check tasks. These tasks don’t require intra-string comparisons. We see that most Transformer-based variants eventually achieve optimal performance. For these simpler tasks, we again observe several instances where the Transformer-based non-pretrained models actually outperform pretrained LMs in the low data regime. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_21",
"text": " As previously mentioned, a possible explanation for the underperformance of non-pretrained models ise that the large number of parameters of the architecture relative to the sizes of the training data might be leading to under-fitting. To test this, we experiment with smaller Transformer-based models with varying numbers of parameters. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_22",
"text": " Figure 5 illustrates the effect of model sizes of non-pretrained model. The original 110 million parameter model has 12 encoder layers, 12 attention heads, and 768 dimensional representations. The 42 million parameter model has 8 encoder layers, 8 attention heads and 512 dimensional representations. The 29 million parameter model has 4 encoder layers, 8 attention heads and 512 dimensional representations. The 11 million parameter model has 4 encoder layers, 4 attention heads and 256 dimensional representations. The smallest 4 million parameter model has 2 encoder layers, 2 attention heads and 128 dimensional representations. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_23",
"text": " As seen in the figure, reducing the model size significantly improves the average performance of the non-pretrained models over 6 representative tasks. However, the smallest models show a performance drop. Most significantly, even the best performing intermediate-sized architectures are significantly worse than the pretrained LM models. This strongly suggests that the discrepancy between pretrained and non-pretrained models is not simply due to a mismatch between model and data sizes. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_24",
"text": " We observe that pretrained LMs consistently performed better than non-pretrained models. This leads to the natural question of what role the text data used for pretraining plays in the process. Next, we investigate this in depth by experimenting with language models pretrained on different types of text. For this, we pretrain models using the BERT-small and DeBERTa architectures and an MLM objective on different text datasets, and evaluate the performance of these models on NILM tasks. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_25",
"text": " We first explore models pretrained on three different domains of text. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_26",
"text": " SNLI. We pretrained BERT small from scratch on SNLI data Bowman et al. (2015). It has 1000k sentences (570k pairs of text and hypothesis). Amazon reviews. We selected 500k movies and tv reviews from the larger Amazon reviews dataset He and McAuley (2016) and used for pretraining. Since reviews are in a free-text format, and their collection was not tailored with a NLP task in mind, they might be more representative of the complexity of real-world language use than SNLI. ROC. ROC is a corpora of 100K children stories, each made up of five sentences Mostafazadeh et al. (2017). The language in ROC is relatively simple in both vocabulary and sentence structure. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_27",
"text": " Tables 2 and 3 shows the average accuracy of six non-linguistic tasks (palindrome classification, isogram classification, tautonym classification, odd even classification, decimal operation and median) fine-tuned using different BERT and DeBERTA representations respectively. We note that the models pretrained on all three domains outperformed the non-pretrained model (NP). This suggests that the results of experiments in Section 5 generalize to new text corpora for pretraining, and do not rely on having access to text on specific topics during pretraining. This is a non-trivial result, since it suggests for example, that the higher performance of pretrained models on tasks such as palindrome and anagram classification is not due to the pretrained models having seen information about such concepts during pretraining. This is especially so since the results even generalize to ROC stories, which contain no information on such technical concepts. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_28",
"text": " Next, we experiment with perturbing the text used for pretraining by changing the order of words in the text. We explore the following models: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_29",
"text": " SNLI sort. The words in the sentences of SNLI dataset are sorted based on alphabetical order. SNLI shuffle. We randomly shuffle words in sentences in the SNLI dataset. Amazon reviews sort. Similar to SNLI sort, the words in sentences are alphabetically sorted. Amazon reviews shuffle. We randomly shuffle words in sentences in the Amazon reviews dataset. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_30",
"text": " We observe that models pretrained with perturbed text also significantly outperformed non-pretrained models, and perform comparably to the original pretrained LMs. For the SNLI dataset, there is 3% drop in best performance when pretrained on SNLI sort and 2% drop in performance when pretrained on SNLI shuffle for BERT (Table 2). In fact, for DeBERTa, SNLI shuffle outperformed the standard SNLI by 2% (Table 3). Similarly, the Amazon sort and Amazon shuffle versions outperformed or achieved similar performance as the standard Amazon data version. A likely explanation for this is that, even though syntactic word order is disturbed by shuffling, distributional information over sentence contexts is still preserved in the perturbed data. We describe experiments with text data having no distributional information in later sections. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_31",
"text": " A possible rationale for explaining the beneficial effect of pretraining for non-linguistic tasks is that irrespective of whether the tasks require non-linguistic reasoning, their format is in language, and hence language models should be able to learn these tasks with fewer examples. To test this hypothesis, we also experiment with models pretrained on text from languages different from English, as well as models pretrained on computer code. These include the following models: Multilingual BERT. Multilingual BERT is pretrained on text from 102 different languages. About 21% of the pretraining text is English. Chinese BERT. Chinese BERT is a BERT model pretrained on Chinese text. Code BERT. CodeBERT Feng et al. (2020) is pretrained on code from six programming languages. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_32",
"text": " In Table 2, we note that all three non-English pretrained LMs significantly outperformed non-pretrained models, with the best performance being comparable or marginally lower than English versions. In fact, Code-BERT surprisingly surpasses ROC by 5%. These findings strongly indicate that the advantages from pretraining have little to do with the format of the tasks, since they persist for scenarios with little shared linguistic structure. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_33",
"text": " Finally, to investigate what happens if we weaken the distributional properties that hold even in the perturbed text versions from Section 6.2, we experiment with pretraining models on synthetic text sampled from simple probability distributions: ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_34",
"text": " Zipf distribution. We select 30k words (types) from the Amazon reviews dataset. Words are picked with a unigram probability that follows Zipf’s word frequency law, which all natural languages empirically follow Piantadosi (2014). For the Zipf distribution, we chose α𝛼\\alpha=1 and β𝛽\\beta=2.7, to match the parameters of most natural languages. The text does not follow any word order. Uniform distribution. In this dataset, words are sampled from the same vocabulary as in ‘Zipf distribution’, but with a uniform unigram probability. The text does not follow any word order. Synthetic Vocabulary. Words are selected with uniform distribution from a vocabulary to form sentences. However, instead of a vocabulary of English words, the words in the vocabulary are also synthetically generated (3 letter combinations of lower-case alphabets). In this text, the words do not possess morphology in addition to no syntax. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_35",
"text": " In Tables 2 and 3, we note that surprisingly, even models pretrained on Zipfian and uniform distribution text continue to outperform the non-pretrained models. In fact, the Zipf version’s best accuracy is 3% higher than the standard Amazon data version and 2% compared to perturbed Amazon shuffled data version in case of BERT. Zipf outperforms standard amazon data by 1% and lags behind amazon shuffle by 3% for DeBERTA. The Uniform distribution version lags behind Zipf by 9% and 2% for BERT and DeBERTa respectively. We note that the Zipf and Uniform versions still use the prebuilt vocabulary from the Amazon data, and hence this text maintains morphological structure. However, the gains finally disappear for the Synthetic vocabulary model, which cannot leverage morphological structure in the text, and its performance is similar to the non-pretrained models. ",
"title": "What do Large Language Models Learn beyond Language?"
},
{
"id": "2210.12302_all_36",
"text": " We explore the non-linguistic inductive biases of pretrained LMs. While the general trend (that pretraining helps) is unsurprising, our analysis with models pretrained on different text corpora shows that this is not due to the model seeing related topics during pretraining. We find that these gains persist even in absence of any shared linguistic structure (in cross-lingual settings). Our observation that this behavior is seen even when pretraining on synthetically generated languages is intriguing and can be explored further by future work. ",
"title": "What do Large Language Models Learn beyond Language?"
}
] |
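The synthetic Zipf-distributed pretraining text described in passage 2210.12302_all_34 can be generated with a short script. The sketch below is a minimal, hypothetical illustration (not the authors' code): it samples word sequences whose unigram frequencies follow a Zipf–Mandelbrot law with the quoted α=1 and β=2.7 over a 30k-word vocabulary. The placeholder vocabulary, the per-line word count, and the corpus size are assumptions.

```python
# Minimal sketch (not the authors' code): sample "sentences" whose word
# frequencies follow a Zipf-Mandelbrot law p(rank) ~ 1 / (rank + beta)^alpha,
# with alpha=1 and beta=2.7 as described in the passage above.
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 30_000                      # assumed: 30k word types
ALPHA, BETA = 1.0, 2.7                   # parameters quoted in the paper

# Hypothetical vocabulary; the paper draws real word types from Amazon reviews.
vocab = [f"word{i}" for i in range(VOCAB_SIZE)]

ranks = np.arange(1, VOCAB_SIZE + 1)
weights = 1.0 / (ranks + BETA) ** ALPHA
probs = weights / weights.sum()          # normalized unigram distribution

def sample_line(n_words: int = 20) -> str:
    """Draw words i.i.d. (no word order) from the Zipfian unigram distribution."""
    idx = rng.choice(VOCAB_SIZE, size=n_words, p=probs)
    return " ".join(vocab[i] for i in idx)

if __name__ == "__main__":
    for _ in range(3):
        print(sample_line())
```

Replacing `probs` with a uniform vector would give the ‘Uniform distribution’ variant, and replacing `vocab` with random 3-letter strings would give the ‘Synthetic Vocabulary’ variant described in the same passage.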
In the reference counter approach for managed allocated memory, is it possible that an unused variable is not cleaned because of circular dependencies?
|
Although the paper mentions that a reference counter is kept while traversing the computation graph, it does not give the detailed algorithm or discuss failure cases such as circular dependencies [21].
|
[
21
] |
[
{
"id": "1512.01274_all_0",
"text": " The scale and complexity of machine learning (ML) algorithms are becoming increasingly large. Almost all recent ImageNet challenge winners employ neural networks with very deep layers, requiring billions of floating-point operations to process one single sample. The rise of structural and computational complexity poses interesting challenges to ML system design and implementation. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_1",
"text": " Most ML systems embed a domain-specific language (DSL) into a host language (e.g. Python, Lua, C++). Possible programming paradigms range from imperative, where the user specifies exactly “how” computation needs to be performed, and declarative, where the user specification focuses on “what” to be done. Examples of imperative programming include numpy and Matlab, whereas packages such as Caffe, CXXNet program over layer definition which abstracts away and hide the inner-working of actual implementation. The dividing line between the two can be muddy at times. Frameworks such as Theano and the more recent Tensorflow can also be viewed as a mixture of both, they declare a computational graph, yet the computation within the graph is imperatively specified. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_2",
"text": " Related to the issue of programming paradigms is how the computation is carried out. Execution can be concrete, where the result is returned right away on the same thread, or asynchronize or delayed, where the statements are gathered and transformed into a dataflow graph as an intermediate representation first, before released to available devices. These two execution models have different implications on how inherent parallelisms are discovered. Concrete execution is restrictive (e.g. parallelized matrix multiplication), whereas asynchronize/delayed execution additionally identified all parallelism within the scope of an instance of dataflow graph automatically. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_3",
"text": " The combination of the programming paradigm and execution model yields a large design space, some of which are more interesting (and valid) than others. In fact, our team has collectively explored a number of them, as does the rest of the community. For example, Minerva combines imperative programming with asynchronize execution. While Theano takes an declarative approach, enabling more global graph-aware optimization. Similar discipline was adopted in Purine2 . Instead, CXXNet adopts declarative programming (over tensor abstraction) and concrete execution, similar to Caffe . Table 1 gives more examples. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_4",
"text": " Our combined new effort resulted in MXNet (or “mix-net”), intending to blend advantages of different approaches. Declarative programming offers clear boundary on the global computation graph, discovering more optimization opportunity, whereas imperative programs offers more flexibility. In the context of deep learning, declarative programming is useful in specifying the computation structure in neural network configurations, while imperative programming are more natural for parameter updates and interactive debugging. We also took the effort to embed into multiple host languages, including C++, Python, R, Go and Julia. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_5",
"text": " Despite the support of multiple languages and combination of different programming paradigm, we are able to fuse the execution to the same backend engine. The engine tracks data dependencies across computation graphs and imperative operations, and schedules them efficiently jointly. We aggressively reduce memory footprint, performing in-place update and memory space reuse whenever possible. Finally, we designed a compact communication API so that a MXNet program runs on multiple machines with little change. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_6",
"text": " Comparing to other open-source ML systems, MXNet provides a superset programming interface to Torch7 , Theano , Chainer and Caffe , and supports more systems such as GPU clusters. Besides supporting the optimization for declarative programs as TensorFlow do, MXNet additionally embed imperative tensor operations to provide more flexibility. MXNet is lightweight, e.g. the prediction codes fit into a single 50K lines C++ source file with no other dependency, and has more languages supports. More detailed comparisons are shown in Table 2. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_7",
"text": " MXNet uses multi-output symbolic expressions, Symbol, declare the computation graph. Symbols are composited by operators, such as simple matrix operations (e.g. “+”), or a complex neural network layer (e.g. convolution layer). An operator can take several input variables, produce more than one output variables, and have internal state variables. A variable can be either free, which we can bind with value later, or an output of another symbol. Figure 3 shows the construction of a multi-layer perception symbol by chaining a variable , which presents the input data, and several layer operators. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_8",
"text": " To evaluate a symbol we need to bind the free variables with data and declare the required outputs. Beside evaluation (“forward”), a symbol supports auto symbolic differentiation (“backward”). Other functions, such as load, save, memory estimation, and visualization, are also provided for symbols. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_9",
"text": " MXNet offers NDArray with imperative tensor computation to fill the gap between the declarative symbolic expression and the host language. Figure 3 shows an example which does matrix-constant multiplication on GPU and then prints the results by numpy.ndarray. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_10",
"text": " NDArray abstraction works seamlessly with the executions declared by Symbol, we can mix the imperative tensor computation of the former with the latter. For example, given a symbolic neural network and the weight updating function, e.g. w=w−ηg𝑤𝑤𝜂𝑔w=w-\\eta g. Then we can implement the gradient descent by ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_11",
"text": " The above is as efficient as the implementation using a single but often much more complex symbolic expression. The reason is that MXNet uses lazy evaluation of NDArray and the backend engine can correctly resolve the data dependency between the two. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_12",
"text": " The KVStore is a distributed key-value store for data synchronization over multiple devices. It supports two primitives: push a key-value pair from a device to the store, and pull the value on a key from the store. In addition, a user-defined updater can specify how to merge the pushed value. Finally, model divergence is controlled via consistency model . Currently, we support the sequential and eventual consistency. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_13",
"text": " The following example implements the distributed gradient descent by data parallelization. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_14",
"text": " where the weight updating function is registered to the KVStore, and each worker repeatedly pulls the newest weight from the store and then pushes out the locally computed gradient. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_15",
"text": " The above mixed implementation has the same performance comparing to a single declarative program, because the actual data push and pull are executed by lazy evaluation, which are scheduled by the backend engine just like others. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_16",
"text": " MXNet ships with tools to pack arbitrary sized examples into a single compact file to facilitate both sequential and random seek. Data iterators are also provided. Data pre-fetching and pre-processing are multi-threaded, reducing overheads due to possible remote file store reads and/or image decoding and transformation. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_17",
"text": " The training module implements the commonly used optimization algorithms, such as stochastic gradient descent. It trains a model on a given symbolic module and data iterators, optionally distributedly if an additional KVStore is provided. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_18",
"text": " A binded symbolic expression is presented as a computation graph for evaluation. Figure 4 shows a part of the graph of both forward and backward of the MLP symbol in Figure 3. Before evaluation, MXNet transforms the graph to optimize the efficiency and allocate memory to internal variables. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_19",
"text": " Graph Optimization. We explore the following straightforward optimizations. We note first that only the subgraph required to obtain the outputs specified during binding is needed. For example, in prediction only the forward graph is needed, while for extracting features from internal layers, the last layers can be skipped. Secondly, operators can be grouped into a single one. For example, a×b+1𝑎𝑏1a\\times b+1 is replaced by a single BLAS or GPU call. Finally, we manually implemented well-optimized “big” operations, such as a layer in neural network. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_20",
"text": " Memory Allocation. Note that each variable’s life time, namely the period between the creation and the last time will be used, is known for a computation graph. So we can reuse memory for non-intersected variables. However, an ideal allocation strategy requires O(n2)𝑂superscript𝑛2O(n^{2}) time complexity, where n𝑛n is the number of variables. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_21",
"text": " We proposed two heuristics strategies with linear time complexity. The first, called inplace, simulates the procedure of traversing the graph, and keeps a reference counter of depended nodes that are not used so far. If the counter reaches zero, the memory is recycled. The second, named co-share, allows two nodes to share a piece of memory if only if they cannot be run in parallel. Exploring co-share imposes one additional dependency constraint. In particular, each time upon scheduling, among the pending paths in the graph, we find the longest path and perform needed memory allocations. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_22",
"text": " In MXNet, each source units, including NDArray, random number generator and temporal space, is registered to the engine with a unique tag. Any operations, such as a matrix operation or data communication, is then pushed into the engine with specifying the required resource tags. The engine continuously schedules the pushed operations for execution if dependencies are resolved. Since there usually exists multiple computation resources such as CPUs, GPUs, and the memory/PCIe buses, the engine uses multiple threads to scheduling the operations for better resource utilization and parallelization. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_23",
"text": " Different to most dataflow engines , our engine tracks mutation operations as an existing resource unit. That is, ours supports the specification of the tags that a operation will write in addition to read. This enables scheduling of array mutations as in numpy and other tensor libraries. It also enables easier memory reuse of parameters, by representing parameter updates as mutating the parameter arrays. It also makes scheduling of some special operations easier. For example, when generating two random numbers with the same random seed, we can inform the engine they will write the seed so that they should not be executed in parallel. This helps reproducibility. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_24",
"text": " We implemented KVStore based on the parameter server (8, 9, 4)(Figure 5). It differs to previous works in two aspects: First, we use the engine to schedule the KVStore operations and manage the data consistency. The strategy not only makes the data synchronization works seamless with computation, and also greatly simplifies the implementation. Second, we adopt an two-level structure. A level-1 server manages the data synchronization between the devices within a single machine, while a level-2 server manages inter-machine synchronization. Outbound data from a level-1 server can be aggregated, reducing bandwidth requirement; intra- and inter-machine synchronization can use different consistency model (e.g. intra- is sequential and inter- is eventual). ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_25",
"text": " We fist compare MXNet with Torch7, Caffe, and TensorFlow on the popular “convnet-benchmarks” . All these systems are compiled with CUDA 7.5 and CUDNN 3 except for TensorFlow, which only supports CUDA 7.0 and CUDNN 2. We use batch size 32 for all networks and run the experiments on a single Nvidia GTX 980 card. Results are shown in Figure 7. As expected that MXNet has similar performance comparing to Torch7 and Caffe, because most computations are spent on the CUDA/CUDNN kernels. TensorFlow is always 2x slower, which might be due its use of a lower CUDNN version. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_26",
"text": " Figure 7 shows the memory usages of the internal variables excepts for the outputs. As can be seen, both “inplace” and “co-share” can effective reduce the memory footprint. Combing them leads to a 2x reduction for all networks during model training, and further improves to 4x for model prediction. For instance, even for the most expensive VGG net, training needs less than 16MB extra. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_27",
"text": " We run the experiment on Amazon EC2 g2.8x instances, each of which is shipped with four Nvidia GK104 GPUs and 10G Ethernet. We train googlenet with batch normalization on the ILSVRC12 dataset which consists of 1.3 million images and 1,000 classes. We fix the learning rate to .05.05.05, momentum to .9.9.9, weight decay to 10−4superscript10410^{-4}, and feed each GPU with 363636 images in one batch. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_28",
"text": " The convergence results are shown in Figure 8. As can be seen, comparing to single machine, the distributed training converges slower at the beginning, but outperforms after 10 data passes. The average cost of a data pass is 14K and 1.4K sec on a single machine and 10 machines, respectively. Consequently, this experiment reveals a super-linear speedup. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_29",
"text": " MXNet is a machine learning library combining symbolic expression with tensor computation to maximize efficiency and flexibility. It is lightweight and embeds in multiple host languages, and can be run in a distributed setting. Experimental results are encouraging. While we continue to explore new design choices, we believe it can already benefit the relevant research community. The codes are available at http://dmlc.io. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
},
{
"id": "1512.01274_all_30",
"text": " Acknowledgment. We sincerely thanks Dave Andersen, Carlos Guestrin, Tong He, Chuntao Hong, Qiang Kou, Hu Shiwen, Alex Smola, Junyuan Xie, Dale Schuurmans and all other contributors. ",
"title": "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems"
}
] |
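The reference-counting question above ([21], passage 1512.01274_all_21) can be made concrete with a small reconstruction. The sketch below is a minimal, hypothetical rendering of the described "inplace" heuristic, not MXNet's actual implementation: while simulating a topological traversal of the computation graph, it keeps a counter of not-yet-served consumers for each node's output and recycles the buffer once the counter reaches zero. Because a symbolic computation graph is acyclic, the counters in this sketch always reach zero; the paper itself does not spell out how (or whether) cycles would be handled, which is why the answer above stays non-committal.

```python
# Minimal sketch (assumptions: a static DAG, one output buffer per node) of the
# "inplace" heuristic described in passage 1512.01274_all_21: while walking the
# graph in topological order, keep a counter of not-yet-served consumers for
# each node's output and recycle its buffer when the counter drops to zero.
from collections import defaultdict

def simulate_inplace(nodes, edges):
    """nodes: node ids in topological order.
    edges: dict mapping a node id to the list of nodes that consume its output.
    Returns a dict node -> buffer id, reusing freed buffers where possible."""
    consumers_left = {n: len(edges.get(n, [])) for n in nodes}
    free_buffers = []            # pool of recycled buffer ids
    next_buffer = 0
    assignment = {}
    inputs_of = defaultdict(list)
    for src, dsts in edges.items():
        for d in dsts:
            inputs_of[d].append(src)

    for n in nodes:
        # Allocate the output buffer of n, preferring a recycled one.
        if free_buffers:
            assignment[n] = free_buffers.pop()
        else:
            assignment[n] = next_buffer
            next_buffer += 1
        # n has now consumed its inputs; decrement their counters.
        for src in inputs_of[n]:
            consumers_left[src] -= 1
            if consumers_left[src] == 0:        # no future reader: recycle
                free_buffers.append(assignment[src])
    return assignment

# Chain a -> b -> c -> d: only two buffers are needed for the four nodes.
print(simulate_inplace(["a", "b", "c", "d"],
                       {"a": ["b"], "b": ["c"], "c": ["d"]}))
```

Running the example prints {'a': 0, 'b': 1, 'c': 0, 'd': 1}, i.e. the buffer of a is recycled for c and the buffer of b for d, which is the kind of reuse the passage attributes to the inplace strategy.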
Does the challenge also include a workshop to discuss the ideas?
|
Yes, the challenge also includes discussion of the challenges of creating this large-scale object recognition benchmark dataset [7].
|
[
7
] |
[
{
"id": "1409.0575_all_0",
"text": " The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.111In this paper, we will be using the term object recognition broadly to encompass both image classification (a task requiring an algorithm to determine what object classes are present in the image) as well as object detection (a task requiring an algorithm to localize all objects present in the image). ILSVRC follows in the footsteps of the PASCAL VOC challenge (Everingham et al.,, 2012), established in 2005, which set the precedent for standardized evaluation of recognition algorithms in the form of yearly competitions. As in PASCAL VOC, ILSVRC consists of two components: (1) a publically available dataset, and (2) an annual competition and corresponding workshop. The dataset allows for the development and comparison of categorical object recognition algorithms, and the competition and workshop provide a way to track the progress and discuss the lessons learned from the most successful and innovative entries each year. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_1",
"text": " The publically released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.222 In 2010, the test annotations were later released publicly; since then the test annotation have been kept hidden. Participants train their algorithms using the training images and then automatically annotate the test images. These predicted annotations are submitted to the evaluation server. Results of the evaluation are revealed at the end of the competition period and authors are invited to share insights at the workshop held at the International Conference on Computer Vision (ICCV) or European Conference on Computer Vision (ECCV) in alternate years. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_2",
"text": " ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_3",
"text": " In creating the dataset, several challenges had to be addressed. Scaling up from 19,737 images in PASCAL VOC 2010 to 1,461,406 in ILSVRC 2010 and from 20 object classes to 1000 object classes brings with it several challenges. It is no longer feasible for a small group of annotators to annotate the data as is done for other datasets (Fei-Fei et al.,, 2004; Criminisi,, 2004; Everingham et al.,, 2012; Xiao et al.,, 2010). Instead we turn to designing novel crowdsourcing approaches for collecting large-scale annotations (Su et al.,, 2012; Deng et al.,, 2009, 2014). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_4",
"text": " Some of the 1000 object classes may not be as easy to annotate as the 20 categories of PASCAL VOC: e.g., bananas which appear in bunches may not be as easy to delineate as the basic-level categories of aeroplanes or cars. Having more than a million images makes it infeasible to annotate the locations of all objects (much less with object segmentations, human body parts, and other detailed annotations that subsets of PASCAL VOC contain). New evaluation criteria have to be defined to take into account the facts that obtaining perfect manual annotations in this setting may be infeasible. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_5",
"text": " Once the challenge dataset was collected, its scale allowed for unprecedented opportunities both in evaluation of object recognition algorithms and in developing new techniques. Novel algorithmic innovations emerge with the availability of large-scale training data. The broad spectrum of object categories motivated the need for algorithms that are even able to distinguish classes which are visually very similar. We highlight the most successful of these algorithms in this paper, and compare their performance with human-level accuracy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_6",
"text": " Finally, the large variety of object classes in ILSVRC allows us to perform an analysis of statistical properties of objects and their impact on recognition algorithms. This type of analysis allows for a deeper understanding of object recognition, and for designing the next generation of general object recognition algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_7",
"text": " This paper has three key goals: 1. To discuss the challenges of creating this large-scale object recognition benchmark dataset, 2. To highlight the developments in object classification and detection that have resulted from this effort, and 3. To take a closer look at the current state of the field of categorical object recognition. The paper may be of interest to researchers working on creating large-scale datasets, as well as to anybody interested in better understanding the history and the current state of large-scale object recognition. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_8",
"text": " The collected dataset and additional information about ILSVRC can be found at: http://image-net.org/challenges/LSVRC/ ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_9",
"text": " We briefly discuss some prior work in constructing benchmark image datasets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_10",
"text": " Caltech 101 (Fei-Fei et al.,, 2004) was among the first standardized datasets for multi-category image classification, with 101 object classes and commonly 15-30 training images per class. Caltech 256 (Griffin et al.,, 2007) increased the number of object classes to 256 and added images with greater scale and background variability. The TinyImages dataset (Torralba et al.,, 2008) contains 80 million 32x32 low resolution images collected from the internet using synsets in WordNet (Miller,, 1995) as queries. However, since this data has not been manually verified, there are many errors, making it less suitable for algorithm evaluation. Datasets such as 15 Scenes (Oliva and Torralba,, 2001; Fei-Fei and Perona,, 2005; Lazebnik et al.,, 2006) or recent Places (Zhou et al.,, 2014) provide a single scene category label (as opposed to an object category). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_11",
"text": " The ImageNet dataset (Deng et al.,, 2009) is the backbone of ILSVRC. ImageNet is an image dataset organized according to the WordNet hierarchy (Miller,, 1995). Each concept in WordNet, possibly described by multiple words or word phrases, is called a “synonym set” or “synset”. ImageNet populates 21,841 synsets of WordNet with an average of 650 manually verified and full resolution images. As a result, ImageNet contains 14,197,122 annotated images organized by the semantic hierarchy of WordNet (as of August 2014). ImageNet is larger in scale and diversity than the other image classification datasets. ILSVRC uses a subset of ImageNet images for training the algorithms and some of ImageNet’s image collection protocols for annotating additional images for testing the algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_12",
"text": " Many datasets aim to provide richer image annotations beyond image-category labels. LabelMe (Russell et al.,, 2007) contains general photographs with multiple objects per image. It has bounding polygon annotations around objects, but the object names are not standardized: annotators are free to choose which objects to label and what to name each object. The SUN2012 (Xiao et al.,, 2010) dataset contains 16,873 manually cleaned up and fully annotated images more suitable for standard object detection training and evaluation. SIFT Flow (Liu et al.,, 2011) contains 2,688 images labeled using the LabelMe system. The LotusHill dataset (Yao et al.,, 2007) contains very detailed annotations of objects in 636,748 images and video frames, but it is not available for free. Several datasets provide pixel-level segmentations: for example, MSRC dataset (Criminisi,, 2004) with 591 images and 23 object classes, Stanford Background Dataset (Gould et al.,, 2009) with 715 images and 8 classes, and the Berkeley Segmentation dataset (Arbelaez et al.,, 2011) with 500 images annotated with object boundaries. OpenSurfaces segments surfaces from consumer photographs and annotates them with surface properties, including material, texture, and contextual information (Bell et al.,, 2013) . ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_13",
"text": " The closest to ILSVRC is the PASCAL VOC dataset (Everingham et al.,, 2010, 2014), which provides a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. Much of the design choices in ILSVRC have been inspired by PASCAL VOC and the similarities and differences between the datasets are discussed at length throughout the paper. ILSVRC scales up PASCAL VOC’s goal of standardized training and evaluation of recognition algorithms by more than an order of magnitude in number of object classes and images: PASCAL VOC 2012 has 20 object classes and 21,738 images compared to ILSVRC2012 with 1000 object classes and 1,431,167 annotated images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_14",
"text": " The recently released COCO dataset (Lin et al., 2014b, ) contains more than 328,000 images with 2.5 million object instances manually segmented. It has fewer object categories than ILSVRC (91 in COCO versus 200 in ILSVRC object detection) but more instances per category (27K on average compared to about 1K in ILSVRC object detection). Further, it contains object segmentation annotations which are not currently available in ILSVRC. COCO is likely to become another important large-scale benchmark. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_15",
"text": " ILSVRC makes extensive use of Amazon Mechanical Turk to obtain accurate annotations (Sorokin and Forsyth,, 2008). Works such as (Welinder et al.,, 2010; Sheng et al.,, 2008; Vittayakorn and Hays,, 2011) describe quality control mechanisms for this marketplace. (Vondrick et al.,, 2012) provides a detailed overview of crowdsourcing video annotation. A related line of work is to obtain annotations through well-designed games, e.g. (von Ahn and Dabbish,, 2005). Our novel approaches to crowdsourcing accurate image annotations are in Sections 3.1.3, 3.2.1 and 3.3.3. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_16",
"text": " There are several datasets with standardized online evaluation similar to ILSVRC: the aforementioned PASCAL VOC (Everingham et al.,, 2012), Labeled Faces in the Wild (Huang et al.,, 2007) for unconstrained face recognition, Reconstruction meets Recognition (Urtasun et al.,, 2014) for 3D reconstruction and KITTI (Geiger et al.,, 2013) for computer vision in autonomous driving. These datasets along with ILSVRC help benchmark progress in different areas of computer vision. Works such as (Torralba and Efros,, 2011) emphasize the importance of examining the bias inherent in any standardized dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_17",
"text": " We begin with a brief overview of ILSVRC challenge tasks in Section 2. Dataset collection and annotation are described at length in Section 3. Section 4 discusses the evaluation criteria of algorithms in the large-scale recognition setting. Section 5 provides an overview of the methods developed by ILSVRC participants. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_18",
"text": " Section 6 contains an in-depth analysis of ILSVRC results: Section 6.1 documents the progress of large-scale recognition over the years, Section 6.2 concludes that ILSVRC results are statistically significant, Section 6.3 thoroughly analyzes the current state of the field of object recognition, and Section 6.4 compares state-of-the-art computer vision accuracy with human accuracy. We conclude and discuss lessons learned from ILSVRC in Section 7. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_19",
"text": " The goal of ILSVRC is to estimate the content of photographs for the purpose of retrieval and automatic annotation. Test images are presented with no initial annotation, and algorithms have to produce labelings specifying what objects are present in the images. New test images are collected and labeled especially for this competition and are not part of the previously published ImageNet dataset (Deng et al.,, 2009). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_20",
"text": " ILSVRC over the years has consisted of one or more of the following tasks (years in parentheses):333In addition, ILSVRC in 2012 also included a taster fine-grained classification task, where algorithms would classify dog photographs into one of 120 dog breeds (Khosla et al.,, 2011). Fine-grained classification has evolved into its own Fine-Grained classification challenge in 2013 (Berg et al.,, 2013), which is outside the scope of this paper. 1. Image classification (2010-2014): Algorithms produce a list of object categories present in the image. 2. Single-object localization (2011-2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of one instance of each object category. 3. Object detection (2013-2014): Algorithms produce a list of object categories present in the image along with an axis-aligned bounding box indicating the position and scale of every instance of each object category. This section provides an overview and history of each of the three tasks. Table 1 shows summary statistics. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_21",
"text": " Data for the image classification task consists of photographs collected from Flickr444www.flickr.com and other search engines, manually labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_22",
"text": " For each image, algorithms produce a list of object categories present in the image. The quality of a labeling is evaluated based on the label that best matches the ground truth label for the image (see Section 4.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_23",
"text": " Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs (Deng et al.,, 2009). The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_24",
"text": " The single-object localization task, introduced in 2011, built off of the image classification task to evaluate the ability of algorithms to learn the appearance of the target object itself rather than its image context. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_25",
"text": " Data for the single-object localization task consists of the same photographs collected for the image classification task, hand labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. Additionally, every instance of this category is annotated with an axis-aligned bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_26",
"text": " For each image, algorithms produce a list of object categories present in the image, along with a bounding box indicating the position and scale of one instance of each object category. The quality of a labeling is evaluated based on the object category label that best matches the ground truth label, with the additional requirement that the location of the predicted instance is also accurate (see Section 4.2). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_27",
"text": " The object detection task went a step beyond single-object localization and tackled the problem of localizing multiple object categories in the image. This task has been a part of the PASCAL VOC for many years on the scale of 20 object categories and tens of thousands of images, but scaling it up by an order of magnitude in object categories and in images proved to be very challenging from a dataset collection and annotation point of view (see Section 3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_28",
"text": " Data for the detection tasks consists of new photographs collected from Flickr using scene-level queries. The images are annotated with axis-aligned bounding boxes indicating the position and scale of every instance of each target object category. The training set is additionally supplemented with (a) data from the single-object localization task, which contains annotations for all instances of just one object category, and (b) negative images known not to contain any instance of some object categories. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_29",
"text": " For each image, algorithms produce bounding boxes indicating the position and scale of all instances of all target object categories. The quality of labeling is evaluated by recall, or number of target object instances detected, and precision, or the number of spurious detections produced by the algorithm (see Section 4.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_30",
"text": " Our process of constructing large-scale object recognition image datasets consists of three key steps. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_31",
"text": " The first step is defining the set of target object categories. To do this, we select from among the existing ImageNet (Deng et al.,, 2009) categories. By using WordNet as a backbone (Miller,, 1995), ImageNet already takes care of disambiguating word meanings and of combining together synonyms into the same object category. Since the selection of object categories needs to be done only once per challenge task, we use a combination of automatic heuristics and manual post-processing to create the list of target categories appropriate for each task. For example, for image classification we may include broader scene categories such as a type of beach, but for single-object localization and object detection we want to focus only on object categories which can be unambiguously localized in images (Sections 3.1.1 and 3.3.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_32",
"text": " The second step is collecting a diverse set of candidate images to represent the selected categories. We use both automatic and manual strategies on multiple search engines to do the image collection. The process is modified for the different ILSVRC tasks. For example, for object detection we focus our efforts on collecting scene-like images using generic queries such as “African safari” to find pictures likely to contain multiple animals in one scene (Section 3.3.2). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_33",
"text": " The third (and most challenging) step is annotating the millions of collected images to obtain a clean dataset. We carefully design crowdsourcing strategies targeted to each individual ILSVRC task. For example, the bounding box annotation system used for localization and detection tasks consists of three distinct parts in order to include automatic crowdsourced quality control (Section 3.2.1). Annotating images fully with all target object categories (on a reasonable budget) for object detection requires an additional hierarchical image labeling system (Section 3.3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_34",
"text": " We describe the data collection and annotation procedure for each of the ILSVRC tasks in order: image classification (Section 3.1), single-object localization (Section 3.2), and object detection (Section 3.3), focusing on the three key steps for each dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_35",
"text": " The image classification task tests the ability of an algorithm to name the objects present in the image, without necessarily localizing them. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_36",
"text": " We describe the choices we made in constructing the ILSVRC image classification dataset: selecting the target object categories from ImageNet (Section 3.1.1), collecting a diverse set of candidate images by using multiple search engines and an expanded set of queries in multiple languages (Section 3.1.2), and finally filtering the millions of collected images using the carefully designed crowdsourcing strategy of ImageNet (Deng et al.,, 2009) (Section 3.1.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_37",
"text": " The 1000 categories used for the image classification task were selected from the ImageNet (Deng et al.,, 2009) categories. The 1000 synsets are selected such that there is no overlap between synsets: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. These synsets are part of the larger hierarchy and may have children in ImageNet; however, for ILSVRC we do not consider their child subcategories. The synset hierarchy of ILSVRC can be thought of as a “trimmed” version of the complete ImageNet hierarchy. Figure 1 visualizes the diversity of the ILSVRC2012 object categories. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_38",
"text": " The exact 1000 synsets used for the image classification and single-object localization tasks have changed over the years. There are 639 synsets which have been used in all five ILSVRC challenges so far. In the first year of the challenge synsets were selected randomly from the available ImageNet synsets at the time, followed by manual filtering to make sure the object categories were not too obscure. With the introduction of the object localization challenge in 2011 there were 321 synsets that changed: categories such as “New Zealand beach” which were inherently difficult to localize were removed, and some new categories from ImageNet containing object localization annotations were added. In ILSVRC2012, 90 synsets were replaced with categories corresponding to dog breeds to allow for evaluation of more fine-grained object classification, as shown in Figure 2. The synsets have remained consistent since year 2012. Appendix A provides the complete list of object categories used in ILSVRC2012-2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_39",
"text": " Image collection for ILSVRC classification task is the same as the strategy employed for constructing ImageNet (Deng et al.,, 2009). Training images are taken directly from ImageNet. Additional images are collected for the ILSVRC using this strategy and randomly partitioned into the validation and test sets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_40",
"text": " We briefly summarize the process; (Deng et al.,, 2009) contains further details. Candidate images are collected from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of retrievable images (on the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the glossary of the target synset. For example, when querying “whippet”, according to WordNet’s glossary a “small slender dog of greyhound type developed in England”, we also use “whippet dog” and “whippet greyhound.” To further enlarge and diversify the candidate pool, we translate the queries into other languages, including Chinese, Spanish, Dutch and Italian. We obtain accurate translations using WordNets in those languages. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_41",
"text": " Annotating images with corresponding object classes follows the strategy employed by ImageNet (Deng et al.,, 2009). We summarize it briefly here. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_42",
"text": " To collect a highly accurate dataset, we rely on humans to verify each candidate image collected in the previous step for a given synset. This is achieved by using Amazon Mechanical Turk (AMT), an online platform on which one can put up tasks for users for a monetary reward. With a global user base, AMT is particularly suitable for large scale labeling. In each of our labeling tasks, we present the users with a set of candidate images and the definition of the target synset (including a link to Wikipedia). We then ask the users to verify whether each image contains objects of the synset. We encourage users to select images regardless of occlusions, number of objects and clutter in the scene to ensure diversity. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_43",
"text": " While users are instructed to make accurate judgment, we need to set up a quality control system to ensure this accuracy. There are two issues to consider. First, human users make mistakes and not all users follow the instructions. Second, users do not always agree with each other, especially for more subtle or confusing synsets, typically at the deeper levels of the tree. The solution to these issues is to have multiple users independently label the same image. An image is considered positive only if it gets a convincing majority of the votes. We observe, however, that different categories require different levels of consensus among users. For example, while five users might be necessary for obtaining a good consensus on “Burmese cat” images, a much smaller number is needed for “cat” images. We develop a simple algorithm to dynamically determine the number of agreements needed for different categories of images. For each synset, we first randomly sample an initial subset of images. At least 10 users are asked to vote on each of these images. We then obtain a confidence score table, indicating the probability of an image being a good image given the consensus among user votes. For each of the remaining candidate images in this synset, we proceed with the AMT user labeling until a pre-determined confidence score threshold is reached. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_44",
"text": " Evaluation of the accuracy of the large-scale crowdsourced image annotation system was done on the entire ImageNet (Deng et al.,, 2009). A total of 80 synsets were randomly sampled at every tree depth of the mammal and vehicle subtrees. An independent group of subjects verified the correctness of each of the images. An average of 99.7%percent99.799.7\\% precision is achieved across the synsets. We expect similar accuracy on ILSVRC image classification dataset since the image annotation pipeline has remained the same. To verify, we manually checked 1500 ILSVRC2012-2014 image classification test set images (the test set has remained unchanged in these three years). We found 5 annotation errors, corresponding as expected to 99.7%percent99.799.7\\% precision. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_45",
"text": " Using the image collection and annotation procedure described in previous sections, we collected a large-scale dataset used for ILSVRC classification task. There are 1000 object classes and approximately 1.2 million training images, 50 thousand validation images and 100 thousand test images. Table 2 (top) documents the size of the dataset over the years of the challenge. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_46",
"text": " The single-object localization task evaluates the ability of an algorithm to localize one instance of an object category. It was introduced as a taster task in ILSVRC 2011, and became an official part of ILSVRC in 2012. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_47",
"text": " The key challenge was developing a scalable crowdsourcing method for object bounding box annotation. Our three-step self-verifying pipeline is described in Section 3.2.1. Having the dataset collected, we perform detailed analysis in Section 3.2.2 to ensure that the dataset is sufficiently varied to be suitable for evaluation of object localization algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_48",
"text": " The object classes for single-object localization task are the same as the object classes for image classification task described above in Section 3.1. The training images for localization task are a subset of the training images used for image classification task, and the validation and test images are the same between both tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_49",
"text": " Recall that for the image classification task every image was annotated with one object class label, corresponding to one object that is present in an image. For the single-object localization task, every validation and test image and a subset of the training images are annotated with axis-aligned bounding boxes around every instance of this object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_50",
"text": " Every bounding box is required to be as small as possible while including all visible parts of the object instance. An alternate annotation procedure could be to annotate the full (estimated) extent of the object: e.g., if a person’s legs are occluded and only the torso is visible, the bounding box could be drawn to include the likely location of the legs. However, this alternative procedure is inherently ambiguous and ill-defined, leading to disagreement among annotators and among researchers (what is the true “most likely” extent of this object?). We follow the standard protocol of only annotating visible object parts (Russell et al.,, 2007; Everingham et al.,, 2010).555Some datasets such as PASCAL VOC (Everingham et al.,, 2010) and LabelMe (Russell et al.,, 2007) are able to provide more detailed annotations: for example, marking individual object instances as being truncated. We chose not to provide this level of detail in favor of annotating more images and more object instances. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_51",
"text": " We summarize the crowdsourced bounding box annotation system described in detail in (Su et al.,, 2012). The goal is to build a system that is fully automated, highly accurate, and cost-effective. Given a collection of images where the object of interest has been verified to exist, for each image the system collects a tight bounding box for every instance of the object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_52",
"text": " There are two requirements: ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_53",
"text": " • Quality Each bounding box needs to be tight, i.e. the smallest among all bounding boxes that contains all visible parts of the object. This facilitates the object detection learning algorithms by providing the precise location of each object instance; • Coverage Every object instance needs to have a bounding box. This is important for training localization algorithms because it tells the learning algorithms with certainty what is not the object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_54",
"text": " The core challenge of building such a system is effectively controlling the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost-effective than consensus-based algorithms. This leads to the following workflow with simple basic subtasks: ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_55",
"text": " 1. Drawing A worker draws one bounding box around one instance of an object on the given image. 2. Quality verification A second worker checks if the bounding box is correctly drawn. 3. Coverage verification A third worker checks if all object instances have bounding boxes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_56",
"text": " The sub-tasks are designed following two principles. First, the tasks are made as simple as possible. For example, instead of asking the worker to draw all bounding boxes on the same image, we ask the worker to draw only one. This reduces the complexity of the task. Second, each task has a fixed and predictable amount of work. For example, assuming that the input images are clean (object presence is correctly verified) and the coverage verification tasks give correct results, the amount of work of the drawing task is always that of providing exactly one bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_57",
"text": " Quality control on Tasks 2 and 3 is implemented by embedding “gold standard” images where the correct answer is known. Worker training for each of these subtasks is described in detail in (Su et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_58",
"text": " The system is evaluated on 10 categories with ImageNet (Deng et al.,, 2009): balloon, bear, bed, bench, beach, bird, bookshelf, basketball hoop, bottle, and people. A subset of 200 images are randomly sampled from each category. On the image level, our evaluation shows that 97.9%percent97.997.9\\% images are completely covered with bounding boxes. For the remaining 2.1%percent2.12.1\\%, some bounding boxes are missing. However, these are all difficult cases: the size is too small, the boundary is blurry, or there is strong shadow. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_59",
"text": " On the bounding box level, 99.2%percent99.299.2\\% of all bounding boxes are accurate (the bounding boxes are visibly tight). The remaining 0.8%percent0.80.8\\% are somewhat off. No bounding boxes are found to have less than 50%percent5050\\% intersection over union overlap with ground truth. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_60",
"text": " Additional evaluation of the overall cost and an analysis of quality control can be found in (Su et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_61",
"text": " Using the annotation procedure described above, we collect a large set of bounding box annotations for the ILSVRC single-object classification task. All 50 thousand images in the validation set and 100 thousand images in the test set are annotated with bounding boxes around all instances of the ground truth object class (one object class per image). In addition, in ILSVRC2011 25%percent2525\\% of training images are annotated with bounding boxes the same way, yielding more than 310 thousand annotated images with more than 340 thousand annotated object instances. In ILSVRC2012 40%percent4040\\% of training images are annotated, yielding more than 520 thousand annotated images with more than 590 thousand annotated object instances. Table 2 (bottom) documents the size of this dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_62",
"text": " In addition to the size of the dataset, we also analyze the level of difficulty of object localization in these images compared to the PASCAL VOC benchmark. We compute statistics on the ILSVRC2012 single-object localization validation set images compared to PASCAL VOC 2012 validation images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_63",
"text": " Real-world scenes are likely to contain multiple instances of some objects, and nearby object instances are particularly difficult to delineate. The average object category in ILSVRC has 1.611.611.61 target object instances on average per positive image, with each instance having on average 0.470.470.47 neighbors (adjacent instances of the same object category). This is comparable to 1.691.691.69 instances per positive image and 0.520.520.52 neighbors per instance for an average object class in PASCAL. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_64",
"text": " As described in (Hoiem et al.,, 2012), smaller objects tend to be significantly more difficult to localize. In the average object category in PASCAL the object occupies 24.1%percent24.124.1\\% of the image area, and in ILSVRC 35.8%percent35.835.8\\%. However, PASCAL has only 20 object categories while ILSVRC has 1000. The 537 object categories of ILSVRC with the smallest objects on average occupy the same fraction of the image as PASCAL objects: 24.1%percent24.124.1\\%. Thus even though on average the object instances tend to be bigger in ILSVRC images, there are more than 25 times more object categories than in PASCAL VOC with the same average object scale. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_65",
"text": " Appendix B and (Russakovsky et al.,, 2013) have additional comparisons. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_66",
"text": " The ILSVRC task of object detection evaluates the ability of an algorithm to name and localize all instances of all target objects present in an image. It is much more challenging than object localization because some object instances may be small/occluded/difficult to accurately localize, and the algorithm is expected to locate them all, not just the one it finds easiest. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_67",
"text": " There are three key challenges in collecting the object detection dataset. The first challenge is selecting the set of common objects which tend to appear in cluttered photographs and are well-suited for benchmarking object detection performance. Our approach relies on statistics of the object localization dataset and the tradition of the PASCAL VOC challenge (Section 3.3.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_68",
"text": " The second challenge is obtaining a much more varied set of scene images than those used for the image classification and single-object localization datasets. Section 3.3.2 describes the procedure for utilizing as much data from the single-object localization dataset as possible and supplementing it with Flickr images queried using hundreds of manually designed high-level queries. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_69",
"text": " The third, and biggest, challenge is completely annotating this dataset with all the objects. This is done in two parts. Section 3.3.3 describes the first part: our hierarchical strategy for obtaining the list of all target objects which occur within every image. This is necessary since annotating in a straight-forward way by creating a task for every (image, object class) pair is no longer feasible at this scale. Appendix E describes the second part: annotating the bounding boxes around these objects, using the single-object localization bounding box annotation pipeline of Section 3.2.1 along with extra verification to ensure that every instance of the object is annotated with exactly one bounding box. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_70",
"text": " There are 200 object classes hand-selected for the detection task, eacg corresponding to a synset within ImageNet. These were chosen to be mostly basic-level object categories that would be easy for people to identify and label. The rationale is that the object detection system developed for this task can later be combined with a fine-grained classification model to further classify the objects if a finer subdivision is desired.666Some of the training objects are actually annotated with more detailed classes: for example, one of the 200 object classes is the category “dog,” and some training instances are annotated with the specific dog breed. As with the 1000 classification classes, the synsets are selected such that there is no overlap: for any synsets i𝑖i and j𝑗j, i𝑖i is not an ancestor of j𝑗j in the ImageNet hierarchy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_71",
"text": " The selection of the 200 object detection classes in 2013 was guided by the ILSVRC 2012 classification and localization dataset. Starting with 1000 object classes and their bounding box annotations we first eliminated all object classes which tended to be too “big” in the image (on average the object area was greater than 50%percent5050\\% of the image area). These were classes such as T-shirt, spiderweb, or manhole cover. We then manually eliminated all classes which we did not feel were well-suited for detection, such as hay, barbershop, or poncho. This left 494 object classes which were merged into basic-level categories: for example, different species of birds were merged into just the “bird” class. The classes remained the same in ILSVRC2014. Appendix D contains the complete list of object categories used in ILSVRC2013-2014 (in the context of the hierarchy described in Section 3.3.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_72",
"text": " Staying mindful of the tradition of the PASCAL VOC dataset we also tried to ensure that the set of 200 classes contains as many of the 20 PASCAL VOC classes as possible. Table 3 shows the correspondences. The changes that were done were to ensure more accurate and consistent crowdsourced annotations. The object class with the weakest correspondence is “potted plant” in PASCAL VOC, corresponding to “flower pot” in ILSVRC. “Potted plant” was one of the most challenging object classes to annotate consistently among the PASCAL VOC classes, and in order to obtain accurate annotations using crowdsourcing we had to restrict the definition to a more concrete object. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_73",
"text": " Many images for the detection task were collected differently than the images in ImageNet and the classification and single-object localization tasks. Figure 3 summarizes the types of images that were collected. Ideally all of these images would be scene images fully annotated with all target categories. However, given budget constraints our goal was to provide as much suitable detection data as possible, even if the images were drawn from a few different sources and distributions. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_74",
"text": " The validation and test detection set images come from two sources (percent of images from each source in parentheses). The first source (77%)percent77(77\\%) is images from ILSVRC2012 single-object localization validation and test sets corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). Images where the target object occupied more than 50%percent5050\\% of the image area were discarded, since they were unlikely to contain other objects of interest. The second source (23%)percent23(23\\%) is images from Flickr collected specifically for detection task. We queried Flickr using a large set of manually defined queries, such as “kitchenette” or “Australian zoo” to retrieve images of scenes likely to contain several objects of interest. Appendix C contains the full list. We also added pairwise queries, or queries with two target object names such as “tiger lion,” which also often returned cluttered scenes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_75",
"text": " Figure 4 shows a random set of both types of validation images. Images were randomly split, with 33%percent3333\\% going into the validation set and 67%percent6767\\% into the test set.777The validation/test split is consistent with ILSVRC2012: validation images of ILSVRC2012 remained in the validation set of ILSVRC2013, and ILSVRC2012 test images remained in ILSVRC2013 test set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_76",
"text": " The training set for the detection task comes from three sources of images (percent of images from each source in parentheses). The first source (63%)percent63(63\\%) is all training images from ILSVRC2012 single-object localization task corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). We did not filter by object size, allowing teams to take advantage of all the positive examples available. The second source (24%)percent24(24\\%) is negative images which were part of the original ImageNet collection process but voted as negative: for example, some of the images were collected from Flickr and search engines for the ImageNet synset “animals” but during the manual verification step did not collect enough votes to be considered as containing an “animal.” These images were manually re-verified for the detection task to ensure that they did not in fact contain the target objects. The third source (13%)percent13(13\\%) is images collected from Flickr specifically for the detection task. These images were added for ILSVRC2014 following the same protocol as the second type of images in the validation and test set. This was done to bring the training and testing distributions closer together. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_77",
"text": " The key challenge in annotating images for the object detection task is that all objects in all images need to be labeled. Suppose there are N inputs (images) which need to be annotated with the presence or absence of K labels (objects). A naïve approach would query humans for each combination of input and label, requiring NK𝑁𝐾NK queries. However, N and K can be very large and the cost of this exhaustive approach quickly becomes prohibitive. For example, annotating 60,0006000060,000 validation and test images with the presence or absence of 200200200 object classes for the detection task naïvely would take 808080 times more effort than annotating 150,000150000150,000 validation and test images with 111 object each for the classification task – and this is not even counting the additional cost of collecting bounding box annotations around each object instance. This quickly becomes infeasible. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
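As a sanity check on the cost comparison above, the arithmetic can be reproduced in a couple of lines (the counts come straight from the passage; the variable names are ours):

```python
# Naive multilabel annotation: one yes/no query per (image, label) pair.
detection_queries = 60_000 * 200        # 12,000,000 presence/absence queries
classification_queries = 150_000 * 1    # 150,000 single-label verification queries

print(detection_queries / classification_queries)  # 80.0 -> "80 times more effort"
```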
{
"id": "1409.0575_all_78",
"text": " In (Deng et al.,, 2014) we study strategies for scalable multilabel annotation, or for efficiently acquiring multiple labels from humans for a collection of items. We exploit three key observations for labels in real world applications (illustrated in Figure LABEL:fig:chipull): ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_79",
"text": " 1. Correlation. Subsets of labels are often highly correlated. Objects such as a computer keyboard, mouse and monitor frequently co-occur in images. Similarly, some labels tend to all be absent at the same time. For example, all objects that require electricity are usually absent in pictures taken outdoors. This suggests that we could potentially “fill in” the values of multiple labels by grouping them into only one query for humans. Instead of checking if dog, cat, rabbit etc. are present in the photo, we just check about the “animal” group If the answer is no, then this implies a no for all categories in the group. 2. Hierarchy. The above example of grouping dog, cat, rabbit etc. into animal has implicitly assumed that labels can be grouped together and humans can efficiently answer queries about the group as a whole. This brings up our second key observation: humans organize semantic concepts into hierarchies and are able to efficiently categorize at higher semantic levels (Thorpe et al.,, 1996), e.g. humans can determine the presence of an animal in an image as fast as every type of animal individually. This leads to substantial cost savings. 3. Sparsity. The values of labels for each image tend to be sparse, i.e. an image is unlikely to contain more than a dozen types of objects, a small fraction of the hundreds of object categories. This enables rapid elimination of many objects by quickly filling in no. With a high degree of sparsity, an efficient algorithm can have a cost which grows logarithmically with the number of objects instead of linearly. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_80",
"text": " We propose algorithmic strategies that exploit the above intuitions. The key is to select a sequence of queries for humans such that we achieve the same labeling results with only a fraction of the cost of the naïve approach. The main challenges include how to measure cost and utility of queries, how to construct good queries, and how to dynamically order them. A detailed description of the generic algorithm, along with theoretical analysis and empirical evaluation, is presented in (Deng et al.,, 2014). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_81",
"text": " The generic algorithm automatically selects the most informative queries to ask based on object label statistics learned from the training set. In our case of 200 object classes, since obtaining the training set was by itself challenging we chose to design the queries by hand. We created a hierarchy of queries of the type “is there a… in the image?” For example, one of the high-level questions was “is there an animal in the image?” We ask the crowd workers this question about every image we want to label. The children of the “animal” question would correspond to specific examples of animals: for example, “is there a mammal in the image?” or “is there an animal with no legs?” To annotate images efficiently, these questions are asked only on images determined to contain an animal. The 200 leaf node questions correspond to the 200 target objects, e.g., “is there a cat in the image?”. A few sample iterations of the algorithm are shown in Figure 6. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
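To make the hierarchical querying concrete, here is a minimal sketch of the idea described above, assuming a hypothetical toy question tree; it illustrates the prose (prune an entire subtree on a "no" answer), not the authors' exact Algorithm 1:

```python
from collections import deque

# Toy question tree: internal nodes are broad crowd questions, leaves map to target labels.
TREE = {
    "is there an animal?": {"children": ["is there a mammal?", "is there an animal with no legs?"], "label": None},
    "is there a mammal?": {"children": ["is there a cat?", "is there a dog?"], "label": None},
    "is there an animal with no legs?": {"children": ["is there a snake?"], "label": None},
    "is there a cat?": {"children": [], "label": "cat"},
    "is there a dog?": {"children": [], "label": "dog"},
    "is there a snake?": {"children": [], "label": "snake"},
}

def label_image(ask, root="is there an animal?"):
    """Return the set of labels present in an image; `ask(question)` stands in for a crowd query."""
    present, queue = set(), deque([root])
    while queue:
        question = queue.popleft()
        if not ask(question):          # a "no" prunes the entire subtree below this question
            continue
        node = TREE[question]
        if node["label"]:
            present.add(node["label"])
        queue.extend(node["children"])
    return present

# Simulated worker who answers "yes" only along the path leading to "dog".
yes = {"is there an animal?", "is there a mammal?", "is there a dog?"}
print(label_image(lambda q: q in yes))  # {'dog'}
```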
{
"id": "1409.0575_all_82",
"text": " Algorithm 1 is the formal algorithm for labeling an image with the presence or absence of each target object category. With this algorithm in mind, the hierarchy of questions was constructed following the principle that false positives only add extra cost whereas false negatives can significantly affect the quality of the labeling. Thus, it is always better to stick with more general but less ambiguous questions, such as “is there a mammal in the image?” as opposed to asking overly specific but potentially ambiguous questions, such as “is there an animal that can climb trees?” Constructing this hierarchy was a surprisingly time-consuming process, involving multiple iterations to ensure high accuracy of labeling and avoid question ambiguity. Appendix D shows the constructed hierarchy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_83",
"text": " Once all images are labeled with the presence or absence of all object categories we use the bounding box system described in Section 3.2.1 along with some additional modifications of Appendix E to annotate the location of every instance of every present object category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_84",
"text": " Using the procedure described above, we collect a large-scale dataset for ILSVRC object detection task. There are 200 object classes and approximately 450K training images, 20K validation images and 40K test images. Table 4 documents the size of the dataset over the years of the challenge. The major change between ILSVRC2013 and ILSVRC2014 was the addition of 60,658 fully annotated training images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_85",
"text": " Prior to ILSVRC, the object detection benchmark was the PASCAL VOC challenge (Everingham et al.,, 2010). ILSVRC has 101010 times more object classes than PASCAL VOC (200 vs 20), 10.610.610.6 times more fully annotated training images (60,658 vs 5,717), 35.235.235.2 times more training objects (478,807 vs 13,609), 3.53.53.5 times more validation images (20,121 vs 5823) and 3.53.53.5 times more validation objects (55,501 vs 15,787). ILSVRC has 2.82.82.8 annotated objects per image on the validation set, compared to 2.72.72.7 in PASCAL VOC. The average object in ILSVRC takes up 17.0%percent17.017.0\\% of the image area and in PASCAL VOC takes up 20.7%percent20.720.7\\%; Table 3 contains per-class comparisons. Additionally, ILSVRC contains a wide variety of objects, including tiny objects such as sunglasses (1.3%percent1.31.3\\% of image area on average), ping-pong balls (1.5%percent1.51.5\\% of image area on average) and basketballs (2.0%percent2.02.0\\% of image area on average). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_86",
"text": " Once the dataset has been collected, we need to define a standardized evaluation procedure for algorithms. Some measures have already been established by datasets such as the Caltech 101 (Fei-Fei et al.,, 2004) for image classification and PASCAL VOC (Everingham et al.,, 2012) for both image classification and object detection. To adapt these procedures to the large-scale setting we had to address three key challenges. First, for the image classification and single-object localization tasks only one object category could be labeled in each image due to the scale of the dataset. This created potential ambiguity during evaluation (addressed in Section 4.1). Second, evaluating localization of object instances is inherently difficult in some images which contain a cluster of objects (addressed in Section 4.2). Third, evaluating localization of object instances which occupy few pixels in the image is challenging (addressed in Section 4.3). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_87",
"text": " In this section we describe the standardized evaluation criteria for each of the three ILSVRC tasks. We elaborate further on these and other more minor challenges with large-scale evaluation. Appendix F describes the submission protocol and other details of running the competition itself. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_88",
"text": " The scale of ILSVRC classification task (1000 categories and more than a million of images) makes it very expensive to label every instance of every object in every image. Therefore, on this dataset only one object category is labeled in each image. This creates ambiguity in evaluation. For example, an image might be labeled as a “strawberry” but contain both a strawberry and an apple. Then an algorithm would not know which one of the two objects to name. For the image classification task we allowed an algorithm to identify multiple (up to 5) objects in an image and not be penalized as long as one of the objects indeed corresponded to the ground truth label. Figure 7(top row) shows some examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_89",
"text": " Concretely, each image i𝑖i has a single class label Cisubscript𝐶𝑖C_{i}. An algorithm is allowed to return 5 labels ci1,…ci5subscript𝑐𝑖1…subscript𝑐𝑖5c_{i1},\\dots c_{i5}, and is considered correct if cij=Cisubscript𝑐𝑖𝑗subscript𝐶𝑖c_{ij}=C_{i} for some j𝑗j. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_90",
"text": " Let the error of a prediction dij=d(cij,Ci)subscript𝑑𝑖𝑗𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖d_{ij}=d(c_{ij},C_{i}) be 111 if cij≠Cisubscript𝑐𝑖𝑗subscript𝐶𝑖c_{ij}\\neq C_{i} and 00 otherwise. The error of an algorithm is the fraction of test images on which the algorithm makes a mistake: error =1N∑i=1Nminjdijabsent1𝑁superscriptsubscript𝑖1𝑁subscript𝑗subscript𝑑𝑖𝑗\\displaystyle=\\frac{1}{N}\\sum_{i=1}^{N}\\min_{j}d_{ij} (1) ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
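As a concrete reading of Eq. 1, the sketch below computes the top-5 (and top-1) classification error from ground-truth labels and ranked prediction lists; the data and names are illustrative, not from the official challenge toolkit:

```python
def topk_error(ground_truth, predictions, k=5):
    """Fraction of images whose label C_i is absent from the top-k predicted labels (Eq. 1)."""
    mistakes = sum(1 for C_i, preds in zip(ground_truth, predictions) if C_i not in preds[:k])
    return mistakes / len(ground_truth)

gt = ["dog", "cat", "strawberry"]
preds = [["dog", "wolf", "fox", "coyote", "dingo"],     # correct at rank 1
         ["tiger", "lion", "cat", "lynx", "leopard"],   # correct at rank 3
         ["apple", "pear", "peach", "plum", "cherry"]]  # ground truth missed entirely
print(topk_error(gt, preds, k=5))  # 0.333... (1 of 3 images wrong under top-5)
print(topk_error(gt, preds, k=1))  # 0.666... (only the first image correct under top-1)
```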
{
"id": "1409.0575_all_91",
"text": " We used two additional measures of error. First, we evaluated top-1 error. In this case algorithms were penalized if their highest-confidence output label ci1subscript𝑐𝑖1c_{i1} did not match ground truth class Cisubscript𝐶𝑖C_{i}. Second, we evaluated hierarchical error. The intuition is that confusing two nearby classes (such as two different breeds of dogs) is not as harmful as confusing a dog for a container ship. For the hierarchical criteria, the cost of one misclassification, d(cij,Ci)𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖d(c_{ij},C_{i}), is defined as the height of the lowest common ancestor of cijsubscript𝑐𝑖𝑗c_{ij} and Cisubscript𝐶𝑖C_{i} in the ImageNet hierarchy. The height of a node is the length of the longest path to a leaf node (leaf nodes have height zero). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_92",
"text": " However, in practice we found that all three measures of error (top-5, top-1, and hierarchical) produced the same ordering of results. Thus, since ILSVRC2012 we have been exclusively using the top-5 metric which is the simplest and most suitable to the dataset. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_93",
"text": " The evaluation for single-object localization is similar to object classification, again using a top-5 criteria to allow the algorithm to return unannotated object classes without penalty. However, now the algorithm is considered correct only if it both correctly identifies the target class Cisubscript𝐶𝑖C_{i} and accurately localizes one of its instances. Figure 7(middle row) shows some examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_94",
"text": " Concretely, an image is associated with object class Cisubscript𝐶𝑖C_{i}, with all instances of this object class annotated with bounding boxes Biksubscript𝐵𝑖𝑘B_{ik}. An algorithm returns {(cij,bij)}j=15superscriptsubscriptsubscript𝑐𝑖𝑗subscript𝑏𝑖𝑗𝑗15\\{(c_{ij},b_{ij})\\}_{j=1}^{5} of class labels cijsubscript𝑐𝑖𝑗c_{ij} and associated locations bijsubscript𝑏𝑖𝑗b_{ij}. The error of a prediction j𝑗j is: dijsubscript𝑑𝑖𝑗\\displaystyle d_{ij} =max(d(cij,Ci),minkd(bij,Bik))absent𝑑subscript𝑐𝑖𝑗subscript𝐶𝑖subscript𝑘𝑑subscript𝑏𝑖𝑗subscript𝐵𝑖𝑘\\displaystyle=\\max(d(c_{ij},C_{i}),\\min_{k}d(b_{ij},B_{ik})) (2) Here d(bij,Bik)𝑑subscript𝑏𝑖𝑗subscript𝐵𝑖𝑘d(b_{ij},B_{ik}) is the error of localization, defined as 00 if the area of intersection of boxes bijsubscript𝑏𝑖𝑗b_{ij} and Biksubscript𝐵𝑖𝑘B_{ik} divided by the areas of their union is greater than 0.50.50.5, and 111 otherwise. (Everingham et al.,, 2010) The error of an algorithm is computed as in Eq. 1. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
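The localization criterion of Eq. 2 can be sketched as follows, with a plain axis-aligned IOU helper; boxes are (x1, y1, x2, y2) tuples and all names are ours rather than the official evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def localization_error(C_i, gt_boxes, guesses):
    """Eq. 2: 0 if any of the (up to 5) guesses names C_i and overlaps some
    ground-truth instance with IOU >= 0.5, else 1."""
    for label, box in guesses:
        if label == C_i and any(iou(box, B) >= 0.5 for B in gt_boxes):
            return 0
    return 1

print(localization_error("dog", [(10, 10, 60, 60)],
                         [("dog", (12, 8, 58, 62)), ("cat", (0, 0, 5, 5))]))  # 0
```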
{
"id": "1409.0575_all_95",
"text": " Evaluating localization is inherently difficult in some images. Consider a picture of a bunch of bananas or a carton of apples. It is easy to classify these images as containing bananas or apples, and even possible to localize a few instances of each fruit. However, in order for evaluation to be accurate every instance of banana or apple needs to be annotated, and that may be impossible. To handle the images where localizing individual object instances is inherently ambiguous we manually discarded 3.5%percent3.53.5\\% of images since ILSVRC2012. Some examples of discarded images are shown in Figure 8. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_96",
"text": " The criteria for object detection was adopted from PASCAL VOC (Everingham et al.,, 2010). It is designed to penalize the algorithm for missing object instances, for duplicate detections of one instance, and for false positive detections. Figure 7(bottom row) shows examples. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_97",
"text": " For each object class and each image Iisubscript𝐼𝑖I_{i}, an algorithm returns predicted detections (bij,sij)subscript𝑏𝑖𝑗subscript𝑠𝑖𝑗(b_{ij},s_{ij}) of predicted locations bijsubscript𝑏𝑖𝑗b_{ij} with confidence scores sijsubscript𝑠𝑖𝑗s_{ij}. These detections are greedily matched to the ground truth boxes {Bik}subscript𝐵𝑖𝑘\\{B_{ik}\\} using Algorithm 2. For every detection j𝑗j on image i𝑖i the algorithm returns zij=1subscript𝑧𝑖𝑗1z_{ij}=1 if the detection is matched to a ground truth box according to the threshold criteria, and 00 otherwise. For a given object class, let N𝑁N be the total number of ground truth instances across all images. Given a threshold t𝑡t, define recall as the fraction of the N𝑁N objects detected by the algorithm, and precision as the fraction of correct detections out of the total detections returned by the algorithm. Concretely, Recall(t)𝑅𝑒𝑐𝑎𝑙𝑙𝑡\\displaystyle Recall(t) =∑ij1(sij≥t)zijNabsentsubscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡subscript𝑧𝑖𝑗𝑁\\displaystyle=\\frac{\\sum_{ij}1(s_{ij}\\geq t)z_{ij}}{N} (3) Precision(t)𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛𝑡\\displaystyle Precision(t) =∑ij1(sij≥t)zij∑ij1(sij≥t)absentsubscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡subscript𝑧𝑖𝑗subscript𝑖𝑗1delimited-()subscript𝑠𝑖𝑗𝑡\\displaystyle=\\frac{\\sum_{ij}1(s_{ij}\\geq t)z_{ij}}{\\sum_{ij}1(s_{ij}\\geq t)} (4) ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
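The sketch below illustrates Eqs. 3 and 4 for a single object class: detections are sorted by confidence and greedily matched to as-yet-unmatched ground-truth boxes. It is a simplified stand-in for Algorithm 2, using a fixed 0.5 IOU threshold and hypothetical toy data:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def greedy_match(detections, gt_boxes, thr=0.5):
    """detections: (score, box) pairs for one class on one image, in any order.
    Returns (scores, z) where z[j] = 1 if detection j matched an unused ground-truth box."""
    scores, z, used = [], [], set()
    for score, box in sorted(detections, key=lambda d: -d[0]):
        overlaps = [(iou(box, B), k) for k, B in enumerate(gt_boxes) if k not in used]
        best_iou, best_k = max(overlaps, default=(0.0, None))
        hit = best_iou >= thr
        scores.append(score)
        z.append(1 if hit else 0)
        if hit:
            used.add(best_k)
    return scores, z

def precision_recall(scores, z, n_gt, t):
    """Eqs. 3 and 4 at confidence threshold t."""
    kept = [zj for s, zj in zip(scores, z) if s >= t]
    tp = sum(kept)
    return (tp / len(kept) if kept else 1.0), tp / n_gt

# Two ground-truth boxes, three detections (one is a duplicate of the first object).
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(0.9, (1, 1, 11, 11)), (0.8, (0, 0, 10, 10)), (0.6, (21, 19, 31, 29))]
s, z = greedy_match(dets, gts)
print(precision_recall(s, z, n_gt=len(gts), t=0.5))  # (0.666..., 1.0): duplicate counts as a false positive
```

Sweeping the threshold t over the returned confidence scores traces out the precision-recall curve whose average precision is the per-class metric described next.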
{
"id": "1409.0575_all_98",
"text": " The final metric for evaluating an algorithm on a given object class is average precision over the different levels of recall achieved by varying the threshold t𝑡t. The winner of each object class is then the team with the highest average precision, and then winner of the challenge is the team that wins on the most object classes.888In this paper we focus on the mean average precision across all categories as the measure of a team’s performance. This is done for simplicity and is justified since the ordering of teams by mean average precision was always the same as the ordering by object categories won. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_99",
"text": " Evaluating localization of object instances which occupy very few pixels in the image is challenging. The PASCAL VOC approach was to label such instances as “difficult” and ignore them during evaluation. However, since ILSVRC contains a more diverse set of object classes including, for example, “nail” and “ping pong ball” which have many very small instances, it is important to include even very small object instances in evaluation. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_100",
"text": " In Algorithm 2, a predicted bounding box b𝑏b is considered to have properly localized by a ground truth bounding box B𝐵B if IOU(b,B)≥thr(B)𝐼𝑂𝑈𝑏𝐵thr𝐵IOU(b,B)\\geq\\mbox{thr}(B). The PASCAL VOC metric uses the threshold thr(B)=0.5thr𝐵0.5\\mbox{thr}(B)=0.5. However, for small objects even deviations of a few pixels would be unacceptable according to this threshold. For example, consider an object B𝐵B of size 10×10101010\\times 10 pixels, with a detection window of 20×20202020\\times 20 pixels which fully contains that object. This would be an error of approximately 555 pixels on each dimension, which is average human annotation error. However, the IOU in this case would be 100/400=0.251004000.25100/400=0.25, far below the threshold of 0.50.50.5. Thus for smaller objects we loosen the threshold in ILSVRC to allow for the annotation to extend up to 5 pixels on average in each direction around the object. Concretely, if the ground truth box B𝐵B is of dimensions w×h𝑤ℎw\\times h then thr(B)=min(0.5,wh(w+10)(h+10))thr𝐵0.5𝑤ℎ𝑤10ℎ10\\mbox{thr}(B)=\\min\\left(0.5,\\frac{wh}{(w+10)(h+10)}\\right) (5) In practice, this changes the threshold only on objects which are smaller than approximately 25×25252525\\times 25 pixels, and affects 5.5%percent5.55.5\\% of objects in the detection validation set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
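Eq. 5 is a one-line function; a small sketch, reproducing the 10x10-pixel example from the text (names are ours):

```python
def ilsvrc_iou_threshold(w, h):
    """Scale-adaptive IOU threshold of Eq. 5: relaxes the PASCAL 0.5 threshold so that
    roughly 5 extra pixels per side are tolerated around small ground-truth boxes."""
    return min(0.5, (w * h) / ((w + 10) * (h + 10)))

print(ilsvrc_iou_threshold(10, 10))   # 0.25 -> the fully containing 20x20 window now passes
print(ilsvrc_iou_threshold(100, 80))  # 0.5  -> larger objects keep the usual threshold
```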
{
"id": "1409.0575_all_101",
"text": " One additional practical consideration for ILSVRC detection evaluation is subtle and comes directly as a result of the scale of ILSVRC. In PASCAL, algorithms would often return many detections per class on the test set, including ones with low confidence scores. This allowed the algorithms to reach the level of high recall at least in the realm of very low precision. On ILSVRC detection test set if an algorithm returns 10 bounding boxes per object per image this would result in 10×200×40K=801020040𝐾8010\\times 200\\times 40K=80M detections. Each detection contains an image index, a class index, 4 bounding box coordinates, and the confidence score, so it takes on the order of 28 bytes. The full set of detections would then require 2.242.242.24Gb to store and submit to the evaluation server, which is impractical. This means that algorithms are implicitly required to limit their predictions to only the most confident locations. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
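The storage estimate above checks out as quick arithmetic (decimal gigabytes assumed):

```python
detections = 10 * 200 * 40_000      # 10 boxes/object x 200 classes x 40K test images = 80M
bytes_per_detection = 28            # image index, class index, 4 coordinates, confidence score
print(detections * bytes_per_detection / 1e9)  # 2.24 (GB)
```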
{
"id": "1409.0575_all_102",
"text": " The ILSVRC dataset and the competition has allowed significant algorithmic advances in large-scale image recognition and retrieval. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_103",
"text": " This section is organized chronologically, highlighting the particularly innovative and successful methods which participated in the ILSVRC each year. Tables LABEL:table:sub10-12, LABEL:table:sub13 and LABEL:table:sub14 list all the participating teams. We see a turning point in 2012 with the development of large-scale convolutional neural networks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_104",
"text": " The first year the challenge consisted of just the classification task. The winning entry from NEC team (Lin et al.,, 2011) used SIFT (Lowe,, 2004) and LBP (Ahonen et al.,, 2006) features with two non-linear coding representations (Zhou et al.,, 2010; Wang et al.,, 2010) and a stochastic SVM. The honorable mention XRCE team (Perronnin et al.,, 2010) used an improved Fisher vector representation (Perronnin and Dance,, 2007) along with PCA dimensionality reduction and data compression followed by a linear SVM. Fisher vector-based methods have evolved over five years of the challenge and continued performing strongly in every ILSVRC from 2010 to 2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_105",
"text": " The winning classification entry in 2011 was the 2010 runner-up team XRCE, applying high-dimensional image signatures (Perronnin et al.,, 2010) with compression using product quantization (Sanchez and Perronnin,, 2011) and one-vs-all linear SVMs. The single-object localization competition was held for the first time, with two brave entries. The winner was the UvA team using a selective search approach to generate class-independent object hypothesis regions (van de Sande et al., 2011b, ), followed by dense sampling and vector quantization of several color SIFT features (van de Sande et al.,, 2010), pooling with spatial pyramid matching (Lazebnik et al.,, 2006), and classifying with a histogram intersection kernel SVM (Maji and Malik,, 2009) trained on a GPU (van de Sande et al., 2011a, ). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_106",
"text": " This was a turning point for large-scale object recognition, when large-scale deep neural networks entered the scene. The undisputed winner of both the classification and localization tasks in 2012 was the SuperVision team. They trained a large, deep convolutional neural network on RGB values, with 60 million parameters using an efficient GPU implementation and a novel hidden-unit dropout trick (Krizhevsky et al.,, 2012; Hinton et al.,, 2012). The second place in image classification went to the ISI team, which used Fisher vectors (Sanchez and Perronnin,, 2011) and a streamlined version of Graphical Gaussian Vectors (Harada and Kuniyoshi,, 2012), along with linear classifiers using Passive-Aggressive (PA) algorithm (Crammer et al.,, 2006). The second place in single-object localization went to the VGG, with an image classification system including dense SIFT features and color statistics (Lowe,, 2004), a Fisher vector representation (Sanchez and Perronnin,, 2011), and a linear SVM classifier, plus additional insights from (Arandjelovic and Zisserman,, 2012; Sanchez et al.,, 2012). Both ISI and VGG used (Felzenszwalb et al.,, 2010) for object localization; SuperVision used a regression model trained to predict bounding box locations. Despite the weaker detection model, SuperVision handily won the object localization task. A detailed analysis and comparison of the SuperVision and VGG submissions on the single-object localization task can be found in (Russakovsky et al.,, 2013). The influence of the success of the SuperVision model can be clearly seen in ILSVRC2013 and ILSVRC2014. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_107",
"text": " There were 24 teams participating in the ILSVRC2013 competition, compared to 21 in the previous three years combined. Following the success of the deep learning-based method in 2012, the vast majority of entries in 2013 used deep convolutional neural networks in their submission. The winner of the classification task was Clarifai, with several large deep convolutional networks averaged together. The network architectures were chosen using the visualization technique of (Zeiler and Fergus,, 2013), and they were trained on the GPU following (Zeiler et al.,, 2011) using the dropout technique (Krizhevsky et al.,, 2012). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_108",
"text": " The winning single-object localization OverFeat submission was based on an integrated framework for using convolutional networks for classification, localization and detection with a multiscale sliding window approach (Sermanet et al.,, 2013). They were the only team tackling all three tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_109",
"text": " The winner of object detection task was UvA team, which utilized a new way of efficient encoding (van de Sande et al.,, 2014) densely sampled color descriptors (van de Sande et al.,, 2010) pooled using a multi-level spatial pyramid in a selective search framework (Uijlings et al.,, 2013). The detection results were rescored using a full-image convolutional network classifier. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_110",
"text": " 2014 attracted the most submissions, with 36 teams submitting 123 entries compared to just 24 teams in 2013 – a 1.5x increase in participation.999Table LABEL:table:sub14 omits 4 teams which submitted results but chose not to officially participate in the challenge. As in 2013 almost all teams used convolutional neural networks as the basis for their submission. Significant progress has been made in just one year: image classification error was almost halved since ILSVRC2013 and object detection mean average precision almost doubled compared to ILSVRC2013. Please refer to Section 6.1 for details. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_111",
"text": " In 2014 teams were allowed to use outside data for training their models in the competition, so there were six tracks: provided and outside data tracks in each of image classification, single-object localization, and object detection tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_112",
"text": " The winning image classification with provided data team was GoogLeNet, which explored an improved convolutional neural network architecture combining the multi-scale idea with intuitions gained from the Hebbian principle. Additional dimension reduction layers allowed them to increase both the depth and the width of the network significantly without incurring significant computational overhead. In the image classification with external data track, CASIAWS won by using weakly supervised object localization from only classification labels to improve image classification. MCG region proposals (Arbeláez et al.,, 2014) pretrained on PASCAL VOC 2012 data are used to extract region proposals, regions are represented using convolutional networks, and a multiple instance learning strategy is used to learn weakly supervised object detectors to represent the image. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_113",
"text": " In the single-object localization with provided data track, the winning team was VGG, which explored the effect of convolutional neural network depth on its accuracy by using three different architectures with up to 19 weight layers with rectified linear unit non-linearity, building off of the implementation of Caffe (Jia,, 2013). For localization they used per-class bounding box regression similar to OverFeat (Sermanet et al.,, 2013). In the single-object localization with external data track, Adobe used 2000 additional ImageNet classes to train the classifiers in an integrated convolutional neural network framework for both classification and localization, with bounding box regression. At test time they used k-means to find bounding box clusters and rank the clusters according to the classification scores. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_114",
"text": " In the object detection with provided data track, the winning team NUS used the RCNN framework (Girshick et al.,, 2013) with the network-in-network method (Lin et al., 2014a, ) and improvements of (Howard,, 2014). Global context information was incorporated following (Chen et al.,, 2014). In the object detection with external data track, the winning team was GoogLeNet (which also won image classification with provided data). It is truly remarkable that the same team was able to win at both image classification and object detection, indicating that their methods are able to not only classify the image based on scene information but also accurately localize multiple object instances. Just like most teams participating in this track, GoogLeNet used the image classification dataset as extra training data. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_115",
"text": " ILSVRC over the past five years has paved the way for several breakthroughs in computer vision. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_116",
"text": " The field of categorical object recognition has dramatically evolved in the large-scale setting. Section 5.1 documents the progress, starting from coded SIFT features and evolving to large-scale convolutional neural networks dominating at all three tasks of image classification, single-object localization, and object detection. With the availability of so much training data (along with an efficient algorithmic implementation and GPU computing resources) it became possible to learn neural networks directly from the image data, without needing to create multi-stage hand-tuned pipelines of extracted features and discriminative classifiers. The major breakthrough came in 2012 with the win of the SuperVision team on image classification and single-object localization tasks (Krizhevsky et al.,, 2012), and by 2014 all of the top contestants were relying heavily on convolutional neural networks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_117",
"text": " Further, over the past few years there has been a lot of focus on large-scale recognition in the computer vision community . Best paper awards at top vision conferences in 2013 were awarded to large-scale recognition methods: at CVPR 2013 to ”Fast, Accurate Detection of 100,000 Object Classes on a Single Machine” (Dean et al.,, 2013) and at ICCV 2013 to ”From Large Scale Image Categorization to Entry-Level Categories” (Ordonez et al.,, 2013). Additionally, several influential lines of research have emerged, such as large-scale weakly supervised localization work of (Kuettel et al.,, 2012) which was awarded the best paper award in ECCV 2012 and large-scale zero-shot learning, e.g., (Frome et al.,, 2013). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_118",
"text": " State-of-the-art accuracy has improved significantly from ILSVRC2010 to ILSVRC2014, showcasing the massive progress that has been made in large-scale object recognition over the past five years. The performance of the winning ILSVRC entries for each task and each year are shown in Figure 9. The improvement over the years is clearly visible. In this section we quantify and analyze this improvement. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_119",
"text": " There has been a 4.2x reduction in image classification error (from 28.2%percent28.228.2\\% to 6.7%percent6.76.7\\%) and a 1.7x reduction in single-object localization error (from 42.5%percent42.542.5\\% to 25.3%percent25.325.3\\%) since the beginning of the challenge. For consistency, here we consider only teams that use the provided training data. Even though the exact object categories have changed (Section 3.1.1), the large scale of the dataset has remained the same (Table 2), making the results comparable across the years. The dataset has not changed since 2012, and there has been a 2.4x reduction in image classification error (from 16.4%percent16.416.4\\% to 6.7%percent6.76.7\\%) and a 1.3x in single-object localization error (from 33.5%percent33.533.5\\% to 25.3%percent25.325.3\\%) in the past three years. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_120",
"text": " Object detection accuracy as measured by the mean average precision (mAP) has increased 1.9x since the introduction of this task, from 22.6%percent22.622.6\\% mAP in ILSVRC2013 to 43.9%percent43.943.9\\% mAP in ILSVRC2014. However, these results are not directly comparable for two reasons. First, the size of the object detection training data has increased significantly from 2013 to 2014 (Section 3.3). Second, the 43.9%percent43.943.9\\% mAP result was obtained with the addition of the image classification and single-object localization training data. Here we attempt to understand the relative effects of the training set size increase versus algorithmic improvements. All models are evaluated on the same ILSVRC2013-2014 object detection test set. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_121",
"text": " First, we quantify the effects of increasing detection training data between the two challenges by comparing the same model trained on ILSVRC2013 detection data versus ILSVRC2014 detection data. The UvA team’s framework from 2013 achieved 22.6%percent22.622.6\\% with ILSVRC2013 data (Table LABEL:table:sub13) and 26.3%percent26.326.3\\% with ILSVRC2014 data and no other modifications.101010Personal communication with members of the UvA team. The absolute increase in mAP was 3.7%percent3.73.7\\%. The RCNN model achieved 31.4%percent31.431.4\\% mAP with ILSVRC2013 detection plus image classification data (Girshick et al.,, 2013) and 34.5%percent34.534.5\\% mAP with ILSVRC2014 detection plus image classification data (Berkeley team in Table LABEL:table:sub14). The absolute increase in mAP by expanding ILSVRC2013 detection data to ILSVRC2014 was 3.1%percent3.13.1\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_122",
"text": " Second, we quantify the effects of adding in the external data for training object detection models. The NEC model in 2013 achieved 19.6%percent19.619.6\\% mAP trained on ILSVRC2013 detection data alone and 20.9%percent20.920.9\\% mAP trained on ILSVRC2013 detection plus classification data (Table LABEL:table:sub13). The absolute increase in mAP was 1.3%percent1.31.3\\%. The UvA team’s best entry in 2014 achieved 32.0%percent32.032.0\\% mAP trained on ILSVRC2014 detection data and 35.4%percent35.435.4\\% mAP trained on ILSVRC2014 detection plus classification data. The absolute increase in mAP was 3.4%percent3.43.4\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_123",
"text": " Thus, we conclude based on the evidence so far that expanding the ILSVRC2013 detection set to the ILSVRC2014 set, as well as adding in additional training data from the classification task, all account for approximately 1−4%1percent41-4\\% in absolute mAP improvement for the models. For comparison, we can also attempt to quantify the effect of algorithmic innovation. The UvA team’s 2013 framework achieved 26.3%percent26.326.3\\% mAP on ILSVRC2014 data as mentioned above, and their improved method in 2014 obtained 32.0%percent32.032.0\\% mAP (Table LABEL:table:sub14). This is 5.8%percent5.85.8\\% absolute increase in mAP over just one year from algorithmic innovation alone. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_124",
"text": " In summary, we conclude that the absolute 21.3%percent21.321.3\\% increase in mAP between winning entries of ILSVRC2013 (22.6%percent22.622.6\\% mAP) and of ILSVRC2014 (43.9%percent43.943.9\\% mAP) is the result of impressive algorithmic innovation and not just a consequence of increased training data. However, increasing the ISLVRC2014 object detection training dataset further is likely to produce additional improvements in detection accuracy for current algorithms. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_125",
"text": " One important question to ask is whether results of different submissions to ILSVRC are statistically significantly different from each other. Given the large scale, it is no surprise that even minor differences in accuracy are statistically significant; we seek to quantify exactly how much of a difference is enough. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_126",
"text": " Following the strategy employed by PASCAL VOC (Everingham et al.,, 2014), for each method we obtain a confidence interval of its score using bootstrap sampling. During each bootstrap round, we sample N𝑁N images with replacement from all the available N𝑁N test images and evaluate the performance of the algorithm on those sampled images. This can be done very efficiently by precomputing the accuracy on each image. Given the results of all the bootstrapping rounds we discard the lower and the upper α𝛼\\alpha fraction. The range of the remaining results represents the 1−2α12𝛼1-2\\alpha confidence interval. We run a large number of bootstrapping rounds (from 20,000 until convergence). Table 5 shows the results of the top entries to each task of ILSVRC2012-2014. The winning methods are statistically significantly different from the other methods, even at the 99.9%percent99.999.9\\% level. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
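A minimal sketch of the bootstrap procedure described above, assuming per-image 0/1 correctness has been precomputed; the number of rounds and the alpha-trimming follow the text, while the data and names are illustrative:

```python
import random

def bootstrap_ci(per_image_correct, rounds=20_000, alpha=0.005, seed=0):
    """Return a (1 - 2*alpha) confidence interval for accuracy by resampling
    the N test images with replacement in each bootstrap round."""
    rng = random.Random(seed)
    n = len(per_image_correct)
    accs = []
    for _ in range(rounds):
        sample = [per_image_correct[rng.randrange(n)] for _ in range(n)]
        accs.append(sum(sample) / n)
    accs.sort()
    lo = accs[int(alpha * rounds)]
    hi = accs[int((1 - alpha) * rounds) - 1]
    return lo, hi

# Toy example: 1,000 test images, 93% answered correctly by some method.
results = [1] * 930 + [0] * 70
print(bootstrap_ci(results))  # roughly (0.91, 0.95) as a 99% confidence interval
```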
{
"id": "1409.0575_all_127",
"text": " Besides looking at just the average accuracy across hundreds of object categories and tens of thousands of images, we can also delve deeper to understand where mistakes are being made and where researchers’ efforts should be focused to expedite progress. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_128",
"text": " To do so, in this section we will be analyzing an “optimistic” measurement of state-of-the-art recognition performance instead of focusing on the differences in individual algorithms. For each task and each object class, we compute the best performance of any entry submitted to any ILSVRC2012-2014, including methods using additional training data. Since the test sets have remained the same, we can directly compare all the entries in the past three years to obtain the most “optimistic” measurement of state-of-the-art accuracy on each category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_129",
"text": " For consistency with the object detection metric (higher is better), in this section we will be using image classification and single-object localization accuracy instead of error, where accuracy=1−error𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦1𝑒𝑟𝑟𝑜𝑟accuracy=1-error. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_130",
"text": " Figure 10 shows the distribution of accuracy achieved by the “optimistic” models across the object categories. The image classification model achieves 94.6%percent94.694.6\\% accuracy on average (or 5.4%percent5.45.4\\% error), but there remains a 41.0%percent41.041.0\\% absolute difference inaccuracy between the most and least accurate object class. The single-object localization model achieves 81.5%percent81.581.5\\% accuracy on average (or 18.5%percent18.518.5\\% error), with a 77.0%percent77.077.0\\% range in accuracy across the object classes. The object detection model achieves 44.7%percent44.744.7\\% average precision, with an 84.7%percent84.784.7\\% range across the object classes. It is clear that the ILSVRC dataset is far from saturated: performance on many categories has remained poor despite the strong overall performance of the models. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_131",
"text": " Figures 11 and 12 show the easiest and hardest classes for each task, i.e., classes with the best and worst results obtained with the “optimistic” models. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_132",
"text": " For image classification, 121 out of 1000 object classes have 100%percent100100\\% image classification accuracy according to the optimistic estimate. Figure 11 (top) shows a random set of 10 of them. They contain a variety of classes, such as mammals like “red fox” and animals with distinctive structures like “stingray”. The hardest classes in the image classification task, with accuracy as low as 59.0%percent59.059.0\\%, include metallic and see-through man-made objects, such as “hook” and “water bottle,” the material “velvet” and the highly varied scene class “restaurant.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_133",
"text": " For single-object localization, the 10 easiest classes with 99.0−100%99.0percent10099.0-100\\% accuracy are all mammals and birds. The hardest classes include metallic man-made objects such as “letter opener” and “ladle”, plus thin structures such as “pole” and “spacebar” and highly varied classes such as “wing”. The most challenging class “spacebar” has a only 23.0%percent23.023.0\\% localization accuracy. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_134",
"text": " Object detection results are shown in Figure 12. The easiest classes are living organisms such as “dog” and “tiger”, plus “basketball” and “volleyball” with distinctive shape and color, and a somewhat surprising “snowplow.” The easiest class “butterfly” is not yet perfectly detected but is very close with 92.7%percent92.792.7\\% AP. The hardest classes are as expected small thin objects such as “flute” and “nail”, and the highly varied “lamp” and “backpack” classes, with as low as 8.0%percent8.08.0\\% AP. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_135",
"text": " We now take a closer look at the image properties to try to understand why current algorithms perform well on some object classes but not others. One hypothesis is that variation in accuracy comes from the fact that instances of some classes tend to be much smaller in images than instances of other classes, and smaller objects may be harder for computers to recognize. In this section we argue that while accuracy is correlated with object scale in the image, not all variation in accuracy can be accounted for by scale alone. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_136",
"text": " For every object class, we compute its average scale, or the average fraction of image area occupied by an instance of the object class on the ILSVRC2012-2014 validation set. Since the images and object classes in the image classification and single-object localization tasks are the same, we use the bounding box annotations of the single-object localization dataset for both tasks. In that dataset the object classes range from “swimming trunks” with scale of 1.5%percent1.51.5\\% to “spider web” with scale of 85.6%percent85.685.6\\%. In the object detection validation dataset the object classes range from “sunglasses” with scale of 1.3%percent1.31.3\\% to “sofa” with scale of 44.4%percent44.444.4\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_137",
"text": " Figure 13 shows the performance of the “optimistic” method as a function of the average scale of the object in the image. Each dot corresponds to one object class. We observe a very weak positive correlation between object scale and image classification accuracy: ρ=0.14𝜌0.14\\rho=0.14. For single-object localization and object detection the correlation is stronger, at ρ=0.40𝜌0.40\\rho=0.40 and ρ=0.41𝜌0.41\\rho=0.41 respectively. It is clear that not all variation in accuracy can be accounted for by scale alone. Nevertheless, in the next section we will normalize for object scale to ensure that this factor is not affecting our conclusions. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_138",
"text": " Besides considering image-level properties we can also observe how accuracy changes as a function of intrinsic object properties. We define three properties inspired by human vision: the real-world size of the object, whether it’s deformable within instance, and how textured it is. For each property, the object classes are assigned to one of a few bins (listed below). These properties are illustrated in Figure 1. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_139",
"text": " Human subjects annotated each of the 1000 image classification and single-object localization object classes from ILSVRC2012-2014 with these properties. (Russakovsky et al.,, 2013). By construction (see Section 3.3.1), each of the 200 object detection classes is either also one of 1000 object classes or is an ancestor of one or more of the 1000 classes in the ImageNet hierarchy. To compute the values of the properties for each object detection class, we simply average the annotated values of the descendant classes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_140",
"text": " In this section we draw the following conclusions about state-of-the-art recognition accuracy as a function of these object properties: • Real-world size: XS for extra small (e.g. nail), small (e.g. fox), medium (e.g. bookcase), large (e.g. car) or XL for extra large (e.g. church) The image classification and single-object localization “optimistic” models performs better on large and extra large real-world objects than on smaller ones. The “optimistic” object detection model surprisingly performs better on extra small objects than on small or medium ones. • Deformability within instance: Rigid (e.g., mug) or deformable (e.g., water snake) The “optimistic” model on each of the three tasks performs statistically significantly better on deformable objects compared to rigid ones. However, this effect disappears when analyzing natural objects separately from man-made objects. • Amount of texture: none (e.g. punching bag), low (e.g. horse), medium (e.g. sheep) or high (e.g. honeycomb) The “optimistic” model on each of the three tasks is significantly better on objects with at least low level of texture compared to untextured objects. These and other findings are justified and discussed in detail below. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_141",
"text": " We observed in Section 6.3.3 that objects that occupy a larger area in the image tend to be somewhat easier to recognize. To make sure that differences in object scale are not influencing results in this section, we normalize each bin by object scale. We discard object classes with the largest scales from each bin as needed until the average object scale of object classes in each bin across one property is the same (or as close as possible). For real-world size property for example, the resulting average object scale in each of the five bins is 31.6%−31.7%percent31.6percent31.731.6\\%-31.7\\% in the image classification and single-object localization tasks, and 12.9%−13.4%percent12.9percent13.412.9\\%-13.4\\% in the object detection task.111111For rigid versus deformable objects, the average scale in each bin is 34.1%−34.2%percent34.1percent34.234.1\\%-34.2\\% for classification and localization, and 13.5%−13.7%percent13.5percent13.713.5\\%-13.7\\% for detection. For texture, the average scale in each of the four bins is 31.1%−31.3%percent31.1percent31.331.1\\%-31.3\\% for classification and localization, and 12.7%−12.8%percent12.7percent12.812.7\\%-12.8\\% for detection. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
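The scale-normalization step above is described only in words; one plausible greedy implementation (the function and data layout here are assumptions, not the paper's code) repeatedly drops the largest-scale class from the bin whose mean scale is currently highest:

```python
def normalize_bins_by_scale(bins, tolerance=0.001):
    """bins: dict mapping a property value (e.g. 'XS') to a list of (class_name, avg_scale)."""
    def mean_scale(classes):
        return sum(scale for _, scale in classes) / len(classes)

    # sort each bin by scale so the largest-scale class is always at the end
    bins = {k: sorted(v, key=lambda cs: cs[1]) for k, v in bins.items()}
    while True:
        means = {k: mean_scale(v) for k, v in bins.items()}
        if max(means.values()) - min(means.values()) <= tolerance:
            break  # average object scale is (approximately) equal across bins
        worst = max(means, key=means.get)  # bin with the highest mean scale
        if len(bins[worst]) <= 1:
            break                          # nothing left to discard
        bins[worst].pop()                  # discard its largest-scale class
    return bins
```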
{
"id": "1409.0575_all_142",
"text": " Figure 14 shows the average performance of the “optimistic” model on the object classes that fall into each bin for each property. We analyze the results in detail below. Unless otherwise specified, the reported accuracies below are after the scale normalization step. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_143",
"text": " To evaluate statistical significance, we compute the 95%percent9595\\% confidence interval for accuracy using bootstrapping: we repeatedly sample the object classes within the bin with replacement, discard some as needed to normalize by scale, and compute the average accuracy of the “optimistic” model on the remaining classes. We report the 95%percent9595\\% confidence intervals (CI) in parentheses. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
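The bootstrap procedure quoted above is straightforward to reproduce. A minimal sketch, assuming per-class accuracies are already computed; the function name and arguments are hypothetical, and the scale-normalization step inside each resample is omitted for brevity:

```python
import random

def bootstrap_ci(class_accuracies, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean per-class accuracy."""
    rng = random.Random(seed)
    n = len(class_accuracies)
    means = []
    for _ in range(n_resamples):
        # sample object classes with replacement and average their accuracies
        sample = [rng.choice(class_accuracies) for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```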
{
"id": "1409.0575_all_144",
"text": " In Figure 14(top, left) we observe that in the image classification task the “optimistic” model tends to perform significantly better on objects which are larger in the real-world. The classification accuracy is 93.6%−93.9%percent93.6percent93.993.6\\%-93.9\\% on XS, S and M objects compared to 97.0%percent97.097.0\\% on L and 96.4%percent96.496.4\\% on XL objects. Since this is after normalizing for scale and thus can’t be explained by the objects’ size in the image, we conclude that either (1) larger real-world objects are easier for the model to recognize, or (2) larger real-world objects usually occur in images with very distinctive backgrounds. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_145",
"text": " To distinguish between the two cases we look Figure 14(top, middle). We see that in the single-object localization task, the L objects are easy to localize at 82.4%percent82.482.4\\% localization accuracy. XL objects, however, tend to be the hardest to localize with only 73.4%percent73.473.4\\% localization accuracy. We conclude that the appearance of L objects must be easier for the model to learn, while XL objects tend to appear in distinctive backgrounds. The image background make these XL classes easier for the image-level classifier, but the individual instances are difficult to accurately localize. Some examples of L objects are “killer whale,” “schooner,” and “lion,” and some examples of XL objects are “boathouse,” “mosque,” “toyshop” and “steel arch bridge.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_146",
"text": " In Figure 14(top,right) corresponding to the object detection task, the influence of real-world object size is not as apparent. One of the key reasons is that many of the XL and L object classes of the image classification and single-object localization datasets were removed in constructing the detection dataset (Section 3.3.1) since they were not basic categories well-suited for detection. There were only 3 XL object classes remaining in the dataset (“train,” “airplane” and “bus”), and none after scale normalization.We omit them from the analysis. The average precision of XS, S, M objects (44.5%percent44.544.5\\%, 39.0%percent39.039.0\\%, and 38.5%percent38.538.5\\% mAP respectively) is statistically insignificant from average precision on L objects: 95%percent9595\\% confidence interval of L objects is 37.5%−59.5%percent37.5percent59.537.5\\%-59.5\\%. This may be due to the fact that there are only 6 L object classes remaining after scale normalization; all other real-world size bins have at least 18 object classes. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_147",
"text": " Finally, it is interesting that performance on XS objects of 44.5%percent44.544.5\\% mAP (CI 40.5%−47.6%percent40.5percent47.640.5\\%-47.6\\%) is statistically significantly better than performance on S or M objects with 39.0%percent39.039.0\\% mAP and 38.5%percent38.538.5\\% mAP respectively. Some examples of XS objects are “strawberry,” “bow tie” and “rugby ball.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_148",
"text": " In Figure 14(second row) it is clear that the “optimistic” model performs statistically significantly worse on rigid objects than on deformable objects. Image classification accuracy is 93.2%percent93.293.2\\% on rigid objects (CI 92.6%−93.8%percent92.6percent93.892.6\\%-93.8\\%), much smaller than 95.7%percent95.795.7\\% on deformable ones. Single-object localization accuracy is 76.2%percent76.276.2\\% on rigid objects (CI 74.9%−77.4%percent74.9percent77.474.9\\%-77.4\\%), much smaller than 84.7%percent84.784.7\\% on deformable ones. Object detection mAP is 40.1%percent40.140.1\\% on rigid objects (CI 37.2%−42.9%percent37.2percent42.937.2\\%-42.9\\%), much smaller than 44.8%percent44.844.8\\% on deformable ones. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_149",
"text": " We can further analyze the effects of deformability after separating object classes into “natural” and “man-made” bins based on the ImageNet hierarchy. Deformability is highly correlated with whether the object is natural or man-made: 0.720.720.72 correlation for image classification and single-object localization classes, and 0.610.610.61 for object detection classes. Figure 14(third row) shows the effect of deformability on performance of the model for man-made and natural objects separately. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_150",
"text": " Man-made classes are significantly harder than natural classes: classification accuracy 92.8%percent92.892.8\\% (CI 92.3%−93.3%percent92.3percent93.392.3\\%-93.3\\%) for man-made versus 97.0%percent97.097.0\\% for natural, localization accuracy 75.5%percent75.575.5\\% (CI 74.3%−76.5%percent74.3percent76.574.3\\%-76.5\\%) for man-made versus 88.5%percent88.588.5\\% for natural, and detection mAP 38.7%percent38.738.7\\% (CI 35.6−41.3%35.6percent41.335.6-41.3\\%) for man-made versus 50.9%percent50.950.9\\% for natural. However, whether the classes are rigid or deformable within this subdivision is no longer significant in most cases. For example, the image classification accuracy is 92.3%percent92.392.3\\% (CI 91.4%−93.1%percent91.4percent93.191.4\\%-93.1\\%) on man-made rigid objects and 91.8%percent91.891.8\\% on man-made deformable objects – not statistically significantly different. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_151",
"text": " There are two cases where the differences in performance are statistically significant. First, for single-object localization, natural deformable objects are easier than natural rigid objects: localization accuracy of 87.9%percent87.987.9\\% (CI 85.9%−90.1%percent85.9percent90.185.9\\%-90.1\\%) on natural deformable objects is higher than 85.8%percent85.885.8\\% on natural rigid objects – falling slightly outside the 95%percent9595\\% confidence interval. This difference in performance is likely because deformable natural animals tend to be easier to localize than rigid natural fruit. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_152",
"text": " Second, for object detection, man-made rigid objects are easier than man-made deformable objects: 38.5%percent38.538.5\\% mAP (CI 35.2%−41.7%percent35.2percent41.735.2\\%-41.7\\%) on man-made rigid objects is higher than 33.0%percent33.033.0\\% mAP on man-made deformable objects. This is because man-made rigid objects include classes like “traffic light” or “car” whereas the man-made deformable objects contain challenging classes like “plastic bag,” “swimming trunks” or “stethoscope.” ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_153",
"text": " Finally, we analyze the effect that object texture has on the accuracy of the “optimistic” model. Figure 14(fourth row) demonstrates that the model performs better as the amount of texture on the object increases. The most significant difference is between the performance on untextured objects and the performance on objects with low texture. Image classification accuracy is 90.5%percent90.590.5\\% on untextured objects (CI 89.3%−91.6%percent89.3percent91.689.3\\%-91.6\\%), lower than 94.6%percent94.694.6\\% on low-textured objects. Single-object localization accuracy is 71.4%percent71.471.4\\% on untextured objects (CI 69.1%−73.3%percent69.1percent73.369.1\\%-73.3\\%), lower than 80.2%percent80.280.2\\% on low-textured objects. Object detection mAP is 33.2%percent33.233.2\\% on untextured objects (CI 29.5%−35.9%percent29.5percent35.929.5\\%-35.9\\%), lower than 42.9%percent42.942.9\\% on low-textured objects. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_154",
"text": " Texture is correlated with whether the object is natural or man-made, at 0.350.350.35 correlation for image classification and single-object localization, and 0.460.460.46 correlation for object detection. To determine if this is a contributing factor, in Figure 14(bottom row) we break up the object classes into natural and man-made and show the accuracy on objects with no texture versus objects with low texture. We observe that the model is still statistically significantly better on low-textured object classes than on untextured ones, both on man-made and natural object classes independently.121212Natural object detection classes are removed from this analysis because there are only 3 and 13 natural untextured and low-textured classes respectively, and none remain after scale normalization. All other bins contain at least 9 object classes after scale normalization. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_155",
"text": " Recent improvements in state-of-the-art accuracy on the ILSVRC dataset are easier to put in perspective when compared to human-level accuracy. In this section we compare the performance of the leading large-scale image classification method with the performance of humans on this task. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_156",
"text": " To support this comparison, we developed an interface that allowed a human labeler to annotate images with up to five ILSVRC target classes. We compare human errors to those of the winning ILSRC2014 image classification model, GoogLeNet (Section 5.1). For this analysis we use a random sample of 1500 ILSVRC2012-2014 image classification test set images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_157",
"text": " Our web-based annotation interface consists of one test set image and a list of 1000 ILSVRC categories on the side. Each category is described by its title, such as “cowboy boot.” The categories are sorted in the topological order of the ImageNet hierarchy, which places semantically similar concepts nearby in the list. For example, all motor vehicle-related classes are arranged contiguously in the list. Every class category is additionally accompanied by a row of 13 examples images from the training set to allow for faster visual scanning. The user of the interface selects 5 categories from the list by clicking on the desired items. Since our interface is web-based, it allows for natural scrolling through the list, and also search by text. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_158",
"text": " We found the task of annotating images with one of 1000 categories to be an extremely challenging task for an untrained annotator. The most common error that an untrained annotator is susceptible to is a failure to consider a relevant class as a possible label because they are unaware of its existence. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_159",
"text": " Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_160",
"text": " We report results based on experiments with two expert annotators. The first annotator (A1) trained on 500 images and annotated 1500 test images. The second annotator (A2) trained on 100 images and then annotated 258 test images. The average pace of labeling was approximately 1 image per minute, but the distribution is strongly bimodal: some images are quickly recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) may require multiple minutes of concentrated effort. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_161",
"text": " The results are reported in Table 6. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_162",
"text": " Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8%percent6.86.8\\% (recall that the error on full test set of 100,000 images is 6.7%percent6.76.7\\%, as shown in Table LABEL:table:sub14). The human error was estimated to be 5.1%. Thus, annotator A1 achieves a performance superior to GoogLeNet, by approximately 1.7%percent1.71.7\\%. We can analyze the statistical significance of this result under the null hypothesis that they are from the same distribution. In particular, comparing the two proportions with a z-test yields a one-sided p𝑝p-value of p=0.022𝑝0.022p=0.022. Thus, we can conclude that this result is statistically significant at the 95%percent9595\\% confidence level. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_163",
"text": " Our second annotator (A2) trained on a smaller sample of only 100 images and then labeled 258 test set images. As seen in Table 6, the final classification error is significantly worse, at approximately 12.0%percent12.012.0\\% Top-5 error. The majority of these errors (48.8%percent48.848.8\\%) can be attributed to the annotator failing to spot and consider the ground truth label as an option. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_164",
"text": " Thus, we conclude that a significant amount of training time is necessary for a human to achieve competitive performance on ILSVRC. However, with a sufficient amount of training, a human annotator is still able to outperform the GoogLeNet result (p=0.022𝑝0.022p=0.022) by approximately 1.7%percent1.71.7\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_165",
"text": " We also compare the prediction accuracy of the two annotators. Of a total of 204 images that both A1 and A2 labeled, 174 (85%percent8585\\%) were correctly labeled by both A1 and A2, 19 (9%percent99\\%) were correctly labeled by A1 but not A2, 6 (3%percent33\\%) were correctly labeled by A2 but not A1, and 5 (2%percent22\\%) were incorrectly labeled by both. These include 2 images that we consider to be incorrectly labeled in the ground truth. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_166",
"text": " In particular, our results suggest that the human annotators do not exhibit strong overlap in their predictions. We can approximate the performance of an “optimistic” human classifier by assuming an image to be correct if at least one of A1 or A2 correctly labeled the image. On this sample of 204 images, we approximate the error rate of an “optimistic” human annotator at 2.4%percent2.42.4\\%, compared to the GoogLeNet error rate of 4.9%percent4.94.9\\%. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_167",
"text": " We manually inspected both human and GoogLeNet errors to gain an understanding of common error types and how they compare. For purposes of this section, we only discuss results based on the larger sample of 1500 images that were labeled by annotator A1. Examples of representative mistakes are in Figure 15. The analysis and insights below were derived specifically from GoogLeNet predictions, but we suspect that many of the same errors may be present in other methods. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_168",
"text": " 1. Multiple objects. Both GoogLeNet and humans struggle with images that contain multiple ILSVRC classes (usually many more than five), with little indication of which object is the focus of the image. This error is only present in the Classification setting, since every image is constrained to have exactly one correct label. In total, we attribute 24 (24%percent2424\\%) of GoogLeNet errors and 12 (16%percent1616\\%) of human errors to this category. It is worth noting that humans can have a slight advantage in this error type, since it can sometimes be easy to identify the most salient object in the image. 2. Incorrect annotations. We found that approximately 5 out of 1500 images (0.3%percent0.30.3\\%) were incorrectly annotated in the ground truth. This introduces an approximately equal number of errors for both humans and GoogLeNet. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_169",
"text": " 1. Object small or thin. GoogLeNet struggles with recognizing objects that are very small or thin in the image, even if that object is the only object present. Examples of this include an image of a standing person wearing sunglasses, a person holding a quill in their hand, or a small ant on a stem of a flower. We estimate that approximately 22 (21%percent2121\\%) of GoogLeNet errors fall into this category, while none of the human errors do. In other words, in our sample of images, no image was mislabeled by a human because they were unable to identify a very small or thin object. This discrepancy can be attributed to the fact that a human can very effectively leverage context and affordances to accurately infer the identity of small objects (for example, a few barely visible feathers near person’s hand as very likely belonging to a mostly occluded quill). 2. Image filters. Many people enhance their photos with filters that distort the contrast and color distributions of the image. We found that 13 (13%percent1313\\%) of the images that GoogLeNet incorrectly classified contained a filter. Thus, we posit that GoogLeNet is not very robust to these distortions. In comparison, only one image among the human errors contained a filter, but we do not attribute the source of the error to the filter. 3. Abstract representations. GoogLeNet struggles with images that depict objects of interest in an abstract form, such as 3D-rendered images, paintings, sketches, plush toys, or statues. An example is the abstract shape of a bow drawn with a light source in night photography, a 3D-rendered robotic scorpion, or a shadow on the ground, of a child on a swing. We attribute approximately 6 (6%percent66\\%) of GoogLeNet errors to this type of error and believe that humans are significantly more robust, with no such errors seen in our sample. 4. Miscellaneous sources. Additional sources of error that occur relatively infrequently include extreme closeups of parts of an object, unconventional viewpoints such as a rotated image, images that can significantly benefit from the ability to read text (e.g. a featureless container identifying itself as “face powder”), objects with heavy occlusions, and images that depict a collage of multiple images. In general, we found that humans are more robust to all of these types of error. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_170",
"text": " 1. Fine-grained recognition. We found that humans are noticeably worse at fine-grained recognition (e.g. dogs, monkeys, snakes, birds), even when they are in clear view. To understand the difficulty, consider that there are more than 120 species of dogs in the dataset. We estimate that 28 (37%percent3737\\%) of the human errors fall into this category, while only 7 (7%percent77\\%) of GoogLeNet errors do. 2. Class unawareness. The annotator may sometimes be unaware of the ground truth class present as a label option. When pointed out as an ILSVRC class, it is usually clear that the label applies to the image. These errors get progressively less frequent as the annotator becomes more familiar with ILSVRC classes. Approximately 18 (24%percent2424\\%) of the human errors fall into this category. 3. Insufficient training data. Recall that the annotator is only presented with 13 examples of a class under every category name. However, 13 images are not always enough to adequately convey the allowed class variations. For example, a brown dog can be incorrectly dismissed as a “Kelpie” if all examples of a “Kelpie” feature a dog with black coat. However, if more than 13 images were listed it would have become clear that a “Kelpie” may have brown coat. Approximately 4 (5%percent55\\%) of human errors fall into this category. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_171",
"text": " We investigated the performance of trained human annotators on a sample of 1500 ILSVRC test set images. Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately 1.7%percent1.71.7\\% (p=0.022𝑝0.022p=0.022). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_172",
"text": " We expect that some sources of error may be relatively easily eliminated (e.g. robustness to filters, rotations, collages, effectively reasoning over multiple scales), while others may prove more elusive (e.g. identifying abstract representations of objects). On the other hand, a large majority of human errors come from fine-grained categories and class unawareness. We expect that the former can be significantly reduced with fine-grained expert annotators, while the latter could be reduced with more practice and greater familiarity with ILSVRC classes. Our results also hint that human errors are not strongly correlated and that human ensembles may further reduce human error rate. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_173",
"text": " It is clear that humans will soon outperform state-of-the-art ILSVRC image classification models only by use of significant effort, expertise, and time. One interesting follow-up question for future investigation is how computer-level accuracy compares with human-level accuracy on more complex image understanding tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_174",
"text": " In this paper we described the large-scale data collection process of ILSVRC, provided a summary of the most successful algorithms on this data, and analyzed the success and failure modes of these algorithms. In this section we discuss some of the key lessons we learned over the years of ILSVRC, strive to address the key criticisms of the datasets and the challenges we encountered over the years, and conclude by looking forward into the future. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_175",
"text": " The key lesson of collecting the datasets and running the challenges for five years is this: All human intelligence tasks need to be exceptionally well-designed. We learned this lesson both when annotating the dataset using Amazon Mechanical Turk workers (Section 3) and even when trying to evaluate human-level image classification accuracy using expert labelers (Section 6.4). The first iteration of the labeling interface was always bad – generally meaning completely unusable. If there was any inherent ambiguity in the questions posed (and there almost always was), workers found it and accuracy suffered. If there is one piece of advice we can offer to future research, it is to very carefully design, continuously monitor, and extensively sanity-check all crowdsourcing tasks. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_176",
"text": " The other lesson, already well-known to large-scale researchers, is this: Scaling up the dataset always reveals unexpected challenges. From designing complicated multi-step annotation strategies (Section 3.2.1) to having to modify the evaluation procedure (Section 4), we had to continuously adjust to the large-scale setting. On the plus side, of course, the major breakthroughs in object recognition accuracy (Section 5) and the analysis of the strength and weaknesses of current algorithms as a function of object class properties ( Section 6.3) would never have been possible on a smaller scale. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_177",
"text": " In the past five years, we encountered three major criticisms of the ILSVRC dataset and the corresponding challenge: (1) the ILSVRC dataset is insufficiently challenging, (2) the ILSVRC dataset contains annotation errors, and (3) the rules of ILSVRC competition are too restrictive. We discuss these in order. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_178",
"text": " The first criticism is that the objects in the dataset tend to be large and centered in the images, making the dataset insufficiently challenging. In Sections 3.2.2 and 3.3.4 we tried to put those concerns to rest by analyzing the statistics of the ILSVRC dataset and concluding that it is comparable with, and in many cases much more challenging than, the long-standing PASCAL VOC benchmark (Everingham et al.,, 2010). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_179",
"text": " The second is regarding the errors in ground truth labeling. We went through several rounds of in-house post-processing of the annotations obtained using crowdsourcing, and corrected many common sources of errors (e.g., Appendix E). The major remaining source of annotation errors stem from fine-grained object classes, e.g., labelers failing to distinguish different species of birds. This is a tradeoff that had to be made: in order to annotate data at this scale on a reasonable budget, we had to rely on non-expert crowd labelers. However, overall the dataset is encouragingly clean. By our estimates, 99.7%percent99.799.7\\% precision is achieved in the image classification dataset (Sections 3.1.3 and 6.4) and 97.9%percent97.997.9\\% of images that went through the bounding box annotation system have all instances of the target object class labeled with bounding boxes (Section 3.2.1). ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_180",
"text": " The third criticism we encountered is over the rules of the competition regarding using external training data. In ILSVRC2010-2013, algorithms had to only use the provided training and validation set images and annotations for training their models. With the growth of the field of large-scale unsupervised feature learning, however, questions began to arise about what exactly constitutes “outside” data: for example, are image features trained on a large pool of “outside” images in an unsupervised fashion allowed in the competition? After much discussion, in ILSVRC2014 we took the first step towards addressing this problem. We followed the PASCAL VOC strategy and created two tracks in the competition: entries using only “provided” data and entries using “outside” data, meaning any images or annotations not provided as part of ILSVRC training or validation sets. However, in the future this strategy will likely need to be further revised as the computer vision field evolves. For example, competitions can consider allowing the use of any image features which are publically available, even if these features were learned on an external source of data. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_181",
"text": " Given the massive algorithmic breakthroughs over the past five years, we are very eager to see what will happen in the next five years. There are many potential directions of improvement and growth for ILSVRC and other large-scale image datasets. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_182",
"text": " First, continuing the trend of moving towards richer image understanding (from image classification to single-object localization to object detection), the next challenge would be to tackle pixel-level object segmentation. The recently released large-scale COCO dataset (Lin et al., 2014b, ) is already taking a step in that direction. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_183",
"text": " Second, as datasets grow even larger in scale, it may become impossible to fully annotate them manually. The scale of ILSVRC is already imposing limits on the manual annotations that are feasible to obtain: for example, we had to restrict the number of objects labeled per image in the image classification and single-object localization datasets. In the future, with billions of images, it will become impossible to obtain even one clean label for every image. Datasets such as Yahoo’s Flickr Creative Commons 100M,131313http://webscope.sandbox.yahoo.com/catalog.php?datatype=i&did=67 released with weak human tags but no centralized annotation, will become more common. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_184",
"text": " The growth of unlabeled or only partially labeled large-scale datasets implies two things. First, algorithms will have to rely more on weakly supervised training data. Second, even evaluation might have to be done after the algorithms make predictions, not before. This means that rather than evaluating accuracy (how many of the test images or objects did the algorithm get right) or recall (how many of the desired images or objects did the algorithm manage to find), both of which require a fully annotated test set, we will be focusing more on precision: of the predictions that the algorithm made, how many were deemed correct by humans. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
},
{
"id": "1409.0575_all_185",
"text": " We are eagerly awaiting the future development of object recognition datasets and algorithms, and are grateful that ILSVRC served as a stepping stone along this path. ",
"title": "ImageNet Large Scale Visual Recognition Challenge"
}
] |
How are “character”-delimited models different from “word”-delimited models?
|
Character-delimited models take characters as input and output characters; words are split into their constituent characters, typically resulting in a few hundred basic characters, including special characters that appear in the data [34]. In word-delimited models, by contrast, OOV words are collapsed into a single UNK symbol [81].
|
[
34,
81
] |
[
{
"id": "1609.08144_all_0",
"text": " Neural Machine Translation (NMT) (41, 2) has recently been introduced as a promising approach with the potential of addressing many shortcomings of traditional machine translation systems. The strength of NMT lies in its ability to learn directly, in an end-to-end fashion, the mapping from input text to associated output text. Its architecture typically consists of two recurrent neural networks (RNNs), one to consume the input text sequence and one to generate translated output text. NMT is often accompanied by an attention mechanism which helps it cope effectively with long input sequences. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_1",
"text": " An advantage of Neural Machine Translation is that it sidesteps many brittle design choices in traditional phrase-based machine translation . In practice, however, NMT systems used to be worse in accuracy than phrase-based translation systems, especially when training on very large-scale datasets as used for the very best publicly available translation systems. Three inherent weaknesses of Neural Machine Translation are responsible for this gap: its slower training and inference speed, ineffectiveness in dealing with rare words, and sometimes failure to translate all words in the source sentence. Firstly, it generally takes a considerable amount of time and computational resources to train an NMT system on a large-scale translation dataset, thus slowing the rate of experimental turnaround time and innovation. For inference they are generally much slower than phrase-based systems due to the large number of parameters used. Secondly, NMT lacks robustness in translating rare words. Though this can be addressed in principle by training a “copy model” to mimic a traditional alignment model , or by using the attention mechanism to copy rare words , these approaches are both unreliable at scale, since the quality of the alignments varies across languages, and the latent alignments produced by the attention mechanism are unstable when the network is deep. Also, simple copying may not always be the best strategy to cope with rare words, for example when a transliteration is more appropriate. Finally, NMT systems sometimes produce output sentences that do not translate all parts of the input sentence – in other words, they fail to completely “cover” the input, which can result in surprising translations. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_2",
"text": " This work presents the design and implementation of GNMT, a production NMT system at Google, that aims to provide solutions to the above problems. In our implementation, the recurrent networks are Long Short-Term Memory (LSTM) RNNs (23, 17). Our LSTM RNNs have 8 layers, with residual connections between layers to encourage gradient flow . For parallelism, we connect the attention from the bottom layer of the decoder network to the top layer of the encoder network. To improve inference time, we employ low-precision arithmetic for inference, which is further accelerated by special hardware (Google’s Tensor Processing Unit, or TPU). To effectively deal with rare words, we use sub-word units (also known as “wordpieces”) for inputs and outputs in our system. Using wordpieces gives a good balance between the flexibility of single characters and the efficiency of full words for decoding, and also sidesteps the need for special treatment of unknown words. Our beam search technique includes a length normalization procedure to deal efficiently with the problem of comparing hypotheses of different lengths during decoding, and a coverage penalty to encourage the model to translate all of the provided input. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_3",
"text": " Our implementation is robust, and performs well on a range of datasets across many pairs of languages without the need for language-specific adjustments. Using the same implementation, we are able to achieve results comparable to or better than previous state-of-the-art systems on standard benchmarks, while delivering great improvements over Google’s phrase-based production translation system. Specifically, on WMT’14 English-to-French, our single model scores 38.95 BLEU, an improvement of 7.5 BLEU from a single model without an external alignment model reported in and an improvement of 1.2 BLEU from a single model without an external alignment model reported in . Our single model is also comparable to a single model in , while not making use of any alignment model as being used in . Likewise on WMT’14 English-to-German, our single model scores 24.17 BLEU, which is 3.4 BLEU better than a previous competitive baseline . On production data, our implementation is even more effective. Human evaluations show that GNMT has reduced translation errors by 60% compared to our previous phrase-based system on many pairs of languages: English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish, and English ↔↔\\leftrightarrow Chinese. Additional experiments suggest the quality of the resulting translation system gets closer to that of average human translators. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_4",
"text": " Statistical Machine Translation (SMT) has been the dominant translation paradigm for decades (3, 4, 5). Practical implementations of SMT are generally phrase-based systems (PBMT) which translate sequences of words or phrases where the lengths may differ . ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_5",
"text": " Even prior to the advent of direct Neural Machine Translation, neural networks have been used as a component within SMT systems with some success. Perhaps one of the most notable attempts involved the use of a joint language model to learn phrase representations which yielded an impressive improvement when combined with phrase-based translation. This approach, however, still makes use of phrase-based translation systems at its core, and therefore inherits their shortcomings. Other proposed approaches for learning phrase representations or learning end-to-end translation with neural networks offered encouraging hints, but ultimately delivered worse overall accuracy compared to standard phrase-based systems. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_6",
"text": " The concept of end-to-end learning for machine translation has been attempted in the past (e.g., ) with limited success. Following seminal papers in the area (41, 2), NMT translation quality has crept closer to the level of phrase-based translation systems for common research benchmarks. Perhaps the first successful attempt at surpassing phrase-based translation was described in . On WMT’14 English-to-French, this system achieved a 0.5 BLEU improvement compared to a state-of-the-art phrase-based system. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_7",
"text": " Since then, many novel techniques have been proposed to further improve NMT: using an attention mechanism to deal with rare words , a mechanism to model translation coverage , multi-task and semi-supervised training to incorporate more data (14, 29), a character decoder , a character encoder , subword units also to deal with rare word outputs, different kinds of attention mechanisms , and sentence-level loss minimization (39, 34). While the translation accuracy of these systems has been encouraging, systematic comparison with large scale, production quality phrase-based translation systems has been lacking. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_8",
"text": " Our model (see Figure 1) follows the common sequence-to-sequence learning framework with attention . It has three components: an encoder network, a decoder network, and an attention network. The encoder transforms a source sentence into a list of vectors, one vector per input symbol. Given this list of vectors, the decoder produces one symbol at a time, until the special end-of-sentence symbol (EOS) is produced. The encoder and decoder are connected through an attention module which allows the decoder to focus on different regions of the source sentence during the course of decoding. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_9",
"text": " For notation, we use bold lower case to denote vectors (e.g., 𝐯,𝐨𝐢𝐯subscript𝐨𝐢\\mathbf{v,o_{i}}), bold upper case to represent matrices (e.g., 𝐔,𝐖𝐔𝐖\\mathbf{U,W}), cursive upper case to represent sets (e.g., 𝒱,𝒯𝒱𝒯\\mathscr{V,T}), capital letters to represent sequences (e.g. X𝑋X, Y𝑌Y), and lower case to represent individual symbols in a sequence, (e.g., x1subscript𝑥1x_{1}, x2subscript𝑥2x_{2}). ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_10",
"text": " Let (X,Y)𝑋𝑌(X,Y) be a source and target sentence pair. Let X=x1,x2,x3,…,xM𝑋subscript𝑥1subscript𝑥2subscript𝑥3…subscript𝑥𝑀X=x_{1},x_{2},x_{3},...,x_{M} be the sequence of M𝑀M symbols in the source sentence and let Y=y1,y2,y3,…,yN𝑌subscript𝑦1subscript𝑦2subscript𝑦3…subscript𝑦𝑁Y=y_{1},y_{2},y_{3},...,y_{N} be the sequence of N𝑁N symbols in the target sentence. The encoder is simply a function of the following form: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_11",
"text": " 𝐱𝟏,𝐱𝟐,…,𝐱𝐌=EncoderRNN(x1,x2,x3,…,xM)subscript𝐱1subscript𝐱2…subscript𝐱𝐌𝐸𝑛𝑐𝑜𝑑𝑒𝑟𝑅𝑁𝑁subscript𝑥1subscript𝑥2subscript𝑥3…subscript𝑥𝑀\\mathbf{x_{1},x_{2},...,x_{M}}=EncoderRNN(x_{1},x_{2},x_{3},...,x_{M}) (1) In this equation, 𝐱𝟏,𝐱𝟐,…,𝐱𝐌subscript𝐱1subscript𝐱2…subscript𝐱𝐌\\mathbf{x_{1},x_{2},...,x_{M}} is a list of fixed size vectors. The number of members in the list is the same as the number of symbols in the source sentence (M𝑀M in this example). Using the chain rule the conditional probability of the sequence P(Y|X)𝑃conditional𝑌𝑋P(Y|X) can be decomposed as: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_12",
"text": " P(Y|X)=P(Y|𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)=∏i=1NP(yi|y0,y1,y2,…,yi−1;𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)𝑃conditional𝑌𝑋𝑃conditional𝑌subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌superscriptsubscriptproduct𝑖1𝑁𝑃conditionalsubscript𝑦𝑖subscript𝑦0subscript𝑦1subscript𝑦2…subscript𝑦𝑖1subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌\\begin{split}P(Y|X)&=P(Y|\\mathbf{x_{1},x_{2},x_{3},...,x_{M}})\\\\ &=\\prod_{i=1}^{N}P(y_{i}|y_{0},y_{1},y_{2},...,y_{i-1};\\mathbf{x_{1},x_{2},x_{3},...,x_{M}})\\end{split} (2) where y0subscript𝑦0y_{0} is a special “beginning of sentence” symbol that is prepended to every target sentence. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_13",
"text": " During inference we calculate the probability of the next symbol given the source sentence encoding and the decoded target sequence so far: P(yi|y0,y1,y2,y3,…,yi−1;𝐱𝟏,𝐱𝟐,𝐱𝟑,…,𝐱𝐌)𝑃conditionalsubscript𝑦𝑖subscript𝑦0subscript𝑦1subscript𝑦2subscript𝑦3…subscript𝑦𝑖1subscript𝐱1subscript𝐱2subscript𝐱3…subscript𝐱𝐌P(y_{i}|y_{0},y_{1},y_{2},y_{3},...,y_{i-1};\\mathbf{x_{1}},\\mathbf{x_{2}},\\mathbf{x_{3}},...,\\mathbf{x_{M}}) (3) Our decoder is implemented as a combination of an RNN network and a softmax layer. The decoder RNN network produces a hidden state 𝐲𝐢subscript𝐲𝐢\\mathbf{y_{i}} for the next symbol to be predicted, which then goes through the softmax layer to generate a probability distribution over candidate output symbols. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
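The factorization in equations (2)-(3) corresponds to the usual step-by-step decoding loop. A minimal, framework-agnostic sketch (encoder_rnn, decoder_step and softmax are placeholder components, not the GNMT API; greedy argmax stands in for the beam search actually used):

```python
# Hypothetical sketch of the decoding loop implied by equations (2)-(3).

def greedy_decode(source_symbols, encoder_rnn, decoder_step, softmax,
                  bos_id, eos_id, max_len=200):
    encodings = encoder_rnn(source_symbols)   # x_1 ... x_M, one vector per source symbol
    y, state = bos_id, None
    output = []
    for _ in range(max_len):
        # P(y_i | y_0 ... y_{i-1}; x_1 ... x_M): one decoder step plus a softmax
        hidden, state = decoder_step(y, state, encodings)
        probs = softmax(hidden)               # list of probabilities over the vocabulary
        y = max(range(len(probs)), key=probs.__getitem__)  # argmax next symbol
        if y == eos_id:
            break
        output.append(y)
    return output
```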
{
"id": "1609.08144_all_14",
"text": " In our experiments we found that for NMT systems to achieve good accuracy, both the encoder and decoder RNNs have to be deep enough to capture subtle irregularities in the source and target languages. This observation is similar to previous observations that deep LSTMs significantly outperform shallow LSTMs . In that work, each additional layer reduced perplexity by nearly 10%. Similar to , we use a deep stacked Long Short Term Memory (LSTM) network for both the encoder RNN and the decoder RNN. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_15",
"text": " Our attention module is similar to . More specifically, let 𝐲i−1subscript𝐲𝑖1\\mathbf{y}_{i-1} be the decoder-RNN output from the past decoding time step (in our implementation, we use the output from the bottom decoder layer). Attention context 𝐚isubscript𝐚𝑖\\mathbf{a}_{i} for the current time step is computed according to the following formulas: st=AttentionFunction(𝐲i−1,𝐱t)∀t,1≤t≤Mpt=exp(st)/∑t=1Mexp(st)∀t,1≤t≤M𝐚i=∑t=1Mpt.𝐱t\\begin{split}s_{t}&=AttentionFunction(\\mathbf{y}_{i-1},\\mathbf{x}_{t})\\quad\\forall t,\\quad 1\\leq t\\leq M\\\\ p_{t}&=\\exp(s_{t})/\\sum_{t=1}^{M}\\exp(s_{t})\\quad\\quad\\forall t,\\quad 1\\leq t\\leq M\\\\ \\mathbf{a}_{i}&=\\sum_{t=1}^{M}p_{t}.\\mathbf{x}_{t}\\end{split} (4) where AttentionFunction𝐴𝑡𝑡𝑒𝑛𝑡𝑖𝑜𝑛𝐹𝑢𝑛𝑐𝑡𝑖𝑜𝑛AttentionFunction in our implementation is a feed forward network with one hidden layer. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
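Equation (4) is a softmax-weighted sum of encoder outputs. A small NumPy sketch under assumed shapes; the paper only states that AttentionFunction is a feed-forward network with one hidden layer, so the tanh scorer below is an assumption, not the GNMT implementation:

```python
import numpy as np

def attention_context(y_prev, encoder_outputs, W1, W2, v):
    """Additive attention in the spirit of equation (4).

    y_prev:          (d,)   previous bottom-decoder output y_{i-1}
    encoder_outputs: (M, d) top-encoder outputs x_1 ... x_M
    W1, W2: (h, d) and v: (h,) -- assumed parameters of the one-hidden-layer scorer.
    """
    # s_t = v^T tanh(W1 y_{i-1} + W2 x_t)  -- one common form of "AttentionFunction"
    scores = np.tanh(encoder_outputs @ W2.T + y_prev @ W1.T) @ v  # (M,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                          # p_t, softmax over t
    return probs @ encoder_outputs                                # a_i = sum_t p_t * x_t
```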
{
"id": "1609.08144_all_16",
"text": " As mentioned above, deep stacked LSTMs often give better accuracy over shallower models. However, simply stacking more layers of LSTM works only to a certain number of layers, beyond which the network becomes too slow and difficult to train, likely due to exploding and vanishing gradient problems (33, 22). In our experience with large-scale translation tasks, simple stacked LSTM layers work well up to 4 layers, barely with 6 layers, and very poorly beyond 8 layers. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_17",
"text": " Motivated by the idea of modeling differences between an intermediate layer’s output and the targets, which has shown to work well for many projects in the past (16, 21, 40), we introduce residual connections among the LSTM layers in a stack (see Figure 2). More concretely, let LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} and LSTMi+1subscriptLSTM𝑖1\\mathrm{LSTM}_{i+1} be the i𝑖i-th and (i+1)𝑖1(i+1)-th LSTM layers in a stack, whose parameters are 𝐖isuperscript𝐖𝑖\\mathbf{W}^{i} and 𝐖i+1superscript𝐖𝑖1\\mathbf{W}^{i+1} respectively. At the t𝑡t-th time step, for the stacked LSTM without residual connections, we have: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_18",
"text": " 𝐜ti,𝐦ti=LSTMi(𝐜t−1i,𝐦t−1i,𝐱ti−1;𝐖i)𝐱ti=𝐦ti𝐜ti+1,𝐦ti+1=LSTMi+1(𝐜t−1i+1,𝐦t−1i+1,𝐱ti;𝐖i+1)formulae-sequencesuperscriptsubscript𝐜𝑡𝑖superscriptsubscript𝐦𝑡𝑖subscriptLSTM𝑖superscriptsubscript𝐜𝑡1𝑖superscriptsubscript𝐦𝑡1𝑖superscriptsubscript𝐱𝑡𝑖1superscript𝐖𝑖superscriptsubscript𝐱𝑡𝑖superscriptsubscript𝐦𝑡𝑖superscriptsubscript𝐜𝑡𝑖1superscriptsubscript𝐦𝑡𝑖1subscriptLSTM𝑖1superscriptsubscript𝐜𝑡1𝑖1superscriptsubscript𝐦𝑡1𝑖1superscriptsubscript𝐱𝑡𝑖superscript𝐖𝑖1\\begin{split}\\mathbf{c}_{t}^{i},\\mathbf{m}_{t}^{i}&=\\mathrm{LSTM}_{i}(\\mathbf{c}_{t-1}^{i},\\mathbf{m}_{t-1}^{i},\\mathbf{x}_{t}^{i-1};\\mathbf{W}^{i})\\\\ \\mathbf{x}_{t}^{i}&=\\mathbf{m}_{t}^{i}\\\\ \\mathbf{c}_{t}^{i+1},\\mathbf{m}_{t}^{i+1}&=\\mathrm{LSTM}_{i+1}(\\mathbf{c}_{t-1}^{i+1},\\mathbf{m}_{t-1}^{i+1},\\mathbf{x}_{t}^{i};\\mathbf{W}^{i+1})\\end{split} (5) where 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i} is the input to LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} at time step t𝑡t, and 𝐦tisuperscriptsubscript𝐦𝑡𝑖\\mathbf{m}_{t}^{i} and 𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} are the hidden states and memory states of LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} at time step t𝑡t, respectively. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_19",
"text": " With residual connections between LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} and LSTMi+1subscriptLSTM𝑖1\\mathrm{LSTM}_{i+1}, the above equations become: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_20",
"text": " 𝐜ti,𝐦ti=LSTMi(𝐜t−1i,𝐦t−1i,𝐱ti−1;𝐖i)𝐱ti=𝐦ti+𝐱ti−1𝐜ti+1,𝐦ti+1=LSTMi+1(𝐜t−1i+1,𝐦t−1i+1,𝐱ti;𝐖i+1)formulae-sequencesuperscriptsubscript𝐜𝑡𝑖superscriptsubscript𝐦𝑡𝑖subscriptLSTM𝑖superscriptsubscript𝐜𝑡1𝑖superscriptsubscript𝐦𝑡1𝑖superscriptsubscript𝐱𝑡𝑖1superscript𝐖𝑖superscriptsubscript𝐱𝑡𝑖superscriptsubscript𝐦𝑡𝑖superscriptsubscript𝐱𝑡𝑖1superscriptsubscript𝐜𝑡𝑖1superscriptsubscript𝐦𝑡𝑖1subscriptLSTM𝑖1superscriptsubscript𝐜𝑡1𝑖1superscriptsubscript𝐦𝑡1𝑖1superscriptsubscript𝐱𝑡𝑖superscript𝐖𝑖1\\begin{split}\\mathbf{c}_{t}^{i},\\mathbf{m}_{t}^{i}&=\\mathrm{LSTM}_{i}(\\mathbf{c}_{t-1}^{i},\\mathbf{m}_{t-1}^{i},\\mathbf{x}_{t}^{i-1};\\mathbf{W}^{i})\\\\ \\mathbf{x}_{t}^{i}&=\\mathbf{m}_{t}^{i}+\\mathbf{x}_{t}^{i-1}\\\\ \\mathbf{c}_{t}^{i+1},\\mathbf{m}_{t}^{i+1}&=\\mathrm{LSTM}_{i+1}(\\mathbf{c}_{t-1}^{i+1},\\mathbf{m}_{t-1}^{i+1},\\mathbf{x}_{t}^{i};\\mathbf{W}^{i+1})\\end{split} (6) Residual connections greatly improve the gradient flow in the backward pass, which allows us to train very deep encoder and decoder networks. In most of our experiments, we use 8 LSTM layers for the encoder and decoder, though residual connections can allow us to train substantially deeper networks (similar to what was observed in ). ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
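The only change from equation (5) to equation (6) is the residual term added to the next layer's input. A sketch of one time step through such a stack (lstm_cells is a placeholder list of per-layer step functions operating on vectors such as NumPy arrays; whether the first layer also gets a residual from the raw input is a detail left out here):

```python
# Hypothetical sketch of one time step through a residually connected LSTM stack (eq. 6).

def stacked_residual_lstm_step(x_t, states, lstm_cells):
    """x_t: input vector at time t; states: list of (c, m) per layer; lstm_cells: step fns."""
    new_states = []
    layer_input = x_t
    for i, cell in enumerate(lstm_cells):
        c_prev, m_prev = states[i]
        c, m = cell(c_prev, m_prev, layer_input)  # LSTM_i(c_{t-1}^i, m_{t-1}^i, x_t^{i-1})
        new_states.append((c, m))
        if i == 0:
            layer_input = m                       # plain stacking into the first connection
        else:
            layer_input = m + layer_input         # x_t^i = m_t^i + x_t^{i-1}  (residual)
    return layer_input, new_states
```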
{
"id": "1609.08144_all_21",
"text": " For translation systems, the information required to translate certain words on the output side can appear anywhere on the source side. Often the source side information is approximately left-to-right, similar to the target side, but depending on the language pair the information for a particular output word can be distributed and even be split up in certain regions of the input side. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_22",
"text": " To have the best possible context at each point in the encoder network it makes sense to use a bi-directional RNN for the encoder, which was also used in . To allow for maximum possible parallelization during computation (to be discussed in more detail in section 3.3), bi-directional connections are only used for the bottom encoder layer – all other encoder layers are uni-directional. Figure 3 illustrates our use of bi-directional LSTMs at the bottom encoder layer. The layer LSTMfsubscriptLSTM𝑓\\mathrm{LSTM}_{f} processes the source sentence from left to right, while the layer LSTMbsubscriptLSTM𝑏\\mathrm{LSTM}_{b} processes the source sentence from right to left. Outputs from LSTMfsubscriptLSTM𝑓\\mathrm{LSTM}_{f} (𝐱𝐭𝐟→→superscriptsubscript𝐱𝐭𝐟\\overrightarrow{\\mathbf{x_{t}^{f}}}) and LSTMbsubscriptLSTM𝑏\\mathrm{LSTM}_{b} (𝐱𝐭𝐛←←superscriptsubscript𝐱𝐭𝐛\\overleftarrow{\\mathbf{x_{t}^{b}}}) are first concatenated and then fed to the next layer LSTM1subscriptLSTM1\\mathrm{LSTM}_{1}. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
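A sketch of the bi-directional bottom encoder layer described above (run_lstm_f and run_lstm_b are placeholder RNNs that each return one output vector per source position; this is illustrative, not the GNMT code):

```python
import numpy as np

def bidirectional_bottom_layer(source_embeddings, run_lstm_f, run_lstm_b):
    """source_embeddings: list of M vectors; returns M concatenated forward/backward outputs."""
    forward = run_lstm_f(source_embeddings)                # x_1 ... x_M, left to right
    backward = run_lstm_b(source_embeddings[::-1])[::-1]   # right to left, re-aligned to positions
    # concatenate per position: this is the input to the next (uni-directional) layer LSTM_1
    return [np.concatenate([f, b]) for f, b in zip(forward, backward)]
```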
{
"id": "1609.08144_all_23",
"text": " Due to the complexity of our model, we make use of both model parallelism and data parallelism to speed up training. Data parallelism is straightforward: we train n𝑛n model replicas concurrently using a Downpour SGD algorithm . The n𝑛n replicas all share one copy of model parameters, with each replica asynchronously updating the parameters using a combination of Adam and SGD algorithms. In our experiments, n𝑛n is often around 10. Each replica works on a mini-batch of m𝑚m sentence pairs at a time, which is often 128 in our experiments. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_24",
"text": " In addition to data parallelism, model parallelism is used to improve the speed of the gradient computation on each replica. The encoder and decoder networks are partitioned along the depth dimension and are placed on multiple GPUs, effectively running each layer on a different GPU. Since all but the first encoder layer are uni-directional, layer i+1𝑖1i+1 can start its computation before layer i𝑖i is fully finished, which improves training speed. The softmax layer is also partitioned, with each partition responsible for a subset of symbols in the output vocabulary. Figure 1 shows more details of how partitioning is done. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_25",
"text": " Model parallelism places certain constraints on the model architectures we can use. For example, we cannot afford to have bi-directional LSTM layers for all the encoder layers, since doing so would reduce parallelism among subsequent layers, as each layer would have to wait until both forward and backward directions of the previous layer have finished. This would effectively constrain us to make use of only 2 GPUs in parallel (one for the forward direction and one for the backward direction). For the attention portion of the model, we chose to align the bottom decoder output to the top encoder output to maximize parallelism when running the decoder network. Had we aligned the top decoder layer to the top encoder layer, we would have removed all parallelism in the decoder network and would not benefit from using more than one GPU for decoding. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_26",
"text": " Neural Machine Translation models often operate with fixed word vocabularies even though translation is fundamentally an open vocabulary problem (names, numbers, dates etc.). There are two broad categories of approaches to address the translation of out-of-vocabulary (OOV) words. One approach is to simply copy rare words from source to target (as most rare words are names or numbers where the correct translation is just a copy), either based on the attention model , using an external alignment model , or even using a more complicated special purpose pointing network . Another broad category of approaches is to use sub-word units, e.g., chararacters , mixed word/characters , or more intelligent sub-words . ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_27",
"text": " Our most successful approach falls into the second category (sub-word units), and we adopt the wordpiece model (WPM) implementation initially developed to solve a Japanese/Korean segmentation problem for the Google speech recognition system . This approach is completely data-driven and guaranteed to generate a deterministic segmentation for any possible sequence of characters. It is similar to the method used in to deal with rare words in Neural Machine Translation. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_28",
"text": " For processing arbitrary words, we first break words into wordpieces given a trained wordpiece model. Special word boundary symbols are added before training of the model such that the original word sequence can be recovered from the wordpiece sequence without ambiguity. At decoding time, the model first produces a wordpiece sequence, which is then converted into the corresponding word sequence. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_29",
"text": " Here is an example of a word sequence and the corresponding wordpiece sequence: • Word: Jet makers feud over seat width with big orders at stake • wordpieces: _J et _makers _fe ud _over _seat _width _with _big _orders _at _stake ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_30",
"text": " In the above example, the word “Jet” is broken into two wordpieces “_J” and “et”, and the word “feud” is broken into two wordpieces “_fe” and “ud”. The other words remain as single wordpieces. “_” is a special character added to mark the beginning of a word. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
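The entry above explains how the "_" boundary marker makes the wordpiece segmentation reversible. As a minimal illustration (not from the paper; the function name is my own), recovering the word sequence from a wordpiece sequence can be sketched as:

```python
def wordpieces_to_words(wordpieces):
    """Recover the word sequence from wordpieces, where "_" marks the start of a word."""
    joined = "".join(wordpieces)                 # e.g. "_Jet_makers_feud_over_seat"
    return [w for w in joined.split("_") if w]   # drop the empty string before the first "_"

print(wordpieces_to_words(["_J", "et", "_makers", "_fe", "ud", "_over", "_seat"]))
# ['Jet', 'makers', 'feud', 'over', 'seat']
```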
{
"id": "1609.08144_all_31",
"text": " The wordpiece model is generated using a data-driven approach to maximize the language-model likelihood of the training data, given an evolving word definition. Given a training corpus and a number of desired tokens D𝐷D, the optimization problem is to select D𝐷D wordpieces such that the resulting corpus is minimal in the number of wordpieces when segmented according to the chosen wordpiece model. Our greedy algorithm to this optimization problem is similar to and is described in more detail in . Compared to the original implementation used in , we use a special symbol only at the beginning of the words and not at both ends. We also cut the number of basic characters to a manageable number depending on the data (roughly 500 for Western languages, more for Asian languages) and map the rest to a special unknown character to avoid polluting the given wordpiece vocabulary with very rare characters. We find that using a total vocabulary of between 8k and 32k wordpieces achieves both good accuracy (BLEU scores) and fast decoding speed across all pairs of language pairs we have tried. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_32",
"text": " As mentioned above, in translation it often makes sense to copy rare entity names or numbers directly from the source to the target. To facilitate this type of direct copying, we always use a shared wordpiece model for both the source language and target language. Using this approach, it is guaranteed that the same string in source and target sentence will be segmented in exactly the same way, making it easier for the system to learn to copy these tokens. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_33",
"text": " Wordpieces achieve a balance between the flexibility of characters and efficiency of words. We also find that our models get better overall BLEU scores when using wordpieces – possibly due to the fact that our models now deal efficiently with an essentially infinite vocabulary without resorting to characters only. The latter would make the average lengths of the input and output sequences much longer, and therefore would require more computation. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_34",
"text": " A second approach we use is the mixed word/character model. As in a word model, we keep a fixed-size word vocabulary. However, unlike in a conventional word model where OOV words are collapsed into a single UNK symbol, we convert OOV words into the sequence of its constituent characters. Special prefixes are prepended to the characters, to 1) show the location of the characters in a word, and 2) to distinguish them from normal in-vocabulary characters. There are three prefixes: <B>,<M>, and <E>, indicating beginning of the word, middle of the word and end of the word, respectively. For example, let’s assume the word Miki is not in the vocabulary. It will be preprocessed into a sequence of special tokens: <B>M <M>i <M>k <E>i. The process is done on both the source and the target sentences. During decoding, the output may also contain sequences of special tokens. With the prefixes, it is trivial to reverse the tokenization to the original words as part of a post-processing step. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
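The entry above spells out the <B>/<M>/<E> prefix convention for out-of-vocabulary words. Below is a minimal sketch of the expansion and its reversal; the function names are illustrative, and the handling of single-character OOV words is my own assumption since the passage does not specify it.

```python
def oov_to_char_tokens(word):
    """Expand an out-of-vocabulary word into position-prefixed character tokens."""
    tokens = []
    for i, ch in enumerate(word):
        if i == 0:
            prefix = "<B>"              # single-character words also get <B> here (assumption)
        elif i == len(word) - 1:
            prefix = "<E>"
        else:
            prefix = "<M>"
        tokens.append(prefix + ch)
    return tokens

def char_tokens_to_word(tokens):
    """Reverse the tokenization by stripping the three-character prefixes."""
    return "".join(t[3:] for t in tokens)

assert oov_to_char_tokens("Miki") == ["<B>M", "<M>i", "<M>k", "<E>i"]
assert char_tokens_to_word(oov_to_char_tokens("Miki")) == "Miki"
```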
{
"id": "1609.08144_all_35",
"text": " Given a dataset of parallel text containing N𝑁N input-output sequence pairs, denoted 𝒟≡{(X(i),Y∗(i))}i=1N𝒟superscriptsubscriptsuperscript𝑋𝑖superscript𝑌absent𝑖𝑖1𝑁\\mathcal{D}\\equiv\\left\\{(X^{(i)},Y^{*(i)})\\right\\}_{i=1}^{N}, standard maximum-likelihood training aims at maximizing the sum of log probabilities of the ground-truth outputs given the corresponding inputs, 𝒪ML(𝜽)=∑i=1NlogPθ(Y∗(i)∣X(i)).subscript𝒪ML𝜽superscriptsubscript𝑖1𝑁subscript𝑃𝜃conditionalsuperscript𝑌absent𝑖superscript𝑋𝑖\\mathcal{O}_{\\mathrm{ML}}(\\bm{\\mathbf{\\theta}})=\\sum_{i=1}^{N}\\log{P}_{\\theta}(Y^{*(i)}\\mid X^{(i)})~{}. (7) The main problem with this objective is that it does not reflect the task reward function as measured by the BLEU score in translation. Further, this objective does not explicitly encourage a ranking among incorrect output sequences – where outputs with higher BLEU scores should still obtain higher probabilities under the model – since incorrect outputs are never observed during training. In other words, using maximum-likelihood training only, the model will not learn to be robust to errors made during decoding since they are never observed, which is quite a mismatch between the training and testing procedure. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_36",
"text": " Several recent papers (34, 39, 32) have considered different ways of incorporating the task reward into optimization of neural sequence-to-sequence models. In this work, we also attempt to refine a model pre-trained on the maximum likelihood objective to directly optimize for the task reward. We show that, even on large datasets, refinement of state-of-the-art maximum-likelihood models using task reward improves the results considerably. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_37",
"text": " We consider model refinement using the expected reward objective (also used in ), which can be expressed as 𝒪RL(𝜽)=∑i=1N∑Y∈𝒴Pθ(Y∣X(i))r(Y,Y∗(i)).subscript𝒪RL𝜽superscriptsubscript𝑖1𝑁subscript𝑌𝒴subscript𝑃𝜃conditional𝑌superscript𝑋𝑖𝑟𝑌superscript𝑌absent𝑖\\mathcal{O}_{\\mathrm{RL}}(\\bm{\\mathbf{\\theta}})=\\sum_{i=1}^{N}\\sum_{Y\\in\\mathcal{Y}}{P}_{\\theta}(Y\\mid X^{(i)})~{}r(Y,Y^{*(i)}). (8) Here, r(Y,Y∗(i))𝑟𝑌superscript𝑌absent𝑖r(Y,Y^{*(i)}) denotes the per-sentence score, and we are computing an expectation over all of the output sentences Y𝑌Y, up to a certain length. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_38",
"text": " The BLEU score has some undesirable properties when used for single sentences, as it was designed to be a corpus measure. We therefore use a slightly different score for our RL experiments which we call the “GLEU score”. For the GLEU score, we record all sub-sequences of 1, 2, 3 or 4 tokens in output and target sequence (n-grams). We then compute a recall, which is the ratio of the number of matching n-grams to the number of total n-grams in the target (ground truth) sequence, and a precision, which is the ratio of the number of matching n-grams to the number of total n-grams in the generated output sequence. Then GLEU score is simply the minimum of recall and precision. This GLEU score’s range is always between 0 (no matches) and 1 (all match) and it is symmetrical when switching output and target. According to our experiments, GLEU score correlates quite well with the BLEU metric on a corpus level but does not have its drawbacks for our per sentence reward objective. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
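The GLEU description above is concrete enough to sketch directly. The implementation below counts matching n-grams with clipped counts (one reasonable reading of "matching n-grams"; the passage does not say whether counts are clipped) and returns the minimum of precision and recall:

```python
from collections import Counter

def gleu(output_tokens, target_tokens, max_n=4):
    """Per-sentence GLEU: min of n-gram precision and n-gram recall (n = 1..4)."""
    def ngram_counts(tokens):
        counts = Counter()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[tuple(tokens[i:i + n])] += 1
        return counts

    out, tgt = ngram_counts(output_tokens), ngram_counts(target_tokens)
    matches = sum((out & tgt).values())              # clipped count of matching n-grams
    precision = matches / max(sum(out.values()), 1)
    recall = matches / max(sum(tgt.values()), 1)
    return min(precision, recall)

print(gleu("the cat sat".split(), "the cat sat down".split()))  # 0.6
```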
{
"id": "1609.08144_all_39",
"text": " As is common practice in reinforcement learning, we subtract the mean reward from r(Y,Y∗(i))𝑟𝑌superscript𝑌absent𝑖r(Y,Y^{*(i)}) in equation 8. The mean is estimated to be the sample mean of m𝑚m sequences drawn independently from distribution Pθ(Y∣X(i))subscript𝑃𝜃conditional𝑌superscript𝑋𝑖{P}_{\\theta}(Y\\mid X^{(i)}). In our implementation, m𝑚m is set to be 15. To further stabilize training, we optimize a linear combination of ML (equation 7) and RL (equation 8) objectives as follows: 𝒪Mixed(𝜽)=α∗𝒪ML(𝜽)+𝒪RL(𝜽)subscript𝒪Mixed𝜽𝛼subscript𝒪ML𝜽subscript𝒪RL𝜽\\mathcal{O}_{\\mathrm{Mixed}}(\\bm{\\mathbf{\\theta}})=\\alpha*\\mathcal{O}_{\\mathrm{ML}}(\\bm{\\mathbf{\\theta}})+\\mathcal{O}_{\\mathrm{RL}}(\\bm{\\mathbf{\\theta}}) (9) α𝛼\\alpha in our implementation is typically set to be 0.0170.0170.017. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
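As a rough sketch of how the mixed objective of equation 9 might be assembled from sampled translations, the surrogate below subtracts the sample-mean baseline from the rewards. This is my own illustrative reading, not the paper's implementation; in practice the RL term would be a REINFORCE-style estimate computed inside a training framework with automatic differentiation.

```python
import numpy as np

def mixed_objective_surrogate(sample_log_probs, sample_rewards, ml_log_prob, alpha=0.017):
    # sample_log_probs: log P(Y_k | X) for m sampled translations (m = 15 in the paper).
    # sample_rewards:   per-sentence GLEU scores r(Y_k, Y*) for those samples.
    # ml_log_prob:      log P(Y* | X) of the ground-truth translation (the ML term).
    rewards = np.asarray(sample_rewards, dtype=float)
    baseline = rewards.mean()                              # subtract the sample-mean reward
    rl_term = np.mean((rewards - baseline) * np.asarray(sample_log_probs, dtype=float))
    return alpha * ml_log_prob + rl_term                   # maximize this (negate for a loss)
```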
{
"id": "1609.08144_all_40",
"text": " In our setup, we first train a model using the maximum likelihood objective (equation 7) until convergence. We then refine this model using a mixed maximum likelihood and expected reward objective (equation 9), until BLEU score on a development set is no longer improving. The second step is optional. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_41",
"text": " One of the main challenges in deploying our Neural Machine Translation model to our interactive production translation service is that it is computationally intensive at inference, making low latency translation difficult, and high volume deployment computationally expensive. Quantized inference using reduced precision arithmetic is one technique that can significantly reduce the cost of inference for these models, often providing efficiency improvements on the same computational devices. For example, in , it is demonstrated that a convolutional neural network model can be sped up by a factor of 4-6 with minimal loss on classification accuracy on the ILSVRC-12 benchmark. In , it is demonstrated that neural network model weights can be quantized to only three states, -1, 0, and +1. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_42",
"text": " Many of those previous studies (19, 20, 43, 27) however mostly focus on CNN models with relatively few layers. Deep LSTMs with long sequences pose a novel challenge in that quantization errors can be significantly amplified after many unrolled steps or after going through a deep LSTM stack. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_43",
"text": " In this section, we present our approach to speed up inference with quantized arithmetic. Our solution is tailored towards the hardware options available at Google. To reduce quantization errors, additional constraints are added to our model during training so that it is quantizable with minimal impact on the output of the model. That is, once a model is trained with these additional constraints, it can be subsequently quantized without loss to translation quality. Our experimental results suggest that those additional constraints do not hurt model convergence nor the quality of a model once it has converged. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_44",
"text": " Recall from equation 6 that in an LSTM stack with residual connections there are two accumulators: 𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} along the time axis and 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i} along the depth axis. In theory, both of the accumulators are unbounded, but in practice, we noticed their values remain quite small. For quantized inference, we explicitly constrain the values of these accumulators to be within (-δ𝛿\\delta, δ𝛿\\delta) to guarantee a certain range that can be used for quantization later. The forward computation of an LSTM stack with residual connections is modified to the following: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_45",
"text": " 𝐜′ti,𝐦ti=LSTMi(𝐜t−1i,𝐦t−1i,𝐱ti−1;𝐖i)𝐜ti=max(−δ,min(δ,𝐜′ti))𝐱′ti=𝐦ti+𝐱ti−1𝐱ti=max(−δ,min(δ,𝐱′ti))𝐜′ti+1,𝐦ti+1=LSTMi+1(𝐜t−1i+1,𝐦t−1i+1,𝐱ti;𝐖i+1)𝐜ti+1=max(−δ,min(δ,𝐜′ti+1))formulae-sequencesuperscriptsubscriptsuperscript𝐜′𝑡𝑖superscriptsubscript𝐦𝑡𝑖subscriptLSTM𝑖superscriptsubscript𝐜𝑡1𝑖superscriptsubscript𝐦𝑡1𝑖superscriptsubscript𝐱𝑡𝑖1superscript𝐖𝑖superscriptsubscript𝐜𝑡𝑖𝛿𝛿superscriptsubscriptsuperscript𝐜′𝑡𝑖superscriptsubscriptsuperscript𝐱′𝑡𝑖superscriptsubscript𝐦𝑡𝑖superscriptsubscript𝐱𝑡𝑖1superscriptsubscript𝐱𝑡𝑖𝛿𝛿superscriptsubscriptsuperscript𝐱′𝑡𝑖superscriptsubscriptsuperscript𝐜′𝑡𝑖1superscriptsubscript𝐦𝑡𝑖1subscriptLSTM𝑖1superscriptsubscript𝐜𝑡1𝑖1superscriptsubscript𝐦𝑡1𝑖1superscriptsubscript𝐱𝑡𝑖superscript𝐖𝑖1superscriptsubscript𝐜𝑡𝑖1𝛿𝛿superscriptsubscriptsuperscript𝐜′𝑡𝑖1\\begin{split}\\mathbf{c^{\\prime}}_{t}^{i},\\mathbf{m}_{t}^{i}&=\\mathrm{LSTM}_{i}(\\mathbf{c}_{t-1}^{i},\\mathbf{m}_{t-1}^{i},\\mathbf{x}_{t}^{i-1};\\mathbf{W}^{i})\\\\ \\mathbf{c}_{t}^{i}&=\\max(-\\delta,\\min(\\delta,\\mathbf{c^{\\prime}}_{t}^{i}))\\\\ \\mathbf{x^{\\prime}}_{t}^{i}&=\\mathbf{m}_{t}^{i}+\\mathbf{x}_{t}^{i-1}\\\\ \\mathbf{x}_{t}^{i}&=\\max(-\\delta,\\min(\\delta,\\mathbf{x^{\\prime}}_{t}^{i}))\\\\ \\mathbf{c^{\\prime}}_{t}^{i+1},\\mathbf{m}_{t}^{i+1}&=\\mathrm{LSTM}_{i+1}(\\mathbf{c}_{t-1}^{i+1},\\mathbf{m}_{t-1}^{i+1},\\mathbf{x}_{t}^{i};\\mathbf{W}^{i+1})\\\\ \\mathbf{c}_{t}^{i+1}&=\\max(-\\delta,\\min(\\delta,\\mathbf{c^{\\prime}}_{t}^{i+1}))\\end{split} (10) Let us expand LSTMisubscriptLSTM𝑖\\mathrm{LSTM}_{i} in equation 10 to include the internal gating logic. For brevity, we drop all the superscripts i𝑖i. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
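A small sketch of one layer of the clipped forward pass in equation 10, assuming an lstm_cell callable that returns the new cell state and output (the interface is my assumption; this is illustrative only):

```python
import numpy as np

def clipped_residual_layer(lstm_cell, c_prev, m_prev, x_below, delta=1.0):
    # lstm_cell(c_prev, m_prev, x) -> (c_raw, m), playing the role of LSTM_i in equation 10.
    c_raw, m = lstm_cell(c_prev, m_prev, x_below)
    c = np.clip(c_raw, -delta, delta)          # bound the recurrent accumulator c_t^i
    x = np.clip(m + x_below, -delta, delta)    # bound the residual (depth) accumulator x_t^i
    return c, m, x                             # x feeds the next layer up the stack
```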
{
"id": "1609.08144_all_46",
"text": " 𝐖=(𝐖1,𝐖2,𝐖3,𝐖4,𝐖5,𝐖6,𝐖7,𝐖8)𝐢t=sigmoid(𝐖1𝐱t+𝐖2𝐦t)𝐢′t=tanh(𝐖3𝐱t+𝐖4𝐦t)𝐟t=sigmoid(𝐖5𝐱t+𝐖6𝐦t)𝐨t=sigmoid(𝐖7𝐱t+𝐖8𝐦t)𝐜t=𝐜t−1⊙𝐟t+𝐢′t⊙𝐢t𝐦t=𝐜t⊙𝐨t𝐖subscript𝐖1subscript𝐖2subscript𝐖3subscript𝐖4subscript𝐖5subscript𝐖6subscript𝐖7subscript𝐖8subscript𝐢𝑡sigmoidsubscript𝐖1subscript𝐱𝑡subscript𝐖2subscript𝐦𝑡subscriptsuperscript𝐢′𝑡subscript𝐖3subscript𝐱𝑡subscript𝐖4subscript𝐦𝑡subscript𝐟𝑡sigmoidsubscript𝐖5subscript𝐱𝑡subscript𝐖6subscript𝐦𝑡subscript𝐨𝑡sigmoidsubscript𝐖7subscript𝐱𝑡subscript𝐖8subscript𝐦𝑡subscript𝐜𝑡direct-productsubscript𝐜𝑡1subscript𝐟𝑡direct-productsubscriptsuperscript𝐢′𝑡subscript𝐢𝑡subscript𝐦𝑡direct-productsubscript𝐜𝑡subscript𝐨𝑡\\begin{split}\\mathbf{W}&=(\\mathbf{W}_{1},\\mathbf{W}_{2},\\mathbf{W}_{3},\\mathbf{W}_{4},\\mathbf{W}_{5},\\mathbf{W}_{6},\\mathbf{W}_{7},\\mathbf{W}_{8})\\\\ \\mathbf{i}_{t}&=\\text{sigmoid}(\\mathbf{W}_{1}\\mathbf{x}_{t}+\\mathbf{W}_{2}\\mathbf{m}_{t})\\\\ \\mathbf{i^{\\prime}}_{t}&=\\tanh(\\mathbf{W}_{3}\\mathbf{x}_{t}+\\mathbf{W}_{4}\\mathbf{m}_{t})\\\\ \\mathbf{f}_{t}&=\\text{sigmoid}(\\mathbf{W}_{5}\\mathbf{x}_{t}+\\mathbf{W}_{6}\\mathbf{m}_{t})\\\\ \\mathbf{o}_{t}&=\\text{sigmoid}(\\mathbf{W}_{7}\\mathbf{x}_{t}+\\mathbf{W}_{8}\\mathbf{m}_{t})\\\\ \\mathbf{c}_{t}&=\\mathbf{c}_{t-1}\\odot\\mathbf{f}_{t}+\\mathbf{i^{\\prime}}_{t}\\odot\\mathbf{i}_{t}\\\\ \\mathbf{m}_{t}&=\\mathbf{c}_{t}\\odot\\mathbf{o}_{t}\\end{split} (11) When doing quantized inference, we replace all the floating point operations in equations 10 and 11 with fixed-point integer operations with either 8-bit or 16-bit resolution. The weight matrix 𝐖𝐖\\mathbf{W} above is represented using an 8-bit integer matrix 𝐖𝐐𝐖𝐐\\mathbf{WQ} and a float vector 𝐬𝐬\\mathbf{s}, as shown below: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_47",
"text": " 𝐬i=max(abs(𝐖(i,:)))𝐖𝐐(i,j)=round(𝐖(i,j)/𝐬i×127.0)subscript𝐬𝑖abs𝐖𝑖:𝐖𝐐𝑖𝑗round𝐖𝑖𝑗subscript𝐬𝑖127.0\\begin{split}\\mathbf{s}_{i}&=\\max(\\text{abs}(\\mathbf{W}(i,:)))\\\\ \\mathbf{WQ}(i,j)&=\\text{round}(\\mathbf{W}(i,j)/\\mathbf{s}_{i}\\times 127.0)\\end{split} (12) All accumulator values (𝐜tisuperscriptsubscript𝐜𝑡𝑖\\mathbf{c}_{t}^{i} and 𝐱tisuperscriptsubscript𝐱𝑡𝑖\\mathbf{x}_{t}^{i}) are represented using 16-bit integers representing the range (−δ,δ)𝛿𝛿(-\\delta,\\delta). All matrix multiplications (e.g., 𝐖1𝐱tsubscript𝐖1subscript𝐱𝑡\\mathbf{W}_{1}\\mathbf{x}_{t}, 𝐖2𝐦tsubscript𝐖2subscript𝐦𝑡\\mathbf{W}_{2}\\mathbf{m}_{t}, etc.) in equation 11 are done using 8-bit integer multiplication accumulated into larger accumulators. All other operations, including all the activations (sigmoid, tanh\\tanh) and elementwise operations (⊙direct-product\\odot, ++) are done using 16-bit integer operations. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
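Equation 12 above maps directly to code. A minimal numpy sketch of the row-wise 8-bit quantization and its lossy inverse; the guard against all-zero rows is my own addition, not from the paper:

```python
import numpy as np

def quantize_weights(W):
    """Row-wise 8-bit quantization as in equation 12."""
    s = np.max(np.abs(W), axis=1)                       # per-row scale s_i
    s = np.where(s == 0.0, 1.0, s)                      # guard for all-zero rows (assumption)
    WQ = np.round(W / s[:, None] * 127.0).astype(np.int8)
    return WQ, s

def dequantize(WQ, s):
    return WQ.astype(np.float32) * s[:, None] / 127.0

W = np.random.uniform(-0.04, 0.04, size=(4, 8)).astype(np.float32)
WQ, s = quantize_weights(W)
print(np.max(np.abs(W - dequantize(WQ, s))))            # small round-off error
```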
{
"id": "1609.08144_all_48",
"text": " We now turn our attention to the log-linear softmax layer. During training, given the decoder RNN network output 𝐲𝐭subscript𝐲𝐭\\mathbf{y_{t}}, we compute the probability vector 𝐩𝐭subscript𝐩𝐭\\mathbf{p_{t}} over all candidate output symbols as follows: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_49",
"text": " 𝐯𝐭=𝐖𝐬∗𝐲𝐭𝐯𝐭′=max(−γ,min(γ,𝐯𝐭))𝐩𝐭=softmax(𝐯𝐭′)subscript𝐯𝐭subscript𝐖𝐬subscript𝐲𝐭superscriptsubscript𝐯𝐭′𝛾𝛾subscript𝐯𝐭subscript𝐩𝐭𝑠𝑜𝑓𝑡𝑚𝑎𝑥superscriptsubscript𝐯𝐭′\\begin{split}\\mathbf{v_{t}}&=\\mathbf{W_{s}}*\\mathbf{y_{t}}\\\\ \\mathbf{v_{t}^{\\prime}}&=\\max(-\\gamma,\\min(\\gamma,\\mathbf{v_{t}}))\\\\ \\mathbf{p_{t}}&=softmax(\\mathbf{v_{t}^{\\prime}})\\end{split} (13) In equation 13, 𝐖𝐬subscript𝐖𝐬\\mathbf{W_{s}} is the weight matrix for the linear layer, which has the same number of rows as the number of symbols in the target vocabulary with each row corresponding to one unique target symbol. 𝐯𝐯\\mathbf{v} represents the raw logits, which are first clipped to be between −γ𝛾-\\gamma and γ𝛾\\gamma and then normalized into a probability vector 𝐩𝐩\\mathbf{p}. Input 𝐲𝐭subscript𝐲𝐭\\mathbf{y_{t}} is guaranteed to be between −δ𝛿-\\delta and δ𝛿\\delta due to the quantization scheme we applied to the decoder RNN. The clipping range γ𝛾\\gamma for the logits 𝐯𝐯\\mathbf{v} is determined empirically, and in our case, it is set to 252525. In quantized inference, the weight matrix 𝐖𝐬subscript𝐖𝐬\\mathbf{W_{s}} is quantized into 8 bits as in equation 12, and the matrix multiplication is done using 8 bit arithmetic. The calculations within the softmax𝑠𝑜𝑓𝑡𝑚𝑎𝑥softmax function and the attention model are not quantized during inference. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
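A small sketch of the clipped softmax of equation 13. The max-subtraction is only the usual numerical stabilization trick and is not part of the passage above:

```python
import numpy as np

def clipped_softmax(y_t, W_s, gamma=25.0):
    """Equation 13: clip the logits to (-gamma, gamma) before the softmax."""
    v = W_s @ y_t                      # raw logits, one per target-vocabulary symbol
    v = np.clip(v, -gamma, gamma)
    v = v - v.max()                    # standard numerical stabilization (my addition)
    e = np.exp(v)
    return e / e.sum()
```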
{
"id": "1609.08144_all_50",
"text": " It is worth emphasizing that during training of the model we use full-precision floating point numbers. The only constraints we add to the model during training are the clipping of the RNN accumulator values into (−δ,δ)𝛿𝛿(-\\delta,\\delta) and softmax logits into (−γ,γ)𝛾𝛾(-\\gamma,\\gamma). γ𝛾\\gamma is fixed to be at 25.025.025.0, while the value for δ𝛿\\delta is gradually annealed from a generous bound of δ=8.0𝛿8.0\\delta=8.0 at the beginning of training, to a rather stringent bound of δ=1.0𝛿1.0\\delta=1.0 towards the end of training. At inference time, δ𝛿\\delta is fixed at 1.01.01.0. Those additional constraints do not degrade model convergence nor the decoding quality of the model when it has converged. In Figure 4, we compare the loss vs. steps for an unconstrained model (the blue curve) and a constrained model (the red curve) on WMT’14 English-to-French. We can see that the loss for the constrained model is slightly better, possibly due to regularization roles those constraints play. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_51",
"text": " Our solution strikes a good balance between efficiency and accuracy. Since the computationally expensive operations (the matrix multiplications) are done using 8-bit integer operations, our quantized inference is quite efficient. Also, since error-sensitive accumulator values are stored using 16-bit integers, our solution is very accurate and is robust to quantization errors. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_52",
"text": " In Table 1 we compare the inference speed and quality when decoding the WMT’14 English-to-French development set (a concatenation of newstest2012 and newstest2013 test sets for a total of 6003 sentences) on CPU, GPU and Google’s Tensor Processing Unit (TPU) respectively.111https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html The model used here for comparison is trained with quantization constraints on the ML objective only (i.e., without reinforcement learning based model refinement). When the model is decoded on CPU and GPU, it is not quantized and all operations are done using full-precision floats. When it is decoded on TPU, certain operations, such as embedding lookup and attention module, remain on the CPU, and all other quantized operations are off-loaded to the TPU. In all cases, decoding is done on a single machine with two Intel Haswell CPUs, which consists in total of 88 CPU cores (hyperthreads). The machine is equipped with an NVIDIA GPU (Tesla k80) for the experiment with GPU or a single Google TPU for the experiment with TPU. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_53",
"text": " Table 1 shows that decoding using reduced precision arithmetics on the TPU suffers a very minimal loss of 0.0072 on log perplexity, and no loss on BLEU at all. This result matches previous work reporting that quantizing convolutional neural network models can retain most of the model quality. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_54",
"text": " Table 1 also shows that decoding our model on CPU is actually 2.3 times faster than on GPU. Firstly, our dual-CPUs host machine offers a theoretical peak FLOP performance which is more than two thirds that of the GPU. Secondly, the beam search algorithm forces the decoder to incur a non-trivial amount of data transfer between the host and the GPU at every decoding step. Hence, our current decoder implementation is not fully utilizing the computation capacities that a GPU can theoretically offer during inference. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_55",
"text": " Finally, Table 1 shows that decoding on TPUs is 3.4 times faster than decoding on CPUs, demonstrating that quantized arithmetics is much faster on TPUs than both CPUs or GPUs. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_56",
"text": " Unless otherwise noted, we always train and evaluate quantized models in our experiments. Because there is little difference from a quality perspective between a model decoded on CPUs and one decoded on TPUs, we use CPUs to decode for model evaluation during training and experimentation and use TPUs to serve production traffic. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_57",
"text": " We use beam search during decoding to find the sequence Y𝑌Y that maximizes a score function s(Y,X)𝑠𝑌𝑋s(Y,X) given a trained model. We introduce two important refinements to the pure max-probability based beam search algorithm: a coverage penalty and length normalization. With length normalization, we aim to account for the fact that we have to compare hypotheses of different length. Without some form of length-normalization regular beam search will favor shorter results over longer ones on average since a negative log-probability is added at each step, yielding lower (more negative) scores for longer sentences. We first tried to simply divide by the length to normalize. We then improved on that original heuristic by dividing by lengthα𝑙𝑒𝑛𝑔𝑡superscriptℎ𝛼length^{\\alpha}, with 0<α<10𝛼10<\\alpha<1 where α𝛼\\alpha is optimized on a development set (α∈(0.6−0.7)𝛼delimited-()0.60.7\\alpha\\in(0.6-0.7) was usually found to be best). Eventually we designed the empirically-better scoring function below, which also includes a coverage penalty to favor translations that fully cover the source sentence according to the attention module. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_58",
"text": " More concretely, the scoring function s(Y,X)𝑠𝑌𝑋s(Y,X) that we employ to rank candidate translations is defined as follows: ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_59",
"text": " s(Y,X)=log(P(Y|X))/lp(Y)+cp(X;Y)lp(Y)=(5+|Y|)α(5+1)αcp(X;Y)=β∗∑i=1|X|log(min(∑j=1|Y|pi,j,1.0)),𝑠𝑌𝑋𝑃conditional𝑌𝑋𝑙𝑝𝑌𝑐𝑝𝑋𝑌𝑙𝑝𝑌superscript5𝑌𝛼superscript51𝛼𝑐𝑝𝑋𝑌𝛽superscriptsubscript𝑖1𝑋superscriptsubscript𝑗1𝑌subscript𝑝𝑖𝑗1.0\\begin{split}s(Y,X)&=\\log(P(Y|X))/lp(Y)+cp(X;Y)\\\\ lp(Y)&=\\frac{(5+|Y|)^{\\alpha}}{(5+1)^{\\alpha}}\\\\ cp(X;Y)&=\\beta*\\sum_{i=1}^{|X|}{\\log(\\min(\\sum_{j=1}^{|Y|}{p_{i,j}},1.0))},\\end{split} (14) where pi,jsubscript𝑝𝑖𝑗p_{i,j} is the attention probability of the j𝑗j-th target word yjsubscript𝑦𝑗y_{j} on the i𝑖i-th source word xisubscript𝑥𝑖x_{i}. By construction (equation 4), ∑i=0|X|pi,jsuperscriptsubscript𝑖0𝑋subscript𝑝𝑖𝑗\\sum_{i=0}^{|X|}{p_{i,j}} is equal to 1. Parameters α𝛼\\alpha and β𝛽\\beta control the strength of the length normalization and the coverage penalty. When α=0𝛼0\\alpha=0 and β=0𝛽0\\beta=0, our decoder falls back to pure beam search by probability. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
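Equation 14 can be sketched directly. The function below assumes the attention matrix is indexed as p[i, j] (source word i, target word j), matching the notation above; the small epsilon is my addition, since the formula as written would produce log(0) for a source word that receives no attention mass:

```python
import numpy as np

def hypothesis_score(log_prob, attention, alpha=0.2, beta=0.2, eps=1e-12):
    # log_prob:  log P(Y | X) of a finished hypothesis.
    # attention: p[i, j], attention probability of target word j on source word i.
    target_len = attention.shape[1]
    lp = ((5.0 + target_len) ** alpha) / ((5.0 + 1.0) ** alpha)
    coverage = np.minimum(attention.sum(axis=1), 1.0)   # attention mass per source word, capped at 1
    cp = beta * np.sum(np.log(coverage + eps))          # coverage penalty
    return log_prob / lp + cp
```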
{
"id": "1609.08144_all_60",
"text": " During beam search, we typically keep 8-12 hypotheses but we find that using fewer (4 or 2) has only slight negative effects on BLEU scores. Besides pruning the number of considered hypotheses, two other forms of pruning are used. Firstly, at each step, we only consider tokens that have local scores that are not more than beamsize𝑏𝑒𝑎𝑚𝑠𝑖𝑧𝑒beamsize below the best token for this step. Secondly, after a normalized best score has been found according to equation 14, we prune all hypotheses that are more than beamsize𝑏𝑒𝑎𝑚𝑠𝑖𝑧𝑒beamsize below the best normalized score so far. The latter type of pruning only applies to full hypotheses because it compares scores in the normalized space, which is only available when a hypothesis ends. This latter form of pruning also has the effect that very quickly no more hypotheses will be generated once a sufficiently good hypothesis has been found, so the search will end quickly. The pruning speeds up search by 30%−40%percent30percent4030\\%-40\\% when run on CPUs compared to not pruning (where we simply stop decoding after a predetermined maximum output length of twice the source length). Typically we use beamsize=3.0𝑏𝑒𝑎𝑚𝑠𝑖𝑧𝑒3.0beamsize=3.0, unless otherwise noted. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_61",
"text": " To improve throughput during decoding we can put many sentences (typically up to 35) of similar length into a batch and decode all of those in parallel to make use of available hardware optimized for parallel computations. In this case the beam search only finishes if all hypotheses for all sentences in the batch are out of beam, which is slightly less efficient theoretically, but in practice is of negligible additional computational cost. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_62",
"text": " Table 2 shows the impact of α𝛼\\alpha and β𝛽\\beta on the BLEU score when decoding the WMT’14 English-to-French development set. The model used here for experiments is trained using the ML objective only (without RL refinement). As can be seen from the results, having some length normalization and coverage penalty improves BLEU score considerably (from 30.3 to 31.4). ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_63",
"text": " We find that length normalization (α𝛼\\alpha) and coverage penalty (β𝛽\\beta) are less effective for models with RL refinement. Table 3 summarizes our results. This is understandable, as during RL refinement, the models already learn to pay attention to the full source sentence to not under-translate or over-translate, which would result in a penalty on the BLEU (or GLEU) scores. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_64",
"text": " We found that the optimal α𝛼\\alpha and β𝛽\\beta vary slightly for different models. Based on tuning results using internal Google datasets, we use α=0.2𝛼0.2\\alpha=0.2 and β=0.2𝛽0.2\\beta=0.2 in our experiments, unless noted otherwise. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_65",
"text": " In this section, we present our experimental results on two publicly available corpora used extensively as benchmarks for Neural Machine Translation systems: WMT’14 English-to-French (WMT En→→\\rightarrowFr) and English-to-German (WMT En→→\\rightarrowDe). On these two datasets, we benchmark GNMT models with word-based, character-based, and wordpiece-based vocabularies. We also present the improved accuracy of our models after fine-tuning with RL and model ensembling. Our main objective with these datasets is to show the contributions of various components in our implementation, in particular the wordpiece model, RL model refinement, and model ensembling. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_66",
"text": " In addition to testing on publicly available corpora, we also test GNMT on Google’s translation production corpora, which are two to three decimal orders of magnitudes bigger than the WMT corpora for a given language pair. We compare the accuracy of our model against human accuracy and the best Phrase-Based Machine Translation (PBMT) production system for Google Translate. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_67",
"text": " In all experiments, our models consist of 8 encoder layers and 8 decoder layers. (Since the bottom encoder layer is actually bi-directional, in total there are 9 logically distinct LSTM passes in the encoder.) The attention network is a simple feedforward network with one hidden layer with 1024 nodes. All of the models use 1024 LSTM nodes per encoder and decoder layers. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_68",
"text": " We evaluate our model on the WMT En→→\\rightarrowFr dataset, the WMT En→→\\rightarrowDe dataset, as well as many Google-internal production datasets. On WMT En→→\\rightarrowFr, the training set contains 36M sentence pairs. On WMT En→→\\rightarrowDe, the training set contains 5M sentence pairs. In both cases, we use newstest2014 as the test sets to compare against previous work (31, 37, 45). The combination of newstest2012 and newstest2013 is used as the development set. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_69",
"text": " In addition to WMT, we also evaluate our model on some Google-internal datasets representing a wider spectrum of languages with distinct linguistic properties: English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish and English ↔↔\\leftrightarrow Chinese. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_70",
"text": " We evaluate our models using the standard BLEU score metric. To be comparable to previous work (41, 31, 45), we report tokenized BLEU score as computed by the multi-bleu.pl script, downloaded from the public implementation of Moses (on Github), which is also used in . ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_71",
"text": " As is well-known, BLEU score does not fully capture the quality of a translation. For that reason we also carry out side-by-side (SxS) evaluations where we have human raters evaluate and compare the quality of two translations presented side by side for a given source sentence. Side-by-side scores range from 0 to 6, with a score of 0 meaning “completely nonsense translation”, and a score of 6 meaning “perfect translation: the meaning of the translation is completely consistent with the source, and the grammar is correct”. A translation is given a score of 4 if “the sentence retains most of the meaning of the source sentence, but may have some grammar mistakes”, and a translation is given a score of 2 if “the sentence preserves some of the meaning of the source sentence but misses significant parts”. These scores are generated by human raters who are fluent in both languages and hence often capture translation quality better than BLEU scores. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_72",
"text": " The models are trained by a system we implemented using TensorFlow. The training setup follows the classic data parallelism paradigm. There are 12 replicas running concurrently on separate machines. Every replica updates the shared parameters asynchronously. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_73",
"text": " We initialize all trainable parameters uniformly between (-0.04, 0.04). As is common wisdom in training RNN models, we apply gradient clipping (similar to ): all gradients are uniformly scaled down such that the norm of the modified gradients is no larger than a fixed constant, which is 5.05.05.0 in our case. If the norm of the original gradients is already smaller than or equal to the given threshold, then gradients are not changed. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
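The gradient clipping rule described above (uniform rescaling so the global norm is at most 5.0) corresponds to this minimal sketch:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    # grads: list of numpy arrays; rescale all of them by the same factor if needed.
    global_norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if global_norm <= max_norm:
        return grads                       # unchanged when already within the threshold
    scale = max_norm / global_norm
    return [g * scale for g in grads]
```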
{
"id": "1609.08144_all_74",
"text": " For the first stage of maximum likelihood training (that is, to optimize for objective function 7), we use a combination of Adam and simple SGD learning algorithms provided by the TensorFlow runtime system. We run Adam for the first 60k steps, after which we switch to simple SGD. Each step in training is a mini-batch of 128 examples. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_75",
"text": " We find that Adam accelerates training at the beginning, but Adam alone converges to a worse point than a combination of Adam first, followed by SGD (Figure 5). For the Adam part, we use a learning rate of 0.00020.00020.0002, and for the SGD part, we use a learning rate of 0.50.50.5. We find that it is important to also anneal the learning rate after a certain number of total steps. For the WMT En→→\\rightarrowFr dataset, we begin to anneal the learning rate after 1.2M steps, after which we halve the learning rate every 200k steps for an additional 800k steps. On WMT En→→\\rightarrowFr, it takes around 6 days to train a basic model using 96 NVIDIA K80 GPUs. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_76",
"text": " Once a model is fully converged using the ML objective, we switch to RL based model refinement, i.e., we further optimize the objective function as in equation 9. We refine a model until the BLEU score does not change much on the development set. For this model refinement phase, we simply run the SGD optimization algorithm. The number of steps needed to refine a model varies from dataset to dataset. For WMT En→→\\rightarrowFr, it takes around 3 days to complete 400k steps. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_77",
"text": " To prevent overfitting, we apply dropout during training with a scheme similar to . For the WMT En→→\\rightarrowFr and En→→\\rightarrowDe datasets, we set the dropout probability to be 0.20.20.2 and 0.30.30.3 respectively. Due to various technical reasons, dropout is only applied during the ML training phase, not during the RL refinement phase. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_78",
"text": " The exact hyper-parameters vary from dataset to dataset and from model to model. For the WMT En→→\\rightarrowDe dataset, since it is significantly smaller than the WMT En→→\\rightarrowFr dataset, we use a higher dropout probability, and also train smaller models for fewer steps overall. On the production data sets, we typically do not use dropout, and we train the models for more steps. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_79",
"text": " The models in our experiments are word-based, character-based, mixed word-character-based or several wordpiece models with varying vocabulary sizes. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_80",
"text": " For the word model, we selected the most frequent 212K source words as the source vocabulary and the most popular 80k target words as the target vocabulary. Words not in the source vocabulary or the target vocabulary (unknown words) are converted into special <first_char>_UNK_<last_char> symbols. Note, in this case, there is more than one UNK (e.g., our production word models have roughly 5000 different UNKs in this case). We then use the attention mechanism to copy a corresponding word from the source to replace these unknown words during decoding . ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_81",
"text": " The mixed word-character model is similar to the word model, except the out-of-vocabulary (OOV) words are converted into sequences of characters with special delimiters around them as described in section 4.2 in more detail. In our experiments, the vocabulary size for the mixed word-character model is 32K. For the pure character model, we simply split all words into constituent characters, resulting typically in a few hundred basic characters (including special symbols appearing in the data). For the wordpiece models, we train 3 different models with vocabulary sizes of 8K, 16K, and 32K. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_82",
"text": " Table 4 summarizes our results on the WMT En→→\\rightarrowFr dataset. In this table, we also compare against other strong baselines without model ensembling. As can be seen from the table, “WPM-32K”, a wordpiece model with a shared source and target vocabulary of 32K wordpieces, performs well on this dataset and achieves the best quality as well as the fastest inference speed. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_83",
"text": " The pure character model (char input, char output) works surprisingly well on this task, not much worse than the best wordpiece models in BLEU score. However, these models are rather slow to train and slow to use as the sequences are much longer. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_84",
"text": " Our best model, WPM-32K, achieves a BLEU score of 38.95. Note that this BLEU score represents the averaged score of 8 models we trained. The maximum BLEU score of the 8 models is higher at 39.37. We point out that our models are completely self-contained, as opposed to previous models reported in , which depend on some external alignment models to achieve their best results. Also note that all our test set numbers were achieved by picking an optimal model on the development set which was then used to decode the test set. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_85",
"text": " Note that the timing numbers for this section are obtained on CPUs, not TPUs. We use here the same CPU machine as described above, and run the decoder with a batchsize of 16 sentences in parallel and a maximum of 4 concurrent hypotheses at any time per sentence. The time per sentence is the total decoding time divided by the number of respective sentences in the test set. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_86",
"text": " Similarly, the results of WMT En→→\\rightarrowDe are presented in Table 5. Again, we find that wordpiece models achieves the best BLEU scores. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_87",
"text": " WMT En→→\\rightarrowDe is considered a more difficult task than WMT En→→\\rightarrowFr as it has much less training data, and German, as a more morphologically rich language, needs a huge vocabulary for word models. Thus it is more advantageous to use wordpiece or mixed word/character models, which provide a gain of more than 2 BLEU points on top of the word model and about 4 BLEU points on top of previously reported results in (6, 45). Our best model, WPM-32K, achieves a BLEU score of 24.61, which is averaged over 8 runs. Consistently, on the production corpora, wordpiece models tend to be better than other models both in terms of speed and accuracy. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_88",
"text": " The models trained in the previous section are optimized for log-likelihood of the next step prediction which may not correlate well with translation quality, as discussed in section 5. We use RL training to fine-tune sentence BLEU scores after normal maximum-likelihood training. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_89",
"text": " The results of RL fine-tuning on the best En→→\\rightarrowFr and En→→\\rightarrowDe models are presented in Table 6, which show that fine-tuning the models with RL can improve BLEU scores. On WMT En→→\\rightarrowFr, model refinement improves BLEU score by close to 1 point. On En→→\\rightarrowDe, RL-refinement slightly hurts the test performance even though we observe about 0.4 BLEU points improvement on the development set. The results presented in Table 6 are the average of 8 independent models. We also note that there is an overlap between the wins from the RL refinement and the decoder fine-tuning (i.e., the introduction of length normalization and coverage penalty). On a less fine-tuned decoder (e.g., if the decoder does beam search by log-probability only), the win from RL would have been bigger (as is evident from comparing results in Table 2 and Table 3). ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_90",
"text": " We ensemble 8 RL-refined models to obtain a state-of-the-art result of 41.16 BLEU points on the WMT En→→\\rightarrowFr dataset. Our results are reported in Table 7. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_91",
"text": " We ensemble 8 RL-refined models to obtain a state-of-the-art result of 26.30 BLEU points on the WMT En→→\\rightarrowDe dataset. Our results are reported in Table 8. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_92",
"text": " Finally, to better understand the quality of our models and the effect of RL refinement, we carried out a four-way side-by-side human evaluation to compare our NMT translations against the reference translations and the best phrase-based statistical machine translations. During the side-by-side comparison, humans are asked to rate four translations given a source sentence. The four translations are: 1) the best phrase-based translations as downloaded from http://matrix.statmt.org/systems/show/2065, 2) an ensemble of 8 ML-trained models, 3) an ensemble of 8 ML-trained and then RL-refined models, and 4) reference human translations as taken directly from newstest2014, Our results are presented in Table 9. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_93",
"text": " The results show that even though RL refinement can achieve better BLEU scores, it barely improves the human impression of the translation quality. This could be due to a combination of factors including: 1) the relatively small sample size for the experiment (only 500 examples for side-by-side), 2) the improvement in BLEU score by RL is relatively small after model ensembling (0.81), which may be at a scale that human side-by-side evaluations are insensitive to, and 3) the possible mismatch between BLEU as a metric and real translation quality as perceived by human raters. Table 11 contains some example translations from PBMT, \"NMT before RL\" and \"Human\", along with the side-by-side scores that human raters assigned to each translation (some of which we disagree with, see the table caption). ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_94",
"text": " We have carried out extensive experiments on many Google-internal production data sets. As the experiments above cast doubt on whether RL improves the real translation quality or simply the BLEU metric, RL-based model refinement is not used during these experiments. Given the larger volume of training data available in the Google corpora, dropout is also not needed in these experiments. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_95",
"text": " In this section we describe our experiments with human perception of the translation quality. We asked human raters to rate translations in a three-way side-by-side comparison. The three sides are from: 1) translations from the production phrase-based statistical translation system used by Google, 2) translations from our GNMT system, and 3) translations by humans fluent in both languages. Reported here in Table 10 are averaged rated scores for English ↔↔\\leftrightarrow French, English ↔↔\\leftrightarrow Spanish and English ↔↔\\leftrightarrow Chinese. All the GNMT models are wordpiece models, without model ensembling, and use a shared source and target vocabulary with 32K wordpieces. On each pair of languages, the evaluation data consist of 500 randomly sampled sentences from Wikipedia and news websites, and the corresponding human translations to the target language. The results show that our model reduces translation errors by more than 60% compared to the PBMT model on these major pairs of languages. A typical distribution of side-by-side scores is shown in Figure 6. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_96",
"text": " As expected, on this metric the GNMT system improves also compared to the PBMT system. In some cases human and GNMT translations are nearly indistinguishable on the relatively simplistic and isolated sentences sampled from Wikipedia and news articles for this experiment. Note that we have observed that human raters, even though fluent in both languages, do not necessarily fully understand each randomly sampled sentence sufficiently and hence cannot necessarily generate the best possible translation or rate a given translation accurately. Also note that, although the scale for the scores goes from 0 (complete nonsense) to 6 (perfect translation) the human translations get an imperfect score of only around 5 in Table 10, which shows possible ambiguities in the translations and also possibly non-calibrated raters and translators with a varying level of proficiency. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_97",
"text": " Testing our GNMT system on particularly difficult translation cases and longer inputs than just single sentences is the subject of future work. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_98",
"text": " In this paper, we describe in detail the implementation of Google’s Neural Machine Translation (GNMT) system, including all the techniques that are critical to its accuracy, speed, and robustness. On the public WMT’14 translation benchmark, our system’s translation quality approaches or surpasses all currently published results. More importantly, we also show that our approach carries over to much larger production data sets, which have several orders of magnitude more data, to deliver high quality translations. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_99",
"text": " Our key findings are: 1) that wordpiece modeling effectively handles open vocabularies and the challenge of morphologically rich languages for translation quality and inference speed, 2) that a combination of model and data parallelism can be used to efficiently train state-of-the-art sequence-to-sequence NMT models in roughly a week, 3) that model quantization drastically accelerates translation inference, allowing the use of these large models in a deployed production environment, and 4) that many additional details like length-normalization, coverage penalties, and similar are essential to making NMT systems work well on real data. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
},
{
"id": "1609.08144_all_100",
"text": " Using human-rated side-by-side comparison as a metric, we show that our GNMT system approaches the accuracy achieved by average bilingual human translators on some of our test sets. In particular, compared to the previous phrase-based production system, this GNMT system delivers roughly a 60% reduction in translation errors on several popular language pairs. ",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation"
}
] |
What are the criteria for training multiple substitute DNNs?
|
The criterion for training the substitute DNNs is to achieve good accuracy, but the main goal is to create a substitute capable of mimicking the oracle's decision boundaries [24].
|
[
24
] |
[
{
"id": "1602.02697_all_0",
"text": " A classifier is a ML model that learns a mapping between inputs and a set of classes. For instance, a malware detector is a classifier taking executables as inputs and assigning them to the benign or malware class. Efforts in the security (5, 2, 9, 18) and machine learning (14, 4) communities exposed the vulnerability of classifiers to integrity attacks. Such attacks are often instantiated by adversarial examples: legitimate inputs altered by adding small, often imperceptible, perturbations to force a learned classifier to misclassify the resulting adversarial inputs, while remaining correctly classified by a human observer. To illustrate, consider the following images, potentially consumed by an autonomous vehicle : ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_1",
"text": " To humans, these images appear to be the same: our biological classifiers (vision) identify each image as a stop sign. The image on the left is indeed an ordinary image of a stop sign. We produced the image on the right by adding a precise perturbation that forces a particular DNN to classify it as a yield sign, as described in Section 5.2. Here, an adversary could potentially use the altered image to cause a car without failsafes to behave dangerously. This attack would require modifying the image used internally by the car through transformations of the physical traffic sign. Related works showed the feasibility of such physical transformations for a state-of-the-art vision classifier and face recognition model . It is thus conceivable that physical adversarial traffic signs could be generated by maliciously modifying the sign itself, e.g., with stickers or paint. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_2",
"text": " In this paper, we introduce the first demonstration that black-box attacks against DNN classifiers are practical for real-world adversaries with no knowledge about the model. We assume the adversary (a) has no information about the structure or parameters of the DNN, and (b) does not have access to any large training dataset. The adversary’s only capability is to observe labels assigned by the DNN for chosen inputs, in a manner analog to a cryptographic oracle. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_3",
"text": " Our novel attack strategy is to train a local substitute DNN with a synthetic dataset: the inputs are synthetic and generated by the adversary, while the outputs are labels assigned by the target DNN and observed by the adversary. Adversarial examples are crafted using the substitute parameters, which are known to us. They are not only misclassified by the substitute but also by the target DNN, because both models have similar decision boundaries. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_4",
"text": " This is a considerable departure from previous work, which evaluated perturbations required to craft adversarial examples using either: (a) detailed knowledge of the DNN architecture and parameters (2, 4, 9, 14), or (b) an independently collected training set to fit an auxiliary model (2, 4, 14). This limited their applicability to strong adversaries capable of gaining insider knowledge of the targeted ML model, or collecting large labeled training sets. We release assumption (a) by learning a substitute: it gives us the benefit of having full access to the model and apply previous adversarial example crafting methods. We release assumption (b) by replacing the independently collected training set with a synthetic dataset constructed by the adversary with synthetic inputs and labeled by observing the target DNN’s output. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_5",
"text": " Our threat model thus corresponds to the real-world scenario of users interacting with classifiers hosted remotely by a third-party keeping the model internals secret. In fact, we instantiate our attack against classifiers automatically trained by MetaMind, Amazon, and Google. We are able to access them only after training is completed. Thus, we provide the first correctly blinded experiments concerning adversarial examples as a security risk. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_6",
"text": " We show that our black-box attack is applicable to many remote systems taking decisions based on ML, because it combines three key properties: (a) the capabilities required are limited to observing output class labels, (b) the number of labels queried is limited, and (c) the approach applies and scales to different ML classifier types (see Section 7), in addition to state-of-the-art DNNs. In contrast, previous work failed to simultaneously provide all of these three key properties (4, 14, 12, 15, 18). Our contributions are: • We introduce in Section 4 an attack against black-box DNN classifiers. It crafts adversarial examples without knowledge of the classifier training data or model. To do so, a synthetic dataset is constructed by the adversary to train a substitute for the targeted DNN classifier. • In Section 5, we instantiate the attack against a remote DNN classifier hosted by MetaMind. The DNN misclassifies 84.24%percent84.2484.24\\% of the adversarial inputs crafted. • The attack is calibrated in Section 6 to (a) reduce the number of queries made to the target model and (b) maximize misclassification of adversarial examples. • We generalize the attack to other ML classifiers like logistic regression. In Section 7, we target models hosted by Amazon and Google. They misclassify adversarial examples at rates of 96.19%percent96.1996.19\\% and 88.94%percent88.9488.94\\%. • Section 8 shows that our attack evades defenses proposed in the literature because the substitute trained by the adversary is unaffected by defenses deployed on the targeted oracle model to reduce its vulnerability. • In Appendix B, we provide an intuition of why adversarial examples crafted with the substitute also mislead target models by empirically observing that substitutes have gradients correlated to the target’s. Disclosure: We disclosed our attacks to MetaMind, Amazon, and Google. Note that no damage was caused as we demonstrated control of models created for our own account. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_7",
"text": " We provide preliminaries of deep learning to enable understanding of our threat model and attack. We refer readers interested to the more detailed presentation in . ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_8",
"text": " A deep neural network (DNN), as illustrated in Figure 1, is a ML technique that uses a hierarchical composition of n𝑛n parametric functions to model an input x→→𝑥\\vec{x}. Each function fisubscript𝑓𝑖f_{i} for i∈1..ni\\in 1..n is modeled using a layer of neurons, which are elementary computing units applying an activation function to the previous layer’s weighted representation of the input to generate a new representation. Each layer is parameterized by a weight vector θisubscript𝜃𝑖\\theta_{i} (we omit the vector notation) impacting each neuron’s activation. Such weights hold the knowledge of a DNN model F𝐹F and are evaluated during its training phase, as detailed below. Thus, a DNN defines and computes: F(x→)=fn(θn,fn−1(θn−1, … f2(θ2,f1(θ1,x→))))𝐹→𝑥subscript𝑓𝑛subscript𝜃𝑛subscript𝑓𝑛1subscript𝜃𝑛1 … subscript𝑓2subscript𝜃2subscript𝑓1subscript𝜃1→𝑥F(\\vec{x})=f_{n}\\left(\\theta_{n},f_{n-1}\\left(\\theta_{n-1},\\text{ ... }f_{2}\\left(\\theta_{2},f_{1}\\left(\\theta_{1},\\vec{x}\\right)\\right)\\right)\\right) (1) ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
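The entry above defines a DNN as a nested composition of per-layer parametric functions (Equation 1). The following is a minimal NumPy sketch of that view, not taken from the paper: the two-hidden-layer toy network, its random weights, and the ReLU/softmax choices are illustrative assumptions, not the authors' architectures.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def layer(theta, x, activation):
    """One parametric function f_i(theta_i, .): an affine map followed by an activation."""
    W, b = theta
    return activation(W @ x + b)

def F(thetas, x):
    """Equation (1): nest the per-layer functions, innermost layer applied first."""
    h = layer(thetas[0], x, relu)          # f_1(theta_1, x)
    for theta in thetas[1:-1]:
        h = layer(theta, h, relu)          # f_2 ... f_{n-1}
    return layer(thetas[-1], h, softmax)   # f_n outputs a probability vector

# Toy network on a 784-dimensional input (e.g., a flattened 28x28 image).
rng = np.random.default_rng(0)
thetas = [(rng.standard_normal((64, 784)) * 0.01, np.zeros(64)),
          (rng.standard_normal((64, 64)) * 0.01, np.zeros(64)),
          (rng.standard_normal((10, 64)) * 0.01, np.zeros(10))]
x = rng.random(784)
print(F(thetas, x).shape, F(thetas, x).sum())  # (10,) and a sum close to 1
```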
{
"id": "1602.02697_all_9",
"text": " The training phase of a DNN F𝐹F learns values for its parameters θF={θ1,…,θn}subscript𝜃𝐹subscript𝜃1…subscript𝜃𝑛\\theta_{F}=\\{\\theta_{1},...,\\theta_{n}\\}. We focus on classification tasks, where the goal is to assign inputs a label among a predefined set of labels. The DNN is given a large set of known input-output pairs (x→,y→)→𝑥→𝑦(\\vec{x},\\vec{y}) and it adjusts weight parameters to reduce a cost quantifying the prediction error between the prediction F(x→)𝐹→𝑥F(\\vec{x}) and the correct output y→→𝑦\\vec{y}. The adjustment is typically performed using techniques derived from the backpropagation algorithm. Briefly, such techniques successively propagate error gradients with respect to network parameters from the network’s output layer to its input layer. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_10",
"text": " During the test phase, the DNN is deployed with a fixed set of parameters θFsubscript𝜃𝐹\\theta_{F} to make predictions on inputs unseen during training. We consider classifiers: the DNN produces a probability vector F(x→)𝐹→𝑥F(\\vec{x}) encoding its belief of input x→→𝑥\\vec{x} being in each of the classes (cf. Figure 1). The weight parameters θFsubscript𝜃𝐹\\theta_{F} hold the model knowledge acquired by training. Ideally, the model should generalize and make accurate predictions for inputs outside of the domain explored during training. However, attacks manipulating DNN inputs with adversarial examples showed this is not the case in practice (4, 9, 14). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_11",
"text": " A taxonomy of adversaries against DNN classifiers is found in . In our work, the adversary seeks to force a classifier to misclassify inputs in any class different from their correct class. To achieve this, we consider a weak adversary with access to the DNN output only. The adversary has no knowledge of the architectural choices made to design the DNN, which include the number, type, and size of layers, nor of the training data used to learn the DNN’s parameters. Such attacks are referred to as black box, where adversaries need not know internal details of a system to compromise it. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_12",
"text": " Targeted Model: We consider attackers targeting a multi-class DNN classifier. It outputs probability vectors, where each vector component encodes the DNN’s belief of the input being part of one of the predefined classes. We consider the ongoing example of a DNN classifying images, as shown in Figure 1. Such DNNs can be used to classify handwritten digits into classes associated with digits from 0 to 9, images of objects in a fixed number of categories, or images of traffic signs into classes identifying its type (STOP, yield, …). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_13",
"text": " Adversarial Capabilities: The oracle O𝑂O is the targeted DNN. Its name refers to the only capability of the adversary: accessing the label O~(x→)~𝑂→𝑥\\tilde{O}(\\vec{x}) for any input x→→𝑥\\vec{x} by querying oracle O𝑂O. The output label O~(x→)~𝑂→𝑥\\tilde{O}(\\vec{x}) is the index of the class assigned the largest probability by the DNN: O~(x→)=argmaxj∈0..N−1Oj(x→)\\tilde{O}(\\vec{x})=\\arg\\max_{j\\in 0..N-1}O_{j}(\\vec{x}) (2) where Oj(x→)subscript𝑂𝑗→𝑥O_{j}(\\vec{x}) is the j𝑗j-th component of the probability vector O(x→)𝑂→𝑥O(\\vec{x}) output by DNN O𝑂O. Distinguishing between labels and probabilities makes adversaries realistic (they more often have access to labels than probabilities) but weaker: labels encode less information about the model’s learned behavior. Accessing labels O~~𝑂\\tilde{O} produced by the DNN O𝑂O is the only capability assumed in our threat model. We do not have access to the oracle internals or training data. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
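A small illustration, ours rather than the paper's, of how the label-only oracle of Equation 2 can be wrapped around any classifier that returns a probability vector. The probe_model callable is an assumed stand-in for the remote prediction API.

```python
import numpy as np

def oracle_label(probability_vector):
    """Equation (2): return the index of the class with the largest probability."""
    return int(np.argmax(probability_vector))

def make_label_only_oracle(probe_model):
    """Wrap a classifier returning O(x) (probabilities) into the label-only oracle
    assumed by the threat model: the adversary sees argmax_j O_j(x) and nothing else."""
    def O_tilde(x):
        return oracle_label(probe_model(x))
    return O_tilde

# Example with a dummy 'remote' model returning a fixed probability vector.
dummy_model = lambda x: np.array([0.1, 0.7, 0.2])
O_tilde = make_label_only_oracle(dummy_model)
print(O_tilde(np.zeros(784)))  # -> 1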
{
"id": "1602.02697_all_14",
"text": " Adversarial Goal: We want to produce a minimally altered version of any input x→→𝑥\\vec{x}, named adversarial sample, and denoted x∗→→superscript𝑥\\vec{x^{*}}, misclassified by oracle O𝑂O: O~(x∗→)≠O~(x→)~𝑂→superscript𝑥~𝑂→𝑥\\tilde{O}(\\vec{x^{*}})\\neq\\tilde{O}(\\vec{x}). This corresponds to an attack on the oracle’s output integrity. Adversarial samples solve the following optimization problem: x∗→=x→+argmin{z→:O~(x→+z→)≠O~(x→)}=x→+δx→→superscript𝑥→𝑥:→𝑧~𝑂→𝑥→𝑧~𝑂→𝑥→𝑥subscript𝛿→𝑥\\vec{x^{*}}=\\vec{x}+\\arg\\min\\{\\vec{z}:\\tilde{O}(\\vec{x}+\\vec{z})\\neq\\tilde{O}(\\vec{x})\\}=\\vec{x}+\\delta_{\\vec{x}} (3) Examples of adversarial samples can be found in Figure 2. The first row contains legitimate samples and the second corresponding adversarial samples that are misclassified. This misclassification must be achieved by adding a minimal perturbation δx→𝛿→𝑥\\delta\\vec{x} so as to evade human detection. Even with total knowledge of the architecture used to train model O𝑂O and its parameters resulting from training, finding such a minimal perturbation is not trivial, as properties of DNNs preclude the optimization problem from being linear or convex. This is exacerbated by our threat model: removing knowledge of model O𝑂O’s architecture and training data makes it harder to find a perturbation such that O~(x→+δx→)≠O~(x→)~𝑂→𝑥𝛿→𝑥~𝑂→𝑥\\tilde{O}(\\vec{x}+\\delta\\vec{x})\\neq\\tilde{O}(\\vec{x}) holds. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
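A trivial helper of our own making the success condition of Equation 3 concrete: a candidate perturbation z is only accepted if it flips the oracle's label.

```python
def is_adversarial(x, z, oracle_label):
    """Equation (3)'s success condition: the oracle's label changes, O~(x + z) != O~(x)."""
    return oracle_label(x + z) != oracle_label(x)
```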
{
"id": "1602.02697_all_15",
"text": " In Appendix C, we give a presentation of attacks conducted in related threat models—with stronger assumptions. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_16",
"text": " We introduce our black-box attack. As stated in Section 3, the adversary wants to craft inputs misclassified by the ML model using the sole capability of accessing the label O~(x→)~𝑂→𝑥\\tilde{O}(\\vec{x}) assigned by classifier for any chosen input x→→𝑥\\vec{x}. The strategy is to learn a substitute for the target model using a synthetic dataset generated by the adversary and labeled by observing the oracle output. Then, adversarial examples are crafted using this substitute. We expect the target DNN to misclassify them due to transferability between architectures (14, 4) ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_17",
"text": " To understand the difficulty of conducting the attack under this threat model, recall Equation 3 formalizing the adversarial goal of finding a minimal perturbation that forces the targeted oracle to misclassify. A closed form solution cannot be found when the target is a non-convex ML model: e.g., a DNN. The basis for most adversarial attacks (4, 9, 14) is to approximate its solution using gradient-based optimization on functions defined by a DNN. Because evaluating these functions and their gradients requires knowledge of the DNN architecture and parameters, such an attack is not possible under our black-box scenario. It was shown that adversaries with access to an independently collected labeled training set from the same population distribution than the oracle could train a model with a different architecture and use it as a substitute : adversarial examples designed to manipulate the substitute are often misclassified by the targeted model. However, many modern machine learning systems require large and expensive training sets for training. For instance, we consider models trained with several tens of thousands of labeled examples. This makes attacks based on this paradigm unfeasible for adversaries without large labeled datasets. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_18",
"text": " In this paper, we show black-box attacks can be accomplished at a much lower cost, without labeling an independent training set. In our approach, to enable the adversary to train a substitute model without a real labeled dataset, we use the target DNN as an oracle to construct a synthetic dataset. The inputs are synthetically generated and the outputs are labels observed from the oracle. Using this synthetic dataset, the attacker builds an approximation F𝐹F of the model O𝑂O learned by the oracle. This substitute network F𝐹F is then used to craft adversarial samples misclassified by F𝐹F Indeed, with its full knowledge of the substitute DNN F𝐹F parameters, the adversary can use one of the previously described attacks (4, 9) to craft adversarial samples misclassified by F𝐹F. As long as the transferability property holds between F𝐹F and O𝑂O, adversarial samples crafted for F𝐹F will also be misclassified by O𝑂O. This leads us to propose the following strategy: 1. Substitute Model Training: the attacker queries the oracle with synthetic inputs selected by a Jacobian-based heuristic to build a model F𝐹F approximating the oracle model O𝑂O’s decision boundaries. 2. Adversarial Sample Crafting: the attacker uses substitute network F𝐹F to craft adversarial samples, which are then misclassified by oracle O𝑂O due to the transferability of adversarial samples. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_19",
"text": " Training a substitute model F𝐹F approximating oracle O𝑂O is challenging because we must: (1) select an architecture for our substitute without knowledge of the targeted oracle’s architecture, and (2) limit the number of queries made to the oracle in order to ensure that the approach is tractable. Our approach, illustrated in Figure 3, overcomes these challenges mainly by introducing a synthetic data generation technique, the Jacobian-based Dataset Augmentation. We emphasize that this technique is not designed to maximize the substitute DNN’s accuracy but rather ensure that it approximates the oracle’s decision boundaries with few label queries. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_20",
"text": " Substitute Architecture: This factor is not the most limiting as the adversary must at least have some partial knowledge of the oracle input (e.g., images, text) and expected output (e.g., classification). The adversary can thus use an architecture adapted to the input-output relation. For instance, a convolutional neural network is suitable for image classification. Furthermore, we show in Section 6 that the type, number, and size of layers used in the substitute DNN have relatively little impact on the success of the attack. Adversaries can also consider performing an architecture exploration and train several substitute models before selecting the one yielding the highest attack success. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_21",
"text": " Generating a Synthetic Dataset: To better understand the need for synthetic data, note that we could potentially make an infinite number of queries to obtain the oracle’s output O(x→)𝑂→𝑥O(\\vec{x}) for any input x→→𝑥\\vec{x} belonging to the input domain. This would provide us with a copy of the oracle. However, this is simply not tractable: consider a DNN with M𝑀M input components, each taking discrete values among a set of K𝐾K possible values, the number of possible inputs to be queried is KMsuperscript𝐾𝑀K^{M}. The intractability is even more apparent for inputs in the continuous domain. Furthermore, making a large number of queries renders the adversarial behavior easy to detect. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
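A back-of-the-envelope check, ours, of the K^M count for an MNIST-sized input, assuming K = 256 discrete pixel intensities and M = 28 x 28 components.

```python
# Number of distinct queries needed to exhaustively copy the oracle on an
# MNIST-sized input space: K^M with K = 256 intensity levels and M = 28*28 pixels.
K, M = 256, 28 * 28
num_inputs = K ** M
print(len(str(num_inputs)))  # about 1889 decimal digits, i.e. roughly 10^1888 possible queries
```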
{
"id": "1602.02697_all_22",
"text": " A natural alternative is to resort to randomly selecting additional points to be queried. For instance, we tried using Gaussian noise to select points on which to train substitutes. However, the resulting models were not able to learn by querying the oracle. This is likely due to noise not being representative of the input distribution. To address this issue, we thus introduce a heuristic efficiently exploring the input domain and, as shown in Sections 5 and 6, drastically limits the number of oracle queries. Furthermore, our technique also ensures that the substitute DNN is an approximation of the targeted DNN i.e. it learns similar decision boundaries. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_23",
"text": " The heuristic used to generate synthetic training inputs is based on identifying directions in which the model’s output is varying, around an initial set of training points. Such directions intuitively require more input-output pairs to capture the output variations of the target DNN O𝑂O. Therefore, to get a substitute DNN accurately approximating the oracle’s decision boundaries, the heuristic prioritizes these samples when querying the oracle for labels. These directions are identified with the substitute DNN’s Jacobian matrix JFsubscript𝐽𝐹J_{F}, which is evaluated at several input points x→→𝑥\\vec{x} (how these points are chosen is described below). Precisely, the adversary evaluates the sign of the Jacobian matrix dimension corresponding to the label assigned to input x→→𝑥\\vec{x} by the oracle: sgn(JF(x→)(O~(x→)))sgnsubscript𝐽𝐹→𝑥delimited-()~𝑂→𝑥\\operatorname{sgn}\\left(J_{F}(\\vec{x})(\\tilde{O}(\\vec{x}))\\right). To obtain a new synthetic training point, a term λ⋅sgn(JF(x→)(O~(x→)))⋅𝜆sgnsubscript𝐽𝐹→𝑥delimited-()~𝑂→𝑥\\lambda\\cdot\\operatorname{sgn}\\left(J_{F}(\\vec{x})(\\tilde{O}(\\vec{x}))\\right) is added to the original point x→→𝑥\\vec{x}. We name this technique Jacobian-based Dataset Augmentation. We base our substitute training algorithm on the idea of iteratively refining the model in directions identified using the Jacobian. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
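A minimal sketch of the augmentation step described above, with assumed helper names: jacobian(substitute, x) is taken to return the Jacobian of the substitute's output probabilities with respect to x, and oracle_label(x) the label queried from the black-box target. The clipping to [0, 1] follows the paper's later remark that synthetic inputs are clipped back into the input domain.

```python
import numpy as np

def jacobian_augment_point(x, oracle_label, jacobian, substitute, lam=0.1):
    """One Jacobian-based Dataset Augmentation step: move x by lambda in the
    direction in which the substitute's output for the oracle-assigned label varies."""
    label = oracle_label(x)                # query the black-box oracle for O~(x)
    J = jacobian(substitute, x)            # shape (num_classes, num_features)
    direction = np.sign(J[label])          # sgn(J_F(x)[O~(x)])
    x_new = x + lam * direction
    return np.clip(x_new, 0.0, 1.0)        # keep the synthetic point in the input domain
```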
{
"id": "1602.02697_all_24",
"text": " Substitute DNN Training Algorithm: We now describe the five-step training procedure outlined in Algorithm 1: • Initial Collection (1): The adversary collects a very small set S0subscript𝑆0S_{0} of inputs representative of the input domain. For instance, if the targeted oracle O𝑂O classifies handwritten digits, the adversary collects 10 images of each digit 0 through 9. We show in Section 5 that this set does not necessarily have to come from the distribution from which the targeted oracle was trained. • Architecture Selection (2): The adversary selects an architecture to be trained as the substitute F𝐹F. Again, this can be done using high-level knowledge of the classification task performed by the oracle (e.g., convolutional networks are appropriate for vision) • Substitute Training: The adversary iteratively trains more accurate substitute DNNs Fρsubscript𝐹𝜌F_{\\rho} by repeating the following for ρ∈0..ρmax\\rho\\in 0..\\rho_{max}: – Labeling (3): By querying for the labels O~(x→)~𝑂→𝑥\\tilde{O}(\\vec{x}) output by oracle O𝑂O, the adversary labels each sample x→∈Sρ→𝑥subscript𝑆𝜌\\vec{x}\\in S_{\\rho} in its initial substitute training set Sρsubscript𝑆𝜌S_{\\rho}. – Training (4): The adversary trains the architecture chosen at step (2) using substitute training set Sρsubscript𝑆𝜌S_{\\rho} in conjunction with classical training techniques. – Augmentation (5): The adversary applies our augmentation technique on the initial substitute training set Sρsubscript𝑆𝜌S_{\\rho} to produce a larger substitute training set Sρ+1subscript𝑆𝜌1S_{\\rho+1} with more synthetic training points. This new training set better represents the model’s decision boundaries. The adversary repeats steps (3) and (4) with the augmented set Sρ+1subscript𝑆𝜌1S_{\\rho+1}. Step (3) is repeated several times to increase the substitute DNN’s accuracy and the similarity of its decision boundaries with the oracle. We introduce the term substitute training epoch, indexed with ρ𝜌\\rho, to refer to each iteration performed. This leads to this formalization of the Jacobian-based Dataset Augmentation performed at step (5) of our substitute training algorithm to find more synthetic training points: Sρ+1={x→+λ⋅sgn(JF(O~(x→))):x→∈Sρ}∪Sρsubscript𝑆𝜌1conditional-set→𝑥⋅𝜆sgnsubscript𝐽𝐹delimited-()~𝑂→𝑥→𝑥subscript𝑆𝜌subscript𝑆𝜌S_{\\rho+1}=\\{\\vec{x}+\\lambda\\cdot\\operatorname{sgn}(J_{F}(\\tilde{O}(\\vec{x}))):\\vec{x}\\in S_{\\rho}\\}\\cup S_{\\rho} (4) where λ𝜆\\lambda is a parameter of the augmentation: it defines the size of the step taken in the sensitive direction identified by the Jacobian matrix to augment the set Sρsubscript𝑆𝜌S_{\\rho} into Sρ+1subscript𝑆𝜌1S_{\\rho+1}. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
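A condensed sketch, ours, of the five-step loop of Algorithm 1. The callables init_substitute, train_substitute, jacobian, and oracle_label are assumed helpers standing in for architecture construction, ordinary supervised training, Jacobian evaluation, and the remote label query; they are not the authors' code.

```python
import numpy as np

def train_substitute_via_oracle(S0, oracle_label, init_substitute, train_substitute,
                                jacobian, rho_max=6, lam=0.1):
    """Algorithm 1 (sketch): iteratively label, train, and augment the substitute set."""
    S = np.array(S0, dtype=float)                        # (1) small initial collection S_0
    F = init_substitute()                                # (2) architecture chosen by the adversary
    for rho in range(rho_max):
        labels = np.array([oracle_label(x) for x in S])  # (3) label S_rho by querying the oracle
        F = train_substitute(F, S, labels)               # (4) train F on (S_rho, labels)
        # (5) Jacobian-based augmentation: S_{rho+1} = {x + lam*sgn(J_F(x)[label])} U S_rho
        new_points = []
        for x, y in zip(S, labels):
            J = jacobian(F, x)
            new_points.append(np.clip(x + lam * np.sign(J[y]), 0.0, 1.0))
        S = np.concatenate([S, np.array(new_points)], axis=0)
    return F
```

Note that the substitute set doubles at every epoch in this vanilla form, which is exactly the query growth that the reservoir-sampling refinement discussed later is meant to curb.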
{
"id": "1602.02697_all_25",
"text": " Once the adversary trained a substitute DNN, it uses it to craft adversarial samples. This is performed by implementing two previously introduced approaches described in (4, 9). We provide an overview of the two approaches, namely the Goodfellow et al. algorithm and the Papernot et al. algorithm. Both techniques share a similar intuition of evaluating the model’s sensitivity to input modifications in order to select a small perturbation achieving the misclassification goal111Our attack can be implemented with other adversarial example algorithms. We focus on these two in our evaluation.. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_26",
"text": " Goodfellow et al. algorithm: This algorithm is also known as the fast gradient sign method . Given a model F𝐹F with an associated cost function c(F,x→,y)𝑐𝐹→𝑥𝑦c(F,\\vec{x},y), the adversary crafts an adversarial sample x∗→=x→+δx→→superscript𝑥→𝑥subscript𝛿→𝑥\\vec{x^{*}}=\\vec{x}+\\delta_{\\vec{x}} for a given legitimate sample x→→𝑥\\vec{x} by computing the following perturbation: δx→=εsgn(∇x→c(F,x→,y))subscript𝛿→𝑥𝜀sgnsubscript∇→𝑥𝑐𝐹→𝑥𝑦\\delta_{\\vec{x}}=\\varepsilon\\operatorname{sgn}(\\nabla_{\\vec{x}}c(F,\\vec{x},y)) (5) where perturbation sgn(∇x→c(F,x→,y))sgnsubscript∇→𝑥𝑐𝐹→𝑥𝑦\\operatorname{sgn}(\\nabla_{\\vec{x}}c(F,\\vec{x},y)) is the sign of the model’s cost function 222As described here, the method causes simple misclassification. It has been extended to achieve chosen target classes. gradient. The cost gradient is computed with respect to x→→𝑥\\vec{x} using sample x→→𝑥\\vec{x} and label y𝑦y as inputs. The value of the input variation parameter ε𝜀\\varepsilon factoring the sign matrix controls the perturbation’s amplitude. Increasing its value increases the likelihood of x∗→→superscript𝑥\\vec{x^{*}} being misclassified by model F𝐹F but on the contrary makes adversarial samples easier to detect by humans. In Section 6, we evaluate the impact of parameter ε𝜀\\varepsilon on the successfulness of our attack. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
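A minimal NumPy sketch, ours, of Equation 5. The cost_gradient callable is an assumption standing in for the gradient of the substitute's cost with respect to its input, which the adversary can compute because the substitute is fully known to them.

```python
import numpy as np

def fast_gradient_sign(x, y, cost_gradient, epsilon=0.3):
    """Equation (5): x* = x + epsilon * sgn(grad_x c(F, x, y))."""
    delta = epsilon * np.sign(cost_gradient(x, y))
    return np.clip(x + delta, 0.0, 1.0)   # keep pixel intensities in [0, 1]

# Usage sketch: craft an adversarial version of a legitimate sample x with label y.
# x_adv = fast_gradient_sign(x, y, cost_gradient=my_substitute_grad, epsilon=0.3)
```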
{
"id": "1602.02697_all_27",
"text": " Papernot et al. algorithm: This algorithm is suitable for source-target misclassification attacks where adversaries seek to take samples from any legitimate source class to any chosen target class . Misclassification attacks are a special case of source-target misclassifications, where the target class can be any class different from the legitimate source class. Given model F𝐹F, the adversary crafts an adversarial sample x∗→=x→+δx→→superscript𝑥→𝑥subscript𝛿→𝑥\\vec{x^{*}}=\\vec{x}+\\delta_{\\vec{x}} for a given legitimate sample x→→𝑥\\vec{x} by adding a perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}} to a subset of the input components x→isubscript→𝑥𝑖\\vec{x}_{i}. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_28",
"text": " To choose input components forming perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}}, components are sorted by decreasing adversarial saliency value. The adversarial saliency value S(x→,t)(i)𝑆→𝑥𝑡delimited-()𝑖S(\\vec{x},t)(i) of component i𝑖i for an adversarial target class t𝑡t is defined as: S(x→,t)(i)={0 if ∂Ft∂x→i(x→)<0 or ∑j≠t∂Fj∂x→i(x→)>0∂Ft∂x→i(x→)|∑j≠t∂Fj∂x→i(x→)| otherwise𝑆→𝑥𝑡delimited-()𝑖cases0 if subscript𝐹𝑡subscript→𝑥𝑖→𝑥expectation0 or subscript𝑗𝑡subscript𝐹𝑗subscript→𝑥𝑖→𝑥0subscript𝐹𝑡subscript→𝑥𝑖→𝑥subscript𝑗𝑡subscript𝐹𝑗subscript→𝑥𝑖→𝑥 otherwiseS(\\vec{x},t)(i)=\\left\\{\\begin{array}(){c}0\\mbox{ if }\\frac{\\partial F_{t}}{\\partial\\vec{x}_{i}}(\\vec{x})<0\\mbox{ or }\\sum_{j\\neq t}\\frac{\\partial F_{j}}{\\partial\\vec{x}_{i}}(\\vec{x})>0\\\\ \\frac{\\partial F_{t}}{\\partial\\vec{x}_{i}}(\\vec{x})\\left|\\sum_{j\\neq t}\\frac{\\partial F_{j}}{\\partial\\vec{x}_{i}}(\\vec{x})\\right|\\mbox{ otherwise}\\end{array}\\right. (6) where matrix JF=(∂Fj∂x→i)ijsubscript𝐽𝐹subscriptdelimited-()subscript𝐹𝑗subscript→𝑥𝑖𝑖𝑗J_{F}=\\left(\\frac{\\partial F_{j}}{\\partial\\vec{x}_{i}}\\right)_{ij} is the model’s Jacobian matrix. Input components i𝑖i are added to perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}} in order of decreasing adversarial saliency value S(x→,t)(i)𝑆→𝑥𝑡delimited-()𝑖S(\\vec{x},t)(i) until the resulting adversarial sample x∗→=x→+δx→→superscript𝑥→𝑥subscript𝛿→𝑥\\vec{x^{*}}=\\vec{x}+\\delta_{\\vec{x}} is misclassified by F𝐹F. The perturbation introduced for each selected input component can vary: greater perturbation reduce the number of components perturbed to achieve misclassification. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
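A sketch, ours, of the adversarial saliency value of Equation 6 and a simplified greedy selection it drives; unlike the full algorithm, this variant ranks components once instead of recomputing the Jacobian after every perturbation. The jacobian and predict_label callables are assumed substitute-model helpers.

```python
import numpy as np

def adversarial_saliency(J, target):
    """Equation (6): S(x, t)[i] per input component i, given Jacobian J (classes x features)."""
    dF_t = J[target]                        # dF_t / dx_i
    dF_rest = J.sum(axis=0) - dF_t          # sum over j != t of dF_j / dx_i
    S = dF_t * np.abs(dF_rest)
    S[(dF_t < 0) | (dF_rest > 0)] = 0.0     # zero out components failing either condition
    return S

def papernot_perturb(x, target, jacobian, predict_label, eps=1.0, max_fraction=0.3):
    """Greedily perturb the most salient components until the substitute predicts `target`."""
    x_adv = x.copy()
    budget = int(max_fraction * x.size)     # maximum distortion (number of components altered)
    order = np.argsort(-adversarial_saliency(jacobian(x_adv), target))
    for i in order[:budget]:
        x_adv[i] = np.clip(x_adv[i] + eps, 0.0, 1.0)
        if predict_label(x_adv) == target:
            break
    return x_adv
```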
{
"id": "1602.02697_all_29",
"text": " Each algorithm has its benefits and drawbacks. The Goodfellow algorithm is well suited for fast crafting of many adversarial samples with relatively large perturbations thus potentially easier to detect. The Papernot algorithm reduces perturbations at the expense of a greater computing cost. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_30",
"text": " We validate our attack against remote and local classifiers. We first apply it to target a DNN remotely provided by MetaMind, through their API333The API can be accessed online at www.metamind.io that allows a user to train classifiers using deep learning. The API returns labels produced by the DNN for any given input but does not provide access to the DNN. This corresponds to the oracle described in our threat model. We show that: • An adversary using our attack can reliably force the DNN trained using MetaMind on MNIST to misclassify 84.24%percent84.2484.24\\% of adversarial examples crafted with a perturbation not affecting human recognition. • A second oracle trained locally with the German Traffic Signs Recognition Benchmark (GTSRB) , can be forced to misclassify more than 64.24%percent64.2464.24\\% of altered inputs without affecting human recognition. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_31",
"text": " Description of the Oracle: We used the MNIST handwritten digit dataset to train the DNN . It comprises 60,0006000060,000 training and 10,0001000010,000 test images of handwritten digits. The task associated with the dataset is to identify the digit corresponding to each image. Each 282828x282828 grayscale sample is encoded as a vector of pixel intensities in the interval (0,1)01(0,1) and obtained by reading the image pixel matrix row-wise. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_32",
"text": " We registered for an API key on MetaMind’s website, which gave us access to three functionalities: dataset upload, automated model training, and model prediction querying. We uploaded the 50,0005000050,000 samples included in the MNIST training set to MetaMind and then used the API to train a classifier on the dataset. We emphasize that training is automated: we have no access to the training algorithm, model architecture, or model parameters. All we are given is the accuracy of the resulting model, computed by MetaMind using a validation set created by isolating 10%percent1010\\% of the training samples. Details can be found on MetaMind’s website. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_33",
"text": " Training took 36 hours to return a classifier with a 94.97%percent94.9794.97\\% accuracy. This performance cannot be improved as we cannot access or modify the model’s specifications and training algorithm. Once training is completed, we could access the model predictions, for any input of our choice, through the API. Predictions take the form of a class label. This corresponds to the threat model described in Section 3. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_34",
"text": " Initial Substitute Training Sets: First, the adversary collects an initial substitute training set. We describe two such sets used to attack the MetaMind oracle: ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_35",
"text": " • MNIST subset: This initial substitute training set is made of 150150150 samples from the MNIST test set. They differ from those used by the oracle for training as test and training sets are distinct. We assume adversaries can collect such a limited sample set under the threat model described in Section 3 with minimal knowledge of the oracle task: here, handwritten digit classification. • Handcrafted set: To ensure our results do not stem from similarities between the MNIST test and training sets, we also consider a handcrafted initial substitute training set. We handcrafted 100100100 samples by handwriting 101010 digits for each class between 00 and 999 with a laptop trackpad. We then adapted them to the MNIST format of 282828x282828 grayscale pixels. Some are shown below. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_36",
"text": " Substitute DNN Training: The adversary uses the initial substitute training sets and the oracle to train subsitute DNNs. Our substitute architecture A, a standard for image classification, is described in Table 13 (cf. appendix). The substitute DNN is trained on our machine for 666 substitute epochs. During each of these 666 epochs, the model is trained for 101010 epochs from scratch with a learning rate of 10−2superscript10210^{-2} and momentum of 0.90.90.9. Between substitute epochs, we perform a Jacobian-based dataset augmentation with a step size of λ=0.1𝜆0.1\\lambda=0.1 to generate additional synthetic training data, which we label using the MetaMind oracle. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_37",
"text": " The accuracy of the two substitute DNNs is reported in Figure 4. It is computed with the MNIST test set (minus the 150150150 samples used in the first initial substitute training set). The adversary does not have access to this full test set: we solely use it to analyze our results. The two substitute DNNs respectively achieve a 81.20%percent81.2081.20\\% and 67.00%percent67.0067.00\\% accuracy on the MNIST test set after 666 substitute training epochs. These accuracies fall short of current state-of-the-art accuracies on this task. However, the adversary has access to a limited number of samples (in this case 6,400=100×266400100superscript266,400=100\\times 2^{6} instead of 50,0005000050,000 for state-of-the-art models). Furthermore, the adversarial goal is to craft adversarial samples misclassified by the oracle. Instead of learning a substitute DNN with optimal accuracy, the adversary is interested in learning a substitute capable of mimicking the oracle decision boundaries. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_38",
"text": " Adversarial Sample Crafting: Using the substitute DNNs, we then craft adversarial samples using Goodfellow’s algorithm. We decided to use the 10,0001000010,000 samples from the MNIST test set as our legitimate samples.444Again, adversaries do not need access to the dataset and can use any legitimate sample of their choice to craft adversarial samples. We use it in order to show that expected inputs can be misclassified on a large scale. We evaluate sample crafting using two metrics: success rate and transferability. The success rate is the proportion of adversarial samples misclassified by the substitute DNN. Our goal is to verify whether these samples are also misclassified by the oracle or not. Therefore, the transferability of adversarial samples refers to the oracle misclassification rate of adversarial samples crafted using the substitute DNN. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_39",
"text": " Figure 5 details both metrics for each substitute DNN and for several values of the input variation ε𝜀\\varepsilon (cf. Equation 5). Transferability reaches 84.24%percent84.2484.24\\% for the first substitute DNN and 78.72%percent78.7278.72\\% for the second, with input variations of ε=0.3𝜀0.3\\varepsilon=0.3. Our attack strategy is thus effectively able to severely damage the output integrity of the MetaMind oracle. Using the substitute training set handcrafted by the adversary limits the transferability of adversarial samples when compared to the substitute set extracted from MNIST data, for all input variations except ε=0.2𝜀0.2\\varepsilon=0.2. Yet, the transferability of both substitutes is similar, corroborating that our attack can be executed without access to any of the oracle’s training data. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_40",
"text": " To analyze the labels assigned by the MetaMind oracle, we plot confusion matrices for adversarial samples crafted using the first substitute DNN with 444 values of ε𝜀\\varepsilon. In Figure 6, rates on the diagonal indicate the proportion of samples correctly classified by the oracle for each of the 101010 classes. Off-diagonal values are the proportion of samples misclassified in a wrong class. For instance, cell (8,3)83(8,3) in the third matrix indicates that 89%percent8989\\% instances of a 333 are classified as a 888 by the oracle when perturbed with an input variation of ε=0.25𝜀0.25\\varepsilon=0.25. Confusion matrices converge to most samples being classified as 444s and 888s as ε𝜀\\varepsilon increases. This could be due to DNNs more easily classifying inputs in these classes . ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_41",
"text": " We now validate our attack on a different dataset, using an oracle trained locally to recognize traffic signs on the GTSRB dataset. The attack achieves higher transferability rates at lower distortions compared to the MNIST oracle. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_42",
"text": " Oracle Description: The GTSRB dataset is an image collection consisting of 43 traffic signs . Images vary in size and are RGB-encoded. To simplify, we resize images to 323232x323232 pixels, recenter them by subtracting the mean component, and rescale them by factoring their standard deviations out. We keep 35,0003500035,000 images for our training set and 4,00040004,000 for our validation set (out of the 39,2093920939,209 available), and 10,0001000010,000 for our test set (out of 12,6301263012,630). We train the oracle on our machine, using the DNN B from Table 13 (cf. appendix), for 505050 epochs with a learning rate of 10−2superscript10210^{-2} and a momentum of 0.90.90.9 (both decayed by 0.50.50.5 every 101010 epochs). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_43",
"text": " Substitute DNN Training: The adversary uses two initial substitute training sets extracted from the GTSRB test set. The first includes the first 1,00010001,000 samples and the second the first 500500500. The number of initial samples is higher than for MNIST substitutes as inputs have a higher dimensionality. We train three substitute architectures C, D, and E (cf. Table 13) using the oracle for 666 substitute training epochs with a Jacobian-based dataset augmentation parameter of λ=0.1𝜆0.1\\lambda=0.1. Substitute C and E where trained with the 1,00010001,000 sample initial substitute training set and achieve a 71.42%percent71.4271.42\\% accuracy. Substitute D was trained with the initial set of 500500500 samples. Its accuracy of 60.12%percent60.1260.12\\% is lower than C and E. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_44",
"text": " Adversarial Crafting: We use Goodfellow’s algorithm with ε𝜀\\varepsilon between 0.010.010.01 and 0.50.50.5 to craft adversarial samples from the test set. Results are shown in Figure 7. Adversarial samples crafted with variations ε<0.3𝜀0.3\\varepsilon<0.3 are more transferable than those crafted with the same ε𝜀\\varepsilon for MNIST models. This is likely due to the higher input dimensionality—3,07230723,072 components instead of 784784784—which means almost 444 times more perturbation is applied with the same ε𝜀\\varepsilon. Nevertheless, with success rates higher than 98.98%percent98.9898.98\\% and transferability rates ranging from 64.24%percent64.2464.24\\% to 69.03%percent69.0369.03\\% for ε=0.3𝜀0.3\\varepsilon=0.3, which is hard to distinguish for humans, the attack is successful. The transferability of adversarial samples crafted using substitute DNN D is comparable or higher than corresponding samples for DNNs C and E, despite being less accurate (trained with less samples). This emphasizes that there is no strong correlation between substitute accuracy and transferability. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_45",
"text": " Having shown in Section 5 that an adversary can force an MNIST oracle from MetaMind, and a GTSRB oracle trained locally, to misclassify inputs, we now perform a parameter space exploration of both attack steps–the substitute DNN training and the adversarial sample crafting. We explore the following questions: “(1) How can substitute training be fine-tuned to improve adversarial sample transferability?” and (2) “For each adversarial sample crafting strategies, which parameters optimize transferability?”. We found that: • In Section 6.1, we show that the choice of substitute DNN architecture (number of layers, size, activation function, type) has a limited impact on adversarial sample transferability. Increasing the number of epochs, after the substitute DNN has reached an asymptotic accuracy, does not improve adversarial sample transferability. • At comparable input perturbation magnitude, the Goodfellow and Papernot algorithms have similar transferability rates (see Section 6.2). In this section, we use an oracle trained locally to limit querying of the MetaMind API. We train architecture A (cf. Table 13) for 505050 epochs with a learning parameter 10−2superscript10210^{-2} and a momentum 0.90.90.9 (both decayed by 0.50.50.5 every 101010 epochs). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_46",
"text": " We first seek to quantify the impact of substitute training algorithm parameters on adversarial sample transferability and introduce a refinement to reduce oracle querying. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_47",
"text": " Choosing an Architecture: We train substitute DNNs A and F to M (cf. Table 13) using 150150150 samples from the MNIST test set as the substitute training set. During each of the 666 substitute training epochs, the DNN is trained for 555 epochs from scratch. Between epochs, synthetic data is added to the training set using Jacobian-based dataset augmentations with step λ=0.1𝜆0.1\\lambda=0.1. The substitute architectures differ from the oracle’s by the type, number, and size of layers. In Table 1, we report the accuracy of each architecture after 222 and 666 substitute training epochs, as well as the adversarial sample transferability after 6 epochs. Adversarial samples are crafted using the Goodfellow algorithm with an input variation of ε=0.4𝜀0.4\\varepsilon=0.4 (which we justify later). The last column of Table 1 shows that the choice of architecture has a limited impact on adversarial sample transferability, and therefore on the attack success. The most important transferability drop follows from removing all convolutional layers. Changing the hidden layer activation function from rectified linear to a sigmoid does not impact transferability significantly. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_48",
"text": " Choosing the number of substitute epochs: Another tunable parameter is the number of epochs for which substitute DNNs are trained. Intuitively, one would hypothesize that the longer we train the substitute, the more samples labeled using the oracle are included in the substitute training set, thus the higher the transferability of adversarial samples will be. This intuition is confirmed only partially by our experiments on substitute DNN A. We find that for for input variations ε≤0.3𝜀0.3\\varepsilon\\leq 0.3, the transferability is slightly improved by a rate between +3%percent3+3\\% to +9%percent9+9\\%, but for variations ε≥0.4𝜀0.4\\varepsilon\\geq 0.4, the transferability is slightly degraded by less than 1%percent11\\%. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_49",
"text": " Setting the step size: We trained substitute A using different Jacobian-based dataset augmentation step sizes λ𝜆\\lambda. Increasing or decreasing the step size (from λ=0.1𝜆0.1\\lambda=0.1 used in the rest of this paper) does not modify the substitute accuracy by more than 3%percent33\\%. Larger step sizes decrease convergence stability while smaller values yield slower convergence. However, increasing step size λ𝜆\\lambda negatively impacts adversarial sample transferability : for instance with a step size of 0.30.30.3 compared to 0.10.10.1, the transferability rate for ε=0.25𝜀0.25\\varepsilon=0.25 is 10.82%percent10.8210.82\\% instead of 22.35%percent22.3522.35\\% and for ε=0.5𝜀0.5\\varepsilon=0.5, 82.07%percent82.0782.07\\% instead of 85.22%percent85.2285.22\\%. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_50",
"text": " However, having the step size periodically alternating between positive and negative values improves the quality of the oracle approximation made by the substitute. This could be explained by the fact that after a few substitute epochs, synthetic inputs are outside of the input domain and are thus clipped to produce an acceptable input. We introduce an iteration period τ𝜏\\tau after which the step size is multiplied by −11-1. Thus, the step size λ𝜆\\lambda is now replaced by: λρ=λ⋅(−1)⌊ρτ⌋subscript𝜆𝜌⋅𝜆superscript1𝜌𝜏\\lambda_{\\rho}=\\lambda\\cdot(-1)^{\\left\\lfloor\\frac{\\rho}{\\tau}\\right\\rfloor} (7) where τ𝜏\\tau is set to be the number of epochs after which the Jacobian-based dataset augmentation does not lead any substantial improvement in the substitute. A grid search can also be performed to find an optimal value for the period τ𝜏\\tau. We also experimented with a decreasing grid step amplitude λ𝜆\\lambda, but did not find that it yielded substantial improvements. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
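A one-liner sketch, ours, of Equation 7's sign-alternating step size.

```python
def periodic_step_size(lam, rho, tau):
    """Equation (7): lambda_rho = lambda * (-1) ** floor(rho / tau)."""
    return lam * (-1) ** (rho // tau)

# Example: with lam = 0.1 and tau = 3, epochs 0-2 use +0.1, epochs 3-5 use -0.1, and so on.
print([periodic_step_size(0.1, rho, 3) for rho in range(8)])
```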
{
"id": "1602.02697_all_51",
"text": " Reducing Oracle Querying: We apply reservoir sampling to reduce the number of queries made to the oracle. This is useful when learning substitutes in realistic environments, or when interacting with paid APIs, where the number of label queries an adversary can make without exceeding a quota or being detected by a defender is limited. Reservoir sampling is a technique that randomly select κ𝜅\\kappa samples from a list of samples. The total number of samples in the list can be both very large and unknown. We use it to select κ𝜅\\kappa new inputs before a Jacobian-based dataset augmentation. This prevents the exponential growth of queries made to the oracle at each augmentation. At iterations ρ>σ𝜌𝜎\\rho>\\sigma (the first σ𝜎\\sigma iterations are performed normally), when considering the previous set Sρ−1subscript𝑆𝜌1S_{\\rho-1} of substitute training inputs, we select κ𝜅\\kappa inputs from Sρ−1subscript𝑆𝜌1S_{\\rho-1} to be augmented in Sρsubscript𝑆𝜌S_{\\rho}. Using reservoir sampling ensures that each input in Sρ−1subscript𝑆𝜌1S_{\\rho-1} has an equal probability 1|Sρ−1|1subscript𝑆𝜌1\\frac{1}{\\left|S_{\\rho-1}\\right|} to be augmented in Sρsubscript𝑆𝜌S_{\\rho}. The number of queries made to the oracle is reduced from n⋅2ρ⋅𝑛superscript2𝜌n\\cdot 2^{\\rho} for the vanilla Jacobian-based augmentation to n⋅2σ+κ⋅(ρ−σ)⋅𝑛superscript2𝜎⋅𝜅𝜌𝜎n\\cdot 2^{\\sigma}+\\kappa\\cdot(\\rho-\\sigma) with reservoir sampling. In Section 7, we show that using reservoir sampling to reduce the number of synthetic training inputs does not significantly degrade the substitute accuracy. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
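A sketch, ours, of classic reservoir sampling over the previous substitute set, together with the query-count comparison stated in the entry above; n, sigma, and kappa follow that entry's notation, and the printed values reproduce its numbers for n = 100, sigma = 3, kappa = 400, rho = 10.

```python
import numpy as np

def reservoir_sample(S_prev, kappa, rng):
    """Select kappa elements uniformly at random from S_prev (classic reservoir sampling,
    usable even when S_prev is a stream of unknown length)."""
    reservoir = []
    for i, x in enumerate(S_prev):
        if i < kappa:
            reservoir.append(x)
        else:
            j = rng.integers(0, i + 1)     # random index in [0, i]
            if j < kappa:
                reservoir[j] = x
    return reservoir

def query_counts(n, rho, sigma, kappa):
    """Oracle queries: n * 2^rho without reservoir sampling vs. n * 2^sigma + kappa * (rho - sigma)."""
    return n * 2 ** rho, n * 2 ** sigma + kappa * (rho - sigma)

rng = np.random.default_rng(0)
print(len(reservoir_sample(range(100000), kappa=400, rng=rng)))  # 400
print(query_counts(n=100, rho=10, sigma=3, kappa=400))           # (102400, 3600)
```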
{
"id": "1602.02697_all_52",
"text": " We compare the transferability of adversarial samples produced by each algorithm introduced previously (4, 9), to elect the strongest technique under our threat model. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_53",
"text": " Goodfellow’s algorithm: Recall from Equation 5 the perturbation computed in the Goodfellow attack. Its only parameter is the variation ε𝜀\\varepsilon added in the direction of the gradient sign. We use the same architecture set as before to quantify the impact of ε𝜀\\varepsilon on adversarial sample transferability. In Figure 8, architecture A outperforms all others: it is a copy of the oracle’s and acts as a baseline. Other architectures have asymptotic transferability rates ranging between 72.24%percent72.2472.24\\% and 80.21%percent80.2180.21\\%, confirming that the substitute architecture choice has a limited impact on transferability. Increasing the value of ε𝜀\\varepsilon above 0.40.40.4 yields little improvement in transferability and should be avoided to guarantee indistinguishability of adversarial samples to humans. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_54",
"text": " Papernot’s algorithm: This algorithm is fine-tuned by two parameters: the maximum distortion ΥΥ\\Upsilon and the input variation ε𝜀\\varepsilon. The maximum distortion555In , the algorithm stopped perturbing when the input reached the target class. Here, we force the algorithm to continue perturbing until it changed ΥΥ\\Upsilon input components. defines the number of input components that are altered in perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}}. The input variation, similarly to the Goodfellow algorithm, controls the amount of change induced to altered input components. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_55",
"text": " We first evaluate the impact of the maximum distortion ΥΥ\\Upsilon on adversarial sample transferability. For now, components selected to be perturbed are increased by ε=1𝜀1\\varepsilon=1. Intuitively, increasing the maximum distortion makes adversarial samples more transferable. Higher distortions increase the misclassification confidence of the substitute DNN, and also increases the likelihood of the oracle misclassifying the same sample. These results are reported in Figure 9. Increasing distortion ΥΥ\\Upsilon from 7.14%percent7.147.14\\% to 28.57%percent28.5728.57\\% improves transferability: at a 7.14%percent7.147.14\\% distortion, the average transferability across all architectures is 14.70%percent14.7014.70\\% whereas at a 28.57%percent28.5728.57\\% distortion, the average transferability is at 55.53%percent55.5355.53\\%. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_56",
"text": " We now quantify the impact of the variation ε𝜀\\varepsilon introduced to each input component selected in δx→subscript𝛿→𝑥\\delta_{\\vec{x}}. We find that reducing the input variation from 111 to 0.70.70.7 significantly degrades adversarial sample transferability, approximatively by a factor of 2 (cf. Figure 10). This is explained by the fixed distortion parameter ΥΥ\\Upsilon, which prevents the crafting algorithm from increasing the number of components altered to compensate for the reduced effectiveness yielded by the smaller ε𝜀\\varepsilon. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_57",
"text": " Comparing Crafting Algorithms: To compare the two crafting strategies and their differing perturbation styles fairly, we compare their success rate given a fixed L1 norm of the introduced perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}}, which can be defined as: ‖δx→‖1=ε⋅‖δx→‖0subscriptnormsubscript𝛿→𝑥1⋅𝜀subscriptnormsubscript𝛿→𝑥0\\|\\delta_{\\vec{x}}\\|_{1}=\\varepsilon\\cdot\\|\\delta_{\\vec{x}}\\|_{0} (8) where ‖δx→‖0subscriptnormsubscript𝛿→𝑥0\\|\\delta_{\\vec{x}}\\|_{0} is the number of input components selected in the perturbation δx→subscript𝛿→𝑥\\delta_{\\vec{x}}, and ε𝜀\\varepsilon the input variation introduced to each component perturbed. For the Goodfellow algorithm, we always have ‖δx→‖0=1subscriptnormsubscript𝛿→𝑥01\\|\\delta_{\\vec{x}}\\|_{0}=1, whereas for the Papernot algorithm, values vary for both ε𝜀\\varepsilon and ‖δx→‖0subscriptnormsubscript𝛿→𝑥0\\|\\delta_{\\vec{x}}\\|_{0}. For instance, ‖δx→‖1=0.4subscriptnormsubscript𝛿→𝑥10.4\\|\\delta_{\\vec{x}}\\|_{1}=0.4 corresponds to a Goodfellow algorithm with ε=0.4𝜀0.4\\varepsilon=0.4 and a Papernot algorithm with ε=1𝜀1\\varepsilon=1 and Υ=40%Υpercent40\\Upsilon=40\\%. Corresponding transferability rates can be found in Table 1 and Figure 9 for our running set of architectures. Performances are comparable with some DNNs performing better with one algorithm and others with the other. Thus, the choice of algorithm depends on acceptable perturbations: e.g., all features perturbed a little vs. few features perturbed a lot. Indeed, the Goodfellow algorithm gives more control on ε𝜀\\varepsilon while the Papernot algorithm gives more control on ΥΥ\\Upsilon. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
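A small illustration, ours, of Equation 8's accounting, reading the norm term as the fraction of components perturbed, which is how the entry's worked example (ε = 0.4 on every component vs. ε = 1 on 40% of components) balances out.

```python
def l1_budget(eps, fraction_perturbed):
    """Equation (8) with the L0 term expressed as the fraction of components perturbed."""
    return eps * fraction_perturbed

# Goodfellow: every component moved by eps = 0.4  -> L1 budget 0.4
# Papernot:   40% of components moved by eps = 1  -> L1 budget 0.4
print(l1_budget(0.4, 1.0), l1_budget(1.0, 0.4))
```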
{
"id": "1602.02697_all_58",
"text": " So far, all substitutes and oracles considered were learned with DNNs. However, no part of the attack limits its applicability to other ML techniques. For instance, we show that the attack generalizes to non-differentiable target oracles like decision trees. As pointed out by Equation 4, the only limitation is placed on the substitute: it must model a differentiable function—to allow for synthetic data to be generated with its Jacobian matrix. We show below that: • Substitutes can also be learned with logistic regression. • The attack generalizes to additional ML models by: (1) learning substitutes of 4 classifier types (logistic regression, SVM, decision tree, nearest neighbors) in addition to DNNs, and (2) targeting remote models hosted by Amazon Web Services and Google Cloud Prediction with success rates of 96.19%percent96.1996.19\\% and 88.94%percent88.9488.94\\% after 800800800 queries to train the substitute. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_59",
"text": " We here show that our approach generalizes to ML models that are not DNNs. Indeed, we learn substitutes for 4 representative types of ML classifiers in addition to DNNs: logistic regression (LR), support vector machines (SVM), decision trees (DT), and nearest neighbor (kNN). All of these classifiers are trained on MNIST, with no feature engineering (i.e. directly on raw pixel values) as done in Section 5. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_60",
"text": " Whereas we previously trained all of our substitutes using DNNs only, we now use both DNNs and LR as substitute models. The Jacobian-based dataset augmentation described in the context of DNNs is easily adapted to logistic regression: the later is analog to the softmax layer frequently used by the former when outputting probability vectors. We use 100100100 samples from the MNIST test set as the initial substitute training set and use the two refinements introduced in Section 6: a periodic step size and reservoir sampling. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_61",
"text": " Figure 11(a) and 11(b) plot for each iteration ρ𝜌\\rho the share of samples on which the substitute DNNs and LRs agree with predictions made by the oracle they are approximating. This proportion is estimated by comparing labels assigned to the test set by the substitutes and oracles before each iteration ρ𝜌\\rho of the Jacobian-based dataset augmentation. All substitutes are able to approximate the corresponding oracle at rates higher between 77%percent7777\\% and 83%percent8383\\% after ρ=10𝜌10\\rho=10 iterations (to the exception of the decision tree oracle, which could be due to its non-continuity). LR substitute accuracies are generally lower than those of DNN substitutes, except when targeting the LR and SVM oracles where LR substitutes outperform DNN ones. However, LR substitutes are computationally more efficient and reach their asymptotic match rate faster, after ρ=3𝜌3\\rho=3 iterations, corresponding to 800800800 oracle queries. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_62",
"text": " Table 2 quantifies the impact of refinements introduced in Section 6 on results reported in Figure 11(a) and 11(b). The periodic step size (PSS) increases the oracle approximation accuracy of substitutes. After ρ=9𝜌9\\rho=9 epochs, a substitute DNN trained with PSS matches 89.28%percent89.2889.28\\% of the DNN oracle labels, whereas the vanilla substitute DNN matches only 78.01%percent78.0178.01\\%. Similarly, the LR substitute with PSS matches 84.01%percent84.0184.01\\% of the LR oracle labels while the vanilla substitute matched 72.00%percent72.0072.00\\%. Using reservoir sampling (RS) reduces oracle querying. For instance, 101010 iterations with RS (σ=3𝜎3\\sigma=3 and κ=400𝜅400\\kappa=400) make 100⋅23+400(10−3)=3,600⋅100superscript234001033600100\\cdot 2^{3}+400(10-3)=3,600 queries to the oracle instead of 102,400102400102,400 without RS. This decreases the substitute accuracy, but when combined with PSS it remains superior to the vanilla substitutes. For instance, the vanilla substitute matched 7,80178017,801 of the DNN oracle labels, the PSS one 8,92889288,928, and the PSS with RS one 8,29082908,290. Simarly, the vanilla LR substitute matched 71.56%percent71.5671.56\\% of the SVM oracle labels, the PSS one 82.19%percent82.1982.19\\%, and the PSS with RS 79.20%percent79.2079.20\\%. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_63",
"text": " Amazon oracle: To train a classifier on Amazon Machine Learning,666https://aws.amazon.com/machine-learning, we uploaded a CSV version of the MNIST dataset to a S3 bucket. We then loaded the data, selected the multi-class model type, and keept default configuration settings. The process took a few minutes and produced a classifier achieving a 92.17%percent92.1792.17\\% test set accuracy. We cannot improve the accuracy due to the automated nature of training. We then activate real-time predictions to query the model for labels from our machine with the provided API. Although probabilities are returned, we discard them and retain only the most likely label—as stated in our threat model (Section 3). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_64",
"text": " Google oracle: The procedure to train a classifier on Google’s Cloud Prediction API777https://cloud.google.com/prediction/ is similar to Amazon’s. We upload the CSV file with the MNIST training data to Google Cloud Storage. We then train a model using the Prediction API. The only property we can specify is the expected multi-class nature of our model. We then evaluate the resulting model on the MNIST test set. The API reports an accuracy of 92%percent9292\\% on this test set for the model trained. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_65",
"text": " Substitute Training: By augmenting an initial training set of 100100100 test set samples, we train a DNN and LR substitute for each of the two oracles. We measure success as the rate of adversarial samples misclassified by the corresponding oracle, among the 10,0001000010,000 produced from the test set using the fast gradient sign method with parameter ε=0.3𝜀0.3\\varepsilon=0.3. These rates, computed after ρ∈{3,6}𝜌36\\rho\\in\\{3,6\\} dataset augmentation iterations, are reported in Table 3. Results reported in the last row use both a periodic step size and reservoir sampling (hence the reduced number of queries made to train the substitute). ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_66",
"text": " Experimental Results: With a 96.19%percent96.1996.19\\% misclassification rate for a perturbation ε=0.3𝜀0.3\\varepsilon=0.3 crafted using a LR substitute trained with 800800800 oracle queries, the model hosted by Amazon is easily misled. The model trained by Google is somewhat more robust to adversarial samples, but is still vulnerable to a large proportion of samples: 88.94%percent88.9488.94\\% of adversarial samples produced in the same conditions are misclassified. A careful read of the documentation indicated that the model trained by Amazon is a multinomial logistic regression.888docs.aws.amazon.com/machine-learning As pointed out in , shallow models like logistic regression are unable to cope with adversarial samples and learn robust classifiers. This explains why the attack is very successful and the LR substitute performs better than the DNN substitute. We were however not able to find the ML technique Google uses. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_67",
"text": " The last row of Table 3 shows how combining periodic step sizes with reservoir sampling allow us to reduce querying of both oracles during substitute training, while crafting adversarial samples with higher transferability to the target classifier. Indeed, querying is reduced by a factor larger than 333 from 6,40064006,400 to 2,00020002,000 queries, while misclassification decreases only from 96.78%percent96.7896.78\\% to 95.68%percent95.6895.68\\% for the Amazon DNN substitute. It is still larger than the rate of 87.44%percent87.4487.44\\% achieved after 800800800 queries by the substitute learned without the refinements. Similarly, the misclassification rate of the Google LR substitute is 97.72%percent97.7297.72\\%—compared to 92.05%percent92.0592.05\\% with the original method after ρ=6𝜌6\\rho=6 epochs, confirming the result. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_68",
"text": " The two types of defense strategies are: (1) reactive where one seeks to detect adversarial examples, and (2) proactive where one makes the model itself more robust. Our attack is not more easily detectable than a classic adversarial example attack. Indeed, oracle queries may be distributed among a set of colluding users, and as such remain hard to detect. The defender may increase the attacker’s cost by training models with higher input dimensionality or modeling complexity, as our experimental results indicate that these two factors increase the number of queries required to train substitutes. In the following, we thus only analyze our attack in the face of defenses that seek to make the (oracle) model robust. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_69",
"text": " Many potential defense mechanisms fall into a category we call gradient masking. These techniques construct a model that does not have useful gradients, e.g., by using a nearest neighbor classifier instead of a DNN. Such methods make it difficult to construct an adversarial example directly, due to the absence of a gradient, but are often still vulnerable to the adversarial examples that affect a smooth version of the same model. Previously, it has been shown that nearest neighbor was vulnerable to attacks based on transferring adversarial examples from smoothed nearest neighbors. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_70",
"text": " We show a more general flaw in the category of gradient masking. Even if the defender attempts to prevent attacks by not publishing the directions in which the model is sensitive, these directions can be discovered by other means, in which case the same attack can still succeed. We show that the black-box attack based on transfer from a substitute model overcomes gradient masking defenses. No fully effective defense mechanism is known, but we study the two with the greatest empirical success so far: adversarial training (4, 14), and defensive distillation for DNNs . ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_71",
"text": " Adversarial training: It was shown that injecting adversarial examples throughout training increases the robustness of significantly descriptive models, such as DNNs (4, 14, 17). We implemented an approximation of this defense using the Google Prediction API. Since the API does not support the generation of adversarial examples at every step of training, as a correct implementation of adversarial training would do, we instead inject a large amount of adversarial examples infrequently. After training in this way, the model has a misclassification rate of 8.75%percent8.758.75\\% on the unperturbed test set, but the adversarial misclassification rate rises to 100%percent100100\\% when ρ=6𝜌6\\rho=6. To evaluate this defense strategy using a correct implementation, we resort to training the oracle locally, using our own codebase that includes support for generating adversarial examples at each step. After each training batch, we compute and train on adversarial examples generated with the fast gradient sign method before starting training on the next batch of the original training data. Results are given in Table 4. We observe that for ε=0.15𝜀0.15\\varepsilon=0.15, the defense can be evaded using the black-box attack with adversarial examples crafted on the substitute and misclassified by the oracle at rates up to 71.25%percent71.2571.25\\%. However, for ε=0.3𝜀0.3\\varepsilon=0.3, the black-box attack is not effective anymore. Therefore, making a machine learning model robust to small and infinitesimal perturbations of its inputs is an example of gradient masking and can be evaded using our substitute-based black-box approach. However, making the model robust to larger and finite perturbations prevents the black-box attack. To confirm this hypothesis, we now show that defensive distillation, which makes the model robust to infinitesimal perturbations, can be evaded by the black-box approach. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_72",
"text": " Defensive distillation: Due to space constraints, we refer readers to for a detailed presentation of defensive distillation, which is an alternative defense. Because the remotely hosted APIs we study here do not implement defensive distillation or provide primitives that could be used to implement it, we are forced to evaluate this defense on a locally trained oracle. Therefore, we train a distilled model as described in to act as our MNIST oracle. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_73",
"text": " We train several variants of the DNN architecture A at different distillation temperatures T=5,10,100𝑇510100T=5,10,100. For each of them, we measure the success of the fast gradient sign attack (i.e., the Goodfellow et al. algorithm) directly performed on the distilled oracle—as a baseline corresponding to a white-box attack—and using a substitute DNN trained with synthetic data as described throughout the present paper. The results are reported in Figure 12 for different values of the input variation parameter ε𝜀\\varepsilon on the horizontal axis. We find that defensive distillation defends against the fast gradient sign method when the attack is performed directly on the distilled model, i.e. in white-box settings. However, in black-box settings using the attack introduced in the present paper, the fast gradient sign method is found to be successful regardless of the distillation temperature used by the oracle. We hypothesize that this is due to the way distillation defends against the attack: it reduces the gradients in local neighborhoods of training points. However, our substitute model is not distilled, and as such possesses the gradients required for the fast gradient sign method to be successful when computing adversarial examples. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_74",
"text": " Defenses which make models robust in a small neighborhood of the training manifold perform gradient masking: they smooth the decision surface and reduce gradients used by adversarial crafting in small neighborhoods. However, using a substitute and our black-box approach evades these defenses, as the substitute model is not trained to be robust to the said small perturbations. We conclude that defending against finite perturbations is a more promising avenue for future work than defending against infinitesimal perturbations. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_75",
"text": " We introduced an attack, based on a novel substitute training algorithm using synthetic data generation, to craft adversarial examples misclassified by black-box DNNs. Our work is a significant step towards relaxing strong assumptions about adversarial capabilities made by previous attacks. We assumed only that the adversary is capable of observing labels assigned by the model to inputs of its choice. We validated our attack design by targeting a remote DNN served by MetaMind, forcing it to misclassify 84.24%percent84.2484.24\\% of our adversarial samples. We also conducted an extensive calibration of our algorithm and generalized it to other ML models by instantiating it against classifiers hosted by Amazon and Google, with success rates of 96.19%percent96.1996.19\\% and 88.94%percent88.9488.94\\%. Our attack evades a category of defenses, which we call gradient masking, previously proposed to increase resilience to adversarial examples. Finally, we provided an intuition for adversarial sample transferability across DNNs in Appendix B. ",
"title": "Practical Black-Box Attacks against Machine Learning"
},
{
"id": "1602.02697_all_76",
"text": " Nicolas Papernot is supported by a Google PhD Fellowship in Security. Research was also supported in part by the Army Research Laboratory, under Cooperative Agreement Number W911NF-13-2-0045 (ARL Cyber Security CRA), and the Army Research Office under grant W911NF-13-1-0421. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon. ",
"title": "Practical Black-Box Attacks against Machine Learning"
}
] |
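The passages quoted above repeatedly reference crafting adversarial samples with the fast gradient sign method under a perturbation budget ε (0.15 or 0.3 in the reported experiments). As a brief illustrative aside that is not part of the quoted paper, the following minimal NumPy sketch shows that crafting step; the `loss_gradient` callable and the default ε value are assumptions for illustration only.

```python
import numpy as np

def fgsm_perturb(x, loss_gradient, epsilon=0.3, clip_min=0.0, clip_max=1.0):
    """Minimal fast-gradient-sign-method sketch (illustrative, not the paper's code).

    x             : input sample as a float array (e.g., a flattened image in [0, 1]).
    loss_gradient : assumed callable returning dL/dx, the gradient of the training loss
                    with respect to the input; in the black-box setting above this
                    gradient would come from the locally trained substitute model.
    epsilon       : maximum per-feature perturbation (the quoted experiments use 0.15/0.3).
    """
    grad = loss_gradient(x)
    x_adv = x + epsilon * np.sign(grad)        # step in the direction that increases the loss
    return np.clip(x_adv, clip_min, clip_max)  # keep the perturbed sample a valid input
```

In the substitute-based attack described above, the resulting x_adv would then be submitted to the remote oracle and counted as a success if the oracle misclassifies it.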
Is it better to use class-specific or class-agnostic masks in general?
|
In general, class-agnostic masks are nearly as effective as class-specific masks [40].
|
[
40
] |
[
{
"id": "1703.06870_all_0",
"text": " The vision community has rapidly improved object detection and semantic segmentation results over a short period of time. In large part, these advances have been driven by powerful baseline systems, such as the Fast/Faster R-CNN (12, 36) and Fully Convolutional Network (FCN) frameworks for object detection and semantic segmentation, respectively. These methods are conceptually intuitive and offer flexibility and robustness, together with fast training and inference time. Our goal in this work is to develop a comparably enabling framework for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_1",
"text": " Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance. It therefore combines elements from the classical computer vision tasks of object detection, where the goal is to classify individual objects and localize each using a bounding box, and semantic segmentation, where the goal is to classify each pixel into a fixed set of categories without differentiating object instances.111Following common terminology, we use object detection to denote detection via bounding boxes, not masks, and semantic segmentation to denote per-pixel classification without differentiating instances. Yet we note that instance segmentation is both semantic and a form of detection. Given this, one might expect a complex method is required to achieve good results. However, we show that a surprisingly simple, flexible, and fast system can surpass prior state-of-the-art instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_2",
"text": " Our method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting segmentation masks on each Region of Interest (RoI), in parallel with the existing branch for classification and bounding box regression (Figure 1). The mask branch is a small FCN applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to implement and train given the Faster R-CNN framework, which facilitates a wide range of flexible architecture designs. Additionally, the mask branch only adds a small computational overhead, enabling a fast system and rapid experimentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_3",
"text": " In principle Mask R-CNN is an intuitive extension of Faster R-CNN, yet constructing the mask branch properly is critical for good results. Most importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is most evident in how RoIPool (18, 12), the de facto core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, we propose a simple, quantization-free layer, called RoIAlign, that faithfully preserves exact spatial locations. Despite being a seemingly minor change, RoIAlign has a large impact: it improves mask accuracy by relative 10% to 50%, showing bigger gains under stricter localization metrics. Second, we found it essential to decouple mask and class prediction: we predict a binary mask for each class independently, without competition among classes, and rely on the network’s RoI classification branch to predict the category. In contrast, FCNs usually perform per-pixel multi-class categorization, which couples segmentation and classification, and based on our experiments works poorly for instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_4",
"text": " Without bells and whistles, Mask R-CNN surpasses all previous state-of-the-art single-model results on the COCO instance segmentation task , including the heavily-engineered entries from the 2016 competition winner. As a by-product, our method also excels on the COCO object detection task. In ablation experiments, we evaluate multiple basic instantiations, which allows us to demonstrate its robustness and analyze the effects of core factors. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_5",
"text": " Our models can run at about 200ms per frame on a GPU, and training on COCO takes one to two days on a single 8-GPU machine. We believe the fast train and test speeds, together with the framework’s flexibility and accuracy, will benefit and ease future research on instance segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_6",
"text": " Finally, we showcase the generality of our framework via the task of human pose estimation on the COCO keypoint dataset . By viewing each keypoint as a one-hot binary mask, with minimal modification Mask R-CNN can be applied to detect instance-specific poses. Mask R-CNN surpasses the winner of the 2016 COCO keypoint competition, and at the same time runs at 5 fps. Mask R-CNN, therefore, can be seen more broadly as a flexible framework for instance-level recognition and can be readily extended to more complex tasks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_7",
"text": " We have released code to facilitate future research. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_8",
"text": " The Region-based CNN (R-CNN) approach to bounding-box object detection is to attend to a manageable number of candidate object regions (42, 20) and evaluate convolutional networks (25, 24) independently on each RoI. R-CNN was extended (18, 12) to allow attending to RoIs on feature maps using RoIPool, leading to fast speed and better accuracy. Faster R-CNN advanced this stream by learning the attention mechanism with a Region Proposal Network (RPN). Faster R-CNN is flexible and robust to many follow-up improvements (e.g., (38, 27, 21)), and is the current leading framework in several benchmarks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_9",
"text": " Driven by the effectiveness of R-CNN, many approaches to instance segmentation are based on segment proposals. Earlier methods (13, 15, 16, 9) resorted to bottom-up segments (42, 2). DeepMask and following works (34, 8) learn to propose segment candidates, which are then classified by Fast R-CNN. In these methods, segmentation precedes recognition, which is slow and less accurate. Likewise, Dai et al. proposed a complex multiple-stage cascade that predicts segment proposals from bounding-box proposals, followed by classification. Instead, our method is based on parallel prediction of masks and class labels, which is simpler and more flexible. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_10",
"text": " Most recently, Li et al. combined the segment proposal system in and object detection system in for “fully convolutional instance segmentation” (FCIS). The common idea in (8, 11, 26) is to predict a set of position-sensitive output channels fully convolutionally. These channels simultaneously address object classes, boxes, and masks, making the system fast. But FCIS exhibits systematic errors on overlapping instances and creates spurious edges (Figure 6), showing that it is challenged by the fundamental difficulties of segmenting instances. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_11",
"text": " Another family of solutions (23, 4, 3, 29) to instance segmentation are driven by the success of semantic segmentation. Starting from per-pixel classification results (e.g., FCN outputs), these methods attempt to cut the pixels of the same category into different instances. In contrast to the segmentation-first strategy of these methods, Mask R-CNN is based on an instance-first strategy. We expect a deeper incorporation of both strategies will be studied in the future. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_12",
"text": " Mask R-CNN is conceptually simple: Faster R-CNN has two outputs for each candidate object, a class label and a bounding-box offset; to this we add a third branch that outputs the object mask. Mask R-CNN is thus a natural and intuitive idea. But the additional mask output is distinct from the class and box outputs, requiring extraction of much finer spatial layout of an object. Next, we introduce the key elements of Mask R-CNN, including pixel-to-pixel alignment, which is the main missing piece of Fast/Faster R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_13",
"text": " We begin by briefly reviewing the Faster R-CNN detector . Faster R-CNN consists of two stages. The first stage, called a Region Proposal Network (RPN), proposes candidate object bounding boxes. The second stage, which is in essence Fast R-CNN , extracts features using RoIPool from each candidate box and performs classification and bounding-box regression. The features used by both stages can be shared for faster inference. We refer readers to for latest, comprehensive comparisons between Faster R-CNN and other frameworks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_14",
"text": " Mask R-CNN adopts the same two-stage procedure, with an identical first stage (which is RPN). In the second stage, in parallel to predicting the class and box offset, Mask R-CNN also outputs a binary mask for each RoI. This is in contrast to most recent systems, where classification depends on mask predictions (e.g. (33, 10, 26)). Our approach follows the spirit of Fast R-CNN that applies bounding-box classification and regression in parallel (which turned out to largely simplify the multi-stage pipeline of original R-CNN ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_15",
"text": " Formally, during training, we define a multi-task loss on each sampled RoI as L=Lcls+Lbox+Lmask𝐿subscript𝐿𝑐𝑙𝑠subscript𝐿𝑏𝑜𝑥subscript𝐿𝑚𝑎𝑠𝑘L=L_{cls}+L_{box}+L_{mask}. The classification loss Lclssubscript𝐿𝑐𝑙𝑠L_{cls} and bounding-box loss Lboxsubscript𝐿𝑏𝑜𝑥L_{box} are identical as those defined in . The mask branch has a Km2𝐾superscript𝑚2Km^{2}-dimensional output for each RoI, which encodes K𝐾K binary masks of resolution m×m𝑚𝑚m\\times m, one for each of the K𝐾K classes. To this we apply a per-pixel sigmoid, and define Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} as the average binary cross-entropy loss. For an RoI associated with ground-truth class k𝑘k, Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is only defined on the k𝑘k-th mask (other mask outputs do not contribute to the loss). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_16",
"text": " Our definition of Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} allows the network to generate masks for every class without competition among classes; we rely on the dedicated classification branch to predict the class label used to select the output mask. This decouples mask and class prediction. This is different from common practice when applying FCNs to semantic segmentation, which typically uses a per-pixel softmax and a multinomial cross-entropy loss. In that case, masks across classes compete; in our case, with a per-pixel sigmoid and a binary loss, they do not. We show by experiments that this formulation is key for good instance segmentation results. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_17",
"text": " A mask encodes an input object’s spatial layout. Thus, unlike class labels or box offsets that are inevitably collapsed into short output vectors by fully-connected (fc) layers, extracting the spatial structure of masks can be addressed naturally by the pixel-to-pixel correspondence provided by convolutions. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_18",
"text": " Specifically, we predict an m×m𝑚𝑚m\\times m mask from each RoI using an FCN . This allows each layer in the mask branch to maintain the explicit m×m𝑚𝑚m\\times m object spatial layout without collapsing it into a vector representation that lacks spatial dimensions. Unlike previous methods that resort to fc layers for mask prediction (33, 34, 10), our fully convolutional representation requires fewer parameters, and is more accurate as demonstrated by experiments. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_19",
"text": " This pixel-to-pixel behavior requires our RoI features, which themselves are small feature maps, to be well aligned to faithfully preserve the explicit per-pixel spatial correspondence. This motivated us to develop the following RoIAlign layer that plays a key role in mask prediction. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_20",
"text": " RoIPool is a standard operation for extracting a small feature map (e.g., 7×\\times7) from each RoI. RoIPool first quantizes a floating-number RoI to the discrete granularity of the feature map, this quantized RoI is then subdivided into spatial bins which are themselves quantized, and finally feature values covered by each bin are aggregated (usually by max pooling). Quantization is performed, e.g., on a continuous coordinate x𝑥x by computing (x/16)delimited-()𝑥16(x/16), where 16 is a feature map stride and (⋅)delimited-()⋅(\\cdot) is rounding; likewise, quantization is performed when dividing into bins (e.g., 7×\\times7). These quantizations introduce misalignments between the RoI and the extracted features. While this may not impact classification, which is robust to small translations, it has a large negative effect on predicting pixel-accurate masks. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_21",
"text": " To address this, we propose an RoIAlign layer that removes the harsh quantization of RoIPool, properly aligning the extracted features with the input. Our proposed change is simple: we avoid any quantization of the RoI boundaries or bins (i.e., we use x/16𝑥16x/16 instead of (x/16)delimited-()𝑥16(x/16)). We use bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and aggregate the result (using max or average), see Figure 3 for details. We note that the results are not sensitive to the exact sampling locations, or how many points are sampled, as long as no quantization is performed. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_22",
"text": " RoIAlign leads to large improvements as we show in §4.2. We also compare to the RoIWarp operation proposed in . Unlike RoIAlign, RoIWarp overlooked the alignment issue and was implemented in as quantizing RoI just like RoIPool. So even though RoIWarp also adopts bilinear resampling motivated by , it performs on par with RoIPool as shown by experiments (more details in Table 2c), demonstrating the crucial role of alignment. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_23",
"text": " To demonstrate the generality of our approach, we instantiate Mask R-CNN with multiple architectures. For clarity, we differentiate between: (i) the convolutional backbone architecture used for feature extraction over an entire image, and (ii) the network head for bounding-box recognition (classification and regression) and mask prediction that is applied separately to each RoI. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_24",
"text": " We denote the backbone architecture using the nomenclature network-depth-features. We evaluate ResNet and ResNeXt networks of depth 50 or 101 layers. The original implementation of Faster R-CNN with ResNets extracted features from the final convolutional layer of the 4-th stage, which we call C4. This backbone with ResNet-50, for example, is denoted by ResNet-50-C4. This is a common choice used in (19, 10, 21, 39). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_25",
"text": " We also explore another more effective backbone recently proposed by Lin et al. , called a Feature Pyramid Network (FPN). FPN uses a top-down architecture with lateral connections to build an in-network feature pyramid from a single-scale input. Faster R-CNN with an FPN backbone extracts RoI features from different levels of the feature pyramid according to their scale, but otherwise the rest of the approach is similar to vanilla ResNet. Using a ResNet-FPN backbone for feature extraction with Mask R-CNN gives excellent gains in both accuracy and speed. For further details on FPN, we refer readers to . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_26",
"text": " For the network head we closely follow architectures presented in previous work to which we add a fully convolutional mask prediction branch. Specifically, we extend the Faster R-CNN box heads from the ResNet and FPN papers. Details are shown in Figure 4. The head on the ResNet-C4 backbone includes the 5-th stage of ResNet (namely, the 9-layer ‘res5’ ), which is compute-intensive. For FPN, the backbone already includes res5 and thus allows for a more efficient head that uses fewer filters. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_27",
"text": " We note that our mask branches have a straightforward structure. More complex designs have the potential to improve performance but are not the focus of this work. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_28",
"text": " We set hyper-parameters following existing Fast/Faster R-CNN work (12, 36, 27). Although these decisions were made for object detection in original papers (12, 36, 27), we found our instance segmentation system is robust to them. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_29",
"text": " As in Fast R-CNN, an RoI is considered positive if it has IoU with a ground-truth box of at least 0.5 and negative otherwise. The mask loss Lmasksubscript𝐿𝑚𝑎𝑠𝑘L_{mask} is defined only on positive RoIs. The mask target is the intersection between an RoI and its associated ground-truth mask. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_30",
"text": " We adopt image-centric training . Images are resized such that their scale (shorter edge) is 800 pixels . Each mini-batch has 2 images per GPU and each image has N𝑁N sampled RoIs, with a ratio of 1:3 of positive to negatives . N𝑁N is 64 for the C4 backbone (as in (12, 36)) and 512 for FPN (as in ). We train on 8 GPUs (so effective mini-batch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt , we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_31",
"text": " The RPN anchors span 5 scales and 3 aspect ratios, following . For convenient ablation, RPN is trained separately and does not share features with Mask R-CNN, unless specified. For every entry in this paper, RPN and Mask R-CNN have the same backbones and so they are shareable. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_32",
"text": " At test time, the proposal number is 300 for the C4 backbone (as in ) and 1000 for FPN (as in ). We run the box prediction branch on these proposals, followed by non-maximum suppression . The mask branch is then applied to the highest scoring 100 detection boxes. Although this differs from the parallel computation used in training, it speeds up inference and improves accuracy (due to the use of fewer, more accurate RoIs). The mask branch can predict K𝐾K masks per RoI, but we only use the k𝑘k-th mask, where k𝑘k is the predicted class by the classification branch. The m𝑚m×\\timesm𝑚m floating-number mask output is then resized to the RoI size, and binarized at a threshold of 0.5. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_33",
"text": " Note that since we only compute masks on the top 100 detection boxes, Mask R-CNN adds a small overhead to its Faster R-CNN counterpart (e.g., ∼similar-to\\scriptstyle\\sim20% on typical models). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_34",
"text": " We perform a thorough comparison of Mask R-CNN to the state of the art along with comprehensive ablations on the COCO dataset . We report the standard COCO metrics including AP (averaged over IoU thresholds), AP50, AP75, and APS, APM, APL (AP at different scales). Unless noted, AP is evaluating using mask IoU. As in previous work (5, 27), we train using the union of 80k train images and a 35k subset of val images (trainval35k), and report ablations on the remaining 5k val images (minival). We also report results on test-dev . ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_35",
"text": " We compare Mask R-CNN to the state-of-the-art methods in instance segmentation in Table 1. All instantiations of our model outperform baseline variants of previous state-of-the-art models. This includes MNC and FCIS , the winners of the COCO 2015 and 2016 segmentation challenges, respectively. Without bells and whistles, Mask R-CNN with ResNet-101-FPN backbone outperforms FCIS+++ , which includes multi-scale train/test, horizontal flip test, and online hard example mining (OHEM) . While outside the scope of this work, we expect many such improvements to be applicable to ours. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_36",
"text": " Mask R-CNN outputs are visualized in Figures 2 and 5. Mask R-CNN achieves good results even under challenging conditions. In Figure 6 we compare our Mask R-CNN baseline and FCIS+++ . FCIS+++ exhibits systematic artifacts on overlapping instances, suggesting that it is challenged by the fundamental difficulty of instance segmentation. Mask R-CNN shows no such artifacts. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_37",
"text": " We run a number of ablations to analyze Mask R-CNN. Results are shown in Table 2 and discussed in detail next. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_38",
"text": " Table 2a shows Mask R-CNN with various backbones. It benefits from deeper networks (50 vs. 101) and advanced designs including FPN and ResNeXt. We note that not all frameworks automatically benefit from deeper or advanced networks (see benchmarking in ). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_39",
"text": " Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss). In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN ). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points). This suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_40",
"text": " Our default instantiation predicts class-specific masks, i.e., one m𝑚m×\\timesm𝑚m mask per class. Interestingly, Mask R-CNN with class-agnostic masks (i.e., predicting a single m𝑚m×\\timesm𝑚m output regardless of class) is nearly as effective: it has 29.7 mask AP vs. 30.3 for the class-specific counterpart on ResNet-50-C4. This further highlights the division of labor in our approach which largely decouples classification and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_41",
"text": " An evaluation of our proposed RoIAlign layer is shown in Table 2c. For this experiment we use the ResNet-50-C4 backbone, which has stride 16. RoIAlign improves AP by about 3 points over RoIPool, with much of the gain coming at high IoU (AP75). RoIAlign is insensitive to max/average pool; we use average in the rest of the paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_42",
"text": " Additionally, we compare with RoIWarp proposed in MNC that also adopt bilinear sampling. As discussed in §3, RoIWarp still quantizes the RoI, losing alignment with the input. As can be seen in Table 2c, RoIWarp performs on par with RoIPool and much worse than RoIAlign. This highlights that proper alignment is key. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_43",
"text": " We also evaluate RoIAlign with a ResNet-50-C5 backbone, which has an even larger stride of 32 pixels. We use the same head as in Figure 4 (right), as the res5 head is not applicable. Table 2d shows that RoIAlign improves mask AP by a massive 7.3 points, and mask AP75 by 10.5 points (50% relative improvement). Moreover, we note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_44",
"text": " Finally, RoIAlign shows a gain of 1.5 mask AP and 0.5 box AP when used with FPN, which has finer multi-level strides. For keypoint detection that requires finer alignment, RoIAlign shows large gains even with FPN (Table 6). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_45",
"text": " Segmentation is a pixel-to-pixel task and we exploit the spatial layout of masks by using an FCN. In Table 2e, we compare multi-layer perceptrons (MLP) and FCNs, using a ResNet-50-FPN backbone. Using FCNs gives a 2.1 mask AP gain over MLPs. We note that we choose this backbone so that the conv layers of the FCN head are not pre-trained, for a fair comparison with MLP. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_46",
"text": " We compare Mask R-CNN to the state-of-the-art COCO bounding-box object detection in Table 3. For this result, even though the full Mask R-CNN model is trained, only the classification and box outputs are used at inference (the mask output is ignored). Mask R-CNN using ResNet-101-FPN outperforms the base variants of all previous state-of-the-art models, including the single-model variant of G-RMI , the winner of the COCO 2016 Detection Challenge. Using ResNeXt-101-FPN, Mask R-CNN further improves results, with a margin of 3.0 points box AP over the best previous single model entry from (which used Inception-ResNet-v2-TDM). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_47",
"text": " As a further comparison, we trained a version of Mask R-CNN but without the mask branch, denoted by “Faster R-CNN, RoIAlign” in Table 3. This model performs better than the model presented in due to RoIAlign. On the other hand, it is 0.9 points box AP lower than Mask R-CNN. This gap of Mask R-CNN on box detection is therefore due solely to the benefits of multi-task training. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_48",
"text": " Lastly, we note that Mask R-CNN attains a small gap between its mask and box AP: e.g., 2.7 points between 37.1 (mask, Table 1) and 39.8 (box, Table 3). This indicates that our approach largely closes the gap between object detection and the more challenging instance segmentation task. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_49",
"text": " We train a ResNet-101-FPN model that shares features between the RPN and Mask R-CNN stages, following the 4-step training of Faster R-CNN . This model runs at 195ms per image on an Nvidia Tesla M40 GPU (plus 15ms CPU time resizing the outputs to the original resolution), and achieves statistically the same mask AP as the unshared one. We also report that the ResNet-101-C4 variant takes ∼similar-to\\scriptstyle\\sim400ms as it has a heavier box head (Figure 4), so we do not recommend using the C4 variant in practice. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_50",
"text": " Although Mask R-CNN is fast, we note that our design is not optimized for speed, and better speed/accuracy trade-offs could be achieved , e.g., by varying image sizes and proposal numbers, which is beyond the scope of this paper. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_51",
"text": " Mask R-CNN is also fast to train. Training with ResNet-50-FPN on COCO trainval35k takes 32 hours in our synchronized 8-GPU implementation (0.72s per 16-image mini-batch), and 44 hours with ResNet-101-FPN. In fact, fast prototyping can be completed in less than one day when training on the train set. We hope such rapid training will remove a major hurdle in this area and encourage more people to perform research on this challenging topic. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_52",
"text": " Our framework can easily be extended to human pose estimation. We model a keypoint’s location as a one-hot mask, and adopt Mask R-CNN to predict K𝐾K masks, one for each of K𝐾K keypoint types (e.g., left shoulder, right elbow). This task helps demonstrate the flexibility of Mask R-CNN. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_53",
"text": " We note that minimal domain knowledge for human pose is exploited by our system, as the experiments are mainly to demonstrate the generality of the Mask R-CNN framework. We expect that domain knowledge (e.g., modeling structures ) will be complementary to our simple approach. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_54",
"text": " We make minor modifications to the segmentation system when adapting it for keypoints. For each of the K𝐾K keypoints of an instance, the training target is a one-hot m×m𝑚𝑚m\\times m binary mask where only a single pixel is labeled as foreground. During training, for each visible ground-truth keypoint, we minimize the cross-entropy loss over an m2superscript𝑚2m^{2}-way softmax output (which encourages a single point to be detected). We note that as in instance segmentation, the K𝐾K keypoints are still treated independently. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_55",
"text": " We adopt the ResNet-FPN variant, and the keypoint head architecture is similar to that in Figure 4 (right). The keypoint head consists of a stack of eight 3×\\times3 512-d conv layers, followed by a deconv layer and 2×\\times bilinear upscaling, producing an output resolution of 56×\\times56. We found that a relatively high resolution output (compared to masks) is required for keypoint-level localization accuracy. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_56",
"text": " Models are trained on all COCO trainval35k images that contain annotated keypoints. To reduce overfitting, as this training set is smaller, we train using image scales randomly sampled from (640, 800) pixels; inference is on a single scale of 800 pixels. We train for 90k iterations, starting from a learning rate of 0.02 and reducing it by 10 at 60k and 80k iterations. We use bounding-box NMS with a threshold of 0.5. Other details are identical as in §3.1. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_57",
"text": " We evaluate the person keypoint AP (APkpkp{}^{\\text{kp}}) and experiment with a ResNet-50-FPN backbone; more backbones will be studied in the appendix. Table 4 shows that our result (62.7 APkpkp{}^{\\text{kp}}) is 0.9 points higher than the COCO 2016 keypoint detection winner that uses a multi-stage processing pipeline (see caption of Table 4). Our method is considerably simpler and faster. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_58",
"text": " More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps. Adding a segment branch (for the person category) improves the APkpkp{}^{\\text{kp}} to 63.1 (Table 4) on test-dev. More ablations of multi-task learning on minival are in Table 5. Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks. However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multitask training, it does not in turn help the other tasks. Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 7). ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_59",
"text": " We also investigate the effect of RoIAlign on keypoint detection (Table 6). Though this ResNet-50-FPN backbone has finer strides (e.g., 4 pixels on the finest level), RoIAlign still shows significant improvement over RoIPool and increases APkpkp{}^{\\text{kp}} by 4.4 points. This is because keypoint detections are more sensitive to localization accuracy. This again indicates that alignment is essential for pixel-level localization, including masks and keypoints. ",
"title": "Mask R-CNN"
},
{
"id": "1703.06870_all_60",
"text": " Given the effectiveness of Mask R-CNN for extracting object bounding boxes, masks, and keypoints, we expect it be an effective framework for other instance-level tasks. ",
"title": "Mask R-CNN"
}
] |
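The Mask R-CNN passages above explain that the mask head outputs K masks per RoI, applies a per-pixel sigmoid, and computes a binary cross-entropy loss only on the mask of the ground-truth class, which is what lets class-agnostic masks come close to class-specific ones. A minimal NumPy sketch of that decoupled loss is given below; the function name, tensor shapes, and the epsilon constant are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mask_loss_sketch(mask_logits, gt_masks, gt_classes, eps=1e-7):
    """Decoupled per-class mask loss (illustrative sketch of L_mask).

    mask_logits : (R, K, m, m) raw outputs, one m x m mask per class for each positive RoI.
    gt_masks    : (R, m, m) binary ground-truth masks for those RoIs.
    gt_classes  : (R,) ground-truth class index k for each RoI.
    Only the k-th predicted mask of each RoI contributes; the other K-1 masks are
    ignored, so classes do not compete at the pixel level.
    """
    rows = np.arange(mask_logits.shape[0])
    logits = mask_logits[rows, gt_classes]          # select the k-th mask per RoI
    probs = 1.0 / (1.0 + np.exp(-logits))           # per-pixel sigmoid
    bce = -(gt_masks * np.log(probs + eps) + (1.0 - gt_masks) * np.log(1.0 - probs + eps))
    return bce.mean()                               # average binary cross-entropy
```

A class-agnostic variant would use K = 1 and skip the per-class selection; per the ablation quoted above, that costs only about 0.6 mask AP (29.7 vs. 30.3).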
What if we introduce zero-shot generation for synthetic query generation by using large-scale generative language models such as GPT-3, to remove the assumption that training datasets exist even for the general domain? Would this too generate quality queries?
|
It is unclear whether introducing zero-shot generation for synthetic query generation, using large-scale generative language models such as GPT-3 to remove the assumption that training datasets exist even for the general domain, would still generate quality queries [9].
|
[
9
] |
[
{
"id": "2004.14503_all_0",
"text": " Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks Guo et al. (2016); Pang et al. (2016); Hui et al. (2017); Dai et al. (2018); Gillick et al. (2018); Nogueira and Cho (2019a); MacAvaney et al. (2019); Yang et al. (2019a, b, c). Most neural passage retrieval systems are, in fact, two stages Zamani et al. (2018); Yilmaz et al. (2019), illustrated in Figure 1. The first is a true retrieval model (aka first-stage retrieval111Also called open domain retrieval.) that takes a question and retrieves a set of candidate passages from a large collection of documents. This stage itself is rarely a neural model and most commonly is an term-based retrieval model such as BM25 Robertson et al. (2004); Yang et al. (2017), though there is recent work on neural models Zamani et al. (2018); Dai and Callan (2019); Chang et al. (2020); Karpukhin et al. (2020); Luan et al. (2020). This is usually due to the computational costs required to dynamically score large-scale collections. Another consideration is that BM25 is often high quality Lin (2019). After first-stage retrieval, the second stage uses a neural model to rescore the filtered set of passages. Since the size of the filtered set is small, this is feasible. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_1",
"text": " The focus of the present work is methods for building neural models for first-stage passage retrieval for large collections of documents. While rescoring models are key components to any retrieval system, they are out of the scope of this study. Specifically, we study the zero-shot setting where there is no target-domain supervised training data Xian et al. (2018). This is a common situation, examples of which include enterprise or personal search environments Hawking (2004); Chirita et al. (2005), but generally any specialized domain. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_2",
"text": " The zero-shot setting is challenging as the most effective neural models have a large number of parameters, which makes them prone to overfitting. Thus, a key factor in training high quality neural models is the availability of large training sets. To address this, we propose two techniques to improve neural retrieval models in the zero-shot setting. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_3",
"text": " First, we observe that general-domain question-passage pairs can be acquired from community platforms Shah and Pomerantz (2010); Duan et al. (2017) or high quality academic datasets that are publicly available Kwiatkowski et al. (2019); Bajaj et al. (2016). Such resources have been used to create open domain QA passage retrieval models. However, as shown in Guo et al. (2020) and in our later experiments, neural retrieval models trained on the general domain data often do not transfer well, especially for specialized domains. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_4",
"text": " Towards zero-shot neural retrieval with improved domain adaptability, we propose a data augmentation approach Wong et al. (2016) that leverages these naturally occurring question/answer pairs to train a generative model that synthesizes questions given a text Zhou et al. (2017). We apply this model to passages in the target domain to generate unlimited pairs of synthetic questions and target-domain passages. This data can then be used for training. This technique is outlined in Figure 2. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_5",
"text": " A second contribution is a simple hybrid model that interpolates a traditional term-based model – BM25 Robertson et al. (1995) – with our zero-shot neural model. BM25 is also zero-shot, as its parameters do not require supervised training. Instead of using inverted index which is commonly used in term-based search, we exploit the fact that BM25 and neural models can be cast as vector similarity (see Section 4.4) and thus nearest neighbour search can be used for retrieval Liu et al. (2011); Johnson et al. (2017). The hybrid model takes the advantage of both the term matching and semantic matching. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_6",
"text": " We compare a number of baselines including other data augmentation and domain transfer techniques. We show on three specialized domains (scientific literature, travel and tech forums) and one general domain that the question generation approach is effective, especially when considering the hybrid model. Finally, for passage retrieval in the scientific domain, we compare with a number of recent supervised models from the BioASQ challenge, including many with rescoring stages. Interestingly, the quality of the zero-shot hybrid model approaches supervised alternatives. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_7",
"text": " The retrieval vs. rescorer distinction (Figure 1) often dictates modelling choices for each task. For first-stage retrieval, as mentioned earlier, term-based models that compile document collections into inverted indexes are most common since they allow for efficient lookup Robertson et al. (2004); Yang et al. (2017). However, there are studies that investigate neural first-stage retrieval. A common technique is to learn the term weights to be used in an inverted index Zamani et al. (2018); Dai and Callan (2019, 2020). Another technique is representation-based models that embed questions and passages into a common dense subspace Palangi et al. (2016) and use nearest neighbour search for retrieval Liu et al. (2011); Johnson et al. (2017). Recent work has shown this can be effective for passage scoring Chang et al. (2020); Karpukhin et al. (2020); MacAvaney et al. (2020). Though all of the aforementioned first-stage neural models assume supervised data for fine-tuning. For rescoring, scoring a small set of passages permits computationally intense models. These are often called interaction-based, one-tower or cross-attention models and numerous techniques have been developed Guo et al. (2016); Hui et al. (2017); Xiong et al. (2017); Dai et al. (2018); McDonald et al. (2018), many of which employ pre-trained contextualized models Nogueira and Cho (2019a); MacAvaney et al. (2019); Yang et al. (2019a, b). Khattab and Zaharia (2020) also showed that by delaying interaction to the last layer, one can build a first stage retrieval model which also leverages the modeling capacity of an interaction based models. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_8",
"text": " Previous work has attempted to alleviate reliance on large supervised training sets by pre-training deep retrieval models on weakly supervised data such as click-logs Borisov et al. (2016); Dehghani et al. (2017). Recently, Yilmaz et al. (2019) has shown that training models on general-domain corpora adapts well to new domains without targeted supervision. Another common technique for adaptation to specialized domains is to learn cross-domain representations Cohen et al. (2018); Tran et al. (2019). Our work is more aligned with methods like Yilmaz et al. (2019) which use general domain resources to build neural models for new domains, though via different techniques – data augmentation vs. model transfer. Our experiments show that data augmentation compares favorably a model transfer baseline. For specialized domains, recently, there have been a number of studies using cross-domain transfer and other techniques for biomedical passage retrieval via the TREC-COVID challenge222ir.nist.gov/covidSubmit/333ir.nist.gov/covidSubmit/archive.html that uses the CORD-19 collection Wang et al. (2020). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_9",
"text": " Question generation for data augmentation is a common tool, but has not been tested in the pure zero-shot setting nor for neural passage retrieval. Duan et al. (2017) use community QA as a data source, as we do, to train question generators. The generated question-passage pairs are not used to train a neural model, but QA is instead done via question-question similarity. Furthermore, they do not test on specialized domains. Alberti et al. (2019) show that augmenting supervised training resources with synthetic question-answer pairs can lead to improvements. Nogueira et al. (2019) employed query generation in the context of first-stage retrieval. In that study, the generated queries were used to augment documents to improve BM25 keyword search. Here we focus on using synthetic queries to train the neural retrieval models. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_10",
"text": " Combining neural and term-based models have been studied, most commonly via linearly interpolating scores in an approximate re-ranking stage Karpukhin et al. (2020); Luan et al. (2020) or through the final layer of a rescoring network Severyn et al. (2015); McDonald et al. (2018). Since rescoring can be cast as classification, blending signals is straight-forward. However, this is approximate as it does not operate over the whole collection. For first-stage retrieval, the most common method is to learn term weights for a standard inverted index in order to make search efficient Zamani et al. (2018); Dai and Callan (2019). Here we propose a first-stage retrieval model that incorporates both term-based (sparse) and neural-based (dense) representations in a hybrid model that uses nearest neighbor search for exact inference Liu et al. (2011); Johnson et al. (2017); Wu et al. (2019). Similar methods using approximate nearest neighbour search have been investigated by Seo et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_11",
"text": " In this work, we are specifically investigating the zero-shot scenario where there exists neither user issued questions nor domain specific data except the passage collection itself. We propose to address the training data scarcity issue by generating synthetic questions Zhou et al. (2017); Duan et al. (2017); Alberti et al. (2019); Nogueira et al. (2019). Leverage the fact that there are large question-answer data sources freely available from the web Shah and Pomerantz (2010); Duan et al. (2017). we first train a question generator using general domain question-answer pairs. The passage collection of a target domain is then fed into this generator to create pairs of noisy question-passage pairs, which are used to train a retrieval model (see Figure 2). In this work, we mine English question-answer pairs from community resources, primarily StackExchange444archive.org/details/stackexchange and Yahoo! Answers555webscope.sandbox.yahoo.com/catalog.php?datatype=l. Note we use stackexchange as it covers a wide range of topics, and we focus on investigating the domain adaptability of using a question generation approach. We leave comparing question generator trained on different datasets or using different architectures to future work. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_12",
"text": " To ensure data quality, we further filter the data by only keeping question-answer pairs that were positively rated by at least one user on these sites. In total, the final dataset contains 2 millions pairs, and the average length of questions and answers are 12 tokens and 155 tokens respectively. This dataset is general domain in that it contains question-answer pairs from a wide variety of topics. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_13",
"text": " Our question generator is an encoder-decoder with Transformer Vaswani et al. (2017) layers, which is a common for generation tasks such as translation and summarization Vaswani et al. (2017); Rothe et al. (2019). The encoder is trained to build a representation for a text and the decoder generates a question for which that text is a plausible answer. Appendix B has model specifics. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_14",
"text": " Our approach is robust to domain shift as the generator is trained to create questions based on a given text. As a result, generated questions stay close to the source passage material. Real examples are shown in Table 1 for technical and biomedical domains, highlighting the model’s adaptability. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_15",
"text": " In this section we describe our architecture for training a first-stage neural passage retriever. Our retrieval model belongs to the family of relevance-based dense retrieval 666A.k.a. two-tower, dual encoder or dense retrieval. that encodes pairs of items in dense subspaces Palangi et al. (2016). Let Q=(q1,…qn)𝑄subscript𝑞1…subscript𝑞𝑛Q=(q_{1},\\ldots q_{n}) and P=(p1,…,pm)𝑃subscript𝑝1…subscript𝑝𝑚P=(p_{1},\\ldots,p_{m}) be a question and passage of n𝑛n and m𝑚m tokens respectively. Our model consists of two encoders, {fQ(),fP()}subscript𝑓𝑄subscript𝑓𝑃\\{f_{Q}(),f_{P}()\\} and a similarity function, sim()sim\\text{sim}(). An encoder is a function f𝑓f that takes an item x𝑥x as input and outputs a real valued vector as the encoding, The similarity function, sim()sim\\text{sim}(), takes two encodings, q,p∈ℝNqpsuperscriptℝ𝑁\\textbf{q},\\textbf{p}\\in\\mathbb{R}^{N} and calculates a real valued score, s=sim(q,p)𝑠simqps=\\text{sim}(\\textbf{q},\\textbf{p}). For passage retrieval, the two encoders are responsible for computing dense vector representation of questions and passages. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_16",
"text": " In this work, both query and document encoders are based on BERT Devlin et al. (2019), which has been shown to lead to large performance gains across a number of tasks, including document ranking Nogueira and Cho (2019a); MacAvaney et al. (2019); Yang et al. (2019b). In addition, we share parameters between the query and passage encoder – i.e., fQ=fPsubscript𝑓𝑄subscript𝑓𝑃f_{Q}=f_{P}, so called Siamese networks – as we found this greatly increased performance while reducing parameters. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_17",
"text": " We encode P𝑃P as (CLS, p1,…,pm, SEP)CLS, subscript𝑝1…subscript𝑝𝑚 SEP(\\text{CLS, }p_{1},\\ldots,p_{m},\\text{ SEP}). For some datasets, a passage contains both a title T=(t1,…,tl)𝑇subscript𝑡1…subscript𝑡𝑙T=(t_{1},...,t_{l}) and content C=(c1,…,co)𝐶subscript𝑐1…subscript𝑐𝑜C=(c_{1},...,c_{o}), in which case we encode the passage as (CLS, t1,…,tl,SEP,c1,…,co, SEP)CLS, subscript𝑡1…subscript𝑡𝑙SEPsubscript𝑐1…subscript𝑐𝑜 SEP(\\text{CLS, }t_{1},...,t_{l},\\text{SEP},c_{1},...,c_{o},\\text{ SEP}). These sequences are fed to the BERT encoder. Let hCLS∈ℝNsubscriptℎCLSsuperscriptℝ𝑁h_{\\text{CLS}}\\in\\mathbb{R}^{N} be the final representation of the “CLS” token. Passage encodings p are computed by applying a linear projection, i.e., p=W∗hCLSpWsubscriptℎCLS\\textbf{p}=\\textbf{W}*h_{\\text{CLS}}, where W is a N×N𝑁𝑁N\\times N weight matrix (thus N=768𝑁768N=768), which preserves the original size of hCLSsubscriptℎCLSh_{\\text{CLS}}. This has been shown to perform better than down-projecting to a lower dimensional vector Luan et al. (2020), especially for long passages. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_18",
"text": " We encode Q𝑄Q as (CLS, q1,q2,…,qn, SEP)CLS, subscript𝑞1subscript𝑞2…subscript𝑞𝑛 SEP(\\text{CLS, }q_{1},q_{2},...,q_{n},\\text{ SEP}) which is then fed to the BERT encoder. Similarly, a linear projection on the corresponding “CLS” token, using the same weight matrix W, is applied to generate q. Following previous work Luan et al. (2020); Lee et al. (2019b), we use dot product as the similarity function, i.e., sim(q,p)=⟨q,p⟩=q⊺psimqpqpsuperscriptq⊺p\\text{sim}(\\textbf{q},\\textbf{p})=\\langle\\textbf{q},\\textbf{p}\\rangle=\\textbf{q}^{\\intercal}\\textbf{p}. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
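A hedged PyTorch sketch of the shared (Siamese) encoder with the CLS-token linear projection described above; it assumes a Hugging Face-style BERT module whose output exposes `last_hidden_state`, and the class name `SharedEncoder` is ours, not the authors'.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Siamese encoder: the same BERT body and the same N x N projection W
    are used for both questions and passages (f_Q = f_P)."""
    def __init__(self, bert_body, hidden_size=768):
        super().__init__()
        self.bert = bert_body  # any module returning (B, L, N) hidden states
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)  # W, keeps N = 768

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h_cls = hidden[:, 0]          # final representation of the CLS token
        return self.proj(h_cls)       # q or p in R^768

def score(q_enc, p_enc):
    # Dot-product similarity between aligned batches of question/passage encodings.
    return (q_enc * p_enc).sum(dim=-1)
```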
{
"id": "2004.14503_all_19",
"text": " The top half of Figure 3 illustrates the model. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_20",
"text": " For training, we adopt softmax cross-entropy loss. Formally, given an instance {q,p+,p1−,…,pk−}qsuperscriptpsuperscriptsubscriptp1…superscriptsubscriptp𝑘\\{\\textbf{q},\\textbf{p}^{+},\\textbf{p}_{1}^{-},...,\\textbf{p}_{k}^{-}\\} which comprises one query q, one relevant passage p+superscriptp\\textbf{p}^{+} and k𝑘k non-relevant passages pi−superscriptsubscriptp𝑖\\textbf{p}_{i}^{-}. The objective is to minimize the negative log-likelihood: L(q,p+,p1−,…,pk−)=log(e⟨q,q+⟩+∑i=1ke⟨q,qi−⟩)−⟨q,q+⟩𝐿qsuperscriptpsuperscriptsubscriptp1…superscriptsubscriptp𝑘superscript𝑒qsuperscriptqsuperscriptsubscript𝑖1𝑘superscript𝑒qsuperscriptsubscriptq𝑖qsuperscriptqL(\\textbf{q},\\textbf{p}^{+},\\textbf{p}_{1}^{-},...,\\textbf{p}_{k}^{-})=\\\\ \\log(e^{\\langle\\textbf{q},\\textbf{q}^{+}\\rangle}+\\sum_{i=1}^{k}{e^{\\langle\\textbf{q},\\textbf{q}_{i}^{-}\\rangle}})-\\langle\\textbf{q},\\textbf{q}^{+}\\rangle (1) This loss function is a special case of ListNet loss Cao et al. (2007) where all relevance judgements are binary, and only one passage is marked relevant for each training example. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_21",
"text": " For the set {p1−,…,pk−}superscriptsubscriptp1…superscriptsubscriptp𝑘\\{\\textbf{p}_{1}^{-},...,\\textbf{p}_{k}^{-}\\}, we use in-batch negatives. Given a batch of (query, relevant-passage) pairs, negative passages for a query are passages from different pairs in the batch. In-batch negatives has been widely adopted as it enables efficient training via computation sharing Yih et al. (2011); Gillick et al. (2018); Karpukhin et al. (2020). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
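A minimal PyTorch sketch of the in-batch-negative softmax cross-entropy objective from Eq. (1); the function name is hypothetical, but the reduction is the standard one where each row's diagonal entry is the positive passage.

```python
import torch
import torch.nn.functional as F

def in_batch_softmax_loss(q_enc, p_enc):
    """q_enc, p_enc: (B, N) encodings of aligned (query, relevant-passage) pairs.
    Passages from the other rows of the batch act as the k non-relevant passages."""
    scores = q_enc @ p_enc.t()                         # (B, B) dot-product similarities
    targets = torch.arange(q_enc.size(0), device=q_enc.device)
    return F.cross_entropy(scores, targets)            # Eq. (1), averaged over the batch
```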
{
"id": "2004.14503_all_22",
"text": " Since the relevance-based model encodes questions and passages independently, we run the encoder over every passage in a collection offline to create a distributed lookup-table as a backend. At inference, we run the question encoder online and then perform nearest neighbor search to find relevant passages, as illustrated in the bottom half of Figure 3. While there has been extensive work in fast approximate nearest neighbour retrieval for dense representations Liu et al. (2011); Johnson et al. (2017), we simply use distributed brute-force search as our passage collections are at most in the millions, resulting in exact retrieval. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
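A small NumPy sketch of the brute-force (exact) dot-product search described above; `top_k` and the function name are illustrative choices.

```python
import numpy as np

def brute_force_search(query_vec, passage_matrix, top_k=10):
    # Exact retrieval: dot product against every passage encoding in the collection.
    scores = passage_matrix @ query_vec                 # (num_passages,)
    top = np.argpartition(-scores, top_k)[:top_k]       # unordered top_k candidates
    return top[np.argsort(-scores[top])]                # candidate indices sorted by score
```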
{
"id": "2004.14503_all_23",
"text": " Traditional term-based methods like BM25 Robertson et al. (1995) are powerful zero-shot models and can outperform supervised neural models in many cases Lin (2019). Rescoring systems have shown that integrating BM25 into a neural model improves performance McDonald et al. (2018). However, for first-stage retrieval most work focuses on approximations via re-ranking Karpukhin et al. (2020); Luan et al. (2020). Here we present a technique for exact hybrid first-stage retrieval without the need for a re-ranking stage. Our method is motivated by the work of Seo et al. (2019) for sparse-dense QA. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_24",
"text": " For a query Q𝑄Q and a passage P𝑃P, BM25 is computed as the following similarity score, BM25(Q,P)=∑i=1nIDF(qi)∗cnt(qi∈P)∗(k+1)cnt(qi∈P)+k∗(1−b+b∗mmavg),BM25𝑄𝑃superscriptsubscript𝑖1𝑛IDFsubscript𝑞𝑖cntsubscript𝑞𝑖𝑃𝑘1cntsubscript𝑞𝑖𝑃𝑘1𝑏𝑏𝑚subscript𝑚avg\\text{BM25}(Q,P)=\\\\ \\sum_{i=1}^{n}\\frac{\\text{IDF}(q_{i})*\\text{cnt}(q_{i}\\in P)*(k+1)}{\\text{cnt}(q_{i}\\in P)+k*(1-b+b*\\frac{m}{m_{\\text{avg}}})}, (2) where k𝑘k/b𝑏b are BM25 hyperparameters, IDF is the term’s inverse document frequency from the corpus, cnt is the term’s frequency in a passage, n𝑛n/m𝑚m are the number of tokens in Q𝑄Q/P𝑃P, and mavgsubscript𝑚avgm_{\\text{avg}} is the collection’s average passage length. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
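A toy implementation of the BM25 score in Eq. (2); the default values k=1.2 and b=0.75 are common choices, not necessarily those used in the paper.

```python
from collections import Counter

def bm25_score(query_terms, passage_terms, idf, avg_len, k=1.2, b=0.75):
    """idf maps term -> inverse document frequency over the collection;
    avg_len is the collection's average passage length (m_avg in Eq. 2)."""
    counts = Counter(passage_terms)
    m = len(passage_terms)
    score = 0.0
    for q in query_terms:
        tf = counts.get(q, 0)                              # cnt(q in P)
        denom = tf + k * (1 - b + b * m / avg_len)
        score += idf.get(q, 0.0) * tf * (k + 1) / denom
    return score
```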
{
"id": "2004.14503_all_25",
"text": " Like most TF-IDF models, this can be written as a vector space model. Specifically, let qbm25∈(0,1)|V|superscriptqbm25superscript01𝑉\\textbf{q}^{\\text{bm25}}\\in(0,1)^{|V|} be a sparse binary encoding of a query of dimension |V|𝑉|V|, where V𝑉V is the term vocabulary. Specifically this vector is 1 at position i𝑖i if vi∈Qsubscript𝑣𝑖𝑄v_{i}\\in Q, here visubscript𝑣𝑖v_{i} is the i𝑖i-th entry in V𝑉V. Furthermore, let pbm25∈ℝ|V|superscriptpbm25superscriptℝ𝑉\\textbf{p}^{\\text{bm25}}\\in\\mathbb{R}^{|V|} be a sparse real-valued vector where, pibm25=IDF(vi)∗cnt(vi∈P)∗(k+1)cnt(vi∈P)+k∗(1−b+b∗mmavg)superscriptsubscriptp𝑖bm25IDFsubscript𝑣𝑖cntsubscript𝑣𝑖𝑃𝑘1cntsubscript𝑣𝑖𝑃𝑘1𝑏𝑏𝑚subscript𝑚avg\\textbf{p}_{i}^{\\text{bm25}}=\\frac{\\text{IDF}(v_{i})*\\text{cnt}(v_{i}\\in P)*(k+1)}{\\text{cnt}(v_{i}\\in P)+k*(1-b+b*\\frac{m}{m_{\\text{avg}}})} (3) We can see that, BM25(Q,P)=⟨qbm25,pbm25⟩BM25𝑄𝑃superscriptqbm25superscriptpbm25\\text{BM25}(Q,P)=\\langle\\textbf{q}^{\\text{bm25}},\\textbf{p}^{\\text{bm25}}\\rangle As BM25 score can be written as vector dot-product, this gives rise to a simple hybrid model, sim(qhyb,phyb)simsuperscriptqhybsuperscriptphyb\\displaystyle\\text{sim}(\\textbf{q}^{\\text{hyb}},\\textbf{p}^{\\text{hyb}}) =⟨qhyb,phyb⟩absentsuperscriptqhybsuperscriptphyb\\displaystyle=\\langle\\textbf{q}^{\\text{hyb}},\\textbf{p}^{\\text{hyb}}\\rangle =⟨(λqbm25,qnn),(pbm25,pnn)⟩absent𝜆superscriptqbm25superscriptqnnsuperscriptpbm25superscriptpnn\\displaystyle=\\langle(\\lambda\\textbf{q}^{\\text{bm25}},\\textbf{q}^{\\text{nn}}),(\\textbf{p}^{\\text{bm25}},\\textbf{p}^{\\text{nn}})\\rangle =λ⟨qbm25,pbm25⟩+⟨qnn,pnn⟩,absent𝜆superscriptqbm25superscriptpbm25superscriptqnnsuperscriptpnn\\displaystyle=\\lambda\\langle\\textbf{q}^{\\text{bm25}},\\textbf{p}^{\\text{bm25}}\\rangle+\\langle\\textbf{q}^{\\text{nn}},\\textbf{p}^{\\text{nn}}\\rangle, where qhybsuperscriptqhyb\\textbf{q}^{\\text{hyb}} and phybsuperscriptphyb\\textbf{p}^{\\text{hyb}} are the hybrid encodings that concatenate the BM25 (qbm25superscriptqbm25\\textbf{q}^{\\text{bm25}}/pbm25superscriptpbm25\\textbf{p}^{\\text{bm25}}) and the neural encodings (qnnsuperscriptqnn\\textbf{q}^{\\text{nn}}/pnnsuperscriptpnn\\textbf{p}^{\\text{nn}}, from Sec 4.1); and λ𝜆\\lambda is a interpolation hyperparameter that trades-off the relative weight of BM25 versus neural models. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
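A short sketch showing that the λ-weighted sum of the BM25 and neural dot products equals a single dot product over concatenated hybrid encodings, which is what enables one-pass sparse-dense nearest-neighbor search; the vector sizes below are arbitrary toy values.

```python
import numpy as np

def hybrid_score(q_bm25, p_bm25, q_nn, p_nn, lam=1.0):
    # lambda * <q_bm25, p_bm25> + <q_nn, p_nn>
    return lam * np.dot(q_bm25, p_bm25) + np.dot(q_nn, p_nn)

rng = np.random.default_rng(1)
q_bm25, p_bm25 = rng.random(50), rng.random(50)   # toy term vectors (dense here for brevity)
q_nn, p_nn = rng.random(8), rng.random(8)         # toy neural encodings
lam = 1.0

# The same score via a single dot product over the concatenated hybrid encodings.
q_hyb = np.concatenate([lam * q_bm25, q_nn])
p_hyb = np.concatenate([p_bm25, p_nn])
assert np.isclose(q_hyb @ p_hyb, hybrid_score(q_bm25, p_bm25, q_nn, p_nn, lam))
```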
{
"id": "2004.14503_all_26",
"text": " Thus, we can implement BM25 and our hybrid model as nearest neighbor search with hybrid sparse-dense vector dot-product Wu et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_27",
"text": " We outline data and experimental details. The Appendix has further information to aid replicability. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_28",
"text": " Biomedical questions from Task B Phase A of BioASQ Tsatsaronis et al. (2015). We use BioASQ 7 and 8 test data for evaluation. The collection contains all abstracts from MEDLINE articles. Given an article, we split its abstract into chunks with sentence boundaries preserved. A passage is constructed by concatenating the title and one chunk. Chunk size is set so that each passage has no more than 200 wordpiece tokens. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_29",
"text": " Threads from two online user forum domains: Ubuntu technical help and TripAdvisor topics for New York City Bhatia and Mitra (2010). For each thread, we concatenate the title and initial post to generate passages. For BERT-based models we truncate at 350 wordpiece tokens. Unlike the BioASQ data, this data generally does not contain specialist knowledge queries. Thus, compared to the collection of question-answer pairs mined from the web, there is less of a domain shift. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_30",
"text": " Aggregated queries issued to Google Search Kwiatkowski et al. (2019) with relevance judgements. We convert the original format to a passage retrieval task, where the goal is to retrieval the long answer among all wiki paragraphs Ahmad et al. (2019). We discarded questions whose long answer is either a table or a list. We evaluate retrieval performance on the development set as the test set is not publicly available. The target collection contains all passages from the development set and is augmented with passages from 2016-12-21 dump of Wikipedia Chen et al. (2017). Each passage is also concatenated with title. For BERT-based models passages are truncated at 350 wordpiece tokens. This data is different from the previous data in two regards. First, there is a single annotated relevant paragraph per query. This is due to the nature in which the data was curated. Second, this data is entirely “general domain”. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_31",
"text": " Dataset statistics are listed in Appendix A. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_32",
"text": " Term-matching systems such as BM25 Robertson et al. (1995) are themselves zero-shot, since they require no training resources except the document collection itself. We train a standard BM25 retrieval model on the document collection for each target domain. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_33",
"text": " The Inverse Cloze Task (ICT) Lee et al. (2019b) is an unsupervised pre-training objective which randomly masks out a sentence from a passage and creates synthetic sentence-passage pairs representing membership of the sentence in the passage. These masked examples can then used to train or pre-train a retrieval model. Lee et al. (2019b) showed that masking a sentence with a certain probability, p𝑝p, can both mimic the performance of lexical matching (p=0𝑝0p=0) or semantic matching (p>0𝑝0p>0). ICT is domain-targeted since training examples are created directly from the relevant collection. Chang et al. (2020) showed that ICT-based pre-training outperforms a number of alternatives such as Body First Selection (BFS) or Wiki Link Prediction (WLP) for large-scale retrieval. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
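A minimal sketch of ICT example creation as described above; the mask rate of 0.9 follows the setting quoted later in the experiments, while the function name and sentence-joining are assumptions.

```python
import random

def make_ict_example(passage_sentences, mask_prob=0.9, rng=random.Random(0)):
    """Inverse Cloze Task: pick a sentence as the pseudo-query; with probability
    mask_prob remove it from the passage (semantic matching), otherwise keep it
    in place (lexical matching, the p = 0 behaviour)."""
    idx = rng.randrange(len(passage_sentences))
    query = passage_sentences[idx]
    if rng.random() < mask_prob:
        context = passage_sentences[:idx] + passage_sentences[idx + 1:]
    else:
        context = list(passage_sentences)
    return query, " ".join(context)
```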
{
"id": "2004.14503_all_34",
"text": " Gysel et al. (2018) proposes to train unsupervised neural retrieval system by extracting ngrams and titles from each document as queries. Different from ICT, this approach does not mask the extract ngrams from the original document. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_35",
"text": " The dataset mined from community question-answer forums (Sec. 3) itself can be used directly to train a neural retrieval model since it comes of the form query and relevant text (passage) pair. This data is naturally occurring and not systematically noisy, which is an advantage. However, the data is not domain-targeted, in that it comes from general knowledge questions. We call models trained on this dataset as QA. Applying a model trained on general domain data to a specific domain with no adaptation is a strong baseline Yilmaz et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_36",
"text": " The QGen retrieval model trained on the domain-targeted synthetic question-passage pairs described in Section 3. While this model can contain noise from the generator, it is domain-targeted. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_37",
"text": " This is identical to QGen, but instead of using the pure neural model, we train the hybrid model in Section 4.4 setting λ=1.0𝜆1.0\\lambda=1.0 for all models to avoid any domain-targeted tuning. We train the term and neural components independently, combing them only at inference. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_38",
"text": " All ICT, NGram, QA and QGen models are trained using the neural architecture from Section 4. For BioASQ experiments, question and passage encoders are initialized with BioBERT base v-1.1 Lee et al. (2019a). All other data uses uncased BERT base Devlin et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_39",
"text": " We can categorize the neural zero-shot models along two dimensions extractive vs. transfer. ICT and Ngram are extractive, in that they extract exact substrings from a passage to create synthetic questions for model training. Note that extractive models are also unsupervised, since they do not rely on general domain resources. QA is a direct cross-domain transfer model, in that we train the model on data from one domain (or general domain) and directly apply it to the target domain for retrieval. QGen models are in-direct cross-domain transfer models, in that we use the out-of-domain data to generate resources for model training. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_40",
"text": " The nature of each zero-shot neural system requires different generated training sets. For ICT, we follow Lee et al. (2019b) and randomly select at most 5 sentences from a document, with a mask rate of 0.9. For Ngram models, Gysel et al. (2018) suggests that retrieval models trained with ngram-order of around 16 was consistently high in quality. Thus, in our experiment we also use 16 and move the ngram window with a stride of 8 to allow 8 token overlap between consecutive ngrams. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
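A one-line sketch of the Ngram pseudo-query extraction with the order-16, stride-8 windows mentioned above; the helper name is ours.

```python
def sliding_ngrams(tokens, order=16, stride=8):
    # Ngram pseudo-queries of length `order`, with `order - stride` tokens of
    # overlap between consecutive windows (16 and 8 give an 8-token overlap).
    return [tokens[i:i + order] for i in range(0, max(1, len(tokens) - order + 1), stride)]
```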
{
"id": "2004.14503_all_41",
"text": " For QGen models, each passage is truncated to 512 sentence tokens and feed to the question generation system. We also run the question generator on individual sentences from each passage to promote questions that focus on different aspects of the same document. We select at most 5 salient sentences from a passage, where sentence saliency is the max term IDF value in a sentence. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_42",
"text": " The size of the generated training set for each baseline is shown in Table 2. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_43",
"text": " Our main results are shown in Table 3. We compute Mean Average Precision over the first N777BioASQ: N=100; and Forum: N=1000. results (MAP), Precision@10 and nDCG@10 Manning et al. (2008) with TREC evaluation script888https://trec.nist.gov/trec_eval/. All numbers are in percentage. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_44",
"text": " Accuracy of pure neural models are shown in the upper group of Table 3. First, we see that both QA and QGen consistently outperform neural baselines such as ICT and Ngram that are based on sub-string masking or matching. Matching on sub-strings likely biases the model towards memorization instead of learning salient concepts of the passage. Furthermore, query encoders trained on sub-strings are not exposed to many questions, which leads to adaptation issues when applied to true retrieval tasks. Comparing QGen with QA, typically QGen performs better, especially for specialized target domains. This suggests that domain-targeted query generation is more effective for domain shift than direct cross-domain transfer Yilmaz et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_45",
"text": " Performance of term-based models and hybrid models are shown in Table 3 (bottom). We can see that BM25 is a very strong baseline. However, this could be an artifact of the datasets as the queries are created by annotators who already have the relevant passage in mind. Queries created this way typically have large lexical overlapping with the passage, thus favoring term matching based approaches like BM25. This phenomenon has been observed by previous work Lee et al. (2019b). Nonetheless, the hybrid model outperforms BM25 on all domains, and the improvements are statistically significant on 9/12 metrics. This illustrate that term-based model and neural-based model return complementary results, and the proposed hybrid approach effectively combines their strengths. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_46",
"text": " For NaturalQuestions since there is a single relevant passage annotation, we report Precision@1 and Mean reciprocal rank (MRR)999MRR = MAP when there is one relevant item.. Results are show in Table 4. We can see here that while QGen still significantly outperform other baselines, the gap between QGen and QA is smaller. Unlike BioASQ and Forum datasets, NaturalQuestions contains general domain queries, which aligns well with the question-answer pairs for training the QA model. Another difference is that NaturalQuestions consists of real information seeking queries, in this case QGen performs better than BM25. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_47",
"text": " One question we can ask is how close to the state-of-the-art in supervised passage retrieval are these zero-shot models. To test this we looked at BioASQ 8 dataset and compare to the top-participant systems.101010participants-area.bioasq.org Since BioASQ provides annotated training data, the top teams typically use supervised models with a first-stage retrieval plus rescorer architecture. For instance, the AUEB group, which is the top or near top system for BioASQ 6, 7 and 8, uses a BM25 first-stage retrieval model plus a supervised neural rescorer Brokos et al. (2018); Pappas et al. (2019). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_48",
"text": " In order to make our results comparable to participant systems, we return only 10 passages per question (as per shared-task guidelines) and use the official BioASQ 8 evaluation software. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_49",
"text": " Table 5 shows the results for three zero-shot systems (BM25, QGen and QGenHyb) relative to the top 4 systems on average across all 5 batches of the shared task. We can see the QGenHyb performs quite favorably and on average is indistinguishable from the top systems. This is very promising and suggests that top-performance for zero-shot retrieval models is possible. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_50",
"text": " A natural question is whether improved first-stage model plus supervised rescoring is additive. The last two lines of the table takes the two-best first-stage retrieval models and adds a simple BERT-based cross-attention rescorer Nogueira and Cho (2019b); MacAvaney et al. (2019). We can see that, on average, this does improve quality. Furthermore, having a better first-stage retriever (QGenHyb vs. BM25) makes a difference. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_51",
"text": " As noted earlier, on BioASQ, BM25 is a very strong baseline. This makes the BM25/QGenHyb zero-shot models highly likely to be competitive. When we look at NaturalQuestions, where BM25 is significantly worse than neural models, we see that the gap between zero-shot and supervised widens substantially. The last row of Table 4 shows a model trained on the NaturalQuestions training data, which is nearly 2-3 times more accurate than the best zero-shot models. Thus, while zero-shot neural models have the potential to be competitive with supervised counterparts, the experiments here show this is data dependant. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_52",
"text": " Since our approach allows us to generate queries on every passage of the target corpus, one question is that whether retrieval system trained this way simply memorizes the target corpus or it also generalize on unseen passages. Furthermore, from an efficiency standpoint, how many synthetic training examples are required to achieve maximum performance. To answer these questions, we uniformly sample a subset of documents and then generate synthetic queries only on that subset. Results on BIOASQ 7 are shown in Figure 4, where x-axis denotes the percentage of sampled documents. We can see that retrieval accuracy improves as passage coverage increases. The peak is achieved when using a 20%percent2020\\% subset, which covers 21%percent2121\\% of the reference passages. This is not surprising because the number of frequently discussed entities/topics are typically limited, and a subset of the passages covers most of them. This result also indicates that the learned system does generalize, otherwise optimal performance would be seen with 100%percent100100\\% of the data. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_53",
"text": " Another interesting question is how important is the quality of the question generator relative to retrieval performance. Below we measured generation quality (via Rouge-based metrics Lin and Hovy (2002)) versus retrieval quality for three systems. The base generator contains 12 transformer layers, the lite version only uses the first 3 layer. The large one contains 24 transformer layers and each layer with larger hidden layer size, 4096, and more attention heads, 16. Retrieval quality was measured on BIOASQ 7 and generation quality with a held out set of the community question-answer data set. Results are shown in Table 6. We can see that larger generation models lead to improved generators. However, there is little difference in retrieval metrics, suggesting that large domain targeted data is the more important criteria. ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
},
{
"id": "2004.14503_all_54",
"text": " We study methods for neural zero-shot passage retrieval and find that domain targeted synthetic question generation coupled with hybrid term-neural first-stage retrieval models consistently outperforms alternatives. Furthermore, for at least one domain, approaches supervised quality. While out of the scope of this study, future work includes further testing the efficacy of these first-stage models in a full end-to-end system (evaluated briefly in Section 6.1), as well as for pre-training supervised models Chang et al. (2020). ",
"title": "Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation"
}
] |
Why is MNIST so popular?
|
The popularity is related to its size, which allows researchers to quickly check and prototype their models [1].
|
[
1
] |
[
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can do so much the simple MNIST dataset has become the most widely used testbed in deep learning, surpassing CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) in its popularity via Google trends111https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet. Despite its simplicity its usage does not seem to be decreasing despite calls for it in the deep learning community. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_1",
"text": " The reason MNIST is so popular has to do with its size, allowing deep learning researchers to quickly check and prototype their algorithms. This is also complemented by the fact that all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. Tensorflow, Pytorch) provide helper functions and convenient examples that use MNIST out of the box. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_2",
"text": " Our aim with this work is to create a good benchmark dataset which has all the accessibility of MNIST, namely its small size, straightforward encoding and permissive license. We took the approach of sticking to the 101010 classes 70,0007000070,000 grayscale images in the size of 28×28282828\\times 28 as in the original MNIST. In fact, the only change one needs to use this dataset is to change the URL from where the MNIST dataset is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits data, whereas the latter has been trained to accuracies above 99.7% as reported in Wan et al. (2013); Ciregan et al. (2012). ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_3",
"text": " We also looked at the EMNIST dataset provided by Cohen et al. (2017), an extended version of MNIST that extends the number of classes by introducing uppercase and lowercase characters. However, to be able to use it seamlessly one needs to not only extend the deep learning framework’s MNIST helpers, but also change the underlying deep neural network to classify these extra classes. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_4",
"text": " Fashion-MNIST is based on the assortment on Zalando’s website222Zalando is the Europe’s largest online fashion platform. http://www.zalando.com. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762×10007621000762\\times 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_5",
"text": " We use the front look thumbnail images of 70,0007000070,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51×73517351\\times 73) are then fed into the following conversion pipeline, which is visualized in Figure 1. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_6",
"text": " 1. Converting the input to a PNG image. 2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5%percent55\\% of the maximum possible intensity in RGB space. 3. Resizing the longest edge of the image to 282828 by subsampling the pixels, i.e. some rows and columns are skipped over. 4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.01.01.0, with increasing effect near outlines. 5. Extending the shortest edge to 282828 and put the image to the center of the canvas. 6. Negating the intensities of the image. 7. Converting the image to 8-bit grayscale pixels. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
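A rough Pillow-based sketch of steps 1–7 above; it is an approximation, not Zalando's released conversion code, and the background-trimming step is omitted for brevity.

```python
from PIL import Image, ImageFilter, ImageOps

def to_mnist_format(path):
    """Approximate the listed pipeline: resize, sharpen, center, negate, grayscale."""
    img = Image.open(path).convert("L")
    img.thumbnail((28, 28), Image.LANCZOS)                  # resize longest edge to 28 px
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0))   # sharpen near outlines
    canvas = Image.new("L", (28, 28), 255)                  # white 28x28 canvas
    canvas.paste(img, ((28 - img.width) // 2, (28 - img.height) // 2))  # center the image
    return ImageOps.invert(canvas)                          # negate intensities, 8-bit grayscale
```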
{
"id": "1708.07747_all_7",
"text": " For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product contains only one silhouette code. Table 2 gives a summary of all class labels in Fashion-MNIST with examples for each class. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_8",
"text": " Finally, the dataset is divided into a training and a test set. The training set receives a randomly-selected 6,00060006,000 examples from each class. Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices. The result files are listed in Table 1. We sort examples by their labels while storing, resulting in smaller label files after compression comparing to the MNIST. It is also easier to retrieve examples with a certain class label. The data shuffling job is therefore left to the algorithm developer. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_9",
"text": " We provide some classification results in LABEL:tbl:benchmark to form a benchmark on this data set. All algorithms are repeated 555 times by shuffling the training data and the average accuracy on the test set is reported. The benchmark on the MNIST dataset is also included for a side-by-side comparison. A more comprehensive table with explanations on the algorithms can be found on https://github.com/zalandoresearch/fashion-mnist. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_10",
"text": " This paper introduced Fashion-MNIST, a fashion product images dataset intended to be a drop-in replacement of MNIST and whilst providing a more challenging alternative for benchmarking machine learning algorithm. The images in Fashion-MNIST are converted to a format that matches that of the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST dataset. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
}
] |
Who were the annotators of the new real-world scanning dataset used for real-world reconstruction?
|
The reconstructions are not created by manual annotation [40]. Instead, the authors use the publicly-available VoxelHashing framework [25] to obtain dense 3D reconstructions [5].
|
[
40,
5
] |
[
{
"id": "1604.03265_all_0",
"text": " Understanding 3D environments is a vital element of modern computer vision research due to paramount relevance in many vision systems, spanning a wide field of application scenarios from self-driving cars to autonomous robots. Recent advancements in real-time SLAM techniques and crowd-sourcing of virtual 3D models have additionally facilitated the availability of 3D data. (29, 34, 31, 33, 2). This development has encouraged the lifting of 2D to 3D for deep learning, opening up new opportunities with the additional information of 3D data; e.g., aligning models is easier in 3D Euclidean space. In this paper, we specifically focus on the object classification task on 3D data obtained from both CAD models and commodity RGB-D sensors. In addition, we demonstrate retrieval results in the supplemental material. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_1",
"text": " While the extension of 2D convolutional neural networks to 3D seems natural, the additional computational complexity (volumetric domain) and data sparsity introduces significant challenges; for instance, in an image, every pixel contains observed information, whereas in 3D, a shape is only defined on its surface. Seminal work by Wu et al. propose volumetric CNN architectures on volumetric grids for object classification and retrieval. While these approaches achieve good results, it turns out that training a CNN on multiple 2D views achieves a significantly higher performance, as shown by Su et al. , who augment their 2D CNN with pre-training from ImageNet RGB data . These results indicate that existing 3D CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this work, we analyze these observations and evaluate the design choices. Moreover, we show how to reduce the gap between volumetric CNNs and multi-view CNNs by efficiently augmenting training data, introducing new CNN architectures in 3D. Finally, we examine multi-view CNNs; our experiments show that we are able to improve upon state of the art with improved training data augmentation and a new multi-resolution component. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_2",
"text": " We consider volumetric representations of 3D point clouds or meshes as input to the 3D object classification problem. This is primarily inspired by recent advances in real-time scanning technology, which use volumetric data representations. We further assume that the input data is already pre-segmented by 3D bounding boxes. In practice, these bounding boxes can be extracted using the sliding windows, object proposals, or background subtraction. The output of the method is the category label of the volumetric data instance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_3",
"text": " We provide a detailed analysis over factors that influence the performance of volumetric CNNs, including network architecture and volumn resolution. Based upon our analysis, we strive to improve the performance of volumetric CNNs. We propose two volumetric CNN network architectures that signficantly improve state-of-the-art of volumetric CNNs on 3D shape classification. This result has also closed the gap between volumetric CNNs and multi-view CNNs, when they are provided with 3D input discretized at 30×30×3030303030\\times 30\\times 30 3D resolution. The first network introduces auxiliary learning tasks by classifying part of an object, which help to scrutize details of 3D objects more deeply. The second network uses long anisotropic kernels to probe for long-distance interactions. Combining data augmentation with a multi-orientation pooling, we observe significant performance improvement for both networks. We also conduct extensive experiments to study the influence of volume resolution, which sheds light on future directions of improving volumetric CNNs. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_4",
"text": " Furthermore, we introduce a new multi-resolution component to multi-view CNNs, which improves their already compelling performance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_5",
"text": " In addition to providing extensive experiments on 3D CAD model datasets, we also introduce a dataset of real-world 3D data, constructed using dense 3D reconstruction taken with . Experiments show that our networks can better adapt from synthetic data to this real-world data than previous methods. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_6",
"text": " A large variety of shape descriptors has been developed in the computer vision and graphics community. For instance, shapes can be represented as histograms or bag-of-feature models which are constructed from surface normals and curvatures . Alternatives include models based on distances, angles, triangle areas, or tetrahedra volumes , local shape diameters measured at densely-sampled surface points , Heat kernel signatures (1, 19), or extensions of SIFT and SURF feature descriptors to 3D voxel grids . The spherical harmonic descriptor (SPH) and the Light Field descriptor (LFD) are other popular descriptors. LFD extracts geometric and Fourier descriptors from object silhouettes rendered from several different viewpoints, and can be directly applied to the shape classification task. In contrast to recently developed feature learning techniques, these features are hand-crafted and do not generalize well across different domains. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_7",
"text": " Convolutional Neural Networks (CNNs) have been successfully used in different areas of computer vision and beyond. In particular, significant progress has been made in the context of learning features. It turns out that training from large RGB image datasets (e.g., ImageNet ) is able to learn general purpose image descriptors that outperform hand-crafted features for a number of vision tasks, including object detection, scene recognition, texture recognition and classification (7, 10, 27, 5, 12). This significant improvement in performance on these tasks has decidedly moved the field forward. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_8",
"text": " With the introduction of commodity range sensors, the depth channel became available to provide additional information that could be incorporated into common CNN architectures. A very first approach combines convolutional and recursive neural networks for learning features and classifying RGB-D images . Impressive performance for object detection from RGB-D images has been achieved using a geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity . Recently, a CNN architecture has been proposed where the RGB and depth data are processed in two separate streams; in the end, the two streams are combined with a late fusion network . All these descriptors operate on single RGB-D images, thus processing 2.5D data. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_9",
"text": " Wu et al. lift 2.5D to 3D with their 3DShapeNets approach by categorizing each voxel as free space, surface or occluded, depending on whether it is in front of, on, or behind the visible surface (i.e., the depth value) from the depth map. The resulting representation is a 3D binary voxel grid, which is the input to a CNN with 3D filter banks. Their method is particularly relevant in the context of this work, as they are the first to apply CNNs on a 3D representation. A similar approach is VoxNet , which also uses binary voxel grids and a corresponding 3D CNN architecture. The advantage of these approaches is that it can process different sources of 3D data, including LiDAR point clouds, RGB-D point clouds, and CAD models; we likewise follow this direction. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_10",
"text": " An alternative direction is to exploit established 2D CNN architectures; to this end, 2D data is extracted from the 3D representation. In this context, DeepPano converts 3D shapes into panoramic views; i.e., a cylinder projection around its principle axis. Current state-of-the-art uses multiple rendered views, and trains a CNN that can process all views jointly . This multi-view CNN (MVCNN) is pre-trained on ImageNet and uses view-point pooling to combine all streams obtained from each view. A similar idea on stereo views has been proposed earlier . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_11",
"text": " Two representations of generic 3D shapes are popularly used for object classification, volumetric and multi-view (Fig 1). The volumetric representation encodes a 3D shape as a 3D tensor of binary or real values. The multi-view representation encodes a 3D shape as a collection of renderings from multiple viewpoints. Stored as tensors, both representations can easily be used to train convolutional neural networks, i.e., volumetric CNNs and multi-view CNNs. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_12",
"text": " Intuitively, a volumetric representation should encode as much information, if not more, than its multi-view counterpart. However, experiments indicate that multi-view CNNs produce superior performance in object classification. Fig 2 reports the classification accuracy on the ModelNet40 dataset by state-of-the-art volumetric/multi-view architectures111We train models by replicating the architecture of for volumetric CNNs and for multi-view CNNs. All networks are trained in an end-to-end fashion. All methods are trained/tested on the same split for fair comparison. The reported numbers are average instance accuracy. See Sec 6 for details.. A volumetric CNN based on voxel occupancy (green) is 7.3%percent7.37.3\\% worse than a multi-view CNN (yellow). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_13",
"text": " We investigate this performance gap in order to ascertain how to improve volumetric CNNs. The gap seems to be caused by two factors: input resolution and network architecture differences. The multi-view CNN down-samples each rendered view to 227×227227227227\\times 227 pixels (Multi-view Standard Rendering in Fig 1); to maintain a similar computational cost, the volumetric CNN uses a 30×30×3030303030\\times 30\\times 30 occupancy grid (Volumetric Occupancy Grid in Fig 1)222Note that 30×30×30≈227×22730303022722730\\times 30\\times 30\\approx 227\\times 227.. As shown in Fig 1, the input to the multi-view CNN captures more detail. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_14",
"text": " However, the difference in input resolution is not the primary reason for this performance gap, as evidenced by further experiments. We compare the two networks by providing them with data containing similar level of detail. To this end, we feed the multi-view CNN with renderings of the 30×30×3030303030\\times 30\\times 30 occupancy grid using sphere rendering333It is computationally prohibitive to match the volumetric CNN resolution to multi-view CNN, which would be 227×227×227227227227227\\times 227\\times 227., i.e., for each occupied voxel, a ball is placed at its center, with radius equal to the edge length of a voxel (Multi-View Sphere Rendering in Fig 1). We train the multi-view CNN from scratch using these sphere renderings. The accuracy of this multi-view CNN is reported in blue. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_15",
"text": " As shown in Fig 2, even with similar level of object detail, the volumetric CNN (green) is 4.8%percent4.84.8\\% worse than the multi-view CNN (blue). That is, there is still significant room to improve the architecture of volumetric CNNs. This discovery motivates our efforts in Sec 4 to improve volumetric CNNs. Additionally, low-frequency information in 3D seems to be quite discriminative for object classification—it is possible to achieve 89.5%percent89.589.5\\% accuracy (blue) at a resolution of only 30×30×3030303030\\times 30\\times 30. This discovery motivates our efforts in Sec 5 to improve multi-view CNNs with a 3D multi-resolution approach. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_16",
"text": " We improve volumetric CNNs through three separate means: 1) introducing new network structures; 2) data augmentation; 3) feature pooling. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_17",
"text": " We propose two network variations that significantly improve state-of-the-art CNNs on 3D volumetric data. The first network is designed to mitigate overfitting by introducing auxiliary training tasks, which are themselves challenging. These auxiliary tasks encourage the network to predict object class labels from partial subvolumes. Therefore, no additional annotation efforts are needed. The second network is designed to mimic multi-view CNNs, as they are strong in 3D shape classification. Instead of using rendering routines from computer graphics, our network projects a 3D shape to 2D by convolving its 3D volume with an anisotropic probing kernel. This kernel is capable of encoding long-range interactions between points. An image CNN is then appended to classify the 2D projection. Note that the training of the projection module and the image classification module is end-to-end. This emulation of multi-view CNNs achieves similar performance to them, using only standard layers in CNN. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_18",
"text": " In order to mitigate overfitting from too many parameters, we adopt the mlpconv layer from as our basic building block in both network variations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_19",
"text": " Compared with 2D image datasets, currently available 3D shape datasets are limited in scale and variation. To fully exploit the design of our networks, we augment the training data with different azimuth and elevation rotations. This allows the first network to cover local regions at different orientations, and the second network to relate distant points at different relative angles. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_20",
"text": " Both of our new networks are sensitive to shape orientation, i.e., they capture different information at different orientations. To capture a more holistic sense of a 3D object, we add an orientation pooling stage that aggregates information from different orientations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_21",
"text": " We observe significant overfitting when we train the volumetric CNN proposed by in an end-to-end fashion (see supplementary). When the volumetric CNN overfits to the training data, it has no incentive to continue learning. We thus introduce auxiliary tasks that are closely correlated with the main task but are difficult to overfit, so that learning continues even if our main task is overfitted. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_22",
"text": " These auxiliary training tasks also predict the same object labels, but the predictions are made solely on a local subvolume of the input. Without complete knowledge of the object, the auxiliary tasks are more challenging, and can thus better exploit the discriminative power of local regions. This design is different from the classic multi-task learning setting of hetergenous auxiliary tasks, which inevitably requires collecting additional annotations (e.g., conducting both object classification and detection ). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_23",
"text": " We implement this design through an architecture shown in Fig 3. The first three layers are mlpconv (multilayer perceptron convolution) layers, a 3D extension of the 2D mlpconv proposed by . The input and output of our mlpconv layers are both 4D tensors. Compared with the standard combination of linear convolutional layers and max pooling layers, mlpconv has a three-layer structure and is thus a universal function approximator if enough neurons are provided in its intermediate layers. Therefore, mlpconv is a powerful filter for feature extraction of local patches, enhancing approximation of more abstract representations. In addition, mlpconv has been validated to be more discriminative with fewer parameters than ordinary convolution with pooling . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_24",
"text": " At the fourth layer, the network branches into two. The lower branch takes the whole object as input for traditional classification. The upper branch is a novel branch for auxiliary tasks. It slices the 512×2×2×2512222512\\times 2\\times 2\\times 2 4D tensor (222 grids along x𝑥x, y𝑦y, z𝑧z axes and 512512512 channels) into 2×2×2=822282\\times 2\\times 2=8 vectors of dimension 512512512. We set up a classification task for each vector. A fully connected layer and a softmax layer are then appended independently to each vector to construct classification losses. Simple calculation shows that the receptive field of each task is 22×22×2222222222\\times 22\\times 22, covering roughly 2/3232/3 of the entire volume. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
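A hedged PyTorch sketch of the auxiliary branch described above: the 512×2×2×2 feature map is sliced into eight 512-dimensional vectors, each classified independently; the class name and the use of plain linear heads are assumptions, not the paper's exact layers.

```python
import torch
import torch.nn as nn

class SubvolumeHeads(nn.Module):
    """Slice a (B, 512, 2, 2, 2) feature map into 2*2*2 = 8 vectors of
    dimension 512 and attach an independent classifier to each one."""
    def __init__(self, num_classes=40, channels=512):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(channels, num_classes) for _ in range(8)])

    def forward(self, feat):                      # feat: (B, 512, 2, 2, 2)
        vecs = feat.flatten(start_dim=2)          # (B, 512, 8)
        logits = [head(vecs[:, :, i]) for i, head in enumerate(self.heads)]
        return logits                             # 8 auxiliary predictions, one per subvolume
```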
{
"id": "1604.03265_all_25",
"text": " The success of multi-view CNNs is intriguing. multi-view CNNs first project 3D objects to 2D and then make use of well-developed 2D image CNNs for classification. Inspired by its success, we design a neural network architecture that is also composed of the two stages. However, while multi-view CNNs use external rendering pipelines from computer graphics, we achieve the 3D-to-2D projection using network layers in a manner similar to ‘X-ray scanning’. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_26",
"text": " Key to this network is the use of an elongated anisotropic kernel which helps capture the global structure of the 3D volume. As illustrated in Fig 4, the neural network has two modules: an anisotropic probing module and a network in network module. The anisotropic probing module contains three convolutional layers of elongated kernels, each followed by a nonlinear ReLU layer. Note that both the input and output of each layer are 3D tensors. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_27",
"text": " In contrast to traditional isotropic kernels, an anisotropic probing module has the advantage of aggregating long-range interactions in the early feature learning stage with fewer parameters. As a comparison, with traditional neural networks constructed from isotropic kernels, introducing long-range interactions at an early stage can only be achieved through large kernels, which inevitably introduce many more parameters. After anisotropic probing, we use an adapted NIN network to address the classification problem. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_28",
"text": " Our anistropic probing network is capable of capturing internal structures of objects through its X-ray like projection mechanism. This is an ability not offered by standard rendering. Combined with multi-orientation pooling (introduced below), it is possible for this probing mechanism to capture any 3D structure, due to its relationship with the Radon transform. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_29",
"text": " In addition, this architecture is scalable to higher resolutions, since all its layers can be viewed as 2D. While 3D convolution involves computation at locations of cubic resolution, we maintain quadratic compute. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_30",
"text": " The two networks proposed above are both sensitive to model orientation. In the subvolume supervision method, different model orientations define different local subvolumes; in the anisotropic probing method, only voxels of the same height and along the probing direction can have interaction in the early feature extraction stage. Thus it is helpful to augment the training data by varying object orientation and combining predictions through orientation pooling. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_31",
"text": " Similar to Su-MVCNN which aggregates information from multiple view inputs through a view-pooling layer and follow-on fully connected layers, we sample 3D input from different orientations and aggregate them in a multi-orientation volumetric CNN (MO-VCNN) as shown in Fig 5. At training time, we generate different rotations of the 3D model by changing both azimuth and elevation angles, sampled randomly. A volumetric CNN is firstly trained on single rotations. Then we decompose the network to CNN1subscriptCNN1\\text{CNN}_{1} (lower layers) and CNN2subscriptCNN2\\text{CNN}_{2} (higher layers) to construct a multi-orientation version. The MO-VCNN’s weights are initialized by a previously trained volumetric CNN with CNN1subscriptCNN1\\text{CNN}_{1}’s weights fixed during fine-tuning. While a common practice is to extract the highest level features (features before the last classification linear layer) of multiple orientations, average/max/concatenate them, and train a linear SVM on the combined feature, this is just a special case of the MO-VCNN. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
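A brief PyTorch-style sketch of the multi-orientation pooling idea: lower layers (CNN1) run on each sampled orientation and their features are pooled before the higher layers (CNN2); max pooling and the function name are illustrative choices, not the exact MO-VCNN recipe.

```python
import torch

def multi_orientation_pool(cnn1, cnn2, volumes):
    """volumes: list of (B, C, D, H, W) tensors, one per sampled orientation.
    cnn1 maps a volume to a (B, F) feature; cnn2 maps pooled features to logits."""
    feats = torch.stack([cnn1(v) for v in volumes], dim=0)   # (num_orientations, B, F)
    pooled = feats.max(dim=0).values                         # orientation pooling
    return cnn2(pooled)
```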
{
"id": "1604.03265_all_32",
"text": " Compared to 3DShapeNets which only augments data by rotating around vertical axis, our experiment shows that orientation pooling combined with elevation rotation can greatly increase performance. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_33",
"text": " The multi-view CNN proposed by is a strong alternative to volumetric representations. This multi-view representation is constructed in three steps: first, a 3D shape is rendered into multiple images using varying camera extrinsics; then image features (e.g. conv5 feature in VGG or AlexNet) are extracted for each view; lastly features are combined across views through a pooling layer, followed by fully connected layers. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_34",
"text": " Although the multi-view CNN presented by produces compelling results, we are able to improve its performance through a multi-resolution extension with improved data augmentation. We introduce multi-resolution 3D filtering to capture information at multiple scales. We perform sphere rendering (see Sec 3) at different volume resolutions. Note that we use spheres for this discretization as they are view-invariant. In particular, this helps regularize out potential noise or irregularities in real-world scanned data (relative to synthetic training data), enabling robust performance on real-world scans. Note that our 3D multi-resolution filtering is different from classical 2D multi-resolution approaches, since the 3D filtering respects the distance in 3D. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
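The multi-resolution idea above starts from discretizing the same shape at several 3D resolutions. As a rough illustration of that discretization step only (the paper then sphere-renders each resolution into 2D views), here is a small NumPy sketch that derives a coarser occupancy grid from a finer one by max pooling; the resolutions and the occupancy thresholding are assumptions.

```python
# Sketch: derive a coarser 3D discretization (resolution 30) from a finer one
# (resolution 60) by max pooling; this only illustrates the multi-resolution step.
import numpy as np

def downsample_occupancy(vox, factor):
    """Max-pool a cubic occupancy grid by an integer factor."""
    d = vox.shape[0]
    assert d % factor == 0
    v = vox.reshape(d // factor, factor, d // factor, factor, d // factor, factor)
    return v.max(axis=(1, 3, 5))

fine = (np.random.rand(60, 60, 60) > 0.95).astype(np.float32)   # 3D resolution 60
coarse = downsample_occupancy(fine, 2)                           # 3D resolution 30
# Each resolution would then be sphere-rendered into views and fed to the CNN;
# the resulting features (e.g., fc7) are concatenated for the final classifier.
```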
{
"id": "1604.03265_all_35",
"text": " Additionally, we also augment training data with variations in both azimuth and elevation, as opposed to azimuth only. We use AlexNet instead of VGG for efficiency. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_36",
"text": " We evaluate our volumetric CNNs and multi-view CNNs along with current state of the art on the ModelNet dataset and a new dataset of real-world reconstructions of 3D objects. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_37",
"text": " For convenience in following discussions, we define 3D resolution to be the discretization resolution of a 3D shape. That is, a 30×30×3030303030\\times 30\\times 30 volume has 3D resolution 303030. The sphere rendering from this volume also has 3D resolution 303030, though it may have higher 2D image resolution. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_38",
"text": " We use ModelNet for our training and testing datasets. ModelNet currently contains 127,915127915127,915 3D CAD models from 662662662 categories. ModelNet40, a subset including 12,3111231112,311 models from 404040 categories, is well annotated and can be downloaded from the web. The authors also provide a training and testing split on the website, in which there are 9,84398439,843 training and 2,46824682,468 test models444VoxNet uses the train/test split provided on the website and report average class accuracy on the 2,46824682,468 test split. 3DShapeNets and MVCNN use another train/test split comprising the first 80 shapes of each category in the “train” folder (or all shapes if there are fewer than 80) and the first 20 shapes of each category in the “test” folder, respectively.. We use this train/test split for our experiments. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_39",
"text": " By default, we report classification accuracy on all models in the test set (average instance accuracy). For comparisons with previous work we also report average class accuracy. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_40",
"text": " We provide a new real-world scanning dataset benchmark, comprising 243 objects of 12 categories; the geometry is captured with an ASUS Xtion Pro and a dense reconstruction is obtained using the publicly-available VoxelHashing framework . For each scan, we have performed a coarse, manual segmentation of the object of interest. In addition, each scan is aligned with the world-up vector. While there are existing datasets captured with commodity range sensors – e.g., (29, 34, 31) – this is the first containing hundreds of annotated models from dense 3D reconstructions. The goal of this dataset is to provide an example of modern real-time 3D reconstructions; i.e., structured representations more complete than a single RGB-D frame but still with many occlusions. This dataset is used as a test set. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_41",
"text": " We compare our methods with state of the art for shape classification on the ModelNet40 dataset. In the following, we discuss the results within volumetric CNN methods and within multi-view CNN methods. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_42",
"text": " Fig 7 summarizes the performance of volumetric CNNs. Ours-MO-SubvolumeSup is the subvolume supervision network in Sec 4.2 and Ours-MO-AniProbing is the anistropic probing network in Sec 4.3. Data augmentation is applied as described in Sec 6.4 (azimuth and elevation rotations). For clarity, we use MO- to denote that both networks are trained with an additional multi-orientation pooling step (202020 orientations in practice). For reference of multi-view CNN performance at the same 3D resolution, we also include Ours-MVCNN-Sphere-30, the result of our multi-view CNN with sphere rendering at 3D resolution 303030. More details of setup can be found in the supplementary. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_43",
"text": " As can be seen, both of our proposed volumetric CNNs significantly outperform state-of-the-art volumetric CNNs. Moreover, they both match the performance of our multi-view CNN under the same 3D resolution. That is, the gap between volumetric CNNs and multi-view CNNs is closed under 3D resolution 303030 on ModelNet40 dataset, an issue that motivates our study (Sec 3). ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_44",
"text": " Fig 8 summarizes the performance of multi-view CNNs. Ours-MVCNN-MultiRes is the result by training an SVM over the concatenation of fc7 features from Ours-MVCNN-Sphere-30, 60, and Ours-MVCNN. HoGPyramid-LFD is the result by training an SVM over a concatenation of HoG features at three 2D resolutions. Here LFD (lightfield descriptor) simply refers to extracting features from renderings. Ours-MVCNN-MultiRes achieves state-of-the-art. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_45",
"text": " Sec 6.2 shows that our volumetric CNN and multi-view CNN performs comparably at 3D resolution 303030. Here we study the effect of 3D resolution for both types of networks. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_46",
"text": " Fig 9 shows the performance of our volumetric CNN and multi-view CNN at different 3D resolutions (defined at the beginning of Sec 6). Due to computational cost, we only test our volumetric CNN at 3D resolutions 101010 and 303030. The observations are: first, the performance of our volumetric CNN and multi-view CNN is on par at tested 3D resolutions; second, the performance of multi-view CNN increases as the 3D resolution grows up. To further improve the performance of volumetric CNN, this experiment suggests that it is worth exploring how to scale volumetric CNN to higher 3D resolutions. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_47",
"text": " We use the same volumetric CNN model, the end-to-end learning verion of 3DShapeNets , to train and test on three variations of augmented data (Table 1). Similar trend is observed for other volumetric CNN variations. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_48",
"text": " When combined with multi-orientation pooling, applying both azimuth rotation (AZ) and elevation rotation (EL) augmentations is extremely effective. Using only azimuth augmentation (randomly sampled from 0∘superscript00^{\\circ} to 360∘superscript360360^{\\circ}) with orientation pooling, the classification performance is increased by 86.1%−84.7%=1.4%percent86.1percent84.7percent1.486.1\\%-84.7\\%=1.4\\%; combined with elevation augmentation (randomly sampled from −45∘superscript45-45^{\\circ} to 45∘superscript4545^{\\circ}), the improvement becomes more significant – increasing by 87.8%−83.0%=4.8%percent87.8percent83.0percent4.887.8\\%-83.0\\%=4.8\\%. On the other hand, translation jittering (randomly sampled shift from 00 to 666 voxels in each direction) provides only marginal influence. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
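As a concrete reading of the AZ+EL augmentation above, the sketch below applies a random azimuth rotation (0° to 360°, around the up-axis) and a random elevation rotation (−45° to 45°) to raw 3D points before voxelization. The axis conventions and the composition order are assumptions.

```python
# Sketch of azimuth + elevation (AZ+EL) rotation augmentation on raw points,
# applied before voxelization; rotation conventions here are an assumption.
import numpy as np

def random_az_el_rotation(points):
    """points: (n, 3) array; returns a rotated copy."""
    az = np.deg2rad(np.random.uniform(0.0, 360.0))    # azimuth, around up-axis (z)
    el = np.deg2rad(np.random.uniform(-45.0, 45.0))   # elevation, around x-axis
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0,           0,          1]])
    Rx = np.array([[1, 0,           0],
                   [0, np.cos(el), -np.sin(el)],
                   [0, np.sin(el),  np.cos(el)]])
    return points @ (Rx @ Rz).T

augmented = random_az_el_rotation(np.random.rand(1024, 3))
```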
{
"id": "1604.03265_all_49",
"text": " The architectures in comparison include VoxNet , E2E- (the end-to-end learning variation of implemented in Caffe by ourselves), 3D-NIN (a 3D variation of Network in Network designed by ourselves as in Fig 3 without the “Prediction by partial object” branch), SubvolumeSup (Sec 4.2) and AniProbing (Sec 4.3). Data augmentation of AZ+EL (Sec 6.4) are applied. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_50",
"text": " From Table 2, first, the two volumetric CNNs we propose, SubvolumeSup and AniProbing networks, both show superior performance, indicating the effectiveness of our design; second, multi-orientation pooling increases performance for all network variations. This is especially significant for the anisotropic probing network, since each orientation usually only carries partial information of the object. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_51",
"text": " We compare different methods that are based on multi-view representations in Table 3. Methods in the second group are trained on the full ModelNet40 train set. Methods in the first group, SPH, LFD, FV, and Su-MVCNN, are trained on a subset of ModelNet40 containing 3,183 training samples. They are provided for reference. Also note that the MVCNNs in the second group are our implementations in Caffe with AlexNet instead of VGG as in Su-MVCNN . ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_52",
"text": " We observe that MVCNNs are superior to methods by SVMs on hand-crafted features. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_53",
"text": " We further assess the performance of volumetric CNNs and multi-view CNNs on real-world reconstructions in Table 4. All methods are trained on CAD models in ModelNet40 but tested on real data, which may be highly partial, noisy, or oversmoothed (Fig 6). Our networks continue to outperform state-of-the-art results. In particular, our 3D multi-resolution filtering is quite effective on real-world data, possibly because the low 3D resolution component filters out spurious and noisy micro-structures. Example results for object retrieval can be found in supplementary. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_54",
"text": " In this paper, we have addressed the task of object classification on 3D data using volumetric CNNs and multi-view CNNs. We have analyzed the performance gap between volumetric CNNs and multi-view CNNs from perspectives of network architecture and 3D resolution. The analysis motivates us to propose two new architectures of volumetric CNNs, which outperform state-of-the-art volumetric CNNs, achieving comparable performance to multi-view CNNs at the same 3D resolution of 30×30×3030303030\\times 30\\times 30. Further evalution over the influence of 3D resolution indicates that 3D resolution is likely to be the bottleneck for the performance of volumetric CNNs. Therefore, it is worth exploring the design of efficient volumetric CNN architectures that scale up to higher resolutions. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
},
{
"id": "1604.03265_all_55",
"text": " The authors gratefully acknowledge the support of Stanford Graduate Fellowship, NSF grants IIS-1528025 and DMS-1546206, ONR MURI grant N00014-13-1-0341, a Google Focused Research award, the Max Planck Center for Visual Computing and Communications and hardware donations by NVIDIA. ",
"title": "Volumetric and Multi-view CNNs for Object Classification on 3D Data"
}
] |
Is the reason for using max pooling for permutation invariant in the paper above?
|
[Note: The question is not phrased correctly [21]. In order to make a model invariant to input permutation, one of the strategies is to use a simple symmetric function to aggregate the information from each point [3].
|
[
21,
3
] |
[
{
"id": "1612.00593_all_0",
"text": " In this paper we explore deep learning architectures capable of reasoning about 3D geometric data such as point clouds or meshes. Typical convolutional architectures require highly regular input data formats, like those of image grids or 3D voxels, in order to perform weight sharing and other kernel optimizations. Since point clouds or meshes are not in a regular format, most researchers typically transform such data to regular 3D voxel grids or collections of images (e.g, views) before feeding them to a deep net architecture. This data representation transformation, however, renders the resulting data unnecessarily voluminous — while also introducing quantization artifacts that can obscure natural invariances of the data. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_1",
"text": " For this reason we focus on a different input representation for 3D geometry using simply point clouds – and name our resulting deep nets PointNets. Point clouds are simple and unified structures that avoid the combinatorial irregularities and complexities of meshes, and thus are easier to learn from. The PointNet, however, still has to respect the fact that a point cloud is just a set of points and therefore invariant to permutations of its members, necessitating certain symmetrizations in the net computation. Further invariances to rigid motions also need to be considered. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_2",
"text": " Our PointNet is a unified architecture that directly takes point clouds as input and outputs either class labels for the entire input or per point segment/part labels for each point of the input. The basic architecture of our network is surprisingly simple as in the initial stages each point is processed identically and independently. In the basic setting each point is represented by just its three coordinates (x,y,z)𝑥𝑦𝑧(x,y,z). Additional dimensions may be added by computing normals and other local or global features. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_3",
"text": " Key to our approach is the use of a single symmetric function, max pooling. Effectively the network learns a set of optimization functions/criteria that select interesting or informative points of the point cloud and encode the reason for their selection. The final fully connected layers of the network aggregate these learnt optimal values into the global descriptor for the entire shape as mentioned above (shape classification) or are used to predict per point labels (shape segmentation). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_4",
"text": " Our input format is easy to apply rigid or affine transformations to, as each point transforms independently. Thus we can add a data-dependent spatial transformer network that attempts to canonicalize the data before the PointNet processes them, so as to further improve the results. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_5",
"text": " We provide both a theoretical analysis and an experimental evaluation of our approach. We show that our network can approximate any set function that is continuous. More interestingly, it turns out that our network learns to summarize an input point cloud by a sparse set of key points, which roughly corresponds to the skeleton of objects according to visualization. The theoretical analysis provides an understanding why our PointNet is highly robust to small perturbation of input points as well as to corruption through point insertion (outliers) or deletion (missing data). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_6",
"text": " On a number of benchmark datasets ranging from shape classification, part segmentation to scene segmentation, we experimentally compare our PointNet with state-of-the-art approaches based upon multi-view and volumetric representations. Under a unified architecture, not only is our PointNet much faster in speed, but it also exhibits strong performance on par or even better than state of the art. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_7",
"text": " The key contributions of our work are as follows: • We design a novel deep net architecture suitable for consuming unordered point sets in 3D; • We show how such a net can be trained to perform 3D shape classification, shape part segmentation and scene semantic parsing tasks; • We provide thorough empirical and theoretical analysis on the stability and efficiency of our method; • We illustrate the 3D features computed by the selected neurons in the net and develop intuitive explanations for its performance. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_8",
"text": " The problem of processing unordered sets by neural nets is a very general and fundamental problem – we expect that our ideas can be transferred to other domains as well. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_9",
"text": " Most existing features for point cloud are handcrafted towards specific tasks. Point features often encode certain statistical properties of points and are designed to be invariant to certain transformations, which are typically classified as intrinsic (2, 24, 3) or extrinsic (20, 19, 14, 10, 5). They can also be categorized as local features and global features. For a specific task, it is not trivial to find the optimal feature combination. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_10",
"text": " 3D data has multiple popular representations, leading to various approaches for learning. Volumetric CNNs: (28, 17, 18) are the pioneers applying 3D convolutional neural networks on voxelized shapes. However, volumetric representation is constrained by its resolution due to data sparsity and computation cost of 3D convolution. FPNN and Vote3D proposed special methods to deal with the sparsity problem; however, their operations are still on sparse volumes, it’s challenging for them to process very large point clouds. Multiview CNNs: (23, 18) have tried to render 3D point cloud or shapes into 2D images and then apply 2D conv nets to classify them. With well engineered image CNNs, this line of methods have achieved dominating performance on shape classification and retrieval tasks . However, it’s nontrivial to extend them to scene understanding or other 3D tasks such as point classification and shape completion. Spectral CNNs: Some latest works (4, 16) use spectral CNNs on meshes. However, these methods are currently constrained on manifold meshes such as organic objects and it’s not obvious how to extend them to non-isometric shapes such as furniture. Feature-based DNNs: (6, 8) firstly convert the 3D data into a vector, by extracting traditional shape features and then use a fully connected net to classify the shape. We think they are constrained by the representation power of the features extracted. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_11",
"text": " From a data structure point of view, a point cloud is an unordered set of vectors. While most works in deep learning focus on regular input representations like sequences (in speech and language processing), images and volumes (video or 3D data), not much work has been done in deep learning on point sets. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_12",
"text": " One recent work from Oriol Vinyals et al looks into this problem. They use a read-process-write network with attention mechanism to consume unordered input sets and show that their network has the ability to sort numbers. However, since their work focuses on generic sets and NLP applications, there lacks the role of geometry in the sets. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_13",
"text": " We design a deep learning framework that directly consumes unordered point sets as inputs. A point cloud is represented as a set of 3D points {Pi|i=1,…,n}conditional-setsubscript𝑃𝑖𝑖1…𝑛\\{P_{i}|\\ i=1,...,n\\}, where each point Pisubscript𝑃𝑖P_{i} is a vector of its (x,y,z)𝑥𝑦𝑧(x,y,z) coordinate plus extra feature channels such as color, normal etc. For simplicity and clarity, unless otherwise noted, we only use the (x,y,z)𝑥𝑦𝑧(x,y,z) coordinate as our point’s channels. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_14",
"text": " For the object classification task, the input point cloud is either directly sampled from a shape or pre-segmented from a scene point cloud. Our proposed deep network outputs k𝑘k scores for all the k𝑘k candidate classes. For semantic segmentation, the input can be a single object for part region segmentation, or a sub-volume from a 3D scene for object region segmentation. Our model will output n×m𝑛𝑚n\\times m scores for each of the n𝑛n points and each of the m𝑚m semantic sub-categories. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_15",
"text": " The architecture of our network (Sec 4.2) is inspired by the properties of point sets in ℝnsuperscriptℝ𝑛\\mathbb{R}^{n} (Sec 4.1). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_16",
"text": " Our input is a subset of points from an Euclidean space. It has three main properties: ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_17",
"text": " • Unordered. Unlike pixel arrays in images or voxel arrays in volumetric grids, point cloud is a set of points without specific order. In other words, a network that consumes N𝑁N 3D point sets needs to be invariant to N!𝑁N! permutations of the input set in data feeding order. • Interaction among points. The points are from a space with a distance metric. It means that points are not isolated, and neighboring points form a meaningful subset. Therefore, the model needs to be able to capture local structures from nearby points, and the combinatorial interactions among local structures. • Invariance under transformations. As a geometric object, the learned representation of the point set should be invariant to certain transformations. For example, rotating and translating points all together should not modify the global point cloud category nor the segmentation of the points. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_18",
"text": " Our full network architecture is visualized in Fig 2, where the classification network and the segmentation network share a great portion of structures. Please read the caption of Fig 2 for the pipeline. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_19",
"text": " Our network has three key modules: the max pooling layer as a symmetric function to aggregate information from all the points, a local and global information combination structure, and two joint alignment networks that align both input points and point features. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_20",
"text": " We will discuss our reason behind these design choices in separate paragraphs below. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_21",
"text": " In order to make a model invariant to input permutation, three strategies exist: 1) sort input into a canonical order; 2) treat the input as a sequence to train an RNN, but augment the training data by all kinds of permutations; 3) use a simple symmetric function to aggregate the information from each point. Here, a symmetric function takes n𝑛n vectors as input and outputs a new vector that is invariant to the input order. For example, ++ and ∗* operators are symmetric binary functions. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_22",
"text": " While sorting sounds like a simple solution, in high dimensional space there in fact does not exist an ordering that is stable w.r.t. point perturbations in the general sense. This can be easily shown by contradiction. If such an ordering strategy exists, it defines a bijection map between a high-dimensional space and a 1d1𝑑1d real line. It is not hard to see, to require an ordering to be stable w.r.t point perturbations is equivalent to requiring that this map preserves spatial proximity as the dimension reduces, a task that cannot be achieved in the general case. Therefore, sorting does not fully resolve the ordering issue, and it’s hard for a network to learn a consistent mapping from input to output as the ordering issue persists. As shown in experiments (Fig 5), we find that applying a MLP directly on the sorted point set performs poorly, though slightly better than directly processing an unsorted input. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_23",
"text": " The idea to use RNN considers the point set as a sequential signal and hopes that by training the RNN with randomly permuted sequences, the RNN will become invariant to input order. However in “OrderMatters” the authors have shown that order does matter and cannot be totally omitted. While RNN has relatively good robustness to input ordering for sequences with small length (dozens), it’s hard to scale to thousands of input elements, which is the common size for point sets. Empirically, we have also shown that model based on RNN does not perform as well as our proposed method (Fig 5). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_24",
"text": " Our idea is to approximate a general function defined on a point set by applying a symmetric function on transformed elements in the set: f({x1,…,xn})≈g(h(x1),…,h(xn)),𝑓subscript𝑥1…subscript𝑥𝑛𝑔ℎsubscript𝑥1…ℎsubscript𝑥𝑛\\displaystyle f(\\{x_{1},\\dots,x_{n}\\})\\approx g(h(x_{1}),\\dots,h(x_{n})), (1) where f:2ℝN→ℝ:𝑓→superscript2superscriptℝ𝑁ℝf:2^{\\mathbb{R}^{N}}\\rightarrow\\mathbb{R}, h:ℝN→ℝK:ℎ→superscriptℝ𝑁superscriptℝ𝐾h:\\mathbb{R}^{N}\\rightarrow\\mathbb{R}^{K} and g:ℝK×⋯×ℝK⏟n→ℝ:𝑔→subscript⏟superscriptℝ𝐾⋯superscriptℝ𝐾𝑛ℝg:\\underbrace{\\mathbb{R}^{K}\\times\\dots\\times\\mathbb{R}^{K}}_{n}\\rightarrow\\mathbb{R} is a symmetric function. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
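A minimal PyTorch sketch of Eq. (1) as instantiated here: h is a shared per-point MLP and g is a max pool over points, which makes the output invariant to the input order. The layer widths (64, 128, 1024) are illustrative assumptions.

```python
# Sketch of Eq. (1): f({x_1..x_n}) ≈ g(h(x_1), ..., h(x_n)) with h a shared
# per-point MLP and g a max pool; widths (64, 128, 1024) are illustrative.
import torch
import torch.nn as nn

class SymmetricSetFunction(nn.Module):
    def __init__(self, in_dim=3, feat_dim=1024):
        super().__init__()
        self.h = nn.Sequential(               # applied identically to every point
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):                # points: (batch, n, 3), any order
        per_point = self.h(points)            # (batch, n, feat_dim)
        return per_point.max(dim=1).values    # g = max pool -> order-invariant

f = SymmetricSetFunction()
x = torch.rand(2, 1024, 3)
perm = torch.randperm(1024)
assert torch.allclose(f(x), f(x[:, perm]))    # invariant to input permutation
```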
{
"id": "1612.00593_all_25",
"text": " Empirically, our basic module is very simple: we approximate hℎh by a multi-layer perceptron network and g𝑔g by a composition of a single variable function and a max pooling function. This is found to work well by experiments. Through a collection of hℎh, we can learn a number of f𝑓f’s to capture different properties of the set. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_26",
"text": " While our key module seems simple, it has interesting properties (see Sec 5.3) and can achieve strong performace (see Sec 5.1) in a few different applications. Due to the simplicity of our module, we are also able to provide theoretical analysis as in Sec 4.3. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_27",
"text": " The output from the above section forms a vector (f1,…,fK)subscript𝑓1…subscript𝑓𝐾(f_{1},\\dots,f_{K}), which is a global signature of the input set. We can easily train a SVM or multi-layer perceptron classifier on the shape global features for classification. However, point segmentation requires a combination of local and global knowledge. We can achieve this by a simple yet highly effective manner. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_28",
"text": " Our solution can be seen in Fig 2 (Segmentation Network). After computing the global point cloud feature vector, we feed it back to per point features by concatenating the global feature with each of the point features. Then we extract new per point features based on the combined point features - this time the per point feature is aware of both the local and global information. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
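The local/global combination described above amounts to tiling the global feature across points and concatenating it with the per-point features before a per-point classifier. A small PyTorch sketch, with illustrative dimensions:

```python
# Sketch of the segmentation head: concatenate the global feature with every
# per-point feature, then classify each point; dimensions are illustrative.
import torch
import torch.nn as nn

n_points, local_dim, global_dim, n_parts = 1024, 64, 1024, 50
per_point = torch.rand(2, n_points, local_dim)         # per-point local features
global_feat = torch.rand(2, global_dim)                 # max-pooled global feature

tiled = global_feat.unsqueeze(1).expand(-1, n_points, -1)    # (2, n, 1024)
combined = torch.cat([per_point, tiled], dim=-1)             # (2, n, 1088)

point_classifier = nn.Sequential(
    nn.Linear(local_dim + global_dim, 256), nn.ReLU(),
    nn.Linear(256, n_parts),
)
per_point_logits = point_classifier(combined)                # (2, n, 50)
```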
{
"id": "1612.00593_all_29",
"text": " With this modification our network is able to predict per point quantities that rely on both local geometry and global semantics. For example we can accurately predict per-point normals (fig in supplementary), validating that the network is able to summarize information from the point’s local neighborhood. In experiment session, we also show that our model can achieve state-of-the-art performance on shape part segmentation and scene segmentation. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_30",
"text": " The semantic labeling of a point cloud has to be invariant if the point cloud undergoes certain geometric transformations, such as rigid transformation. We therefore expect that the learnt representation by our point set is invariant to these transformations. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_31",
"text": " A natural solution is to align all input set to a canonical space before feature extraction. Jaderberg et al. introduces the idea of spatial transformer to align 2D images through sampling and interpolation, achieved by a specifically tailored layer implemented on GPU. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_32",
"text": " Our input form of point clouds allows us to achieve this goal in a much simpler way compared with . We do not need to invent any new layers and no alias is introduced as in the image case. We predict an affine transformation matrix by a mini-network (T-net in Fig 2) and directly apply this transformation to the coordinates of input points. The mini-network itself resembles the big network and is composed by basic modules of point independent feature extraction, max pooling and fully connected layers. More details about the T-net are in the supplementary. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_33",
"text": " This idea can be further extended to the alignment of feature space, as well. We can insert another alignment network on point features and predict a feature transformation matrix to align features from different input point clouds. However, transformation matrix in the feature space has much higher dimension than the spatial transform matrix, which greatly increases the difficulty of optimization. We therefore add a regularization term to our softmax training loss. We constrain the feature transformation matrix to be close to orthogonal matrix: Lreg=‖I−AAT‖F2,subscript𝐿𝑟𝑒𝑔superscriptsubscriptnorm𝐼𝐴superscript𝐴𝑇𝐹2L_{reg}=\\|I-AA^{T}\\|_{F}^{2}, (2) where A𝐴A is the feature alignment matrix predicted by a mini-network. An orthogonal transformation will not lose information in the input, thus is desired. We find that by adding the regularization term, the optimization becomes more stable and our model achieves better performance. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
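The regularizer in Eq. (2) is straightforward to write down directly; a minimal PyTorch sketch, where A is the batch of K×K feature alignment matrices predicted by the T-net and the weighting against the classification loss is left out:

```python
# Sketch of Eq. (2): L_reg = || I - A A^T ||_F^2 for the predicted K x K
# feature alignment matrix A, added (with a weight) to the softmax loss.
import torch

def feature_transform_regularizer(A):
    """A: (batch, K, K) feature alignment matrices predicted by the T-net."""
    K = A.shape[-1]
    I = torch.eye(K, device=A.device).expand_as(A)
    diff = I - torch.bmm(A, A.transpose(1, 2))
    return (diff ** 2).sum(dim=(1, 2)).mean()      # squared Frobenius norm

A = torch.rand(8, 64, 64)
reg_loss = feature_transform_regularizer(A)        # weighted and added to CE loss
```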
{
"id": "1612.00593_all_34",
"text": " We first show the universal approximation ability of our neural network to continuous set functions. By the continuity of set functions, intuitively, a small perturbation to the input point set should not greatly change the function values, such as classification or segmentation scores. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_35",
"text": " Formally, let 𝒳={S:S⊆(0,1)m and |S|=n}𝒳conditional-set𝑆𝑆superscript01𝑚 and 𝑆𝑛\\mathcal{X}=\\{S:S\\subseteq(0,1)^{m}\\text{ and }|S|=n\\}, f:𝒳→ℝ:𝑓→𝒳ℝf:\\mathcal{X}\\rightarrow\\mathbb{R} is a continuous set function on 𝒳𝒳\\mathcal{X} w.r.t to Hausdorff distance dH(⋅,⋅)subscript𝑑𝐻⋅⋅d_{H}(\\cdot,\\cdot), i.e., ∀ϵ>0,∃δ>0formulae-sequencefor-allitalic-ϵ0𝛿0\\forall\\epsilon>0,\\exists\\delta>0, for any S,S′∈𝒳𝑆superscript𝑆′𝒳S,S^{\\prime}\\in\\mathcal{X}, if dH(S,S′)<δsubscript𝑑𝐻𝑆superscript𝑆′𝛿d_{H}(S,S^{\\prime})<\\delta, then |f(S)−f(S′)|<ϵ𝑓𝑆𝑓superscript𝑆′italic-ϵ|f(S)-f(S^{\\prime})|<\\epsilon. Our theorem says that f𝑓f can be arbitrarily approximated by our network given enough neurons at the max pooling layer, i.e., K𝐾K in (1) is sufficiently large. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_36",
"text": " The proof to this theorem can be found in our supplementary material. The key idea is that in the worst case the network can learn to convert a point cloud into a volumetric representation, by partitioning the space into equal-sized voxels. In practice, however, the network learns a much smarter strategy to probe the space, as we shall see in point function visualizations. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_37",
"text": " Theoretically and experimentally we find that the expressiveness of our network is strongly affected by the dimension of the max pooling layer, i.e., K𝐾K in (1). Here we provide an analysis, which also reveals properties related to the stability of our model. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_38",
"text": " We define 𝐮=MAXxi∈S{h(xi)}𝐮subscript𝑥𝑖𝑆MAXℎsubscript𝑥𝑖\\mathbf{u}=\\underset{x_{i}\\in S}{\\mbox{MAX}}\\{h(x_{i})\\} to be the sub-network of f𝑓f which maps a point set in (0,1)msuperscript01𝑚(0,1)^{m} to a K𝐾K-dimensional vector. The following theorem tells us that small corruptions or extra noise points in the input set are not likely to change the output of our network: ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_39",
"text": " We explain the implications of the theorem. (a) says that f(S)𝑓𝑆f(S) is unchanged up to the input corruption if all points in 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} are preserved; it is also unchanged with extra noise points up to 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S}. (b) says that 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} only contains a bounded number of points, determined by K𝐾K in (1). In other words, f(S)𝑓𝑆f(S) is in fact totally determined by a finite subset 𝒞S⊆Ssubscript𝒞𝑆𝑆\\mathcal{C}_{S}\\subseteq S of less or equal to K𝐾K elements. We therefore call 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} the critical point set of S𝑆S and K𝐾K the bottleneck dimension of f𝑓f. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
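In practice the critical point set C_S can be read off from the max pooling itself: it is the set of points that win the max in at least one feature dimension, so it has at most K elements. A small PyTorch sketch under that reading:

```python
# Sketch: recover the critical point set C_S as the points that contribute at
# least one dimension of the max-pooled feature u = MAX_{x_i in S} h(x_i).
import torch

def critical_point_indices(per_point_feats):
    """per_point_feats: (n_points, K) values of h at each point."""
    winners = per_point_feats.argmax(dim=0)      # index of the max per dimension
    return torch.unique(winners)                 # at most K distinct points

h_values = torch.rand(1024, 1024)                # n_points x K
crit_idx = critical_point_indices(h_values)
assert crit_idx.numel() <= h_values.shape[1]     # |C_S| <= K (bottleneck dim)
```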
{
"id": "1612.00593_all_40",
"text": " Combined with the continuity of hℎh, this explains the robustness of our model w.r.t point perturbation, corruption and extra noise points. The robustness is gained in analogy to the sparsity principle in machine learning models. Intuitively, our network learns to summarize a shape by a sparse set of key points. In experiment section we see that the key points form the skeleton of an object. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_41",
"text": " Experiments are divided into four parts. First, we show PointNets can be applied to multiple 3D recognition tasks (Sec 5.1). Second, we provide detailed experiments to validate our network design (Sec 5.2). At last we visualize what the network learns (Sec 5.3) and analyze time and space complexity (Sec 5.4). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_42",
"text": " In this section we show how our network can be trained to perform 3D object classification, object part segmentation and semantic scene segmentation 111More application examples such as correspondence and point cloud based CAD model retrieval are included in supplementary material.. Even though we are working on a brand new data representation (point sets), we are able to achieve comparable or even better performance on benchmarks for several tasks. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_43",
"text": " Our network learns global point cloud feature that can be used for object classification. We evaluate our model on the ModelNet40 shape classification benchmark. There are 12,311 CAD models from 40 man-made object categories, split into 9,843 for training and 2,468 for testing. While previous methods focus on volumetric and mult-view image representations, we are the first to directly work on raw point cloud. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_44",
"text": " We uniformly sample 1024 points on mesh faces according to face area and normalize them into a unit sphere. During training we augment the point cloud on-the-fly by randomly rotating the object along the up-axis and jitter the position of each points by a Gaussian noise with zero mean and 0.02 standard deviation. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_45",
"text": " In Table 1, we compare our model with previous works as well as our baseline using MLP on traditional features extracted from point cloud (point density, D2, shape contour etc.). Our model achieved state-of-the-art performance among methods based on 3D input (volumetric and point cloud). With only fully connected layers and max pooling, our net gains a strong lead in inference speed and can be easily parallelized in CPU as well. There is still a small gap between our method and multi-view based method (MVCNN ), which we think is due to the loss of fine geometry details that can be captured by rendered images. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_46",
"text": " Part segmentation is a challenging fine-grained 3D recognition task. Given a 3D scan or a mesh model, the task is to assign part category label (e.g. chair leg, cup handle) to each point or face. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_47",
"text": " We evaluate on ShapeNet part data set from , which contains 16,881 shapes from 16 categories, annotated with 50 parts in total. Most object categories are labeled with two to five parts. Ground truth annotations are labeled on sampled points on the shapes. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_48",
"text": " We formulate part segmentation as a per-point classification problem. Evaluation metric is mIoU on points. For each shape S of category C, to calculate the shape’s mIoU: For each part type in category C, compute IoU between groundtruth and prediction. If the union of groundtruth and prediction points is empty, then count part IoU as 1. Then we average IoUs for all part types in category C to get mIoU for that shape. To calculate mIoU for the category, we take average of mIoUs for all shapes in that category. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
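The per-shape mIoU computation above translates directly into code; a NumPy sketch, using the stated convention that an empty union counts the part IoU as 1:

```python
# Sketch of the per-shape part mIoU described above: average IoU over all part
# types of the shape's category, counting IoU as 1 when the union is empty.
import numpy as np

def shape_miou(pred, gt, part_ids):
    """pred, gt: (n_points,) int labels; part_ids: part labels of the category."""
    ious = []
    for p in part_ids:
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

gt = np.random.randint(0, 4, size=2048)
pred = np.random.randint(0, 4, size=2048)
print(shape_miou(pred, gt, part_ids=range(4)))
# Category mIoU = mean of shape_miou over all shapes in that category.
```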
{
"id": "1612.00593_all_49",
"text": " In this section, we compare our segmentation version PointNet (a modified version of Fig 2, Segmentation Network) with two traditional methods and that both take advantage of point-wise geometry features and correspondences between shapes, as well as our own 3D CNN baseline. See supplementary for the detailed modifications and network architecture for the 3D CNN. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_50",
"text": " In Table 2, we report per-category and mean IoU(%) scores. We observe a 2.3% mean IoU improvement and our net beats the baseline methods in most categories. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_51",
"text": " We also perform experiments on simulated Kinect scans to test the robustness of these methods. For every CAD model in the ShapeNet part data set, we use Blensor Kinect Simulator to generate incomplete point clouds from six random viewpoints. We train our PointNet on the complete shapes and partial scans with the same network architecture and training setting. Results show that we lose only 5.3% mean IoU. In Fig 3, we present qualitative results on both complete and partial data. One can see that though partial data is fairly challenging, our predictions are reasonable. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_52",
"text": " Our network on part segmentation can be easily extended to semantic scene segmentation, where point labels become semantic object classes instead of object part labels. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_53",
"text": " We experiment on the Stanford 3D semantic parsing data set . The dataset contains 3D scans from Matterport scanners in 6 areas including 271 rooms. Each point in the scan is annotated with one of the semantic labels from 13 categories (chair, table, floor, wall etc. plus clutter). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_54",
"text": " To prepare training data, we firstly split points by room, and then sample rooms into blocks with area 1m by 1m. We train our segmentation version of PointNet to predict per point class in each block. Each point is represented by a 9-dim vector of XYZ, RGB and normalized location as to the room (from 0 to 1). At training time, we randomly sample 4096 points in each block on-the-fly. At test time, we test on all the points. We follow the same protocol as to use k-fold strategy for train and test. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
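A rough NumPy sketch of the data preparation above: crop a 1m×1m block, sample 4096 points, and represent each point as a 9-dim vector of XYZ, RGB and room-normalized location. The block selection and resampling details are assumptions.

```python
# Sketch of the scene-parsing input prep: 1m x 1m blocks, 4096 points sampled
# per block, each point as a 9-dim vector (XYZ, RGB, normalized room location).
import numpy as np

def make_block_sample(room_xyz, room_rgb, origin, n_sample=4096, block=1.0):
    """room_xyz: (n, 3); room_rgb: (n, 3) in [0, 1]; origin: (x, y) block corner."""
    in_block = np.all((room_xyz[:, :2] >= origin) &
                      (room_xyz[:, :2] < origin + block), axis=1)
    xyz, rgb = room_xyz[in_block], room_rgb[in_block]
    idx = np.random.choice(len(xyz), n_sample, replace=len(xyz) < n_sample)
    xyz, rgb = xyz[idx], rgb[idx]
    room_min = room_xyz.min(0)
    room_extent = room_xyz.max(0) - room_min + 1e-8
    norm_loc = (xyz - room_min) / room_extent            # 0..1 within the room
    return np.concatenate([xyz, rgb, norm_loc], axis=1)  # (n_sample, 9)

room_xyz = np.random.rand(20000, 3) * np.array([5.0, 4.0, 3.0])
room_rgb = np.random.rand(20000, 3)
block_feats = make_block_sample(room_xyz, room_rgb, origin=np.array([1.0, 1.0]))
```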
{
"id": "1612.00593_all_55",
"text": " We compare our method with a baseline using handcrafted point features. The baseline extracts the same 9-dim local features and three additional ones: local point density, local curvature and normal. We use standard MLP as the classifier. Results are shown in Table 3, where our PointNet method significantly outperforms the baseline method. In Fig 4, we show qualitative segmentation results. Our network is able to output smooth predictions and is robust to missing points and occlusions. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_56",
"text": " Based on the semantic segmentation output from our network, we further build a 3D object detection system using connected component for object proposal (see supplementary for details). We compare with previous state-of-the-art method in Table 4. The previous method is based on a sliding shape method (with CRF post processing) with SVMs trained on local geometric features and global room context feature in voxel grids. Our method outperforms it by a large margin on the furniture categories reported. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_57",
"text": " In this section we validate our design choices by control experiments. We also show the effects of our network’s hyperparameters. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_58",
"text": " As mentioned in Sec 4.2, there are at least three options for consuming unordered set inputs. We use the ModelNet40 shape classification problem as a test bed for comparisons of those options, the following two control experiment will also use this task. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_59",
"text": " The baselines (illustrated in Fig 5) we compared with include multi-layer perceptron on unsorted and sorted points as n×3𝑛3n\\times 3 arrays, RNN model that considers input point as a sequence, and a model based on symmetry functions. The symmetry operation we experimented include max pooling, average pooling and an attention based weighted sum. The attention method is similar to that in , where a scalar score is predicted from each point feature, then the score is normalized across points by computing a softmax. The weighted sum is then computed on the normalized scores and the point features. As shown in Fig 5, max-pooling operation achieves the best performance by a large winning margin, which validates our choice. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
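The three symmetric aggregation choices compared above (max pooling, average pooling, and the attention-style weighted sum) can be sketched as follows in PyTorch; the single-scalar scoring layer for attention is an assumption about one reasonable instantiation:

```python
# Sketch of the three symmetric aggregation choices compared in Fig 5:
# max pooling, average pooling, and an attention-style weighted sum.
import torch

def aggregate(point_feats, mode="max", score_layer=None):
    """point_feats: (batch, n_points, K) per-point features."""
    if mode == "max":
        return point_feats.max(dim=1).values
    if mode == "avg":
        return point_feats.mean(dim=1)
    if mode == "attention":
        scores = score_layer(point_feats)             # (batch, n_points, 1)
        weights = torch.softmax(scores, dim=1)        # normalized across points
        return (weights * point_feats).sum(dim=1)
    raise ValueError(mode)

feats = torch.rand(4, 1024, 1024)
score_layer = torch.nn.Linear(1024, 1)
global_max = aggregate(feats, "max")
global_att = aggregate(feats, "attention", score_layer)
```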
{
"id": "1612.00593_all_60",
"text": " In Table 5 we demonstrate the positive effects of our input and feature transformations (for alignment). It’s interesting to see that the most basic architecture already achieves quite reasonable results. Using input transformation gives a 0.8%percent0.80.8\\% performance boost. The regularization loss is necessary for the higher dimension transform to work. By combining both transformations and the regularization term, we achieve the best performance. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_61",
"text": " We show our PointNet, while simple and effective, is robust to various kinds of input corruptions. We use the same architecture as in Fig 5’s max pooling network. Input points are normalized into a unit sphere. Results are in Fig 6. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_62",
"text": " As to missing points, when there are 50%percent5050\\% points missing, the accuracy only drops by 2.4%percent2.42.4\\% and 3.8%percent3.83.8\\% w.r.t. furthest and random input sampling. Our net is also robust to outlier points, if it has seen those during training. We evaluate two models: one trained on points with (x,y,z)𝑥𝑦𝑧(x,y,z) coordinates; the other on (x,y,z)𝑥𝑦𝑧(x,y,z) plus point density. The net has more than 80%percent8080\\% accuracy even when 20%percent2020\\% of the points are outliers. Fig 6 right shows the net is robust to point perturbations. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_63",
"text": " In Fig 7, we visualize critical point sets 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} and upper-bound shapes 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} (as discussed in Thm 2) for some sample shapes S𝑆S. The point sets between the two shapes will give exactly the same global shape feature f(S)𝑓𝑆f(S). ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_64",
"text": " We can see clearly from Fig 7 that the critical point sets 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S}, those contributed to the max pooled feature, summarizes the skeleton of the shape. The upper-bound shapes 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} illustrates the largest possible point cloud that give the same global shape feature f(S)𝑓𝑆f(S) as the input point cloud S𝑆S. 𝒞Ssubscript𝒞𝑆\\mathcal{C}_{S} and 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} reflect the robustness of PointNet, meaning that losing some non-critical points does not change the global shape signature f(S)𝑓𝑆f(S) at all. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_65",
"text": " The 𝒩Ssubscript𝒩𝑆\\mathcal{N}_{S} is constructed by forwarding all the points in a edge-length-2 cube through the network and select points p𝑝p whose point function values (h1(p),h2(p),⋯,hK(p))subscriptℎ1𝑝subscriptℎ2𝑝⋯subscriptℎ𝐾𝑝(h_{1}(p),h_{2}(p),\\cdots,h_{K}(p)) are no larger than the global shape descriptor. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_66",
"text": " Table 6 summarizes space (number of parameters in the network) and time (floating-point operations/sample) complexity of our classification PointNet. We also compare PointNet to a representative set of volumetric and multi-view based architectures in previous works. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_67",
"text": " While MVCNN and Subvolume (3D CNN) achieve high performance, PointNet is orders more efficient in computational cost (measured in FLOPs/sample: 141x and 8x more efficient, respectively). Besides, PointNet is much more space efficient than MVCNN in terms of #param in the network (17x less parameters). Moreover, PointNet is much more scalable – it’s space and time complexity is O(N)𝑂𝑁O(N) – linear in the number of input points. However, since convolution dominates computing time, multi-view method’s time complexity grows squarely on image resolution and volumetric convolution based method grows cubically with the volume size. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_68",
"text": " Empirically, PointNet is able to process more than one million points per second for point cloud classification (around 1K objects/second) or semantic segmentation (around 2 rooms/second) with a 1080X GPU on TensorFlow, showing great potential for real-time applications. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_69",
"text": " In this work, we propose a novel deep neural network PointNet that directly consumes point cloud. Our network provides a unified approach to a number of 3D recognition tasks including object classification, part segmentation and semantic segmentation, while obtaining on par or better results than state of the arts on standard benchmarks. We also provide theoretical analysis and visualizations towards understanding of our network. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
},
{
"id": "1612.00593_all_70",
"text": " The authors gratefully acknowledge the support of a Samsung GRO grant, ONR MURI N00014-13-1-0341 grant, NSF grant IIS-1528025, a Google Focused Research Award, a gift from the Adobe corporation and hardware donations by NVIDIA. ",
"title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation"
}
] |
What is Majority in baselines?
|
Majority is the result obtained when always selecting the most frequent label as the answer [22].
|
[
22
] |
[
{
"id": "2206.03715_all_0",
"text": " The ability to understand natural language through commonsense reasoning is one of the core focuses in the field of natural language processing. To measure and study the different aspects of commonsense reasoning, several datasets are developed, such as SocialIQA (Sap et al., 2019b), CommonsenseQA (Talmor et al., 2018), and PhysicalIQA (Bisk et al., 2020), each requiring different type of commonsense knowledge (e.g., social, taxonomic, causal, declarative, etc) to select the correct answer. While large-scale neural systems (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019b) have shown human-level accuracy on these benchmarks, recent studies (Mitra et al., 2019) also criticize that these models solve individual datasets, rather than learning how to perform general semantic reasoning. To this end, Ma et al. (2021) suggested zero-shot evaluation as a genuine measure for the reasoning capability of the machine. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_1",
"text": " Inspired by this new metric, in this work, we focus on building unsupervised zero-shot multiple-choice QA systems. That is, we target an arbitrary commonsense reasoning task where conventional approaches (that rely heavily on task-specific supervision) are not applicable to such zero-shot learning scenarios. To learn QA models without expensive annotation efforts, recent works (Ma et al., 2021; Banerjee and Baral, 2020; Malaviya et al., 2020) propose to generate a synthetic QA dataset using a commonsense KG such as ATOMIC (Sap et al., 2019a) and ConceptNet (Speer et al., 2017). Such an approach mostly focuses only on one specific type of reasoning relations (e.g., if-then relation, or declarative relation), neglecting the fact that real-world QA systems require simultaneously considering different types of reasoning abilities (e.g., declarative and social, or causal and physical reasoning; Ilievski et al., 2021; Chang et al., 2021). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_2",
"text": " To consider different types of reasoning, this paper extends ideas from the aforementioned zero-shot learning to the multi-source case such that it benefits from different types of commonsense knowledge on individual KGs. For example, ATOMIC (Sap et al., 2019a) focuses on social commonsense while ConceptNet (Speer et al., 2017) contains conceptual knowledge. A practical approach is multi-task learning (MTL; Caruana, 1997; Liu et al., 2019a), which learns a shared encoder for different synthetic QA datasets from multiple KGs. Despite its effectiveness, MTL scheme suffers from interference among different KGs, which results in forgetting previously learned knowledge when trained on new KG which has different kinds of knowledge (Pilault et al., 2021; Pfeiffer et al., 2021; Wang et al., 2021a; Wu et al., 2020). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_3",
"text": " To address these limitations, we propose a novel, modularized framework that aims to learn multiple expert models for KGs, then conduct zero-shot fusion to allow collaboration among KGs. For this purpose, we leverage AdapterFusion (Pfeiffer et al., 2021) where multiple tiny modules between Transformer blocks called adapters (Houlsby et al., 2019) can be combined after independent training, thus allowing a continual integration of the adapters without retraining the entire framework. Specifically, we treat the adapters as different KG-specific experts, and combine them using an attention-like fusion module. To improve the fusion of adapters, we suggest a KG-alignment adapter that guides to the apt expert adapters. Here, we use KGs in three different synthetic supervision training: (1) KG-specific QA datasets to train the KG-specific expert adapters, (2) a KG classification datasets to train the KG-alignment adapter, and (3) a balanced mixture of KG-specific QA datasets to train the fusion module. Our modularized method alleviates the interference between different KGs, which is the pitfall of MTL from our empirical observation, and thus combines multiple KGs into a synergetic zero-shot framework. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_4",
"text": " Our contributions are: (1) We suggest a simple, yet effective KG modularization strategy for the use of multiple KGs in commonsense reasoning. (2) We then explore the use of AdapterFusion (Pfeiffer et al., 2021) for better knowledge aggregation based on the KG modularization in zero-shot setting. We believe that such modularized transfer learning is critical to using different knowledge sources synergetically against interference between them. (3) In extensive experiments on various commonsense reasoning benchmarks, our framework achieves significant improvements over baselines using a single KG, even using multiple KGs, which implies the robustness in commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_5",
"text": " Many researchers have recently focused on building unsupervised models without any benchmark supervisions (i.e., zero-shot learning). In such zero-shot setting, KGs are often used as an external resource for improving model prior (e.g., continually learned from pre-trained language models) (Banerjee and Baral, 2020; Bosselut and Choi, 2019; Ma et al., 2021), especially for commonsense reasoning, as much existing work couples language models with neural/symbolic commonsense KGs. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_6",
"text": " However, most of existing work are either assuming the existence of the alignment information between tasks and KGs (Banerjee and Baral, 2020) or an integrated KG (Ma et al., 2021). For example, ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20} (Hwang et al., 2021), a commonsense KG which incorporates tuples from ConceptNet and ATOMIC with new relations and further crowdsourcing, combines multiple KGs into a new integrated KG, but as widely known (Ilievski et al., 2020; Hwang et al., 2021), heterogeneous schema between different KGs may limit triplets that can be integrated.111Only 172K tuples of the 3.4M tuples and 5 relations of 36 relations in ConceptNet are integrated into ATOMIC2020subscriptsuperscriptATOMIC2020\\texttt{ATOMIC}^{20}_{20}. Rather than such symbolic KG integration with the inevitable loss of knowledge, in this work, we explore the neural KG integration leveraging the multiple KGs without additional processing and alignment information between KG and task. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_7",
"text": " The idea of having specialized parameters, or so-called experts, has been widely studied to integrate multiple sources of knowledge via transfer learning. The adapter module (Rebuffi et al., 2017; Houlsby et al., 2019) has been explored as one of such approaches, introducing a small number of task-specific parameters at every layer of pre-trained language model (PLM) while sharing the parameters of underlying PLM which is fixed. To address the limitations of transfer learning due to high re-training cost, many works utilize the multiple adapter modules for individual tasks with different domains (Puigcerver et al., 2020; Bapna et al., 2019; Rücklé et al., 2020; Madotto et al., 2021) considering each adapter to be an expert of each domain. Similar to our work, K-Adapter (Wang et al., 2021a) encodes factual and linguistic knowledge to each adapter, but in this paper, we further explore how to mitigate catastrophic forgetting or interference among multiple adapters for better knowledge transfer in zero-shot setting. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_8",
"text": " MTL (Liu et al., 2019a; Zhang and Yang, 2017; Caruana, 1997) learns a shared representation while aggregating knowledge across multiple learning tasks, often leading to better generalization ability of a model. However, parametric aggregation of knowledge with MTL has following limitations: (1) retraining the full model when adding new tasks (Houlsby et al., 2019; Pfeiffer et al., 2021, 2020b) (2) catastrophic forgetting and interference between tasks leading to difficulties of solving each task equally well (Pilault et al., 2021; Wu et al., 2020; Yu et al., 2020) and (3) inconsistent effect (Lourie et al., 2021). To deal with these challenges, Mixture-of-Experts (MoE) is a parameterized generalization of ensembling techniques, which has been adapted for MTL with gating network trained to optimize each task (Ma et al., 2018). However, simple linear gating networks are too shallow and thus may destruct task knowledge for commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_9",
"text": " To address this problem, AdapterFusion (Pfeiffer et al., 2021) has been proposed to fuse task specific parameters called adapters for the given target task leveraging attention-like mechanism. AdapterFusion aggregates adapters, which is trained independently for each task, in a non-destructive manner mitigating aforementioned MTL problems such as forgetting and interference between tasks. Recently, it has been used for zero-shot cross-lingual transfer framework (Pfeiffer et al., 2020c; Wang et al., 2021b), which motivates our work to transfer multi-source knowledge with less interference for zero-shot commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_10",
"text": " In our setup, we repurpose synthetic QA generation (Ma et al., 2021) for the task of knowledge-driven zero-shot learning for commonsense reasoning, i.e., we transform a KG into multiple (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) pairs where Qisubscript𝑄𝑖Q_{i} is a natural language question and Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} is the set of options with m𝑚m answer candidates. Specifically, given a triple (ehead,r,etail)superscript𝑒ℎ𝑒𝑎𝑑𝑟superscript𝑒𝑡𝑎𝑖𝑙(e^{head},r,e^{tail}) in a KG, where eheadsuperscript𝑒ℎ𝑒𝑎𝑑e^{head}, etailsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and r𝑟r denote head/tail entity and relation respectively, we transform eheadsuperscript𝑒ℎ𝑒𝑎𝑑e^{head} and r𝑟r into a natural language question Qisubscript𝑄𝑖Q_{i} using templates. For the option set Aisubscript𝐴𝑖A_{i}, we use the combination of the correct answer etailsuperscript𝑒𝑡𝑎𝑖𝑙e^{tail} and m−1𝑚1m-1 distractors which are tail entities from other triples sampled randomly (Ma et al., 2021). Details are described in Appendix B. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
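The entry above describes how each KG triple is converted into a synthetic multiple-choice QA sample. A minimal Python sketch of that construction follows; the toy triples, the template wording, and the random distractor sampling are illustrative assumptions, not the authors' exact pipeline.

```python
import random

# Toy triple store: (head, relation, tail). Real work uses ATOMIC/ConceptNet triples.
TRIPLES = [
    ("PersonX bakes bread", "xWant", "to eat something fresh"),
    ("PersonX bakes bread", "xNeed", "to buy flour"),
    ("PersonX goes jogging", "xWant", "to stay healthy"),
    ("PersonX loses keys", "xReact", "frustrated"),
]

# Hypothetical relation-to-question templates (illustrative wording only).
TEMPLATES = {
    "xWant": "{head}. What does PersonX want to do next?",
    "xNeed": "{head}. What did PersonX need to do before?",
    "xReact": "{head}. How does PersonX feel?",
}

def make_qa_sample(triple, all_triples, m=3, rng=random):
    """Turn one (head, r, tail) triple into a (question, options, label) sample.

    The correct answer is the triple's tail; the m-1 distractors are tails
    sampled from other triples, as in the synthetic-QA recipe described above.
    """
    head, rel, tail = triple
    question = TEMPLATES[rel].format(head=head)
    distractor_pool = [t for (h, r, t) in all_triples if t != tail]
    options = rng.sample(distractor_pool, m - 1) + [tail]
    rng.shuffle(options)
    return {"question": question, "options": options, "label": options.index(tail)}

if __name__ == "__main__":
    print(make_qa_sample(TRIPLES[0], TRIPLES, m=3))
```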
{
"id": "2206.03715_all_11",
"text": " First, we modularize the KGs to preserve their intrinsic knowledge. Considering the importance of using a suitable and well-aligned KG (Ma et al., 2019, 2021) on a downstream task, the subtle difference between each KG should be learned by the model without any interference from each other. Accordingly, we adopt the adapter module (Houlsby et al., 2019) which repurposes a pre-trained language model (PLM) to incorporate each KG as tiny modules in between Transformer blocks. Specifically, as illustrated in Figure 2 (except for green area), the adapter training strategy involves injecting new layers (parameterized by ΦΦ\\Phi) into the original PLM (parameterized by θ𝜃\\theta). The weights of the original PLM are untouched, while the new adapter layers are initialized at random. Formally, we call each adapter trained with 𝒟QAksubscriptsuperscript𝒟𝑘𝑄𝐴\\mbox{${\\cal D}$}^{k}_{QA} as an expert adapter for KG k𝑘k, parameterized by ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_12",
"text": " When a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) is given for dataset 𝒟QAksuperscriptsubscript𝒟𝑄𝐴𝑘\\mbox{${\\cal D}$}_{QA}^{k}, we first concatenate question Qisubscript𝑄𝑖Q_{i} and each answer option Ai={Ai,1,…,Ai,m}subscript𝐴𝑖subscript𝐴𝑖1…subscript𝐴𝑖𝑚A_{i}=\\{A_{i,1},...,A_{i,m}\\} to generate input sequences Ti={Ti,1,…,Ti,m}subscript𝑇𝑖subscript𝑇𝑖1…subscript𝑇𝑖𝑚T_{i}=\\{T_{i,1},...,T_{i,m}\\}. Then, we compute a score Si,jsubscript𝑆𝑖𝑗S_{i,j} (Ma et al., 2021) for the answer candidate Ai,jsubscript𝐴𝑖𝑗A_{i,j} is computed as follows: Si,j=−1|Ti,j|∑t=1|Ti,j|logP(wt|…wt−1,wt+1…;θ,Φ)subscript𝑆𝑖𝑗1subscript𝑇𝑖𝑗superscriptsubscript𝑡1subscript𝑇𝑖𝑗𝑙𝑜𝑔𝑃conditionalsubscript𝑤𝑡…subscript𝑤𝑡1subscript𝑤𝑡1…𝜃ΦS_{i,j}=-\\frac{1}{|T_{i,j}|}\\sum_{t=1}^{|T_{i,j}|}logP(w_{t}|...w_{t-1},w_{t+1}...;\\theta,\\Phi) (2) where wtsubscript𝑤𝑡w_{t} is a word token in the sequence Ti,jsubscript𝑇𝑖𝑗T_{i,j} and P𝑃P is the conditional probability from Transformer blocks parameterized by θ𝜃\\theta and ΦΦ\\Phi. To train the adapter ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, we use the marginal ranking loss (Ma et al., 2021) as follows: ℒQA=1m∑i=1Nk∑j=1j≠labelmmax(0,η−Si,label+Si,j)subscriptℒ𝑄𝐴1𝑚superscriptsubscript𝑖1subscript𝑁𝑘superscriptsubscript𝑗1𝑗𝑙𝑎𝑏𝑒𝑙𝑚𝑚𝑎𝑥0𝜂subscript𝑆𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑆𝑖𝑗\\mbox{${\\cal L}$}_{QA}=\\frac{1}{m}\\sum_{i=1}^{N_{k}}\\sum_{\\begin{subarray}{c}j=1\\\\ j\\neq label\\end{subarray}}^{m}max(0,\\eta-S_{i,label}+S_{i,j}) (3) where η𝜂\\eta represents the margin. ΦQAk←argminΦℒQA(𝒟QAk;θ,Φ)←superscriptsubscriptΦ𝑄𝐴𝑘subscriptargminΦsubscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃Φ\\Phi_{QA}^{k}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\Phi) (4) where KG-invariant parameters θ𝜃\\theta are fixed and only KG-dependent parameters ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k} are learned, which enables to store the corresponding knowledge separately without any interference. Further, we can parallelize the training of adapter for all KGs. The efficiency of adapter training allows our modularization to be more scalable. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
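The entry above defines the candidate score of Eq. (2) and the marginal ranking loss of Eq. (3). The numpy sketch below evaluates both exactly as written, with random token log-probabilities standing in for the output of the adapter-equipped masked LM.

```python
import numpy as np

def candidate_scores(token_logprobs_per_option):
    """Eq. (2): score each option as the negated, length-normalized sum of
    its token log-probabilities under the (adapter-equipped) masked LM."""
    return np.array([-np.mean(lp) for lp in token_logprobs_per_option])

def marginal_ranking_loss(scores, label, eta=1.0):
    """Eq. (3), as written above, for a single question with m options."""
    m = len(scores)
    losses = [max(0.0, eta - scores[label] + scores[j]) for j in range(m) if j != label]
    return sum(losses) / m

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in token log-probs for m=3 options of different lengths.
    fake_logprobs = [rng.normal(-3.0, 0.5, size=n) for n in (7, 9, 6)]
    S = candidate_scores(fake_logprobs)
    print("scores:", S, "loss:", marginal_ranking_loss(S, label=0))
```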
{
"id": "2206.03715_all_13",
"text": " Once the expert adapters are learned, we combine the knowledge from each expert adapter using an attention-like mechanism. We present a novel fusion strategy as shown in Figure 2, which is referred to as the zero-shot fusion. In contrast to AdapterFusion (Pfeiffer et al., 2021) where the focus is learning to transfer knowledge to a specific target task, our zero-shot fusion aims to generalize this transfer to any arbitrary target task. Specifically, the zero-shot fusion parameters ΨΨ\\Psi learn to combine fixed expert adapters which are parameterized by ΦQA1,…,ΦQAKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}. In each Transformer layer l𝑙l of PLM with the injected fusion layer, the zero-shot fusion parameters ΨQAsubscriptΨ𝑄𝐴\\Psi_{QA} consist of query, key, and value matrices, denoted by WlQsuperscriptsubscriptW𝑙𝑄\\textbf{W}_{l}^{Q}, WlKsuperscriptsubscriptW𝑙𝐾\\textbf{W}_{l}^{K}, and WlVsuperscriptsubscriptW𝑙𝑉\\textbf{W}_{l}^{V} respectively. These parameters are used to learn the balancing between the representation of each expert adapters through attention-like mechanism. While fixing both the parameters θ𝜃\\theta and all expert adapters ΦQA1,…,ΦQAKsuperscriptsubscriptΦ𝑄𝐴1…superscriptsubscriptΦ𝑄𝐴𝐾\\Phi_{QA}^{1},...,\\Phi_{QA}^{K}, the only trainable weights ΨQAsubscriptΨ𝑄𝐴\\Psi_{QA} on the fusion layer learns to combine the knowledge from different K𝐾K expert adapters by using the subset of {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K} by random sampling. Here, we balance the ratio between the K𝐾K knowledge-driven datasets as N𝑁N samples (details are in Appendix D). Formally, ΨQA←argminΨ∑k=1KℒQA(𝒟QAk;θ,{ΦQAk}k=1K,Ψ)←subscriptΨ𝑄𝐴subscriptargminΨsuperscriptsubscript𝑘1𝐾subscriptℒ𝑄𝐴subscriptsuperscript𝒟𝑘𝑄𝐴𝜃superscriptsubscriptsuperscriptsubscriptΦ𝑄𝐴𝑘𝑘1𝐾Ψ\\Psi_{QA}\\leftarrow\\operatorname*{argmin}_{\\Psi}\\sum_{k=1}^{K}\\mbox{${\\cal L}$}_{QA}(\\mathcal{D}^{k}_{QA};\\theta,\\{\\Phi_{QA}^{k}\\}_{k=1}^{K},\\Psi) (5) where ΨΨ\\Psi refers to the initialized zero-shot fusion parameters. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_14",
"text": " More specifically, in the l𝑙l-th Transformer layer, let hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} and hEk,lsuperscriptsubscriptℎ𝐸𝑘𝑙h_{E}^{k,l} be the representations of underlying PLM parameterized by θ𝜃\\theta and an expert adapter parameterized by ΦQAksuperscriptsubscriptΦ𝑄𝐴𝑘\\Phi_{QA}^{k}, respectively. Then, using the hidden representation hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} of PLM as a query, the fusion layer performs the attention-like function as follows: Kl,VlsubscriptK𝑙subscriptV𝑙\\displaystyle\\textbf{K}_{l},\\textbf{V}_{l} =(hE1,l,…,hEK,l)absentsuperscriptsubscriptℎ𝐸1𝑙…superscriptsubscriptℎ𝐸𝐾𝑙\\displaystyle=(h_{E}^{1,l},...,h_{E}^{K,l}) (6) QlsubscriptQ𝑙\\displaystyle\\textbf{Q}_{l} =hPLMlabsentsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙\\displaystyle=h_{PLM}^{l} (7) zlsubscriptz𝑙\\displaystyle\\textbf{z}_{l} =Attention(QlWlQ,KlWlK,VlWlV)absentAttentionsubscriptQ𝑙superscriptsubscriptW𝑙𝑄subscriptK𝑙superscriptsubscriptW𝑙𝐾subscriptV𝑙superscriptsubscriptW𝑙𝑉\\displaystyle=\\text{Attention}(\\textbf{Q}_{l}\\textbf{W}_{l}^{Q},\\textbf{K}_{l}\\textbf{W}_{l}^{K},\\textbf{V}_{l}\\textbf{W}_{l}^{V}) (8) where zlsubscriptz𝑙\\textbf{z}_{l} is passed to the next Transformer layer. Given a sample, the zero-shot fusion learns the suitable balancing parameters between the expert adapters for zero-shot reasoning. Eventually, it learns to identify generalizability across commonsense reasoning tasks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
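The entry above specifies the attention-like fusion of Eqs. (6)-(8). A single-token numpy sketch is shown below; the dimensions, the random weights, and the scaled dot-product softmax are illustrative assumptions about details the text leaves open.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def zero_shot_fusion_layer(h_plm, h_experts, Wq, Wk, Wv):
    """Fuse K expert-adapter representations for one token (Eqs. 6-8).

    h_plm:     (d,)   hidden state of the underlying PLM (the query)
    h_experts: (K, d) hidden states from the K expert adapters (keys/values)
    """
    q = h_plm @ Wq                                 # (d,)
    k = h_experts @ Wk                             # (K, d)
    v = h_experts @ Wv                             # (K, d)
    attn = softmax(k @ q / np.sqrt(q.shape[0]))    # (K,) weights over experts
    z = attn @ v                                   # (d,) fused output passed onward
    return z, attn

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K = 8, 4
    z, attn = zero_shot_fusion_layer(
        rng.normal(size=d), rng.normal(size=(K, d)),
        rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d)))
    print("expert weights:", attn.round(3))
```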
{
"id": "2206.03715_all_15",
"text": " AdapterFusion uses the PLM hidden representation hPLMlsuperscriptsubscriptℎ𝑃𝐿𝑀𝑙h_{PLM}^{l} as a query which is learned when training on a specific downstream task. In our zero-shot setting, however, we use a mixture of synthetic QA for fusion training, which is not exactly a training dataset for a downstream task. To compensate for this issue, we present KG-Classifier adapter, which is a KG alignment-aware adapter, which is motivated from the fact that the ability to find which KG has an alignment with the given sample can be helpful as a role of providing a guidance for better performance (Ma et al., 2019, 2021). ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_16",
"text": " Specifically, we propose a novel training task for KG-Classifier adapter, which requires predicting the KG for the given sample of the task. For that, given {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}, we first transform a QA sample (Qi,Ai)subscript𝑄𝑖subscript𝐴𝑖(Q_{i},A_{i}) into a new KG classification sample (Qi;Ai,label)subscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙(Q_{i};A_{i,label}) where (;)(;) is the concatenation. Then, we obtain a new label yi∈{0,1}Ksubscript𝑦𝑖superscript01𝐾y_{i}\\in\\{0,1\\}^{K} indicating the corresponding KG source. The samples are in Appendix E. Formally, KG classification dataset 𝒟KGCsubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC} is defined as: 𝒟KGC={((Qi;Ai,label),yi)}i=1Msubscript𝒟𝐾𝐺𝐶superscriptsubscriptsubscript𝑄𝑖subscript𝐴𝑖𝑙𝑎𝑏𝑒𝑙subscript𝑦𝑖𝑖1𝑀\\mbox{${\\cal D}$}_{KGC}=\\{((Q_{i};A_{i,label}),y_{i})\\}_{i=1}^{M} (9) where M𝑀M is the total size of {𝒟QAk}k=1Ksuperscriptsubscriptsuperscriptsubscript𝒟𝑄𝐴𝑘𝑘1𝐾\\{\\mbox{${\\cal D}$}_{QA}^{k}\\}_{k=1}^{K}. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_17",
"text": " Based on 𝒟KGCsubscript𝒟𝐾𝐺𝐶\\mbox{${\\cal D}$}_{KGC}, we learn the KG-Classifier adapter parameterized by θ𝜃\\theta and ΦKGCsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC}. First, a classification sample i𝑖i is encoded into hCLS∈ℝHsubscriptℎ𝐶𝐿𝑆superscriptℝ𝐻h_{CLS}\\in\\mathbb{R}^{H} then scored as y^i∈ℝKsubscript^𝑦𝑖superscriptℝ𝐾\\hat{y}_{i}\\in\\mathbb{R}^{K} with a linear layer WKGC∈ℝK×Hsubscript𝑊𝐾𝐺𝐶superscriptℝ𝐾𝐻W_{KGC}\\in\\mathbb{R}^{K\\times H}, i.e., y^i=WKGChCLSsubscript^𝑦𝑖subscript𝑊𝐾𝐺𝐶subscriptℎ𝐶𝐿𝑆\\hat{y}_{i}=W_{KGC}h_{CLS}. Once y^isubscript^𝑦𝑖\\hat{y}_{i} is normalized by a softmax layer, the network is trained to minimize the cross-entropy loss ℒKGCsubscriptℒ𝐾𝐺𝐶\\mbox{${\\cal L}$}_{KGC} between the prediction y^isubscript^𝑦𝑖\\hat{y}_{i} and its ground truth yisubscript𝑦𝑖y_{i}: ΦKGC←argminΦ∑i=1MℒKGC(yi,y^i;θ,Φ)←subscriptΦ𝐾𝐺𝐶subscriptargminΦsuperscriptsubscript𝑖1𝑀subscriptℒ𝐾𝐺𝐶subscript𝑦𝑖subscript^𝑦𝑖𝜃Φ\\Phi_{KGC}\\leftarrow\\operatorname*{argmin}_{\\Phi}\\sum_{i=1}^{M}\\mbox{${\\cal L}$}_{KGC}(y_{i},\\hat{y}_{i};\\theta,\\Phi) (10) ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
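The entry above defines the KG-Classifier objective of Eqs. (9)-(10): a linear layer maps an encoded sample to K logits, which are softmax-normalized and trained with cross-entropy against the KG-source label. A small numpy sketch, with a random vector standing in for the RoBERTa-L [CLS] representation:

```python
import numpy as np

def kg_classifier_loss(h_cls, W_kgc, y_onehot):
    """Score y_hat = W_kgc @ h_cls, softmax-normalize, and return the
    cross-entropy against the KG-source label (Eq. 10)."""
    logits = W_kgc @ h_cls                        # (K,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -float(np.sum(y_onehot * np.log(probs + 1e-12))), probs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H, K = 16, 4                                  # hidden size, number of KGs
    h_cls = rng.normal(size=H)                    # stand-in for the encoded (Q_i; A_i,label)
    W_kgc = rng.normal(size=(K, H)) * 0.1
    y = np.eye(K)[2]                              # sample drawn from the 3rd KG
    loss, probs = kg_classifier_loss(h_cls, W_kgc, y)
    print("loss:", round(loss, 3), "predicted KG distribution:", probs.round(3))
```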
{
"id": "2206.03715_all_18",
"text": " We propose to use the representation of KG-Classifier adapter as a query in attention-like mechanism, referred to as the zero-shot fusion with KG-Classifier adapter. That is, using the hidden representation hKGClsuperscriptsubscriptℎ𝐾𝐺𝐶𝑙h_{KGC}^{l} of a KG-Classifier adapter parameterized by ΦKGCsubscriptΦ𝐾𝐺𝐶\\Phi_{KGC} as a query, we substitute QlsubscriptQ𝑙\\textbf{Q}_{l} in Eq. (11) as follows: Ql=hKGClsubscriptQ𝑙superscriptsubscriptℎ𝐾𝐺𝐶𝑙\\textbf{Q}_{l}=h_{KGC}^{l} (11) The overall zero-shot fusion architecture including KG-Classifier is illustrated in Figure 2. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_19",
"text": " In this section we evaluate the efficacy of our framework on five commonsense reasoning tasks. We denote KG-Classifier adapter by KG-C adapter. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_20",
"text": " All our experiments are conducted in a zero-shot setting, in which the models do not have access to the official training data or labels of the benchmark. For the evaluation, we use the validation set of each benchmark222Since the official test sets are not publicly available, however, the validation set of each benchmark can be role as an test set since it is not used for hyperparameter tuning or model selection. We use accuracy as a metric. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_21",
"text": " We evaluate our proposed framework on five question-answering benchmarks for commonsense reasoning: SocialIQA (SIQA) (Sap et al., 2019b), CommonsenseQA (CSQA) (Talmor et al., 2018), Abductive NLI (a-NLI) (Bhagavatula et al., 2020), PhysicalIQA (PIQA) (Bisk et al., 2020), and WinoGrande (WG) (Sakaguchi et al., 2020). Each commonsense benchmark evaluates a specific kind of knowledge: social commonsense for SIQA, concept-level commonsense for CSQA, abductive reasoning for a-NLI, physical commonsense for PIQA, and pronoun resolution ability for WG.333Some benchmarks have a strong alignment with a certain KG due to its construction strategy: SIQA-ATOMIC, and CSQA-ConceptNet. To make a direct comparison with Ma et al. (2021), we use the same KGs to generate data samples. The details are presented in Appendix G. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_22",
"text": " We compare our framework with the following baselines. First, to show the characteristics of each benchmark, we use the random or the most frequent label as Random and Majority baseline, respectively. RoBERTa-L and GPT2-L is the performance of each PLM without any fine-tuning. Also, as the baseline for the unsupervised learning model using KGs, we report the performance of Self-talk (Shwartz et al., 2020), COMET-DynaGen (Bosselut and Choi, 2019), SMLM (Banerjee and Baral, 2020) as presented in original papers. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_23",
"text": " For further analysis in §§\\S4.4 and §§\\S4.5, we set the following models that are pre-trained on the synthetic QA datasets from KGs as baselines: • Single-Task Learning (STL): The model is pre-trained on a synthetic QA dataset generated from a single KG. Specifically, we experiment two architectural choices: PLM (STL-PLM) and PLM with adapters (STL-Adapter). For each architecture, there are four STL models for each of synthetic QA datasets derived from ATOMIC, ConceptNet, WikiData, and WordNet. We note that the trained STL-Adapter is an expert adapter from a specific KG in our framework. The performance of each STL baseline is shown in Appendix I Table 9 and Table 10. • Multi-Task Learning (MTL): The model is pre-trained on multiple synthetic QA datasets, each of which is generated from a KG. We experiment with a PLM trained on all four aforementioned synthetic QA datasets. We note that the difference between STL-PLM and MTL is whether to use one synthetic QA dataset or multiple synthetic QA datasets for its training. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_24",
"text": " We employ RoBERTa-L (Liu et al., 2019b) from Hugging Face’s transformers toolkit for all experiments. We follow the default settings from Ma et al. (2021). Our implementation uses Adapter (Houlsby et al., 2019) and AdapterFusion (Pfeiffer et al., 2021) as a base model architecture from AdpaterHub (Pfeiffer et al., 2020a). We run our experiments with three different random seeds. The implementation details are described in Appendix H. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_25",
"text": " Table 2 shows the zero-shot evaluation results on five benchmark datasets. Generally, zero-shot fusion scores higher than the baselines across all benchmarks, and further, zero-shot fusion shows the best performance in all benchmarks except WG. We note that although Ma et al. (2021) uses the synthetic QA dataset after sample filtering, our method achieves comparable performance with the best performance in WG, even with the raw dataset. Also, the average score of all evaluation benchmarks (the last column of Table 2) shows that zero-shot fusion has generalisability in commonsense reasoning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_26",
"text": " In addition, zero-shot fusion achieves consistent improvements over MTL. These results indicate that our proposed zero-shot fusion method attributes to fusing the knowledge of multiple KGs more synergetically regardless of the task. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_27",
"text": " Moreover, as an ablation, we compare the zero-shot fusion with and without KG-C adapter to explore the efficacy of the KG-C adapter. We can observe that zero-shot fusion with KG-C adapter improves the average accuracy by 0.4%, which implies that the use of KG-C adapter improves the overall performance and makes our method generalize better on most of the evaluation benchmarks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_28",
"text": " To assess the effects of the KG-C adapter itself, we visualize and compare the final layer (CLS) token representation between PLM and KG-C adapter. Figure 3 shows t-SNE (Van der Maaten and Hinton, 2008) plots of all representation of five benchmark datasets. In this figure, every sample is mapped into a 1024-dimensional feature space through RoBERTa-L model and projected back into a two-dimensional plane by t-SNE. We can observe that KG-C adapter can separate the samples of different benchmarks well despite being unseen data. It verifies that KG-awareness acquired with the KG classification task is beneficial to categorize the given sample. The KG-C adapter can thus generate a relevant KG-aware query for a given sample and help to fuse representations from suitable expert adapters in our proposed framework. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_29",
"text": " Further, we explore how the KG-C adapter affects zero-shot fusion which is based on an attention-like mechanism (Pfeiffer et al., 2021) compared to zero-shot fusion without KG-C adapter. Here, while zero-shot fusion without KG-C adapter simply uses the representation of PLM as a query, zero-shot fusion with KG-C adapter leverages the representation of KG-C adapter. To illustrate this strength, we visualize the attention probability of (CLS) token from each fusion layer as a representative in Figure 4. The column of the darker cell indicates the adapter that has the bigger influence on the fused representation. We can observe that zero-shot fusion with KG-C adapter fuses the knowledge from different experts with a subtle difference rather than focusing on a single expert severely. This implies that KG-C adapter enables the delicate balancing between multiple knowledge sources based on the KG-alignment awareness, which leads to performance improvements in commonsense reasoning tasks. Interestingly, both cases have the ability not to focus on the expert adapter based on WikiData, which can be seen as a redundant expert.444The zero-shot fusion with KG-C adapter using AT, CN, and WN shows the best average performance in Table 10. This observation would benefit from the further study that explores the optimal combination of KGs by expert selection or rejection. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_30",
"text": " In this experiment, we compare the amount of interference in the MTL and zero-shot fusion with KG-C adapter. We propose a novel evaluation metric, the interference ratio, which is the percentage of the incorrectly predicted samples by the multi-KG models among the correctly predicted samples from the STL models in common. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_31",
"text": " Using the interference ratio, we can precisely compare the negative effects of multi-KG models on knowledge aggregation since the only reason to get the correct samples wrong is the interference caused by learning with additional KGs. We present the interference ratio of the models on five benchmark datasets in Figure 5. This figure shows that MTL has the higher interference ratio than the competing models across all benchmarks. Our method achieves a substantially better ratio, especially when KG-C adapter is used. This demonstrates the efficacy of our framework in mitigating interference between knowledge, which is one of the major problems of MTL. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_32",
"text": " To verify the ability of our model to aggregate different types of KGs, we compare the relative performance gains of MTL and zero-shot fusion with KG-C adapter when increasing the number of KGs. The performance of all KG-combinations for each framework is presented in Table 9 and Table 10. We visualize the improvement of performance for five benchmark development sets, leveraging heatmaps in Figure 6. Here, for the sake of brevity, we denote our framework with KG-C adapter as our method. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_33",
"text": " For MTL in Figure 6 (a), the color of the cell denotes the relative improvement of MTL with the combination of KGs over the best performance among the STL-PLM of KGs. Also, for our method in Figure 6 (b), the relative improvement is measured based on the best performance among the STL-Adapter of KGs, considering the difference of the base architecture for MTL (i.e. PLM) and zero-shot fusion (i.e. PLM with adapter). The green and red colors denote the increase and decrease of performance, respectively, when using multiple KGs together. The greener color on the cells indicates that the approach benefits from an increasing number of KGs, which implies aggregating knowledge successfully. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_34",
"text": " In Figure 6, while the MTL tends to show the decrease of the performance when more KGs are utilized for training, our method obtains relative performance improvement across most of benchmarks. In both framework, the slightly degraded performance of the combination of KGs without ATOMIC could be due to the strong alignment between ATOMIC and SIQA. Except for the above case, we can observe that as more KGs are leveraged, the color of the cell gets greener, which implies that our method gains more advantages for better performance. This demonstrates that our method enables knowledge aggregation for multiple KGs synergetically. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_35",
"text": " Despite the existence of various types of commonsense KGs, utilizing multiple KGs has not been explored enough in the commonsense reasoning field. Motivated by this, this paper proposes a modularized transfer learning framework to fuse the knowledge from multiple KGs efficiently for zero-shot commonsense reasoning. Our framework consists of KG modularization for expert adapter, zero-shot fusion and KG-Classifier adapter. Extensive experiments show that our framework obtains strong improvements over MTL on five commonsense reasoning benchmarks. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
},
{
"id": "2206.03715_all_36",
"text": " In the future, our work can be extended to adapt our methods to further various multiple KGs with studies of appropriate scale for KG modularization. In addition, based on our hypothesis that the existence of an optimal combination, we can explore the study for the optional use of modularized KG experts for the best transfer learning. ",
"title": "Modularized Transfer Learning with Multiple Knowledge Graphs for Zero-shot Commonsense Reasoning"
}
] |
Aren't YOLO9000 and YOLOv2 essentially the same thing? Why make the distinction?
|
YOLOv2 is the improvement over the base YOLO detection system [5]. YOLO9000 further improves YOLOv2 by using a WordTree to combine data from various sources and a joint optimization technique to train simultaneously on ImageNet and COCO [70].
|
[
5,
70
] |
[
{
"id": "1612.08242_all_0",
"text": " General purpose object detection should be fast, accurate, and able to recognize a wide variety of objects. Since the introduction of neural networks, detection frameworks have become increasingly fast and accurate. However, most detection methods are still constrained to a small set of objects. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_1",
"text": " Current object detection datasets are limited compared to datasets for other tasks like classification and tagging. The most common detection datasets contain thousands to hundreds of thousands of images with dozens to hundreds of tags . Classification datasets have millions of images with tens or hundreds of thousands of categories . ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_2",
"text": " We would like detection to scale to level of object classification. However, labelling images for detection is far more expensive than labelling for classification or tagging (tags are often user-supplied for free). Thus we are unlikely to see detection datasets on the same scale as classification datasets in the near future. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_3",
"text": " We propose a new method to harness the large amount of classification data we already have and use it to expand the scope of current detection systems. Our method uses a hierarchical view of object classification that allows us to combine distinct datasets together. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_4",
"text": " We also propose a joint training algorithm that allows us to train object detectors on both detection and classification data. Our method leverages labeled detection images to learn to precisely localize objects while it uses classification images to increase its vocabulary and robustness. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_5",
"text": " Using this method we train YOLO9000, a real-time object detector that can detect over 9000 different object categories. First we improve upon the base YOLO detection system to produce YOLOv2, a state-of-the-art, real-time detector. Then we use our dataset combination method and joint training algorithm to train a model on more than 9000 classes from ImageNet as well as detection data from COCO. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_6",
"text": " All of our code and pre-trained models are available online at http://pjreddie.com/yolo9000/. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_7",
"text": " YOLO suffers from a variety of shortcomings relative to state-of-the-art detection systems. Error analysis of YOLO compared to Fast R-CNN shows that YOLO makes a significant number of localization errors. Furthermore, YOLO has relatively low recall compared to region proposal-based methods. Thus we focus mainly on improving recall and localization while maintaining classification accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_8",
"text": " Computer vision generally trends towards larger, deeper networks . Better performance often hinges on training larger networks or ensembling multiple models together. However, with YOLOv2 we want a more accurate detector that is still fast. Instead of scaling up our network, we simplify the network and then make the representation easier to learn. We pool a variety of ideas from past work with our own novel concepts to improve YOLO’s performance. A summary of results can be found in Table 2. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_9",
"text": " Batch Normalization. Batch normalization leads to significant improvements in convergence while eliminating the need for other forms of regularization . By adding batch normalization on all of the convolutional layers in YOLO we get more than 2% improvement in mAP. Batch normalization also helps regularize the model. With batch normalization we can remove dropout from the model without overfitting. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_10",
"text": " High Resolution Classifier. All state-of-the-art detection methods use classifier pre-trained on ImageNet . Starting with AlexNet most classifiers operate on input images smaller than 256×256256256256\\times 256 . The original YOLO trains the classifier network at 224×224224224224\\times 224 and increases the resolution to 448448448 for detection. This means the network has to simultaneously switch to learning object detection and adjust to the new input resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_11",
"text": " For YOLOv2 we first fine tune the classification network at the full 448×448448448448\\times 448 resolution for 10 epochs on ImageNet. This gives the network time to adjust its filters to work better on higher resolution input. We then fine tune the resulting network on detection. This high resolution classification network gives us an increase of almost 4% mAP. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_12",
"text": " Convolutional With Anchor Boxes. YOLO predicts the coordinates of bounding boxes directly using fully connected layers on top of the convolutional feature extractor. Instead of predicting coordinates directly Faster R-CNN predicts bounding boxes using hand-picked priors . Using only convolutional layers the region proposal network (RPN) in Faster R-CNN predicts offsets and confidences for anchor boxes. Since the prediction layer is convolutional, the RPN predicts these offsets at every location in a feature map. Predicting offsets instead of coordinates simplifies the problem and makes it easier for the network to learn. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_13",
"text": " We remove the fully connected layers from YOLO and use anchor boxes to predict bounding boxes. First we eliminate one pooling layer to make the output of the network’s convolutional layers higher resolution. We also shrink the network to operate on 416416416 input images instead of 448×448448448448\\times 448. We do this because we want an odd number of locations in our feature map so there is a single center cell. Objects, especially large objects, tend to occupy the center of the image so it’s good to have a single location right at the center to predict these objects instead of four locations that are all nearby. YOLO’s convolutional layers downsample the image by a factor of 32 so by using an input image of 416416416 we get an output feature map of 13×13131313\\times 13. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_14",
"text": " When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box and the class predictions predict the conditional probability of that class given that there is an object. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_15",
"text": " Using anchor boxes we get a small decrease in accuracy. YOLO only predicts 98 boxes per image but with anchor boxes our model predicts more than a thousand. Without anchor boxes our intermediate model gets 69.569.569.5 mAP with a recall of 81%percent8181\\%. With anchor boxes our model gets 69.269.269.2 mAP with a recall of 88%percent8888\\%. Even though the mAP decreases, the increase in recall means that our model has more room to improve. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_16",
"text": " Dimension Clusters. We encounter two issues with anchor boxes when using them with YOLO. The first is that the box dimensions are hand picked. The network can learn to adjust the boxes appropriately but if we pick better priors for the network to start with we can make it easier for the network to learn to predict good detections. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_17",
"text": " Instead of choosing priors by hand, we run k-means clustering on the training set bounding boxes to automatically find good priors. If we use standard k-means with Euclidean distance larger boxes generate more error than smaller boxes. However, what we really want are priors that lead to good IOU scores, which is independent of the size of the box. Thus for our distance metric we use: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_18",
"text": " d(box,centroid)=1−IOU(box,centroid)𝑑boxcentroid1IOUboxcentroidd(\\text{box},\\text{centroid})=1-\\text{IOU}(\\text{box},\\text{centroid}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
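The two entries above describe dimension clustering with d(box, centroid) = 1 - IOU(box, centroid). A compact numpy sketch follows, assuming boxes are represented by (width, height) only and that IOU is computed as if all boxes shared a corner; the initialization and stopping rule are simplifications.

```python
import numpy as np

def wh_iou(wh, centroids):
    """IOU between (w, h) boxes and centroids, ignoring box positions."""
    inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centroids[None, :, 1])
    union = wh[:, 0:1] * wh[:, 1:2] + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_priors(wh, k=5, iters=50, seed=0):
    """k-means over box shapes with d(box, centroid) = 1 - IOU(box, centroid)."""
    rng = np.random.default_rng(seed)
    centroids = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(wh_iou(wh, centroids), axis=1)   # min distance = max IOU
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = wh[assign == j].mean(axis=0)
    return centroids

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    boxes = rng.uniform(0.02, 0.9, size=(500, 2))   # toy (w, h) pairs, relative units
    print(kmeans_priors(boxes, k=5).round(3))
```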
{
"id": "1612.08242_all_19",
"text": " We run k-means for various values of k𝑘k and plot the average IOU with closest centroid, see Figure 2. We choose k=5𝑘5k=5 as a good tradeoff between model complexity and high recall. The cluster centroids are significantly different than hand-picked anchor boxes. There are fewer short, wide boxes and more tall, thin boxes. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_20",
"text": " We compare the average IOU to closest prior of our clustering strategy and the hand-picked anchor boxes in Table 1. At only 5 priors the centroids perform similarly to 9 anchor boxes with an average IOU of 61.0 compared to 60.9. If we use 9 centroids we see a much higher average IOU. This indicates that using k-means to generate our bounding box starts the model off with a better representation and makes the task easier to learn. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_21",
"text": " Direct location prediction. When using anchor boxes with YOLO we encounter a second issue: model instability, especially during early iterations. Most of the instability comes from predicting the (x,y)𝑥𝑦(x,y) locations for the box. In region proposal networks the network predicts values txsubscript𝑡𝑥t_{x} and tysubscript𝑡𝑦t_{y} and the (x,y)𝑥𝑦(x,y) center coordinates are calculated as: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_22",
"text": " x𝑥\\displaystyle x =(tx∗wa)−xaabsentsubscript𝑡𝑥subscript𝑤𝑎subscript𝑥𝑎\\displaystyle=(t_{x}*w_{a})-x_{a} y𝑦\\displaystyle y =(ty∗ha)−yaabsentsubscript𝑡𝑦subscriptℎ𝑎subscript𝑦𝑎\\displaystyle=(t_{y}*h_{a})-y_{a} ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_23",
"text": " For example, a prediction of tx=1subscript𝑡𝑥1t_{x}=1 would shift the box to the right by the width of the anchor box, a prediction of tx=−1subscript𝑡𝑥1t_{x}=-1 would shift it to the left by the same amount. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_24",
"text": " This formulation is unconstrained so any anchor box can end up at any point in the image, regardless of what location predicted the box. With random initialization the model takes a long time to stabilize to predicting sensible offsets. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_25",
"text": " Instead of predicting offsets we follow the approach of YOLO and predict location coordinates relative to the location of the grid cell. This bounds the ground truth to fall between 00 and 111. We use a logistic activation to constrain the network’s predictions to fall in this range. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_26",
"text": " The network predicts 5 bounding boxes at each cell in the output feature map. The network predicts 5 coordinates for each bounding box, txsubscript𝑡𝑥t_{x}, tysubscript𝑡𝑦t_{y}, twsubscript𝑡𝑤t_{w}, thsubscript𝑡ℎt_{h}, and tosubscript𝑡𝑜t_{o}. If the cell is offset from the top left corner of the image by (cx,cy)subscript𝑐𝑥subscript𝑐𝑦(c_{x},c_{y}) and the bounding box prior has width and height pwsubscript𝑝𝑤p_{w}, phsubscript𝑝ℎp_{h}, then the predictions correspond to: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_27",
"text": " bxsubscript𝑏𝑥\\displaystyle b_{x} =σ(tx)+cxabsent𝜎subscript𝑡𝑥subscript𝑐𝑥\\displaystyle=\\sigma(t_{x})+c_{x} bysubscript𝑏𝑦\\displaystyle b_{y} =σ(ty)+cyabsent𝜎subscript𝑡𝑦subscript𝑐𝑦\\displaystyle=\\sigma(t_{y})+c_{y} bwsubscript𝑏𝑤\\displaystyle b_{w} =pwetwabsentsubscript𝑝𝑤superscript𝑒subscript𝑡𝑤\\displaystyle=p_{w}e^{t_{w}} bhsubscript𝑏ℎ\\displaystyle b_{h} =phethabsentsubscript𝑝ℎsuperscript𝑒subscript𝑡ℎ\\displaystyle=p_{h}e^{t_{h}} Pr(object)∗IOU(b,object)𝑃𝑟object𝐼𝑂𝑈𝑏object\\displaystyle Pr(\\text{object})*IOU(b,\\text{object}) =σ(to)absent𝜎subscript𝑡𝑜\\displaystyle=\\sigma(t_{o}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
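The entry above gives the direct location prediction equations. The small sketch below decodes one anchor's raw outputs into a box and an objectness score, assuming cell offsets and priors are expressed in feature-map units.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t, cell_xy, prior_wh):
    """Decode (t_x, t_y, t_w, t_h, t_o) for one anchor in one cell.

    Returns (b_x, b_y, b_w, b_h) in feature-map units and the objectness
    estimate sigma(t_o), following the equations above.
    """
    t_x, t_y, t_w, t_h, t_o = t
    c_x, c_y = cell_xy
    p_w, p_h = prior_wh
    b_x = sigmoid(t_x) + c_x          # center stays inside its grid cell
    b_y = sigmoid(t_y) + c_y
    b_w = p_w * np.exp(t_w)           # width/height scale the prior exponentially
    b_h = p_h * np.exp(t_h)
    return (b_x, b_y, b_w, b_h), sigmoid(t_o)

if __name__ == "__main__":
    box, obj = decode_box(t=(0.2, -0.5, 0.1, 0.3, 1.5), cell_xy=(6, 7), prior_wh=(3.2, 1.8))
    print("box:", [round(v, 3) for v in box], "objectness:", round(obj, 3))
```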
{
"id": "1612.08242_all_28",
"text": " Since we constrain the location prediction the parametrization is easier to learn, making the network more stable. Using dimension clusters along with directly predicting the bounding box center location improves YOLO by almost 5% over the version with anchor boxes. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_29",
"text": " Fine-Grained Features.This modified YOLO predicts detections on a 13×13131313\\times 13 feature map. While this is sufficient for large objects, it may benefit from finer grained features for localizing smaller objects. Faster R-CNN and SSD both run their proposal networks at various feature maps in the network to get a range of resolutions. We take a different approach, simply adding a passthrough layer that brings features from an earlier layer at 26×26262626\\times 26 resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_30",
"text": " The passthrough layer concatenates the higher resolution features with the low resolution features by stacking adjacent features into different channels instead of spatial locations, similar to the identity mappings in ResNet. This turns the 26×26×512262651226\\times 26\\times 512 feature map into a 13×13×20481313204813\\times 13\\times 2048 feature map, which can be concatenated with the original features. Our detector runs on top of this expanded feature map so that it has access to fine grained features. This gives a modest 1% performance increase. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
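The passthrough layer described above is a stride-2 space-to-depth rearrangement followed by channel-wise concatenation. A numpy sketch under that reading; the exact ordering of the stacked 2×2 neighborhoods is an implementation choice.

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange (H, W, C) -> (H/block, W/block, C*block*block) by stacking
    each block x block spatial neighborhood into the channel dimension."""
    H, W, C = x.shape
    assert H % block == 0 and W % block == 0
    x = x.reshape(H // block, block, W // block, block, C)
    x = x.transpose(0, 2, 1, 3, 4)                      # (H/b, W/b, b, b, C)
    return x.reshape(H // block, W // block, C * block * block)

if __name__ == "__main__":
    fine = np.random.rand(26, 26, 512)                  # earlier, higher-resolution features
    coarse = np.random.rand(13, 13, 1024)               # final detection features
    stacked = space_to_depth(fine)                      # (13, 13, 2048)
    fused = np.concatenate([coarse, stacked], axis=-1)  # (13, 13, 3072)
    print(stacked.shape, fused.shape)
```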
{
"id": "1612.08242_all_31",
"text": " Multi-Scale Training. The original YOLO uses an input resolution of 448×448448448448\\times 448. With the addition of anchor boxes we changed the resolution to 416×416416416416\\times 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_32",
"text": " Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320,352,…,608}320352…608\\{320,352,...,608\\}. Thus the smallest option is 320×320320320320\\times 320 and the largest is 608×608608608608\\times 608. We resize the network to that dimension and continue training. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
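The entry above describes the multi-scale schedule: every 10 batches a new square resolution is drawn from the multiples of 32 between 320 and 608. A tiny sketch of that schedule; the actual network resizing is elided.

```python
import random

SCALES = list(range(320, 608 + 1, 32))    # {320, 352, ..., 608}

def training_sizes(num_batches, every=10, seed=0):
    """Yield the input resolution used for each batch under the schedule above."""
    rng = random.Random(seed)
    size = 416                             # starting resolution
    for b in range(num_batches):
        if b % every == 0:
            size = rng.choice(SCALES)      # redraw every `every` batches
        yield size

if __name__ == "__main__":
    print(list(training_sizes(40))[::10])  # the size chosen at each redraw point
```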
{
"id": "1612.08242_all_33",
"text": " This regime forces the network to learn to predict well across a variety of input dimensions. This means the same network can predict detections at different resolutions. The network runs faster at smaller sizes so YOLOv2 offers an easy tradeoff between speed and accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_34",
"text": " At low resolutions YOLOv2 operates as a cheap, fairly accurate detector. At 288×288288288288\\times 288 it runs at more than 90 FPS with mAP almost as good as Fast R-CNN. This makes it ideal for smaller GPUs, high framerate video, or multiple video streams. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_35",
"text": " At high resolution YOLOv2 is a state-of-the-art detector with 78.6 mAP on VOC 2007 while still operating above real-time speeds. See Table 3 for a comparison of YOLOv2 with other frameworks on VOC 2007. Figure 4 ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_36",
"text": " Further Experiments. We train YOLOv2 for detection on VOC 2012. Table 4 shows the comparative performance of YOLOv2 versus other state-of-the-art detection systems. YOLOv2 achieves 73.4 mAP while running far faster than competing methods. We also train on COCO and compare to other methods in Table 5. On the VOC metric (IOU = .5) YOLOv2 gets 44.0 mAP, comparable to SSD and Faster R-CNN. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_37",
"text": " We want detection to be accurate but we also want it to be fast. Most applications for detection, like robotics or self-driving cars, rely on low latency predictions. In order to maximize performance we design YOLOv2 to be fast from the ground up. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_38",
"text": " Most detection frameworks rely on VGG-16 as the base feature extractor . VGG-16 is a powerful, accurate classification network but it is needlessly complex. The convolutional layers of VGG-16 require 30.69 billion floating point operations for a single pass over a single image at 224×224224224224\\times 224 resolution. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_39",
"text": " The YOLO framework uses a custom network based on the Googlenet architecture . This network is faster than VGG-16, only using 8.52 billion operations for a forward pass. However, it’s accuracy is slightly worse than VGG-16. For single-crop, top-5 accuracy at 224×224224224224\\times 224, YOLO’s custom model gets 88.0% ImageNet compared to 90.0% for VGG-16. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_40",
"text": " Darknet-19. We propose a new classification model to be used as the base of YOLOv2. Our model builds off of prior work on network design as well as common knowledge in the field. Similar to the VGG models we use mostly 3×3333\\times 3 filters and double the number of channels after every pooling step . Following the work on Network in Network (NIN) we use global average pooling to make predictions as well as 1×1111\\times 1 filters to compress the feature representation between 3×3333\\times 3 convolutions . We use batch normalization to stabilize training, speed up convergence, and regularize the model . ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_41",
"text": " Our final model, called Darknet-19, has 19 convolutional layers and 5 maxpooling layers. For a full description see Table 6. Darknet-19 only requires 5.58 billion operations to process an image yet achieves 72.9%percent72.972.9\\% top-1 accuracy and 91.2%percent91.291.2\\% top-5 accuracy on ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_42",
"text": " Training for classification. We train the network on the standard ImageNet 1000 class classification dataset for 160 epochs using stochastic gradient descent with a starting learning rate of 0.10.10.1, polynomial rate decay with a power of 444, weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9 using the Darknet neural network framework . During training we use standard data augmentation tricks including random crops, rotations, and hue, saturation, and exposure shifts. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_43",
"text": " As discussed above, after our initial training on images at 224×224224224224\\times 224 we fine tune our network at a larger size, 448448448. For this fine tuning we train with the above parameters but for only 10 epochs and starting at a learning rate of 10−3superscript10310^{-3}. At this higher resolution our network achieves a top-1 accuracy of 76.5%percent76.576.5\\% and a top-5 accuracy of 93.3%percent93.393.3\\%. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_44",
"text": " Training for detection. We modify this network for detection by removing the last convolutional layer and instead adding on three 3×3333\\times 3 convolutional layers with 102410241024 filters each followed by a final 1×1111\\times 1 convolutional layer with the number of outputs we need for detection. For VOC we predict 5 boxes with 5 coordinates each and 20 classes per box so 125 filters. We also add a passthrough layer from the final 3×3×512335123\\times 3\\times 512 layer to the second to last convolutional layer so that our model can use fine grain features. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_45",
"text": " We train the network for 160 epochs with a starting learning rate of 10−3superscript10310^{-3}, dividing it by 10 at 60 and 90 epochs. We use a weight decay of 0.00050.00050.0005 and momentum of 0.90.90.9. We use a similar data augmentation to YOLO and SSD with random crops, color shifting, etc. We use the same training strategy on COCO and VOC. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_46",
"text": " We propose a mechanism for jointly training on classification and detection data. Our method uses images labelled for detection to learn detection-specific information like bounding box coordinate prediction and objectness as well as how to classify common objects. It uses images with only class labels to expand the number of categories it can detect. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_47",
"text": " During training we mix images from both detection and classification datasets. When our network sees an image labelled for detection we can backpropagate based on the full YOLOv2 loss function. When it sees a classification image we only backpropagate loss from the classification-specific parts of the architecture. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_48",
"text": " This approach presents a few challenges. Detection datasets have only common objects and general labels, like “dog” or “boat”. Classification datasets have a much wider and deeper range of labels. ImageNet has more than a hundred breeds of dog, including “Norfolk terrier”, “Yorkshire terrier”, and “Bedlington terrier”. If we want to train on both datasets we need a coherent way to merge these labels. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_49",
"text": " Most approaches to classification use a softmax layer across all the possible categories to compute the final probability distribution. Using a softmax assumes the classes are mutually exclusive. This presents problems for combining datasets, for example you would not want to combine ImageNet and COCO using this model because the classes “Norfolk terrier” and “dog” are not mutually exclusive. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_50",
"text": " We could instead use a multi-label model to combine the datasets which does not assume mutual exclusion. This approach ignores all the structure we do know about the data, for example that all of the COCO classes are mutually exclusive. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_51",
"text": " Hierarchical classification. ImageNet labels are pulled from WordNet, a language database that structures concepts and how they relate . In WordNet, “Norfolk terrier” and “Yorkshire terrier” are both hyponyms of “terrier” which is a type of “hunting dog”, which is a type of “dog”, which is a “canine”, etc. Most approaches to classification assume a flat structure to the labels however for combining datasets, structure is exactly what we need. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_52",
"text": " WordNet is structured as a directed graph, not a tree, because language is complex. For example a “dog” is both a type of “canine” and a type of “domestic animal” which are both synsets in WordNet. Instead of using the full graph structure, we simplify the problem by building a hierarchical tree from the concepts in ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_53",
"text": " To build this tree we examine the visual nouns in ImageNet and look at their paths through the WordNet graph to the root node, in this case “physical object”. Many synsets only have one path through the graph so first we add all of those paths to our tree. Then we iteratively examine the concepts we have left and add the paths that grow the tree by as little as possible. So if a concept has two paths to the root and one path would add three edges to our tree and the other would only add one edge, we choose the shorter path. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_54",
"text": " The final result is WordTree, a hierarchical model of visual concepts. To perform classification with WordTree we predict conditional probabilities at every node for the probability of each hyponym of that synset given that synset. For example, at the “terrier” node we predict: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_55",
"text": " Pr(Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Yorkshire terrier\\displaystyle Pr(\\text{Yorkshire terrier} |terrier)\\displaystyle|\\text{terrier}) Pr(Bedlington terrier\\displaystyle Pr(\\text{Bedlington terrier} |terrier)\\displaystyle|\\text{terrier}) ……\\displaystyle... ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_56",
"text": " If we want to compute the absolute probability for a particular node we simply follow the path through the tree to the root node and multiply to conditional probabilities. So if we want to know if a picture is of a Norfolk terrier we compute: ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_57",
"text": " Pr(Norfolk terrier)𝑃𝑟Norfolk terrier\\displaystyle Pr(\\text{Norfolk terrier}) =Pr(Norfolk terrier|terrier)absent𝑃𝑟conditionalNorfolk terrierterrier\\displaystyle=Pr(\\text{Norfolk terrier}|\\text{terrier}) ∗Pr(terrier\\displaystyle*Pr(\\text{terrier} |hunting dog)\\displaystyle|\\text{hunting dog}) ∗…absent…\\displaystyle*\\ldots ∗\\displaystyle* ∗Pr(mammal\\displaystyle*Pr(\\text{mammal} |Pr(animal)\\displaystyle|Pr(\\text{animal}) ∗Pr(animal\\displaystyle*Pr(\\text{animal} |physical object)\\displaystyle|\\text{physical object}) ",
"title": "YOLO9000: Better, Faster, Stronger"
},
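The product above is straightforward to compute once each node stores its conditional probability and a pointer to its parent; the sketch below assumes those two dictionaries exist (names are illustrative, not from the paper's code).

```python
def absolute_prob(node, cond_prob, parent):
    """Multiply conditional probabilities along the path from `node` to the root.
    cond_prob[n] = Pr(n | parent of n); the root ("physical object") has Pr = 1."""
    p = 1.0
    while node is not None:
        p *= cond_prob.get(node, 1.0)
        node = parent.get(node)  # becomes None once we pass the root
    return p

# Example: absolute_prob("Norfolk terrier", cond_prob, parent)
#   = Pr(Norfolk terrier | terrier) * Pr(terrier | hunting dog) * ... * Pr(animal | physical object)
```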
{
"id": "1612.08242_all_58",
"text": " For classification purposes we assume that the the image contains an object: Pr(physical object)=1𝑃𝑟physical object1Pr(\\text{physical object})=1. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_59",
"text": " To validate this approach we train the Darknet-19 model on WordTree built using the 1000 class ImageNet. To build WordTree1k we add in all of the intermediate nodes which expands the label space from 1000 to 1369. During training we propagate ground truth labels up the tree so that if an image is labelled as a “Norfolk terrier” it also gets labelled as a “dog” and a “mammal”, etc. To compute the conditional probabilities our model predicts a vector of 1369 values and we compute the softmax over all sysnsets that are hyponyms of the same concept, see Figure 5. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
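A minimal sketch of the per-group softmax described above, assuming the co-hyponym groups (synsets sharing a parent) have been precomputed as index lists; this is an illustration, not the released Darknet code.

```python
import torch

def wordtree_softmax(logits, sibling_groups):
    """logits: (batch, 1369) raw network outputs.
    sibling_groups: list of LongTensors, each holding the output indices of
    synsets that are hyponyms of the same parent concept.
    A separate softmax is applied within each group."""
    probs = torch.empty_like(logits)
    for idx in sibling_groups:
        probs[:, idx] = torch.softmax(logits[:, idx], dim=-1)
    return probs
```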
{
"id": "1612.08242_all_60",
"text": " Using the same training parameters as before, our hierarchical Darknet-19 achieves 71.9%percent71.971.9\\% top-1 accuracy and 90.4%percent90.490.4\\% top-5 accuracy. Despite adding 369 additional concepts and having our network predict a tree structure our accuracy only drops marginally. Performing classification in this manner also has some benefits. Performance degrades gracefully on new or unknown object categories. For example, if the network sees a picture of a dog but is uncertain what type of dog it is, it will still predict “dog” with high confidence but have lower confidences spread out among the hyponyms. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_61",
"text": " This formulation also works for detection. Now, instead of assuming every image has an object, we use YOLOv2’s objectness predictor to give us the value of Pr(physical object)𝑃𝑟physical objectPr(\\text{physical object}). The detector predicts a bounding box and the tree of probabilities. We traverse the tree down, taking the highest confidence path at every split until we reach some threshold and we predict that object class. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
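A sketch of the goal-directed traversal for detection described above; the exact stopping rule is not fully specified in the text, so treat the thresholding below as one plausible reading (all names illustrative).

```python
def predict_class(cond_prob, children, root="physical object", threshold=0.5):
    """Walk down WordTree from the root, always taking the most confident child,
    and stop when the accumulated (absolute) probability would drop below the
    threshold; return the deepest node reached and its probability."""
    node, prob = root, cond_prob.get(root, 1.0)
    while children.get(node):
        best_child = max(children[node], key=lambda c: cond_prob[c])
        if prob * cond_prob[best_child] < threshold:
            break
        node, prob = best_child, prob * cond_prob[best_child]
    return node, prob
```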
{
"id": "1612.08242_all_62",
"text": " Dataset combination with WordTree. We can use WordTree to combine multiple datasets together in a sensible fashion. We simply map the categories in the datasets to synsets in the tree. Figure 6 shows an example of using WordTree to combine the labels from ImageNet and COCO. WordNet is extremely diverse so we can use this technique with most datasets. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_63",
"text": " Joint classification and detection. Now that we can combine datasets using WordTree we can train our joint model on classification and detection. We want to train an extremely large scale detector so we create our combined dataset using the COCO detection dataset and the top 9000 classes from the full ImageNet release. We also need to evaluate our method so we add in any classes from the ImageNet detection challenge that were not already included. The corresponding WordTree for this dataset has 9418 classes. ImageNet is a much larger dataset so we balance the dataset by oversampling COCO so that ImageNet is only larger by a factor of 4:1. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_64",
"text": " Using this dataset we train YOLO9000. We use the base YOLOv2 architecture but only 3 priors instead of 5 to limit the output size. When our network sees a detection image we backpropagate loss as normal. For classification loss, we only backpropagate loss at or above the corresponding level of the label. For example, if the label is “dog” we do assign any error to predictions further down in the tree, “German Shepherd” versus “Golden Retriever”, because we do not have that information. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_65",
"text": " When it sees a classification image we only backpropagate classification loss. To do this we simply find the bounding box that predicts the highest probability for that class and we compute the loss on just its predicted tree. We also assume that the predicted box overlaps what would be the ground truth label by at least .3.3.3 IOU and we backpropagate objectness loss based on this assumption. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_66",
"text": " Using this joint training, YOLO9000 learns to find objects in images using the detection data in COCO and it learns to classify a wide variety of these objects using data from ImageNet. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_67",
"text": " We evaluate YOLO9000 on the ImageNet detection task. The detection task for ImageNet shares on 44 object categories with COCO which means that YOLO9000 has only seen classification data for the majority of the test images, not detection data. YOLO9000 gets 19.7 mAP overall with 16.0 mAP on the disjoint 156 object classes that it has never seen any labelled detection data for. This mAP is higher than results achieved by DPM but YOLO9000 is trained on different datasets with only partial supervision . It also is simultaneously detecting 9000 other object categories, all in real-time. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_68",
"text": " When we analyze YOLO9000’s performance on ImageNet we see it learns new species of animals well but struggles with learning categories like clothing and equipment. New animals are easier to learn because the objectness predictions generalize well from the animals in COCO. Conversely, COCO does not have bounding box label for any type of clothing, only for person, so YOLO9000 struggles to model categories like “sunglasses” or “swimming trunks”. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_69",
"text": " We introduce YOLOv2 and YOLO9000, real-time detection systems. YOLOv2 is state-of-the-art and faster than other detection systems across a variety of detection datasets. Furthermore, it can be run at a variety of image sizes to provide a smooth tradeoff between speed and accuracy. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_70",
"text": " YOLO9000 is a real-time framework for detection more than 9000 object categories by jointly optimizing detection and classification. We use WordTree to combine data from various sources and our joint optimization technique to train simultaneously on ImageNet and COCO. YOLO9000 is a strong step towards closing the dataset size gap between detection and classification. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_71",
"text": " Many of our techniques generalize outside of object detection. Our WordTree representation of ImageNet offers a richer, more detailed output space for image classification. Dataset combination using hierarchical classification would be useful in the classification and segmentation domains. Training techniques like multi-scale training could provide benefit across a variety of visual tasks. ",
"title": "YOLO9000: Better, Faster, Stronger"
},
{
"id": "1612.08242_all_72",
"text": " For future work we hope to use similar techniques for weakly supervised image segmentation. We also plan to improve our detection results using more powerful matching strategies for assigning weak labels to classification data during training. Computer vision is blessed with an enormous amount of labelled data. We will continue looking for ways to bring different sources and structures of data together to make stronger models of the visual world. ",
"title": "YOLO9000: Better, Faster, Stronger"
}
] |
Why is BLINK valuable?
|
The BLINK model can be said to be valuable since the model is simple yet scalable and effective compared to existing works [0]. The proposed BERT-based model can perform entity linking with large-scale and zero-shot setups, which is crucial in real-world use cases that often contain a lot of unseen entities [2]. BLINK also achieved a new state-of-the-art result for two zero-shot benchmarks by using only the provided text description without external knowledge, which shows the effectiveness of the proposed model [48].
|
[
0,
2,
48
] |
[
{
"id": "1911.03814_all_0",
"text": " Scale is a key challenge for entity linking; there are millions of possible entities to consider for each mention. To efficiently filter or rank the candidates, existing methods use different sources of external information, including manually curated mention tables Ganea and Hofmann (2017), incoming Wikipedia link popularity Yamada et al. (2016), and gold Wikipedia entity categories Gillick et al. (2019). In this paper, we show that BERT-based models set new state-of-the-art performance levels for large scale entity linking when used in a zero shot setup, where there is no external knowledge and a short text description provides the only information we have for each entity. We also present an extensive evaluation of the accuracy-speed trade-off inherent to large pre-trained models, and show is possible to achieve very efficient linking with modest loss of accuracy. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_1",
"text": " More specifically, we introduce a two stage approach for zero-shot linking (see Figure 1 for an overview), based on fine-tuned BERT architectures Devlin et al. (2019). In the first stage, we do retrieval in a dense space defined by a bi-encoder that independently embeds the mention context and the entity descriptions Humeau et al. (2019); Gillick et al. (2019). Each retrieved candidate is then examined more carefully with a cross-encoder that concatenates the mention and entity text, following Logeswaran et al. (2019). This overall approach is conceptually simple but highly effective, as we show through detailed experiments. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_2",
"text": " Our two-stage approach achieves a new state-of-the-art result on TACKBP-2010, with an over 30% relative error reduction. By simply reading the provided text descriptions, we are able to outperform previous methods that included many extra cues such as entity name dictionaries and link popularity. We also improve the state of the art on existing zero-shot benchmarks, including a nearly 6 point absolute gain on the recently introduced Wikia corpus Logeswaran et al. (2019) and more than 7 point absolute gain on WikilinksNED Unseen-Mentions Onoe and Durrett (2019). ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_3",
"text": " Finally, we do an extensive evaluation of the accuracy-speed trade-off inherent in our bi- and cross-encoder models. We show that the two stage methods scales well in a full Wikipedia setting, by linking against all the 5.9M Wikipedia entities for TACKBP-2010, while still outperforming existing model with much smaller candidate sets. We also show that bi-encoder linking is very fast with approximate nearest neighbor search (e.g. linking over 5.9 million candidates in 2 milliseconds), and that much of the accuracy gain from the more expensive cross-encoder can be transferred to the bi-encoder via knowledge distillation. We release our code and models, as well as a system to link entity mentions to all of Wikipedia (similar to TagME Ferragina and Scaiella (2011)).111Our code and models are available at https://github.com/facebookresearch/BLINK ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_4",
"text": " We follow most recent work in studying entity linking with gold mentions.222Kolitsas et al. (2018) study end-to-end linking. Our techniques should be applicable to this setting as well, but we leave this exploration to future work. The entity linking task can be broken into two steps: candidate generation and ranking. Prior work has used frequency information, alias tables and TF-IDF-based methods for candidate generation. For candidate ranking, He et al. (2013), Sun et al. (2015), Yamada et al. (2016), Ganea and Hofmann (2017), and Kolitsas et al. (2018) have established state-of-the-art results using neural networks to model context word, span and entity. There is also recent work demonstrating that fine-grained entity typing information helps linking Raiman and Raiman (2018); Onoe and Durrett (2019); Khalife and Vazirgiannis (2018). ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_5",
"text": " Two recent results are most closely related to our work. Logeswaran et al. (2019) proposed the zero-shot entity linking task. They use cross-encoders for entity ranking, but rely on traditional IR-techniques for candidate generation and did not evaluate on large scale benchmarks such as TACKBP. Gillick et al. (2019) show that dense embeddings work well for candidate generation, but they did not do pre-training and included external category labels in their bi-encoder architectures, limiting their linking to entities in Wikipedia. Our approach can be seen as generalizing both of these lines of work, and showing for the first time that pre-trained zero-shot architectures are both highly accurate and computationally efficient at scale. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_6",
"text": " Humeau et al. (2019) studied different architectures to use deep pre-trained bidirectional transformers and performed detailed comparison of three different architectures, namely bi-encoder, poly-encoder, cross-encoder on tasks of sentence selection in dialogues. Inspired by their work, we use similar architectures to the problem of entity linking, and in addition, demonstrate that bi-encoder can be a strong model for retrieval. Instead of using the poly-encoder as a trade-off between cross-encoder and bi-encoder, we propose to train a bi-encoder model with knowledge distillation Buciluundefined et al. (2006); Hinton et al. (2015) from a cross-encoder model to further improve the bi-encoder’s performances. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_7",
"text": " Given an input text document 𝐃={w1,…,wr}𝐃subscript𝑤1…subscript𝑤𝑟\\mathbf{D}=\\{w_{1},...,w_{r}\\} and a list of entity mentions 𝐌𝐃={m1,…,mn}subscript𝐌𝐃subscript𝑚1…subscript𝑚𝑛\\mathbf{M_{D}}=\\{m_{1},...,m_{n}\\}, the output of an entity linking model is a list of mention-entity pairs {(mi,ei)}i∈(1,n)subscriptsubscript𝑚𝑖subscript𝑒𝑖𝑖1𝑛\\{(m_{i},e_{i})\\}_{i\\in(1,n)} where each entity is an entry in a knowledge base (KB) (e.g. Wikipedia), e∈ℰ𝑒ℰe\\in\\mathcal{E}. We assume that the title and description of the entities are available, which is a common setting in entity linking Ganea and Hofmann (2017); Logeswaran et al. (2019). We also assume each mention has a valid gold entity in the KB, which is usually referred as in-KB evaluation. We leave the out-of-KB prediction (i.e. nil prediction) to future work. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_8",
"text": " We also study zero-shot entity linking Logeswaran et al. (2019). Here the document setup is the same, but the knowledge base is separated in training and test time. Formally, denote ℰtrainsubscriptℰ𝑡𝑟𝑎𝑖𝑛\\mathcal{E}_{train} and ℰtestsubscriptℰ𝑡𝑒𝑠𝑡\\mathcal{E}_{test} to be the knowledge base in training and test, we require ℰtrain∩ℰtest=∅subscriptℰ𝑡𝑟𝑎𝑖𝑛subscriptℰ𝑡𝑒𝑠𝑡\\mathcal{E}_{train}\\cap\\mathcal{E}_{test}=\\emptyset. The set of text documents, mentions, and entity dictionary are separated in training and test so that the entities being linked at test time are unseen. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_9",
"text": " Figure 1 shows our overall approach. The bi-encoder uses two independent BERT transformers to encode model context/mention and entity into dense vectors, and each entity candidate is scored as the dot product of these vectors. The candidates retrieved by the bi-encoder are then passed to the cross-encoder for ranking. The cross-encoder encodes context/mention and entity in one transformer, and applies an additional linear layer to compute the final score for each pair. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_10",
"text": " We use a bi-encoder architecture similar to the work of Humeau et al. (2019) to model (mention, entity) pairs. This approach allows for fast, real-time inference, as the candidate representations can be cached. Both input context and candidate entity are encoded into vectors: 𝒚𝒎=red(T1(τm))subscript𝒚𝒎redsubscript𝑇1subscript𝜏𝑚\\displaystyle\\boldsymbol{y_{m}}=\\mathrm{red}(T_{1}(\\tau_{m})) (1) 𝒚𝒆=red(T2(τe))subscript𝒚𝒆redsubscript𝑇2subscript𝜏𝑒\\displaystyle\\boldsymbol{y_{e}}=\\mathrm{red}(T_{2}(\\tau_{e})) (2) where τmsubscript𝜏𝑚\\tau_{m} and τesubscript𝜏𝑒\\tau_{e} are input representations of mention and entity respectively, T1subscript𝑇1T_{1} and T2subscript𝑇2T_{2} are two transformers. red(.)\\mathrm{red}(.) is a function that reduces the sequence of vectors produced by the transformers into one vector. Following the experiments in Humeau et al. (2019), we choose red(.)\\mathrm{red}(.) to be the last layer of the output of the (CLS) token. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
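As an illustration of the bi-encoder described above (a sketch, not the released BLINK code), the snippet below encodes a mention context and an entity description with two separate BERT encoders and scores them by dot product. The example strings are made up, and in practice the special tokens [Ms], [Me], [ENT] would be added to the tokenizer's vocabulary, which is omitted here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
context_encoder = AutoModel.from_pretrained("bert-base-uncased")  # T1
entity_encoder = AutoModel.from_pretrained("bert-base-uncased")   # T2

def encode(encoder, texts, max_length=32):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # red(.): the [CLS] vector

y_m = encode(context_encoder, ["... [Ms] Ronaldo [Me] scored twice for Juventus ..."])
y_e = encode(entity_encoder, ["Cristiano Ronaldo [ENT] Portuguese professional footballer ..."])
score = (y_m * y_e).sum(-1)  # Eq. (3): dot-product score s(m, e)
```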
{
"id": "1911.03814_all_11",
"text": " The representation of context and mention τmsubscript𝜏𝑚\\tau_{m} is composed of the word-pieces of the context surrounding the mention and the mention itself. Specifically, we construct input of each mention example as: ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_12",
"text": " (CLS) ctxtl (Ms) mention (Me) ctxtr (SEP) ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_13",
"text": " where mention, ctxtl, ctxtr are the word-pieces tokens of the mention, context before and after the mention respectively, and (Ms), (Me) are special tokens to tag the mention. The maximum length of the input representation is a hyperparameter in our model, and we find that small value such as 32 works well in practice (see Appendix A). ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_14",
"text": " The entity representation τesubscript𝜏𝑒\\tau_{e} is also composed of word-pieces of the entity title and description (for Wikipedia entities, we use the first ten sentences as description). The input to our entity model is: ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_15",
"text": " (CLS) title (ENT) description (SEP) ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_16",
"text": " where title, description are word-pieces tokens of entity title and description, and (ENT) is a special token to separate entity title and description representation. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_17",
"text": " The score of entity candidate eisubscript𝑒𝑖e_{i} is given by the dot-product: s(m,ei)=𝒚𝒎⋅𝒚𝒆𝒊𝑠𝑚subscript𝑒𝑖⋅subscript𝒚𝒎subscript𝒚subscript𝒆𝒊\\displaystyle s(m,e_{i})=\\boldsymbol{y_{m}}\\cdot\\boldsymbol{y_{e_{i}}} (3) ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_18",
"text": " The network is trained to maximize the score of the correct entity with respect to the (randomly sampled) entities of the same batch Lerer et al. (2019); Humeau et al. (2019). Concretely, for each training pair (mi,ei)subscript𝑚𝑖subscript𝑒𝑖(m_{i},e_{i}) in a batch of B𝐵B pairs, the loss is computed as: ℒ(mi,ei)=−s(mi,ei)+log∑j=1Bexp(s(mi,ej))ℒsubscript𝑚𝑖subscript𝑒𝑖𝑠subscript𝑚𝑖subscript𝑒𝑖superscriptsubscript𝑗1𝐵𝑠subscript𝑚𝑖subscript𝑒𝑗\\displaystyle\\mathcal{L}(m_{i},e_{i})=-s(m_{i},e_{i})+\\log\\sum_{j=1}^{B}\\exp{(s(m_{i},e_{j}))} (4) ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
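Equation (4) is the standard in-batch softmax/cross-entropy over dot-product scores; a minimal PyTorch sketch (illustrative, not the authors' training code) follows.

```python
import torch
import torch.nn.functional as F

def in_batch_loss(y_m, y_e):
    """y_m, y_e: (B, d) mention and entity embeddings for B aligned pairs.
    Scores every mention against every entity in the batch; the diagonal
    entries are the gold pairs, so cross-entropy with labels 0..B-1
    reproduces Eq. (4): -s(m_i, e_i) + log sum_j exp(s(m_i, e_j))."""
    scores = y_m @ y_e.t()                      # (B, B) dot products
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```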
{
"id": "1911.03814_all_19",
"text": " Lerer et al. (2019) presented a detailed analysis on speed and memory efficiency of using batched random negatives in large-scale systems. In addition to in-batch negatives, we follow Gillick et al. (2019) by using hard negatives in training. The hard negatives are obtained by finding the top 10 predicted entities for each training example. We add these extra hard negatives to the random in-batch negatives. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_20",
"text": " At inference time, the entity representation for all the entity candidates can be pre-computed and cached. The inference task is then reduced to finding maximum dot product between mention representation and entity candidate representations. In Section 5.2.3 we present efficiency/accuracy trade-offs by exact and approximate nearest neighbor search using FAISS Johnson et al. (2019) in a large-scale setting. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
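A small sketch of the cached-embedding retrieval described above using FAISS exact maximum-inner-product search (IndexFlatIP); the entity count and the random vectors here are placeholders, not the paper's 5.9M-entity index.

```python
import numpy as np
import faiss

d = 768                                                      # embedding width (e.g. BERT-base)
entity_vecs = np.random.rand(100_000, d).astype("float32")   # stand-in for cached entity vectors y_e
index = faiss.IndexFlatIP(d)                                 # exact inner-product search
index.add(entity_vecs)

query = np.random.rand(1, d).astype("float32")               # a mention embedding y_m
scores, ids = index.search(query, 100)                       # top-100 candidates for the cross-encoder
```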
{
"id": "1911.03814_all_21",
"text": " Our cross-encoder is similar to the ones described by Logeswaran et al. (2019) and Humeau et al. (2019). The input is the concatenation of the input context and mention representation and the entity representation described in Section 4.1 (we remove the (CLS) token from the entity representation). This allows the model to have deep cross attention between the context and entity descriptions. Formally, we use ym,esubscript𝑦𝑚𝑒y_{m,e} to denote our context-candidate embedding: 𝒚𝒎,𝒆=red(Tcross(τm,e))subscript𝒚𝒎𝒆redsubscript𝑇crosssubscript𝜏𝑚𝑒\\displaystyle\\boldsymbol{y_{m,e}}=\\mathrm{red}(T_{\\mathrm{cross}}(\\tau_{m,e})) (5) where τm,esubscript𝜏𝑚𝑒\\tau_{m,e} is the input representation of mention and entity, Tcrosssubscript𝑇𝑐𝑟𝑜𝑠𝑠T_{cross} is a transformer and red(.)red(.) is the same function as defined in Section 4.1. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_22",
"text": " To score entity candidates, a linear layer 𝑾𝑾\\boldsymbol{W} is applied to the embedding 𝒚𝒎,𝒆subscript𝒚𝒎𝒆\\boldsymbol{y_{m,e}}: scross(m,e)=𝒚𝒎,𝒆𝑾subscript𝑠cross𝑚𝑒subscript𝒚𝒎𝒆𝑾\\displaystyle s_{\\mathrm{cross}}(m,e)=\\boldsymbol{y_{m,e}}\\boldsymbol{W} (6) ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_23",
"text": " Similar to methods in Section 4.1, the network is trained using a softmax loss to maximize scross(mi,ei)subscript𝑠crosssubscript𝑚𝑖subscript𝑒𝑖s_{\\mathrm{cross}}(m_{i},e_{i}) for the correct entity, given a set of entity candidates (same as in Equation 4). ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_24",
"text": " Due to its larger memory and compute footprint, we use the cross-encoder in a re-ranking stage, over a small set (≤100)\\leq 100) of candidates retrieved with the bi-encoder. The cross-encoder is not suitable for retrieval or tasks that require fast inference. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_25",
"text": " To better optimize the accuracy-speed trade-off, we also report knowledge distillation experiments that use a cross-encoder as a teacher for a bi-encoder model. We follow Hinton et al. (2015) to use a softmax with temperature where the target distribution is based on the cross-encoder logits. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_26",
"text": " Concretely, let z𝑧z be a vector of logits for set of entity candidates and T𝑇T a temperature, and σ(z,T)𝜎𝑧𝑇\\sigma(z,T) a (tempered) distribution over the entities with σ(z,T)=exp(zi/T)∑jexp(zj/T).𝜎𝑧𝑇subscript𝑧𝑖𝑇subscript𝑗subscript𝑧𝑗𝑇\\displaystyle\\sigma(z,T)=\\frac{\\exp{(z_{i}/T)}}{\\sum_{j}\\exp{(z_{j}/T)}}. (7) Then the overall loss function, incorporating both distillation and student losses, is calculated as ℒdist=ℋ(σ(zt;τ),σ(zs;τ))subscriptℒ𝑑𝑖𝑠𝑡ℋ𝜎subscript𝑧𝑡𝜏𝜎subscript𝑧𝑠𝜏\\displaystyle\\mathcal{L}_{dist}=\\mathcal{H}(\\sigma(z_{t};\\tau),\\sigma(z_{s};\\tau)) (8) ℒst=ℋ(e,σ(zs;1))subscriptℒ𝑠𝑡ℋ𝑒𝜎subscript𝑧𝑠1\\displaystyle\\mathcal{L}_{st}=\\mathcal{H}(e,\\sigma(z_{s};1)) (9) ℒ=α⋅ℒst+(1−α)⋅ℒdistℒ⋅𝛼subscriptℒ𝑠𝑡⋅1𝛼subscriptℒ𝑑𝑖𝑠𝑡\\displaystyle\\mathcal{L}=\\alpha\\cdot\\mathcal{L}_{st}+(1-\\alpha)\\cdot\\mathcal{L}_{dist} (10) where e𝑒e is the ground truth label distribution with probability 1 for the gold entity, ℋℋ\\mathcal{H} is the cross-entropy loss function, and α𝛼\\alpha is coefficient for mixing distillation and student loss ℒstsubscriptℒ𝑠𝑡\\mathcal{L}_{st}. The student logits zssubscript𝑧𝑠z_{s} are the output of the bi-encoder scoring function s(m,ei)𝑠𝑚subscript𝑒𝑖s(m,e_{i}), the teacher logits the output of the cross-encoder scoring funcion scross(m,e)subscript𝑠cross𝑚𝑒s_{\\mathrm{cross}}(m,e). ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
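A compact PyTorch sketch of Eqs. (7)-(10) (illustrative; the temperature and alpha values below are placeholders, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold, T=2.0, alpha=0.5):
    """student_logits z_s, teacher_logits z_t: (B, K) scores over K candidates;
    gold: (B,) index of the correct entity."""
    loss_st = F.cross_entropy(student_logits, gold)                 # Eq. (9)
    p_t = F.softmax(teacher_logits / T, dim=-1)                     # sigma(z_t, T), Eq. (7)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)             # log sigma(z_s, T)
    loss_dist = -(p_t * log_p_s).sum(dim=-1).mean()                 # Eq. (8): cross-entropy H
    return alpha * loss_st + (1 - alpha) * loss_dist                # Eq. (10)
```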
{
"id": "1911.03814_all_27",
"text": " In this section, we perform an empirical study of our model on three challenging datasets. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_28",
"text": " was constructed by Logeswaran et al. (2019) from Wikia.333https://www.wikia.com. The task is to link entity mentions in text to an entity dictionary with provided entity descriptions, in a set of domains. There are 49K, 10K, and 10K examples in the train, validation, test sets respectively. The entities in the validation and test sets are from different domains than the train set, allowing for evaluation of performance on entirely unseen entities. The entity dictionaries cover different domains and range in size from 10K to 100K entities. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_29",
"text": " is widely used for evaluating entity linking systems Ji et al. (2010).444https://tac.nist.gov Following prior work, we measure in-KB accuracy (P@1). There are 1,074 and 1,020 annotated mention/entity pairs derived from 1,453 and 2,231 original news and web documents on training and evaluation dataset, respectively. All the entities are from the TAC Reference Knowledgebase which contains 818,741 entities with titles, descriptions and other meta info. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_30",
"text": " was created by Onoe and Durrett (2019) from the original WikilinksNED dataset Eshel et al. (2017), which contains a diverse set of ambiguous entities spanning a variety of domains. In the Unseen-Mentions version, no mentions in the validation and test sets appear in the training set. The train, validation and test sets contain 2.2M, 10K, and 10K examples respectively. In this setting, the definition of unseen-mentions is different from that in zero-shot entity linking: entities in the test set can be seen in the training set. However, in both definitions no (mention, entity) pairs from test set are observed in the training set. In the unseen-mentions test set, about 25%percent2525\\% of the entities appear in training set. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_31",
"text": " We experiment with both BERT-base and BERT-large Devlin et al. (2019) for our bi-encoders and cross-encoders. The details of training infrastructure and hyperparameters can be found in Appendix A. All models are implemented in PyTorch555https://pytorch.org and optimizied with Adam Kingma and Ba (2014). We use (base) and (large) to indicate the version of our model where the underlying pretrained transformer model is BERT-base and BERT-large, respectively. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_32",
"text": " First, we train our bi-encoder on the training set, initializing each encoder with pre-trained BERT base. Hyper-parameters are chosen based on Recall@64 on validation datase. For specifics, see Appendix A.2. Our bi-encoder achieves much higher recall than BM25, as shown in Figure 2. Following Logeswaran et al. (2019), we use the top 64 retrieved candidates for the ranker, and we report Recall@64 on train, validation and test in Table 1. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_33",
"text": " After training the bi-encoder for candidate generation, we train our cross-encoder (initialized with pre-trained BERT) on the top 64 retrieved candidates from bi-encoder for each sample on the training set, and evaluate the cross-encoder on the test dataset. Overall, we are able to obtain a much better end-to-end accuracy, as shown in Table 2, largely due to the improvement on the retrieval stage. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_34",
"text": " We also report cross-encoder performance on the same retrieval method (BM25) used by Logeswaran et al. (2019) in Table 3, where the performance is evaluated on the subset of test instances for which the gold entity is among the top 64 candidates retrieved by BM25. We observe that our cross-encoder obtains slightly better results than reported by Logeswaran et al. (2019), likely due to implementation and hyper-parameter details. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_35",
"text": " Following prior work Sun et al. (2015); Cao et al. (2018); Gillick et al. (2019); Onoe and Durrett (2019), we pre-train our models on Wikipedia666https://www.wikipedia.org/ data. Data and model training details can be found in Appendix A.1. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_36",
"text": " After training our model on Wikipedia, we fine-tune the model on the TACKBP-2010 training dataset. We use the top 100 candidates retrieved by the bi-encoder as training examples for the cross-encoder, and chose hyper-parameters based on cross validation. We report accuracy results in Table 4. For ablation studies, we also report the following versions of our model: 1. bi-encoder only: we use bi-encoder for candidate ranking instead of cross-encoder. 2. Full Wikipedia: we use 5.9M Wikipedia articles as our entity Knowlegebase, instead of TACKBP Reference Knowledgebase. 3. Full Wikipedia w/o finetune: same as above, without fine-tuning on the TACKBP-2010 training set. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_37",
"text": " As expected, the cross-encoder performs better than the bi-encoder on ranking. However, both models exceed state-of-the-art performance levels, demonstrating that the overall approach is highly effective. We observe that our model also performs well when we change the underlying Knowledgebase to full Wikipedia, and even without fine-tuning on the dataset. In Table 5 we show that our bi-encoder model is highly effective at retrieving relevant entities, where the underlying Knowledgebase is full Wikipedia. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_38",
"text": " There are however many other cues that could potentially be added in future work. For example, Khalife and Vazirgiannis (2018) report 94.57%percent94.5794.57\\% precision on the TACKBP-2010 dataset. However, their method is based on the strong assumption that a gold fine-grained entity type is given for each mention (and they do not attempt to do entity type prediction). Indeed, if fine-grained entity type information is given by an oracle at test time, then Raiman and Raiman (2018) reports 98.6%percent98.698.6\\% accuracy on TACKBP-2010, indicating that improving fine-grained entity type prediction would likely improve entity linking. Our results is achieved without gold fine-grained entity type information. Instead, our model learns representations of context, mention and entities based on text only. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_39",
"text": " Similarly to the approach described in Section 5.2.2, we train our bi-encoder and cross-encoder model first on Wikipedia examples, then fine-tune on the training data from this dataset. We also present our model trained on Wikipedia examples and applied directly on the test set as well as our model trained on this dataset directly without training on Wikipedia examples. We report our models’ performance of accuracy on the test set in Table 6, along with baseline models presented from Onoe and Durrett (2019). We observe that our model out-performs all the baseline models. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_40",
"text": " To illustrate the efficiency of our bi-encoder model, we profiled retrieval speed on a server with Intel Xeon CPU E5-2698 v4 @ 2.20GHz and 512GB memory. At inference time, we first compute all entity embeddings for the pool of 5.9M entities. This step is resource intensive but can be paralleled. On 8 Nvidia Volta v100 GPUs, it takes about 2.8 hours to compute all entity embeddings. Given a query of mention embedding, we use FAISS Johnson et al. (2019) IndexFlatIP index type (exact search) to obtain top 100 entity candidates. On the WikilinksNED Unseen-Mentions test dataset which contains 10K queries, it takes 9.2 ms on average to return top 100 candidates per query in batch mode. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_41",
"text": " We also explore the approximate search options using FAISS. We choose the IndexHNSWFlat index type following Karpukhin et al. (2020). It takes additional time in index construction while reduces the average time used per query. In Table 7, we see that HNSW1𝐻𝑁𝑆subscript𝑊1HNSW_{1}777Neighbors to store per node: 128, construction time search depth: 200, search depth: 256; construction time: 2.1h. reduces the average query time to 2.6 ms with less than 1.2% drop in accuracy and recall, and HNSW2𝐻𝑁𝑆subscript𝑊2HNSW_{2}888Neighbors to store per node: 128, construction time search depth: 200, search depth: 128; construction time: 1.8h. further reduce the query time to 1.4 ms with less than 2.1% drop. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
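The HNSW settings reported above map onto FAISS index parameters; the sketch below assumes an inner-product HNSW index is acceptable and uses the HNSW_1 configuration from the text (the data here is again a random stand-in).

```python
import numpy as np
import faiss

d = 768
entity_vecs = np.random.rand(100_000, d).astype("float32")  # stand-in for cached entity embeddings

index = faiss.IndexHNSWFlat(d, 128, faiss.METRIC_INNER_PRODUCT)  # 128 neighbors stored per node
index.hnsw.efConstruction = 200     # construction-time search depth
index.hnsw.efSearch = 256           # query-time search depth (HNSW_1; 128 for HNSW_2)
index.add(entity_vecs)

query = np.random.rand(1, d).astype("float32")
scores, ids = index.search(query, 100)
```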
{
"id": "1911.03814_all_42",
"text": " In a two-stage entity linking systems, the choice of number of candidates retrieved influences the overall model performance. Prior work often used a fixed number of k𝑘k candidates where k𝑘k ranges from 555 to 100100100 (for instance, Yamada et al. (2016) and Ganea and Hofmann (2017) choose k=30𝑘30k=30, Logeswaran et al. (2019) choose k=64𝑘64k=64). When k𝑘k is larger, the recall accuracy increases, however, the ranking stage accuracy is likely to decrease. Further, increasing k𝑘k would often increase the run-time on the ranking stage. We explore different choices of k𝑘k in our model, and present the recall@K𝑟𝑒𝑐𝑎𝑙𝑙@𝐾recall@K curve, ranking stage accuracy and overall accuracy in Figure 3. Based on the overall accuracy, we found that k=10𝑘10k=10 is optimal. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_43",
"text": " In this section, we present results on knowledge distillation, using our cross-encoder as a teacher model and bi-encoder as a student model. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_44",
"text": " We experiment knowledge distillation on the TACKBP-2010 and the WikilinksNED Unseen-Mentions dataset. We use the bi-encoder pretrained on Wikipedia as the student model, and fine-tune it on each dataset with knowledge distillation from the teacher model, which is the best performing cross-encoder model pretrained on Wikipedia and fine-tuned on the dataset. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_45",
"text": " We also fine-tune the student model in our experiments on each dataset, without the knowledge distillation component, as baseline models. As we can see in Table 9, the bi-encoder model trained with knowledge distillation from cross-encoder out-performs the bi-encoder without knowledge distillation, providing another point in the accuracy-speed trade-off curve for these architectures. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_46",
"text": " Table 8 presents some examples from our bi-encoder and cross-encoder model predictions, to provide intuition for how these two models consider context and mention for entity linking. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_47",
"text": " In the first example, we see that the bi-encoder mistakenly links “Ronaldo” to the Brazilian football player, while the cross-encoder is able to use context word “Juventus” to disambiguate. In the second example, the cross-encoder is able to identify from context that the sentence is describing art instead of fiction, where the bi-encoder failed. In the third example, the bi-encoder is able to find the correct entity “Ancient Greek,”; where the cross-encoder mistakenly links it to the entity “Ancient Greek philosophy,” likely because that the word “philosophers” is in context. We observe that cross-encoder is often better at utilizing context information than bi-encoder, but can sometimes make mistakes because of misleading context cues. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
},
{
"id": "1911.03814_all_48",
"text": " We proposed a conceptually simple, scalable, and highly effective two stage approach for entity linking. We show that our BERT-based model outperforms IR methods for entity retrieval, and achieved new state-of-the-art results on recently introduced zero-shot entity linking dataset, WikilinksNED Unseen-Mentions dataset, and the more established TACKBP-2010 benchmark, without any task-specific heuristics or external entity knowledge. We present evaluations of the accuracy-speed trade-off inherent to large pre-trained models, and show that it is possible to achieve efficient linking with modest loss of accuracy. Finally, we show that knowledge distillation can further improve bi-encoder model performance. Future work includes: • Enriching entity representations by adding entity type and entity graph information; • Modeling coherence by jointly resolving mentions in a document; • Extending our work to other languages and other domains; • Joint models for mention detection and entity linking. ",
"title": "Scalable Zero-shot Entity Linking with Dense Entity Retrieval"
}
] |
How does one arrive at the intuition that shorter rules have fewer sub-goals?
|
If smaller LMs are utilised, then one may need to split the problem into sub-problems even more (e.g., further decomposing the one-to-many comparisons in the selection module) [55].
|
[
55
] |
[
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is an important building block for automated knowledge discovery and holds the key for future advances across various scientific domains. While in recent years tremendous progress has been made towards natural language understanding thanks to pretrained language models (LMs) (Brown et al., 2020; Chowdhery et al., 2022, i.a.,), the performance of these models for logical reasoning still lags behind Rae et al. (2021); Creswell et al. (2023); Valmeekam et al. (2022) compared to the advancements in other areas such as reading comprehension and question-answering. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_1",
"text": " While many problems benefit from LM scaling, scaling has been observed to provide limited benefit for solving complex reasoning problems. For example, Creswell et al. (2023) observed that for the Gopher family of LMs (Rae et al., 2021), the benefit of scaling for logic-based tasks is significantly worse than for other language tasks. Moreover, while finetuning initially seemed to enable logical reasoning in LMs Clark et al. (2021); Tafjord et al. (2021), further exploration revealed that finetuned LMs mostly exploit spurious correlations (e.g., the correlation between the number of rules and the label) as opposed to learning to reason Zhang et al. (2022b); Schlegel et al. (2022); Liu et al. (2023). Recently, prompting strategies such as Chain-of-Thought Wei et al. (2022) and Scratchpad (Nye et al., 2022) have contributed to improving performance of LMs on reasoning tasks, although they have been also shown to struggle with proof planning for more complex logical reasoning problems Saparov and He (2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_2",
"text": " One solution to the aforementioned problems is to integrate the strength and reliability of classical AI models in logical reasoning with LMs Garcez and Lamb (2020); Marcus (2020). In the literature, there are two major approaches to logical reasoning Poole and Mackworth (2010): 1. Forward Chaining (FC) where one starts from the facts and rules (“theory”), and iterates between making new inferences and adding them to the theory until the goal statement can be proved or disproved, 2. Backward Chaining (BC) where one starts from the goal and uses the rules to recursively decompose it into sub-goals until the sub-goals can be proved or disproved based on the theory. Previous approaches to reasoning with LMs mostly incorporate elements of FC into LMs Tafjord et al. (2021); Creswell et al. (2023). FC requires selecting a subset of facts and rules from the entire set, which might be difficult for an LM as it requires a combinatorial search over a large space. Moreover, deciding when to halt and declare failure to prove is challenging in FC, as also noted by Creswell et al. (2023), sometimes requiring specialized modules trained on intermediate labels Creswell and Shanahan (2022). Indeed, the classical automated reasoning literature is heavily weighted towards BC or goal-directed strategies for proof-finding. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_3",
"text": " In this paper, we show experimentally that BC is better suited for text-based deductive logical reasoning, as it does not require a combinatorial search for subset selection and there are more natural halting criteria for it. We develop a hybrid LAnguage Model augmented BAckwarD chAining technique (Lambada), where BC drives the high-level proof planning, and the LM performs the textual understanding and individual reasoning steps. We conduct experiments with challenging datasets for LM reasoning containing examples expressed in naturalistic text. The datasets contain proof chains of up to 555 hops in depth, and examples where the goal can neither be proved nor disproved from the provided theory. We show that Lambada achieves substantially higher deductive accuracy, and is considerably more likely to generate valid reasoning chains compared to other techniques which find correct conclusions with spurious proof traces, while also being more query efficient than other LM-based modular reasoning approaches. Our results strongly indicate that future work on reasoning with LMs should incorporate backward chaining or goal-directed planning strategies. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_4",
"text": " The deep learning based models that have been developed to solve text-based (logical) reasoning tasks can be categorized as follows (see Huang and Chang 2022 for a recent survey of the literature). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_5",
"text": " Pretraining on Relevant Tasks: Pretraining an LM on corpora relevant to the target reasoning task can lead to improvements Hendrycks et al. (2021); Shen et al. (2021). Pretraining is, however, costly especially for larger LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_6",
"text": " Implicit Reasoning: These approaches finetune LMs to produce the label directly given the input Clark et al. (2021); Betz et al. (2021); Saeed et al. (2021); Han et al. (2022); reasoning is expected to happen implicitly in the parameters of the LM. It has been shown that finetuning LMs on logical reasoning tasks makes them learn spurious correlations Zhang et al. (2022b); Schlegel et al. (2022), and is not robust to multi-hop reasoning Kassner et al. (2020). Besides, finetuning large LMs is costly especially when the dataset is large, and may introduce distributional shocks to the model Kazemi et al. (2023). In this paper, we focus on models that only take in-context examples as supervision. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_7",
"text": " Explicit Reasoning: Generating the intermediate reasoning steps such as the chain of reasoning Wei et al. (2022); Nye et al. (2022); Dalvi et al. (2021); Zelikman et al. (2022); Zhang et al. (2022a) has shown substantial improvement for many reasoning tasks Suzgun et al. (2022). Such chains have been explored both in the forward and the backward directions, e.g., using multiple constrained LMs for logical reasoning (Zhang et al., 2022a). Gontier et al. (2020) investigated how transformer models perform when trained to perform forward or backward chaining, and drew conclusions about their internal reasoning strategies. We compare against a popular recent prompting strategy, namely Chain-of-Thought (CoT) Wei et al. (2022), from this category. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_8",
"text": " Verifiers: To improve CoT, some works train a verifier using chain-level labels. The verifier takes a reasoning chain produced by the model as input and judges the quality of the chain Cobbe et al. (2021); Shen et al. (2021); Jhamtani and Clark (2020); Zelikman et al. (2022). Using this verifier, one can then generate multiple reasoning chains (e.g., by running the algorithm multiple times with different decoding temperatures) and use the best chain according to the verifier. Since Lambada also generates proofs, verifiers are also applicable to our algorithm. In this paper, we assume not having access to chain-level labels, and leave experiments with verifiers as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_9",
"text": " Length generalization: A number of approaches specifically look into whether LMs can generalize from examples requiring shorter reasoning chains (shown to them either as demonstration or as finetuning data) to examples requiring longer chains Anil et al. (2022); Tafjord et al. (2021). With our model, length generalization comes for free because the model learns the building blocks of solving the problem that are applied as many times as needed to solve the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_10",
"text": " Modular Reasoning: These approaches break the problem into smaller modules and use separate LMs to solve each module Zhou et al. (2022); Khot et al. (2023); Sprague et al. (2022); Zhou et al. (2023); Dua et al. (2022); Wang et al. (2022); Schlag et al. (2023). LM-based approaches to logical reasoning typically makes use of a single LM module; for example, in Tafjord et al. (2021), a single LM module iteratively and exhaustively infers all conclusions based on the facts and rules, and then the goal statement is compared against the final set of conclusions to confirm if it can be proved from the theory. Since exhaustively deriving all conclusions is computationally expensive, Creswell et al. (2023) consider a more scalable approach where the conclusions that are derived are informed by the goal; they iteratively apply two LLM modules one selecting a subset of the facts and rules informed by the goal and the other making new inferences based on the selected facts and rules and adding it back to the theory. In this paper, we compare against the second approach. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_11",
"text": " Natural Language Inference (NLI): Logical reasoning can also be understood as identifying whether a logical entailment relation holds between two propositions (premise and hypothesis; the premise is the theory and the hypothesis is the statement to be proved). In this sense, NLI models are also relevant, although inferences under NLI typically adopt a more relaxed notion of entailment rather than purely logical Dagan et al. (2013); Bowman et al. (2015); Williams et al. (2018). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_12",
"text": " We focus on performing automated reasoning over facts, i.e., natural language assertions such as ‘‘Nice people are red’’, that are coherent but not necessarily grounded in reality. A rule is a natural language statement that is either of the form, or can be rewritten in the form, ‘‘If P then Q’’; e.g., ‘‘Rough, cold people are blue’’ can be rewritten as ‘‘If a person is rough and cold, then they are blue’’. P is called the antecedent and Q is called the consequent of the rule. A theory 𝒞𝒞\\mathcal{C} consists of facts ℱ={f1,f2,…,fn}ℱsubscript𝑓1subscript𝑓2…subscript𝑓𝑛\\mathcal{F}=\\{f_{1},f_{2},\\dots,f_{n}\\} and rules ℛ={r1,r2,…,rm}ℛsubscript𝑟1subscript𝑟2…subscript𝑟𝑚\\mathcal{R}=\\{r_{1},r_{2},\\dots,r_{m}\\}. We let 𝒢𝒢\\mathcal{G} represent a goal that we would like to prove or disprove based on the theory. An example theory with fictional characters and rules is demonstrated in Figure 1. Based on the theory, one should prove or disprove the goal ‘‘Eric is nice’’. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_13",
"text": " Backward chaining (BC) is a strategy for reasoning that starts from the goal and recursively breaks the goal into sub-goals based on the rules that can be applied to it, until the sub-goals can be proved or disproved based on the facts or no more rules can be applied to break down the sub-goal further. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_14",
"text": " Figure 1 shows an example of BC applied to a theory to prove a goal. Initially, BC verifies if the goal can be proved or disproved based on the facts (this step is omitted from the figure). Since none of the facts directly prove or disprove the goal, BC next selects a rule that can be applied to break down the goal into sub-goals. Whether or not a rule applies to a goal is determined by an operation called unification in logic; Rule6 has the same consequent as the goal so the operation can be applied, but the other rules have different consequents and it cannot be applied. Using Rule6, the goal can be broken down into three sub-goals that should be proved for the goal to be proved. BC then makes recursive calls to prove each sub-goal. The algorithm continues until either a halting criterion is reached (e.g., reaching a certain depth in search), or a sub-goal can no longer be broken down (e.g., the left sub-tree under ‘‘Eric is rough’’), or all sub-goals are proved (e.g., the right sub-tree under ‘‘Eric is rough’’). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_15",
"text": " The outcome of BC for a goal is either Proved, Disproved, or Unknown; e.g., its output for the goal in Figure 1 is Proved, for ‘‘Fred is not green?’’ is Disproved (because it contradicts Fact3), and for ‘‘Fred is round?’’ is Unknown (because the theory does not entail or contradict it). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_16",
"text": " To enable applying BC for text-based reasoning, we introduce four LM-based modules: Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement, each implemented by showing relevant in-context demonstrations to a pretrained LM (see Appendix D.3 for details). We describe these modules and then proceed to the full algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_17",
"text": " Given a set of facts ℱℱ\\mathcal{F} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Fact Check module verifies if there exists a fact f∈ℱ𝑓ℱf\\in\\mathcal{F} such that f𝑓f entails 𝒢𝒢\\mathcal{G} (in which case the goal is proved) or f𝑓f entails the negation of 𝒢𝒢\\mathcal{G} (in which case the goal is disproved). If no such fact can be found, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_18",
"text": " We implement Fact Check with two sub-modules: the first sub-module selects a fact from the set of facts that is most relevant to the goal, and the second sub-module verifies if the goal can be proved or disproved based on that fact.111Note that we select only one fact because the goals and sub-goals in the datasets we work with can be proved/disproved using single facts; The two modules can be adapted to selected multiple facts if this is not the case. Since the first sub-module may fail to identify the best fact on the first try, if the truth of the goal remained unknown after one try, the selected fact can be removed and the sub-modules can be called again. This process can be repeated multiple times. In our experiments, we call the two sub-modules twice. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_19",
"text": " Given a set of rules ℛℛ\\mathcal{R} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Rule Selection module identifies the rules r∈ℛ𝑟ℛr\\in\\mathcal{R} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}. These rules are then used for decomposing the goal into sub-goals. If no such rule can be identified, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_20",
"text": " As we did for Fact Check, we implement Rule Selection with two sub-modules: the first sub-module identifies the consequent of each rule (independent of the goal), and the second sub-module takes the rule consequents and the goal as input and identifies which one unifies with the goal. Note that due to the recursive nature of BC, the Rule Selection module may be invoked multiple times during the proof of a goal. Since identifying the consequent of each rule is independent of the goal, this sub-module only needs to be called once. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_21",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}, the Goal Decomposition module identifies the sub-goals that need to be proved in order for 𝒢𝒢\\mathcal{G} to be proved or disproved. The sub-goals are identified based on the antecedent of r𝑟r. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_22",
"text": " In the case where we succeed in proving the antecedent of r𝑟r, whether the goal is proved or disproved depends on whether the sign of the goal agrees or disagrees with the sign of the consequent of r𝑟r. For instance, in Figure 1, for the goal ‘‘Eric is nice.’’, since the sign of the goal agrees with the sign of the consequent of Rule6 and the antecedent of the rule is proved, we conclude that the goal is proved. However, if Rule6 was ‘‘(...) is not going to be a nice individual.’’, then the sign of the goal would disagree with the sign of the consequent and so we would conclude that the goal is disproved. This motivates the fourth module, Sign Agreement, described below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_23",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G}, the Sign Agreement module verifies if the sign of the consequent of r𝑟r agrees or disagrees with the sign of the goal or not. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_24",
"text": " Algorithm 1 provides a high-level description of how the four LM modules described earlier can be integrated with BC to enable text-based logical reasoning (the function calls corresponding to LM modules are color-coded). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_25",
"text": " Lambada can be understood as a depth-first search algorithm over the facts and the rules. It takes as input a theory 𝒞=(ℱ,ℛ)𝒞ℱℛ\\mathcal{C}=(\\mathcal{F},\\mathcal{R}), a goal 𝒢𝒢\\mathcal{G}, and a depth D𝐷D that defines a halting criterion for the algorithm based on the maximum allowed depth for the search. The search depth is a natural halting criterion corresponding to the maximum number of reasoning hops required for answering questions. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_26",
"text": " Initially, the algorithm uses the Fact Check module to check if 𝒢𝒢\\mathcal{G} can be proved or disproved using the facts. If this is the case, then the algorithm stops and returns the result (Proved or Disproved). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_27",
"text": " If 𝒢𝒢\\mathcal{G} cannot be proved or disproved, then the algorithm checks the depth D𝐷D: if D=0𝐷0D=0, then the algorithm stops and returns Unknown indicating that 𝒢𝒢\\mathcal{G} could not be proved or disproved. Otherwise, the algorithm proceeds with applying rules. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_28",
"text": " The Rule Selection module is used to identify the rules ℛssubscriptℛ𝑠\\mathcal{R}_{s} from ℛℛ\\mathcal{R} whose consequent unifies with 𝒢𝒢\\mathcal{G}. Once the set ℛssubscriptℛ𝑠\\mathcal{R}_{s} is identified, if Lambada can start with the rules that have a higher chance of succeeding at (dis)proving the goal, it can save computations and be less error-prone. Therefore, we include a Rerank function in Lambada. Based on the intuition that shorter rules are likely to have fewer sub-goals (hence a higher chance of success), we start the search from shorter rules and proceed to longer rules if the shorter ones fail. We leave more sophisticated ranking strategies as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_29",
"text": " For each selected rule, the algorithm uses the Goal Decomposition module to decompose 𝒢𝒢\\mathcal{G} into a set of sub-goals 𝐆𝐆\\mathbf{G} that need to be proved and checks whether those sub-goals can be proved by making recursive calls to the algorithm (with reduced depth). If the sub-goals can be proved, then the algorithm uses the Sign Agreement module to check whether the sign of the rule consequent agrees or disagrees with the sign of 𝒢𝒢\\mathcal{G}. If it does, then the algorithm returns Proved and otherwise Disproved. If there is no rule for which the sub-goals can be proved, then Unknown is returned. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_30",
"text": " During a proof, Lambada may be called multiple times with the same theory and goal; in Appendix A we explain how cycles and redundant computations can be avoided using a cache. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_31",
"text": " We describe our baselines and datasets here, and provide further implementation details in Appendix D. Unless stated otherwise, all experiments are based on the PaLM 540B model Chowdhery et al. (2022). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_32",
"text": " We compare against the following two baselines. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_33",
"text": " Chain-of-Thought (CoT) Wei et al. (2022) is a popular neural approach based on demonstrating chains of inference to the LM within the in-context prompt. In addition to the few-shot demonstrations in <INPUT>/<LABEL> format in typical in-context learning settings, in CoT, an intermediate explanation for the label is also provided (<INPUT>/<EXPLANATION>/<LABEL>). In our work, the explanation corresponds to the proof. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_34",
"text": " Selection-Inference (SI) Creswell et al. (2023) is a strong modular reasoning approach based on forward chaining. SI contains two modules: (1) selection, which, guided by the goal, selects a subset of the facts and rules from which new conclusions can be derived toward proving the goal, and (2) inference, which takes the selected facts and rules and derives a new conclusion. The two modules are called iteratively, each time producing a single conclusion that is added back to the theory before the next iteration. The iterations continue until a halting criterion is met (a fixed number of steps in Creswell et al. 2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_35",
"text": " We experiment with challenging deductive logical reasoning datasets outlined below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_36",
"text": " ProofWriter Tafjord et al. (2021) is a commonly used synthetic dataset for testing logical reasoning when facts and rules are expressed in naturalistic text. It contains two subsets: an open-world assumption (OWA) subset and a closed-world assumption (CWA) subset. In this paper, we use the OWA subset. Each example is a (theory, goal) pair and the label is one of {{\\{Proved, Disproved, Unknown}}\\} where Unknown indicates that the goal can neither be proved nor disproved. The dataset has five parts, each part requiring 00, ≤1absent1\\leq 1, ≤2absent2\\leq 2, ≤3absent3\\leq 3 and ≤5absent5\\leq 5 hops of reasoning, respectively. We report two sets of results on this dataset: (1) with examples labeled Unknown removed (for compatibility with previous work), and (2) with all three labels. Note that intermediate proof chains from ProofWriter are not used by our models in making predictions. For both cases, due to the cost of inference, we used the first 100010001000 examples in the test set. Hereafter, we refer to these two subsets as ProofWriter-PD and ProofWriter-PUD. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_37",
"text": " PrOntoQA Saparov and He (2023) is a synthetic dataset created to analyze the capacity of LM-based approaches for logical reasoning. Compared to ProofWriter, PrOntoQA has lower natural language diversity and less l fact/rule variations (e.g., no conjunctions). However, the search traces typically contain multiple paths with only one of them leading to the proof, thus enabling testing the proof planning of different models. This dataset has multiple versions; we use the fictional characters version, which is one of the hardest versions according to Saparov and He (2023). Similarly to ProofWriter, each version of PrOntoQA is divided into different parts depending on the depth of reasoning chains required (111, 333, and 555 hops). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_38",
"text": " ParaRules Tafjord et al. (2021) is a version of ProofWriter where the synthetically generated sentences in the theory are rewritten by crowdworkers to increase diversity and naturalness of the text. This lets us move beyond evaluating reasoning with templatic expressions, which is a key limitation of the other datasets. Each fact in ParaRules may be a combination of several sub-facts (see Fig. 1 for an example). The examples require proof depths of up to 555 and the label can be Proved, Disproved, or Unknown. We found some minor quality issues in ParaRules; we manually verified and fixed the first 500500500 examples of the test set (see Appendix D.2) and used this set for evaluation. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_39",
"text": " We now describe the results and compare Lambada and the baselines in detail. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_40",
"text": " The results are reported in Figure 2, (a)–(d).222Due to the low performance of SI on ProofWriter and PrOntoQA and its high number of LM calls (see Figure 7), we only compared Lambada against CoT for ParaRules. Lambada significantly outperforms the baselines, especially on ProofWriter-PUD which contains Unknown labels (44%percent4444\\% relative improvement compared to CoT and 56%percent5656\\% compared to SI on Depth-5), the higher depths of PrOntoQA (37%percent3737\\% relative improvement compared to CoT and 113%percent113113\\% compared to SI on Depth-5), and the ParaRules dataset (43%percent4343\\% relative improvement compared to CoT). These results overall show the merit of Lambada for logical reasoning. We highlight that the reasoning capacity of Lambada robustly generalizes to more naturalistic expressions, as demonstrated by the high accuracy on ParaRules, which is exactly the desired outcome of combining the strengths of an LM and a symbolic reasoning algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_41",
"text": " The results in Figure 2(a) reveal a shortcoming of the CoT approach in dealing with Unknown labels. That is, unlike the examples for which the label is Proved or Disproved, there is no natural chain of thought for the examples whose labels are Unknown. Nevertheless, the performance of CoT is competitive for the ProofWriter-PD dataset, and the accuracy does not diminish substantially with increasing depth. We investigate the reason for this behaviour of CoT in the next section. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_42",
"text": " To understand the reason behind the high accuracy of CoT on higher depths of ProofWriter-PD, we randomly selected 505050 examples from Depth-5 of the dataset where CoT predicted the label correctly, and manually verified if the proof chain is correct or not. For comparison, we also manually verified the proofs generated by Lambada following a similar procedure. The results are reported in Figure 2(e). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_43",
"text": " While Lambada mostly produces correct chains, CoT produces correct chains only for 28%percent2828\\% of the examples. We find that hallucination is the main source of error (48%percent4848\\% of the examples; see Appendix B.2 for other prominent failure modes). The hallucinated facts and rules mostly resulted in shortcuts to the correct answer. This hints at the possibility of spurious correlations in ProofWriter-PD that can be exploited by CoT (see Appendix B.2, Figure 10 for examples). This result is consistent with previous work showing that when LMs are asked to solve logical reasoning end-to-end, they rely on spurious correlations Zhang et al. (2022b). Note that for modular approaches like SI and Lambada, the intermediate modules are impervious to the spurious correlations between the input and the label and do not suffer from this issue. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_44",
"text": " As previously explained, SI is based on forward chaining and its selection module requires a combinatorial search to find the right subset of facts and rules (see Appendix C), and the search space becomes progressively larger in each iteration of the algorithm as new inferences are added to the theory. To verify whether the increase in the search space makes forward chaining progressively harder, we measured the success rate of the k𝑘k-th inference of SI for different values of k𝑘k on Depth-5 of PrOntoQA (see Appendix B.3 for details). From the results in Figure 3, we can see that the success rate indeed decreases in the later inferences of the model, where the size of the input theory is larger and therefore a larger space needs to be searched to find the right combination of facts and rules. Note that none of the components in Lambada require selecting a subset, hence no combinatorial search is required (see Appendix C for more details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_45",
"text": " SI also suffers from inferring redundant facts. Figure 4 reports the number of unique inferences from SI for the examples in ProofWriter-PD (Depth-5) where SI incorrectly predicted Unknown (i.e., examples where a proof exists but SI failed to find it). The result shows that SI inferences contained no redundant facts only 29%percent2929\\% of the time; in 7%percent77\\% of the cases, all 555 inferred facts were identical, and in another 10%percent1010\\%, only two unique inferences were made. This shows that SI, and maybe more generally forward-chaining approaches, suffer from redundant inference. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_46",
"text": " SI also over-predicts Disproved in the binary case and Unknown in the three-way classification case (see Appendix B.4), performing even worse than the majority class for Depth-5 of PrOntoQA which has more Proved labels than Disproved. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_47",
"text": " These results, together with Figure 2, show that backward chaining (which is the backbone of reasoning in Lambada) is a better choice compared to forward chaining (the backbone in SI). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_48",
"text": " Our results may raise the question of whether it is enough to directly incorporate the steps of backward chaining into CoT prompts, or if modularity (as in Lambada) is also needed. To answer this question, we experiment with a backward version of CoT where the proofs are written in the backward direction from the goal to the premises. The label accuracies are presented in Figure 5(a)–(b) for ProofWriter-PUD and ProofWriter-PD, and their proof accuracy on ProofWriter-PD (Depth-5) in Figure 5(c). The label accuracy of forward and backward CoT are comparable, but forward CoT leads to better performance on PUD and backward CoT leads to better performance on PD. For proof accuracy, however, we see a clear difference between the two versions where backward CoT produces substantially lower quality proofs compared to forward chaining. This result is consistent with the observations of Gontier et al. (2020) for finetuned LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_49",
"text": " The above results show that a modular formulation (as in Lambada) is key to successful logical reasoning and simply providing CoT in the backward direction does not suffice. We note, however, that future work can use the traces of our model to finetune (smaller) language models (e.g., Zelikman et al. 2022), or use the traces as training data in future language models to improve their performance with CoT prompting. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_50",
"text": " Taking the label and proof accuracy results together, there is also a potential that backward CoT models are more heavily relying on spurious correlations for the PD case where backward CoT outperformed CoT, as backward CoT achieves a similar label accuracy as forward CoT but with a much lower proof accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_51",
"text": " In Figure 1, we show the search trace created by Lambada for an example from ParaRules, where the answer was predicted correctly. From the figure, one can see how backward chaining helps Lambada effectively search and create the reasoning chain and how the LM helps fact checking, rule selection, goal decomposition, and sign agreement checking. In Appendix B.1, we include an example that has a much larger search trace. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_52",
"text": " To understand which components in Lambada are responsible for the failure cases, we computed the individual accuracy of the four modules described in Section 3. For this purpose, we created four datasets from the validation set of ProofWriter, each measuring only the performance of one module in isolation (see Appendix D.1 for details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_53",
"text": " Based on the results of the PaLM 540B model in Figure 6, Rule Selection is the lowest performing module followed by Goal Decomposition. It is possible that the Rule Selection module (partially) fails for some examples but Lambada still arrives at the correct conclusion and proof (e.g., if in Figure 1 the third call to Rule Selection only returned Rule5). For Fact Check, when we allow the model to only select one fact, the accuracy is 0.940.940.94 but when we allow the model to select two facts, the accuracy is near perfect. The Sign Agreement module also shows near-perfect accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_54",
"text": " We repeat the experiment from Section 5.6 with PaLM 62B and 8B to examine the effect of LM scale on Lambada. According to the results in Figure 6, when we use PaLM 62B, the performance of the Goal Decomposition and Sign Agreement modules remain comparable, but the performance for the Fact Check and Rule Selection modules drop substantially. Unlike the first two modules, the second two rely on a one-to-many comparison between the goal and each of the facts/rules which may require a larger model capacity. Moreover, we observe that in PaLM 8B, the accuracy for all components drops significantly, in some cases becoming close to random prediction. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_55",
"text": " We argue that the extent to which the higher-level reasoning algorithm breaks the problem into sub-problems should be dependent on the scale and power of the base LMs. If smaller LMs are used, then one may need finer-grained problem decomposition (e.g., further decomposing the one-to-many comparisons in the selection module). And as LMs become larger and stronger in the future, one could rely on them to solve problems with a coarser-grained decomposition of the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_56",
"text": " Another advantage of Lambada is its efficiency compared to other approaches that require multiple LM inference calls per example such as SI. In Figure 7, we compare the average number of LM calls per example, for different depths of ProofWriter-PUD. Lambada requires much fewer calls compared to SI, especially at higher depths: for Depth-1, Lambada requires 3.8x fewer calls whereas for Depth-5 it requires 11.8x fewer calls. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_57",
"text": " To analyze the lexical sensitivity of Lambada, we modified the test set of ProofWriter-PUD by replacing various lexical items (names, adjectives, and verbs) with novel tokens and the rule templates with novel ones. We then compared the performance of Lambada on the original and the modified test sets using the same few-shot examples. The details of the modifications are in Appendix B.5. As can be seen in Figure 8, the performance of Lambada remains almost unchanged, demonstrating robustness to lexical and templatic variations. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_58",
"text": " We developed Lambada, an algorithm for deductive logical reasoning with natural language that combines the capacity of LMs to handle naturalistic text input with the backward chaining algorithm for robust symbolic reasoning. We showed that Lambada achieves significant improvements over competitive approaches on challenging benchmarks, both in terms of label accuracy (predicting if a statement can be proved or disproved based on a theory) and proof accuracy. Importantly, this improvement was also observed in a dataset that expresses the theory in more naturalistic expressions, clearly illustrating the benefit of combining an LM with reasoning modules. We also demonstrated the query efficiency and lexical robustness of Lambada. Although in this paper we only experiment with formal reasoning problems and datasets, we believe our key insight on the efficacy of backward, goal-directed reasoning with LMs has broader implications and can be adapted to other NLP tasks where multi-step inference is required. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
}
] |
Based on the results of the baseline and other models, would you rule out the occurrence of overfitting in the data? How?
|
We can observe that there is little overfitting of the data [40].
|
[
40
] |
[
{
"id": "2212.13894_all_0",
"text": " Automated reasoning, the ability to draw valid conclusions from explicitly provided knowledge, has been a fundamental goal for AI since its early days McCarthy (1959); Hewitt (1969). Furthermore, logical reasoning, especially reasoning with unstructured, natural text is an important building block for automated knowledge discovery and holds the key for future advances across various scientific domains. While in recent years tremendous progress has been made towards natural language understanding thanks to pretrained language models (LMs) (Brown et al., 2020; Chowdhery et al., 2022, i.a.,), the performance of these models for logical reasoning still lags behind Rae et al. (2021); Creswell et al. (2023); Valmeekam et al. (2022) compared to the advancements in other areas such as reading comprehension and question-answering. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_1",
"text": " While many problems benefit from LM scaling, scaling has been observed to provide limited benefit for solving complex reasoning problems. For example, Creswell et al. (2023) observed that for the Gopher family of LMs (Rae et al., 2021), the benefit of scaling for logic-based tasks is significantly worse than for other language tasks. Moreover, while finetuning initially seemed to enable logical reasoning in LMs Clark et al. (2021); Tafjord et al. (2021), further exploration revealed that finetuned LMs mostly exploit spurious correlations (e.g., the correlation between the number of rules and the label) as opposed to learning to reason Zhang et al. (2022b); Schlegel et al. (2022); Liu et al. (2023). Recently, prompting strategies such as Chain-of-Thought Wei et al. (2022) and Scratchpad (Nye et al., 2022) have contributed to improving performance of LMs on reasoning tasks, although they have been also shown to struggle with proof planning for more complex logical reasoning problems Saparov and He (2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_2",
"text": " One solution to the aforementioned problems is to integrate the strength and reliability of classical AI models in logical reasoning with LMs Garcez and Lamb (2020); Marcus (2020). In the literature, there are two major approaches to logical reasoning Poole and Mackworth (2010): 1. Forward Chaining (FC) where one starts from the facts and rules (“theory”), and iterates between making new inferences and adding them to the theory until the goal statement can be proved or disproved, 2. Backward Chaining (BC) where one starts from the goal and uses the rules to recursively decompose it into sub-goals until the sub-goals can be proved or disproved based on the theory. Previous approaches to reasoning with LMs mostly incorporate elements of FC into LMs Tafjord et al. (2021); Creswell et al. (2023). FC requires selecting a subset of facts and rules from the entire set, which might be difficult for an LM as it requires a combinatorial search over a large space. Moreover, deciding when to halt and declare failure to prove is challenging in FC, as also noted by Creswell et al. (2023), sometimes requiring specialized modules trained on intermediate labels Creswell and Shanahan (2022). Indeed, the classical automated reasoning literature is heavily weighted towards BC or goal-directed strategies for proof-finding. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_3",
"text": " In this paper, we show experimentally that BC is better suited for text-based deductive logical reasoning, as it does not require a combinatorial search for subset selection and there are more natural halting criteria for it. We develop a hybrid LAnguage Model augmented BAckwarD chAining technique (Lambada), where BC drives the high-level proof planning, and the LM performs the textual understanding and individual reasoning steps. We conduct experiments with challenging datasets for LM reasoning containing examples expressed in naturalistic text. The datasets contain proof chains of up to 555 hops in depth, and examples where the goal can neither be proved nor disproved from the provided theory. We show that Lambada achieves substantially higher deductive accuracy, and is considerably more likely to generate valid reasoning chains compared to other techniques which find correct conclusions with spurious proof traces, while also being more query efficient than other LM-based modular reasoning approaches. Our results strongly indicate that future work on reasoning with LMs should incorporate backward chaining or goal-directed planning strategies. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_4",
"text": " The deep learning based models that have been developed to solve text-based (logical) reasoning tasks can be categorized as follows (see Huang and Chang 2022 for a recent survey of the literature). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_5",
"text": " Pretraining on Relevant Tasks: Pretraining an LM on corpora relevant to the target reasoning task can lead to improvements Hendrycks et al. (2021); Shen et al. (2021). Pretraining is, however, costly especially for larger LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_6",
"text": " Implicit Reasoning: These approaches finetune LMs to produce the label directly given the input Clark et al. (2021); Betz et al. (2021); Saeed et al. (2021); Han et al. (2022); reasoning is expected to happen implicitly in the parameters of the LM. It has been shown that finetuning LMs on logical reasoning tasks makes them learn spurious correlations Zhang et al. (2022b); Schlegel et al. (2022), and is not robust to multi-hop reasoning Kassner et al. (2020). Besides, finetuning large LMs is costly especially when the dataset is large, and may introduce distributional shocks to the model Kazemi et al. (2023). In this paper, we focus on models that only take in-context examples as supervision. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_7",
"text": " Explicit Reasoning: Generating the intermediate reasoning steps such as the chain of reasoning Wei et al. (2022); Nye et al. (2022); Dalvi et al. (2021); Zelikman et al. (2022); Zhang et al. (2022a) has shown substantial improvement for many reasoning tasks Suzgun et al. (2022). Such chains have been explored both in the forward and the backward directions, e.g., using multiple constrained LMs for logical reasoning (Zhang et al., 2022a). Gontier et al. (2020) investigated how transformer models perform when trained to perform forward or backward chaining, and drew conclusions about their internal reasoning strategies. We compare against a popular recent prompting strategy, namely Chain-of-Thought (CoT) Wei et al. (2022), from this category. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_8",
"text": " Verifiers: To improve CoT, some works train a verifier using chain-level labels. The verifier takes a reasoning chain produced by the model as input and judges the quality of the chain Cobbe et al. (2021); Shen et al. (2021); Jhamtani and Clark (2020); Zelikman et al. (2022). Using this verifier, one can then generate multiple reasoning chains (e.g., by running the algorithm multiple times with different decoding temperatures) and use the best chain according to the verifier. Since Lambada also generates proofs, verifiers are also applicable to our algorithm. In this paper, we assume not having access to chain-level labels, and leave experiments with verifiers as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_9",
"text": " Length generalization: A number of approaches specifically look into whether LMs can generalize from examples requiring shorter reasoning chains (shown to them either as demonstration or as finetuning data) to examples requiring longer chains Anil et al. (2022); Tafjord et al. (2021). With our model, length generalization comes for free because the model learns the building blocks of solving the problem that are applied as many times as needed to solve the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_10",
"text": " Modular Reasoning: These approaches break the problem into smaller modules and use separate LMs to solve each module Zhou et al. (2022); Khot et al. (2023); Sprague et al. (2022); Zhou et al. (2023); Dua et al. (2022); Wang et al. (2022); Schlag et al. (2023). LM-based approaches to logical reasoning typically makes use of a single LM module; for example, in Tafjord et al. (2021), a single LM module iteratively and exhaustively infers all conclusions based on the facts and rules, and then the goal statement is compared against the final set of conclusions to confirm if it can be proved from the theory. Since exhaustively deriving all conclusions is computationally expensive, Creswell et al. (2023) consider a more scalable approach where the conclusions that are derived are informed by the goal; they iteratively apply two LLM modules one selecting a subset of the facts and rules informed by the goal and the other making new inferences based on the selected facts and rules and adding it back to the theory. In this paper, we compare against the second approach. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_11",
"text": " Natural Language Inference (NLI): Logical reasoning can also be understood as identifying whether a logical entailment relation holds between two propositions (premise and hypothesis; the premise is the theory and the hypothesis is the statement to be proved). In this sense, NLI models are also relevant, although inferences under NLI typically adopt a more relaxed notion of entailment rather than purely logical Dagan et al. (2013); Bowman et al. (2015); Williams et al. (2018). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_12",
"text": " We focus on performing automated reasoning over facts, i.e., natural language assertions such as ‘‘Nice people are red’’, that are coherent but not necessarily grounded in reality. A rule is a natural language statement that is either of the form, or can be rewritten in the form, ‘‘If P then Q’’; e.g., ‘‘Rough, cold people are blue’’ can be rewritten as ‘‘If a person is rough and cold, then they are blue’’. P is called the antecedent and Q is called the consequent of the rule. A theory 𝒞𝒞\\mathcal{C} consists of facts ℱ={f1,f2,…,fn}ℱsubscript𝑓1subscript𝑓2…subscript𝑓𝑛\\mathcal{F}=\\{f_{1},f_{2},\\dots,f_{n}\\} and rules ℛ={r1,r2,…,rm}ℛsubscript𝑟1subscript𝑟2…subscript𝑟𝑚\\mathcal{R}=\\{r_{1},r_{2},\\dots,r_{m}\\}. We let 𝒢𝒢\\mathcal{G} represent a goal that we would like to prove or disprove based on the theory. An example theory with fictional characters and rules is demonstrated in Figure 1. Based on the theory, one should prove or disprove the goal ‘‘Eric is nice’’. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_13",
"text": " Backward chaining (BC) is a strategy for reasoning that starts from the goal and recursively breaks the goal into sub-goals based on the rules that can be applied to it, until the sub-goals can be proved or disproved based on the facts or no more rules can be applied to break down the sub-goal further. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_14",
"text": " Figure 1 shows an example of BC applied to a theory to prove a goal. Initially, BC verifies if the goal can be proved or disproved based on the facts (this step is omitted from the figure). Since none of the facts directly prove or disprove the goal, BC next selects a rule that can be applied to break down the goal into sub-goals. Whether or not a rule applies to a goal is determined by an operation called unification in logic; Rule6 has the same consequent as the goal so the operation can be applied, but the other rules have different consequents and it cannot be applied. Using Rule6, the goal can be broken down into three sub-goals that should be proved for the goal to be proved. BC then makes recursive calls to prove each sub-goal. The algorithm continues until either a halting criterion is reached (e.g., reaching a certain depth in search), or a sub-goal can no longer be broken down (e.g., the left sub-tree under ‘‘Eric is rough’’), or all sub-goals are proved (e.g., the right sub-tree under ‘‘Eric is rough’’). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_15",
"text": " The outcome of BC for a goal is either Proved, Disproved, or Unknown; e.g., its output for the goal in Figure 1 is Proved, for ‘‘Fred is not green?’’ is Disproved (because it contradicts Fact3), and for ‘‘Fred is round?’’ is Unknown (because the theory does not entail or contradict it). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_16",
"text": " To enable applying BC for text-based reasoning, we introduce four LM-based modules: Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement, each implemented by showing relevant in-context demonstrations to a pretrained LM (see Appendix D.3 for details). We describe these modules and then proceed to the full algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_17",
"text": " Given a set of facts ℱℱ\\mathcal{F} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Fact Check module verifies if there exists a fact f∈ℱ𝑓ℱf\\in\\mathcal{F} such that f𝑓f entails 𝒢𝒢\\mathcal{G} (in which case the goal is proved) or f𝑓f entails the negation of 𝒢𝒢\\mathcal{G} (in which case the goal is disproved). If no such fact can be found, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_18",
"text": " We implement Fact Check with two sub-modules: the first sub-module selects a fact from the set of facts that is most relevant to the goal, and the second sub-module verifies if the goal can be proved or disproved based on that fact.111Note that we select only one fact because the goals and sub-goals in the datasets we work with can be proved/disproved using single facts; The two modules can be adapted to selected multiple facts if this is not the case. Since the first sub-module may fail to identify the best fact on the first try, if the truth of the goal remained unknown after one try, the selected fact can be removed and the sub-modules can be called again. This process can be repeated multiple times. In our experiments, we call the two sub-modules twice. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_19",
"text": " Given a set of rules ℛℛ\\mathcal{R} from the theory and a goal 𝒢𝒢\\mathcal{G}, the Rule Selection module identifies the rules r∈ℛ𝑟ℛr\\in\\mathcal{R} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}. These rules are then used for decomposing the goal into sub-goals. If no such rule can be identified, then the truth of 𝒢𝒢\\mathcal{G} remains unknown. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_20",
"text": " As we did for Fact Check, we implement Rule Selection with two sub-modules: the first sub-module identifies the consequent of each rule (independent of the goal), and the second sub-module takes the rule consequents and the goal as input and identifies which one unifies with the goal. Note that due to the recursive nature of BC, the Rule Selection module may be invoked multiple times during the proof of a goal. Since identifying the consequent of each rule is independent of the goal, this sub-module only needs to be called once. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_21",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G} such that the consequent of r𝑟r unifies with 𝒢𝒢\\mathcal{G}, the Goal Decomposition module identifies the sub-goals that need to be proved in order for 𝒢𝒢\\mathcal{G} to be proved or disproved. The sub-goals are identified based on the antecedent of r𝑟r. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_22",
"text": " In the case where we succeed in proving the antecedent of r𝑟r, whether the goal is proved or disproved depends on whether the sign of the goal agrees or disagrees with the sign of the consequent of r𝑟r. For instance, in Figure 1, for the goal ‘‘Eric is nice.’’, since the sign of the goal agrees with the sign of the consequent of Rule6 and the antecedent of the rule is proved, we conclude that the goal is proved. However, if Rule6 was ‘‘(...) is not going to be a nice individual.’’, then the sign of the goal would disagree with the sign of the consequent and so we would conclude that the goal is disproved. This motivates the fourth module, Sign Agreement, described below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_23",
"text": " Given a rule r𝑟r and a goal 𝒢𝒢\\mathcal{G}, the Sign Agreement module verifies if the sign of the consequent of r𝑟r agrees or disagrees with the sign of the goal or not. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_24",
"text": " Algorithm 1 provides a high-level description of how the four LM modules described earlier can be integrated with BC to enable text-based logical reasoning (the function calls corresponding to LM modules are color-coded). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_25",
"text": " Lambada can be understood as a depth-first search algorithm over the facts and the rules. It takes as input a theory 𝒞=(ℱ,ℛ)𝒞ℱℛ\\mathcal{C}=(\\mathcal{F},\\mathcal{R}), a goal 𝒢𝒢\\mathcal{G}, and a depth D𝐷D that defines a halting criterion for the algorithm based on the maximum allowed depth for the search. The search depth is a natural halting criterion corresponding to the maximum number of reasoning hops required for answering questions. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_26",
"text": " Initially, the algorithm uses the Fact Check module to check if 𝒢𝒢\\mathcal{G} can be proved or disproved using the facts. If this is the case, then the algorithm stops and returns the result (Proved or Disproved). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_27",
"text": " If 𝒢𝒢\\mathcal{G} cannot be proved or disproved, then the algorithm checks the depth D𝐷D: if D=0𝐷0D=0, then the algorithm stops and returns Unknown indicating that 𝒢𝒢\\mathcal{G} could not be proved or disproved. Otherwise, the algorithm proceeds with applying rules. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_28",
"text": " The Rule Selection module is used to identify the rules ℛssubscriptℛ𝑠\\mathcal{R}_{s} from ℛℛ\\mathcal{R} whose consequent unifies with 𝒢𝒢\\mathcal{G}. Once the set ℛssubscriptℛ𝑠\\mathcal{R}_{s} is identified, if Lambada can start with the rules that have a higher chance of succeeding at (dis)proving the goal, it can save computations and be less error-prone. Therefore, we include a Rerank function in Lambada. Based on the intuition that shorter rules are likely to have fewer sub-goals (hence a higher chance of success), we start the search from shorter rules and proceed to longer rules if the shorter ones fail. We leave more sophisticated ranking strategies as future work. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_29",
"text": " For each selected rule, the algorithm uses the Goal Decomposition module to decompose 𝒢𝒢\\mathcal{G} into a set of sub-goals 𝐆𝐆\\mathbf{G} that need to be proved and checks whether those sub-goals can be proved by making recursive calls to the algorithm (with reduced depth). If the sub-goals can be proved, then the algorithm uses the Sign Agreement module to check whether the sign of the rule consequent agrees or disagrees with the sign of 𝒢𝒢\\mathcal{G}. If it does, then the algorithm returns Proved and otherwise Disproved. If there is no rule for which the sub-goals can be proved, then Unknown is returned. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_30",
"text": " During a proof, Lambada may be called multiple times with the same theory and goal; in Appendix A we explain how cycles and redundant computations can be avoided using a cache. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_31",
"text": " We describe our baselines and datasets here, and provide further implementation details in Appendix D. Unless stated otherwise, all experiments are based on the PaLM 540B model Chowdhery et al. (2022). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_32",
"text": " We compare against the following two baselines. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_33",
"text": " Chain-of-Thought (CoT) Wei et al. (2022) is a popular neural approach based on demonstrating chains of inference to the LM within the in-context prompt. In addition to the few-shot demonstrations in <INPUT>/<LABEL> format in typical in-context learning settings, in CoT, an intermediate explanation for the label is also provided (<INPUT>/<EXPLANATION>/<LABEL>). In our work, the explanation corresponds to the proof. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_34",
"text": " Selection-Inference (SI) Creswell et al. (2023) is a strong modular reasoning approach based on forward chaining. SI contains two modules: (1) selection, which, guided by the goal, selects a subset of the facts and rules from which new conclusions can be derived toward proving the goal, and (2) inference, which takes the selected facts and rules and derives a new conclusion. The two modules are called iteratively, each time producing a single conclusion that is added back to the theory before the next iteration. The iterations continue until a halting criterion is met (a fixed number of steps in Creswell et al. 2023). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_35",
"text": " We experiment with challenging deductive logical reasoning datasets outlined below. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_36",
"text": " ProofWriter Tafjord et al. (2021) is a commonly used synthetic dataset for testing logical reasoning when facts and rules are expressed in naturalistic text. It contains two subsets: an open-world assumption (OWA) subset and a closed-world assumption (CWA) subset. In this paper, we use the OWA subset. Each example is a (theory, goal) pair and the label is one of {{\\{Proved, Disproved, Unknown}}\\} where Unknown indicates that the goal can neither be proved nor disproved. The dataset has five parts, each part requiring 00, ≤1absent1\\leq 1, ≤2absent2\\leq 2, ≤3absent3\\leq 3 and ≤5absent5\\leq 5 hops of reasoning, respectively. We report two sets of results on this dataset: (1) with examples labeled Unknown removed (for compatibility with previous work), and (2) with all three labels. Note that intermediate proof chains from ProofWriter are not used by our models in making predictions. For both cases, due to the cost of inference, we used the first 100010001000 examples in the test set. Hereafter, we refer to these two subsets as ProofWriter-PD and ProofWriter-PUD. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_37",
"text": " PrOntoQA Saparov and He (2023) is a synthetic dataset created to analyze the capacity of LM-based approaches for logical reasoning. Compared to ProofWriter, PrOntoQA has lower natural language diversity and less l fact/rule variations (e.g., no conjunctions). However, the search traces typically contain multiple paths with only one of them leading to the proof, thus enabling testing the proof planning of different models. This dataset has multiple versions; we use the fictional characters version, which is one of the hardest versions according to Saparov and He (2023). Similarly to ProofWriter, each version of PrOntoQA is divided into different parts depending on the depth of reasoning chains required (111, 333, and 555 hops). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_38",
"text": " ParaRules Tafjord et al. (2021) is a version of ProofWriter where the synthetically generated sentences in the theory are rewritten by crowdworkers to increase diversity and naturalness of the text. This lets us move beyond evaluating reasoning with templatic expressions, which is a key limitation of the other datasets. Each fact in ParaRules may be a combination of several sub-facts (see Fig. 1 for an example). The examples require proof depths of up to 555 and the label can be Proved, Disproved, or Unknown. We found some minor quality issues in ParaRules; we manually verified and fixed the first 500500500 examples of the test set (see Appendix D.2) and used this set for evaluation. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_39",
"text": " We now describe the results and compare Lambada and the baselines in detail. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_40",
"text": " The results are reported in Figure 2, (a)–(d).222Due to the low performance of SI on ProofWriter and PrOntoQA and its high number of LM calls (see Figure 7), we only compared Lambada against CoT for ParaRules. Lambada significantly outperforms the baselines, especially on ProofWriter-PUD which contains Unknown labels (44%percent4444\\% relative improvement compared to CoT and 56%percent5656\\% compared to SI on Depth-5), the higher depths of PrOntoQA (37%percent3737\\% relative improvement compared to CoT and 113%percent113113\\% compared to SI on Depth-5), and the ParaRules dataset (43%percent4343\\% relative improvement compared to CoT). These results overall show the merit of Lambada for logical reasoning. We highlight that the reasoning capacity of Lambada robustly generalizes to more naturalistic expressions, as demonstrated by the high accuracy on ParaRules, which is exactly the desired outcome of combining the strengths of an LM and a symbolic reasoning algorithm. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_41",
"text": " The results in Figure 2(a) reveal a shortcoming of the CoT approach in dealing with Unknown labels. That is, unlike the examples for which the label is Proved or Disproved, there is no natural chain of thought for the examples whose labels are Unknown. Nevertheless, the performance of CoT is competitive for the ProofWriter-PD dataset, and the accuracy does not diminish substantially with increasing depth. We investigate the reason for this behaviour of CoT in the next section. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_42",
"text": " To understand the reason behind the high accuracy of CoT on higher depths of ProofWriter-PD, we randomly selected 505050 examples from Depth-5 of the dataset where CoT predicted the label correctly, and manually verified if the proof chain is correct or not. For comparison, we also manually verified the proofs generated by Lambada following a similar procedure. The results are reported in Figure 2(e). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_43",
"text": " While Lambada mostly produces correct chains, CoT produces correct chains only for 28%percent2828\\% of the examples. We find that hallucination is the main source of error (48%percent4848\\% of the examples; see Appendix B.2 for other prominent failure modes). The hallucinated facts and rules mostly resulted in shortcuts to the correct answer. This hints at the possibility of spurious correlations in ProofWriter-PD that can be exploited by CoT (see Appendix B.2, Figure 10 for examples). This result is consistent with previous work showing that when LMs are asked to solve logical reasoning end-to-end, they rely on spurious correlations Zhang et al. (2022b). Note that for modular approaches like SI and Lambada, the intermediate modules are impervious to the spurious correlations between the input and the label and do not suffer from this issue. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_44",
"text": " As previously explained, SI is based on forward chaining and its selection module requires a combinatorial search to find the right subset of facts and rules (see Appendix C), and the search space becomes progressively larger in each iteration of the algorithm as new inferences are added to the theory. To verify whether the increase in the search space makes forward chaining progressively harder, we measured the success rate of the k𝑘k-th inference of SI for different values of k𝑘k on Depth-5 of PrOntoQA (see Appendix B.3 for details). From the results in Figure 3, we can see that the success rate indeed decreases in the later inferences of the model, where the size of the input theory is larger and therefore a larger space needs to be searched to find the right combination of facts and rules. Note that none of the components in Lambada require selecting a subset, hence no combinatorial search is required (see Appendix C for more details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_45",
"text": " SI also suffers from inferring redundant facts. Figure 4 reports the number of unique inferences from SI for the examples in ProofWriter-PD (Depth-5) where SI incorrectly predicted Unknown (i.e., examples where a proof exists but SI failed to find it). The result shows that SI inferences contained no redundant facts only 29%percent2929\\% of the time; in 7%percent77\\% of the cases, all 555 inferred facts were identical, and in another 10%percent1010\\%, only two unique inferences were made. This shows that SI, and maybe more generally forward-chaining approaches, suffer from redundant inference. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_46",
"text": " SI also over-predicts Disproved in the binary case and Unknown in the three-way classification case (see Appendix B.4), performing even worse than the majority class for Depth-5 of PrOntoQA which has more Proved labels than Disproved. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_47",
"text": " These results, together with Figure 2, show that backward chaining (which is the backbone of reasoning in Lambada) is a better choice compared to forward chaining (the backbone in SI). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_48",
"text": " Our results may raise the question of whether it is enough to directly incorporate the steps of backward chaining into CoT prompts, or if modularity (as in Lambada) is also needed. To answer this question, we experiment with a backward version of CoT where the proofs are written in the backward direction from the goal to the premises. The label accuracies are presented in Figure 5(a)–(b) for ProofWriter-PUD and ProofWriter-PD, and their proof accuracy on ProofWriter-PD (Depth-5) in Figure 5(c). The label accuracy of forward and backward CoT are comparable, but forward CoT leads to better performance on PUD and backward CoT leads to better performance on PD. For proof accuracy, however, we see a clear difference between the two versions where backward CoT produces substantially lower quality proofs compared to forward chaining. This result is consistent with the observations of Gontier et al. (2020) for finetuned LMs. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_49",
"text": " The above results show that a modular formulation (as in Lambada) is key to successful logical reasoning and simply providing CoT in the backward direction does not suffice. We note, however, that future work can use the traces of our model to finetune (smaller) language models (e.g., Zelikman et al. 2022), or use the traces as training data in future language models to improve their performance with CoT prompting. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_50",
"text": " Taking the label and proof accuracy results together, there is also a potential that backward CoT models are more heavily relying on spurious correlations for the PD case where backward CoT outperformed CoT, as backward CoT achieves a similar label accuracy as forward CoT but with a much lower proof accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_51",
"text": " In Figure 1, we show the search trace created by Lambada for an example from ParaRules, where the answer was predicted correctly. From the figure, one can see how backward chaining helps Lambada effectively search and create the reasoning chain and how the LM helps fact checking, rule selection, goal decomposition, and sign agreement checking. In Appendix B.1, we include an example that has a much larger search trace. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_52",
"text": " To understand which components in Lambada are responsible for the failure cases, we computed the individual accuracy of the four modules described in Section 3. For this purpose, we created four datasets from the validation set of ProofWriter, each measuring only the performance of one module in isolation (see Appendix D.1 for details). ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_53",
"text": " Based on the results of the PaLM 540B model in Figure 6, Rule Selection is the lowest performing module followed by Goal Decomposition. It is possible that the Rule Selection module (partially) fails for some examples but Lambada still arrives at the correct conclusion and proof (e.g., if in Figure 1 the third call to Rule Selection only returned Rule5). For Fact Check, when we allow the model to only select one fact, the accuracy is 0.940.940.94 but when we allow the model to select two facts, the accuracy is near perfect. The Sign Agreement module also shows near-perfect accuracy. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_54",
"text": " We repeat the experiment from Section 5.6 with PaLM 62B and 8B to examine the effect of LM scale on Lambada. According to the results in Figure 6, when we use PaLM 62B, the performance of the Goal Decomposition and Sign Agreement modules remain comparable, but the performance for the Fact Check and Rule Selection modules drop substantially. Unlike the first two modules, the second two rely on a one-to-many comparison between the goal and each of the facts/rules which may require a larger model capacity. Moreover, we observe that in PaLM 8B, the accuracy for all components drops significantly, in some cases becoming close to random prediction. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_55",
"text": " We argue that the extent to which the higher-level reasoning algorithm breaks the problem into sub-problems should be dependent on the scale and power of the base LMs. If smaller LMs are used, then one may need finer-grained problem decomposition (e.g., further decomposing the one-to-many comparisons in the selection module). And as LMs become larger and stronger in the future, one could rely on them to solve problems with a coarser-grained decomposition of the problem. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_56",
"text": " Another advantage of Lambada is its efficiency compared to other approaches that require multiple LM inference calls per example such as SI. In Figure 7, we compare the average number of LM calls per example, for different depths of ProofWriter-PUD. Lambada requires much fewer calls compared to SI, especially at higher depths: for Depth-1, Lambada requires 3.8x fewer calls whereas for Depth-5 it requires 11.8x fewer calls. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_57",
"text": " To analyze the lexical sensitivity of Lambada, we modified the test set of ProofWriter-PUD by replacing various lexical items (names, adjectives, and verbs) with novel tokens and the rule templates with novel ones. We then compared the performance of Lambada on the original and the modified test sets using the same few-shot examples. The details of the modifications are in Appendix B.5. As can be seen in Figure 8, the performance of Lambada remains almost unchanged, demonstrating robustness to lexical and templatic variations. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
},
{
"id": "2212.13894_all_58",
"text": " We developed Lambada, an algorithm for deductive logical reasoning with natural language that combines the capacity of LMs to handle naturalistic text input with the backward chaining algorithm for robust symbolic reasoning. We showed that Lambada achieves significant improvements over competitive approaches on challenging benchmarks, both in terms of label accuracy (predicting if a statement can be proved or disproved based on a theory) and proof accuracy. Importantly, this improvement was also observed in a dataset that expresses the theory in more naturalistic expressions, clearly illustrating the benefit of combining an LM with reasoning modules. We also demonstrated the query efficiency and lexical robustness of Lambada. Although in this paper we only experiment with formal reasoning problems and datasets, we believe our key insight on the efficacy of backward, goal-directed reasoning with LMs has broader implications and can be adapted to other NLP tasks where multi-step inference is required. ",
"title": "LAMBADA: Backward Chaining for Automated Reasoning in Natural Language"
}
] |
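As an illustrative aside to the backward-chaining passages above: the sketch below is a minimal, generic propositional backward-chaining prover over symbolic facts and Horn-style rules. It is not the Lambada system itself (Lambada replaces each symbolic step with LM-based Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement modules); the function and data names here are illustrative assumptions.

```python
# Minimal backward-chaining sketch over propositional facts and Horn-style rules.
# Purely symbolic illustration; not the LM-based modules described in the paper.

def backward_chain(goal, facts, rules, depth=5):
    """Return True if `goal` is provable from `facts` using `rules`."""
    if depth < 0:                       # depth limit guards against cycles
        return False
    if goal in facts:                   # fact check: the goal is stated directly
        return True
    for premises, conclusion in rules:  # rule selection: conclusion matches the goal
        if conclusion == goal:
            # goal decomposition: prove every premise as a sub-goal
            if all(backward_chain(p, facts, rules, depth - 1) for p in premises):
                return True
    return False

facts = {"nice(Erin)"}
rules = [(["nice(Erin)"], "kind(Erin)"),
         (["kind(Erin)"], "happy(Erin)")]
print(backward_chain("happy(Erin)", facts, rules))  # True
```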
What is the distribution of images in the training and testing sets of the FashionMNIST dataset?
|
The training set contains 6,000 randomly-selected examples from each class [8].
|
[
8
] |
[
{
"id": "1708.07747_all_0",
"text": " The MNIST dataset comprising of 10-class handwritten digits, was first introduced by LeCun et al. (1998) in 1998. At that time one could not have foreseen the stellar rise of deep learning techniques and their performance. Despite the fact that today deep learning can do so much the simple MNIST dataset has become the most widely used testbed in deep learning, surpassing CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Deng et al., 2009) in its popularity via Google trends111https://trends.google.com/trends/explore?date=all&q=mnist,CIFAR,ImageNet. Despite its simplicity its usage does not seem to be decreasing despite calls for it in the deep learning community. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_1",
"text": " The reason MNIST is so popular has to do with its size, allowing deep learning researchers to quickly check and prototype their algorithms. This is also complemented by the fact that all machine learning libraries (e.g. scikit-learn) and deep learning frameworks (e.g. Tensorflow, Pytorch) provide helper functions and convenient examples that use MNIST out of the box. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_2",
"text": " Our aim with this work is to create a good benchmark dataset which has all the accessibility of MNIST, namely its small size, straightforward encoding and permissive license. We took the approach of sticking to the 101010 classes 70,0007000070,000 grayscale images in the size of 28×28282828\\times 28 as in the original MNIST. In fact, the only change one needs to use this dataset is to change the URL from where the MNIST dataset is fetched. Moreover, Fashion-MNIST poses a more challenging classification task than the simple MNIST digits data, whereas the latter has been trained to accuracies above 99.7% as reported in Wan et al. (2013); Ciregan et al. (2012). ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_3",
"text": " We also looked at the EMNIST dataset provided by Cohen et al. (2017), an extended version of MNIST that extends the number of classes by introducing uppercase and lowercase characters. However, to be able to use it seamlessly one needs to not only extend the deep learning framework’s MNIST helpers, but also change the underlying deep neural network to classify these extra classes. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_4",
"text": " Fashion-MNIST is based on the assortment on Zalando’s website222Zalando is the Europe’s largest online fashion platform. http://www.zalando.com. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762×10007621000762\\times 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_5",
"text": " We use the front look thumbnail images of 70,0007000070,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51×73517351\\times 73) are then fed into the following conversion pipeline, which is visualized in Figure 1. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_6",
"text": " 1. Converting the input to a PNG image. 2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5%percent55\\% of the maximum possible intensity in RGB space. 3. Resizing the longest edge of the image to 282828 by subsampling the pixels, i.e. some rows and columns are skipped over. 4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.01.01.0, with increasing effect near outlines. 5. Extending the shortest edge to 282828 and put the image to the center of the canvas. 6. Negating the intensities of the image. 7. Converting the image to 8-bit grayscale pixels. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_7",
"text": " For the class labels, we use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product contains only one silhouette code. Table 2 gives a summary of all class labels in Fashion-MNIST with examples for each class. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_8",
"text": " Finally, the dataset is divided into a training and a test set. The training set receives a randomly-selected 6,00060006,000 examples from each class. Images and labels are stored in the same file format as the MNIST data set, which is designed for storing vectors and multidimensional matrices. The result files are listed in Table 1. We sort examples by their labels while storing, resulting in smaller label files after compression comparing to the MNIST. It is also easier to retrieve examples with a certain class label. The data shuffling job is therefore left to the algorithm developer. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_9",
"text": " We provide some classification results in LABEL:tbl:benchmark to form a benchmark on this data set. All algorithms are repeated 555 times by shuffling the training data and the average accuracy on the test set is reported. The benchmark on the MNIST dataset is also included for a side-by-side comparison. A more comprehensive table with explanations on the algorithms can be found on https://github.com/zalandoresearch/fashion-mnist. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
},
{
"id": "1708.07747_all_10",
"text": " This paper introduced Fashion-MNIST, a fashion product images dataset intended to be a drop-in replacement of MNIST and whilst providing a more challenging alternative for benchmarking machine learning algorithm. The images in Fashion-MNIST are converted to a format that matches that of the MNIST dataset, making it immediately compatible with any machine learning package capable of working with the original MNIST dataset. ",
"title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms"
}
] |
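As an illustrative aside to the Fashion-MNIST passages above: the sketch below, assuming torchvision is installed, loads Fashion-MNIST as a drop-in MNIST replacement and tallies the training labels, which should reproduce the 6,000-examples-per-class split described in passage [8]; paths and variable names are illustrative.

```python
# Sketch: load Fashion-MNIST and inspect the per-class training distribution.
from collections import Counter
from torchvision import datasets

train = datasets.FashionMNIST(root="data", train=True, download=True)
test = datasets.FashionMNIST(root="data", train=False, download=True)

print(len(train), len(test))                   # 60000 training / 10000 test images
print(Counter(int(t) for t in train.targets))  # 6,000 examples for each of the 10 classes
```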
How does the size of a large neural network for NMT affect memory?
|
A large neural network for NMT generalizes well to very long word sequences, so it does not have to explicitly store gigantic phrase tables and language models as standard MT does; as a result, NMT has a small memory footprint [0].
|
[
0
] |
[
{
"id": "1508.04025_all_0",
"text": " Neural Machine Translation (NMT) achieved state-of-the-art performances in large-scale translation tasks such as from English to French (Luong et al., 2015) and English to German (Jean et al., 2015). NMT is appealing since it requires minimal domain knowledge and is conceptually simple. The model by ?) reads through all the source words until the end-of-sentence symbol <<eos>> is reached. It then starts emitting one target word at a time, as illustrated in Figure 1. NMT is often a large neural network that is trained in an end-to-end fashion and has the ability to generalize well to very long word sequences. This means the model does not have to explicitly store gigantic phrase tables and language models as in the case of standard MT; hence, NMT has a small memory footprint. Lastly, implementing NMT decoders is easy unlike the highly intricate decoders in standard MT (Koehn et al., 2003). ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_1",
"text": " In parallel, the concept of “attention” has gained popularity recently in training neural networks, allowing models to learn alignments between different modalities, e.g., between image objects and agent actions in the dynamic control problem (Mnih et al., 2014), between speech frames and text in the speech recognition task (jan14), or between visual features of a picture and its text description in the image caption generation task (Xu et al., 2015). In the context of NMT, ?) has successfully applied such attentional mechanism to jointly translate and align words. To the best of our knowledge, there has not been any other work exploring the use of attention-based architectures for NMT. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_2",
"text": " In this work, we design, with simplicity and effectiveness in mind, two novel types of attention-based models: a global approach in which all source words are attended and a local one whereby only a subset of source words are considered at a time. The former approach resembles the model of (Bahdanau et al., 2015) but is simpler architecturally. The latter can be viewed as an interesting blend between the hard and soft attention models proposed in (Xu et al., 2015): it is computationally less expensive than the global model or the soft attention; at the same time, unlike the hard attention, the local attention is differentiable almost everywhere, making it easier to implement and train.222There is a recent work by ?), which is very similar to our local attention and applied to the image generation task. However, as we detail later, our model is much simpler and can achieve good performance for NMT. Besides, we also examine various alignment functions for our attention-based models. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_3",
"text": " Experimentally, we demonstrate that both of our approaches are effective in the WMT translation tasks between English and German in both directions. Our attentional models yield a boost of up to 5.0 BLEU over non-attentional systems which already incorporate known techniques such as dropout. For English to German translation, we achieve new state-of-the-art (SOTA) results for both WMT’14 and WMT’15, outperforming previous SOTA systems, backed by NMT models and n𝑛n-gram LM rerankers, by more than 1.0 BLEU. We conduct extensive analysis to evaluate our models in terms of learning, the ability to handle long sentences, choices of attentional architectures, alignment quality, and translation outputs. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_4",
"text": " A neural machine translation system is a neural network that directly models the conditional probability p(y|x)𝑝conditional𝑦𝑥p(y|x) of translating a source sentence, x1,…,xnsubscript𝑥1…subscript𝑥𝑛x_{1},\\ldots,x_{n}, to a target sentence, y1,…,ymsubscript𝑦1…subscript𝑦𝑚y_{1},\\ldots,y_{m}.333All sentences are assumed to terminate with a special “end-of-sentence” token <<eos>>. A basic form of NMT consists of two components: (a) an encoder which computes a representation 𝒔𝒔s for each source sentence and (b) a decoder which generates one target word at a time and hence decomposes the conditional probability as: logp(y|x)=∑j=1mlogp(yj|y<j,𝒔)𝑝conditional𝑦𝑥superscriptsubscript𝑗1𝑚𝑝conditionalsubscript𝑦𝑗subscript𝑦absent𝑗𝒔\\log p(y|x)=\\sum_{j=1}^{m}\\nolimits\\log p\\left(y_{j}|y_{<j},\\mbox{\\boldmath{$s$}}\\right) (1) ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_5",
"text": " A natural choice to model such a decomposition in the decoder is to use a recurrent neural network (RNN) architecture, which most of the recent NMT work such as (Kalchbrenner and Blunsom, 2013, Sutskever et al., 2014, Cho et al., 2014, Bahdanau et al., 2015, Luong et al., 2015, Jean et al., 2015) have in common. They, however, differ in terms of which RNN architectures are used for the decoder and how the encoder computes the source sentence representation 𝒔𝒔s. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_6",
"text": " ?) used an RNN with the standard hidden unit for the decoder and a convolutional neural network for encoding the source sentence representation. On the other hand, both ?) and ?) stacked multiple layers of an RNN with a Long Short-Term Memory (LSTM) hidden unit for both the encoder and the decoder. ?), ?), and ?) all adopted a different version of the RNN with an LSTM-inspired hidden unit, the gated recurrent unit (GRU), for both components.444They all used a single RNN layer except for the latter two works which utilized a bidirectional RNN for the encoder. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_7",
"text": " In more detail, one can parameterize the probability of decoding each word yjsubscript𝑦𝑗y_{j} as: p(yj|y<j,𝒔)=softmax(g(𝒉j))𝑝conditionalsubscript𝑦𝑗subscript𝑦absent𝑗𝒔softmax𝑔subscript𝒉𝑗p\\left(y_{j}|y_{<j},\\mbox{\\boldmath{$s$}}\\right)=\\operatorname{softmax}\\left(g\\left(\\mbox{\\boldmath{$h$}}_{j}\\right)\\right) (2) with g𝑔g being the transformation function that outputs a vocabulary-sized vector.555One can provide g𝑔g with other inputs such as the currently predicted word yjsubscript𝑦𝑗y_{j} as in (Bahdanau et al., 2015). Here, 𝒉jsubscript𝒉𝑗\\mbox{\\boldmath{$h$}}_{j} is the RNN hidden unit, abstractly computed as: 𝒉j=f(𝒉j−1,𝒔),subscript𝒉𝑗𝑓subscript𝒉𝑗1𝒔\\mbox{\\boldmath{$h$}}_{j}=f(\\mbox{\\boldmath{$h$}}_{j-1},\\mbox{\\boldmath{$s$}}), (3) where f𝑓f computes the current hidden state given the previous hidden state and can be either a vanilla RNN unit, a GRU, or an LSTM unit. In (Kalchbrenner and Blunsom, 2013, Sutskever et al., 2014, Cho et al., 2014, Luong et al., 2015), the source representation 𝒔𝒔s is only used once to initialize the decoder hidden state. On the other hand, in (Bahdanau et al., 2015, Jean et al., 2015) and this work, 𝒔𝒔s, in fact, implies a set of source hidden states which are consulted throughout the entire course of the translation process. Such an approach is referred to as an attention mechanism, which we will discuss next. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_8",
"text": " In this work, following (Sutskever et al., 2014, Luong et al., 2015), we use the stacking LSTM architecture for our NMT systems, as illustrated in Figure 1. We use the LSTM unit defined in (Zaremba et al., 2015). Our training objective is formulated as follows: Jt=∑(x,y)∈𝔻−logp(y|x)subscript𝐽𝑡subscript𝑥𝑦𝔻𝑝conditional𝑦𝑥J_{t}=\\sum_{(x,y)\\in\\mathbb{D}}\\nolimits-\\log p(y|x) (4) with 𝔻𝔻\\mathbb{D} being our parallel training corpus. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_9",
"text": " Our various attention-based models are classifed into two broad categories, global and local. These classes differ in terms of whether the “attention” is placed on all source positions or on only a few source positions. We illustrate these two model types in Figure 2 and 3 respectively. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_10",
"text": " Common to these two types of models is the fact that at each time step t𝑡t in the decoding phase, both approaches first take as input the hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} at the top layer of a stacking LSTM. The goal is then to derive a context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} that captures relevant source-side information to help predict the current target word ytsubscript𝑦𝑡y_{t}. While these models differ in how the context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} is derived, they share the same subsequent steps. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_11",
"text": " Specifically, given the target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} and the source-side context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}, we employ a simple concatenation layer to combine the information from both vectors to produce an attentional hidden state as follows: 𝒉~t=tanh(𝑾𝒄(𝒄t;𝒉t))subscriptbold-~𝒉𝑡subscript𝑾𝒄subscript𝒄𝑡subscript𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t}=\\tanh(\\mbox{\\boldmath{$W_{c}$}}(\\mbox{\\boldmath{$c$}}_{t};\\mbox{\\boldmath{$h$}}_{t})) (5) ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_12",
"text": " The attentional vector 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} is then fed through the softmax layer to produce the predictive distribution formulated as: p(yt|y<t,x)=softmax(𝑾𝒔𝒉~t)𝑝conditionalsubscript𝑦𝑡subscript𝑦absent𝑡𝑥softmaxsubscript𝑾𝒔𝒉~𝑡p(y_{t}|y_{<t},x)=\\operatorname{softmax}(\\mbox{\\boldmath{$W_{s}$}}\\mbox{\\boldmath{$\\tilde{h}$}}_{t}) (6) ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_13",
"text": " We now detail how each model type computes the source-side context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_14",
"text": " The idea of a global attentional model is to consider all the hidden states of the encoder when deriving the context vector ctsubscript𝑐𝑡c_{t}. In this model type, a variable-length alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t}, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} with each source hidden state 𝒉¯ssubscriptbold-¯𝒉𝑠\\mbox{\\boldmath{$\\bar{h}$}}_{s}: 𝒂t(s)subscript𝒂𝑡𝑠\\displaystyle\\mbox{\\boldmath{$a$}}_{t}(s) =align(𝒉t,𝒉¯s)absentalignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠\\displaystyle=\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s}) (7) =exp(score(𝒉t,𝒉¯s))∑s′exp(score(𝒉t,𝒉¯s′))absentscoresubscript𝒉𝑡subscriptbold-¯𝒉𝑠subscriptsuperscript𝑠′scoresubscript𝒉𝑡subscriptbold-¯𝒉superscript𝑠′\\displaystyle=\\frac{\\exp\\left(\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\right)}{\\sum_{s^{\\prime}}\\exp\\left(\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s^{\\prime}})\\right)} Here, scorescore\\operatorname{score} is referred as a content-based function for which we consider three different alternatives: score(𝒉t,𝒉¯s)={𝒉t⊤𝒉¯sdot𝒉t⊤𝑾𝒂𝒉¯sgeneral𝒗a⊤tanh(𝑾𝒂(𝒉t;𝒉¯s))concatscoresubscript𝒉𝑡subscriptbold-¯𝒉𝑠casessuperscriptsubscript𝒉𝑡topsubscriptbold-¯𝒉𝑠dotsuperscriptsubscript𝒉𝑡topsubscript𝑾𝒂𝒉¯𝑠generalsuperscriptsubscript𝒗𝑎topsubscript𝑾𝒂subscript𝒉𝑡subscriptbold-¯𝒉𝑠concat\\operatorname{score}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\!=\\!\\begin{cases}\\mbox{\\boldmath{$h$}}_{t}^{\\top}\\mbox{\\boldmath{$\\bar{h}$}}_{s}&\\mbox{{\\it dot}}\\\\ \\mbox{\\boldmath{$h$}}_{t}^{\\top}\\mbox{\\boldmath{$W_{a}$}}\\mbox{\\boldmath{$\\bar{h}$}}_{s}&\\mbox{{\\it general}}\\\\ \\mbox{\\boldmath{$v$}}_{a}^{\\top}\\tanh\\left(\\mbox{\\boldmath{$W_{a}$}}(\\mbox{\\boldmath{$h$}}_{t};\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\right)&\\mbox{{\\it concat}}\\end{cases} ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_15",
"text": " Besides, in our early attempts to build attention-based models, we use a location-based function in which the alignment scores are computed from solely the target hidden state 𝒉tsubscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t} as follows: 𝒂t=softmax(𝑾𝒂𝒉t) locationsubscript𝒂𝑡softmaxsubscript𝑾𝒂𝒉𝑡 location\\mbox{\\boldmath{$a$}}_{t}=\\operatorname{softmax}(\\mbox{\\boldmath{$W_{a}$}}\\mbox{\\boldmath{$h$}}_{t})\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{ }\\mbox{{\\it location}} (8) Given the alignment vector as weights, the context vector ctsubscript𝑐𝑡c_{t} is computed as the weighted average over all the source hidden states.666Eq. (8) implies that all alignment vectors 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} are of the same length. For short sentences, we only use the top part of 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} and for long sentences, we ignore words near the end. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_16",
"text": " Comparison to (Bahdanau et al., 2015) – While our global attention approach is similar in spirit to the model proposed by ?), there are several key differences which reflect how we have both simplified and generalized from the original model. First, we simply use hidden states at the top LSTM layers in both the encoder and decoder as illustrated in Figure 2. ?), on the other hand, use the concatenation of the forward and backward source hidden states in the bi-directional encoder and target hidden states in their non-stacking uni-directional decoder. Second, our computation path is simpler; we go from 𝒉t→𝒂t→𝒄t→𝒉~t→subscript𝒉𝑡subscript𝒂𝑡→subscript𝒄𝑡→subscriptbold-~𝒉𝑡\\mbox{\\boldmath{$h$}}_{t}\\rightarrow\\mbox{\\boldmath{$a$}}_{t}\\rightarrow\\mbox{\\boldmath{$c$}}_{t}\\rightarrow\\mbox{\\boldmath{$\\tilde{h}$}}_{t} then make a prediction as detailed in Eq. (5), Eq. (6), and Figure 2. On the other hand, at any time t𝑡t, ?) build from the previous hidden state 𝒉t−1→𝒂t→𝒄t→𝒉t→subscript𝒉𝑡1subscript𝒂𝑡→subscript𝒄𝑡→subscript𝒉𝑡\\mbox{\\boldmath{$h$}}_{t-1}\\rightarrow\\mbox{\\boldmath{$a$}}_{t}\\rightarrow\\mbox{\\boldmath{$c$}}_{t}\\rightarrow\\mbox{\\boldmath{$h$}}_{t}, which, in turn, goes through a deep-output and a maxout layer before making predictions.777We will refer to this difference again in Section 3.3. Lastly, ?) only experimented with one alignment function, the concat product; whereas we show later that the other alternatives are better. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_17",
"text": " The global attention has a drawback that it has to attend to all words on the source side for each target word, which is expensive and can potentially render it impractical to translate longer sequences, e.g., paragraphs or documents. To address this deficiency, we propose a local attentional mechanism that chooses to focus only on a small subset of the source positions per target word. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_18",
"text": " This model takes inspiration from the tradeoff between the soft and hard attentional models proposed by ?) to tackle the image caption generation task. In their work, soft attention refers to the global attention approach in which weights are placed “softly” over all patches in the source image. The hard attention, on the other hand, selects one patch of the image to attend to at a time. While less expensive at inference time, the hard attention model is non-differentiable and requires more complicated techniques such as variance reduction or reinforcement learning to train. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_19",
"text": " Our local attention mechanism selectively focuses on a small window of context and is differentiable. This approach has an advantage of avoiding the expensive computation incurred in the soft attention and at the same time, is easier to train than the hard attention approach. In concrete details, the model first generates an aligned position ptsubscript𝑝𝑡p_{t} for each target word at time t𝑡t. The context vector 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t} is then derived as a weighted average over the set of source hidden states within the window (pt−D,pt+D)subscript𝑝𝑡𝐷subscript𝑝𝑡𝐷(p_{t}-D,p_{t}+D); D𝐷D is empirically selected.888If the window crosses the sentence boundaries, we simply ignore the outside part and consider words in the window. Unlike the global approach, the local alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is now fixed-dimensional, i.e., ∈ℝ2D+1absentsuperscriptℝ2𝐷1\\in\\mathbb{R}^{2D+1}. We consider two variants of the model as below. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_20",
"text": " Monotonic alignment (local-m) – we simply set pt=tsubscript𝑝𝑡𝑡p_{t}\\!=\\!t assuming that source and target sequences are roughly monotonically aligned. The alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is defined according to Eq. (7).999local-m is the same as the global model except that the vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is fixed-length and shorter. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_21",
"text": " Predictive alignment (local-p) – instead of assuming monotonic alignments, our model predicts an aligned position as follows: pt=S⋅sigmoid(𝒗p⊤tanh(𝑾𝒑𝒉t)),subscript𝑝𝑡⋅𝑆sigmoidsuperscriptsubscript𝒗𝑝topsubscript𝑾𝒑𝒉𝑡p_{t}=S\\cdot\\operatorname{sigmoid}(\\mbox{\\boldmath{$v$}}_{p}^{\\top}\\tanh(\\mbox{\\boldmath{$W_{p}$}}\\mbox{\\boldmath{$h$}}_{t})), (9) 𝑾𝒑subscript𝑾𝒑W_{p} and 𝒗psubscript𝒗𝑝\\mbox{\\boldmath{$v$}}_{p} are the model parameters which will be learned to predict positions. S𝑆S is the source sentence length. As a result of sigmoidsigmoid\\operatorname{sigmoid}, pt∈(0,S)subscript𝑝𝑡0𝑆p_{t}\\in(0,S). To favor alignment points near ptsubscript𝑝𝑡p_{t}, we place a Gaussian distribution centered around ptsubscript𝑝𝑡p_{t} . Specifically, our alignment weights are now defined as: 𝒂t(s)=align(𝒉t,𝒉¯s)exp(−(s−pt)22σ2)subscript𝒂𝑡𝑠alignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠superscript𝑠subscript𝑝𝑡22superscript𝜎2\\mbox{\\boldmath{$a$}}_{t}(s)=\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s})\\exp\\left(-\\frac{(s-p_{t})^{2}}{2\\sigma^{2}}\\right) (10) We use the same alignalign\\operatorname{align} function as in Eq. (7) and the standard deviation is empirically set as σ=D2𝜎𝐷2\\sigma\\!=\\!\\frac{D}{2}. Note that ptsubscript𝑝𝑡p_{t} is a real nummber; whereas s𝑠s is an integer within the window centered at ptsubscript𝑝𝑡p_{t}.101010local-p is similar to the local-m model except that we dynamically compute ptsubscript𝑝𝑡p_{t} and use a truncated Gaussian distribution to modify the original alignment weights align(𝒉t,𝒉¯s)alignsubscript𝒉𝑡subscriptbold-¯𝒉𝑠\\operatorname{align}(\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$\\bar{h}$}}_{s}) as shown in Eq. (10). By utilizing ptsubscript𝑝𝑡p_{t} to derive 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t}, we can compute backprop gradients for 𝑾𝒑subscript𝑾𝒑W_{p} and 𝒗psubscript𝒗𝑝\\mbox{\\boldmath{$v$}}_{p}. This model is differentiable almost everywhere. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_22",
"text": " Comparison to (Gregor et al., 2015) – have proposed a selective attention mechanism, very similar to our local attention, for the image generation task. Their approach allows the model to select an image patch of varying location and zoom. We, instead, use the same “zoom” for all target positions, which greatly simplifies the formulation and still achieves good performance. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_23",
"text": " In our proposed global and local approaches, the attentional decisions are made independently, which is suboptimal. Whereas, in standard MT, a coverage set is often maintained during the translation process to keep track of which source words have been translated. Likewise, in attentional NMTs, alignment decisions should be made jointly taking into account past alignment information. To address that, we propose an input-feeding approach in which attentional vectors 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} are concatenated with inputs at the next time steps as illustrated in Figure 4.111111If n𝑛n is the number of LSTM cells, the input size of the first LSTM layer is 2n2𝑛2n; those of subsequent layers are n𝑛n. The effects of having such connections are two-fold: (a) we hope to make the model fully aware of previous alignment choices and (b) we create a very deep network spanning both horizontally and vertically. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_24",
"text": " Comparison to other work – ?) use context vectors, similar to our 𝒄tsubscript𝒄𝑡\\mbox{\\boldmath{$c$}}_{t}, in building subsequent hidden states, which can also achieve the “coverage” effect. However, there has not been any analysis of whether such connections are useful as done in this work. Also, our approach is more general; as illustrated in Figure 4, it can be applied to general stacking recurrent architectures, including non-attentional models. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_25",
"text": " ?) propose a doubly attentional approach with an additional constraint added to the training objective to make sure the model pays equal attention to all parts of the image during the caption generation process. Such a constraint can also be useful to capture the coverage set effect in NMT that we mentioned earlier. However, we chose to use the input-feeding approach since it provides flexibility for the model to decide on any attentional constraints it deems suitable. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_26",
"text": " We evaluate the effectiveness of our models on the WMT translation tasks between English and German in both directions. newstest2013 (3000 sentences) is used as a development set to select our hyperparameters. Translation performances are reported in case-sensitive BLEU (Papineni et al., 2002) on newstest2014 (2737 sentences) and newstest2015 (2169 sentences). Following (Luong et al., 2015), we report translation quality using two types of BLEU: (a) tokenized121212All texts are tokenized with tokenizer.perl and BLEU scores are computed with multi-bleu.perl. BLEU to be comparable with existing NMT work and (b) NIST131313With the mteval-v13a script as per WMT guideline. BLEU to be comparable with WMT results. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_27",
"text": " All our models are trained on the WMT’14 training data consisting of 4.5M sentences pairs (116M English words, 110M German words). Similar to (Jean et al., 2015), we limit our vocabularies to be the top 50K most frequent words for both languages. Words not in these shortlisted vocabularies are converted into a universal token <<unk>>. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_28",
"text": " When training our NMT systems, following (Bahdanau et al., 2015, Jean et al., 2015), we filter out sentence pairs whose lengths exceed 50 words and shuffle mini-batches as we proceed. Our stacking LSTM models have 4 layers, each with 1000 cells, and 1000-dimensional embeddings. We follow (Sutskever et al., 2014, Luong et al., 2015) in training NMT with similar settings: (a) our parameters are uniformly initialized in (−0.1,0.1)0.10.1(-0.1,0.1), (b) we train for 10 epochs using plain SGD, (c) a simple learning rate schedule is employed – we start with a learning rate of 1; after 5 epochs, we begin to halve the learning rate every epoch, (d) our mini-batch size is 128, and (e) the normalized gradient is rescaled whenever its norm exceeds 5. Additionally, we also use dropout with probability 0.20.20.2 for our LSTMs as suggested by (Zaremba et al., 2015). For dropout models, we train for 12 epochs and start halving the learning rate after 8 epochs. For local attention models, we empirically set the window size D=10𝐷10D=10. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_29",
"text": " Our code is implemented in MATLAB. When running on a single GPU device Tesla K40, we achieve a speed of 1K target words per second. It takes 7–10 days to completely train a model. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_30",
"text": " We compare our NMT systems in the English-German task with various other systems. These include the winning system in WMT’14 (Buck et al., 2014), a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. For end-to-end NMT systems, to the best of our knowledge, (Jean et al., 2015) is the only work experimenting with this language pair and currently the SOTA system. We only present results for some of our attention models and will later analyze the rest in Section 5. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_31",
"text": " As shown in Table 1, we achieve progressive improvements when (a) reversing the source sentence, +1.31.31.3 BLEU, as proposed in (Sutskever et al., 2014) and (b) using dropout, +1.41.41.4 BLEU. On top of that, (c) the global attention approach gives a significant boost of +2.82.82.8 BLEU, making our model slightly better than the base attentional system of ?) (row RNNSearch). When (d) using the input-feeding approach, we seize another notable gain of +1.31.31.3 BLEU and outperform their system. The local attention model with predictive alignments (row local-p) proves to be even better, giving us a further improvement of +0.90.90.9 BLEU on top of the global attention model. It is interesting to observe the trend previously reported in (Luong et al., 2015) that perplexity strongly correlates with translation quality. In total, we achieve a significant gain of 5.0 BLEU points over the non-attentional baseline, which already includes known techniques such as source reversing and dropout. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_32",
"text": " The unknown replacement technique proposed in (Luong et al., 2015, Jean et al., 2015) yields another nice gain of +1.91.91.9 BLEU, demonstrating that our attentional models do learn useful alignments for unknown works. Finally, by ensembling 8 different models of various settings, e.g., using different attention approaches, with and without dropout etc., we were able to achieve a new SOTA result of 23.023.023.0{} BLEU, outperforming the existing best system (Jean et al., 2015) by +1.41.41.4 BLEU. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_33",
"text": " Latest results in WMT’15 – despite the fact that our models were trained on WMT’14 with slightly less data, we test them on newstest2015 to demonstrate that they can generalize well to different test sets. As shown in Table 2, our best system establishes a new SOTA performance of 25.925.925.9{} BLEU, outperforming the existing best system backed by NMT and a 5-gram LM reranker by +1.01.01.0 BLEU. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_34",
"text": " We carry out a similar set of experiments for the WMT’15 translation task from German to English. While our systems have not yet matched the performance of the SOTA system, we nevertheless show the effectiveness of our approaches with large and progressive gains in terms of BLEU as illustrated in Table 3. The attentional mechanism gives us +2.22.22.2 BLEU gain and on top of that, we obtain another boost of up to +1.01.01.0 BLEU from the input-feeding approach. Using a better alignment function, the content-based dot product one, together with dropout yields another gain of +2.72.72.7 BLEU. Lastly, when applying the unknown word replacement technique, we seize an additional +2.12.12.1 BLEU, demonstrating the usefulness of attention in aligning rare words. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_35",
"text": " We conduct extensive analysis to better understand our models in terms of learning, the ability to handle long sentences, choices of attentional architectures, and alignment quality. All results reported here are on English-German newstest2014. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_36",
"text": " We compare models built on top of one another as listed in Table 1. It is pleasant to observe in Figure 5 a clear separation between non-attentional and attentional models. The input-feeding approach and the local attention model also demonstrate their abilities in driving the test costs lower. The non-attentional model with dropout (the blue + curve) learns slower than other non-dropout models, but as time goes by, it becomes more robust in terms of minimizing test errors. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_37",
"text": " We follow (Bahdanau et al., 2015) to group sentences of similar lengths together and compute a BLEU score per group. Figure 6 shows that our attentional models are more effective than the non-attentional one in handling long sentences: the quality does not degrade as sentences become longer. Our best model (the blue + curve) outperforms all other systems in all length buckets. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_38",
"text": " We examine different attention models (global, local-m, local-p) and different alignment functions (location, dot, general, concat) as described in Section 3. Due to limited resources, we cannot run all the possible combinations. However, results in Table 4 do give us some idea about different choices. The location-based function does not learn good alignments: the global (location) model can only obtain a small gain when performing unknown word replacement compared to using other alignment functions.141414There is a subtle difference in how we retrieve alignments for the different alignment functions. At time step t𝑡t in which we receive yt−1subscript𝑦𝑡1y_{t-1} as input and then compute 𝒉t,𝒂t,𝒄tsubscript𝒉𝑡subscript𝒂𝑡subscript𝒄𝑡\\mbox{\\boldmath{$h$}}_{t},\\mbox{\\boldmath{$a$}}_{t},\\mbox{\\boldmath{$c$}}_{t}, and 𝒉~tsubscriptbold-~𝒉𝑡\\mbox{\\boldmath{$\\tilde{h}$}}_{t} before predicting ytsubscript𝑦𝑡y_{t}, the alignment vector 𝒂tsubscript𝒂𝑡\\mbox{\\boldmath{$a$}}_{t} is used as alignment weights for (a) the predicted word ytsubscript𝑦𝑡y_{t} in the location-based alignment functions and (b) the input word yt−1subscript𝑦𝑡1y_{t-1} in the content-based functions. For content-based functions, our implementation concat does not yield good performances and more analysis should be done to understand the reason.151515With concat, the perplexities achieved by different models are 6.7 (global), 7.1 (local-m), and 7.1 (local-p). Such high perplexities could be due to the fact that we simplify the matrix 𝑾𝒂subscript𝑾𝒂W_{a} to set the part that corresponds to 𝒉¯ssubscriptbold-¯𝒉𝑠\\mbox{\\boldmath{$\\bar{h}$}}_{s} to identity. It is interesting to observe that dot works well for the global attention and general is better for the local attention. Among the different models, the local attention model with predictive alignments (local-p) is best, both in terms of perplexities and BLEU. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_39",
"text": " A by-product of attentional models are word alignments. While (Bahdanau et al., 2015) visualized alignments for some sample sentences and observed gains in translation quality as an indication of a working attention model, no work has assessed the alignments learned as a whole. In contrast, we set out to evaluate the alignment quality using the alignment error rate (AER) metric. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_40",
"text": " Given the gold alignment data provided by RWTH for 508 English-German Europarl sentences, we “force” decode our attentional models to produce translations that match the references. We extract only one-to-one alignments by selecting the source word with the highest alignment weight per target word. Nevertheless, as shown in Table 6, we were able to achieve AER scores comparable to the one-to-many alignments obtained by the Berkeley aligner (Liang et al., 2006).161616We concatenate the 508 sentence pairs with 1M sentence pairs from WMT and run the Berkeley aligner. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_41",
"text": " We also found that the alignments produced by local attention models achieve lower AERs than those of the global one. The AER obtained by the ensemble, while good, is not better than the local-m AER, suggesting the well-known observation that AER and translation scores are not well correlated (Fraser and Marcu, 2007). We show some alignment visualizations in Appendix A. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_42",
"text": " We show in Table 5 sample translations in both directions. It it appealing to observe the effect of attentional models in correctly translating names such as “Miranda Kerr” and “Roger Dow”. Non-attentional models, while producing sensible names from a language model perspective, lack the direct connections from the source side to make correct translations. We also observed an interesting case in the second example, which requires translating the doubly-negated phrase, “not incompatible”. The attentional model correctly produces “nicht ……\\dots unvereinbar”; whereas the non-attentional model generates “nicht vereinbar”, meaning “not compatible”.171717The reference uses a more fancy translation of “incompatible”, which is “im Widerspruch zu etwas stehen”. Both models, however, failed to translate “passenger experience”. The attentional model also demonstrates its superiority in translating long sentences as in the last example. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_43",
"text": " In this paper, we propose two simple and effective attentional mechanisms for neural machine translation: the global approach which always looks at all source positions and the local one that only attends to a subset of source positions at a time. We test the effectiveness of our models in the WMT translation tasks between English and German in both directions. Our local attention yields large gains of up to 5.05.05.0{} BLEU over non-attentional models which already incorporate known techniques such as dropout. For the English to German translation direction, our ensemble model has established new state-of-the-art results for both WMT’14 and WMT’15, outperforming existing best systems, backed by NMT models and n𝑛n-gram LM rerankers, by more than 1.0 BLEU. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
},
{
"id": "1508.04025_all_44",
"text": " We have compared various alignment functions and shed light on which functions are best for which attentional models. Our analysis shows that attention-based NMT models are superior to non-attentional ones in many cases, for example in translating names and handling long sentences. ",
"title": "Effective Approaches to Attention-based Neural Machine Translation"
}
] |
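As an illustrative aside to the attention passages above: the sketch below is a minimal NumPy rendering of the three content-based alignment scores (dot, general, concat) and the resulting context vector from Eq. (7) of passage [14]; the dimensions, weight names, and random initialization are assumptions made for the example only, not the paper's trained parameters.

```python
# Sketch of Luong-style global attention scores (dot / general / concat).
import numpy as np

d = 4                                  # hidden size (assumed)
h_t = np.random.randn(d)               # current target hidden state
H_s = np.random.randn(6, d)            # source hidden states, one row per source position
W_a = np.random.randn(d, d)
W_cat = np.random.randn(d, 2 * d)
v_a = np.random.randn(d)

score_dot = H_s @ h_t                            # dot:     h_t^T h_bar_s
score_general = H_s @ (W_a.T @ h_t)              # general: h_t^T W_a h_bar_s
concat = np.concatenate([np.tile(h_t, (len(H_s), 1)), H_s], axis=1)
score_concat = np.tanh(concat @ W_cat.T) @ v_a   # concat:  v_a^T tanh(W_cat [h_t; h_bar_s])

a_t = np.exp(score_dot) / np.exp(score_dot).sum()  # alignment weights (softmax over source)
c_t = a_t @ H_s                                    # context vector: weighted average of H_s
```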
What is prior preservation loss?
|
Prior preservation loss supervises the model with its own samples so that the prior is kept during few-shot fine-tuning. The loss equation is presented in the evidential paragraph [21].
|
[
21
] |
[
{
"id": "2208.12242_all_0",
"text": " Can you imagine your own dog traveling around the world, or your favorite bag displayed in the most exclusive showroom in Paris? What about your parrot being the main character of an illustrated storybook? Rendering such imaginary scenes is a challenging task that requires synthesizing instances of specific subjects (e.g., objects, animals) in new contexts such that they naturally and seamlessly blend into the scene. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_1",
"text": " Recently developed large text-to-image models have shown unprecedented capabilities, by enabling high-quality and diverse synthesis of images based on a text prompt written in natural language (61, 54). One of the main advantages of such models is the strong semantic prior learned from a large collection of image-caption pairs. Such a prior learns, for instance, to bind the word “dog” with various instances of dogs that can appear in different poses and contexts in an image. While the synthesis capabilities of these models are unprecedented, they lack the ability to mimic the appearance of subjects in a given reference set, and synthesize novel renditions of the same subjects in different contexts. The main reason is that the expressiveness of their output domain is limited; even the most detailed textual description of an object may yield instances with different appearances. Furthermore, even models whose text embedding lies in a shared language-vision space cannot accurately reconstruct the appearance of given subjects but only create variations of the image content (Figure 2). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_2",
"text": " In this work, we present a new approach for “personalization” of text-to-image diffusion models (adapting them to user-specific image generation needs). Our goal is to expand the language-vision dictionary of the model such that it binds new words with specific subjects the user wants to generate. Once the new dictionary is embedded in the model, it can use these words to synthesize novel photorealistic images of the subject, contextualized in different scenes, while preserving their key identifying features. The effect is akin to a “magic photo booth”—once a few images of the subject are taken, the booth generates photos of the subject in different conditions and scenes, as guided by simple and intuitive text prompts (Figure 1). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_3",
"text": " More formally, given a few images of a subject (∼similar-to\\sim3-5), our objective is to implant the subject into the output domain of the model such that it can be synthesized with a unique identifier. To that end, we propose a technique to represent a given subject with rare token identifiers and fine-tune a pre-trained, diffusion-based text-to-image framework. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_4",
"text": " We fine-tune the text-to-image model with the input images and text prompts containing a unique identifier followed by the class name of the subject (e.g., “A (V) dog”). The latter enables the model to use its prior knowledge on the subject class while the class-specific instance is bound with the unique identifier. In order to prevent language drift (34, 40) that causes the model to associate the class name (e.g., “dog”) with the specific instance, we propose an autogenous, class-specific prior preservation loss, which leverages the semantic prior on the class that is embedded in the model, and encourages it to generate diverse instances of the same class as our subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_5",
"text": " We apply our approach to a myriad of text-based image generation applications including recontextualization of subjects, modification of their properties, original art renditions, and more, paving the way to a new stream of previously unassailable tasks. We highlight the contribution of each component in our method via ablation studies, and compare with alternative baselines and related work. We also conduct a user study to evaluate subject and prompt fidelity in our synthesized images, compared to alternative approaches. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_6",
"text": " To the best of our knowledge, ours is the first technique that tackles this new challenging problem of subject-driven generation, allowing users, from just a few casually captured images of a subject, synthesize novel renditions of the subject in different contexts while maintaining its distinctive features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_7",
"text": " To evaluate this new task, we also construct a new dataset that contains various subjects captured in different contexts, and propose a new evaluation protocol that measures the subject fidelity and prompt fidelity of the generated results. We make our dataset and evaluation protocol publicly available on the project webpage. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_8",
"text": " Image Composition. Image composition techniques (70, 13, 38) aim to clone a given subject into a new background such that the subject melds into the scene. To consider composition in novel poses, one may apply 3D reconstruction techniques (41, 6, 8, 68, 49) which usually works on rigid objects and require a larger number of views. Some drawbacks include scene integration (lighting, shadows, contact) and the inability to generate novel scenes. In contrast, our approach enable generation of subjects in novel poses and new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_9",
"text": " Text-to-Image Editing and Synthesis. Text-driven image manipulation has recently achieved significant progress using GANs (22, 9, 28, 29, 30) combined with image-text representations such as CLIP , yielding realistic manipulations using text (48, 21, 71, 2, 7, 43). These methods work well on structured scenarios (e.g. human face editing) and can struggle over diverse datasets where subjects are varied. Crowson et al. use VQ-GAN and train over more diverse data to alleviate this concern. Other works (4, 31) exploit the recent diffusion models (25, 63, 65, 25, 64, 58, 45, 66, 60, 62), which achieve state-of-the-art generation quality over highly diverse datasets, often surpassing GANs . While most works that require only text are limited to global editing (14, 33), Bar-Tal et al. proposed a text-based localized editing technique without using masks, showing impressive results. While most of these editing approaches allow modification of global properties or local editing of a given image, none enables generating novel renditions of a given subject in new contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_10",
"text": " There also exists work on text-to-image synthesis (16, 24, 67, 35, 36, 50, 51, 55, 74, 14, 19, 58, 27). Recent large text-to-image models such as Imagen , DALL-E2 , Parti , CogView2 and Stable Diffusion demonstrated unprecedented semantic generation. These models do not provide fine-grained control over a generated image and use text guidance only. Specifically, it is challenging or impossible to preserve the identity of a subject consistently across synthesized images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_11",
"text": " Controllable Generative Models. There are various approaches to control generative models, where some of them might prove to be viable directions for subject-driven prompt-guided image synthesis. Liu et al. propose a diffusion-based technique allowing for image variations guided by reference image or text. To overcome subject modification, several works (44, 3) assume a user-provided mask to restrict the modified area. Inversion (12, 15, 54) can be used to preserve a subject while modifying context. Prompt-to-prompt allows for local and global editing without an input mask. These methods fall short of identity-preserving novel sample generation of a subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_12",
"text": " In the context of GANs, Pivotal Tuning allows for real image editing by finetuning the model with an inverted latent code anchor, and Nitzan et al. extended this work to GAN finetuning on faces to train a personalized prior, which requires around 100 images and are limited to the face domain. Casanova et al. propose an instance conditioned GAN that can generate variations of an instance, although it can struggle with unique subjects and does not preserve all subject details. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_13",
"text": " Finally, the concurrent work of Gal et al. proposes a method to represent visual concepts, like an object or a style, through new tokens in the embedding space of a frozen text-to-image model, resulting in small personalized token embeddings. While this method is limited by the expressiveness of the frozen diffusion model, our fine-tuning approach enables us to embed the subject within the model’s output domain, resulting in the generation of novel images of the subject which preserve its key visual features. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_14",
"text": " Given only a few (typically 3-5) casually captured images of a specific subject, without any textual description, our objective is to generate new images of the subject with high detail fidelity and with variations guided by text prompts. Example variations include changing the subject location, changing subject properties such as color or shape, modifying the subject’s pose, viewpoint, and other semantic modifications. We do not impose any restrictions on input image capture settings and the subject image can have varying contexts. We next provide some background on text-to-image diffusion models (Sec. 3.1), then present our fine-tuning technique to bind a unique identifier with a subject described in a few images (Sec. 3.2), and finally propose a class-specific prior-preservation loss that enables us to overcome language drift in our fine-tuned model (Sec. 3.3). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_15",
"text": " Diffusion models are probabilistic generative models that are trained to learn a data distribution by the gradual denoising of a variable sampled from a Gaussian distribution. Specifically, we are interested in a pre-trained text-to-image diffusion model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} that, given an initial noise map ϵ∼𝒩(𝟎,𝐈)similar-tobold-italic-ϵ𝒩0𝐈{\\bm{\\epsilon}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and a conditioning vector 𝐜=Γ(𝐏)𝐜Γ𝐏\\mathbf{c}=\\Gamma(\\mathbf{P}) generated using a text encoder ΓΓ\\Gamma and a text prompt 𝐏𝐏\\mathbf{P}, generates an image 𝐱gen=𝐱^θ(ϵ,𝐜)subscript𝐱gensubscript^𝐱𝜃bold-italic-ϵ𝐜\\mathbf{x}_{\\text{gen}}=\\hat{\\mathbf{x}}_{\\theta}({\\bm{\\epsilon}},\\mathbf{c}). They are trained using a squared error loss to denoise a variably-noised image or latent code 𝐳t≔αt𝐱+σtϵ≔subscript𝐳𝑡subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ\\mathbf{z}_{t}\\coloneqq\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}} as follows: 𝔼𝐱,𝐜,ϵ,t(wt‖𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱‖22)subscript𝔼𝐱𝐜bold-italic-ϵ𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptnormsubscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},t}\\!\\left(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}\\right) (1) where 𝐱𝐱\\mathbf{x} is the ground-truth image, 𝐜𝐜\\mathbf{c} is a conditioning vector (e.g., obtained from a text prompt), and αt,σt,wtsubscript𝛼𝑡subscript𝜎𝑡subscript𝑤𝑡\\alpha_{t},\\sigma_{t},w_{t} are terms that control the noise schedule and sample quality, and are functions of the diffusion process time t∼𝒰((0,1))similar-to𝑡𝒰01t\\sim\\mathcal{U}((0,1)). A more detailed description is given in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
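The denoising objective in Eq. (1) above is compact enough to sketch in code. The following is a minimal, hedged PyTorch sketch, not the authors' implementation: `model`, the schedule callables `alpha`/`sigma`, and the weighting `w` are stand-ins for whatever the pre-trained text-to-image diffusion model actually uses.

```python
import torch

def diffusion_loss(model, x, c, alpha, sigma, w):
    """Sketch of Eq. (1): x are ground-truth images (B, C, H, W), c the conditioning
    vectors, and alpha/sigma/w are callables mapping t in [0, 1] to schedule values."""
    b = x.shape[0]
    t = torch.rand(b, device=x.device)            # t ~ U([0, 1])
    eps = torch.randn_like(x)                     # eps ~ N(0, I)
    a_t = alpha(t).view(b, 1, 1, 1)
    s_t = sigma(t).view(b, 1, 1, 1)
    z_t = a_t * x + s_t * eps                     # variably-noised input z_t
    x_hat = model(z_t, c, t)                      # network predicts the clean image
    sq_err = ((x_hat - x) ** 2).flatten(1).sum(dim=1)
    return (w(t) * sq_err).mean()                 # weighted squared-error expectation
```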
{
"id": "2208.12242_all_16",
"text": " Our first task is to implant the subject instance into the output domain of the model such that we can query the model for varied novel images of the subject. One natural idea is to fine-tune the model using the few-shot dataset of the subject. Careful care had to be taken when fine-tuning generative models such as GANs in a few-shot scenario as it can cause overfitting and mode-collapse - as well as not capturing the target distribution sufficiently well. There has been research on techniques to avoid these pitfalls (56, 47, 37, 42, 69), although, in contrast to our work, this line of work primarily seeks to generate images that resemble the target distribution but has no requirement of subject preservation. With regards to these pitfalls, we observe the peculiar finding that, given a careful fine-tuning setup using the diffusion loss from Eq 1, large text-to-image diffusion models seem to excel at integrating new information into their domain without forgetting the prior or overfitting to a small set of training images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_17",
"text": " Our goal is to “implant” a new (unique identifier, subject) pair into the diffusion model’s “dictionary” . In order to bypass the overhead of writing detailed image descriptions for a given image set we opt for a simpler approach and label all input images of the subject “a (identifier) (class noun)”, where (identifier) is a unique identifier linked to the subject and (class noun) is a coarse class descriptor of the subject (e.g. cat, dog, watch, etc.). The class descriptor can be provided by the user or obtained using a classifier. We use a class descriptor in the sentence in order to tether the prior of the class to our unique subject and find that using a wrong class descriptor, or no class descriptor increases training time and language drift while decreasing performance. In essence, we seek to leverage the model’s prior of the specific class and entangle it with the embedding of our subject’s unique identifier so we can leverage the visual prior to generate new poses and articulations of the subject in different contexts. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_18",
"text": " We generally find existing English words (e.g. “unique”, “special”) suboptimal since the model has to learn to disentangle them from their original meaning and to re-entangle them to reference our subject. This motivates the need for an identifier that has a weak prior in both the language model and the diffusion model. A hazardous way of doing this is to select random characters in the English language and concatenate them to generate a rare identifier (e.g. “xxy5syt00”). In reality, the tokenizer might tokenize each letter separately, and the prior for the diffusion model is strong for these letters. We often find that these tokens incur the similar weaknesses as using common English words. Our approach is to find rare tokens in the vocabulary, and then invert these tokens into text space, in order to minimize the probability of the identifier having a strong prior. We perform a rare-token lookup in the vocabulary and obtain a sequence of rare token identifiers f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}), where f𝑓f is a tokenizer; a function that maps character sequences to tokens and 𝐕^^𝐕\\hat{\\mathbf{V}} is the decoded text stemming from the tokens f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}). The sequence can be of variable length k𝑘k, and find that relatively short sequences of k={1,…,3}𝑘1…3k=\\{1,...,3\\} work well. Then, by inverting the vocabulary using the de-tokenizer on f(𝐕^)𝑓^𝐕f(\\hat{\\mathbf{V}}) we obtain a sequence of characters that define our unique identifier 𝐕^^𝐕\\hat{\\mathbf{V}}. For Imagen, we find that using uniform random sampling of tokens that correspond to 3 or fewer Unicode characters (without spaces) and using tokens in the T5-XXL tokenizer range of {5000,…,10000}5000…10000\\{5000,...,10000\\} works well. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
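A hedged sketch of the rare-token lookup described above: the id range {5000, ..., 10000} and the 3-character filter come from the passage, while the use of the Hugging Face `transformers` T5 tokenizer (whose SentencePiece vocabulary family matches T5-XXL's) is an assumption about tooling, not the authors' code.

```python
import random
from transformers import T5Tokenizer

def sample_rare_identifier(k=1, seed=0):
    tok = T5Tokenizer.from_pretrained("t5-small")          # same vocab family as T5-XXL (assumption)
    rng = random.Random(seed)
    pieces = []
    while len(pieces) < k:
        token_id = rng.randint(5000, 10000)                # rare-token id range from the paper
        text = tok.convert_tokens_to_string(
            [tok.convert_ids_to_tokens(token_id)]).strip()
        if 1 <= len(text) <= 3 and " " not in text:        # <= 3 Unicode characters, no spaces
            pieces.append(text)
    return "".join(pieces)                                 # decoded unique identifier V-hat
```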
{
"id": "2208.12242_all_19",
"text": " In our experience, the best results for maximum subject fidelity are achieved by fine-tuning all layers of the model. This includes fine-tuning layers that are conditioned on the text embeddings, which gives rise to the problem of language drift. Language drift has been an observed problem in language models (34, 40), where a model that is pre-trained on a large text corpus and later fine-tuned for a specific task progressively loses syntactic and semantic knowledge of the language. To the best of our knowledge, we are the first to find a similar phenomenon affecting diffusion models, where to model slowly forgets how to generate subjects of the same class as the target subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_20",
"text": " Another problem is the possibility of reduced output diversity. Text-to-image diffusion models naturally posses high amounts of output diversity. When fine-tuning on a small set of images we would like to be able to generate the subject in novel viewpoints, poses and articulations. Yet, there is a risk of reducing the amount of variability in the output poses and views of the subject (e.g. snapping to the few-shot views). We observe that this is often the case, especially when the model is trained for too long. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_21",
"text": " To mitigate the two aforementioned issues, we propose an autogenous class-specific prior preservation loss that encourages diversity and counters language drift. In essence, our method is to supervise the model with its own generated samples, in order for it to retain the prior once the few-shot fine-tuning begins. This allows it to generate diverse images of the class prior, as well as retain knowledge about the class prior that it can use in conjunction with knowledge about the subject instance. Specifically, we generate data 𝐱pr=𝐱^(𝐳t1,𝐜pr)subscript𝐱pr^𝐱subscript𝐳subscript𝑡1subscript𝐜pr\\mathbf{x}_{\\text{pr}}=\\hat{\\mathbf{x}}(\\mathbf{z}_{t_{1}},\\mathbf{c}_{\\text{pr}}) by using the ancestral sampler on the frozen pre-trained diffusion model with random initial noise 𝐳t1∼𝒩(𝟎,𝐈)similar-tosubscript𝐳subscript𝑡1𝒩0𝐈\\mathbf{z}_{t_{1}}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}) and conditioning vector 𝐜pr≔Γ(f(”a (class noun)”))≔subscript𝐜prΓ𝑓”a (class noun)”\\mathbf{c}_{\\text{pr}}\\coloneqq\\Gamma(f(\\text{\"a (class noun)\"})). The loss becomes: 𝔼𝐱,𝐜,ϵ,ϵ′,t(wt∥𝐱^θ(αt𝐱+σtϵ,𝐜)−𝐱∥22+λwt′∥𝐱^θ(αt′𝐱pr+σt′ϵ′,𝐜pr)−𝐱pr∥22),subscript𝔼𝐱𝐜bold-italic-ϵsuperscriptbold-italic-ϵ′𝑡delimited-()subscript𝑤𝑡subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼𝑡𝐱subscript𝜎𝑡bold-italic-ϵ𝐜𝐱22𝜆subscript𝑤superscript𝑡′subscriptsuperscriptdelimited-∥∥subscript^𝐱𝜃subscript𝛼superscript𝑡′subscript𝐱prsubscript𝜎superscript𝑡′superscriptbold-italic-ϵ′subscript𝐜prsubscript𝐱pr22\\mathbb{E}_{\\mathbf{x},\\mathbf{c},{\\bm{\\epsilon}},{\\bm{\\epsilon}}^{\\prime},t}(w_{t}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t}\\mathbf{x}+\\sigma_{t}{\\bm{\\epsilon}},\\mathbf{c})-\\mathbf{x}\\|^{2}_{2}+\\\\ \\lambda w_{t^{\\prime}}\\|\\hat{\\mathbf{x}}_{\\theta}(\\alpha_{t^{\\prime}}\\mathbf{x}_{\\text{pr}}+\\sigma_{t^{\\prime}}{\\bm{\\epsilon}}^{\\prime},\\mathbf{c}_{\\text{pr}})-\\mathbf{x}_{\\text{pr}}\\|^{2}_{2}), (2) where the second term is the prior-preservation term that supervises the model with its own generated images, and λ𝜆\\lambda controls for the relative weight of this term. Figure 3 illustrates the model fine-tuning with the class-generated samples and prior-preservation loss. Despite being simple, we find this prior-preservation loss is effective in encouraging output diversity and in overcoming language-drift. We also find that we can train the model for more iterations without risking overfitting. We find that ∼similar-to\\sim 1000 iterations with λ=1𝜆1\\lambda=1 and learning rate 10−5superscript10510^{-5} for Imagen and 5×10−65superscript1065\\times 10^{-6} for Stable Diffusion , and with a subject dataset size of 3-5 images is enough to achieve good results. During this process, ∼1000similar-toabsent1000\\sim 1000 “a (class noun)” samples are generated - but less can be used. The training process takes about 5 minutes on one TPUv4 for Imagen, and 5 minutes on a NVIDIA A100 for Stable Diffusion. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
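Combining the two terms of Eq. (2) is a one-liner on top of the single-term loss sketched after Eq. (1). In this hedged sketch, `x_pr`/`c_pr` stand for the pre-generated "a (class noun)" images and their conditioning vectors, and `diffusion_loss` is the illustrative helper defined earlier, not the authors' code.

```python
def prior_preservation_loss(model, x, c, x_pr, c_pr, alpha, sigma, w, lam=1.0):
    """Sketch of Eq. (2): subject reconstruction term plus lambda-weighted prior term."""
    subject_term = diffusion_loss(model, x, c, alpha, sigma, w)       # fit the few subject images
    prior_term = diffusion_loss(model, x_pr, c_pr, alpha, sigma, w)   # supervise with own class samples
    return subject_term + lam * prior_term
```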
{
"id": "2208.12242_all_22",
"text": " In this section, we show experiments and applications. Our method enables a large expanse of text-guided semantic modifications of our subject instances, including recontextualization, modification of subject properties such as material and species, art rendition, and viewpoint modification. Importantly, across all of these modifications, we are able to preserve the unique visual features that give the subject its identity and essence. If the task is recontextualization, then the subject features are unmodified, but appearance (e.g., pose) may change. If the task is a stronger semantic modification, such as crossing between our subject and another species/object, then the key features of the subject are preserved after modification. In this section, we reference the subject’s unique identifier using (V). We include specific Imagen and Stable Diffusion implementation details in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_23",
"text": " We collected a dataset of 30 subjects, including unique objects and pets such as backpacks, stuffed animals, dogs, cats, sunglasses, cartoons, etc. We separate each subject into two categories: objects and live subjects/pets. 21 of the 30 subjects are objects, and 9 are live subjects/pets. We provide one sample image for each of the subjects in Figure 5. Images for this dataset were collected by the authors or sourced from Unsplash . We also collected 25 prompts: 20 recontextualization prompts and 5 property modification prompts for objects; 10 recontextualization, 10 accessorization, and 5 property modification prompts for live subjects/pets. The full list of prompts can be found in the supplementary material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_24",
"text": " For the evaluation suite we generate four images per subject and per prompt, totaling 3,000 images. This allows us to robustly measure performances and generalization capabilities of a method. We make our dataset and evaluation protocol publicly available on the project webpage for future use in evaluating subject-driven generation. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_25",
"text": " One important aspect to evaluate is subject fidelity: the preservation of subject details in generated images. For this, we compute two metrics: CLIP-I and DINO . CLIP-I is the average pairwise cosine similarity between CLIP embeddings of generated and real images. Although this metric has been used in other work , it is not constructed to distinguish between different subjects that could have highly similar text descriptions (e.g. two different yellow clocks). Our proposed DINO metric is the average pairwise cosine similarity between the ViT-S/16 DINO embeddings of generated and real images. This is our preferred metric, since, by construction and in contrast to supervised networks, DINO is not trained to ignore differences between subjects of the same class. Instead, the self-supervised training objective encourages distinction of unique features of a subject or image. The second important aspect to evaluate is prompt fidelity, measured as the average cosine similarity between prompt and image CLIP embeddings. We denote this as CLIP-T. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
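The subject-fidelity metrics above reduce to one operation: the average pairwise cosine similarity between embeddings of real and generated images. A hedged sketch follows; loading the DINO ViT-S/16 or CLIP encoders that produce the embeddings is assumed to happen elsewhere.

```python
import torch
import torch.nn.functional as F

def avg_pairwise_cosine(real_feats: torch.Tensor, gen_feats: torch.Tensor) -> float:
    """real_feats: (N, D) embeddings of real images; gen_feats: (M, D) of generated ones.
    Used for DINO and CLIP-I; CLIP-T is the same computation on prompt/image CLIP embeddings."""
    r = F.normalize(real_feats, dim=-1)
    g = F.normalize(gen_feats, dim=-1)
    return (g @ r.t()).mean().item()          # mean cosine similarity over all pairs
```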
{
"id": "2208.12242_all_26",
"text": " We compare our results with Textual Inversion, the recent concurrent work of Gal et al. , using the hyperparameters provided in their work. We find that this work is the only comparable work in the literature that is subject-driven, text-guided and generates novel images. We generate images for DreamBooth using Imagen, DreamBooth using Stable Diffusion and Textual Inversion using Stable Diffusion. We compute DINO and CLIP-I subject fidelity metrics and the CLIP-T prompt fidelity metric. In Table 1 we show sizeable gaps in both subject and prompt fidelity metrics for DreamBooth over Textual Inversion. We find that DreamBooth (Imagen) achieves higher scores for both subject and prompt fidelity than DreamBooth (Stable Diffusion), approaching the upper-bound of subject fidelity for real images. We believe that this is due to the larger expressive power and higher output quality of Imagen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_27",
"text": " Further, we compare Textual Inversion (Stable Diffusion) and DreamBooth (Stable Diffusion) by conducting a user study. For subject fidelity, we asked 72 users to answer questionnaires of 25 comparative questions (3 users per questionnaire), totaling 1800 answers. Samples are randomly selected from a large pool. Each question shows the set of real images for a subject, and one generated image of that subject by each method (with a random prompt). Users are asked to answer the question: “Which of the two images best reproduces the identity (e.g. item type and details) of the reference item?”, and we include a “Cannot Determine / Both Equally” option. Similarly for prompt fidelity, we ask “Which of the two images is best described by the reference text?”. We average results using majority voting and present them in Table 2. We find an overwhelming preference for DreamBooth for both subject fidelity and prompt fidelity. This shines a light on results in Table 1, where DINO differences of around 0.10.10.1 and CLIP-T differences of 0.050.050.05 are significant in terms of user preference. Finally, we show qualitative comparisons in Figure 4. We observe that DreamBooth better preserves subject identity, and is more faithful to prompts. We show samples of the user study in the supp. material. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_28",
"text": " We fine-tune Imagen on 15 subjects from our dataset, with and without our proposed prior preservation loss (PPL). The prior preservation loss seeks to combat language drift and preserve the prior. We compute a prior preservation metric (PRES) by computing the average pairwise DINO embeddings between generated images of random subjects of the prior class and real images of our specific subject. The higher this metric, the more similar random subjects of the class are to our specific subject, indicating collapse of the prior. We report results in Table 3 and observe that PPL substantially counteracts language drift and helps retain the ability to generate diverse images of the prior class. Additionally, we compute a diversity metric (DIV) using the average LPIPS cosine similarity between generated images of same subject with same prompt. We observe that our model trained with PPL achieves higher diversity (with slightly diminished subject fidelity), which can also be observed qualitatively in Figure 6, where our model trained with PPL overfits less to the environment of the reference images and can generate the dog in more diverse poses and articulations. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_29",
"text": " We finetune Imagen on a subset of our dataset subjects (5 subjects) with no class noun, a randomly sampled incorrect class noun, and the correct class noun. With the correct class noun for our subject, we are able to faithfully fit to the subject, take advantage of the class prior, allowing us to generate our subject in various contexts. When an incorrect class noun (e.g. “can” for a backpack) is used, we run into contention between our subject and and the class prior - sometimes obtaining cylindrical backpacks, or otherwise misshapen subjects. If we train with no class noun, the model does not leverage the class prior, has difficulty learning the subject and converging, and can generate erroneous samples. Subject fidelity results are shown in Table 4, with substantially higher subject fidelity for our proposed approach. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_30",
"text": " We can generate novel images for a specific subject in different contexts (Figure 7) with descriptive prompts (“a (V) (class noun) (context description)”). Importantly, we are able to generate the subject in new poses and articulations, with previously unseen scene structure and realistic integration of the subject in the scene (e.g. contact, shadows, reflections). ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_31",
"text": " Given a prompt “a painting of a (V) (class noun) in the style of (famous painter)” or “a statue of a (V) (class noun) in the style of (famous sculptor)” we are able to generate artistic renditions of our subject. Unlike style transfer, where the source structure is preserved and only the style is transferred, we are able to generate meaningful, novel variations depending on the artistic style, while preserving subject identity. E.g, as shown in Figure 8, “Michelangelo”, we generated a pose that is novel and not seen in the input images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_32",
"text": " We are able to render the subject under novel viewpoints. In Figure 8, we generate new images of the input cat (with consistent complex fur patterns) under new viewpoints. We highlight that the model has not seen this specific cat from behind, below, or above - yet it is able to extrapolate knowledge from the class prior to generate these novel views given only 4 frontal images of the subject. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_33",
"text": " We are able to modify subject properties. For example, we show crosses between a specific Chow Chow dog and different animal species in the bottom row of Figure 8. We prompt the model with sentences of the following structure: “a cross of a (V) dog and a (target species)”. In particular, we can see in this example that the identity of the dog is well preserved even when the species changes - the face of the dog has certain unique features that are well preserved and melded with the target species. Other property modifications are possible, such as material modification (e.g. “a transparent (V) teapot” in Figure 7). Some are harder than others and depend on the prior of the base generation model. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_34",
"text": " We illustrate some failure models of our method in Figure 9. The first is related to not being able to accurately generate the prompted context. Possible reasons are a weak prior for these contexts, or difficulty in generating both the subject and specified concept together due to low probability of co-occurrence in the training set. The second is context-appearance entanglement, where the appearance of the subject changes due to the prompted context, exemplified in Figure 9 with color changes of the backpack. Third, we also observe overfitting to the real images that happen when the prompt is similar to the original setting in which the subject was seen. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_35",
"text": " Other limitations are that some subjects are easier to learn than others (e.g. dogs and cats). Occasionally, with subjects that are rarer, the model is unable to support as many subject variations. Finally, there is also variability in the fidelity of the subject and some generated images might contain hallucinated subject features, depending on the strength of the model prior, and the complexity of the semantic modification. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_36",
"text": " We presented an approach for synthesizing novel renditions of a subject using a few images of the subject and the guidance of a text prompt. Our key idea is to embed a given subject instance in the output domain of a text-to-image diffusion model by binding the subject to a unique identifier. Remarkably - this fine-tuning process can work given only 3-5 subject images, making the technique particularly accessible. We demonstrated a variety of applications with animals and objects in generated photorealistic scenes, in most cases indistinguishable from real images. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
},
{
"id": "2208.12242_all_37",
"text": " We thank Rinon Gal, Adi Zicher, Ron Mokady, Bill Freeman, Dilip Krishnan, Huiwen Chang and Daniel Cohen-Or for their valuable inputs that helped improve this work, and to Mohammad Norouzi, Chitwan Saharia and William Chan for providing us with their support and the pretrained Imagen models. Finally, a special thanks to David Salesin for his feedback, advice and for his support for the project. ",
"title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation"
}
] |
The paper wished to show only the main object, letting other regions be exactly zero if they are not needed. How did the authors achieve this?
|
The paper reaches this goal by computing each pixel's norm over the three colour channels and zeroing out pixels whose norm falls below a threshold, specified as a percentile of all pixel norms in x [27].
|
[
27
] |
[
{
"id": "1506.06579_all_0",
"text": " The last several years have produced tremendous progress in training powerful, deep neural network models that are approaching and even surpassing human abilities on a variety of challenging machine learning tasks (Taigman et al., 2014; Schroff et al., 2015; Hannun et al., 2014). A flagship example is training deep, convolutional neural networks (CNNs) with supervised learning to classify natural images (Krizhevsky et al., 2012). That area has benefitted from the combined effects of faster computing (e.g. GPUs), better training techniques (e.g. dropout (Hinton et al., 2012)), better activation units (e.g. rectified linear units (Glorot et al., 2011)), and larger labeled datasets (Deng et al., 2009; Lin et al., 2014). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_1",
"text": " While there has thus been considerable improvements in our knowledge of how to create high-performing architectures and learning algorithms, our understanding of how these large neural models operate has lagged behind. Neural networks have long been known as “black boxes” because it is difficult to understand exactly how any particular, trained neural network functions due to the large number of interacting, non-linear parts. Large modern neural networks are even harder to study because of their size; for example, understanding the widely-used AlexNet DNN involves making sense of the values taken by the 60 million trained network parameters. Understanding what is learned is interesting in its own right, but it is also one key way of further improving models: the intuitions provided by understanding the current generation of models should suggest ways to make them better. For example, the deconvolutional technique for visualizing the features learned by the hidden units of DNNs suggested an architectural change of smaller convolutional filters that led to state of the art performance on the ImageNet benchmark in 2013 (Zeiler & Fergus, 2013). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_2",
"text": " We also note that tools that enable understanding will especially benefit the vast numbers of newcomers to deep learning, who would like to take advantage of off-the-shelf software packages — like Theano (Bergstra et al., 2010), Pylearn2 (Goodfellow et al., 2013), Caffe (Jia et al., 2014), and Torch (Collobert et al., 2011) — in new domains, but who may not have any intuition for why their models work (or do not). Experts can also benefit as they iterate ideas for new models or when they are searching for good hyperparameters. We thus believe that both experts and newcomers will benefit from tools that provide intuitions about the inner workings of DNNs. This paper provides two such tools, both of which are open source so that scientists and practitioners can integrate them with their own DNNs to better understand them. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_3",
"text": " The first tool is software that interactively plots the activations produced on each layer of a trained DNN for user-provided images or video. Static images afford a slow, detailed investigation of a particular input, whereas video input highlights the DNNs changing responses to dynamic input. At present, the videos are processed live from a user’s computer camera, which is especially helpful because users can move different items around the field of view, occlude and combine them, and perform other manipulations to actively learn how different features in the network respond. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_4",
"text": " The second tool we introduce enables better visualization of the learned features computed by individual neurons at every layer of a DNN. Seeing what features have been learned is important both to understand how current DNNs work and to fuel intuitions for how to improve them. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_5",
"text": " Attempting to understand what computations are performed at each layer in DNNs is an increasingly popular direction of research. One approach is to study each layer as a group and investigate the type of computation performed by the set of neurons on a layer as a whole (Yosinski et al., 2014; Mahendran & Vedaldi, 2014). This approach is informative because the neurons in a layer interact with each other to pass information to higher layers, and thus each neuron’s contribution to the entire function performed by the DNN depends on that neuron’s context in the layer. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_6",
"text": " Another approach is to try to interpret the function computed by each individual neuron. Past studies in this vein roughly divide into two different camps: dataset-centric and network-centric. The former requires both a trained DNN and running data through that network; the latter requires only the trained network itself. One dataset-centric approach is to display images from the training or test set that cause high or low activations for individual units. Another is the deconvolution method of Zeiler & Fergus (2013), which highlights the portions of a particular image that are responsible for the firing of each neural unit. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_7",
"text": " Network-centric approaches investigate a network directly without any data from a dataset. For example, Erhan et al. (2009) synthesized images that cause high activations for particular units. Starting with some initial input 𝐱=𝐱𝟎𝐱subscript𝐱0\\mathbf{x}=\\mathbf{x_{0}}, the activation ai(𝐱)subscript𝑎𝑖𝐱a_{i}(\\mathbf{x}) caused at some unit i𝑖i by this input is computed, and then steps are taken in input space along the gradient ∂ai(𝐱)/∂𝐱subscript𝑎𝑖𝐱𝐱\\partial a_{i}(\\mathbf{x})/\\partial\\mathbf{x} to synthesize inputs that cause higher and higher activations of unit i𝑖i, eventually terminating at some 𝐱∗superscript𝐱\\mathbf{x^{*}} which is deemed to be a preferred input stimulus for the unit in question. In the case where the input space is an image, 𝐱∗superscript𝐱\\mathbf{x^{*}} can be displayed directly for interpretation. Others have followed suit, using the gradient to find images that cause higher activations (Simonyan et al., 2013; Nguyen et al., 2014) or lower activations (Szegedy et al., 2013) for output units. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_8",
"text": " These gradient-based approaches are attractive in their simplicity, but the optimization process tends to produce images that do not greatly resemble natural images. Instead, they are composed of a collection of “hacks” that happen to cause high (or low) activations: extreme pixel values, structured high frequency patterns, and copies of common motifs without global structure (Simonyan et al., 2013; Nguyen et al., 2014; Szegedy et al., 2013; Goodfellow et al., 2014). The fact that activations may be effected by such hacks is better understood thanks to several recent studies. Specifically, it has been shown that such hacks may be applied to correctly classified images to cause them to be misclassified even via imperceptibly small changes (Szegedy et al., 2013), that such hacks can be found even without the gradient information to produce unrecognizable “fooling examples” (Nguyen et al., 2014), and that the abundance of non-natural looking images that cause extreme activations can be explained by the locally linear behavior of neural nets (Goodfellow et al., 2014). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_9",
"text": " With such strong evidence that optimizing images to cause high activations produces unrecognizable images, is there any hope of using such methods to obtain useful visualizations? It turns out there is, if one is able to appropriately regularize the optimization. Simonyan et al. (2013) showed that slightly discernible images for the final layers of a convnet could be produced with L2subscript𝐿2L_{2}-regularization. Mahendran and Vedaldi (2014) also showed the importance of incorporating natural-image priors in the optimization process when producing images that mimic an entire-layer’s firing pattern produced by a specific input image. We build on these works and contribute three additional forms of regularization that, when combined, produce more recognizable, optimization-based samples than previous methods. Because the optimization is stochastic, by starting at different random initial images, we can produce a set of optimized images whose variance provides information about the invariances learned by the unit. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_10",
"text": " To summarize, this paper makes the following two contributions: ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_11",
"text": " 1. We describe and release a software tool that provides a live, interactive visualization of every neuron in a trained convnet as it responds to a user-provided image or video. The tool displays forward activation values, preferred stimuli via gradient ascent, top images for each unit from the training set, deconv highlighting (Zeiler & Fergus, 2013) of top images, and backward diffs computed via backprop or deconv starting from arbitrary units. The combined effect of these complementary visualizations promotes a greater understanding of what a neuron computes than any single method on its own. We also describe a few insights we have gained from using this tool. (Section 2). 2. We extend past efforts to visualize preferred activation patterns in input space by adding several new types of regularization, which produce what we believe are the most interpretable images for large convnets so far (Section 3). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_12",
"text": " Both of our tools are released as open source and are available at http://yosinski.com/deepvis. While the tools could be adapted to integrate with any DNN software framework, they work out of the box with the popular Caffe DNN software package (Jia et al., 2014). Users may run visualizations with their own Caffe DNN or our pre-trained DNN, which comes with pre-computed images optimized to activate each neuron in this trained network. Our pre-trained network is nearly identical to the “AlexNet” architecture (Krizhevsky et al., 2012), but with local reponse normalization layers after pooling layers following (Jia et al., 2014). It was trained with the Caffe framework on the ImageNet 2012 dataset (Deng et al., 2009). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_13",
"text": " Our first visualization method is straightforward: plotting the activation values for the neurons in each layer of a convnet in response to an image or video. In fully connected neural networks, the order of the units is irrelevant, so plots of these vectors are not spatially informative. However, in convolutional networks, filters are applied in a way that respects the underlying geometry of the input; in the case of 2D images, filters are applied in a 2D convolution over the two spatial dimensions of the image. This convolution produces activations on subsequent layers that are, for each channel, also arranged spatially. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_14",
"text": " Figure 1 shows examples of this type of plot for the 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5} layer. The 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5} layer has size 256×\\times13×\\times13, which we depict as 256 separate 13×\\times13 grayscale images. Each of the 256 small images contains activations in the same spatial x𝑥x-y𝑦y spatial layout as the input data, and the 256 images are simply and arbitrarily tiled into a 16×\\times16 grid in row-major order. Figure 2 shows a zoomed in view of one particular channel, 𝖼𝗈𝗇𝗏𝟧𝟣𝟧𝟣subscript𝖼𝗈𝗇𝗏𝟧151\\mathsf{conv5_{151}}, that responds to human and animal faces. All layers can be viewed in the software tool, including pooling and normalization layers. Visualizing these layers provides intuitions about their effects and functions. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
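The tiling described above (256 channels of 13×13 activations arranged into a 16×16 grid) is a simple reshape/transpose; below is a hedged NumPy sketch, not the released tool's code.

```python
import numpy as np

def tile_activations(act: np.ndarray, rows: int = 16, cols: int = 16) -> np.ndarray:
    """act: (C, H, W) activation volume with C == rows * cols; returns one (rows*H, cols*W) image."""
    c, h, w = act.shape
    assert c == rows * cols
    grid = act.reshape(rows, cols, h, w).transpose(0, 2, 1, 3)   # (rows, H, cols, W), row-major
    return grid.reshape(rows * h, cols * w)

tiled = tile_activations(np.random.rand(256, 13, 13))            # e.g. show with imshow(..., cmap="gray")
```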
{
"id": "1506.06579_all_15",
"text": " Although this visualization is simple to implement, we find it informative because all data flowing through the network can be visualized. There is nothing mysterious happening behind the scenes. Because this convnet contains only a single path from input to output, every layer is a bottleneck through which all information must pass en-route to a classification decision. The layer sizes are all small enough that any one layer can easily fit on a computer screen.111The layer with the most activations is 𝖼𝗈𝗇𝗏𝟣𝖼𝗈𝗇𝗏𝟣\\mathsf{conv1} which, when tiled, is only 550x550 before adding padding. So far, we have gleaned several surprising intuitions from using the tool: ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_16",
"text": " • One of the most interesting conclusions so far has been that representations on some layers seem to be surprisingly local. Instead of finding distributed representations on all layers, we see, for example, detectors for text, flowers, fruit, and faces on 𝖼𝗈𝗇𝗏𝟦𝖼𝗈𝗇𝗏𝟦\\mathsf{conv4} and 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5}. These conclusions can be drawn either from the live visualization or the optimized images (or, best, by using both in concert) and suggest several directions for future research (discussed in Section 4). • When using direct file input to classify photos from Flickr or Google Images, classifications are often correct and highly confident (softmax probability for correct class near 1). On the other hand, when using input from a webcam, predictions often cannot be correct because no items from the training set are shown in the image. The training set’s 1000 classes, though numerous, do not cover most common household objects. Thus, when shown a typical webcam view of a person with no ImageNet classes present, the output has no single high probability, as is expected. Surprisingly, however, this probability vector is noisy and varies significantly in response to tiny changes in the input, often changing merely in response to the noise from the webcam. We might have instead expected unchanging and low confidence predictions for a given scene when no object the network has been trained to classify is present. Plotting the fully connected layers (𝖿𝖼𝟨𝖿𝖼𝟨\\mathsf{fc6} and 𝖿𝖼𝟩𝖿𝖼𝟩\\mathsf{fc7}) also reveals a similar sensitivity to small input changes. • Although the last three layers are sensitive to small input changes, much of the lower layer computation is more robust. For example, when visualizing the 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5} layer, one can find many invariant detectors for faces, shoulders, text, etc. by moving oneself or objects in front of the camera. Even though the 1000 classes contain no explicitly labeled faces or text, the network learns to identify these concepts simply because they represent useful partial information for making a later classification decision. One face detector, denoted 𝖼𝗈𝗇𝗏𝟧𝟣𝟧𝟣subscript𝖼𝗈𝗇𝗏𝟧151\\mathsf{conv5_{151}} (channel number 151 on 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5}), is shown in Figure 2 activating for human and lion faces and in Figure 1 activating for a cat face. Zhou et al. (2014) recently observed a similar effect where convnets trained only to recognize different scene types — playgrounds, restaurant patios, living rooms, etc. — learn object detectors (e.g. for chairs, books, and sofas) on intermediate layers. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_17",
"text": " The reader is encouraged to try this visualization tool out for him or herself. The code, together with pre-trained models and images synthesized by gradient ascent, can be downloaded at http://yosinski.com/deepvis. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_18",
"text": " The second contribution of this work is introducing several regularization methods to bias images found via optimization toward more visually interpretable examples. While each of these regularization methods helps on its own, in combination they are even more effective. We found useful combinations via a random hyperparameter search, as discussed below. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_19",
"text": " Formally, consider an image 𝐱∈ℝC×H×W𝐱superscriptℝ𝐶𝐻𝑊\\mathbf{x}\\in\\mathbb{R}^{C\\times H\\times W}, where C=3𝐶3C=3 color channels and the height (H𝐻H) and width (W𝑊W) are both 227 pixels. When this image is presented to a neural network, it causes an activation ai(𝐱)subscript𝑎𝑖𝐱a_{i}(\\mathbf{x}) for some unit i𝑖i, where for simplicity i𝑖i is an index that runs over all units on all layers. We also define a parameterized regularization function Rθ(𝐱)subscript𝑅𝜃𝐱R_{\\theta}(\\mathbf{x}) that penalizes images in various ways. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_20",
"text": " Our network was trained on ImageNet by first subtracting the per-pixel mean of examples in ImageNet before inputting training examples to the network. Thus, the direct input to the network, 𝐱𝐱\\mathbf{x}, can be thought of as a zero-centered input. We may pose the optimization problem as finding an image 𝐱∗superscript𝐱\\mathbf{x^{*}} where ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_21",
"text": " 𝐱∗=argmax𝐱(ai(𝐱)−Rθ(𝐱))superscript𝐱subscriptargmax𝐱subscript𝑎𝑖𝐱subscript𝑅𝜃𝐱\\mathbf{x^{*}}=\\operatorname*{arg\\,max}_{\\mathbf{x}}(a_{i}(\\mathbf{x})-R_{\\theta}(\\mathbf{x})) (1) ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_22",
"text": " In practice, we use a slightly different formulation. Because we search for 𝐱∗superscript𝐱\\mathbf{x^{*}} by starting at some 𝐱𝟎subscript𝐱0\\mathbf{x_{0}} and taking gradient steps, we instead define the regularization via an operator rθ(⋅)subscript𝑟𝜃⋅r_{\\theta}(\\cdot) that maps 𝐱𝐱\\mathbf{x} to a slightly more regularized version of itself. This latter definition is strictly more expressive, allowing regularization operators rθsubscript𝑟𝜃r_{\\theta} that are not the gradient of any Rθsubscript𝑅𝜃R_{\\theta}. This method is easy to implement within a gradient descent framework by simply alternating between taking a step toward the gradient of ai(𝐱)subscript𝑎𝑖𝐱a_{i}(\\mathbf{x}) and taking a step in the direction given by rθsubscript𝑟𝜃r_{\\theta}. With a gradient descent step size of η𝜂\\eta, a single step in this process applies the update: ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_23",
"text": " 𝐱←rθ(𝐱+η∂ai∂𝐱)←𝐱subscript𝑟𝜃𝐱𝜂subscript𝑎𝑖𝐱\\mathbf{x}\\leftarrow r_{\\theta}\\left(\\mathbf{x}+\\eta\\frac{\\partial a_{i}}{\\partial\\mathbf{x}}\\right)\\\\ (2) ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
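The alternating update of Eq. (2) is easy to sketch. Below is a hedged PyTorch version in which `activation_fn` (a forward pass returning the chosen unit's scalar activation) is a stand-in, and the L2-decay operator described in the following paragraphs serves as the example r_theta; it is a sketch of the technique, not the authors' released implementation.

```python
import torch

def visualize_unit(activation_fn, x0, steps=200, eta=1.0, theta_decay=0.01):
    """Gradient ascent on a_i(x) interleaved with a regularization operator r_theta."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        a = activation_fn(x)                       # scalar activation a_i(x)
        grad, = torch.autograd.grad(a, x)
        with torch.no_grad():
            x += eta * grad                        # step toward higher activation
            x *= (1.0 - theta_decay)               # example r_theta: L2 decay
    return x.detach()
```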
{
"id": "1506.06579_all_24",
"text": " We investigated the following four regularizations. All are designed to overcome different pathologies commonly encountered by gradient descent without regularization. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_25",
"text": " L2subscript𝐿2L_{2} decay: A common regularization, L2subscript𝐿2L_{2} decay penalizes large values and is implemented as rθ(𝐱)=(1−θdecay)⋅𝐱subscript𝑟𝜃𝐱⋅1subscript𝜃decay𝐱r_{\\theta}(\\mathbf{x})=(1-\\theta_{\\mathrm{decay}})\\cdot\\mathbf{x}. L2subscript𝐿2L_{2} decay tends to prevent a small number of extreme pixel values from dominating the example image. Such extreme single-pixel values neither occur naturally with great frequency nor are useful for visualization. L2subscript𝐿2L_{2} decay was also used by Simonyan et al. (2013). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_26",
"text": " Gaussian blur: Producing images via gradient ascent tends to produce examples with high frequency information (see Supplementary Section S1 for a possible reason). While these images cause high activations, they are neither realistic nor interpretable (Nguyen et al., 2014). A useful regularization is thus to penalize high frequency information. We implement this as a Gaussian blur step rθ(𝐱)=GaussianBlur(𝐱,θb_width)subscript𝑟𝜃𝐱GaussianBlur𝐱subscript𝜃b_widthr_{\\theta}(\\mathbf{x})=\\mathrm{GaussianBlur}(\\mathbf{x},\\theta_{\\mathrm{b\\_width}}). Convolving with a blur kernel is more computationally expensive than the other regularization methods, so we added another hyperparameter θb_everysubscript𝜃b_every\\theta_{\\mathrm{b\\_every}} to allow, for example, blurring every several optimization steps instead of every step. Blurring an image multiple times with a small width Gaussian kernel is equivalent to blurring once with a larger width kernel, and the effect will be similar even if the image changes slightly during the optimization process. This technique thus lowers computational costs without limiting the expressiveness of the regularization. Mahendran & Vedaldi (2014) used a penalty with a similar effect to blurring, called total variation, in their work reconstructing images from layer codes. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
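A hedged sketch of the blur operator above, applied only every `b_every` steps as described; the choice of `scipy.ndimage.gaussian_filter` is my own, not necessarily what the authors used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_step(x: np.ndarray, step: int, b_width: float = 0.5, b_every: int = 4) -> np.ndarray:
    """x: (C, H, W) image; blur the spatial axes only, and only every b_every steps."""
    if step % b_every == 0:
        x = gaussian_filter(x, sigma=(0, b_width, b_width))
    return x
```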
{
"id": "1506.06579_all_27",
"text": " Clipping pixels with small norm: The first two regularizations suppress high amplitude and high frequency information, so after applying both, we are left with an 𝐱∗superscript𝐱\\mathbf{x^{*}} that contains somewhat small, somewhat smooth values. However, 𝐱∗superscript𝐱\\mathbf{x^{*}} will still tend to contain non-zero pixel values everywhere. Even if some pixels in 𝐱∗superscript𝐱\\mathbf{x^{*}} show the primary object or type of input causing the unit under consideration to activate, the gradient with respect to all other pixels in 𝐱∗superscript𝐱\\mathbf{x^{*}} will still generally be non-zero, so these pixels will also shift to show some pattern as well, contributing in whatever small way they can to ultimately raise the chosen unit’s activation. We wish to bias the search away from such behavior and instead show only the main object, letting other regions be exactly zero if they are not needed. We implement this bias using an rθ(𝐱)subscript𝑟𝜃𝐱r_{\\theta}(\\mathbf{x}) that computes the norm of each pixel (over red, green, and blue channels) and then sets any pixels with small norm to zero. The threshold for the norm, θn_pctsubscript𝜃n_pct\\theta_{\\mathrm{n\\_pct}}, is specified as a percentile of all pixel norms in 𝐱𝐱\\mathbf{x}. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
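This is the regularizer that the question and answer at the top of this row describe: zero out pixels whose RGB norm falls below the n_pct percentile. A minimal hedged NumPy sketch of that operation (not the released code):

```python
import numpy as np

def clip_small_norm(x: np.ndarray, n_pct: float = 30.0) -> np.ndarray:
    """x: (3, H, W) image; zero pixels whose RGB norm is below the n_pct percentile."""
    norms = np.linalg.norm(x, axis=0)              # per-pixel norm over the colour channels, (H, W)
    thresh = np.percentile(norms, n_pct)           # threshold as a percentile of all pixel norms
    mask = (norms >= thresh).astype(x.dtype)
    return x * mask[None, :, :]
```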
{
"id": "1506.06579_all_28",
"text": " Clipping pixels with small contribution: Instead of clipping pixels with small norms, we can try something slightly smarter and clip pixels with small contributions to the activation. One way of computing a pixel’s contribution to an activation is to measure how much the activation increases or decreases when the pixel is set to zero; that is, to compute the contribution as |ai(𝐱)−ai(𝐱−j)|subscript𝑎𝑖𝐱subscript𝑎𝑖subscript𝐱𝑗|a_{i}(\\mathbf{x})-a_{i}(\\mathbf{x}_{-j})|, where 𝐱−jsubscript𝐱𝑗\\mathbf{x}_{-j} is 𝐱𝐱\\mathbf{x} but with the jthsuperscript𝑗𝑡ℎj^{th} pixel set to zero. This approach is straightforward but prohibitively slow, requiring a forward pass for every pixel. Instead, we approximate this process by linearizing ai(𝐱)subscript𝑎𝑖𝐱a_{i}(\\mathbf{x}) around 𝐱𝐱\\mathbf{x}, in which case the contribution of each dimension of 𝐱𝐱\\mathbf{x} can be estimated as the elementwise product of 𝐱𝐱\\mathbf{x} and the gradient. We then sum over all three channels and take the absolute value, computing |∑c𝐱∘∇𝐱ai(𝐱)|subscript𝑐𝐱subscript∇𝐱subscript𝑎𝑖𝐱\\left|\\sum_{c}\\mathbf{x}\\circ\\nabla_{\\mathbf{x}}a_{i}(\\mathbf{x})\\right|. We use the absolute value to find pixels with small contribution in either direction, positive or negative. While we could choose to keep the pixel transitions where setting the pixel to zero would result in a large activation increase, these shifts are already handled by gradient ascent, and here we prefer to clip only the pixels that are deemed not to matter, not to take large gradient steps outside the region where the linear approximation is most valid. We define this rθ(𝐱)subscript𝑟𝜃𝐱r_{\\theta}(\\mathbf{x}) as the operation that sets pixels with contribution under the θc_pctsubscript𝜃c_pct\\theta_{\\mathrm{c\\_pct}} percentile to zero. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
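The contribution-based variant replaces the pixel norm with the linearized contribution |sum_c x * grad|; again a hedged NumPy sketch rather than the released code.

```python
import numpy as np

def clip_small_contribution(x: np.ndarray, grad: np.ndarray, c_pct: float = 30.0) -> np.ndarray:
    """x, grad: (3, H, W); grad is the gradient of the unit's activation w.r.t. x."""
    contrib = np.abs((x * grad).sum(axis=0))       # |sum_c x o grad|, one value per pixel
    thresh = np.percentile(contrib, c_pct)
    mask = (contrib >= thresh).astype(x.dtype)
    return x * mask[None, :, :]
```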
{
"id": "1506.06579_all_29",
"text": " If the above regularization methods are applied individually, they are somewhat effective at producing more interpretable images; Figure 3 shows the effects of each individual hyperparameter. However, preliminary experiments uncovered that their combined effect produces better visualizations. To pick a reasonable set of hyperparameters for all methods at once, we ran a random hyperparameter search of 300 possible combinations and settled on four that complement each other well. The four selected combinations are listed in Table 1 and optimized images using each are shown for the “Gorilla” class output unit in Figure 4. Of the four, some show high frequency information, others low frequency; some contain dense pixel data, and others contain only sparse outlines of important regions. We found the version in the lower-left quadrant to be the best single set of hyperparameters, but often greater intuition can be gleaned by considering all four at once. Figure 5 shows the optimization results computed for a selection of units on all layers. A single image for every filter of all five convolutional layers is shown in Supplementary Figure S1. Nine images for each filter of all layers, including each of the 1000 ImageNet output classes, can be viewed at http://yosinski.com/deepvis. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_30",
"text": " We have introduced two visual tools for aiding in the interpretation of trained neural nets. Intuition gained from these tools may prompt ideas for improved methods and future research. Here we discuss several such ideas. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_31",
"text": " The interactive tool reveals that representations on later convolutional layers tend to be somewhat local, where channels correspond to specific, natural parts (e.g. wheels, faces) instead of being dimensions in a completely distributed code. That said, not all features correspond to natural parts, raising the possibility of a different decomposition of the world than humans might expect. These visualizations suggest that further study into the exact nature of learned representations — whether they are local to a single channel or distributed across several — is likely to be interesting (see Zhou et al. (2014) for work in this direction). The locality of the representation also suggests that during transfer learning, when new models are trained atop the 𝖼𝗈𝗇𝗏𝟦𝖼𝗈𝗇𝗏𝟦\\mathsf{conv4} or 𝖼𝗈𝗇𝗏𝟧𝖼𝗈𝗇𝗏𝟧\\mathsf{conv5} representations, a bias toward sparse connectivity could be helpful because it may be necessary to combine only a few features from these layers to create important features at higher layers. ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_32",
"text": " The second tool — new regularizations that enable improved, interpretable, optimized visualizations of learned features — will help researchers and practitioners understand, debug, and improve their models. The visualizations also reveal a new twist in an ongoing story. Previous studies have shown that discriminative networks can easily be fooled or hacked by the addition of certain structured noise in image space (Szegedy et al., 2013; Nguyen et al., 2014). An oft-cited reason for this property is that discriminative training leads networks to ignore non-discriminative information in their input, e.g. learning to detect jaguars by matching the unique spots on their fur while ignoring the fact that they have four legs. For this reason it has been seen as a hopeless endeavor to create a generative model in which one randomly samples an x𝑥x from a broad distribution on the space of all possible images and then iteratively transforms x𝑥x into a recognizable image by moving it to a region that satisfies both a prior p(x)𝑝𝑥p(x) and posterior p(y|x)𝑝conditional𝑦𝑥p(y|x) for some class label y𝑦y. Past attempts have largely supported this view by producing unrealistic images using this method (Nguyen et al., 2014; Simonyan et al., 2013). ",
"title": "Understanding Neural Networks Through Deep Visualization"
},
{
"id": "1506.06579_all_33",
"text": " However, the results presented here suggest an alternate possibility: the previously used priors may simply have been too weak (see Section S1 for one hypothesis of why a strong p(x)𝑝𝑥p(x) model is needed). With the careful design or learning of a p(x)𝑝𝑥p(x) model that biases toward realism, one may be able to harness the large number of parameters present in a discriminately learned p(y|x)𝑝conditional𝑦𝑥p(y|x) model to generate realistic images by enforcing probability under both models simultaneously. Even with the simple, hand-coded p(x)𝑝𝑥p(x) models we use in this paper as regularizers, complex dependencies between distant pixels already arise (cf. the beetles with structure spanning over 100 pixels in Figure 4). This implies that the discriminative parameters also contain significant “generative” structure from the training dataset; that is, the parameters encode not only the jaguar’s spots, but to some extent also its four legs. With better, learned probabilistic models over the input and activations of higher layers, much more structure may be apparent. Work by Dai et al. (2015) shows some interesting results in this direction. While the images generated in this paper are far from being photo-realistic, they do suggest that transferring discriminatively trained parameters to generative models — opposite the direction of the usual unsupervised pretraining approach — may be a fruitful area for further investigation. ",
"title": "Understanding Neural Networks Through Deep Visualization"
}
] |
What are ARSGs?
|
An ARSG (attention-based recurrent sequence generator) is a recurrent neural network that stochastically generates an output sequence from an input sequence, using an attention mechanism to focus on the relevant parts of the input at each step [7].
|
[
7
] |
[
{
"id": "1506.07503_all_0",
"text": " Recently, attention-based recurrent networks have been successfully applied to a wide variety of tasks, such as handwriting synthesis , machine translation , image caption generation and visual object classification .111An early version of this work was presented at the NIPS 2014 Deep Learning Workshop . Such models iteratively process their input by selecting relevant content at every step. This basic idea significantly extends the applicability range of end-to-end training methods, for instance, making it possible to construct networks with external memory (6, 7). ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_1",
"text": " We introduce extensions to attention-based recurrent networks that make them applicable to speech recognition. Learning to recognize speech can be viewed as learning to generate a sequence (transcription) given another sequence (speech). From this perspective it is similar to machine translation and handwriting synthesis tasks, for which attention-based methods have been found suitable (2, 1). However, compared to machine translation, speech recognition principally differs by requesting much longer input sequences (thousands of frames instead of dozens of words), which introduces a challenge of distinguishing similar speech fragments222Explained in more detail in Sec. 2.1. in a single utterance. It is also different from handwriting synthesis, since the input sequence is much noisier and does not have as clear structure. For these reasons speech recognition is an interesting testbed for developing new attention-based architectures capable of processing long and noisy inputs. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_2",
"text": " Application of attention-based models to speech recognition is also an important step toward building fully end-to-end trainable speech recognition systems, which is an active area of research. The dominant approach is still based on hybrid systems consisting of a deep neural acoustic model, a triphone HMM model and an n-gram language model (8, 9). This requires dictionaries of hand-crafted pronunciation and phoneme lexicons, and a multi-stage training procedure to make the components work together. Excellent results by an HMM-less recognizer have recently been reported, with the system consisting of a CTC-trained neural network and a language model . Still, the language model was added only at the last stage in that work, thus leaving open a question of how much an acoustic model can benefit from being aware of a language model during training. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_3",
"text": " In this paper, we evaluate attention-based models on a phoneme recognition task using the widely-used TIMIT dataset. At each time step in generating an output sequence (phonemes), an attention mechanism selects or weighs the signals produced by a trained feature extraction mechanism at potentially all of the time steps in the input sequence (speech frames). The weighted feature vector then helps to condition the generation of the next element of the output sequence. Since the utterances in this dataset are rather short (mostly under 5 seconds), we measure the ability of the considered models in recognizing much longer utterances which were created by artificially concatenating the existing utterances. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_4",
"text": " We start with a model proposed in for the machine translation task as the baseline. This model seems entirely vulnerable to the issue of similar speech fragments but despite our expectations it was competitive on the original test set, reaching 18.7% phoneme error rate (PER). However, its performance degraded quickly with longer, concatenated utterances. We provide evidence that this model adapted to track the absolute location in the input sequence of the content it is recognizing, a strategy feasible for short utterances from the original test set but inherently unscalable. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_5",
"text": " In order to circumvent this undesired behavior, in this paper, we propose to modify the attention mechanism such that it explicitly takes into account both (a) the location of the focus from the previous step, as in and (b) the features of the input sequence, as in . This is achieved by adding as inputs to the attention mechanism auxiliary convolutional features which are extracted by convolving the attention weights from the previous step with trainable filters. We show that a model with such convolutional features performs significantly better on the considered task (18.0% PER). More importantly, the model with convolutional features robustly recognized utterances many times longer than the ones from the training set, always staying below 20% PER. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_6",
"text": " Therefore, the contribution of this work is three-fold. For one, we present a novel purely neural speech recognition architecture based on an attention mechanism, whose performance is comparable to that of the conventional approaches on the TIMIT dataset. Moreover, we propose a generic method of adding location awareness to the attention mechanism. Finally, we introduce a modification of the attention mechanism to avoid concentrating the attention on a single frame, and thus avoid obtaining less “effective training examples”, bringing the PER down to 17.6%. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_7",
"text": " An attention-based recurrent sequence generator (ARSG) is a recurrent neural network that stochastically generates an output sequence (y1,…,yT)subscript𝑦1…subscript𝑦𝑇(y_{1},\\dots,y_{T}) from an input x𝑥x. In practice, x𝑥x is often processed by an encoder which outputs a sequential input representation h=(h1,…,hL)ℎsubscriptℎ1…subscriptℎ𝐿h=(h_{1},\\ldots,h_{L}) more suitable for the attention mechanism to work with. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_8",
"text": " In the context of this work, the output y𝑦y is a sequence of phonemes, and the input x=(x1,…,xL′)𝑥subscript𝑥1…subscript𝑥superscript𝐿′x=(x_{1},\\ldots,x_{L^{\\prime}}) is a sequence of feature vectors. Each feature vector is extracted from a small overlapping window of audio frames. The encoder is implemented as a deep bidirectional recurrent network (BiRNN), to form a sequential representation hℎh of length L=L′𝐿superscript𝐿′L=L^{\\prime}. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_9",
"text": " At the i𝑖i-th step an ARSG generates an output yisubscript𝑦𝑖y_{i} by focusing on the relevant elements of hℎh: αi=Attend(si−1,αi−1,h)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1subscript𝛼𝑖1ℎ\\displaystyle\\alpha_{i}=Attend(s_{i-1},\\alpha_{i-1},h) (1) gi=∑j=1Lαi,jhjsubscript𝑔𝑖superscriptsubscript𝑗1𝐿subscript𝛼𝑖𝑗subscriptℎ𝑗\\displaystyle g_{i}=\\sum\\limits_{j=1}^{L}\\alpha_{i,j}h_{j} (2) yi∼Generate(si−1,gi),similar-tosubscript𝑦𝑖𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑒subscript𝑠𝑖1subscript𝑔𝑖\\displaystyle y_{i}\\sim Generate(s_{i-1},g_{i}), (3) where si−1subscript𝑠𝑖1s_{i-1} is the (i−1)𝑖1(i-1)-th state of the recurrent neural network to which we refer as the generator, αi∈ℝLsubscript𝛼𝑖superscriptℝ𝐿\\alpha_{i}\\in\\mathbb{R}^{L} is a vector of the attention weights, also often called the alignment . Using the terminology from , we call gisubscript𝑔𝑖g_{i} a glimpse. The step is completed by computing a new generator state: si=Recurrency(si−1,gi,yi)subscript𝑠𝑖𝑅𝑒𝑐𝑢𝑟𝑟𝑒𝑛𝑐𝑦subscript𝑠𝑖1subscript𝑔𝑖subscript𝑦𝑖\\displaystyle s_{i}=Recurrency(s_{i-1},g_{i},y_{i}) (4) Long short-term memory units (LSTM, ) and gated recurrent units (GRU, ) are typically used as a recurrent activation, to which we refer as a recurrency. The process is graphically illustrated in Fig. 1. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_10",
"text": " Inspired by we distinguish between location-based, content-based and hybrid attention mechanisms. Attend𝐴𝑡𝑡𝑒𝑛𝑑Attend in Eq. (1) describes the most generic, hybrid attention. If the term αi−1subscript𝛼𝑖1\\alpha_{i-1} is dropped from Attend𝐴𝑡𝑡𝑒𝑛𝑑Attend arguments, i.e., αi=Attend(si−1,h)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1ℎ\\alpha_{i}=Attend(s_{i-1},h), we call it content-based (see, e.g., or ). In this case, Attend𝐴𝑡𝑡𝑒𝑛𝑑Attend is often implemented by scoring each element in hℎh separately and normalizing the scores: ei,j=Score(si−1,hj),subscript𝑒𝑖𝑗𝑆𝑐𝑜𝑟𝑒subscript𝑠𝑖1subscriptℎ𝑗\\displaystyle e_{i,j}=Score(s_{i-1},h_{j}), (5) αi,j=exp(ei,j)/∑j=1Lexp(ei,j).subscript𝛼𝑖𝑗/subscript𝑒𝑖𝑗superscriptsubscript𝑗1𝐿subscript𝑒𝑖𝑗\\displaystyle\\alpha_{i,j}=\\exp(e_{i,j})\\left/\\sum\\limits_{j=1}^{L}\\exp(e_{i,j})\\right.. (6) ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_11",
"text": " The main limitation of such scheme is that identical or very similar elements of hℎh are scored equally regardless of their position in the sequence. This is the issue of “similar speech fragments” raised above. Often this issue is partially alleviated by an encoder such as e.g. a BiRNN or a deep convolutional network that encode contextual information into every element of hℎh . However, capacity of hℎh elements is always limited, and thus disambiguation by context is only possible to a limited extent. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_12",
"text": " Alternatively, a location-based attention mechanism computes the alignment from the generator state and the previous alignment only such that αi=Attend(si−1,αi−1)subscript𝛼𝑖𝐴𝑡𝑡𝑒𝑛𝑑subscript𝑠𝑖1subscript𝛼𝑖1\\alpha_{i}=Attend(s_{i-1},\\alpha_{i-1}). For instance, Graves used the location-based attention mechanism using a Gaussian mixture model in his handwriting synthesis model. In the case of speech recognition, this type of location-based attention mechanism would have to predict the distance between consequent phonemes using si−1subscript𝑠𝑖1s_{i-1} only, which we expect to be hard due to large variance of this quantity. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_13",
"text": " For these limitations associated with both content-based and location-based mechanisms, we argue that a hybrid attention mechanism is a natural candidate for speech recognition. Informally, we would like an attention model that uses the previous alignment αi−1subscript𝛼𝑖1\\alpha_{i-1} to select a short list of elements from hℎh, from which the content-based attention, in Eqs. (5)–(6), will select the relevant ones without confusion. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_14",
"text": " We start from the ARSG-based model with the content-based attention mechanism proposed in . This model can be described by Eqs. (5)–(6), where ei,j=w⊤tanh(Wsi−1+Vhj+b).subscript𝑒𝑖𝑗superscript𝑤top𝑊subscript𝑠𝑖1𝑉subscriptℎ𝑗𝑏\\displaystyle e_{i,j}=w^{\\top}\\tanh(Ws_{i-1}+Vh_{j}+b). (7) w𝑤w and b𝑏b are vectors, W𝑊W and V𝑉V are matrices. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_15",
"text": " We extend this content-based attention mechanism of the original model to be location-aware by making it take into account the alignment produced at the previous step. First, we extract k𝑘k vectors fi,j∈ℝksubscript𝑓𝑖𝑗superscriptℝ𝑘f_{i,j}\\in\\mathbb{R}^{k} for every position j𝑗j of the previous alignment αi−1subscript𝛼𝑖1\\alpha_{i-1} by convolving it with a matrix F∈ℝk×r𝐹superscriptℝ𝑘𝑟F\\in\\mathbb{R}^{k\\times r}: fi=F∗αi−1.subscript𝑓𝑖𝐹subscript𝛼𝑖1\\displaystyle f_{i}=F*\\alpha_{i-1}. (8) These additional vectors fi,jsubscript𝑓𝑖𝑗f_{i,j} are then used by the scoring mechanism ei,jsubscript𝑒𝑖𝑗e_{i,j}: ei,j=w⊤tanh(Wsi−1+Vhj+Ufi,j+b)subscript𝑒𝑖𝑗superscript𝑤top𝑊subscript𝑠𝑖1𝑉subscriptℎ𝑗𝑈subscript𝑓𝑖𝑗𝑏\\displaystyle e_{i,j}=w^{\\top}\\tanh(Ws_{i-1}+Vh_{j}+Uf_{i,j}+b) (9) ",
"title": "Attention-Based Models for Speech Recognition"
},
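To make Eqs. (7)–(9) above concrete, here is a minimal NumPy sketch of one step of the location-aware ("hybrid") scoring. The parameter shapes, the zero-padding of the alignment, and the use of cross-correlation instead of a flipped-kernel convolution are assumptions for illustration (the flip is immaterial for learned filters); this is not the paper's implementation.

```python
import numpy as np

def softmax(e):
    z = np.exp(e - e.max())
    return z / z.sum()

def hybrid_attention_step(s_prev, alpha_prev, h, W, V, U, F, w, b):
    """One location-aware attention step following Eqs. (8)-(9).

    s_prev: previous generator state, shape (n,)
    alpha_prev: previous alignment, shape (L,)
    h: encoded input sequence, shape (L, d)
    F: k convolution filters of width r, shape (k, r)
    W, V, U, w, b: score-network parameters of shapes (m, n), (m, d), (m, k), (m,), (m,)
    """
    L, _ = h.shape
    k, r = F.shape
    pad = r // 2
    a = np.pad(alpha_prev, (pad, pad))
    # f_{i,j}: convolve the previous alignment with each of the k filters (Eq. 8)
    f = np.array([F @ a[j:j + r] for j in range(L)])                       # shape (L, k)
    # e_{i,j} = w^T tanh(W s_{i-1} + V h_j + U f_{i,j} + b)                 (Eq. 9)
    e = np.array([w @ np.tanh(W @ s_prev + V @ h[j] + U @ f[j] + b) for j in range(L)])
    alpha = softmax(e)                                                      # Eq. (6)
    glimpse = alpha @ h                                                     # Eq. (2)
    return alpha, glimpse
```

The returned glimpse would then be fed, together with the generator state, into the output and recurrency functions of Eqs. (3)–(4).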
{
"id": "1506.07503_all_16",
"text": " There are three potential issues with the normalization in Eq. (6). ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_17",
"text": " First, when the input sequence hℎh is long, the glimpse gisubscript𝑔𝑖g_{i} is likely to contain noisy information from many irrelevant feature vectors hjsubscriptℎ𝑗h_{j}, as the normalized scores αi,jsubscript𝛼𝑖𝑗\\alpha_{i,j} are all positive and sum to 111. This makes it difficult for the proposed ARSG to focus clearly on a few relevant frames at each time i𝑖i. Second, the attention mechanism is required to consider all the L𝐿L frames each time it decodes a single output yisubscript𝑦𝑖y_{i} while decoding the output of length T𝑇T, leading to a computational complexity of O(LT)𝑂𝐿𝑇O(LT). This may easily become prohibitively expensive, when input utterances are long (and issue that is less serious for machine translation, because in that case the input sequence is made of words, not of 20ms acoustic frames). ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_18",
"text": " The other side of the coin is that the use of softmax normalization in Eq. (6) prefers to mostly focus on only a single feature vector hjsubscriptℎ𝑗h_{j}. This prevents the model from aggregating multiple top-scored frames to form a glimpse gisubscript𝑔𝑖g_{i}. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_19",
"text": " There is a straightforward way to address the first issue of a noisy glimpse by “sharpening” the scores αi,jsubscript𝛼𝑖𝑗\\alpha_{i,j}. One way to sharpen the weights is to introduce an inverse temperature β>1𝛽1\\beta>1 to the softmax function such that ai,j=exp(βei,j)/∑j=1Lexp(βei,j),subscript𝑎𝑖𝑗/𝛽subscript𝑒𝑖𝑗superscriptsubscript𝑗1𝐿𝛽subscript𝑒𝑖𝑗a_{i,j}=\\exp(\\beta e_{i,j})\\left/\\sum_{j=1}^{L}\\exp(\\beta e_{i,j})\\right., or to keep only the top-k𝑘k frames according to the scores and re-normalize them. These sharpening methods, however, still requires us to compute the score of every frame each time (O(LT)𝑂𝐿𝑇O(LT)), and they worsen the second issue, of overly narrow focus. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_20",
"text": " We also propose and investigate a windowing technique. At each time i𝑖i, the attention mechanism considers only a subsequence h~=(hpi−w,…,hpi+w−1)~ℎsubscriptℎsubscript𝑝𝑖𝑤…subscriptℎsubscript𝑝𝑖𝑤1\\tilde{h}=(h_{p_{i}-w},\\ldots,h_{p_{i}+w-1}) of the whole sequence hℎh, where w≪Lmuch-less-than𝑤𝐿w\\ll L is the predefined window width and pisubscript𝑝𝑖p_{i} is the median of the alignment αi−1subscript𝛼𝑖1\\alpha_{i-1}. The scores for hj∉h~subscriptℎ𝑗~ℎh_{j}\\notin\\tilde{h} are not computed, resulting in a lower complexity of O(L+T)𝑂𝐿𝑇O(L+T). This windowing technique is similar to taking the top-k𝑘k frames, and similarly, has the effect of sharpening. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_21",
"text": " The proposed sharpening based on windowing can be used both during training and evaluation. Later, in the experiments, we only consider the case where it is used during evaluation. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_22",
"text": " We observed that the proposed sharpening methods indeed helped with long utterances. However, all of them, and especially selecting the frame with the highest score, negatively affected the model’s performance on the standard development set which mostly consists of short utterances. This observations let us hypothesize that it is helpful for the model to aggregate selections from multiple top-scored frames. In a sense this brings more diversity, i.e., more effective training examples, to the output part of the model, as more input locations are considered. To facilitate this effect, we replace the unbounded exponential function of the softmax function in Eq. (6) with the bounded logistic sigmoid σ𝜎\\sigma such that ai,j=σ(ei,j)/∑j=1Lσ(ei,j).subscript𝑎𝑖𝑗/𝜎subscript𝑒𝑖𝑗superscriptsubscript𝑗1𝐿𝜎subscript𝑒𝑖𝑗a_{i,j}=\\sigma(e_{i,j})\\left/\\sum_{j=1}^{L}\\sigma(e_{i,j})\\right.. This has the effect of smoothing the focus found by the attention mechanism. ",
"title": "Attention-Based Models for Speech Recognition"
},
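A small sketch contrasting the two normalizations discussed in the entries above (inverse-temperature sharpening and sigmoid smoothing); the score vector e and the value of beta are placeholders supplied by the caller.

```python
import numpy as np

def sharpened_weights(e, beta):
    """Sharpening: a_{i,j} = exp(beta * e_{i,j}) / sum_j exp(beta * e_{i,j}), beta > 1."""
    z = np.exp(beta * (e - e.max()))   # subtract max for numerical stability
    return z / z.sum()

def smoothed_weights(e):
    """Smoothing: a_{i,j} = sigma(e_{i,j}) / sum_j sigma(e_{i,j})."""
    s = 1.0 / (1.0 + np.exp(-e))       # bounded logistic sigmoid instead of exp
    return s / s.sum()
```

With beta > 1 the weights concentrate on fewer frames, while the sigmoid version bounds each unnormalized score to (0, 1), which lets several top-scored frames share the focus.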
{
"id": "1506.07503_all_23",
"text": " Speech recognizers based on the connectionist temporal classification (CTC, ) and its extension, RNN Transducer , are the closest to the ARSG model considered in this paper. They follow earlier work on end-to-end trainable deep learning over sequences with gradient signals flowing through the alignment process . They have been shown to perform well on the phoneme recognition task . Furthermore, the CTC was recently found to be able to directly transcribe text from speech without any intermediate phonetic representation . ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_24",
"text": " The considered ARSG is different from both the CTC and RNN Transducer in two ways. First, whereas the attention mechanism deterministically aligns the input and the output sequences, the CTC and RNN Transducer treat the alignment as a latent random variable over which MAP (maximum a posteriori) inference is performed. This deterministic nature of the ARSG’s alignment mechanism allows beam search procedure to be simpler. Furthermore, we empirically observe that a much smaller beam width can be used with the deterministic mechanism, which allows faster decoding (see Sec. 4.2 and Fig. 2). Second, the alignment mechanism of both the CTC and RNN Transducer is constrained to be “monotonic” to keep marginalization of the alignment tractable. On the other hand, the proposed attention mechanism can result in non-monotonic alignment, which makes it suitable for a larger variety of tasks other than speech recognition. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_25",
"text": " A hybrid attention model using a convolution operation was also proposed in for neural Turing machines (NTM). At each time step, the NTM computes content-based attention weights which are then convolved with a predicted shifting distribution. Unlike the NTM’s approach, the hybrid mechanism proposed here lets learning figure out how the content-based and location-based addressing be combined by a deep, parametric function (see Eq. (9).) ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_26",
"text": " Sukhbaatar et al. describes a similar hybrid attention mechanism, where location embeddings are used as input to the attention model. This approach has an important disadvantage that the model cannot work with an input sequence longer than those seen during training. Our approach, on the other hand, works well on sequences many times longer than those seen during training (see Sec. 5.) ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_27",
"text": " We closely followed the procedure in . All experiments were performed on the TIMIT corpus . We used the train-dev-test split from the Kaldi TIMIT s5 recipe. We trained on the standard 462 speaker set with all SA utterances removed and used the 50 speaker dev set for early stopping. We tested on the 24 speaker core test set. All networks were trained on 40 mel-scale filter-bank features together with the energy in each frame, and first and second temporal differences, yielding in total 123 features per frame. Each feature was rescaled to have zero mean and unit variance over the training set. Networks were trained on the full 61-phone set extended with an extra “end-of-sequence” token that was appended to each target sequence. Similarly, we appended an all-zero frame at the end of each input sequence to indicate the end of the utterance. Decoding was performed using the 61+1 phoneme set, while scoring was done on the 39 phoneme set. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_28",
"text": " One property of ARSG models is that different subsets of parameters are reused different number of times; L𝐿L times for those of the encoder, LT𝐿𝑇LT for the attention weights and T𝑇T times for all the other parameters of the ARSG. This makes the scales of derivatives w.r.t. parameters vary significantly, and we handle it by using an adaptive learning rate algorithm, AdaDelta which has two hyperparameters ϵitalic-ϵ\\epsilon and ρ𝜌\\rho. All the weight matrices were initialized from a normal Gaussian distribution with its standard deviation set to 0.010.010.01. Recurrent weights were furthermore orthogonalized. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_29",
"text": " As TIMIT is a relatively small dataset, proper regularization is crucial. We used the adaptive weight noise as a main regularizer . We first trained our models with a column norm constraint with the maximum norm 111 until the lowest development negative log-likelihood is achieved.333 Applying the weight noise from the beginning of training caused severe underfitting. During this time, ϵitalic-ϵ\\epsilon and ρ𝜌\\rho are set to 10−8superscript10810^{-8} and 0.950.950.95, respectively. At this point, we began using the adaptive weight noise, and scaled down the model complexity cost LCsubscript𝐿𝐶L_{C} by a factor of 10, while disabling the column norm constraints. Once the new lowest development log-likelihood was reached, we fine-tuned the model with a smaller ϵ=10−10italic-ϵsuperscript1010\\epsilon=10^{-10}, until we did not observe the improvement in the development phoneme error rate (PER) for 100K weight updates. Batch size 1 was used throughout the training. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_30",
"text": " We evaluated the ARSGs with different attention mechanisms. The encoder was a 3-layer BiRNN with 256 GRU units in each direction, and the activations of the 512 top-layer units were used as the representation hℎh. The generator had a single recurrent layer of 256 GRU units. Generate𝐺𝑒𝑛𝑒𝑟𝑎𝑡𝑒Generate in Eq. (3) had a hidden layer of 64 maxout units. The initial states of both the encoder and generator were treated as additional parameters. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_31",
"text": " Our baseline model is the one with a purely content-based attention mechanism (See Eqs. (5)–(7).) The scoring network in Eq. (7) had 512 hidden units. The other two models use the convolutional features in Eq. (8) with k=10𝑘10k=10 and r=201𝑟201r=201. One of them uses the smoothing from Sec. 2.3. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_32",
"text": " A left-to-right beam search over phoneme sequences was used during decoding . Beam search was stopped when the “end-of-sequence” token ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> was emitted. We started with a beam width of 10, increasing it up to 40 when the network failed to produce ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> with the narrower beam. As shown in Fig. 2, decoding with a wider beam gives little-to-none benefit. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_33",
"text": " All the models achieved competitive PERs (see Table 1). With the convolutional features, we see 3.7% relative improvement over the baseline and further 5.9% with the smoothing. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_34",
"text": " To our surprise (see Sec. 2.1.), the baseline model learned to align properly. An alignment produced by the baseline model on a sequence with repeated phonemes (utterance FDHC0_SX209) is presented in Fig. 3 which demonstrates that the baseline model is not confused by short-range repetitions. We can also see from the figure that it prefers to select frames that are near the beginning or even slightly before the phoneme location provided as a part of the dataset. The alignments produced by the other models were very similar visually. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_35",
"text": " The good performance of the baseline model led us to the question of how it distinguishes between repetitions of similar phoneme sequences and how reliably it decodes longer sequences with more repetitions. We created two datasets of long utterances; one by repeating each test utterance, and the other by concatenating randomly chosen utterances. In both cases, the waveforms were cross-faded with a 0.05s silence inserted as the “pau” phone. We concatenated up to 151515 utterances. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_36",
"text": " First, we checked the forced alignment with these longer utterances by forcing the generator to emit the correct phonemes. Each alignment was considered correct if 90% of the alignment weight lies inside the ground-truth phoneme window extended by 20 frames on each side. Under this definition, all phones but the ⟨eos⟩delimited-⟨⟩eos\\left<\\text{eos}\\right> shown in Fig. 3 are properly aligned. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_37",
"text": " The first column of Fig. 4 shows the number of correctly aligned frames w.r.t. the utterance length (in frames) for some of the considered models. One can see that the baseline model was able to decode sequences up to about 120 phones when a single utterance was repeated, and up to about 150 phones when different utterances were concatenated. Even when it failed, it correctly aligned about 50 phones. On the other hand, the model with the hybrid attention mechanism with convolutional features was able to align sequences up to 200 phones long. However, once it began to fail, the model was not able to align almost all phones. The model with the smoothing behaved similarly to the one with convolutional features only. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_38",
"text": " We examined failed alignments to understand these two different modes of failure. Some of the examples are shown in the Supplementary Materials. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_39",
"text": " We found that the baseline model properly aligns about 40 first phones, then makes a jump to the end of the recording and cycles over the last 10 phones. This behavior suggests that it learned to track its approximate location in the source sequence. However, the tracking capability is limited to the lengths observed during training. Once the tracker saturates, it jumps to the end of the recording. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_40",
"text": " In contrast, when the location-aware network failed it just stopped aligning – no particular frames were selected for each phone. We attribute this behavior to the issue of noisy glimpse discussed in Sec. 2.3. With a long utterance there are many irrelevant frames negatively affecting the weight assigned to the correct frames. In line with this conjecture, the location-aware network works slightly better on the repetition of the same utterance, where all frames are somehow relevant, than on the concatenation of different utterances, where each misaligned frame is irrelevant. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_41",
"text": " To gain more insight we applied the alignment sharpening schemes described in Sec. 2.3. In the remaining columns of Fig. 4, we see that the sharpening methods help the location-aware network to find proper alignments, while they show little effect on the baseline network. The windowing technique helps both the baseline and location-aware networks, with the location-aware network properly aligning nearly all sequences. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_42",
"text": " During visual inspection, we noticed that in the middle of very long utterances the baseline model was confused by repetitions of similar content within the window, and that such confusions did not happen in the beginning. This supports our conjecture above. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_43",
"text": " We evaluated the models on long sequences. Each model was decoded using the alignment sharpening techniques that helped to obtain proper forced alignments. The results are presented in Fig. 5. The baseline model fails to decode long utterances, even when a narrow window is used to constrain the alignments it produces. The two other location-aware networks are able to decode utterances formed by concatenating up to 11 test utterances. Better results were obtained with a wider window, presumably because it resembles more the training conditions when at each step the attention mechanism was seeing the whole input sequence. With the wide window, both of the networks scored about 20% PER on the long utterances, indicating that the proposed location-aware attention mechanism can scale to sequences much longer than those in the training set with only minor modifications required at the decoding stage. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_44",
"text": " We proposed and evaluated a novel end-to-end trainable speech recognition architecture based on a hybrid attention mechanism which combines both content and location information in order to select the next position in the input sequence for decoding. One desirable property of the proposed model is that it can recognize utterances much longer than the ones it was trained on. In the future, we expect this model to be used to directly recognize text from speech (10, 17), in which case it may become important to incorporate a monolingual language model to the ARSG architecture . ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_45",
"text": " This work has contributed two novel ideas for attention mechanisms: a better normalization approach yielding smoother alignments and a generic principle for extracting and using features from the previous alignments. Both of these can potentially be applied beyond speech recognition. For instance, the proposed attention can be used without modification in neural Turing machines, or by using 2–D convolution instead of 1–D, for improving image caption generation . ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_46",
"text": " All experiments were conducted using Theano (27, 28), PyLearn2 , and Blocks libraries. ",
"title": "Attention-Based Models for Speech Recognition"
},
{
"id": "1506.07503_all_47",
"text": " The authors would like to acknowledge the support of the following agencies for research funding and computing support: National Science Center (Poland), NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Bahdanau also thanks Planet Intelligent Systems GmbH and Yandex. ",
"title": "Attention-Based Models for Speech Recognition"
}
] |
What is the reason for doing the joint training? Is it related to the model performance?
|
Due to memory limits, the authors adopt a new joint training method utilizing both images and videos [22]. As more independent image frames are added, the variance of the gradient is reduced at the expense of some bias for the video modeling objective [23]. Table 4 shows that additional frames per video help to improve video and image sample quality metrics [30].
|
[
22,
23,
30
] |
[
{
"id": "2204.03458_all_0",
"text": " Diffusion models have recently been producing high quality results in image generation and audio generation (e.g. 28, 39, 40, 16, 23, 36, 48, 60, 42, 10, 29), and there is significant interest in validating diffusion models in new data modalities. In this work, we present first results on video generation using diffusion models, for both unconditional and conditional settings. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_1",
"text": " We show that high quality videos can be generated using essentially the standard formulation of the Gaussian diffusion model , with little modification other than straightforward architectural changes to accommodate video data within the memory constraints of deep learning accelerators. We train models that generate a fixed number of video frames using a 3D U-Net diffusion model architecture, and we enable generating longer videos by applying this model autoregressively using a new method for conditional generation. We additionally show the benefits of joint training on video and image modeling objectives. We test our methods on video prediction and unconditional video generation, where we achieve state-of-the-art sample quality scores, and we also show promising first results on text-conditioned video generation. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_2",
"text": " A diffusion model (46, 47, 22) specified in continuous time (53, 48, 10, 28) is a generative model with latents 𝐳={𝐳t|t∈(0,1)}𝐳conditional-setsubscript𝐳𝑡𝑡01\\mathbf{z}=\\{\\mathbf{z}_{t}\\,|\\,t\\in(0,1)\\} obeying a forward process q(𝐳|𝐱)𝑞conditional𝐳𝐱q(\\mathbf{z}|\\mathbf{x}) starting at data 𝐱∼p(𝐱)similar-to𝐱𝑝𝐱\\mathbf{x}\\sim p(\\mathbf{x}). The forward process is a Gaussian process that satisfies the Markovian structure: q(𝐳t|𝐱)=𝒩(𝐳t;αt𝐱,σt2𝐈),q(𝐳t|𝐳s)=𝒩(𝐳t;(αt/αs)𝐳s,σt|s2𝐈)formulae-sequence𝑞conditionalsubscript𝐳𝑡𝐱𝒩subscript𝐳𝑡subscript𝛼𝑡𝐱superscriptsubscript𝜎𝑡2𝐈𝑞conditionalsubscript𝐳𝑡subscript𝐳𝑠𝒩subscript𝐳𝑡subscript𝛼𝑡subscript𝛼𝑠subscript𝐳𝑠superscriptsubscript𝜎conditional𝑡𝑠2𝐈\\displaystyle q(\\mathbf{z}_{t}|\\mathbf{x})=\\mathcal{N}(\\mathbf{z}_{t};\\alpha_{t}\\mathbf{x},\\sigma_{t}^{2}\\mathbf{I}),\\quad q(\\mathbf{z}_{t}|\\mathbf{z}_{s})=\\mathcal{N}(\\mathbf{z}_{t};(\\alpha_{t}/\\alpha_{s})\\mathbf{z}_{s},\\sigma_{t|s}^{2}\\mathbf{I}) (1) where 0≤s<t≤10𝑠𝑡10\\leq s<t\\leq 1, σt|s2=(1−eλt−λs)σt2subscriptsuperscript𝜎2conditional𝑡𝑠1superscript𝑒subscript𝜆𝑡subscript𝜆𝑠superscriptsubscript𝜎𝑡2\\sigma^{2}_{t|s}=(1-e^{\\lambda_{t}-\\lambda_{s}})\\sigma_{t}^{2}, and αt,σtsubscript𝛼𝑡subscript𝜎𝑡\\alpha_{t},\\sigma_{t} specify a differentiable noise schedule whose log signal-to-noise-ratio λt=log(αt2/σt2)subscript𝜆𝑡superscriptsubscript𝛼𝑡2superscriptsubscript𝜎𝑡2\\lambda_{t}=\\log(\\alpha_{t}^{2}/\\sigma_{t}^{2}) decreases with t𝑡t until q(𝐳1)≈𝒩(𝟎,𝐈)𝑞subscript𝐳1𝒩0𝐈q(\\mathbf{z}_{1})\\approx\\mathcal{N}(\\mathbf{0},\\mathbf{I}). ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_3",
"text": " Learning to reverse the forward process for generation can be reduced to learning to denoise 𝐳t∼q(𝐳t|𝐱)similar-tosubscript𝐳𝑡𝑞conditionalsubscript𝐳𝑡𝐱\\mathbf{z}_{t}\\sim q(\\mathbf{z}_{t}|\\mathbf{x}) into an estimate 𝐱^θ(𝐳t,λt)≈𝐱subscript^𝐱𝜃subscript𝐳𝑡subscript𝜆𝑡𝐱\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t},\\lambda_{t})\\approx\\mathbf{x} for all t𝑡t (we will drop the dependence on λtsubscript𝜆𝑡\\lambda_{t} to simplify notation). We train this denoising model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} using a weighted mean squared error loss 𝔼ϵ,t(w(λt)‖𝐱^θ(𝐳t)−𝐱‖22)subscript𝔼bold-italic-ϵ𝑡delimited-()𝑤subscript𝜆𝑡subscriptsuperscriptnormsubscript^𝐱𝜃subscript𝐳𝑡𝐱22\\displaystyle\\mathbb{E}_{{\\boldsymbol{\\epsilon}},t}\\!\\left(w(\\lambda_{t})\\|\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t})-\\mathbf{x}\\|^{2}_{2}\\right) (2) over uniformly sampled times t∈(0,1)𝑡01t\\in(0,1). This reduction of generation to denoising can be justified as optimizing a weighted variational lower bound on the data log likelihood under the diffusion model, or as a form of denoising score matching (56, 47, 22, 28). In practice, we use the ϵbold-italic-ϵ{\\boldsymbol{\\epsilon}}-prediction parameterization, defined as 𝐱^θ(𝐳t)=(𝐳t−σtϵθ(𝐳t))/αtsubscript^𝐱𝜃subscript𝐳𝑡subscript𝐳𝑡subscript𝜎𝑡subscriptbold-italic-ϵ𝜃subscript𝐳𝑡subscript𝛼𝑡\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t})=(\\mathbf{z}_{t}-\\sigma_{t}{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t}))/\\alpha_{t}, and train ϵθsubscriptbold-italic-ϵ𝜃{\\boldsymbol{\\epsilon}}_{\\theta} using a mean squared error in ϵbold-italic-ϵ{\\boldsymbol{\\epsilon}} space with t𝑡t sampled according to a cosine schedule . This corresponds to a particular weighting w(λt)𝑤subscript𝜆𝑡w(\\lambda_{t}) for learning a scaled score estimate ϵθ(𝐳t)≈−σt∇𝐳tlogp(𝐳t)subscriptbold-italic-ϵ𝜃subscript𝐳𝑡subscript𝜎𝑡subscript∇subscript𝐳𝑡𝑝subscript𝐳𝑡{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t})\\approx-\\sigma_{t}\\nabla_{\\mathbf{z}_{t}}\\log p(\\mathbf{z}_{t}), where p(𝐳t)𝑝subscript𝐳𝑡p(\\mathbf{z}_{t}) is the true density of 𝐳tsubscript𝐳𝑡\\mathbf{z}_{t} under 𝐱∼p(𝐱)similar-to𝐱𝑝𝐱\\mathbf{x}\\sim p(\\mathbf{x}) (22, 28, 48). We also train using the 𝐯𝐯\\mathbf{v}-prediction parameterization for certain models . ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_4",
"text": " We use a variety of diffusion model samplers in this work. One is the discrete time ancestral sampler with sampling variances derived from lower and upper bounds on reverse process entropy (46, 22, 37). To define this sampler, first note that the forward process can be described in reverse as q(𝐳s|𝐳t,𝐱)=𝒩(𝐳s;𝝁~s|t(𝐳t,𝐱),σ~s|t2𝐈)𝑞conditionalsubscript𝐳𝑠subscript𝐳𝑡𝐱𝒩subscript𝐳𝑠subscript~𝝁conditional𝑠𝑡subscript𝐳𝑡𝐱subscriptsuperscript~𝜎2conditional𝑠𝑡𝐈q(\\mathbf{z}_{s}|\\mathbf{z}_{t},\\mathbf{x})=\\mathcal{N}(\\mathbf{z}_{s};\\tilde{\\boldsymbol{\\mu}}_{s|t}(\\mathbf{z}_{t},\\mathbf{x}),\\tilde{\\sigma}^{2}_{s|t}\\mathbf{I}) (noting s<t𝑠𝑡s<t), where 𝝁~s|t(𝐳t,𝐱)=eλt−λs(αs/αt)𝐳t+(1−eλt−λs)αs𝐱andσ~s|t2=(1−eλt−λs)σs2.formulae-sequencesubscript~𝝁conditional𝑠𝑡subscript𝐳𝑡𝐱superscript𝑒subscript𝜆𝑡subscript𝜆𝑠subscript𝛼𝑠subscript𝛼𝑡subscript𝐳𝑡1superscript𝑒subscript𝜆𝑡subscript𝜆𝑠subscript𝛼𝑠𝐱andsubscriptsuperscript~𝜎2conditional𝑠𝑡1superscript𝑒subscript𝜆𝑡subscript𝜆𝑠superscriptsubscript𝜎𝑠2\\displaystyle\\tilde{\\boldsymbol{\\mu}}_{s|t}(\\mathbf{z}_{t},\\mathbf{x})=e^{\\lambda_{t}-\\lambda_{s}}(\\alpha_{s}/\\alpha_{t})\\mathbf{z}_{t}+(1-e^{\\lambda_{t}-\\lambda_{s}})\\alpha_{s}\\mathbf{x}\\quad\\text{and}\\quad\\tilde{\\sigma}^{2}_{s|t}=(1-e^{\\lambda_{t}-\\lambda_{s}})\\sigma_{s}^{2}. (3) Starting at 𝐳1∼𝒩(𝟎,𝐈)similar-tosubscript𝐳1𝒩0𝐈\\mathbf{z}_{1}\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I}), the ancestral sampler follows the rule 𝐳ssubscript𝐳𝑠\\displaystyle\\mathbf{z}_{s} =𝝁~s|t(𝐳t,𝐱^θ(𝐳t))+(σ~s|t2)1−γ(σt|s2)γϵabsentsubscript~𝝁conditional𝑠𝑡subscript𝐳𝑡subscript^𝐱𝜃subscript𝐳𝑡superscriptsubscriptsuperscript~𝜎2conditional𝑠𝑡1𝛾superscriptsubscriptsuperscript𝜎2conditional𝑡𝑠𝛾bold-italic-ϵ\\displaystyle=\\tilde{\\boldsymbol{\\mu}}_{s|t}(\\mathbf{z}_{t},\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t}))+\\sqrt{(\\tilde{\\sigma}^{2}_{s|t})^{1-\\gamma}(\\sigma^{2}_{t|s})^{\\gamma}}{\\boldsymbol{\\epsilon}} (4) where ϵbold-italic-ϵ{\\boldsymbol{\\epsilon}} is standard Gaussian noise, γ𝛾\\gamma is a hyperparameter that controls the stochasticity of the sampler , and s,t𝑠𝑡s,t follow a uniformly spaced sequence from 1 to 0. ",
"title": "Video Diffusion Models"
},
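To spell out the sampler update in Eqs. (3)–(4) above, here is a minimal NumPy sketch of a single ancestral step. The noise-schedule values (alpha, sigma, lambda at times s and t), the denoised estimate x_hat, and the default gamma are placeholders supplied by the caller; this is a sketch, not the authors' implementation.

```python
import numpy as np

def ancestral_step(z_t, x_hat, alpha_s, alpha_t, sigma_s, sigma_t,
                   lambda_s, lambda_t, gamma=0.3):
    """One ancestral-sampler update z_t -> z_s for s < t, following Eqs. (3)-(4).

    x_hat is the denoising model's estimate x_hat_theta(z_t); gamma in [0, 1]
    controls the stochasticity of the sampler.
    """
    r = np.exp(lambda_t - lambda_s)                       # e^{lambda_t - lambda_s}
    mu = r * (alpha_s / alpha_t) * z_t + (1.0 - r) * alpha_s * x_hat
    var_st = (1.0 - r) * sigma_s ** 2                     # tilde{sigma}^2_{s|t}
    var_ts = (1.0 - r) * sigma_t ** 2                     # sigma^2_{t|s}
    std = np.sqrt(var_st ** (1.0 - gamma) * var_ts ** gamma)
    return mu + std * np.random.randn(*z_t.shape)
```

Iterating this step over a uniformly spaced sequence of times from 1 to 0, starting from standard Gaussian noise, yields a sample.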
{
"id": "2204.03458_all_5",
"text": " Another sampler, which we found especially effective with our new method for conditional generation (Section 3.1), is the predictor-corrector sampler . Our version of this sampler alternates between the ancestral sampler step 4 and a Langevin correction step of the form 𝐳s←𝐳s−12δσsϵθ(𝐳s)+δσsϵ′←subscript𝐳𝑠subscript𝐳𝑠12𝛿subscript𝜎𝑠subscriptbold-italic-ϵ𝜃subscript𝐳𝑠𝛿subscript𝜎𝑠superscriptbold-italic-ϵ′\\displaystyle\\mathbf{z}_{s}\\leftarrow\\mathbf{z}_{s}-\\frac{1}{2}\\delta\\sigma_{s}{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{s})+\\sqrt{\\delta}\\sigma_{s}{\\boldsymbol{\\epsilon}}^{\\prime} (5) where δ𝛿\\delta is a step size which we fix to 0.10.10.1 here, and ϵ′superscriptbold-italic-ϵ′{\\boldsymbol{\\epsilon}}^{\\prime} is another independent sample of standard Gaussian noise. The purpose of the Langevin step is to help the marginal distribution of each 𝐳ssubscript𝐳𝑠\\mathbf{z}_{s} generated by the sampler to match the true marginal under the forward process starting at 𝐱∼p(𝐱)similar-to𝐱𝑝𝐱\\mathbf{x}\\sim p(\\mathbf{x}). ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_6",
"text": " In the conditional generation setting, the data 𝐱𝐱\\mathbf{x} is equipped with a conditioning signal 𝐜𝐜\\mathbf{c}, which may represent a class label, text caption, or other type of conditioning. To train a diffusion model to fit p(𝐱|𝐜)𝑝conditional𝐱𝐜p(\\mathbf{x}|\\mathbf{c}), the only modification that needs to be made is to provide 𝐜𝐜\\mathbf{c} to the model as 𝐱^θ(𝐳t,𝐜)subscript^𝐱𝜃subscript𝐳𝑡𝐜\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c}). Improvements to sample quality can be obtained in this setting by using classifier-free guidance . This method samples using adjusted model predictions ϵ~θsubscript~bold-italic-ϵ𝜃\\tilde{{\\boldsymbol{\\epsilon}}}_{\\theta}, constructed via ϵ~θ(𝐳t,𝐜)=(1+w)ϵθ(𝐳t,𝐜)−wϵθ(𝐳t),subscript~bold-italic-ϵ𝜃subscript𝐳𝑡𝐜1𝑤subscriptbold-italic-ϵ𝜃subscript𝐳𝑡𝐜𝑤subscriptbold-italic-ϵ𝜃subscript𝐳𝑡\\displaystyle\\tilde{{\\boldsymbol{\\epsilon}}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c})=(1+w){\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c})-w{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t}), (6) where w𝑤w is the guidance strength, ϵθ(𝐳t,𝐜)=1σt(𝐳t−𝐱^θ(𝐳t,𝐜))subscriptbold-italic-ϵ𝜃subscript𝐳𝑡𝐜1subscript𝜎𝑡subscript𝐳𝑡subscript^𝐱𝜃subscript𝐳𝑡𝐜{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c})=\\frac{1}{\\sigma_{t}}(\\mathbf{z}_{t}-\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t},\\mathbf{c})) is the regular conditional model prediction, and ϵθ(𝐳t)subscriptbold-italic-ϵ𝜃subscript𝐳𝑡{\\boldsymbol{\\epsilon}}_{\\theta}(\\mathbf{z}_{t}) is a prediction from an unconditional model jointly trained with the conditional model (if 𝐜𝐜\\mathbf{c} consists of embedding vectors, unconditional modeling can be represented as 𝐜=𝟎𝐜0\\mathbf{c}=\\mathbf{0}). For w>0𝑤0w>0 this adjustment has the effect of over-emphasizing the effect of conditioning on the signal 𝐜𝐜\\mathbf{c}, which tends to produce samples of lower diversity but higher quality compared to sampling from the regular conditional model . The method can be interpreted as a way to guide the samples towards areas where an implicit classifier p(𝐜|𝐳t)𝑝conditional𝐜subscript𝐳𝑡p(\\mathbf{c}|\\mathbf{z}_{t}) has high likelihood, and is an adaptation of the explicit classifier guidance method proposed by . ",
"title": "Video Diffusion Models"
},
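The classifier-free guidance adjustment in Eq. (6) above is a one-line combination of the conditional and unconditional predictions; a sketch follows, with eps_cond and eps_uncond assumed to come from the jointly trained model.

```python
def guided_eps(eps_cond, eps_uncond, w):
    """Classifier-free guidance (Eq. 6): eps_tilde = (1 + w) * eps(z_t, c) - w * eps(z_t)."""
    return (1.0 + w) * eps_cond - w * eps_uncond
```

Setting w = 0 recovers the regular conditional prediction; w > 0 over-emphasizes the conditioning signal, trading sample diversity for quality.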
{
"id": "2204.03458_all_7",
"text": " Our approach to video generation using diffusion models is to use the standard diffusion model formalism described in Section 2 with a neural network architecture suitable for video data. Each of our models is trained to jointly model a fixed number of frames at a fixed spatial resolution. To extend sampling to longer sequences of frames or higher spatial resolutions, we will repurpose our models with a conditioning technique described later in Section 3.1. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_8",
"text": " In prior work on image modeling, the standard architecture for 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta} in an image diffusion model is a U-Net (38, 44), which is a neural network architecture constructed as a spatial downsampling pass followed by a spatial upsampling pass with skip connections to the downsampling pass activations. The network is built from layers of 2D convolutional residual blocks, for example in the style of the Wide ResNet , and each such convolutional block is followed by a spatial attention block (55, 58, 11). Conditioning information, such as 𝐜𝐜\\mathbf{c} and λtsubscript𝜆𝑡\\lambda_{t}, is provided to the network in the form of an embedding vector added into each residual block (we find it helpful for our models to process these embedding vectors using several MLP layers before adding). ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_9",
"text": " We propose to extend this image diffusion model architecture to video data, given by a block of a fixed number of frames, using a particular type of 3D U-Net that is factorized over space and time. First, we modify the image model architecture by changing each 2D convolution into a space-only 3D convolution, for instance, we change each 3x3 convolution into a 1x3x3 convolution (the first axis indexes video frames, the second and third index the spatial height and width). The attention in each spatial attention block remains as attention over space; i.e., the first axis is treated as a batch axis. Second, after each spatial attention block, we insert a temporal attention block that performs attention over the first axis and treats the spatial axes as batch axes. We use relative position embeddings in each temporal attention block so that the network can distinguish ordering of frames in a way that does not require an absolute notion of video time. We visualize the model architecture in Fig. 1. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_10",
"text": " The use of factorized space-time attention is known to be a good choice in video transformers for its computational efficiency (2, 5, 21). An advantage of our factorized space-time architecture, which is unique to our video generation setting, is that it is particularly straightforward to mask the model to run on independent images rather than a video, simply by removing the attention operation inside each time attention block and fixing the attention matrix to exactly match each key and query vector at each video timestep. The utility of doing so is that it allows us to jointly train the model on both video and image generation. We find in our experiments that this joint training is important for sample quality (Section 4). ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_11",
"text": " The videos we consider modeling typically consist of hundreds to thousands of frames, at a frame rate of at least 24 frames per second. To manage the computational requirements of training our models, we only train on a small subset of say 16 frames at a time. However, at test time we can generate longer videos by extending our samples. For example, we could first generate a video 𝐱a∼pθ(𝐱)similar-tosuperscript𝐱asubscript𝑝𝜃𝐱\\mathbf{x}^{\\text{a}}\\sim p_{\\theta}(\\mathbf{x}) consisting of 16 frames, and then extend it with a second sample 𝐱b∼pθ(𝐱b|𝐱a)similar-tosuperscript𝐱bsubscript𝑝𝜃conditionalsuperscript𝐱bsuperscript𝐱a\\mathbf{x}^{\\text{b}}\\sim p_{\\theta}(\\mathbf{x}^{\\text{b}}|\\mathbf{x}^{\\text{a}}). If 𝐱bsuperscript𝐱b\\mathbf{x}^{\\text{b}} consists of frames following 𝐱asuperscript𝐱a\\mathbf{x}^{\\text{a}}, this allows us to autoregressively extend our sampled videos to arbitrary lengths, which we demonstrate in Section 4.3.3. Alternatively, we could choose 𝐱asuperscript𝐱a\\mathbf{x}^{\\text{a}} to represent a video of lower frame rate, and then define 𝐱bsuperscript𝐱b\\mathbf{x}^{\\text{b}} to be those frames in between the frames of 𝐱asuperscript𝐱a\\mathbf{x}^{\\text{a}}. This allows one to then to upsample a video temporally, similar to how generate high resolution images through spatial upsampling. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_12",
"text": " Both approaches require one to sample from a conditional model, pθ(𝐱b|𝐱a)subscript𝑝𝜃conditionalsuperscript𝐱bsuperscript𝐱ap_{\\theta}(\\mathbf{x}^{\\text{b}}|\\mathbf{x}^{\\text{a}}). This conditional model could be trained explicitly, but it can also be derived approximately from our unconditional model pθ(𝐱)subscript𝑝𝜃𝐱p_{\\theta}(\\mathbf{x}) by imputation, which has the advantage of not requiring a separately trained model. For example, present a general method for conditional sampling from a jointly trained diffusion model pθ(𝐱=(𝐱a,𝐱b))subscript𝑝𝜃𝐱superscript𝐱asuperscript𝐱bp_{\\theta}(\\mathbf{x}=(\\mathbf{x}^{\\text{a}},\\mathbf{x}^{\\text{b}})): In their approach to sampling from pθ(𝐱b|𝐱a)subscript𝑝𝜃conditionalsuperscript𝐱bsuperscript𝐱ap_{\\theta}(\\mathbf{x}^{\\text{b}}|\\mathbf{x}^{\\text{a}}), the sampling procedure for updating 𝐳sbsubscriptsuperscript𝐳b𝑠\\mathbf{z}^{\\text{b}}_{s} is unchanged from the standard method for sampling from pθ(𝐳s|𝐳t)subscript𝑝𝜃conditionalsubscript𝐳𝑠subscript𝐳𝑡p_{\\theta}(\\mathbf{z}_{s}|\\mathbf{z}_{t}), with 𝐳s=(𝐳sa,𝐳sb)subscript𝐳𝑠subscriptsuperscript𝐳a𝑠subscriptsuperscript𝐳b𝑠\\mathbf{z}_{s}=(\\mathbf{z}^{\\text{a}}_{s},\\mathbf{z}^{\\text{b}}_{s}), but the samples for 𝐳sasubscriptsuperscript𝐳a𝑠\\mathbf{z}^{\\text{a}}_{s} are replaced by exact samples from the forward process, q(𝐳sa|𝐱a)𝑞conditionalsubscriptsuperscript𝐳a𝑠superscript𝐱aq(\\mathbf{z}^{\\text{a}}_{s}|\\mathbf{x}^{\\text{a}}), at each iteration. The samples 𝐳sasubscriptsuperscript𝐳a𝑠\\mathbf{z}^{\\text{a}}_{s} then have the correct marginal distribution by construction, and the samples 𝐳sbsubscriptsuperscript𝐳b𝑠\\mathbf{z}^{\\text{b}}_{s} will conform with 𝐳sasubscriptsuperscript𝐳a𝑠\\mathbf{z}^{\\text{a}}_{s} through their effect on the denoising model 𝐱^θ((𝐳ta,𝐳tb))subscript^𝐱𝜃subscriptsuperscript𝐳a𝑡subscriptsuperscript𝐳b𝑡\\hat{\\mathbf{x}}_{\\theta}((\\mathbf{z}^{\\text{a}}_{t},\\mathbf{z}^{\\text{b}}_{t})). Similarly, we could sample 𝐳sasubscriptsuperscript𝐳a𝑠\\mathbf{z}^{\\text{a}}_{s} from q(𝐳sa|𝐱a,𝐳ta)𝑞conditionalsubscriptsuperscript𝐳a𝑠superscript𝐱asubscriptsuperscript𝐳a𝑡q(\\mathbf{z}^{\\text{a}}_{s}|\\mathbf{x}^{\\text{a}},\\mathbf{z}^{\\text{a}}_{t}), which follows the correct conditional distribution in addition to the correct marginal. We will refer to both of these approaches as the replacement method for conditional sampling from diffusion models. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_13",
"text": " When we tried the replacement method to conditional sampling, we found it to not work well for our video models: Although samples 𝐱bsuperscript𝐱b\\mathbf{x}^{\\text{b}} looked good in isolation, they were often not coherent with 𝐱asuperscript𝐱a\\mathbf{x}^{\\text{a}}. This is caused by a fundamental problem with this replacement sampling method. That is, the latents 𝐳sbsubscriptsuperscript𝐳b𝑠\\mathbf{z}^{\\text{b}}_{s} are updated in the direction provided by 𝐱^θb(𝐳t)≈𝔼q(𝐱b|𝐳t)subscriptsuperscript^𝐱b𝜃subscript𝐳𝑡subscript𝔼𝑞delimited-()conditionalsuperscript𝐱𝑏subscript𝐳𝑡\\hat{\\mathbf{x}}^{\\text{b}}_{\\theta}(\\mathbf{z}_{t})\\approx\\mathbb{E}_{q}(\\mathbf{x}^{b}|\\mathbf{z}_{t}), while what is needed instead is 𝔼q(𝐱b|𝐳t,𝐱a)subscript𝔼𝑞delimited-()conditionalsuperscript𝐱𝑏subscript𝐳𝑡superscript𝐱𝑎\\mathbb{E}_{q}(\\mathbf{x}^{b}|\\mathbf{z}_{t},\\mathbf{x}^{a}). Writing this in terms of the score of the data distribution, we get 𝔼q(𝐱b|𝐳t,𝐱a)=𝔼q(𝐱b|𝐳t)+(σt2/αt)∇𝐳tblogq(𝐱a|𝐳t)subscript𝔼𝑞delimited-()conditionalsuperscript𝐱𝑏subscript𝐳𝑡superscript𝐱𝑎subscript𝔼𝑞delimited-()conditionalsuperscript𝐱𝑏subscript𝐳𝑡subscriptsuperscript𝜎2𝑡subscript𝛼𝑡subscript∇subscriptsuperscript𝐳𝑏𝑡𝑞conditionalsuperscript𝐱𝑎subscript𝐳𝑡\\mathbb{E}_{q}(\\mathbf{x}^{b}|\\mathbf{z}_{t},\\mathbf{x}^{a})=\\mathbb{E}_{q}(\\mathbf{x}^{b}|\\mathbf{z}_{t})+(\\sigma^{2}_{t}/\\alpha_{t})\\nabla_{\\mathbf{z}^{b}_{t}}\\log q(\\mathbf{x}^{a}|\\mathbf{z}_{t}), where the second term is missing in the replacement method. Assuming a perfect denoising model, plugging in this missing term would make conditional sampling exact. Since q(𝐱a|𝐳t)𝑞conditionalsuperscript𝐱𝑎subscript𝐳𝑡q(\\mathbf{x}^{a}|\\mathbf{z}_{t}) is not available in closed form, however, we instead propose to approximate it using a Gaussian of the form q(𝐱a|𝐳t)≈𝒩(𝐱^θa(𝐳t),(σt2/αt2)I)𝑞conditionalsuperscript𝐱𝑎subscript𝐳𝑡𝒩subscriptsuperscript^𝐱a𝜃subscript𝐳𝑡subscriptsuperscript𝜎2𝑡subscriptsuperscript𝛼2𝑡Iq(\\mathbf{x}^{a}|\\mathbf{z}_{t})\\approx\\mathcal{N}(\\hat{\\mathbf{x}}^{\\text{a}}_{\\theta}(\\mathbf{z}_{t}),(\\sigma^{2}_{t}/\\alpha^{2}_{t})\\text{I}), where 𝐱^θa(𝐳t)subscriptsuperscript^𝐱a𝜃subscript𝐳𝑡\\hat{\\mathbf{x}}^{\\text{a}}_{\\theta}(\\mathbf{z}_{t}) is a reconstruction of the conditioning data 𝐱asuperscript𝐱a\\mathbf{x}^{\\text{a}} provided by our denoising model. Assuming a perfect model, this approximation becomes exact as t→0→𝑡0t\\rightarrow 0, and empirically we find it to be good for larger t𝑡t also. Plugging in the approximation, and adding a weighting factor wrsubscript𝑤𝑟w_{r}, our proposed method to conditional sampling is a variant of the replacement method with an adjusted denoising model, 𝐱~θbsubscriptsuperscript~𝐱𝑏𝜃\\tilde{\\mathbf{x}}^{b}_{\\theta}, defined by 𝐱~θb(𝐳t)=𝐱^θb(𝐳t)−wrαt2∇𝐳tb∥𝐱a−𝐱^θa(𝐳t)∥22.\\displaystyle\\tilde{\\mathbf{x}}^{b}_{\\theta}(\\mathbf{z}_{t})=\\hat{\\mathbf{x}}^{b}_{\\theta}(\\mathbf{z}_{t})-\\frac{w_{r}\\alpha_{t}}{2}\\nabla_{\\mathbf{z}^{b}_{t}}\\lVert\\mathbf{x}^{a}-\\hat{\\mathbf{x}}^{a}_{\\theta}(\\mathbf{z}_{t})\\rVert_{2}^{2}~{}. (7) The additional gradient term in this expression can be interpreted as a form of guidance (16, 20) based on the model’s reconstruction of the conditioning data, and we therefore refer to this method as reconstruction-guided sampling, or simply reconstruction guidance. Like with other forms of guidance, we find that choosing a larger weighting factor, wr>1subscript𝑤𝑟1w_{r}>1, tends to improve sample quality. 
We empirically investigate reconstruction guidance in Section 4.3.3, where we find it to work surprisingly well, especially when combined with predictor-corrector samplers using Langevin diffusion . ",
"title": "Video Diffusion Models"
},
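Eq. (7) above adjusts the model's prediction for the generated frames with the gradient of a reconstruction error on the conditioning frames. The sketch below applies that adjustment for a single step using a toy linear model, so the gradient can be written analytically instead of via backpropagation; `w_r`, `alpha_t`, and the dimensions are illustrative values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

d_a, d_b = 6, 10                    # sizes of the conditioning block x^a and the generated block x^b
W = rng.normal(size=(d_a + d_b, d_a + d_b)) / np.sqrt(d_a + d_b)

def x_hat(z_t):
    """Toy linear 'denoising model' standing in for the neural network x_hat_theta(z_t)."""
    return W @ z_t

def reconstruction_guided_prediction(z_t, x_a, w_r=2.0, alpha_t=0.7):
    """Adjusted x^b prediction from Eq. (7):
       x_tilde_b = x_hat_b - (w_r * alpha_t / 2) * grad_{z_b} ||x_a - x_hat_a(z_t)||^2."""
    pred = x_hat(z_t)
    x_hat_a, x_hat_b = pred[:d_a], pred[d_a:]
    residual = x_a - x_hat_a                     # reconstruction error on the conditioning frames
    # For the linear toy model, grad_{z_b} ||x_a - W_a z||^2 = -2 * W_a[:, b].T @ residual.
    grad_zb = -2.0 * W[:d_a, d_a:].T @ residual
    return x_hat_b - 0.5 * w_r * alpha_t * grad_zb

z_t = rng.normal(size=d_a + d_b)    # current noisy latent (z_t^a, z_t^b)
x_a = rng.normal(size=d_a)          # conditioning data (e.g. observed frames)
print(reconstruction_guided_prediction(z_t, x_a).shape)   # (10,)
```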
{
"id": "2204.03458_all_14",
"text": " Reconstruction guidance also extends to the case of spatial interpolation (or super-resolution), in which the mean squared error loss is imposed on a downsampled version of the model prediction, and backpropagation is performed through this downsampling. In this setting, we have low resolution ground truth videos 𝐱asuperscript𝐱𝑎\\mathbf{x}^{a} (e.g. at the 64x64 spatial resolution), which may be generated from a low resolution model, and we wish to upsample them into high resolution videos (e.g. at the 128x128 spatial resolution) using an unconditional high resolution diffusion model 𝐱^θsubscript^𝐱𝜃\\hat{\\mathbf{x}}_{\\theta}. To accomplish this, we adjust the high resolution model as follows: 𝐱~θ(𝐳t)=𝐱^θ(𝐳t)−wrαt2∇𝐳t∥𝐱a−𝐱^θa(𝐳t)∥22\\displaystyle\\tilde{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t})=\\hat{\\mathbf{x}}_{\\theta}(\\mathbf{z}_{t})-\\frac{w_{r}\\alpha_{t}}{2}\\nabla_{\\mathbf{z}_{t}}\\lVert\\mathbf{x}^{a}-\\hat{\\mathbf{x}}^{a}_{\\theta}(\\mathbf{z}_{t})\\rVert_{2}^{2} (8) where 𝐱^θa(𝐳t)subscriptsuperscript^𝐱𝑎𝜃subscript𝐳𝑡\\hat{\\mathbf{x}}^{a}_{\\theta}(\\mathbf{z}_{t}) is our model’s reconstruction of the low-resolution video from 𝐳tsubscript𝐳𝑡\\mathbf{z}_{t}, which is obtained by downsampling the high resolution output of the model using a differentiable downsampling algorithm such as bilinear interpolation. Note that it is also possible to simultaneously condition on low resolution videos while autoregressively extending samples at the high resolution using the same reconstruction guidance method. In Fig. 2, we show samples of this approach for extending 16x64x64 low resolution samples at frameskip 4 to 64x128x128 samples at frameskip 1 using a 9x128x128 diffusion model. ",
"title": "Video Diffusion Models"
},
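For the spatial super-resolution variant in Eq. (8), the only change is that the reconstruction error is measured after a differentiable downsampling of the model output. The sketch below uses average pooling as the downsampler (the passage mentions bilinear interpolation; average pooling is a simpler linear stand-in) and the same kind of toy linear model as before, so the gradient is analytic.

```python
import numpy as np

rng = np.random.default_rng(1)

hi, lo, factor = 16, 8, 2           # 1-D "video" of 16 values downsampled to 8 (toy stand-in for 128x128 -> 64x64)

# Differentiable downsampling as a linear operator: average pooling with window = factor.
D = np.zeros((lo, hi))
for i in range(lo):
    D[i, i * factor:(i + 1) * factor] = 1.0 / factor

W = rng.normal(size=(hi, hi)) / np.sqrt(hi)

def x_hat(z_t):
    """Toy linear 'high-resolution denoising model'."""
    return W @ z_t

def sr_guided_prediction(z_t, x_lo, w_r=2.0, alpha_t=0.7):
    """Adjusted high-resolution prediction from Eq. (8):
       x_tilde = x_hat - (w_r * alpha_t / 2) * grad_z ||x_lo - downsample(x_hat(z))||^2."""
    pred = x_hat(z_t)
    residual = x_lo - D @ pred                   # error against the low-resolution conditioning video
    grad_z = -2.0 * W.T @ (D.T @ residual)       # analytic gradient for the linear toy model
    return pred - 0.5 * w_r * alpha_t * grad_z

z_t = rng.normal(size=hi)
x_lo = rng.normal(size=lo)
print(sr_guided_prediction(z_t, x_lo).shape)     # (16,)
```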
{
"id": "2204.03458_all_15",
"text": " We report our results on video diffusion models for unconditional video generation (Section 4.1), conditional video generation (video prediction) (Section 4.2), and text-conditioned video generation (Section 4.3). We evaluate our models using standard metrics such as FVD , FID , and IS ; details on evaluation are provided below alongside each benchmark. Samples and additional results are provided at https://video-diffusion.github.io/. Architecture hyperparameters, training details, and compute resources are listed in Appendix A. ",
"title": "Video Diffusion Models"
},
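For reference, both FVD and FID reduce to a Fréchet distance between Gaussians fitted to feature activations (I3D features for FVD, Inception features for FID). Below is a small, generic implementation of that distance; it is not the paper's evaluation code, and the feature extractor is assumed to exist separately.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two sets of feature activations
    (FID uses Inception features per frame; FVD uses I3D features per video clip)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Matrix square root of the product of covariances; tiny imaginary parts from numerics are dropped.
    covmean = linalg.sqrtm(cov_r @ cov_g).real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2.0 * np.trace(covmean))

# Toy usage with random "features"; in practice these come from a pretrained I3D/Inception network.
rng = np.random.default_rng(0)
real = rng.normal(size=(256, 32))
fake = rng.normal(loc=0.5, size=(256, 32))
print(frechet_distance(real, fake))
```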
{
"id": "2204.03458_all_16",
"text": " To demonstrate our approach on unconditional generation, we use a popular benchmark of Soomro et al. for unconditional modeling of video. The benchmark consists of short clips of people performing one of 101 activities, and was originally collected for the purpose of training action recognition models. We model short segments of 16 frames from this dataset, downsampled to a spatial resolution of 64x64. In Table 1 we present perceptual quality scores for videos generated by our model, and we compare against methods from the literature, finding that our method strongly improves upon the previous state-of-the-art. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_17",
"text": " We use the data loader provided by TensorFlow Datasets without further processing, and we train on all 13,320 videos. Similar to previous methods, we use the C3D network 111We use the C3D model as implemented at github.com/pfnet-research/tgan2 . for calculating FID and IS, using 10,000 samples generated from our model. C3D internally resizes input data to the 112x112 spatial resolution, so perceptual scores are approximately comparable even when the data is sampled at a different resolution originally. As discussed by , methods in the literature are unfortunately not always consistent in the data preprocessing that is used, which may lead to small differences in reported scores between papers. The Inception Score we calculate for real data (≈60absent60\\approx 60) is consistent with that reported by , who also report a higher real data Inception score of ≈90absent90\\approx 90 for data sampled at the 128x128 resolution, which indicates that our 64x64 model might be at a disadvantage compared to works that generate at a higher resolution. Nevertheless, our model obtains the best perceptual quality metrics that we could find in the literature. ",
"title": "Video Diffusion Models"
},
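The Inception Score quoted in this entry is the usual exponentiated KL divergence between per-sample class posteriors and their marginal, averaged over splits. A generic sketch follows; in this setting the probabilities would come from the C3D classifier mentioned above, and the split count is an assumption.

```python
import numpy as np

def inception_score(probs, n_splits=10, eps=1e-12):
    """Standard Inception Score: exp of the mean KL(p(y|x) || p(y)), averaged over splits.
    `probs` is an (N, num_classes) array of classifier softmax outputs (here, C3D class posteriors)."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)              # marginal class distribution p(y)
        kl = np.sum(chunk * (np.log(chunk + eps) - np.log(p_y + eps)), axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))

# Toy usage with random softmax outputs; real scores use classifier predictions on generated videos.
rng = np.random.default_rng(0)
logits = rng.normal(size=(10000, 101))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```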
{
"id": "2204.03458_all_18",
"text": " A common benchmark task for evaluating generative models of video is video prediction, where the model is given the first frame(s) of a video and is asked to generate the remainder. Models that do well on this conditional generation task are usually trained explicitly for this conditional setting, for example by being autoregressive across frames. Although our models are instead only trained unconditionally, we can adapt them to the video prediction setting by using the guidance method proposed in section 3.1. Here we evaluate this method on two popular video prediction benchmarks, obtaining state-of-the-art results. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_19",
"text": " We evaluate video prediction performance on BAIR Robot Pushing , a standard benchmark in the video literature consisting of approximately 44000 videos of robot pushing motions at the 64x64 spatial resolution. Methods for this benchmark are conditioned on 1 frame and generate the next 15. Results are listed in Table 3. Following the evaluation protocol of and others, we calculate FVD using the I3D network by comparing 100×256100256100\\times 256 model samples against the 256256256 examples in the evaluation set. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_20",
"text": " We additionally evaluate video prediction performance on the Kinetics-600 benchmark (27, 9). Kinetics-600 contains approximately 400 thousand training videos depicting 600 different activities. We train unconditional models on this dataset at the 64×64646464\\times 64 resolution and evaluate on 50 thousand randomly sampled videos from the test set, where we condition on a randomly sampled subsequence of 5 frames and generate the next 11 frames. Like previous works, we calculate FVD and Inception Score using the I3D network . See Table 3 for results. In our reported results we sample test videos without replacement, and we use the same randomly selected subsequences for generating model samples and for defining the ground truth, since this results in the lowest bias and variance in the reported FVD metric. However, from personal communication we learned that (33, 14) instead sampled with replacement, and used a different random seed when sampling the ground truth data. We find that this way of evaluating raises the FVD obtained by our model slightly, from 16.216.216.2 to 16.916.916.9. Inception Score is unaffected. ",
"title": "Video Diffusion Models"
},
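The evaluation detail discussed here, sampling test videos with or without replacement and reusing the subsequence indices for ground truth and conditioning, is easy to pin down in code. The sketch below shows one plausible reading of the without-replacement protocol; the counts are toy numbers, not the exact Kinetics-600 statistics.

```python
import numpy as np

def pick_eval_clips(n_test_videos, n_eval, frames_per_video, clip_len, seed=0):
    """Evaluation protocol used here: sample test videos *without* replacement and reuse the
    same random subsequence both for the ground-truth clip and for conditioning the model."""
    rng = np.random.default_rng(seed)
    video_ids = rng.choice(n_test_videos, size=n_eval, replace=False)
    starts = rng.integers(0, frames_per_video - clip_len + 1, size=n_eval)
    return list(zip(video_ids.tolist(), starts.tolist()))

# Toy numbers: the protocol choice (with vs. without replacement, shared vs. separate seeds)
# is what shifts the reported FVD slightly, as noted above.
print(pick_eval_clips(n_test_videos=60000, n_eval=5, frames_per_video=250, clip_len=16))
```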
{
"id": "2204.03458_all_21",
"text": " The remaining experiments reported are on text-conditioned video generation. In this text-conditioned video generation setting, we employ a dataset of 10 million captioned videos, and we condition the diffusion model on captions in the form of BERT-large embeddings processed using attention pooling. We consider two model sizes: a small model for the joint training ablation, and a large model for generating the remaining results (both architectures are described in detail in Appendix A), and we explore the effects of joint video-image training, classifier-free guidance, and our newly proposed reconstruction guidance method for autoregressive extension and simultaneous spatial and temporal super-resolution. We report the following metrics in this section on 4096 samples: the video metric FVD, and the Inception-based image metrics FID and IS measured by averaging activations across frames (FID/IS-avg) and by measuring the first frame only (FID/IS-first). For FID and FVD, we report two numbers which are measured against the training and validation sets, respectively. For IS, we report two numbers which are averaged scores across 1 split and 10 splits of samples, respectively. ",
"title": "Video Diffusion Models"
},
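Attention pooling of the BERT-large token embeddings, as mentioned in this entry, is commonly implemented as a single learned query attending over the token sequence. The paper does not spell out the exact variant, so the sketch below should be read as one plausible interpretation with made-up dimensions and randomly initialized weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(token_embeddings, query, w_k, w_v):
    """Pool a sequence of token embeddings (e.g. BERT-large outputs, shape [seq, d_model])
    into a single conditioning vector using one learned query."""
    keys = token_embeddings @ w_k                               # [seq, d_k]
    values = token_embeddings @ w_v                             # [seq, d_v]
    weights = softmax(keys @ query / np.sqrt(query.shape[0]))   # attention weights over tokens
    return weights @ values                                     # [d_v] pooled caption embedding

d_model, d_k, d_v, seq = 1024, 64, 512, 20      # made-up sizes; BERT-large hidden size is 1024
tokens = rng.normal(size=(seq, d_model))
query = rng.normal(size=d_k)
w_k = rng.normal(size=(d_model, d_k)) / np.sqrt(d_model)
w_v = rng.normal(size=(d_model, d_v)) / np.sqrt(d_model)
print(attention_pool(tokens, query, w_k, w_v).shape)   # (512,)
```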
{
"id": "2204.03458_all_22",
"text": " As described in Section 3, one of the main advantages of our video architecture is that it allows us to easily train the model jointly on video and image generative modeling objectives. To implement this joint training, we concatenate random independent image frames to the end of each video sampled from the dataset, and we mask the attention in the temporal attention blocks to prevent mixing information across video frames and each individual image frame. We choose these random independent images from random videos within the same dataset; in future work we plan to explore the effect of choosing images from other larger image-only datasets. ",
"title": "Video Diffusion Models"
},
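The masking scheme described here can be expressed as a block-diagonal mask over the time axis: the video frames attend to one another, while each appended independent image frame attends only to itself. The sketch below builds such a mask and applies it in a plain dot-product attention; it is illustrative, not the paper's architecture code.

```python
import numpy as np

def joint_training_attention_mask(n_video_frames, n_image_frames):
    """Temporal attention mask for joint video-image training: video frames attend to one
    another, while each appended independent image frame is isolated (attends only to itself)."""
    n = n_video_frames + n_image_frames
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_video_frames, :n_video_frames] = True   # video block: full temporal attention
    for i in range(n_video_frames, n):
        mask[i, i] = True                           # each independent image frame sees only itself
    return mask

def masked_temporal_attention(q, k, v, mask):
    """Plain dot-product attention over the time axis with disallowed positions set to -inf."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n_vid, n_img, d = 16, 8, 32                         # e.g. 16 video frames plus 8 independent images
x = rng.normal(size=(n_vid + n_img, d))
mask = joint_training_attention_mask(n_vid, n_img)
out = masked_temporal_attention(x, x, x, mask)
print(out.shape)                                    # (24, 32)
```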
{
"id": "2204.03458_all_23",
"text": " Table 4 reports results for an experiment on text-conditioned 16x64x64 videos, where we consider training on an additional 0, 4, or 8 independent image frames per video. One can see clear improvements in video and image sample quality metrics as more independent image frames are added. Adding independent image frames has the effect of reducing variance of the gradient at the expense of some bias for the video modeling objective, and thus it can be seen as a memory optimization to fit more independent examples in a batch. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_24",
"text": " Table 5 reports results that verify the effectiveness of classifier-free guidance on text-to-video generation. As expected, there is clear improvement in the Inception Score-like metrics with higher guidance weight, while the FID-like metrics improve and then degrade with increasing guidance weight. Similar findings have been reported on text-to-image generation . ",
"title": "Video Diffusion Models"
},
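Classifier-free guidance itself is a one-line adjustment of the model output, pushing the conditional prediction away from the unconditional one by a weight w. The snippet below shows the standard form of that adjustment; whether it is applied to x-predictions or noise predictions, and the specific weights used, are assumptions here rather than values taken from the paper's tables.

```python
import numpy as np

def classifier_free_guidance(pred_cond, pred_uncond, w):
    """Classifier-free guidance on the model prediction:
    w = 0 recovers the conditional model; larger w strengthens the conditioning signal."""
    return (1.0 + w) * pred_cond - w * pred_uncond

# Toy usage: two model predictions for the same latent, with and without the text conditioning.
rng = np.random.default_rng(0)
pred_cond = rng.normal(size=(16, 64, 64, 3))     # prediction given the caption embedding
pred_uncond = rng.normal(size=(16, 64, 64, 3))   # prediction with the conditioning dropped
for w in (0.0, 1.0, 2.0, 5.0):
    guided = classifier_free_guidance(pred_cond, pred_uncond, w)
    print(w, float(np.abs(guided - pred_cond).mean()))   # deviation grows with the guidance weight
```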
{
"id": "2204.03458_all_25",
"text": " Figure 3 shows the effect of classifier-free guidance on a text-conditioned video model. Similar to what was observed in other work that used classifier-free guidance on text-conditioned image generation and class-conditioned image generation (20, 16), adding guidance increases the sample fidelity of each individual image and emphases the effect of the conditioning signal. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_26",
"text": " In Section 3.1 we proposed the reconstruction guidance method for conditional sampling from diffusion models, an improvement over the replacement method of . In Table 6 we present results on generating longer videos using both techniques, and find that our proposed method indeed improves over the replacement method in terms of perceptual quality scores. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_27",
"text": " Figure 4 shows the samples of our reconstruction guidance method for conditional sampling compared to the replacement method (Section 3.1) for the purposes of generating long samples in a block-autoregressive manner (Section 4.3.3). The samples from the replacement method clearly show a lack of temporal coherence, since frames from different blocks throughout the generated videos appear to be uncorrelated samples (conditioned on 𝐜𝐜\\mathbf{c}). The samples from the reconstruction guidance method, by contrast, are clearly temporally coherent over the course of the entire autoregressive generation process. Figure 2 additionally shows samples of using the reconstruction guidance method to simultaneously condition on low frequency, low resolution videos while autoregressively extending temporally at a high resolution. ",
"title": "Video Diffusion Models"
},
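Block-autoregressive generation, as compared in this entry, conditions each new block of frames on the last few frames already generated. The sketch below shows only the outer loop; the conditional sampler is a trivial placeholder where the paper would use the reconstruction-guided diffusion sampler, and the block sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_block_given(cond_frames, n_new, frame_shape):
    """Placeholder for conditional sampling of `n_new` frames given `cond_frames`
    (in the paper this would be the reconstruction-guided diffusion sampler)."""
    drift = cond_frames[-1]                                   # toy rule: continue from the last observed frame
    noise = 0.1 * rng.normal(size=(n_new,) + frame_shape)
    return drift[None] + np.cumsum(noise, axis=0)

def block_autoregressive_video(first_block, n_blocks, n_cond=4, n_new=12):
    """Extend a video block by block, conditioning each new block on the last `n_cond` frames."""
    video = list(first_block)
    for _ in range(n_blocks):
        cond = np.stack(video[-n_cond:])
        new_frames = sample_block_given(cond, n_new, first_block.shape[1:])
        video.extend(new_frames)
    return np.stack(video)

first = rng.normal(size=(16, 64, 64, 3))                      # an initial 16-frame sample
long_video = block_autoregressive_video(first, n_blocks=3)    # 16 + 3*12 = 52 frames
print(long_video.shape)
```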
{
"id": "2204.03458_all_28",
"text": " Prior work on video generation has usually employed other types of generative models, notably, autoregressive models, VAEs, GANs, and normalizing flows (e.g. 3, 4, 32, 30, 14, 59, 62, 57). Related work on model classes similar to diffusion models includes (25, 24). Concurrent work proposes a diffusion-based approach to video generation that uses an image diffusion model to predict each individual frame within a RNN temporal autoregressive model. Our video diffusion model, by contrast, jointly models entire videos (blocks of frames) using a 3D video architecture with interleaved spatial and temporal attention, and we extend to long sequence lengths by filling in frames or autoregressive temporal extension. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_29",
"text": " We have introduced diffusion models for video modeling, thus bringing recent advances in generative modeling using diffusion models to the video domain. We have shown that with straightforward extensions of conventional U-Net architectures for 2D image modeling to 3D space-time, with factorized space-time attention blocks, one can learn effective generative models for video data using the standard formulation of the diffusion model. This includes unconditional models, text-conditioned models, and video prediction models. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_30",
"text": " We have additionally demonstrated the benefits of joint image-video training and classifier-free guidance for video diffusion models on both video and image sample quality metrics, and we also introduced a new reconstruction-guided conditional sampling method that outperforms existing replacement or imputation methods for conditional sampling from unconditionally trained models. Our reconstruction guidance method can generate long sequences using either frame interpolation (or temporal super-resolution) or extrapolation in an auto-regressive fashion, and also can perform spatial super-resolution. We look forward to investigating this method in a wider variety of conditioning settings. ",
"title": "Video Diffusion Models"
},
{
"id": "2204.03458_all_31",
"text": " Our goal with this work is to advance research on methods in generative modeling, and our methods have the potential to positively impact creative downstream applications. As with prior work in generative modeling, however, our methods have the potential for causing harmful impact and could enhance malicious or unethical uses of generative models, such as fake content generation, harassment, and misinformation spread, and thus we have decided not to release our models. Like all generative models, our models reflect the biases of their training datasets and thus may require curation to ensure fair results from sampling. In particular, our text-to-video models inherit the challenges faced by prior work on text-to-image models, and our future work will involve auditing for forms of social bias, similar to (6, 7, 50, 12) for image-to-text and image labeling models. We see our work as only a starting point for further investigation on video diffusion models and investigation into their societal implications, and we will aim to explore benchmark evaluations for social and cultural bias in the video generation setting and make the necessary research advances to address them. ",
"title": "Video Diffusion Models"
}
] |