
Study Notes on Classic Convolutional Neural Network Papers


1 Optimization Algorithms and Regularization

1.1 Optimization Algorithms

  • GD (SGD, MBGD, BGD)

  • SGDM (Momentum)

    $V_{d\omega} = \beta V_{d\omega} + (1-\beta)\, d\omega$

    $V_{db} = \beta V_{db} + (1-\beta)\, db$

    $\omega := \omega - \alpha V_{d\omega}$

    $b := b - \alpha V_{db}$

    ($\beta = 0.9$)

  • RMSProp

    $S_{d\omega} = \beta S_{d\omega} + (1-\beta)\, (d\omega)^2$

    $S_{db} = \beta S_{db} + (1-\beta)\, (db)^2$

    $\omega := \omega - \alpha\, \dfrac{d\omega}{\sqrt{S_{d\omega}} + \epsilon}$

    $b := b - \alpha\, \dfrac{db}{\sqrt{S_{db}} + \epsilon}$

    ($\epsilon = 10^{-8}$)

  • Adam

    $V_{d\omega} = \beta_1 V_{d\omega} + (1-\beta_1)\, d\omega$

    $V_{db} = \beta_1 V_{db} + (1-\beta_1)\, db$

    $S_{d\omega} = \beta_2 S_{d\omega} + (1-\beta_2)\, (d\omega)^2$

    $S_{db} = \beta_2 S_{db} + (1-\beta_2)\, (db)^2$

    $V_{d\omega}^{\text{corrected}} = \dfrac{V_{d\omega}}{1-\beta_1^{t}}$, $\quad V_{db}^{\text{corrected}} = \dfrac{V_{db}}{1-\beta_1^{t}}$

    $S_{d\omega}^{\text{corrected}} = \dfrac{S_{d\omega}}{1-\beta_2^{t}}$, $\quad S_{db}^{\text{corrected}} = \dfrac{S_{db}}{1-\beta_2^{t}}$

    $\omega := \omega - \alpha\, \dfrac{V_{d\omega}^{\text{corrected}}}{\sqrt{S_{d\omega}^{\text{corrected}}} + \epsilon}$

    $b := b - \alpha\, \dfrac{V_{db}^{\text{corrected}}}{\sqrt{S_{db}^{\text{corrected}}} + \epsilon}$

    ($\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$)
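The Adam updates above translate almost line-for-line into NumPy. Below is a minimal sketch of one Adam step for a single parameter array; the variable names (w, dw, v_dw, s_dw) are my own illustrative choices, not from any particular library.

    import numpy as np

    def adam_step(w, dw, v_dw, s_dw, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update for parameter array w with gradient dw (t starts at 1)."""
        # Exponential moving averages of the gradient and the squared gradient
        v_dw = beta1 * v_dw + (1 - beta1) * dw
        s_dw = beta2 * s_dw + (1 - beta2) * dw ** 2
        # Bias correction for the early steps
        v_corr = v_dw / (1 - beta1 ** t)
        s_corr = s_dw / (1 - beta2 ** t)
        # Parameter update
        w = w - alpha * v_corr / (np.sqrt(s_corr) + eps)
        return w, v_dw, s_dw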

1.2 Regularization

  • Batch Norm

    $\mu = \dfrac{1}{m} \sum_i z^{(i)}$

    $\sigma^2 = \dfrac{1}{m} \sum_i \left(z^{(i)} - \mu\right)^2$

    $z_{\text{norm}}^{(i)} = \dfrac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \epsilon}}$

    $\tilde{z}^{(i)} = \gamma\, z_{\text{norm}}^{(i)} + \beta$

    ($\gamma$ and $\beta$ are learnable parameters used to adjust the mean and variance of the normalized activations.)
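As a concrete illustration, here is a minimal NumPy sketch of the batch-norm transformation for a mini-batch of pre-activations z with shape (m, n_features); gamma and beta would be learned during training, and the function name is my own.

    import numpy as np

    def batchnorm_forward(z, gamma, beta, eps=1e-8):
        """Normalize each feature over the mini-batch, then scale and shift."""
        mu = z.mean(axis=0)                      # per-feature mean
        var = z.var(axis=0)                      # per-feature variance
        z_norm = (z - mu) / np.sqrt(var + eps)   # zero mean, unit variance
        return gamma * z_norm + beta             # restore an adjustable mean/variance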

  • Dropout

    import numpy as np

    keep_prob = 0.8
    d3 = np.random.rand(a3.shape[0], a3.shape[1]) < keep_prob  # keep ~80% of the units
    a3 = np.multiply(a3, d3)  # zero out the dropped units
    a3 /= keep_prob           # rescale so the expected activation value is unchanged (inverted dropout)

2 Convolutional Neural Network

2.1 Convolution

  • Convolution

    $f^{[l]}$ = filter size, $p^{[l]}$ = padding, $s^{[l]}$ = stride

    Input: $n_H^{[l-1]} \times n_W^{[l-1]} \times n_C^{[l-1]}$ (height × width × channels)

    Output: $n_H^{[l]} \times n_W^{[l]} \times n_C^{[l]}$

    $n_H^{[l]} = \left\lfloor \dfrac{n_H^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} \right\rfloor + 1$

    $n_W^{[l]} = \left\lfloor \dfrac{n_W^{[l-1]} + 2p^{[l]} - f^{[l]}}{s^{[l]}} \right\rfloor + 1$

If Padding='VALID', the output size along one dimension is $n = \left\lfloor \dfrac{n^{[l-1]} - f^{[l]}}{s^{[l]}} \right\rfloor + 1$

If Padding='SAME', the output size along one dimension is $n = \left\lceil \dfrac{n^{[l-1]}}{s^{[l]}} \right\rceil$
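These two cases are easy to wrap in a small helper. The sketch below is illustrative only (the function name is my own, not a framework API):

    import math

    def conv_output_size(n, f, s, padding="VALID"):
        """Output length along one spatial dimension of a convolution."""
        if padding == "VALID":
            return math.floor((n - f) / s) + 1   # no padding
        elif padding == "SAME":
            return math.ceil(n / s)              # padded so the whole input is covered
        raise ValueError("padding must be 'VALID' or 'SAME'")

    # Example: a 224-wide input with an 11-wide filter and stride 4
    print(conv_output_size(224, 11, 4, "VALID"))  # -> 54
    print(conv_output_size(224, 11, 4, "SAME"))   # -> 56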

Two key properties of convolution: sparse connections and weight sharing.

  • Pooling

    $f$: filter size

    $s$: stride

    $n_H = \left\lfloor \dfrac{n_H - f}{s} \right\rfloor + 1$

    $n_W = \left\lfloor \dfrac{n_W - f}{s} \right\rfloor + 1$

  • Fully connected

General development process

[Figure: general development process of a CNN]

2.2 Deconvolution



Convolution process:

input: 4×4, filter: 3×3 (s=1, p=0), output: (4−3+1)×(4−3+1) = 2×2

If we flatten x into a 16×1 vector and y into a 4×1 vector, the convolution can be written as y = Cx, where C is the 4×16 matrix shown below: (4,16)·(16,1) = (4,1).

[Figure: the 4×16 convolution matrix C]
De-convolution (transposed convolution) process:


[Figure: the de-convolution (transposed convolution) process]

input: 2×2, filter: 3×3, s=1, padding=2 (full padding), output: 4×4

If we flatten y into a 4×1 vector and x into a 16×1 vector, the transposed convolution computes x = Cᵀy: (16,4)·(4,1) = (16,1).
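The matrix view is easy to verify numerically. The sketch below is illustrative only (the helper name conv_matrix is my own): it builds the 4×16 matrix C for a 3×3 filter sliding over a 4×4 input and shows that Cᵀ maps a 4-vector back to a 16-vector.

    import numpy as np

    def conv_matrix(kernel, in_size=4):
        """Build the matrix C so that C @ x.flatten() equals the valid convolution of x with kernel."""
        k = kernel.shape[0]                      # 3
        out = in_size - k + 1                    # 2
        C = np.zeros((out * out, in_size * in_size))
        for i in range(out):
            for j in range(out):
                row = np.zeros((in_size, in_size))
                row[i:i + k, j:j + k] = kernel   # place the filter at output position (i, j)
                C[i * out + j] = row.ravel()
        return C

    kernel = np.arange(9, dtype=float).reshape(3, 3)
    x = np.random.rand(16)
    C = conv_matrix(kernel)   # shape (4, 16)
    y = C @ x                 # forward convolution: shape (4,)
    x_up = C.T @ y            # transposed convolution: back to shape (16,)
    print(C.shape, y.shape, x_up.shape)

Note that Cᵀy restores the shape of x, not its values; this is exactly how transposed convolution is used for upsampling.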

2.3 Classical convolution networks

2.3.1 AlexNet- ImageNet Classification with Deep Convolutional Neural Networks

Structure:

As shown in the figure below, the net contains eight layers with weights; the first five are convolutional and the remaining three are fully connected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels.


[Figure: AlexNet architecture]

The first convolutional layer filters the 224×224×3 input image with 96 kernels of size 11×11×3 with a stride of 4 pixels. The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size 5 × 5 × 48. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size 3 × 3 × 256 connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size 3 × 3 × 192, and the fifth convolutional layer has 256 kernels of size 3 × 3 × 192. The fully-connected layers have 4096 neurons each, as shown below:

[Figures: AlexNet layer-by-layer configuration]

ReLU Nonlinearity

[Figure: ReLU nonlinearity]

Local Response Normalization:

$b_{x,y}^{i} = a_{x,y}^{i} \Big/ \left(k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \left(a_{x,y}^{j}\right)^2\right)^{\beta}$

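A minimal NumPy sketch of this normalization across the channel dimension; the constants follow the values reported in the AlexNet paper (k = 2, n = 5, α = 1e-4, β = 0.75), and the function name is my own.

    import numpy as np

    def local_response_norm(a, k=2, n=5, alpha=1e-4, beta=0.75):
        """a has shape (N_channels, H, W); normalize each channel by its n neighbouring channels."""
        N = a.shape[0]
        b = np.empty_like(a)
        for i in range(N):
            lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
            denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
            b[i] = a[i] / denom
        return b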

Data Augmentation:

  • Extracting random 224 × 224 patches (and their horizontal reflections) from the 256 × 256 images and training the network on these extracted patches. This increases the size of the training set by a factor of 2048.
  • Performing PCA on the set of RGB pixel values throughout the ImageNet training set and adding multiples of the found principal components to each training image.

2.3.2 VGG- VERY DEEP CONVOLUTIONAL NETWORKS FOR LARGE-SCALE IMAGE RECOGNITION

ConvNet configurations:


[Table: ConvNet configurations]

We use very small 3 × 3 receptive fields throughout the whole net, which are convolved with the input at every pixel (with stride 1). It is easy to see that a stack of two 3 × 3 conv layers (without spatial pooling in between) has an effective receptive field of 5 × 5, and three such layers have a 7 × 7 effective receptive field. So what is gained by this change? First, we incorporate three non-linear rectification layers instead of a single one, which makes the decision function more discriminative. Second, we decrease the number of parameters: for C channels, a three-layer 3 × 3 convolution stack has 3·(3²C²) = 27C² weights, while a single 7 × 7 conv layer has 7²C² = 49C².

For example, two 3 × 3 conv layers have an effective receptive field of 5 × 5:

[Figure: the receptive field of two stacked 3 × 3 convolutions]
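A quick numerical check of both claims above (receptive-field growth and parameter count) for stride-1 stacks; this toy calculation is my own, not code from the paper:

    def stacked_receptive_field(num_layers, f=3):
        """Effective receptive field of num_layers stride-1 f x f convolutions."""
        return num_layers * (f - 1) + 1

    def stack_params(num_layers, f, channels):
        """Weights in a stack of f x f conv layers with `channels` in/out channels (biases ignored)."""
        return num_layers * f * f * channels * channels

    print(stacked_receptive_field(2))   # -> 5
    print(stacked_receptive_field(3))   # -> 7
    C = 64
    print(stack_params(3, 3, C), stack_params(1, 7, C))  # 27*C*C vs 49*C*C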

​ Moreover, the incorporation of 1 × 1 conv. layers is a way to increase the nonlinearity of the decision function without affecting the receptive fields of the conv. layers.

ConvNet performance at multiple test scales:

[Table: ConvNet performance at multiple test scales]

2.3.3 NIN- Network in Network

MLP Convolution Layers :

​ The resulting structure which we call an mlpconv layer is compared with CNN in picture below:

[Figure: comparison of a linear convolution layer and an mlpconv layer]

The feature maps are obtained by sliding the MLP over the input in a similar manner as in a CNN and are then fed into the next layer. Taking the classic CNN with a ReLU activation as an example, its feature map is calculated as follows:

$f_{i,j,k} = \max\left(w_k^{T} x_{i,j},\, 0\right)$

​ Maximization over linear functions makes a piecewise linear approximator which is capable of approximating any convex functions. Compared to conventional convolutional layers which perform linear separation, the maxout network is more potent as it can separate concepts that lie within convex sets.


The feature maps of maxout layers are calculated as follows:

$f_{i,j,k} = \max_m \left(w_{k_m}^{T}\, x_{i,j}\right)$

​ We seek to achieve this by introducing the novel “Network In Network” structure, in which a micro network is introduced within each convolutional layer to compute more abstract features for local patches.

[Figure: Network In Network structure]

The calculation performed by the mlpconv layer is shown as follows:

[Equation: the mlpconv layer computation]

The following picture makes this easier to understand:

[Figure: illustration of the mlpconv computation]

(Note: this picture is reproduced from another source.)
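In implementation terms, an mlpconv layer is equivalent to an ordinary convolution followed by 1 × 1 convolutions, each with a ReLU. A hedged PyTorch-style sketch of one such block (module layout and names are my own, not from the paper):

    import torch.nn as nn

    def mlpconv(in_ch, mid_ch, out_ch, kernel_size, stride=1, padding=0):
        """One mlpconv block: a conv layer followed by two 1x1 convs, each with ReLU."""
        return nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size, stride, padding), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 1), nn.ReLU(inplace=True),   # cross-channel MLP, step 1
            nn.Conv2d(mid_ch, out_ch, 1), nn.ReLU(inplace=True),   # cross-channel MLP, step 2
        )

Stacking several such blocks and ending with global average pooling gives the overall NIN architecture.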

Global Average Pooling :

Conventional convolutional neural networks perform convolution in the lower layers of the network. For classification, the feature maps of the last convolutional layer are vectorized and fed into fully connected layers followed by a softmax logistic regression layer. In this paper, another strategy called global average pooling is proposed to replace the traditional fully connected layers in CNN.

The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers, we take the average of each feature map and feed the resulting vector directly into the softmax layer.

The disadvantages of fully connected layers:

  • too many parameters

  • be prone to over-fitting

The advantages of global average pooling:

  • more native to the convolution structure, enforcing correspondence between feature maps and categories
  • no parameters to optimize
  • it sums out the spatial information, and is thus more robust to spatial translations of the input
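Global average pooling itself is a one-line operation. A NumPy sketch (names are illustrative), assuming the last mlpconv layer produces one feature map per class:

    import numpy as np

    def global_average_pooling(feature_maps):
        """Average each feature map over its spatial dimensions -> one value per map."""
        return feature_maps.mean(axis=(1, 2))   # shape: (num_classes,)

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    feature_maps = np.random.rand(10, 6, 6)     # e.g. 10 categories, 6x6 maps
    probs = softmax(global_average_pooling(feature_maps))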

2.3.4 Inception V3

General Design Principles:

  1. Avoid representational bottlenecks, especially early in the network. Theoretically, information content can not be assessed merely by the dimensionality of the representation as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

  2. Higher dimensional representations are easier to process locally within a network.

  3. Spatial aggregation can be done over lower dimensional embeddings without much or any loss in representational power.

  4. Balance the width and depth of the network. The computational budget should therefore be distributed in a balanced way between the depth and width of the network.

Factorizing Convolutions with Large Filter Size:

​ Two ways to factorize convolution:

  • Factorization into smaller convolutions: sliding a small two-layer network over the input activation grid boils down to replacing a 5 × 5 convolution with two layers of 3 × 3 convolution.
  • Spatial factorization into asymmetric convolutions: using a 3 × 1 convolution followed by a 1 × 3 convolution is equivalent to sliding a two-layer network with the same receptive field as a 3 × 3 convolution.
[Figures: factorization of convolutions into smaller and asymmetric convolutions]

From these factorizations we get the following Inception module variants:

[Figures: Inception modules with factorized convolutions]
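As a rough sanity check on why these factorizations pay off, the toy calculation below compares per-position multiply counts for C input and C output channels (my own illustration, not code from the paper):

    def conv_cost(f_h, f_w, channels):
        """Multiplications per output position for an f_h x f_w conv with `channels` in/out channels."""
        return f_h * f_w * channels * channels

    C = 64
    print(conv_cost(5, 5, C), 2 * conv_cost(3, 3, C))                   # 5x5 vs two 3x3: 25C^2 vs 18C^2
    print(conv_cost(3, 3, C), conv_cost(3, 1, C) + conv_cost(1, 3, C))  # 3x3 vs 3x1 + 1x3: 9C^2 vs 6C^2

Two 3 × 3 layers cost 18C² versus 25C² for a single 5 × 5 layer (about 28% cheaper), and the asymmetric pair costs 6C² versus 9C² (about 33% cheaper).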

Efficient Grid Size Reduction :

We can use two parallel stride-2 blocks, P and C: P is a pooling layer (either average or max pooling) and C is a convolutional block, both operating with stride 2; their filter banks are concatenated, as shown in the figures below.

[Figures: the efficient grid-size-reduction module]

Inception-v2 :

[Table: the outline of the Inception-v2 network architecture]

Model Regularization via Label Smoothing :

For each training example x, the model computes the probability of each label k: $p(k|x) = \dfrac{e^{z_k}}{\sum_{i=1}^{K} e^{z_i}}$. Consider the ground-truth distribution over labels $q(k|x)$ for this training example, normalized so that $\sum_{k=1}^{K} q(k|x) = 1$.

Cross-entropy: $\ell = -\sum_{k=1}^{K} \log(p(k))\, q(k)$.

Consider a distribution over labels $u(k)$, independent of the training example x, and a smoothing parameter $\epsilon$. For a training example with ground-truth label y, we replace the label distribution $q(k|x) = \delta_{k,y}$ with

$q'(k|x) = (1-\epsilon)\, \delta_{k,y} + \epsilon\, u(k)$

Here the uniform distribution $u(k) = 1/K$ is used, and $\delta_{k,y}$ is the Dirac delta, which equals 1 for $k = y$ and 0 otherwise. The cross-entropy with the smoothed labels becomes

$H(q', p) = -\sum_{k=1}^{K} \log(p(k))\, q'(k) = (1-\epsilon)\, H(q, p) + \epsilon\, H(u, p)$
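A minimal NumPy sketch of label smoothing with a uniform u(k); the function names are my own:

    import numpy as np

    def smooth_labels(y, num_classes, eps=0.1):
        """Turn an integer label y into the smoothed distribution q'(k|x)."""
        q = np.full(num_classes, eps / num_classes)  # epsilon * u(k), with u uniform
        q[y] += 1.0 - eps                            # (1 - eps) * delta_{k,y}
        return q

    def cross_entropy(q, p):
        return -np.sum(q * np.log(p))

    p = np.array([0.7, 0.2, 0.1])                    # model output p(k|x)
    print(cross_entropy(smooth_labels(0, 3), p))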

Training Methodology :

batch_size = 32; RMSProp with decay 0.9 and $\epsilon = 1.0$; learning_rate = 0.045, decayed every two epochs using an exponential rate of 0.94; gradient clipping with threshold 2.0.

2.3.5 ResNet- Deep Residual Learning for Image Recognition

There exists a solution by construction to the deeper model: the added layers are identity mappings, and the other layers are copied from the learned shallower model.

Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping $F(x) := H(x) - x$. The original mapping is then recast as $F(x) + x$.

[Figure: a residual learning building block]

​ We consider a building block defined as:

$y = F(x, \{W_i\}) + x$

Here x and y are the input and output vectors of the layers considered. As shown above, there are two layers: $F = W_2\, \sigma(W_1 x)$, where $\sigma$ denotes ReLU and the biases are omitted to simplify notation. The dimensions of x and F must be equal in the equation above. If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ on the shortcut connection to match the dimensions:

$y = F(x, \{W_i\}) + W_s x$
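A hedged PyTorch-style sketch of this building block, with an optional 1 × 1 projection Ws on the shortcut when the dimensions change (module and variable names are my own, not from the paper):

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, in_ch, out_ch, stride=1):
            super().__init__()
            self.f = nn.Sequential(                       # F(x): two 3x3 conv layers with BN
                nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False),
                nn.BatchNorm2d(out_ch),
            )
            # Ws: 1x1 projection shortcut when the shape changes, identity otherwise
            self.shortcut = nn.Identity()
            if stride != 1 or in_ch != out_ch:
                self.shortcut = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 1, stride, bias=False), nn.BatchNorm2d(out_ch))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.f(x) + self.shortcut(x))   # y = F(x) + x (or + Ws x)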

Architectures for ResNet :

[Table: ResNet architectures for ImageNet]

Deeper Bottleneck Architectures :

Next we describe the deeper nets for ImageNet. Because of concerns about the training time we can afford, we modify the building block into a bottleneck design. For each residual function F, we use a stack of 3 layers instead of 2. The three layers are 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then increasing (restoring) the dimensions, leaving the 3×3 layer a bottleneck with smaller input/output dimensions. An example is shown below; both designs have similar time complexity.

[Figure: a basic residual block and a bottleneck residual block]
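Continuing the sketch above, the residual function F of a bottleneck block can be written as a 1×1 → 3×3 → 1×1 stack (again illustrative only):

    import torch.nn as nn

    def bottleneck(in_ch, mid_ch, out_ch, stride=1):
        """1x1 reduce -> 3x3 -> 1x1 restore: the F(x) of a bottleneck residual block."""
        return nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )

    # e.g. 256 -> 64 -> 64 -> 256, as in the example from the paper
    f = bottleneck(256, 64, 256)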

3 Neural Style Transfer

3.1 A Neural Algorithm of Artistic Style (L. A. Gatys)

$L_{\text{content}}(C, G) = \dfrac{1}{4\, n_H^{[l]} n_W^{[l]} n_C^{[l]}} \sum_{i,j} \left(a_{ij}^{[l](C)} - a_{ij}^{[l](G)}\right)^2$

  • $a_{ij}^{[l](C)}$ is the activation of the i-th filter at position j in layer l of the content image.

  • C and G denote the original (content) image and the generated image, respectively.

$G_{kk'}^{[l](S)} = \sum_{i=1}^{n_H^{[l]}} \sum_{j=1}^{n_W^{[l]}} a_{i,j,k}^{[l](S)}\, a_{i,j,k'}^{[l](S)}, \qquad G_{kk'}^{[l](G)} = \sum_{i=1}^{n_H^{[l]}} \sum_{j=1}^{n_W^{[l]}} a_{i,j,k}^{[l](G)}\, a_{i,j,k'}^{[l](G)}$

$L_{\text{style}}(S, G) = \sum_{l=0}^{L} \omega_l\, \dfrac{1}{\left(2\, n_H^{[l]} n_W^{[l]} n_C^{[l]}\right)^2} \sum_{k}\sum_{k'} \left(G_{kk'}^{[l](S)} - G_{kk'}^{[l](G)}\right)^2$

  • $G_{kk'}^{[l](S)}$ and $G_{kk'}^{[l](G)}$ give the feature correlations, i.e. the inner product between feature maps k and k' in layer l.
  • $\omega_l$ are weighting factors of the contribution of each layer to the total loss.
  • $n_H^{[l]}$, $n_W^{[l]}$ and $n_C^{[l]}$ are the height, width and number of channels of the feature map in layer l.

$L(G) = \alpha\, L_{\text{content}}(C, G) + \beta\, L_{\text{style}}(S, G)$

  • $\alpha$ and $\beta$ are the weighting factors for content and style reconstruction, respectively.
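A NumPy sketch of these costs for the activations of a single layer (arrays of shape (n_H, n_W, n_C)); the function names, the single-layer simplification and the α, β defaults are my own choices for illustration:

    import numpy as np

    def gram_matrix(a):
        """a: activations of shape (n_H, n_W, n_C) -> (n_C, n_C) feature correlations."""
        n_H, n_W, n_C = a.shape
        flat = a.reshape(n_H * n_W, n_C)
        return flat.T @ flat

    def content_cost(a_C, a_G):
        n_H, n_W, n_C = a_C.shape
        return np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

    def style_cost_layer(a_S, a_G):
        n_H, n_W, n_C = a_S.shape
        return np.sum((gram_matrix(a_S) - gram_matrix(a_G)) ** 2) / (2 * n_H * n_W * n_C) ** 2

    def total_cost(a_C, a_S, a_G, alpha=10, beta=40):
        return alpha * content_cost(a_C, a_G) + beta * style_cost_layer(a_S, a_G)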

4 Autoencoder

4.1 Stacked Denoising Autoencoders: Learning Useful Representations …

All appear to build on the same principle, which we may summarize as follows:

[Figure: the common principle underlying these models]

Traditional Autoencoders (AE) :

[Figure: the traditional autoencoder]

$y = f_\theta(x) = s(Wx + b), \qquad z = g_{\theta'}(y) = s(W'y + b')$

$\text{Loss}(x, z) = \lVert x - z \rVert^2 \quad \text{or} \quad \text{Loss}(x, z) = \text{cross-entropy}(x, z)$

The Denoising Autoencoder Algorithm

[Figure: the denoising autoencoder architecture]

The key difference is that z is now a deterministic function of the corrupted input $\tilde{x}$ rather than of x.
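To make this concrete, below is a minimal NumPy sketch of one forward pass of a denoising autoencoder with masking noise and tied weights (W' = Wᵀ); all names and the tied-weights choice are illustrative assumptions rather than the paper's prescribed setup.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def denoising_autoencoder_forward(x, W, b, b_prime, corruption=0.3, rng=np.random):
        """Corrupt x, encode, decode with tied weights, and return the reconstruction loss."""
        x_tilde = x * (rng.rand(*x.shape) > corruption)            # masking noise: zero some inputs
        y = sigmoid(W @ x_tilde + b)                               # y = f_theta(x_tilde)
        z = sigmoid(W.T @ y + b_prime)                             # z = g_theta'(y), with W' = W.T
        loss = -np.sum(x * np.log(z) + (1 - x) * np.log(1 - z))    # cross-entropy(x, z)
        return z, loss

    x = (np.random.rand(784) > 0.5).astype(float)                  # e.g. a binarized image
    W = 0.01 * np.random.randn(256, 784)
    z, loss = denoising_autoencoder_forward(x, W, np.zeros(256), np.zeros(784))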

We emphasize here that our goal is not the task of denoising per se. Rather denoising is advocated and investigated as a training criterion for learning to extract useful features that will constitute better higher level representation. The usefulness of a learnt representation can then be assessed objectively by measuring the accuracy of a classifier that uses it as input.

Geometric Interpretation

[Figure: geometric (manifold) interpretation of denoising]

Stacking denoising autoencoders

[Figure: stacking denoising autoencoders]

Fine-tuning of a deep network for classification

[Figure: fine-tuning the stacked network for classification]