A brain segmentation survey: Brain Parcellation as a Pretext Task

This post gives a brief analysis of the paper On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task, walks through the paper's source code, and closes with an introduction to the platform (NiftyNet) on which that code is built.


Paper overview

Building on dilated convolutions and residual connections, the paper proposes a 3D CNN for image segmentation, trains it on brain MR images, and finally obtains feasible voxel-level uncertainty estimates through approximate sampling with dropout.
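
To make the dropout-based uncertainty idea concrete: Monte Carlo dropout keeps dropout active at test time and treats repeated stochastic forward passes as samples from the predictive distribution. A minimal sketch, where stochastic_forward is a hypothetical stand-in for the trained network run with dropout enabled (not NiftyNet's actual API):

import numpy as np

# Monte-Carlo dropout sketch: each forward pass samples a fresh dropout
# mask, yielding one plausible voxel-wise class-probability map.
def mc_dropout_uncertainty(stochastic_forward, volume, n_samples=20):
    samples = np.stack([stochastic_forward(volume)
                        for _ in range(n_samples)])  # (N, X, Y, Z, C)
    mean_probs = samples.mean(axis=0)                # averaged prediction
    # per-voxel variance across samples serves as a simple uncertainty map
    uncertainty = samples.var(axis=0).sum(axis=-1)
    return mean_probs.argmax(axis=-1), uncertainty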

1. Introduction

The introduction first motivates the work: 3D image segmentation currently faces two major challenges:

  • compared with 2D, more complex image patterns have to be extracted;
  • 3D networks are computationally expensive and put a heavy burden on hardware.

The paper therefore takes on this challenging problem and sets its goal: design a compact network for 3D images.

The authors survey prior work, most of which follows a downsample-upsample pattern: low-level layers downsample to enlarge the receptive field, making it easier for higher-level layers to extract global features, while higher-level layers upsample to recover a spatially dense segmentation.
The proposed network drops this down-up pattern: every layer performs feature extraction, and during training the network covers a wide range of receptive fields.

2. Components of the 3D network

The network stacks two 3*3*3 kernels in place of a single 5*5*5 kernel, which has two advantages (see the quick check after this list):

  • the parameter count drops to roughly 43% of the original (a cut of about 57%), reducing computation
  • the stack has the same receptive field as a single 5*5*5 kernel
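
A quick check of the kernel-stacking arithmetic (my own calculation, for a single input/output channel):

# two 3x3x3 kernels vs. one 5x5x5 kernel
stacked = 2 * 3 ** 3    # -> 54 weights
single = 5 ** 3         # -> 125 weights
print(stacked / single) # 0.432: ~43% of the weights, i.e. a ~57% cut
# receptive field: each 3x3x3 conv adds 2 voxels per axis -> 1 + 2 + 2 = 5,
# the same as a single 5x5x5 kernel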

Now let's walk through the components one by one.
1. dilated convolution

To help higher-level layers extract global features, lower-level layers need to enlarge the receptive field; 3D U-Net does this with 2*2*2-voxel max pooling with stride 2.
This paper instead uses dilated convolutions:

$$O_{x,y,z} = \sum_{m=1}^{M}\sum_{i=0}^{2}\sum_{j=0}^{2}\sum_{k=0}^{2} W_{i,j,k,m}\, I_{(x+ir),\,(y+jr),\,(z+kr),\,m}$$

where r is the dilation factor, I is the input feature map with M channels, W is the convolution kernel, and O is the output feature map.
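
As a sketch, the formula can be transcribed directly into numpy (unoptimised, single output channel, valid padding; all names below are my own, not the paper's code):

import numpy as np

# Direct transcription of the dilated-convolution formula above.
def dilated_conv3d(I, W, r):
    # I: input feature map, shape (X, Y, Z, M); W: kernel, shape (3, 3, 3, M)
    X, Y, Z, M = I.shape
    span = 2 * r                       # kernel footprint is 2r+1 per axis
    O = np.zeros((X - span, Y - span, Z - span))
    for x in range(O.shape[0]):
        for y in range(O.shape[1]):
            for z in range(O.shape[2]):
                # sample the input on a grid spaced r voxels apart
                patch = I[x:x + span + 1:r,
                          y:y + span + 1:r,
                          z:z + span + 1:r, :]  # shape (3, 3, 3, M)
                O[x, y, z] = np.sum(W * patch)
    return O
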
2. residual connection

$X_{p+1} = X_p + F(X_p, W_p)$, where $F(X_p, W_p)$ is the composition of a convolution layer and an activation function. Unrolling this recursion gives:

$$X_l = X_p + \sum_{i=p}^{l-1} F(X_i, W_i)$$
This shows that, with residual connections, a feature map can draw information from every preceding layer.
Effects of residual connections:

  • the effect is equivalent to an ensemble of many simpler networks
  • the number of paths through the network multiplies, whereas without residuals there is only one fixed path

When forming the residual sum, the authors zero-pad so that the feature-map shapes match.
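
A quick numerical check of the unrolled identity above, with F replaced by a toy elementwise function (my own illustration, not the paper's):

import numpy as np

F = lambda x, w: np.tanh(w * x)       # stand-in for conv + activation

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
ws = rng.standard_normal(3)           # "weights" for layers p..l-1

# forward pass through three residual blocks
xs = [x]
for w in ws:
    xs.append(xs[-1] + F(xs[-1], w))

# unrolled form: the first input plus the sum of all block outputs
unrolled = xs[0] + sum(F(xi, w) for xi, w in zip(xs[:-1], ws))
assert np.allclose(xs[-1], unrolled)
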
3. loss function

Since plain cross-entropy cannot cope with class imbalance, the paper adopts a different loss. (The original post displayed the cross-entropy formula and the adopted loss, which it described as a covariance-style measure, as images that have not survived.)
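
With the formula images gone, here is a reference point: a soft Dice loss is the usual remedy for class imbalance in segmentation, and to my reading is close to what the paper uses. The function name, shapes, and smoothing term below are my own assumptions, not the paper's exact formulation:

import numpy as np

# Soft Dice loss sketch: probs and one_hot have shape (n_voxels, n_classes).
def soft_dice_loss(probs, one_hot, eps=1e-6):
    intersect = np.sum(probs * one_hot, axis=0)
    denom = np.sum(probs, axis=0) + np.sum(one_hot, axis=0)
    dice_per_class = (2.0 * intersect + eps) / (denom + eps)
    return 1.0 - dice_per_class.mean()  # averaged over classes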

3. Network architecture

The architecture diagram given by the authors:
(architecture figure from the paper; the image has not survived in this copy)
The network has 20 layers in total; from the second layer on, every two layers form one residual connection. Why pair two layers per residual block? The authors treat the two layers as one unit, which saves parameters while still achieving a large receptive field. Layers 8-13 use convolutions with dilation 2, and layers 14-19 use dilation 4. A quick receptive-field check follows.
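
A back-of-envelope check of the resulting receptive field (my own calculation from the layer list, not a quote from the paper): each 3x3x3 convolution with dilation r widens the receptive field by 2r voxels per axis, while the final 1x1x1 layer adds nothing.

# layers 1-7 use dilation 1, layers 8-13 dilation 2, layers 14-19 dilation 4
dilations = [1] * 7 + [2] * 6 + [4] * 6
receptive_field = 1 + sum(2 * r for r in dilations)
print(receptive_field)  # 87 -> an 87x87x87 receptive field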

Source code walkthrough

1. Network structure code

Open the file highres3dnet.py:

class HighRes3DNet(BaseNet):
    # initialises the network configuration (full signature shown below)
    def __init__(self, num_classes, **kwargs):
        ...

    # builds the network graph
    def layer_op(self, images, is_training):
        ...

    # prints the recorded layers
    def _print(self, list_of_layers):
        for (op, _) in list_of_layers:
            print(op)

This is the overall structure of HighRes3DNet. Let's look at the initialiser first:

def __init__(self,
                 num_classes,  # total number of segmentation classes
                 w_initializer=None,  # weight initialiser (optional)
                 w_regularizer=None,  # weight regulariser
                 b_initializer=None,  # bias initialiser (optional)
                 b_regularizer=None,  # bias regulariser
                 acti_func='prelu',  # activation function
                 name='HighRes3DNet'):  # network name

        super(HighRes3DNet, self).__init__(
            num_classes=num_classes,
            w_initializer=w_initializer,
            w_regularizer=w_regularizer,
            b_initializer=b_initializer,
            b_regularizer=b_regularizer,
            acti_func=acti_func,
            name=name)

        self.layers = [
            {'name': 'conv_0', 'n_features': 16, 'kernel_size': 3},  # layer 1 of the paper's network
            {'name': 'res_1', 'n_features': 16, 'kernels': (3, 3), 'repeat': 3},  # layers 2-7
            {'name': 'res_2', 'n_features': 32, 'kernels': (3, 3), 'repeat': 3},  # layers 8-13
            {'name': 'res_3', 'n_features': 64, 'kernels': (3, 3), 'repeat': 3},  # layers 14-19
            {'name': 'conv_1', 'n_features': 80, 'kernel_size': 1},  # layer 20; why 80 features here was unclear to me
            # the excerpt was cut off here; the repo's list closes with a final
            # 1x1x1 classifier, referenced as self.layers[5] further below:
            {'name': 'conv_2', 'n_features': num_classes, 'kernel_size': 1}]

Next we look at how each layer is constructed, starting with the first:

        ### first convolution layer
        params = self.layers[0]  # parameters of the first layer
        first_conv_layer = ConvolutionalLayer(
            n_output_chns=params['n_features'],  # number of kernels
            kernel_size=params['kernel_size'],  # kernel size
            acti_func=self.acti_func,  # activation function
            w_initializer=self.initializers['w'],  # initialisation
            w_regularizer=self.regularizers['w'],  # regularisation
            name=params['name'])
        flow = first_conv_layer(images, is_training)  # dispatches to ConvolutionalLayer.layer_op
        layer_instances.append((first_conv_layer, flow))  # record the layer structure

The first layer is built via the ConvolutionalLayer class. Open convolution.py; we skip its initialiser (just keep the parameters in mind) and go straight to layer_op to see how the first graph layer is generated:


 def layer_op(self, input_tensor, is_training=None, keep_prob=None):
        conv_layer = ConvLayer(n_output_chns=self.n_output_chns,
                               kernel_size=self.kernel_size,  # kernel size is 3 for the first layer
                               stride=self.stride,  # stride is 1 for the first layer
                               dilation=self.dilation,  # dilation is 1 for the first layer
                               padding=self.padding,
                               with_bias=self.with_bias,
                               w_initializer=self.initializers['w'],
                               w_regularizer=self.regularizers['w'],
                               b_initializer=self.initializers['b'],
                               b_regularizer=self.regularizers['b'],
                               name='conv_')

        if self.with_bn:  # batch normalisation, if requested (helps against vanishing gradients)
            if is_training is None:
                raise ValueError('is_training argument should be '
                                 'True or False unless with_bn is False')
            bn_layer = BNLayer(
                regularizer=self.regularizers['w'],
                moving_decay=self.moving_decay,
                eps=self.eps,
                name='bn_')

        if self.acti_func is not None:  # activation function
            acti_layer = ActiLayer(
                func=self.acti_func,
                regularizer=self.regularizers['w'],
                name='acti_')

        if keep_prob is not None:  # ActiLayer handles both regular activations and dropout
            dropout_layer = ActiLayer(func='dropout', name='dropout_')

        def activation(output_tensor):  # bn -> activation -> dropout
            if self.with_bn:
                output_tensor = bn_layer(output_tensor, is_training)
            if self.acti_func is not None:
                output_tensor = acti_layer(output_tensor)
            if keep_prob is not None:
                output_tensor = dropout_layer(output_tensor, keep_prob=keep_prob)
            return output_tensor


        if self.preactivation:
            output_tensor = conv_layer(activation(input_tensor))  # used for layer 20 of the paper: activation-related ops first, then convolution
        else:
            output_tensor = activation(conv_layer(input_tensor))  # matches layer 1 of the paper: convolution first, then the activation-related ops

        return output_tensor

Next, the code for layers 2-19; open highres3dnet.py again:

        ### resblocks, all kernels dilated by 1 (normal convolution)
        params = self.layers[1]
        with DilatedTensor(flow, dilation_factor=1) as dilated:
            for j in range(params['repeat']):
                res_block = HighResBlock(
                    params['n_features'],
                    params['kernels'],
                    acti_func=self.acti_func,
                    w_initializer=self.initializers['w'],
                    w_regularizer=self.regularizers['w'],
                    name='%s_%d' % (params['name'], j))
                dilated.tensor = res_block(dilated.tensor, is_training)
                layer_instances.append((res_block, dilated.tensor))
        flow = dilated.tensor

        ### resblocks, all kernels dilated by 2
        params = self.layers[2]
        with DilatedTensor(flow, dilation_factor=2) as dilated:  # dilated convolution, factor 2
            for j in range(params['repeat']):
                res_block = HighResBlock(
                    params['n_features'],
                    params['kernels'],
                    acti_func=self.acti_func,
                    w_initializer=self.initializers['w'],
                    w_regularizer=self.regularizers['w'],
                    name='%s_%d' % (params['name'], j))
                dilated.tensor = res_block(dilated.tensor, is_training)
                layer_instances.append((res_block, dilated.tensor))
        flow = dilated.tensor

        ### resblocks, all kernels dilated by 4
        params = self.layers[3]
        with DilatedTensor(flow, dilation_factor=4) as dilated:
            for j in range(params['repeat']):
                res_block = HighResBlock(
                    params['n_features'],
                    params['kernels'],
                    acti_func=self.acti_func,
                    w_initializer=self.initializers['w'],
                    w_regularizer=self.regularizers['w'],
                    name='%s_%d' % (params['name'], j))
                dilated.tensor = res_block(dilated.tensor, is_training)
                layer_instances.append((res_block, dilated.tensor))
        flow = dilated.tensor

You will notice the code for layers 2-19 is nearly identical: only the dilation_factor passed to DilatedTensor changes. Each stage wraps the tensor for dilated convolution and then calls HighResBlock in a loop, so the three stages could be folded into a single loop, as sketched below.
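
A hedged refactoring sketch of that pattern (my own rewrite, not the repo's actual code):

        # the three res-block stages differ only in params and dilation factor
        for params, dilation in zip(self.layers[1:4], (1, 2, 4)):
            with DilatedTensor(flow, dilation_factor=dilation) as dilated:
                for j in range(params['repeat']):
                    res_block = HighResBlock(
                        params['n_features'],
                        params['kernels'],
                        acti_func=self.acti_func,
                        w_initializer=self.initializers['w'],
                        w_regularizer=self.regularizers['w'],
                        name='%s_%d' % (params['name'], j))
                    dilated.tensor = res_block(dilated.tensor, is_training)
                    layer_instances.append((res_block, dilated.tensor))
            flow = dilated.tensor
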
Still in the same file, find the HighResBlock class. The key code is as follows (initialiser omitted):

def layer_op(self, input_tensor, is_training):
        output_tensor = input_tensor
        for (i, k) in enumerate(self.kernels):
            # create parameterised layers
            bn_op = BNLayer(regularizer=self.regularizers['w'],
                            name='bn_{}'.format(i))
            acti_op = ActiLayer(func=self.acti_func,
                                regularizer=self.regularizers['w'],
                                name='acti_{}'.format(i))
            conv_op = ConvLayer(n_output_chns=self.n_output_chns,
                                kernel_size=k,
                                stride=1,
                                w_initializer=self.initializers['w'],
                                w_regularizer=self.regularizers['w'],
                                name='conv_{}'.format(i))
            # connect layers
            output_tensor = bn_op(output_tensor, is_training)  # batch norm
            output_tensor = acti_op(output_tensor)  # activation function
            output_tensor = conv_op(output_tensor)  # convolution
        # make residual connections
        if self.with_res:  # one residual connection per pair of layers
            output_tensor = ElementwiseLayer('SUM')(output_tensor, input_tensor)
        return output_tensor

So each block runs BN -> activation -> convolution. Having now covered how layers 1-19 are built, let's look at the final layers:

        ### 1x1x1 convolution layer
        params = self.layers[4]
        fc_layer = ConvolutionalLayer(
            n_output_chns=params['n_features'],
            kernel_size=params['kernel_size'],
            acti_func=self.acti_func,
            w_initializer=self.initializers['w'],
            w_regularizer=self.regularizers['w'],
            name=params['name'])
        flow = fc_layer(flow, is_training)
        layer_instances.append((fc_layer, flow))

        ### 1x1x1 convolution layer
        params = self.layers[5]
        fc_layer = ConvolutionalLayer(
            n_output_chns=params['n_features'],
            kernel_size=params['kernel_size'],
            acti_func=None,
            w_initializer=self.initializers['w'],
            w_regularizer=self.regularizers['w'],
            name=params['name'])
        flow = fc_layer(flow, is_training)
        layer_instances.append((fc_layer, flow))

(Some details of these last layers are still unclear to me.)

Advantages of NiftyNet

While reading the source I found that NiftyNet already packages the common CNN components, sparing you from reinventing the wheel. With the ConvolutionalLayer class, for example, you pass the appropriate constructor arguments and then call it with a tensor to get a layer with the specified kernel, activation, regularisation, dropout, and so on:

        ConvolutionalLayer(
            n_output_chns=params['n_features'],
            kernel_size=params['kernel_size'],
            acti_func=self.acti_func,
            w_initializer=self.initializers['w'],
            w_regularizer=self.regularizers['w'],
            name=params['name'])
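
Applying the constructed layer is then a single call; a minimal sketch (assuming images is a 5-D tensor in NiftyNet's batch/spatial/channel layout):

        conv = ConvolutionalLayer(n_output_chns=16, kernel_size=3, acti_func='prelu')
        flow = conv(images, is_training=True)  # dispatches to ConvolutionalLayer.layer_op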

Training your own network with NiftyNet

I have not tried this myself yet, so for now I quote the official documentation; I will add notes after experimenting.

Developing new networks

NiftyNet allows users to create new networks and share them via the model zoo. To fully utilise this feature, a customised network should be prepared with the following steps:

New network and module

Create a new network file, e.g. new_net.py, and place it inside a Python module directory, e.g. my_network_collection/, together with a new __init__.py file.

Make the module loadable

Make sure the new network module can be discovered by NiftyNet by doing either of the following:

  • Place my_network_collection/ inside $NIFTYNET_HOME/extensions/, with $NIFTYNET_HOME defined by home in the [global] setting.
  • Append the directory containing my_network_collection/ to your $PYTHONPATH.

Extend BaseNet

Create a new Python class, e.g. NewNet in new_net.py, by inheriting the BaseNet class from niftynet.network.base_net. niftynet.network.toynet, a minimal working example of a fully convolutional network, could be a starting point for NewNet.

class ToyNet(BaseNet):
    def __init__(self, num_classes, acti_func='prelu', name='ToyNet'):

        super(ToyNet, self).__init__(
            num_classes=num_classes, acti_func=acti_func, name=name)

        # network specific property
        self.hidden_features = 10

    def layer_op(self, images, is_training):
        # create layer instances
        conv_1 = ConvolutionalLayer(self.hidden_features,
                                    kernel_size=3,
                                    name='conv_input')

        conv_2 = ConvolutionalLayer(self.num_classes,
                                    kernel_size=1,
                                    acti_func=None,
                                    name='conv_output')

        # apply layer instances
        flow = conv_1(images, is_training)
        flow = conv_2(flow, is_training)

        return flow

Implement operations

In NewNet, implement the __init__() function for network property initialisations, and implement layer_op() for network connections.

The network properties can be used to specify the number of channels, kernel dilation factors, as well as sub-network components of the network.

An example of sub-networks composition is presented in Simulator GAN.

The layer operation function layer_op() should specify how the input tensors are connected to network layers. For basic building blocks, using the ones in niftynet/layer/ is recommended, as the layers are implemented in a modular design (convenient for parameter sharing) and can handle 2D, 2.5D and 3D cases in a unified manner whenever possible.

Call NewNet from application

Finally, the network can be trained by specifying the newly implemented network in the command-line argument:

--name my_network_collection.new_net.NewNet

(my_network_collection.new_net refers to the new_net.py file, and NewNet is the class name to be imported from new_net.py)

The command to load NewNet with the segmentation application, using a pip-installed NiftyNet, is:

net_segment train -c /path/to/customised_config \
                  --name my_network_collection.new_net.NewNet

Alternatively, using NiftyNet cloned from the source code repository:

python net_segment.py train -c /path/to/customised_config \
                            --name my_network_collection.new_net.NewNet

See also the configuration documentation for the name parameter.


References

  1. On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task
  2. NiftyNet/niftynet/network/highres3dnet.py