
The difference between tf.layers.dropout and tf.nn.dropout

程序员文章站 2022-07-13 11:28:13

1. The tf.nn.dropout function

tf.nn.dropout(
    x,
    keep_prob,
    noise_shape=None,
    seed=None,
    name=None
)

x: the input tensor.

keep_prob: a float, the probability that each element is kept (i.e. the probability that a neuron survives). It is commonly defined as a placeholder at graph-construction time, keep_prob = tf.placeholder(tf.float32), and given a concrete value at run time, e.g. feed_dict={keep_prob: 0.5}.

noise_shape: a 1-D int32 tensor giving the shape used when randomly generating the keep/drop flags.

seed: a Python integer, the random seed.

name: a name for the operation.

 

2. The tf.layers.dropout function

tf.layers.dropout(
    inputs,
    rate=0.5,
    noise_shape=None,
    seed=None,
    training=False,
    name=None
)

inputs: Tensor input.
rate: The dropout rate, between 0 and 1. E.g. "rate=0.1" would drop out 10% of input units.
noise_shape: 1D tensor of type int32 representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features), and you want the dropout mask to be the same for all timesteps, you can use noise_shape=[batch_size, 1, features].
seed: A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.
training: Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched).
name: The name of the layer (string).

 

3. Differences between the two

Both functions apply dropout, but their parameters and usage differ slightly.

(1) tf.nn.dropout takes keep_prob, the probability that an element is kept; at inference time it should be set to 1.

(2) tf.layers.dropout takes rate, the probability that an element is dropped; at inference time it should be set to 0.

(3) tf.layers.dropout has no effect at inference time because training defaults to False; with that default, the op simply passes the input through regardless of rate.
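The two parameterizations are complements: keep_prob = 1 - rate. The "inverted dropout" computation both ops perform can be sketched in plain NumPy (inverted_dropout is a hypothetical helper for illustration, not TensorFlow code):

```python
import numpy as np

def inverted_dropout(x, rate, training=True, seed=None):
    """Mimics tf.layers.dropout semantics: rate is the drop probability.
    tf.nn.dropout's keep_prob corresponds to 1 - rate."""
    if not training or rate == 0.0:
        return x  # inference mode: input returned untouched
    keep_prob = 1.0 - rate
    rng = np.random.RandomState(seed)
    mask = rng.uniform(size=x.shape) < keep_prob  # True = keep
    # survivors are scaled by 1/keep_prob so the expected output equals x
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones((1000,))
y = inverted_dropout(x, rate=0.5, seed=0)  # entries are 0.0 or 2.0
```

The 1/keep_prob scaling is what makes the inference-time settings above (keep_prob=1, rate=0, or training=False) all reduce to the identity.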