
TensorBoard Visualization Experiments


Experiment 1: Visualizing the Graph

Make some small modifications to the mnist_deep.py code so that TensorBoard can display the overall computation graph of the CNN.

import tensorflow as tf

def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer;
    # weight_variable() and bias_variable() are the helpers defined in mnist_deep.py
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = weight_variable([in_size, out_size])
        with tf.name_scope('biases'):
            biases = bias_variable([1, out_size])
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# loss function: cross-entropy
with tf.name_scope('loss'):
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_ * tf.log(prediction), reduction_indices=[1]))

# training step: gradient descent
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
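
The snippet above only names the nodes; the graph itself still has to be exported before TensorBoard can draw it. A minimal sketch of that step, assuming the usual session setup (the logs/ directory name is illustrative):

sess = tf.Session()
# writing sess.graph into the event file is what makes the Graphs tab work
writer = tf.summary.FileWriter('logs/', sess.graph)

Then start TensorBoard from the shell with tensorboard --logdir=logs and open the Graphs tab at the address it prints (http://localhost:6006 by default).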

[Figure: TensorBoard graph view of the CNN]

Experiment 2: Visualizing Scalars


# loss function: cross-entropy, logged as a scalar summary
with tf.name_scope('loss'):
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_ * tf.log(prediction), reduction_indices=[1]))
    tf.summary.scalar('loss', cross_entropy)

# training step: gradient descent
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.global_variables_initializer()
sess = tf.Session()
merged = tf.summary.merge_all()   # collect every summary op into a single op
sess.run(init)
writer = tf.summary.FileWriter('logs/', sess.graph)

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(50)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        # evaluate the merged summaries and write them to the event file
        result = sess.run(merged, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: 0.5})
        writer.add_summary(result, i)
        print("step %d, training accuracy %g" % (i, compute_accuracy(
            mnist.train.images, mnist.train.labels)))

print("testing accuracy %g" % compute_accuracy(
    mnist.test.images, mnist.test.labels))
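
The loop above calls compute_accuracy, which is defined elsewhere in the script. A minimal sketch of what such a helper might look like, assuming the prediction, x, y_ and keep_prob tensors defined above (this exact definition is not part of the original listing):

def compute_accuracy(v_xs, v_ys):
    # run the forward pass with dropout disabled (keep_prob=1.0)
    y_pre = sess.run(prediction, feed_dict={x: v_xs, keep_prob: 1.0})
    # fraction of samples whose arg-max prediction matches the one-hot label
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return sess.run(accuracy)

Note that this sketch adds new ops to the graph on every call, which is fine for a short demo but wasteful in longer-running code.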

This yields the following loss curve:

[Figure: loss curve in the TensorBoard Scalars tab]

Experiment 3: Visualizing Histograms


with tf.name_scope('weights'):
    # Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
    Weights = weight_variable([in_size, out_size])
    tf.summary.histogram('weights', Weights)
with tf.name_scope('biases'):
    # biases = tf.Variable(tf.constant(0.1, shape=[1, out_size]), name='b')
    biases = bias_variable([1, out_size])
    tf.summary.histogram('biases', biases)

if activation_function is None:
    outputs = Wx_plus_b
else:
    outputs = activation_function(Wx_plus_b)
tf.summary.histogram('outputs', outputs)
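
These fragments all live inside the add_layer function from Experiment 1; assembled in one place, the instrumented layer function reads:

def add_layer(inputs, in_size, out_size, activation_function=None):
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = weight_variable([in_size, out_size])
            tf.summary.histogram('weights', Weights)   # track weight distribution
        with tf.name_scope('biases'):
            biases = bias_variable([1, out_size])
            tf.summary.histogram('biases', biases)     # track bias distribution
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    tf.summary.histogram('outputs', outputs)           # track layer activations
    return outputs

Since tf.summary.merge_all() from Experiment 2 already collects every summary op, no other change is needed.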

The figures below show the weights, biases, and outputs of the fully connected layer and the softmax layer.

[Figure: histograms of weights, biases, and outputs for the fully connected and softmax layers]

Experiment 4: Changing the Distribution of W

1. Truncated normal distribution

tf.truncated_normal samples from a normal distribution but drops and re-draws any value more than two standard deviations from the mean, so the initial weights stay bounded:

tf.truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)


def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

Result after a maximum of 10,000 training steps:

[Figure: weight histograms with truncated-normal initialization]

2. Random normal distribution

tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)


def weight_variable(shape):
    initial = tf.random_normal(shape, stddev=0.1)
    return tf.Variable(initial)

[Figure: weight histograms with random-normal initialization]

3. Uniform distribution

tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)

We verified experimentally that initializing W this way does not work.
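
A plausible reason (our reading, not stated in the original experiment) is that the default [0, 1) range makes every initial weight positive and fairly large. A zero-centered variant, sketched below with bounds chosen to mirror the stddev=0.1 used above, is the usual alternative:

def weight_variable(shape):
    # hypothetical fix: zero-centered uniform initialization in [-0.1, 0.1)
    initial = tf.random_uniform(shape, minval=-0.1, maxval=0.1)
    return tf.Variable(initial)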

Experiment 5: Choosing the Optimizer

1. tf.train.GradientDescentOptimizer


train_step=tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

[Figure: loss curve with GradientDescentOptimizer]

2. tf.train.AdadeltaOptimizer


train_step=tf.train.AdadeltaOptimizer(0.001).minimize(cross_entropy)

[Figure: loss curve with AdadeltaOptimizer]
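
To overlay the two loss curves in a single chart, each run can write its summaries to its own subdirectory under the log root; TensorBoard then shows the runs side by side. A sketch (the directory names are illustrative):

# GradientDescent run:
writer = tf.summary.FileWriter('logs/sgd', sess.graph)
# Adadelta run, in a separate execution of the script:
writer = tf.summary.FileWriter('logs/adadelta', sess.graph)

Start TensorBoard once with tensorboard --logdir=logs to see both runs together.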

