
TensorFlow Linear Regression Example: Fitting a One-Variable Linear Equation

程序员文章站 2024-03-17 21:04:40


The example below fits the linear model y = w·x + b.
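Before the code, it helps to spell out what the optimizer will do. For a single sample, the squared-error loss and the gradient-descent update rules (learning rate η) are:

```latex
L(w, b) = \bigl(y - (wx + b)\bigr)^2
```
```latex
w \leftarrow w - \eta \frac{\partial L}{\partial w} = w + 2\eta\, x\,(y - \hat{y}),
\qquad
b \leftarrow b - \eta \frac{\partial L}{\partial b} = b + 2\eta\,(y - \hat{y})
```

These are exactly the updates `tf.train.GradientDescentOptimizer` applies in the code below.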

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Normalization: rescale the data to zero mean and unit standard deviation
def normalize(X):
    mean = np.mean(X)
    print("mean=", mean)
    std = np.std(X)
    print("std=", std)
    X = (X - mean) / std
    return X

# 1. Placeholders that receive the training data
X = tf.placeholder(tf.float32, name="X")
Y = tf.placeholder(tf.float32, name="Y")

# 2. Initialize the weight and bias variables
b = tf.Variable(0.0, name="b")
w = tf.Variable(0.0, name="w")

# 3. Define the linear prediction model
Y_hat = X * w + b

# 4. Define the loss function (squared error for a single sample)
loss = tf.square(Y - Y_hat, name="loss")

# 5. Choose the optimizer (plain gradient descent)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01, name="optimizer").minimize(loss)

# 6. Declare the variable-initialization op
init_op = tf.global_variables_initializer()
total = []

with tf.Session() as sess:
    X_train = normalize(np.array([1.0, 3.0, 5.0, 7.0, 8.0]))
    Y_train = normalize(np.array([2.0, 1.0, 3.0, 4.0, 6.0]))
    n_sample = len(X_train)
    sess.run(init_op)  # initializing variables needs no feed_dict
    writer = tf.summary.FileWriter("graphs", sess.graph)
    for i in range(100):
        total_loss = 0
        for x, y in zip(X_train, Y_train):
            _, l = sess.run([optimizer, loss], feed_dict={X: x, Y: y})
            total_loss += l
        # average the epoch loss over the sample count, not the last loss value
        total.append(total_loss / n_sample)
        print("epoch {0}: loss {1}".format(i, total_loss / n_sample))
    writer.close()
    b_value, w_value = sess.run([b, w])


Y_pred = X_train * w_value + b_value

# Plot the training data against the fitted regression line
# data points
plt.plot(X_train, Y_train, 'bo', label="real_data")
# fitted line
plt.plot(X_train, Y_pred, label="pred_data")
plt.legend()
plt.show()

# Loss curve over the course of training
plt.plot(total)
plt.show()
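As a sanity check (not part of the original article), the gradient-descent result can be compared against the closed-form least-squares fit on the same normalized data. Because both series are standardized, the intercept should be essentially zero and the slope should equal the Pearson correlation coefficient:

```python
import numpy as np

def normalize(X):
    return (X - np.mean(X)) / np.std(X)

X_train = normalize(np.array([1.0, 3.0, 5.0, 7.0, 8.0]))
Y_train = normalize(np.array([2.0, 1.0, 3.0, 4.0, 6.0]))

# np.polyfit with deg=1 returns [slope, intercept] minimizing squared error
w_ls, b_ls = np.polyfit(X_train, Y_train, 1)
print("closed-form w =", w_ls, "b =", b_ls)
```

If the training loop has converged, `w_value` and `b_value` should be close to these values.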
  • Figure: the fitted line plotted against the training data.
  • Figure: the loss curve over 100 training epochs.
  • Figure: the computation graph as shown in TensorBoard.
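The per-sample gradient step can also be reproduced without TensorFlow, which makes the update arithmetic explicit. This is a plain-NumPy sketch of the same training loop, not part of the original article:

```python
import numpy as np

def normalize(X):
    return (X - np.mean(X)) / np.std(X)

X_train = normalize(np.array([1.0, 3.0, 5.0, 7.0, 8.0]))
Y_train = normalize(np.array([2.0, 1.0, 3.0, 4.0, 6.0]))

w, b, lr = 0.0, 0.0, 0.01
for epoch in range(100):
    for x, y in zip(X_train, Y_train):
        y_hat = w * x + b
        # d(loss)/dw = -2x(y - y_hat), d(loss)/db = -2(y - y_hat)
        w += lr * 2 * x * (y - y_hat)
        b += lr * 2 * (y - y_hat)

# on standardized data, w should approach the correlation coefficient (≈ 0.87)
print("w =", w, "b =", b)
```

Per-sample (stochastic) updates cycle slightly around the least-squares optimum, so the final values land near, not exactly at, the closed-form solution.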
Tags: TensorFlow