
TensorFlow Linear Regression


1. Generate 1,000 Gaussian-distributed data points

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# create an empty list to hold the (x, y) pairs
vector = []
# generate 1,000 random points: x ~ N(0, 0.6), y = 0.2x + 0.4 plus N(0, 0.08) noise
for i in range(1000):
    x = np.random.normal(0,0.6)
    y = 0.2*x + 0.4 + np.random.normal(0,0.08)
    vector.append([x,y])
    
x_data = [v[0] for v in vector]
y_data =[v[1] for v in vector]
plt.scatter(x_data,y_data,c='r')
plt.show()

[Figure: scatter plot of the 1,000 generated data points]

2. Train the model
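Training minimizes the mean squared error between the model's predictions and the observed data. Written out explicitly (a standard restatement of the loss defined in the code below, not extra math from the original post):

$$\mathrm{loss}(w, b) = \frac{1}{N}\sum_{i=1}^{N}\bigl(w x_i + b - y_i\bigr)^2, \qquad N = 1000$$

Gradient descent with learning rate 0.5 then repeatedly nudges w and b in the direction that reduces this loss.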

# initialize the weight w and the bias b with uniform random values in [-1, 1]
w = tf.Variable(tf.random_uniform([1],-1,1),name='w')
b = tf.Variable(tf.random_uniform([1],-1,1),name='b')

# compute the predicted values
y = w * x_data + b

# define the loss function (mean squared error)
loss = tf.reduce_mean(tf.square(y-y_data),name='loss')

# use gradient descent (learning rate 0.5) to update the parameters
optimizer = tf.train.GradientDescentOptimizer(0.5)

# training op: training simply means minimizing the loss
train = optimizer.minimize(loss,name='train')

sess = tf.Session()
sess.run(tf.global_variables_initializer())
# print the initial weight, bias, and loss
print('w = ',sess.run(w),'b = ',sess.run(b),'loss = ',sess.run(loss))
# run 60 training iterations
for i in range(60):
    sess.run(train)
    # print the current weight, bias, and loss every 10 iterations
    if i%10==0:
        print('w = ',sess.run(w),'b = ',sess.run(b),'loss = ',sess.run(loss))

The output is as follows:
w =  [-0.24027228] b =  [-0.346241] loss =  0.64620763
w =  [-0.06341013] b =  [0.40470338] loss =  0.033992454
w =  [0.20420161] b =  [0.40080273] loss =  0.006240865
w =  [0.2064087] b =  [0.40077135] loss =  0.0062389774
w =  [0.20642689] b =  [0.40077108] loss =  0.0062389774
w =  [0.20642704] b =  [0.40077108] loss =  0.006238977
w =  [0.20642704] b =  [0.40077108] loss =  0.006238977
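
As a quick sanity check (not part of the original post), a closed-form least-squares fit computed directly with NumPy should land very close to the values TensorFlow converged to, and close to the true parameters w = 0.2, b = 0.4 used to generate the data:

# closed-form least-squares fit for comparison (illustrative addition)
slope, intercept = np.polyfit(x_data, y_data, 1)    # degree-1 fit returns [slope, intercept]
print('closed-form: w =', slope, 'b =', intercept)  # expect values near 0.2 and 0.4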

3. Plot the fitted line

plt.scatter(x_data,y_data,c='r')
plt.plot(x_data,sess.run(w*x_data+b))
plt.show()

[Figure: scatter plot of the data with the fitted regression line]
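
Note that the code above uses the TensorFlow 1.x graph/session API (tf.Session, tf.train.GradientDescentOptimizer), which is deprecated in TensorFlow 2.x. A minimal sketch of the same fit under TensorFlow 2.x, reusing x_data and y_data from section 1, could look like this (an illustrative rewrite with tf.GradientTape, not from the original post):

import tensorflow as tf

# TF 2.x version: eager execution, SGD optimizer, gradients via tf.GradientTape
w = tf.Variable(tf.random.uniform([1], -1.0, 1.0), name='w')
b = tf.Variable(tf.random.uniform([1], -1.0, 1.0), name='b')
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)

x = tf.constant(x_data, dtype=tf.float32)
y_true = tf.constant(y_data, dtype=tf.float32)

for step in range(60):
    with tf.GradientTape() as tape:
        y_pred = w * x + b                                  # linear model
        loss = tf.reduce_mean(tf.square(y_pred - y_true))   # mean squared error
    grads = tape.gradient(loss, [w, b])                     # gradients w.r.t. w and b
    optimizer.apply_gradients(zip(grads, [w, b]))           # one gradient-descent step
    if step % 10 == 0:
        print('w =', w.numpy(), 'b =', b.numpy(), 'loss =', loss.numpy())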