
Simplest Linear Regression on Keras Framework


It"s a regression problem with one feature inputted.

I wrote this script for fun and in preparation for the upcoming mathematical modeling contest (and also simply to complete the task of a daily blog ✌( •̀ ω •́ )y). It didn't take a lot of time (which means I can still get some sleep...).

I accomplished it entirely by myself, which means no code was copied from GitHub. Well, great progress!

I committed it to my own GitHub repository, though it is not well organized.

Import Packages

import numpy as np
from keras.models import Sequential 
from keras.layers import Dense 
import matplotlib.pyplot as plt 
print ("Import finished")

Because importing Keras takes a bit of extra time, I wanted a hint that the packages had been imported successfully.

Generating Data

Shuffle the points so that the later train/test split is random, and add some noise. The underlying relation is Y = 3X (slope 3, intercept 0) plus Gaussian noise with standard deviation 0.33:

X = np.linspace(0, 2, 300) 
np.random.shuffle(X)
Y = 3 * X + np.random.randn(*X.shape) * 0.33

Data visualization

plt.scatter(X, Y)
plt.show()
print(X[:10], "\n", Y[:10])

Define Train and Test Data

X_train, Y_train = X[:260], Y[:260]  # first 260 samples for training
X_test, Y_test = X[260:], Y[260:]    # remaining 40 samples for testing
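
Since X has already been shuffled, slicing gives a random split. As an alternative (purely a hypothetical variant, assuming scikit-learn is installed), the shuffling and splitting can be delegated to train_test_split:

from sklearn.model_selection import train_test_split

# 40 test samples out of 300, matching the manual 260/40 split above
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=40, random_state=0)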

Establish LR Model
The input and output dimensions are both set to 1:

model = Sequential()
model.add(Dense(units=1, kernel_initializer="uniform", activation="linear", input_dim=1))
weights = model.layers[0].get_weights()  # [kernel, bias]
w_init = weights[0][0][0]  # kernel has shape (1, 1)
b_init = weights[1][0]     # bias has shape (1,)
print("Linear regression model is initialized with weights w: %.2f, b: %.2f" % (w_init, b_init))

This lets us see the default coefficients before training.

Choose Loss-Function and Optimizer
Define the loss as mean squared error and choose stochastic gradient descent as the optimizer:

model.compile(loss="mse", optimizer="sgd")
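
If you want control over the learning rate, the optimizer can be passed as an object instead of a string. A minimal sketch, assuming the standalone Keras API where the rate argument is lr (in newer tf.keras it is learning_rate):

from keras.optimizers import SGD

# equivalent to optimizer="sgd", but with an explicit learning rate
model.compile(loss="mse", optimizer=SGD(lr=0.01))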

Train Model
Run 500 epochs of SGD:

model.fit(X_train, Y_train, epochs=500, verbose=1)
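
To watch how the loss decreases, the fit call above can also capture its per-epoch history, which can then be plotted (a small variation; not in the original script):

history = model.fit(X_train, Y_train, epochs=500, verbose=0)

# plot the training loss curve over the 500 epochs
plt.plot(history.history["loss"])
plt.xlabel("epoch")
plt.ylabel("mse loss")
plt.show()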

The loss eventually stabilizes at around 0.0976.
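
That value is roughly what you would expect: the noise added during data generation has standard deviation 0.33, so even a perfect line leaves an MSE near 0.33² ≈ 0.109. A quick sanity check against the true line Y = 3X:

# empirical MSE of the training targets around the true (noise-free) line
print(np.mean((Y_train - 3 * X_train) ** 2))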

Test Model

Y_pred = model.predict(X_test)
plt.scatter(X_test, Y_test)
plt.plot(X_test, Y_pred)
plt.show()
weights = model.layers[0].get_weights()
w_final = weights[0][0][0]
b_final = weights[1][0]
print("Linear regression model is trained with weights w: %.2f, b: %.2f" % (w_final, b_final))

The final weights are w = 3.00 and b = 0.03, very close to the true values (w = 3.00, b = 0.00); the small offset of 0.03 in the bias is probably caused by the noise.
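
Besides inspecting the weights, the test error can be quantified directly with model.evaluate (a small addition, not in the original script):

# returns the scalar MSE loss on the held-out test set
test_mse = model.evaluate(X_test, Y_test, verbose=0)
print("Test MSE: %.4f" % test_mse)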

Use model

Input 1.66 as the feature:

a = np.array([1.66])
pred = model.predict(a)
print(pred)
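
As a sanity check (my addition, not part of the original script), the Keras result can be compared with NumPy's closed-form least-squares fit, which should give nearly the same slope and intercept:

# np.polyfit with degree 1 returns [slope, intercept]
w_ols, b_ols = np.polyfit(X_train, Y_train, 1)
print("OLS fit w: %.2f, b: %.2f" % (w_ols, b_ols))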

Tomorrow I will turn this script into a multi-dimensional regression model that can solve multi-feature regression problems.
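
For reference, a minimal sketch of what that multi-feature version might look like (hypothetical data with three features and made-up coefficients, not the actual follow-up script):

# 300 samples with 3 features; true coefficients are 1, 2, 3
X_multi = np.random.rand(300, 3)
Y_multi = X_multi @ np.array([1.0, 2.0, 3.0]) + np.random.randn(300) * 0.33

multi_model = Sequential()
multi_model.add(Dense(units=1, input_dim=3))  # 3 inputs, 1 output, linear by default
multi_model.compile(loss="mse", optimizer="sgd")
multi_model.fit(X_multi, Y_multi, epochs=500, verbose=0)
print(multi_model.layers[0].get_weights())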
