
TensorFlow Learning Notes (8): A Recurrent Neural Network (RNN) on the MNIST Data


Abstract: Earlier posts in this series followed the official TensorFlow tutorials and modeled the MNIST dataset with softmax regression and a CNN. For completeness, this post applies an RNN model to the MNIST data; the specific RNN used is an LSTM.

Preface

The input data for this post is MNIST, short for Modified National Institute of Standards and Technology: a dataset of scanned handwritten digits, together with a label for each file, collected by that institute and modified so that it is easy for machine-learning algorithms to read. The dataset can be downloaded from the website of the renowned Professor Yann LeCun.

Earlier posts in this series followed the official TensorFlow tutorials and modeled the MNIST dataset with softmax regression and a CNN. For completeness, this post applies an RNN to the MNIST data; the specific RNN used is an LSTM.

For the theoretical background on RNNs and LSTMs, see this article.

Code
# coding: utf-8
# @author: 陈水平
# @date:2017-02-14
# 

# In[1]:

import tensorflow as tf
import numpy as np


# In[2]:

sess = tf.InteractiveSession()


# In[3]:

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("mnist/", one_hot=True)


# In[4]:

learning_rate = 0.001
batch_size = 128

n_input = 28    # each image row of 28 pixels is one input vector
n_steps = 28    # 28 rows per image, i.e. 28 time steps
n_hidden = 128  # number of hidden units in the LSTM
n_classes = 10  # digits 0-9

x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
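
# Each 28x28 image is read row by row: one row of 28 pixels is one time step,
# so an image becomes a sequence of n_steps vectors of length n_input.
# A minimal sketch of that reshape (the training loop below does the same):
#   batch_x, batch_y = mnist.train.next_batch(batch_size)      # batch_x: (128, 784)
#   batch_x = batch_x.reshape((batch_size, n_steps, n_input))  # (128, 28, 28)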


# In[5]:

def RNN(x, weight, biases):
    # x shape: (batch_size, n_steps, n_input)
    # desired shape: list of n_steps tensors, each of shape (batch_size, n_input)
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(0, n_steps, x)
    outputs = list()
    lstm = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0)
    # initial (c, h) state: one zero tensor per example in the batch
    state = (tf.zeros([batch_size, n_hidden]),) * 2
    with tf.variable_scope("myrnn2") as scope:
        for i in range(n_steps):
            if i > 0:
                scope.reuse_variables()
            output, state = lstm(x[i], state)
            outputs.append(output)
    final = tf.matmul(outputs[-1], weight) + biases
    return final
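
# Usage sketch for this first version (it is not executed here, because the
# definition in the next cell reuses the name RNN and replaces it). The caller
# would create the output-layer parameters itself, e.g. with these
# hypothetical names:
#   weight = tf.Variable(tf.truncated_normal([n_hidden, n_classes]))
#   biases = tf.Variable(tf.zeros([n_classes]))
#   pred = RNN(x, weight, biases)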


# In[6]:

def RNN(x, n_steps, n_input, n_hidden, n_classes):
    # Parameters:
    # Input gate: input, previous output, and bias
    ix = tf.Variable(tf.truncated_normal([n_input, n_hidden], -0.1, 0.1))
    im = tf.Variable(tf.truncated_normal([n_hidden, n_hidden], -0.1, 0.1))
    ib = tf.Variable(tf.zeros([1, n_hidden]))
    # Forget gate: input, previous output, and bias
    fx = tf.Variable(tf.truncated_normal([n_input, n_hidden], -0.1, 0.1))
    fm = tf.Variable(tf.truncated_normal([n_hidden, n_hidden], -0.1, 0.1))
    fb = tf.Variable(tf.zeros([1, n_hidden]))
    # Memory cell: input, state, and bias
    cx = tf.Variable(tf.truncated_normal([n_input, n_hidden], -0.1, 0.1))
    cm = tf.Variable(tf.truncated_normal([n_hidden, n_hidden], -0.1, 0.1))
    cb = tf.Variable(tf.zeros([1, n_hidden]))
    # Output gate: input, previous output, and bias
    ox = tf.Variable(tf.truncated_normal([n_input, n_hidden], -0.1, 0.1))
    om = tf.Variable(tf.truncated_normal([n_hidden, n_hidden], -0.1, 0.1))
    ob = tf.Variable(tf.zeros([1, n_hidden]))
    # Classifier weights and biases
    w = tf.Variable(tf.truncated_normal([n_hidden, n_classes]))
    b = tf.Variable(tf.zeros([n_classes]))

    # Definition of the cell computation
    def lstm_cell(i, o, state):
        input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
        forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
        update = tf.tanh(tf.matmul(i, cx) + tf.matmul(o, cm) + cb)
        state = forget_gate * state + input_gate * update
        output_gate = tf.sigmoid(tf.matmul(i, ox) +  tf.matmul(o, om) + ob)
        return output_gate * tf.tanh(state), state
    
    # Unrolled LSTM loop
    outputs = list()
    # running cell state and output for the unrolled loop; fixed to batch_size
    # examples and marked non-trainable so the optimizer does not update them
    state = tf.Variable(tf.zeros([batch_size, n_hidden]), trainable=False)
    output = tf.Variable(tf.zeros([batch_size, n_hidden]), trainable=False)
    
    # x shape: (batch_size, n_steps, n_input)
    # desired shape: list of n_steps with element shape (batch_size, n_input)
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, n_input])
    x = tf.split(0, n_steps, x)
    for i in x:
        output, state = lstm_cell(i, output, state)
        outputs.append(output)
    logits = tf.matmul(outputs[-1], w) + b
    return logits
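
# Note: `state` and `output` are created with shape [batch_size, n_hidden], so
# this graph only accepts feeds of exactly batch_size examples; that is why the
# evaluation cell at the end uses test_len = batch_size.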


# In[7]:

pred = RNN(x, n_steps, n_input, n_hidden, n_classes)

# Note: the positional call softmax_cross_entropy_with_logits(logits, labels)
# follows the pre-1.0 TensorFlow API; newer versions require keyword arguments.
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initializing the variables
init = tf.global_variables_initializer()


# In[8]:

# Launch the graph
sess.run(init)
for step in range(20000):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    batch_x = batch_x.reshape((batch_size, n_steps, n_input))
    sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})

    if step % 50 == 0:
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
        print "Iter " + str(step) + ", Minibatch Loss= " +               "{:.6f}".format(loss) + ", Training Accuracy= " +               "{:.5f}".format(acc)
print "Optimization Finished!"


# In[9]:

# Calculate accuracy for 128 mnist test images
test_len = batch_size
test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
test_label = mnist.test.labels[:test_len]
print "Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_data, y: test_label})

The output is as follows:

Iter 0, Minibatch Loss= 2.540429, Training Accuracy= 0.07812
Iter 50, Minibatch Loss= 2.423611, Training Accuracy= 0.06250
Iter 100, Minibatch Loss= 2.318830, Training Accuracy= 0.13281
Iter 150, Minibatch Loss= 2.276640, Training Accuracy= 0.13281
Iter 200, Minibatch Loss= 2.276727, Training Accuracy= 0.12500
Iter 250, Minibatch Loss= 2.267064, Training Accuracy= 0.16406
Iter 300, Minibatch Loss= 2.234139, Training Accuracy= 0.19531
Iter 350, Minibatch Loss= 2.295060, Training Accuracy= 0.12500
Iter 400, Minibatch Loss= 2.261856, Training Accuracy= 0.16406
Iter 450, Minibatch Loss= 2.220284, Training Accuracy= 0.17969
Iter 500, Minibatch Loss= 2.276015, Training Accuracy= 0.13281
Iter 550, Minibatch Loss= 2.220499, Training Accuracy= 0.14062
Iter 600, Minibatch Loss= 2.219574, Training Accuracy= 0.11719
Iter 650, Minibatch Loss= 2.189177, Training Accuracy= 0.25781
Iter 700, Minibatch Loss= 2.195167, Training Accuracy= 0.19531
Iter 750, Minibatch Loss= 2.226459, Training Accuracy= 0.18750
Iter 800, Minibatch Loss= 2.148620, Training Accuracy= 0.23438
Iter 850, Minibatch Loss= 2.122925, Training Accuracy= 0.21875
Iter 900, Minibatch Loss= 2.065122, Training Accuracy= 0.24219
...
Iter 19350, Minibatch Loss= 0.001304, Training Accuracy= 1.00000
Iter 19400, Minibatch Loss= 0.000144, Training Accuracy= 1.00000
Iter 19450, Minibatch Loss= 0.000907, Training Accuracy= 1.00000
Iter 19500, Minibatch Loss= 0.002555, Training Accuracy= 1.00000
Iter 19550, Minibatch Loss= 0.002018, Training Accuracy= 1.00000
Iter 19600, Minibatch Loss= 0.000853, Training Accuracy= 1.00000
Iter 19650, Minibatch Loss= 0.001035, Training Accuracy= 1.00000
Iter 19700, Minibatch Loss= 0.007034, Training Accuracy= 0.99219
Iter 19750, Minibatch Loss= 0.000608, Training Accuracy= 1.00000
Iter 19800, Minibatch Loss= 0.002913, Training Accuracy= 1.00000
Iter 19850, Minibatch Loss= 0.003484, Training Accuracy= 1.00000
Iter 19900, Minibatch Loss= 0.005693, Training Accuracy= 1.00000
Iter 19950, Minibatch Loss= 0.001904, Training Accuracy= 1.00000
Optimization Finished!

Testing Accuracy: 0.992188
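
The test accuracy above is computed on only the first 128 test images, because the unrolled graph is fixed to batches of batch_size examples. A minimal sketch of averaging the accuracy over the whole test set in chunks of batch_size (the remaining 10000 % 128 images are simply dropped; the variable names below are illustrative):

n_test_batches = len(mnist.test.images) // batch_size
test_accs = []
for i in range(n_test_batches):
    test_data = mnist.test.images[i*batch_size:(i+1)*batch_size].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[i*batch_size:(i+1)*batch_size]
    test_accs.append(sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
print "Full test set accuracy:", np.mean(test_accs)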
