Abstract: The TensorFlow Debugger helps track down model errors during training and inference. We run a deliberately buggy MNIST training script, use the debugger to find where it goes wrong, and fix it. The debugger wraps each run() call with a terminal-based user interface for controlling execution and inspecting the graph's internal state. A tensor filter can be registered to check whether intermediate tensors in the graph contain bad values, and for remote debugging, intermediate tensors and the runtime graph can be dumped to a shared directory.
TensorFlow Debugger (tfdbg) is TensorFlow's dedicated debugger. Using breakpoints and a graphical view of the live data flow, it makes the internal structure and state of a running TensorFlow graph visible, which helps track down model errors during training and inference. See https://www.tensorflow.org/pr... .
The most common bad values are not-a-number (nan) and infinity (inf). tfdbg offers a command line interface (CLI).
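As a minimal illustration of how such values arise (this sketch is ours, not from the original text): log(0) yields -inf, and multiplying that inf by 0 yields nan, which is exactly the failure chain in the example below.

import tensorflow as tf

sess = tf.Session()
log_zero = tf.log(tf.constant(0.0))            # log(0) -> -inf
zero_times_inf = tf.constant(0.0) * log_zero   # 0 * -inf -> nan
print(sess.run([log_zero, zero_times_inf]))    # [-inf, nan]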
Debugger example. We run a buggy MNIST training script, use the TensorFlow Debugger to find where it goes wrong, and fix it. See https://github.com/tensorflow... .
First, run it directly:
python -m tensorflow.python.debug.examples.debug_mnist
The accuracy rises after the first training step, then stays stuck at a low level.
With the TensorFlow Debugger, a terminal-based user interface (UI) before and after each run() call lets you control execution and inspect the graph's internal state.
from tensorflow.python import debug as tf_debug

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)
This registers the tensor filter has_inf_or_nan, which checks whether any intermediate tensor in the graph contains nan or inf values.
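You are not limited to the built-in filter: a filter is any function that takes a (datum, tensor) pair and returns a bool. A hedged sketch (the name has_negative and its logic are ours, not from the original text) that flags tensors containing negative entries:

import numpy as np

def has_negative(datum, tensor):
  # datum carries metadata (node name, output slot, timestamp);
  # tensor is the dumped value as a numpy array.
  del datum  # unused in this simple filter
  return (isinstance(tensor, np.ndarray) and
          np.issubdtype(tensor.dtype, np.floating) and
          bool((tensor < 0.0).any()))

sess.add_tensor_filter("has_negative", has_negative)
# Inside the CLI: tfdbg> run -f has_negative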
Enable debug mode:
python -m tensorflow.python.debug.examples.debug_mnist --debug
Or, if running the script directly:
python debug_mnist.py --debug=True
This opens the run-start UI. Type interactive commands at the tfdbg> prompt; once run() completes, you land in the run-end UI. To run 10 times in a row:
tfdbg> run -t 10
To find the first nan or inf value in the graph:
tfdbg> run -f has_inf_or_nan
The first line, highlighted in gray, indicates that tfdbg stopped immediately after a run() call that generated intermediate tensors matching the specified filter has_inf_or_nan. On the 4th run() call, 36 intermediate tensors contain inf or nan values, first appearing in cross_entropy/Log:0. Click cross_entropy/Log:0 in the list, then click the underlined node_info menu item to see the node's input tensor and check whether it contains zeros:
tfdbg> pt softmax/Softmax:0
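For large tensors, pt also accepts numpy-style slicing, and the screen output can be searched with a regex via the / command (behavior as we recall it from the tfdbg CLI of that era; verify against your TensorFlow version):

tfdbg> pt softmax/Softmax:0[0:10,:]
tfdbg> /0\.000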
Use the ni command with the -t flag to trace the node back to the line of source code that created it:
tfdbg> ni -t cross_entropy/Log
The offending code:
diff = -(y_ * tf.log(y))
The fix is to clip the input of tf.log so it can never see a zero:
diff = -(y_ * tf.log(tf.clip_by_value(y, 1e-8, 1.0)))

For reference, the complete (buggy) debug_mnist.py:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.python import debug as tf_debug

IMAGE_SIZE = 28
HIDDEN_SIZE = 500
NUM_LABELS = 10
RAND_SEED = 42


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir,
                                    one_hot=True,
                                    fake_data=FLAGS.fake_data)

  def feed_dict(train):
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(FLAGS.train_batch_size,
                                      fake_data=FLAGS.fake_data)
    else:
      xs, ys = mnist.test.images, mnist.test.labels
    return {x: xs, y_: ys}

  sess = tf.InteractiveSession()

  # Create the MNIST neural network graph.

  # Input placeholders.
  with tf.name_scope("input"):
    x = tf.placeholder(
        tf.float32, [None, IMAGE_SIZE * IMAGE_SIZE], name="x-input")
    y_ = tf.placeholder(tf.float32, [None, NUM_LABELS], name="y-input")

  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1, seed=RAND_SEED)
    return tf.Variable(initial)

  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

  def nn_layer(input_tensor, input_dim, output_dim, layer_name,
               act=tf.nn.relu):
    """Reusable code for making a simple neural net layer."""
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope("weights"):
        weights = weight_variable([input_dim, output_dim])
      with tf.name_scope("biases"):
        biases = bias_variable([output_dim])
      with tf.name_scope("Wx_plus_b"):
        preactivate = tf.matmul(input_tensor, weights) + biases
      activations = act(preactivate)
      return activations

  hidden = nn_layer(x, IMAGE_SIZE**2, HIDDEN_SIZE, "hidden")
  logits = nn_layer(hidden, HIDDEN_SIZE, NUM_LABELS, "output", tf.identity)
  y = tf.nn.softmax(logits)

  with tf.name_scope("cross_entropy"):
    # The following line is the culprit of the bad numerical values that appear
    # during training of this graph. Log of zero gives inf, which is first seen
    # in the intermediate tensor "cross_entropy/Log:0" during the 4th run()
    # call. A multiplication of the inf values with zeros leads to nans,
    # which is first seen in "cross_entropy/mul:0".
    #
    # You can use the built-in, numerically-stable implementation to fix this
    # issue:
    #   diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)
    diff = -(y_ * tf.log(y))
    with tf.name_scope("total"):
      cross_entropy = tf.reduce_mean(diff)

  with tf.name_scope("train"):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope("accuracy"):
    with tf.name_scope("correct_prediction"):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope("accuracy"):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

  sess.run(tf.global_variables_initializer())

  if FLAGS.debug:
    sess = tf_debug.LocalCLIDebugWrapperSession(sess, ui_type=FLAGS.ui_type)

  # At this point, sess is a debug wrapper around the actual Session if
  # FLAGS.debug is true. In that case, calling run() will launch the CLI.
  for i in range(FLAGS.max_steps):
    acc = sess.run(accuracy, feed_dict=feed_dict(False))
    print("Accuracy at step %d: %s" % (i, acc))

    sess.run(train_step, feed_dict=feed_dict(True))


if __name__ == "__main__":
  parser = argparse.ArgumentParser()
  parser.register("type", "bool", lambda v: v.lower() == "true")
  parser.add_argument(
      "--max_steps",
      type=int,
      default=10,
      help="Number of steps to run trainer.")
  parser.add_argument(
      "--train_batch_size",
      type=int,
      default=100,
      help="Batch size used during training.")
  parser.add_argument(
      "--learning_rate",
      type=float,
      default=0.025,
      help="Initial learning rate.")
  parser.add_argument(
      "--data_dir",
      type=str,
      default="/tmp/mnist_data",
      help="Directory for storing data")
  parser.add_argument(
      "--ui_type",
      type=str,
      default="curses",
      help="Command-line user interface type (curses | readline)")
  parser.add_argument(
      "--fake_data",
      type="bool",
      nargs="?",
      const=True,
      default=False,
      help="Use fake MNIST data for unit testing")
  parser.add_argument(
      "--debug",
      type="bool",
      nargs="?",
      const=True,
      default=False,
      help="Use debugger to track down bad values during training")
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
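As the comment embedded in the listing points out, an alternative to clipping is TensorFlow's built-in, numerically stable cross-entropy, which works on the logits rather than on the softmax output:

diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)

Because it never materializes log(softmax(x)) explicitly, no zero is ever passed to a log.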
Remote debugging uses tfdbg offline_analyzer. Set up a shared directory that both the local and the remote machine can access. Use the debug_utils.watch_graph function to set the runtime options; when session.run() executes, intermediate tensors and the runtime graph are dumped to the shared directory. On a local terminal, use tfdbg offline_analyzer to load and inspect the data in the shared directory:
python -m tensorflow.python.debug.cli.offline_analyzer --dump_dir=/home/somebody/tfdbg_dumps_1
Source:
from tensorflow.python.debug.lib import debug_utils

# Graph construction and Session creation omitted.
run_options = tf.RunOptions()
debug_utils.watch_graph(
    run_options,
    session.graph,
    # Location of the shared directory; if multiple clients call run(),
    # use a different shared directory for each of them.
    debug_urls=["file:///home/somebody/tfdbg_dumps_1"])
session.run(fetches, feed_dict=feeds, options=run_options)
Alternatively, the session wrapper DumpingDebugWrapperSession produces an accumulating set of dump files in the shared directory as training runs:
from tensorflow.python import debug as tf_debug

sess = tf_debug.DumpingDebugWrapperSession(
    sess, "/home/somebody/tfdbg_dumps_1", watch_fn=my_watch_fn)
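my_watch_fn is left undefined above. As a hedged sketch (the function name and filter choices are ours, not from the original text): a watch_fn maps the fetches and feeds of each run() call to a tf_debug.WatchOptions that narrows what gets dumped, so the shared directory does not fill up with every tensor.

def my_watch_fn(fetches, feed_dict):
  # Unused here; a real filter might vary with the fetched ops.
  del fetches, feed_dict
  # Dump only tensors from nodes matching the regex; verify the field
  # names against your TensorFlow version.
  return tf_debug.WatchOptions(
      debug_ops=["DebugIdentity"],
      node_name_regex_whitelist=r"cross_entropy.*")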
References:
《TensorFlow技术解析与实战》
Machine learning job referrals in Shanghai are welcome; my WeChat: qingxingfengzi.