Notes, Code, and Errata for 《TensorFlow:实战Google深度学习框架》 - Chapter 8 Recurrent Neural Networks - 1 - Forward Propagation

Probably for the sake of simplicity, this example is written with NumPy only and does not use TensorFlow; a rough TensorFlow counterpart is sketched at the end of this note for comparison.

The flow of the computation can be seen in the figure below.
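Concretely, for time step t the cell computes

    state_t = tanh(state_{t-1} · w_cell_state + X[t] · w_cell_input + b_cell)
    output_t = state_t · w_output + b_output

which is exactly what the loop below implements: the hidden state from the previous step is fed back in, combined with the current input, passed through tanh, and then projected to the output by a fully connected layer.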


import numpy as np

# 1. Define the RNN parameters
X = [1, 2]
state = [0.0, 0.0]  # the initial state is [0.0, 0.0]; it is updated once the loop starts
# Define the weights for the different input parts separately to make them easier to work with
w_cell_state = np.asarray([[0.1, 0.2], [0.3, 0.4]])
w_cell_input = np.asarray([0.5, 0.6])
b_cell = np.asarray([0.1, -0.1])

# Define the parameters of the fully connected layer used for the output
w_output = np.asarray([[1.0], [2.0]])
b_output = 0.1

# 2. Run the forward propagation
# Step through the forward propagation of the recurrent neural network in time order
for i in range(len(X)):
    # Compute the fully connected layer inside the recurrent cell
    before_activation = np.dot(state, w_cell_state) + X[i] * w_cell_input + b_cell
    state = np.tanh(before_activation)
    final_output = np.dot(state, w_output) + b_output
    print("iteration round:", i+1)
    print("before activation: ", before_activation)
    print("state: ", state)
    print("output: ", final_output)

Output:

iteration round: 1
before activation:  [ 0.6  0.5]
state:  [ 0.53704957  0.46211716]
output:  [ 1.56128388]
iteration round: 2
before activation:  [ 1.2923401   1.39225678]
state:  [ 0.85973818  0.88366641]
output:  [ 2.72707101]
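As a quick sanity check, the first iteration can be reproduced by hand: the initial state is [0.0, 0.0], so before_activation = 1 * [0.5, 0.6] + [0.1, -0.1] = [0.6, 0.5]; applying tanh gives state ≈ [0.5370, 0.4621], and the output is 0.5370 * 1.0 + 0.4621 * 2.0 + 0.1 ≈ 1.5613, matching the printed values.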

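For comparison, here is my own rough sketch (not code from the book) of the same forward pass written with the TensorFlow 1.x RNN API. BasicRNNCell implements the same tanh recurrence, but it creates its own randomly initialized weights, so the printed numbers will not match the hand-set weights above unless the cell's kernel is assigned explicitly.

import numpy as np
import tensorflow as tf

# Sketch only: TF 1.x API, weights are randomly initialized
x = tf.placeholder(tf.float32, shape=[1, 2, 1])       # batch=1, 2 time steps, 1 feature per step
cell = tf.nn.rnn_cell.BasicRNNCell(num_units=2)       # state_t = tanh(x_t W_x + state_{t-1} W_s + b)
outputs, final_state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
logits = tf.layers.dense(outputs, units=1)            # fully connected output layer applied to every state

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, feed_dict={x: [[[1.0], [2.0]]]}))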