layers.lstm error in the API documentation

Running PaddlePaddle's lstm example raises an error:

Traceback (most recent call last):
File "/home/c/my/competition/DuEE_baseline/DuEE-PaddleHub/tesy1.py", line 18, in
rnn_out, last_h, last_c = layers.lstm(emb, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=dropout_prob)
File "/home/c/anaconda3/envs/python27/lib/python2.7/site-packages/paddle/fluid/layers/rnn.py", line 2163, in lstm
persistable=True, type=core.VarDesc.VarType.RAW, stop_gradient=True)
NameError: global name 'core' is not defined

Process finished with exit code 1
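From the traceback, the failure happens inside Paddle's own paddle/fluid/layers/rnn.py: that module refers to a global name core that it never imports, so the problem is in the framework rather than in the calling code. Before reporting it, it helps to record the exact environment. A minimal check, assuming the Paddle 1.x fluid API:

import paddle
import paddle.fluid as fluid

# Record the installed version and whether this build has CUDA support;
# layers.lstm is the cuDNN-backed LSTM, so the build type is relevant here.
print(paddle.__version__)
print(fluid.core.is_compiled_with_cuda())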

Here is the code:

import paddle.fluid as fluid
import paddle.fluid.layers as layers

emb_dim = 256
vocab_size = 10000
data = fluid.layers.data(name='x', shape=[-1, 100, 1], dtype='int64')
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
batch_size = 20
max_len = 100
dropout_prob = 0.2
seq_len = 100
hidden_size = 150
num_layers = 1
init_h = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)
init_c = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)

rnn_out, last_h, last_c = layers.lstm(emb, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=dropout_prob)
rnn_out.shape  # (-1, 100, 150)
last_h.shape  # (1, 20, 150)
last_c.shape  # (1, 20, 150)

Could anyone kindly help with this? O(∩_∩)O Thanks!

All comments (2)
thinc
#2 Replied 2020-04

Take a look at dynamic_lstm for reference:

import paddle.fluid as fluid

emb_dim = 256
vocab_size = 10000
hidden_dim = 512

data = fluid.layers.data(name='x', shape=[1], dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
forward_proj = fluid.layers.fc(input=emb, size=hidden_dim * 4, bias_attr=False)

forward, cell = fluid.layers.dynamic_lstm(input=forward_proj, size=hidden_dim * 4, use_peepholes=False)
forward.shape  # (-1, 512)
cell.shape  # (-1, 512)
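Unlike layers.lstm, dynamic_lstm consumes LoD (variable-length) sequence input rather than a padded [batch, seq, 1] tensor, and its input must already be the hidden_dim * 4 gate projection, which is what the fc layer above provides. A rough feeding sketch built on the graph above (the place, ids, and sequence lengths are made up for illustration; depending on the Paddle version, embedding may require int64 ids, in which case the data layer's dtype needs to change too):

import numpy as np

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Two sequences of lengths 3 and 5, packed into a single LoD tensor of 8 ids.
ids = np.random.randint(0, vocab_size, size=(8, 1)).astype('int32')
x = fluid.create_lod_tensor(ids, [[3, 5]], place)
fwd, c = exe.run(feed={'x': x}, fetch_list=[forward, cell], return_numpy=False)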
星光ld1
#3 Replied 2020-05

I'd suggest opening an issue on the official GitHub repo for this. I tried the official example myself and it also fails in a Paddle 1.7.1 environment. The official example is as follows:

import paddle.fluid as fluid
import paddle.fluid.layers as layers

emb_dim = 256
vocab_size = 10000
data = fluid.layers.data(name='x', shape=[-1, 100, 1], dtype='int32')
emb = fluid.layers.embedding(input=data, size=[vocab_size, emb_dim], is_sparse=True)
batch_size = 20
max_len = 100
dropout_prob = 0.2
seq_len = 100
hidden_size = 150
num_layers = 1
init_h = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)
init_c = layers.fill_constant([num_layers, batch_size, hidden_size], 'float32', 0.0)

rnn_out, last_h, last_c = layers.lstm(emb, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=dropout_prob)
rnn_out.shape  # (-1, 100, 150)
last_h.shape  # (1, 20, 150)
last_c.shape  # (1, 20, 150)
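Opening an issue is the right long-term fix. As a purely stopgap workaround (an assumption on my part, not an official fix), the NameError only means the rnn module's namespace is missing the name core, so injecting it manually before calling layers.lstm may get past this particular error; on a CPU-only build the cuDNN-based layers.lstm can still fail at a later step:

import paddle.fluid as fluid
import paddle.fluid.layers.rnn as rnn_module

# Hypothetical monkey-patch: give the module the 'core' global it expects.
rnn_module.core = fluid.core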