summary() in paddle.Model()
I'd like to ask a question! The official API docs say the parameters of summary() are optional, but when I called summary() while studying, to print the network's basic structure and parameter info, I got the following error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model_cnn.summary()

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in summary(self, input_size, dtype)
   1879         else:
   1880             _input_size = self._inputs
-> 1881         return summary(self.network, _input_size, dtype)
   1882
   1883     def _verify_spec(self, specs, shapes=None, dtypes=None, is_input=False):

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary(net, input_size, dtypes)
    147
    148     _input_size = _check_input(_input_size)
--> 149     result, params_info = summary_string(net, _input_size, dtypes)
    150     print(result)
    151

<decorator-gen> in summary_string(model, input_size, dtypes)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py in _decorate_function(func, *args, **kwargs)
    313     def _decorate_function(func, *args, **kwargs):
    314         with self:
--> 315             return func(*args, **kwargs)
    316
    317     @decorator.decorator

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary_string(model, input_size, dtypes)
    274
    275     # make a forward pass
--> 276     model(*x)
    277
    278     # remove these hooks

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    900             self._built = True
    901
--> 902         outputs = self.forward(*inputs, **kwargs)
    903
    904         for forward_post_hook in self._forward_post_hooks.values():

<ipython-input> in forward(self, text, seq_len)
     32     def forward(self, text, seq_len=None):
     33         # Shape: (batch_size, num_tokens, embedding_dim)
---> 34         embedded_text = self.embedder(text)
     35         print('after word-embeding:', embedded_text.shape)
     36         # Shape: (batch_size, len(ngram_filter_sizes)*num_filter)

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    900             self._built = True
    901
--> 902         outputs = self.forward(*inputs, **kwargs)
    903
    904         for forward_post_hook in self._forward_post_hooks.values():

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/common.py in forward(self, x)
   1361             padding_idx=self._padding_idx,
   1362             sparse=self._sparse,
-> 1363             name=self._name)
   1364
   1365     def extra_repr(self):

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/functional/input.py in embedding(x, weight, padding_idx, sparse, name)
    198         return core.ops.lookup_table_v2(
    199             weight, x, 'is_sparse', sparse, 'is_distributed', False,
--> 200             'remote_prefetch', False, 'padding_idx', padding_idx)
    201     else:
    202         helper = LayerHelper('embedding', **locals())

ValueError: (InvalidArgument) Tensor holds the wrong type, it holds float, but desires to be int64_t.
  [Hint: Expected valid == true, but received valid:0 != true:1.] (at /paddle/paddle/fluid/framework/tensor_impl.h:33)
  [operator < lookup_table_v2 > error]
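The traceback ends in lookup_table_v2 complaining that it got float data where it expected int64: an embedding lookup is just row indexing into the weight matrix, so the token ids must be integers, while summary() builds its dummy input with a float dtype by default. A minimal NumPy sketch of the same constraint (the vocabulary and shapes here are illustrative, not from Paddle):

```python
import numpy as np

# An embedding table is a (vocab_size, emb_dim) weight matrix; looking
# tokens up is plain integer row indexing, which is why float indices
# are rejected, just like lookup_table_v2 rejects a float dummy input.
vocab_size, emb_dim = 27665, 128
weight = np.random.rand(vocab_size, emb_dim).astype("float32")

token_ids = np.array([[5, 17, 42]], dtype="int64")  # valid integer ids
embedded = weight[token_ids]                        # shape (1, 3, 128)

float_ids = token_ids.astype("float32")             # like summary()'s default input
rejected = False
try:
    weight[float_ids]
except IndexError:
    rejected = True                                 # float indices are refused
print(embedded.shape, rejected)
```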
Source code:
class CNNModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 num_filter=128,
                 ngram_filter_sizes=(3, ),
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            vocab_size, emb_dim, padding_idx=padding_idx)
        self.encoder = ppnlp.seq2vec.CNNEncoder(
            emb_dim=emb_dim,
            num_filter=num_filter,
            ngram_filter_sizes=ngram_filter_sizes)
        self.fc = nn.Linear(self.encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len=None):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        print('after word-embeding:', embedded_text.shape)
        # Shape: (batch_size, len(ngram_filter_sizes)*num_filter)
        encoder_out = self.encoder(embedded_text)
        encoder_out = paddle.tanh(encoder_out)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = self.fc(encoder_out)
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

model_cnn = CNNModel(27665, 2)
model_cnn = paddle.Model(model_cnn)
model_cnn.summary()
有内涵滴紫菜
Solved
2#
Replied 2021-05
By looking through the issues on GitHub I found the solution; it was just my own lack of skill... This works: [code] But I'm not sure whether this input_size is right QAQ [code] Here's what it prints: [code]
There are two summary() functions: paddle.summary() and paddle.Model.summary().
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/hapi/model/Model_cn.html#summary-input-size-none-batch-size-none-dtype-none
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/hapi/model_summary/summary_cn.html#summary
These two APIs differ slightly in usage.
Printing the Layer object directly also shows the model's parameter configuration.
In my code paddle.summary() raises an error; only paddle.Model.summary() works:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 param_info = paddle.summary(model_cnn, (32, 128))

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary(net, input_size, dtypes)
    109         in_train_mode = False
    110     else:
--> 111         in_train_mode = net.training
    112
    113     if in_train_mode:

AttributeError: 'Model' object has no attribute 'training'
paddle.Model belongs to the high-level (hapi) API.
For a model defined by subclassing Layer directly, use paddle.summary().
One applies to models wrapped in the high-level Model API; the other applies to models defined directly by subclassing Layer.
Then may I ask how the input_size=(32, 128) in paddle.Model().summary(input_size=(32, 128), dtype='int64') is determined? I've never figured this out. My understanding is that input_size is the shape of the input to forward, but the actual input isn't (32, 128): the dimension in the 128 position is variable-length, so what should it be?
paddle.summary() and paddle.Model.summary() differ slightly in usage.
That parameter can be anything; it's only there so that the weight tensor shapes shown when printing the network come out complete.
A correction: "anything" refers only to the first dimension, the batch size.
For an MLP, the data shape is (N, C): N is the batch size and C is the number of channels/features. N is arbitrary; C must match the input size of the network's first layer.
For a CNN, the shape is (N, C, H, W). Again N is the batch size and can be anything; C is the number of input channels and must match the first layer; H and W are the feature-map size, and they only need to be large enough that no layer in the network produces an invalid value.
Got it!!! Thanks!!!
When printing an LSTM it failed again. From what I've learned, the shape should be (sequence_length, batch_size, input_size), but I just can't get it to print... it keeps raising an error.
Then I changed it to model_lstm.summary(dtype='int64') and it printed. I went back and changed the earlier CNN one too: I removed input_size=(32, 128) from paddle.Model().summary(input_size=(32, 128), dtype='int64') and kept only dtype='int64', and it also printed, though the result changed slightly. Is that a problem?