summary() in paddle.Model() [Solved]

I'd like to ask everyone a question!!!! I want to know why this happens. The official API docs say the arguments to summary() are optional, but while studying I tried calling summary() to print the network's basic structure and parameter info, and got the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 model_cnn.summary()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in summary(self, input_size, dtype)
   1879         else:
   1880             _input_size = self._inputs
-> 1881         return summary(self.network, _input_size, dtype)
   1882 
   1883     def _verify_spec(self, specs, shapes=None, dtypes=None, is_input=False):
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary(net, input_size, dtypes)
    147 
    148     _input_size = _check_input(_input_size)
--> 149     result, params_info = summary_string(net, _input_size, dtypes)
    150     print(result)
    151 
<decorator-gen> in summary_string(model, input_size, dtypes)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py in _decorate_function(func, *args, **kwargs)
    313         def _decorate_function(func, *args, **kwargs):
    314             with self:
--> 315                 return func(*args, **kwargs)
    316 
    317         @decorator.decorator
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary_string(model, input_size, dtypes)
    274 
    275     # make a forward pass
--> 276     model(*x)
    277 
    278     # remove these hooks
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    900                 self._built = True
    901 
--> 902             outputs = self.forward(*inputs, **kwargs)
    903 
    904             for forward_post_hook in self._forward_post_hooks.values():
<ipython-input> in forward(self, text, seq_len)
     32     def forward(self, text, seq_len=None):
     33         # Shape: (batch_size, num_tokens, embedding_dim)
---> 34         embedded_text = self.embedder(text)
     35         print('after word-embeding:', embedded_text.shape)
     36         # Shape: (batch_size, len(ngram_filter_sizes)*num_filter)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    900                 self._built = True
    901 
--> 902             outputs = self.forward(*inputs, **kwargs)
    903 
    904             for forward_post_hook in self._forward_post_hooks.values():
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/common.py in forward(self, x)
   1361             padding_idx=self._padding_idx,
   1362             sparse=self._sparse,
-> 1363             name=self._name)
   1364 
   1365     def extra_repr(self):
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/functional/input.py in embedding(x, weight, padding_idx, sparse, name)
    198         return core.ops.lookup_table_v2(
    199             weight, x, 'is_sparse', sparse, 'is_distributed', False,
--> 200             'remote_prefetch', False, 'padding_idx', padding_idx)
    201     else:
    202         helper = LayerHelper('embedding', **locals())
ValueError: (InvalidArgument) Tensor holds the wrong type, it holds float, but desires to be int64_t.
  [Hint: Expected valid == true, but received valid:0 != true:1.] (at /paddle/paddle/fluid/framework/tensor_impl.h:33)
  [operator < lookup_table_v2 > error]

The source code:

import paddle
import paddle.nn as nn
import paddlenlp as ppnlp


class CNNModel(nn.Layer):

    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 num_filter=128,
                 ngram_filter_sizes=(3, ),
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            vocab_size, emb_dim, padding_idx=padding_idx)
        self.encoder = ppnlp.seq2vec.CNNEncoder(
            emb_dim=emb_dim,
            num_filter=num_filter,
            ngram_filter_sizes=ngram_filter_sizes)
        self.fc = nn.Linear(self.encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len=None):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        print('after word-embeding:', embedded_text.shape)
        # Shape: (batch_size, len(ngram_filter_sizes)*num_filter)
        encoder_out = self.encoder(embedded_text)
        encoder_out = paddle.tanh(encoder_out)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = self.fc(encoder_out)
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

# vocab_size=27665, num_classes=2
model_cnn = CNNModel(27665, 2)
model_cnn = paddle.Model(model_cnn)
model_cnn.summary()
All replies (18)
有内涵滴紫菜
#2 · Replied 2021-05

By searching the issues on GitHub I found the solution; it was my own lack of skill... sigh... This works:

model_cnn.summary(input_size=(32,128), dtype='int64')

But I'm not sure whether this input_size is correct QAQ. The line

print('after word-embeding:', embedded_text.shape)

prints this:

after word-embeding: [32, 113, 128]
after word-embeding: [32, 97, 128]
after word-embeding: [32, 101, 128]
after word-embeding: [32, 105, 128]
after word-embeding: [32, 108, 128]
after word-embeding: [32, 94, 128]
after word-embeding: [32, 98, 128]
after word-embeding: [32, 115, 128]
after word-embeding: [32, 97, 128]
after word-embeding: [32, 103, 128]
............
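
A hedged reading of the original traceback explains why dtype='int64' is needed: summary() fabricates a random probe tensor of the given dtype (float32 by default) and runs one forward pass through the network, but nn.Embedding indices must be int64, which is exactly what the lookup_table_v2 check rejects.

# The probe tensor must be an index tensor for the Embedding layer;
# the default float32 dtype triggers the ValueError above.
model_cnn.summary(input_size=(32, 128), dtype='int64')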
AIStudio810258
#3 · Replied 2021-05

There are two summary() APIs: one is paddle.summary(), the other paddle.Model.summary().
AIStudio810258
#4 · Replied 2021-05

https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/hapi/model/Model_cn.html#summary-input-size-none-batch-size-none-dtype-none
AIStudio810258
#5 · Replied 2021-05

https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/hapi/model_summary/summary_cn.html#summary
AIStudio810258
#6 · Replied 2021-05

These two APIs are used slightly differently.
AIStudio810258
#7 · Replied 2021-05

You can also just print the Layer object to see the model's parameter configuration.
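
A quick sketch of that, using the CNNModel from the original post; printing a Layer runs no forward pass, so no input shape or dtype is needed:

net = CNNModel(27665, 2)
print(net)  # lists sublayers (Embedding, CNNEncoder, Linear, ...) with their configured sizes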
有内涵滴紫菜
#8 · Replied 2021-05
These two APIs are used slightly differently.

paddle.summary() raises an error in my code; only paddle.Model.summary() works for me.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 param_info = paddle.summary(model_cnn,(32,128))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary(net, input_size, dtypes)
    109         in_train_mode = False
    110     else:
--> 111         in_train_mode = net.training
    112 
    113     if in_train_mode:
AttributeError: 'Model' object has no attribute 'training'
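
A hedged reading of this error: model_cnn has already been wrapped in paddle.Model, while paddle.summary() expects a plain Layer. The wrapper stores the Layer as .network (visible in the first traceback), so passing that should work:

# Pass the underlying Layer, not the hapi wrapper; dtypes='int64' keeps
# the Embedding probe tensor an index tensor, as in the earlier fix.
param_info = paddle.summary(model_cnn.network, (32, 128), dtypes='int64')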
AIStudio810258
#9 · Replied 2021-05

paddle.Model belongs to the hapi (high-level API) family.
AIStudio810258
#10 · Replied 2021-05

For a model defined by directly subclassing Layer, use paddle.summary().
AIStudio810258
#11 · Replied 2021-05

One is for models wrapped in the high-level API Model; the other is for models defined directly as subclasses of Layer.
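
A minimal sketch of the two call styles, assuming the CNNModel from the original post (argument names as they appear in the tracebacks above):

import paddle

net = CNNModel(27665, 2)                            # plain nn.Layer subclass
paddle.summary(net, (32, 128), dtypes='int64')      # functional form takes the Layer

model = paddle.Model(net)                           # hapi wrapper
model.summary(input_size=(32, 128), dtype='int64')  # method form on the wrapper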
有内涵滴紫菜
#12 · Replied 2021-05
One is for models wrapped in the high-level API Model; the other is for models defined directly as subclasses of Layer.

Then may I ask how the input_size=(32,128) in paddle.Model().summary(input_size=(32,128), dtype='int64') is determined? I've never figured this out. I understand input_size to be the shape of the input to forward, but the actual input isn't (32, 128): the dimension in the 128 position is variable. So what should it be?
13168076035z
#13 · Replied 2021-05

paddle.summary() and paddle.Model.summary() are used slightly differently.
AIStudio810290
#14 · Replied 2021-05
Then may I ask how the input_size=(32,128) in paddle.Model().summary(input_size=(32,128), dtype='int64') is determined? I've never figured this out. I understand input_size to be the shape of the input to forward, but the actual input isn't (32, 128): the dimension in the 128 position is variable. So what should it be?

You can enter anything for that parameter; it only exists so that the dimensions of the weight tensors are shown in full when the network is printed.
AIStudio810290
#15 · Replied 2021-05
You can enter anything for that parameter; it only exists so that the dimensions of the weight tensors are shown in full when the network is printed.

A correction: "enter anything" refers to the first dimension, the batch size.
AIStudio810290
#16 · Replied 2021-05

For an MLP, the data shape is (N, C): N is the batch size and C is the number of channels. N can be anything; C must match the input channel count of the network's first layer.
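
A minimal sketch of the MLP case, with hypothetical layer sizes:

import paddle
import paddle.nn as nn

mlp = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
paddle.summary(mlp, (1, 64))  # N=1 works as well as N=32; C=64 must match Linear(64, 32)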
AIStudio810290
#17 · Replied 2021-05

For a CNN, the shape is (N, C, H, W). Again, N is the batch size and can be anything; C is the number of input channels and must match the network's first layer; H and W are the feature-map size and just need to be large enough that no layer in the network produces an invalid value.
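
A minimal sketch of the CNN case, with hypothetical layer sizes:

import paddle
import paddle.nn as nn

cnn = nn.Sequential(
    nn.Conv2D(3, 16, kernel_size=3),  # first layer expects C=3 input channels
    nn.ReLU(),
    nn.MaxPool2D(2),
)
paddle.summary(cnn, (1, 3, 32, 32))   # N=1 is arbitrary; H=W=32 is large enough here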
有内涵滴紫菜
#18 · Replied 2021-05
For a CNN, the shape is (N, C, H, W). Again, N is the batch size and can be anything; C is the number of input channels and must match the network's first layer; H and W are the feature-map size and just need to be large enough that no layer in the network produces an invalid value.

Got it!!! Thanks a lot!!!
有内涵滴紫菜
#19 · Replied 2021-05
For a CNN, the shape is (N, C, H, W). Again, N is the batch size and can be anything; C is the number of input channels and must match the network's first layer; H and W are the feature-map size and just need to be large enough that no layer in the network produces an invalid value.

When printing an LSTM it failed again. From what I've learned, the shape should be (sequence_length, batch_size, input_size), but I just can't get it to print... it keeps raising

TypeError: forward() missing 1 required positional argument: 'seq_len'

Then I changed the call to model_lstm.summary(dtype='int64') and it printed. I also changed the earlier CNN call: I removed input_size=(32,128) from paddle.Model().summary(input_size=(32,128), dtype='int64'), keeping only dtype='int64', and it prints too, though the results change slightly. Is that a problem?

---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
  Embedding-4       [[32, 128]]         [32, 128, 128]       5,190,144   
   Conv2D-5     [[32, 1, 128, 128]]   [32, 128, 126, 1]       49,280     
    Tanh-1      [[32, 128, 126, 1]]   [32, 128, 126, 1]          0       
 CNNEncoder-3     [[32, 128, 128]]        [32, 128]              0       
   Linear-7         [[32, 128]]            [32, 96]           12,384     
   Linear-8          [[32, 96]]            [32, 2]              194      
===========================================================================
Total params: 5,252,002
Trainable params: 5,252,002
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 11.93
Params size (MB): 20.03
Estimated Total Size (MB): 31.98
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
  Embedding-4       [[32, 104]]         [32, 104, 128]       5,190,144   
   Conv2D-5     [[32, 1, 104, 128]]   [32, 128, 102, 1]       49,280     
    Tanh-1      [[32, 128, 102, 1]]   [32, 128, 102, 1]          0       
 CNNEncoder-3     [[32, 104, 128]]        [32, 128]              0       
   Linear-7         [[32, 128]]            [32, 96]           12,384     
   Linear-8          [[32, 96]]            [32, 2]              194      
===========================================================================
Total params: 5,252,002
Trainable params: 5,252,002
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 9.68
Params size (MB): 20.03
Estimated Total Size (MB): 29.73
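
A hedged reading of why the no-argument form works: when input_size is None, Model.summary() falls back to the InputSpec list stored on the Model (self._inputs in the first traceback), so the 104-length table presumably reflects the spec the model was built or prepared with rather than the hand-entered (32, 128). Something like:

# Hypothetical spec; the real one would have been supplied when the
# paddle.Model was constructed or prepared.
inputs = [paddle.static.InputSpec(shape=[32, 104], dtype='int64', name='text')]
model_cnn = paddle.Model(CNNModel(27665, 2), inputs=inputs)
model_cnn.summary()  # uses self._inputs when input_size is None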