Paddle-TensorRT
Paddle Inference · Q&A · Inference · 2918 views · 10 replies

The official Paddle-TensorRT docs only provide a MobileNet classification demo. Is detection not supported yet, or is there a detection demo I can learn from? I exported a model with PaddleDetection's export_model and ran into a problem running it under Paddle-TensorRT.

All comments (10)
Sorted by time
AIStudio783230
#2 · Replied 2019-12

W1223 11:59:43.711913 20079 device_context.cc:236] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 10.1, Runtime API Version: 9.0
W1223 11:59:43.712009 20079 device_context.cc:244] device: 0, cuDNN Version: 7.6.
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
what():


C++ Call Stacks (More useful to developers):

0 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, char const*, int)
1 paddle::framework::Tensor::check_memory_size() const
2 paddle::framework::Tensor::Slice(long, long) const
3 paddle::operators::CUDAGenerateProposalsKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
4 std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::CUDAGenerateProposalsKernel<paddle::platform::CUDADeviceContext, float> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
5 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8 paddle::framework::NaiveExecutor::Run()
9 paddle::AnalysisPredictor::ZeroCopyRun()


Python Call Stacks (More useful to users):

File "/home/a/.conda/envs/paddle/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2488, in append_op
attrs=kwargs.get("attrs", None))
File "/home/a/.conda/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/a/.conda/envs/paddle/lib/python3.7/site-packages/paddle/fluid/layers/detection.py", line 2808, in generate_proposals
'RpnRoiProbs': rpn_roi_probs})
File "/home/a/lztnew/train_net_work/baidu/PaddleDetection-release-0.1/ppdet/core/workspace.py", line 113, in partial_apply
return op(*args, **kwargs_)
File "/home/a/lztnew/train_net_work/baidu/PaddleDetection-release-0.1/ppdet/modeling/anchor_heads/rpn_head.py", line 438, in _get_single_proposals
variances=self.anchor_var)
File "/home/a/lztnew/train_net_work/baidu/PaddleDetection-release-0.1/ppdet/modeling/anchor_heads/rpn_head.py", line 462, in get_proposals
fpn_feat, im_info, lvl, mode)
File "/home/a/lztnew/train_net_work/baidu/PaddleDetection-release-0.1/ppdet/modeling/architectures/cascade_rcnn.py", line 108, in build
rpn_rois = self.rpn_head.get_proposals(body_feats, im_info, mode=mode)
File "/home/a/lztnew/train_net_work/baidu/PaddleDetection-release-0.1/ppdet/modeling/architectures/cascade_rcnn.py", line 289, in test
return self.build(feed_vars, 'test')
File "tools/export_model.py", line 101, in main
test_fetches = model.test(feed_vars)
File "tools/export_model.py", line 118, in <module>
main()


Error Message Summary:

Error: Tensor holds no memory. Call Tensor::mutable_data first.
[Hint: holder_ should not be null.] at (/home/a/lztnew/train_net_work/baidu/Paddle/paddle/fluid/framework/tensor.cc:23)
[operator < generate_proposals > error]
已放弃 (核心已转储)

AIStudio783230
#3 · Replied 2019-12

I'm using the official PaddleDetection model cascade_rcnn_r50_fpn_1x, exported with export_model, with the input shape fixed to 3, 640, 640.

AIStudio783230
#4 · Replied 2019-12

#include <chrono>
#include <vector>

#include <gflags/gflags.h>
#include <glog/logging.h>

#include "paddle_inference_api.h"

namespace paddle {
using paddle::AnalysisConfig;

DEFINE_string(dirname, "./cascade_rcnn_r50_fpn_1x", "Directory of the inference model.");

using Time = decltype(std::chrono::high_resolution_clock::now());
Time time() { return std::chrono::high_resolution_clock::now(); }

// Elapsed time between t1 and t2 in milliseconds.
double time_diff(Time t1, Time t2) {
  typedef std::chrono::microseconds ms;
  auto diff = t2 - t1;
  ms counter = std::chrono::duration_cast<ms>(diff);
  return counter.count() / 1000.0;
}

void PrepareTRTConfig(AnalysisConfig *config, int batch_size) {
  config->SetModel(FLAGS_dirname + "/model",
                   FLAGS_dirname + "/params");
  config->EnableUseGpu(5000, 0);
  // We use ZeroCopyTensor here, so feed/fetch ops must be disabled.
  config->SwitchUseFeedFetchOps(false);
  config->EnableTensorRtEngine(1 << 20, batch_size, 30,
                               AnalysisConfig::Precision::kFloat32, false);
}

bool test_map_cnn(int batch_size, int repeat) {
  AnalysisConfig config;
  PrepareTRTConfig(&config, batch_size);
  auto predictor = CreatePaddlePredictor(config);

  int channels = 3;
  int height = 640;
  int width = 640;
  // Dummy all-zero input (a variable-length array is non-standard C++).
  std::vector<float> input(batch_size * channels * height * width, 0.f);

  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputTensor(input_names[0]);
  input_t->Reshape({batch_size, channels, height, width});
  input_t->copy_from_cpu(input.data());

  // run
  auto time1 = time();
  for (int i = 0; i < repeat; i++) {
    CHECK(predictor->ZeroCopyRun());
  }
  auto time2 = time();
  LOG(INFO) << "batch: " << batch_size << ", average latency: "
            << time_diff(time1, time2) / repeat << " ms";
  return true;
}
}  // namespace paddle

AIStudio783231
#5 · Replied 2019-12

Hi:
You can refer to this: https://github.com/PaddlePaddle/PaddleDetection/tree/release/0.1/inference

For TensorRT from Python, you can refer to this pull request:
https://github.com/PaddlePaddle/models/pull/3091/files

AIStudio783230
#6 · Replied 2019-12

https://github.com/PaddlePaddle/PaddleDetection/tree/release/0.1/inference
Hi, I've already got this doc running, but it only covers classification; there is no detection demo. Is there a detection demo for C++ Paddle-TensorRT?

AIStudio783231
#7 · Replied 2019-12

https://github.com/PaddlePaddle/PaddleDetection/tree/release/0.1/inference

  • Supports many common image detection models, such as YOLOv3, Faster-RCNN, and Faster-RCNN+FPN; with a small amount of configuration, users can load a model and run common detection tasks

The README there describes detection support, as I read it.

AIStudio783230
#8 · Replied 2019-12

Hi, that doc covers C++ inference only, without TensorRT acceleration. Is there documentation for TensorRT detection?

AIStudio783230
#10 · Replied 2019-12

Thank you very much for the patient answers; I was the one who didn't read carefully enough. Many thanks.

1224wxwx
#11 · Replied 2020-02

Did you get https://github.com/PaddlePaddle/PaddleDetection/tree/release/0.1/inference working? When I run make for the C++ inference library, it keeps complaining that my previously installed OpenCV is missing libopencv_imgcodes.a. Have you run into this?
