C++ inference fails with "output tensor cannot be reshaped"
Paddle Inference · Q&A · Inference · 1443 · 5

Using fluid 1.6.0, loading the model in C++ for inference.

--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [infer_clean_graph_pass]
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [fc_lstm_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [fc_fuse_pass]
---  detected 5 subgraphs
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1119 15:07:43.434036 28762 analysis_predictor.cc:451] == optimize end ==
---  skip [feed], feed -> dense_data
---  skip [feed], feed -> sparse_data
---  skip [save_infer_model/scale_0], fetch -> fetch
---  skip [save_infer_model/scale_1], fetch -> fetch
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
  what():  Can't reshape the output tensor, it is readonly at [/paddle/paddle/fluid/inference/api/details/zero_copy_tensor.cc:28]
PaddlePaddle Call Stacks:
0       0x7fb30fe57b83p void paddle::platform::EnforceNotMet::Init<char const*>(char const*, char const*, int) + 563
1       0x7fb30fe582e9p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2       0x7fb30fe74e10p paddle::ZeroCopyTensor::Reshape(std::vector<int, std::allocator<int> > const&) + 720
3             0x401a8ep
4             0x402069p
5       0x7fb30e84bbd5p __libc_start_main + 245
6             0x401529p

Aborted
All comments (5)
AIStudio784535
#2 · replied 2019-11

what(): Can't reshape the output tensor, it is readonly at [/paddle/paddle/fluid/inference/api/details/zero_copy_tensor.cc:28]

AIStudio784535
#3 · replied 2019-11

CPU or GPU? Cluster or single machine? Training or inference?

AIStudio784534
#4 · replied 2019-11

After training locally, I wrote a small C++ program to check whether model inference works. CPU, single machine.

#include <iostream>
#include <numeric>
#include "paddle_inference_api.h"

using namespace std;

namespace paddle
{
    void CreateConfig(AnalysisConfig *config)
    {
        // Load the model from disk
        // config->SetModel(model_dirname + "/model",
        //                 model_dirname + "/params");
        config->SetModel("./model", "./params");
        // To load the model from memory, use the SetModelBuffer interface instead
        // config->SetModelBuffer(prog_buffer, prog_size, params_buffer, params_size);
        // config->EnableUseGpu(10 /*the initial size of the GPU memory pool in MB*/,  0 /*gpu_id*/);

        // for cpu
        config->DisableGpu();
        // config->EnableMKLDNN();   // optional
        config->SetCpuMathLibraryNumThreads(10);

        // When using ZeroCopyTensor, this must be set to false.
        config->SwitchUseFeedFetchOps(false);
        // With multiple inputs, this must be set to true.
        config->SwitchSpecifyInputNames(true);
        // config->SwitchIrDebug(true); // When enabled, a dot file is written after each graph optimization pass, which helps visualization.
        // config->SwitchIrOptim(false); // Defaults to true. Setting it to false disables all optimizations; execution then matches NativePredictor.
        // config->EnableMemoryOptim(); // Enable memory / GPU-memory reuse
    }

    void RunAnalysis(vector<int64_t>& inputs, float *dense)
    {
        // 1. Create the AnalysisConfig
        AnalysisConfig config;
        CreateConfig(&config);

        // 2. Create the predictor from the config
        auto predictor = CreatePaddlePredictor(config);

        // 3. Create the inputs
        // As in the NativePredictor example, PaddleTensor could be used here to create the inputs.
        // The code below uses the ZeroCopy interface instead; unlike PaddleTensor, it avoids extra CPU copies during inference, improving performance.
        auto input_names = predictor->GetInputNames();
        auto sparse_input_t = predictor->GetInputTensor(input_names[0]);
        sparse_input_t->SetLoD({{0, inputs.size()}});
        sparse_input_t->Reshape({1, static_cast<int>(inputs.size())});  // cast avoids a narrowing error in the braced init
        sparse_input_t->copy_from_cpu(inputs.data());

        auto dense_input_t = predictor->GetInputTensor(input_names[1]);
        dense_input_t->Reshape({1, 2});
        dense_input_t->copy_from_cpu(dense);

        bool ret = predictor->ZeroCopyRun();
        if (!ret) {
            cout << "failed to infer with paddle predictor" << endl;
            return;
        }

        std::vector<float> infer_res;
        auto output_names = predictor->GetOutputNames();
        auto cvr_t = predictor->GetOutputTensor(output_names[1]);
        std::vector<int> output_shape = cvr_t->shape();
        int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1, std::multiplies<int>());
        infer_res.resize(out_num);
        cvr_t->copy_to_cpu(infer_res.data());
        
        float pcvr = infer_res[0];
        
        cout << pcvr << endl;
    }
} // namespace paddle

int main()
{
    vector<int64_t> item;
    item.push_back(1);
    item.push_back(2);
    item.push_back(3);
    item.push_back(4);

    float dense[2] = {0.4f, 0.5f};

    paddle::RunAnalysis(item, dense);
    return 0;
}
AIStudio784534
#5 · replied 2019-11

The model was trained in this paddlecloud job.

Full log:

resource_list: nodes=20,walltime=48:00:00,resource=full
qsubf arguments confirmation:
-N xuzhang_scvr_yangzhifeng01_20191119_paddlecloud
-v PLATFORM=maybach
--conf /home/paddle/cloud/job/job-0bb5dd39ddb5be22/submit/qsub_f.conf
--hdfs afs://tianqi.afs.baidu.com:9902
--ugi fcr-tianqi-d,absUPEwUB7nc
--hout /app/ecom/brand/yangzhifeng01/scvr/output/08b5e88a-7b60-5ee6-b215-bcd484367c19/job-0bb5dd39ddb5be22/
--files [./paddle]
[INFO] client.version: 3.5.0
[INFO] session.id: 19772208.yq01-smart-000.yq01.baidu.com
[INFO] making tar.gz: from [./paddle]
[INFO] making tar.gz done: size=837925
[INFO] uploading the job package finished.
[INFO] qsub_f: jobid=app-user-20191119154646-25171.yq01-hpc-lvliang01-smart-master.dmop.baidu.com, pls waiting for complete!
[INFO] qsub_f: see more at http://yq01-hpc-lvliang01-mon.dmop.baidu.com:8919/taskinfo.html?appId=app-user-20191119154646-25171
[INFO] qsub_f: to stop, pls run: qdel app-user-20191119154646-25171.yq01-hpc-lvliang01-smart-master.dmop.baidu.com

AIStudio784534
#7 · replied 2019-11

Solved. As you pointed out above, the last dimension of the second Reshape has to be 1. As for the other problems: the code used the fluid.layers.embedding layer instead of the newer fluid.embedding layer, and the program has to be built with CMake; compiling directly with gcc produced a broken binary.
