How to predict directly without saving the model
Paddle Framework · Q&A · Deep Learning

The following pipeline runs correctly:
train => save model to disk => load model from disk => predict

But how do I run:
train => predict

For now I imitated the model-saving code and wrote the following, but it fails with a parameter-not-initialized error. The code is:

import paddle.fluid as fluid

import numpy as np

class TransEEnv:
    def calc(self,h,r,t):
        h = fluid.layers.l2_normalize(h, -1)
        r = fluid.layers.l2_normalize(r, -1)
        t = fluid.layers.l2_normalize(t, -1)
        # fluid.layers.elementwise_sub(fluid.layers.elementwise_add(h, r), t)
        return fluid.layers.abs(h+r-t)


    def __init__(self,entity_num,rel_num,dim,lr,margin):
        maa=fluid.layers.fill_constant(shape=[1],dtype="float32",value=margin)
        zero = fluid.layers.fill_constant(shape=[1], dtype="float32", value=0)
        initx=6/rel_num**0.5
        pos_triple=fluid.data(name="pos_triple",shape=[None,3],dtype="int64")
        entity_param_attrs = fluid.ParamAttr(
            name="ent_embedding",
            learning_rate=0.5,
            initializer=fluid.initializer.UniformInitializer(low=-initx,high=initx),
            trainable=True)
        relation_param_attrs = fluid.ParamAttr(
            name="rel_embedding",
            learning_rate=0.5,
            initializer=fluid.initializer.UniformInitializer(low=-initx, high=initx),
            trainable=True)
        vh = fluid.embedding(input=pos_triple[:,0], size=(entity_num, dim), param_attr=entity_param_attrs,dtype='float32')
        vr= fluid.embedding(input=pos_triple[:,1], size=(rel_num, dim), param_attr=relation_param_attrs, dtype='float32')
        vt = fluid.embedding(input=pos_triple[:,2], size=(entity_num, dim), param_attr=entity_param_attrs,dtype='float32')

        self.main_program = fluid.default_main_program()  # get the default/global main program
        self.startup_program = fluid.default_startup_program()  # get the default/global startup program
        self.test_program = self.main_program.clone(for_test=True)
        neg_triple = fluid.data(name="neg_triple", shape=[None, 3], dtype="int64")

        vh_ = fluid.embedding(input=neg_triple[:,0], size=(entity_num, dim), param_attr=entity_param_attrs,dtype='float32')
        vr_ = fluid.embedding(input=neg_triple[:,1], size=(rel_num, dim), param_attr=relation_param_attrs, dtype='float32')
        vt_ = fluid.embedding(input=neg_triple[:,2], size=(entity_num, dim), param_attr=entity_param_attrs,dtype='float32')

        self.cost_pos=fluid.layers.reduce_sum(self.calc(vh,vr,vt),1)
        cost_neg=fluid.layers.reduce_sum(self.calc(vh_,vr_,vt_),1)

        self.loss=fluid.layers.reduce_mean(fluid.layers.elementwise_max(self.cost_pos-cost_neg+maa,zero))

        # self.main_program = fluid.default_main_program()  # get the default/global main program
        # self.startup_program = fluid.default_startup_program()  # get the default/global startup program
        # self.test_program = self.main_program.clone(for_test=True)

        optimizer=fluid.optimizer.AdamOptimizer(lr)
        optimizer.minimize(self.loss)



    def tran_step(self):
        use_cuda = False
        place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()  # where the executor runs
        self.train_exe = fluid.Executor(place)
        self.train_exe.run(self.startup_program)

    def train(self,pos_triple,neg_triple):
        loss,=self.train_exe.run(self.main_program,feed={
            "pos_triple":pos_triple,
            "neg_triple":neg_triple
        },fetch_list=[self.loss])
        print(loss)

    def test_step(self):
        use_cuda = False
        place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()  # where the executor runs
        self.test_exe = fluid.Executor(place)
        # infer_exe = fluid.Executor(place)
        self.inference_scope = fluid.core.Scope()
        # self.test_exe.run(self.test_program)
    def test(self,test_triple):
        with fluid.scope_guard(self.inference_scope):
            cost = self.test_exe.run(self.test_program, feed={
                "pos_triple": test_triple
            }, fetch_list=[self.cost_pos])
        return cost




if __name__ == '__main__':
    model=TransEEnv(10,10,100,0.1,1.0)
    model.tran_step()
    model.train(np.array([[1,2,3],[4,5,6]]).astype("int64"),np.array([[4,2,3],[4,5,8]]).astype("int64"))
    model.test_step()
    print(model.test(np.array([[1,2,3],[2,3,4]]).astype("int64")))

The error message is:

----------------------
Error: The Tensor in the lookup_table_v2 Op's Input Variable W(ent_embedding) is not initialized.
  [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] at (D:\1.6.2\paddle\paddle\fluid\framework\operator.cc:1144)
  [operator < lookup_table_v2 > error]
All comments (4)
AIStudio792089
#2 · Replied 2020-01

You can try removing `self.inference_scope = fluid.core.Scope()` from `test_step`, so that testing uses the parameters from training. As long as `self.test_program` contains no parameter-update ops, having test share train's scope will not affect training.

AIStudio792088
#3 · Replied 2020-01

The key to solving this problem is where the statement `self.test_program = self.main_program.clone(for_test=True)` sits in the program: only the ops defined above that line are considered part of the test program. Watch out for this during development!

AIStudio792088
#4 · Replied 2020-01

> You can try removing `self.inference_scope = fluid.core.Scope()` from `test_step`, so that testing uses the parameters from training. As long as `self.test_program` contains no parameter-update ops, having test share train's scope will not affect training.

That works now, thanks!

AIStudio792089
#5 · Replied 2020-01

Since this is resolved, I'll close the issue for now; feel free to reopen if any problems come up.
