Loss function question
进985 · posted 2020-05 · Views: 3199 · Replies: 5

Does anyone know how to use the loss function below? The dynamic-graph (dygraph) version.

warpctc
paddle.fluid.layers.warpctc(input, label, blank=0, norm_by_times=False, input_length=None, label_length=None) [source]
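
For anyone who lands on this thread: below is a minimal sketch of calling warpctc under a dygraph guard with padded (non-LoD) inputs, following the shapes documented for fluid 1.x; the batch size, lengths, and random data are illustrative assumptions, not taken from the poster's project, and it assumes a Paddle 1.8-era install where warpctc runs under dygraph. In padding mode, logits are time-major float32 of shape [max_logit_length, batch_size, num_classes + 1], labels are int32 of shape [batch_size, max_label_length] with values in [1, num_classes] (index 0 is the blank), and both length tensors are int64 of shape [batch_size].

    import numpy as np
    import paddle.fluid as fluid

    with fluid.dygraph.guard():  # CPU by default; pass fluid.CUDAPlace(0) for GPU
        batch_size, max_logit_len, max_label_len, num_classes = 4, 20, 5, 10
        # unscaled logits, time-major: [max_logit_len, batch_size, num_classes + 1]
        logits = fluid.dygraph.to_variable(
            np.random.rand(max_logit_len, batch_size, num_classes + 1).astype('float32'))
        # padded labels: [batch_size, max_label_len], int32; 0 is reserved for blank
        labels = fluid.dygraph.to_variable(
            np.random.randint(1, num_classes + 1,
                              size=(batch_size, max_label_len)).astype('int32'))
        # per-sample lengths (here simply the padded maxima)
        logit_len = fluid.dygraph.to_variable(np.full([batch_size], max_logit_len, dtype='int64'))
        label_len = fluid.dygraph.to_variable(np.full([batch_size], max_label_len, dtype='int64'))

        loss = fluid.layers.warpctc(input=logits, label=labels, blank=0,
                                    input_length=logit_len, label_length=label_len)
        avg_loss = fluid.layers.mean(loss)
        print(avg_loss.numpy())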

5 replies in total · last reply by 进985, 2020-05
#6 进985 replied 2020-05
Quoting #5 189******30:
"Working on speech recognition, eh~~"

Yeah, though it feels like there isn't much material on it.

#5 189******30 replied 2020-05

Working on speech recognition, eh~~

#4 进985 replied 2020-05
    import numpy as np
    import paddle
    import paddle.fluid as fluid
    # project-local modules from the original post: config (hyperparameters),
    # pr (data reader), V (VGG model definition)

    def train():
        # x_data, y_data = pr.read_all_data(config.file_name, config.train_list)

        with fluid.dygraph.guard(place=fluid.CUDAPlace(0)):  # GPU; use fluid.CPUPlace() for CPU
            # No fluid.layers.data placeholders are needed in dygraph mode;
            # inputs are built directly from numpy arrays via to_variable below.
            # Per-sample logit/label lengths for warpctc (hard-coded to 60 here)
            data1 = fluid.layers.fill_constant(shape=[config.batch_size], value=60, dtype='int64')
            data2 = fluid.layers.fill_constant(shape=[config.batch_size], value=60, dtype='int64')
            train_reader = paddle.batch(pr.custom_reader(config.file_name, config.train_list),
                                        batch_size=config.batch_size, drop_last=True)

            vgg = V.VGG()
            try:
                model, _ = fluid.dygraph.load_dygraph("vgg")
                vgg.load_dict(model)
            except Exception:
                print("No saved model loaded")
            # optimizer
            # optimizer = fluid.optimizer.AdamOptimizer(learning_rate=config.learning_rate, parameter_list=vgg.parameters())
            optimizer = fluid.optimizer.SGD(learning_rate=config.learning_rate,
                                            parameter_list=vgg.parameters())

            for epoch_num in range(config.epoch_num):
                for batch_id, data in enumerate(train_reader()):
                    dy_x_data = np.array([x[0] for x in data]).astype('float32')
                    y_data = np.array([x[1] for x in data]).astype('int64')
                    # cross_entropy expects labels with a trailing axis: [N, L, 1]
                    y_data = y_data[:, :, np.newaxis]
                    speech = fluid.dygraph.to_variable(dy_x_data)
                    label = fluid.dygraph.to_variable(y_data)
                    label.stop_gradient = True

                    out = vgg(speech)
                    # loss
                    loss = fluid.layers.cross_entropy(input=out, label=label)
                    # loss = fluid.layers.warpctc(input=out, label=label, input_length=data1, label_length=data2)
                    avg_loss = fluid.layers.mean(loss)
                    # backward pass and parameter update
                    avg_loss.backward()
                    optimizer.minimize(avg_loss)
                    vgg.clear_gradients()

                    if batch_id % 1 == 0:
                        print("Loss at epoch {} step {}: {}".format(epoch_num, batch_id, avg_loss.numpy()))
                fluid.save_dygraph(vgg.state_dict(), "vgg")

    if __name__ == '__main__':
        train()


It's solved now.
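
For later readers: the commented-out warpctc line in the loop above would not run as written, because in padding mode warpctc expects time-major float logits and int32 labels without the trailing axis that cross_entropy needs. A hedged sketch of the substitution, assuming the model output out is [batch_size, seq_len, num_classes + 1] and label is built without the np.newaxis step:

    # sketch only: replaces the cross_entropy block in the loop above
    # transpose batch-major logits to the time-major layout warpctc expects
    out_tm = fluid.layers.transpose(out, perm=[1, 0, 2])
    # warpctc labels: int32, [batch_size, max_label_length] (no trailing axis)
    ctc_label = fluid.layers.cast(label, dtype='int32')
    loss = fluid.layers.warpctc(input=out_tm, label=ctc_label, blank=0,
                                input_length=data1, label_length=data2)
    avg_loss = fluid.layers.mean(loss)

Note that data1 and data2 hard-code a length of 60; for variable-length batches they should carry the real per-sample logit and label lengths.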

#3 进985 replied 2020-05

I'd really like to get this solved; could someone take a look?

#2 进985 replied 2020-05

Can anyone help take a look at this?
