self.optimizer.step() throws an error, please help!! [Solved]
Paddle Framework · Q&A · Deep Learning Model Training · 634 · 6

The model uses the Adam optimizer. Below are the error message and part of the code. I don't think I ever passed int-typed data into the optimizer?? What is going on here? First time I've run into this. Any pointers would be appreciated.

Traceback (most recent call last):
  File "/home/harry/Python_Demo/Paddlebased/SLBR_motif_removal/train.py", line 191, in 
    main()
  File "/home/harry/Python_Demo/Paddlebased/SLBR_motif_removal/train.py", line 186, in main
    model.train()
  File "/home/harry/Python_Demo/Paddlebased/SLBR_motif_removal/train.py", line 104, in train
    self.train_one_epoch(epoch)
  File "/home/harry/Python_Demo/Paddlebased/SLBR_motif_removal/train.py", line 76, in train_one_epoch
    self.optimizer.step()
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 299, in __impl__
    return func(*args, **kwargs)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
    return wrapped_func(*args, **kwargs)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/fluid/framework.py", line 434, in __impl__
    return func(*args, **kwargs)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/optimizer/adam.py", line 451, in step
    loss=None, startup_program=None, params_grads=params_grads)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/optimizer/optimizer.py", line 954, in _apply_optimize
    params_grads, self.regularization)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/optimizer/optimizer.py", line 1039, in append_regularization_ops
    regularization)
  File "/home/harry/.conda/envs/Paddle/lib/python3.7/site-packages/paddle/optimizer/optimizer.py", line 986, in _create_regularization_of_grad
    regularization_term = regularization(param, grad, grad.block)
TypeError: 'int' object is not callable

Process finished with exit code 1

    def __init__(self, args):
        # Load command-line arguments
        self.args = args

        # Build the model
        print("==> Creating model")
        self.model = SLBR(args=self.args)
        print("==> Model created successfully")

        # Learning-rate schedule and optimizer
        self.scheduler = paddle.optimizer.lr.StepDecay(learning_rate=self.args.lr,
                                                       step_size=self.args.schedule,
                                                       gamma=self.args.gamma)
        # self.scheduler = 1e-3  # a learning rate that changes with the epoch
        self.optimizer = paddle.optimizer.Adam(parameters=self.model.parameters(),
                                               learning_rate=self.scheduler,
                                               beta1=self.args.beta1,
                                               beta2=self.args.beta2,
                                               weight_decay=self.args.weight_decay)
        # self.lr = self.args.lr

        # Load the datasets
        train_sets = DEHW(root=self.args.train_sets, train=True)
        self.len_train_sets = len(train_sets)
        self.train_loader = DataLoader(train_sets, batch_size=self.args.batch_size,
                                       num_workers=self.args.num_workers)
        val_sets = DEHW(root=self.args.val_sets, train=False)
        self.len_val_sets = len(val_sets)
        self.val_loader = DataLoader(val_sets, batch_size=1, num_workers=2)

        # Main log directory "checkpoint"; create it if it does not exist
        check_dir(os.path.join(self.args.checkpoint))
        # Create the tensorboard writer
        self.tensorboard = Tensorboard(mode="paddle", logdir=self.args.tensorboard_path)

        # Bookkeeping for a few metrics
        self.best_acc = 0
        self.is_best = False
        self.current_epoch = 0
        self.metric = -100000
        self.hl = 6 if self.args.hl else 1
        self.step = 0

        # Loss functions
        self.loss = Losses(self.args)

        # Metric computation (note: this overwrites self.metric set above)
        self.metric = Metrics()

        print('==> Total parameters: %.2fM' % (sum(p.numel() for p in self.model.parameters()) / 1e6))
        print('==> Current checkpoint directory: %s' % (self.args.checkpoint))

    def train_one_epoch(self, epoch):
        self.model.train()
        current_batch = 0
        for i, (inputs, gts, masks) in enumerate(self.train_loader):
            self.optimizer.clear_grad()
            current_batch += self.args.batch_size
            outputs = self.model(inputs)
            coarse_loss, refine_loss, style_loss, mask_loss = self.loss(
                outputs[0], gts, outputs[1], masks)
            total_loss = self.args.lambda_l1 * (coarse_loss + refine_loss) + \
                self.args.lambda_mask * mask_loss + style_loss
            total_loss.backward()
            self.optimizer.step()  # this line raises the error
All replies (6)
Sorted by time
12123chenhao
#2 · Replied 2022-07

Check whether one of the inputs to your loss function is on the CPU and one on the GPU.
dreamTyou
#3 · Replied 2022-07

> Check whether one of the inputs to your loss function is on the CPU and one on the GPU

They are all on gpu:0.
dreamTyou
#4 · Replied 2022-07 · Accepted answer

> Check whether one of the inputs to your loss function is on the CPU and one on the GPU

Found the cause: one of the arguments the argparse parser passes to the optimizer had somehow become an int.

The optimizer is constructed like this:

self.optimizer = paddle.optimizer.Adam(parameters=self.model.parameters(),
                                       learning_rate=self.scheduler,
                                       beta1=self.args.beta1,
                                       beta2=self.args.beta2,
                                       weight_decay=self.args.weight_decay)

The weight_decay argument is where the problem is.
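That also explains the traceback: judging from the failing line `regularization_term = regularization(param, grad, grad.block)`, a float weight_decay appears to get wrapped into a regularizer object, while other non-None values are passed through and later called as functions. A minimal, simplified sketch of that dispatch (assumed behavior — L2Decay here is a hypothetical stand-in, not Paddle's real class):

```python
# Simplified, hypothetical sketch of the dispatch implied by the
# traceback -- NOT Paddle's actual source.  A float weight_decay is
# wrapped into a callable regularizer; any other non-None value is
# assumed to already be a regularizer and is called later as-is.

class L2Decay:
    """Stand-in for a weight-decay regularizer object."""
    def __init__(self, coeff):
        self.coeff = coeff

    def __call__(self, param, grad, block=None):
        # Contribution of the L2 penalty to the gradient: coeff * param
        return self.coeff * param


def make_regularization(weight_decay):
    if weight_decay is None:
        return None
    if isinstance(weight_decay, float):      # 0.0 takes this branch
        return L2Decay(weight_decay)
    return weight_decay                      # int 0 falls through unchanged


def apply_regularization(regularization, param, grad):
    if regularization is None:
        return grad
    # Mirrors the failing line in the traceback:
    #   regularization_term = regularization(param, grad, grad.block)
    return grad + regularization(param, grad)


print(callable(make_regularization(0.0)))   # True  -- wrapped regularizer
print(callable(make_regularization(0)))     # False -- still an int

try:
    apply_regularization(make_regularization(0), param=1.0, grad=0.5)
except TypeError as e:
    print(e)                                # 'int' object is not callable
```

So the int 0 survives unchecked all the way to the point where it is invoked as a regularizer, and only there does it blow up.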

parser.add_argument('--weight_decay', default=0, type=float, help='weight decay (default: 0)')    # passing this in raises the error
After the fix:
parser.add_argument('--weight_decay', default=0.0, type=float, help='weight decay (default: 0)')  # this works


The first default is actually an int; only the second one is a float. I assumed that since type=float was specified the value would become a float either way, but type only applies to values passed in on the command line. What a trap. PyTorch doesn't have this problem.
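That argparse behavior is easy to demonstrate: `type` is only applied to strings read from the command line, while `default` is stored untouched.

```python
import argparse

parser = argparse.ArgumentParser()
# type=float converts values read from the command line;
# the default is stored as-is, so default=0 stays an int.
parser.add_argument('--weight_decay', default=0, type=float)

args = parser.parse_args([])                        # no CLI value: default used
print(type(args.weight_decay))                      # <class 'int'>

args = parser.parse_args(['--weight_decay', '0'])   # CLI value: converted
print(type(args.weight_decay))                      # <class 'float'>
```

So either write the default as 0.0 or always pass --weight_decay explicitly on the command line.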
beyondyourself
#6 · Replied 2022-07

This is just sloppy coding, not a framework problem.
dreamTyou
#7 · Replied 2022-07

> This is just sloppy coding, not a framework problem

True, it is sloppy. But that is exactly how other people's PyTorch code is written, and reproducing it with Paddle means stepping on this trap.
李长安
#8 · Replied 2022-07

Nice work!