The Op flatten_contiguous_range_grad doesn't have any grad op.
import paddle

def gradient_penalty(self, batch_x, fake_image):
    batchsz = batch_x.shape[0]

    # batch_x: [b, w, h, c]; sample one interpolation coefficient per sample
    t = paddle.uniform((batchsz, 1, 1, 1))
    t = paddle.broadcast_to(t, batch_x.shape)

    # Interpolate between real and fake images and track gradients w.r.t. it
    interplate: paddle.Tensor = t * batch_x + (1 - t) * fake_image
    interplate.stop_gradient = False

    d_interplate_logits = self.discriminator(interplate)

    # Gradient of the discriminator output w.r.t. the interpolated input;
    # create_graph=True keeps the graph so the penalty itself is differentiable
    grads = paddle.grad(
        outputs=d_interplate_logits,
        inputs=interplate,
        grad_outputs=paddle.ones_like(d_interplate_logits),
        create_graph=True,
        retain_graph=True
    )[0]

    # Flatten per-sample gradients and compute their norm
    grads = paddle.reshape(grads, [grads.shape[0], -1])
    epsilon = 1e-12
    gp = paddle.sqrt(
        paddle.mean(paddle.square(grads), axis=1) + epsilon
    )

    # Penalize deviation of the gradient norm from 1
    gp = paddle.mean((gp - 1) ** 2)
    return gp
The code raises this error: The Op flatten_contiguous_range_grad doesn't have any grad op.
Running the same gradient-penalty function on a network with the same structure works in PyTorch, but Paddle throws this error. Does Paddle not support higher-order differentiation?
Higher-order differentiation is supported, but the usage differs between PyTorch and Paddle; take a look at the API documentation.
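For reference, here is a minimal sketch (not from the original thread) of second-order differentiation with paddle.grad in dynamic-graph mode. Whether a given layer supports double backward still depends on the Paddle version and the operators it uses; the error above indicates the backward of the flatten op has no grad op of its own in that build.

import paddle

# y = x^3 at x = 3: dy/dx = 3x^2 = 27, d2y/dx2 = 6x = 18
x = paddle.to_tensor([3.0], stop_gradient=False)
y = x * x * x

# First-order gradient; create_graph=True keeps the graph differentiable
dy_dx = paddle.grad(outputs=y, inputs=x, create_graph=True, retain_graph=True)[0]

# Second-order gradient, obtained by differentiating the first gradient
d2y_dx2 = paddle.grad(outputs=dy_dx, inputs=x)[0]

print(dy_dx.numpy(), d2y_dx2.numpy())  # [27.] [18.]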
I'd suggest opening an issue on the official site so the Paddle team can take a look.