ljjfordownload
How to create learnable parameters in dynamic graph mode
Their reply: I downloaded the source code to have a look, and there seem to be two approaches worth trying:
1. Create data of type ParamBase (nowhere to be found in the docs, which is absurd).
2. Create a custom Layer that inherits from paddle.fluid.dygraph.layers.Layer, then call self.create_parameter in its __init__ to create the learnable parameters (also undocumented... I found it inside the implementation of paddle.nn.Linear).

PaddlePaddle's dynamic graph support is still far from complete; I suggest downloading the source code and studying it yourself. I haven't tried either approach yet; once I get results I'll follow up with a more concrete answer. A minimal sketch of the second approach is given below.
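A minimal sketch of the second approach (untested by me, assuming the create_parameter API seen in the Linear source; the layer name ScaleLayer and its per-channel scale are made up for illustration — the next reply suggests inheriting from nn.Layer is enough):

```python
import paddle
import paddle.nn as nn


class ScaleLayer(nn.Layer):
    """Hypothetical layer with one learnable per-channel scale."""

    def __init__(self, num_channels):
        super(ScaleLayer, self).__init__()
        # create_parameter registers the tensor as a trainable parameter
        self.scale = self.create_parameter(
            shape=[num_channels], dtype='float32', is_bias=False)

    def forward(self, x):
        return x * self.scale
```

A parameter created this way shows up in layer.parameters() and is updated by the optimizer like any built-in weight.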
How to create learnable parameters in dynamic graph mode
Their reply: You may not need to inherit from paddle.fluid.dygraph.layers.Layer; inheriting from nn.Layer is enough. In the source code, the Linear layer defines its parameters roughly like this:

```python
def __init__(self,
             in_features,
             out_features,
             weight_attr=None,
             bias_attr=None,
             name=None):
    super(Linear, self).__init__()
    self._dtype = self._helper.get_default_dtype()
    self._weight_attr = weight_attr
    self._bias_attr = bias_attr
    self.weight = self.create_parameter(
        shape=[in_features, out_features],
        attr=self._weight_attr,
        dtype=self._dtype,
        is_bias=False)
    self.bias = self.create_parameter(
        shape=[out_features],
        attr=self._bias_attr,
        dtype=self._dtype,
        is_bias=True)
    self.name = name
```
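For reference, a quick way to check that parameters created via create_parameter are actually trainable (a sketch, assuming the standard paddle.optimizer API):

```python
import paddle
import paddle.nn as nn

layer = nn.Linear(4, 2)
opt = paddle.optimizer.SGD(learning_rate=0.1,
                           parameters=layer.parameters())

x = paddle.randn([8, 4])
loss = layer(x).mean()
loss.backward()
opt.step()  # weight and bias created by create_parameter get updated
```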
How to use SpectralNorm spectral normalization
Their reply: It can't be used directly as a Layer (even though, oddly, it inherits from nn.Layer). Its usage goes roughly like this:
1. Take the weight parameter of the preceding Layer; suppose the parameter is named weight.
2. Compute the normalized weight: new_weight = paddle.nn.SpectralNorm(weight.shape, dim=1, power_iters=2)(weight)
3. Set the preceding Layer's weight to the new weight.

Whoever designed this SpectralNorm interface is a real genius; I have never seen an interface this hard to use. In the open-source project PaddleGAN I found some wrappers around SpectralNorm that essentially bypass the official interface and hand-roll a new one directly on top of the C++ op:

```python
import paddle
import paddle.nn as nn


class _SpectralNorm(nn.SpectralNorm):
    def __init__(self,
                 weight_shape,
                 dim=0,
                 power_iters=1,
                 eps=1e-12,
                 dtype='float32'):
        super(_SpectralNorm, self).__init__(weight_shape, dim, power_iters,
                                            eps, dtype)

    def forward(self, weight):
        # call the spectral_norm C++ op directly instead of the Layer API
        inputs = {'Weight': weight, 'U': self.weight_u, 'V': self.weight_v}
        out = self._helper.create_variable_for_type_inference(self._dtype)
        _power_iters = self._power_iters if self.training else 0
        self._helper.append_op(type="spectral_norm",
                               inputs=inputs,
                               outputs={
                                   "Out": out,
                               },
                               attrs={
                                   "dim": self._dim,
                                   "power_iters": _power_iters,
                                   "eps": self._eps,
                               })
        return out


class Spectralnorm(paddle.nn.Layer):
    def __init__(self, layer, dim=0, power_iters=1, eps=1e-12,
                 dtype='float32'):
        super(Spectralnorm, self).__init__()
        self.spectral_norm = _SpectralNorm(layer.weight.shape, dim,
                                           power_iters, eps, dtype)
        self.dim = dim
        self.power_iters = power_iters
        self.eps = eps
        self.layer = layer
        # move the wrapped layer's weight into this wrapper as weight_orig
        weight = layer._parameters['weight']
        del layer._parameters['weight']
        self.weight_orig = self.create_parameter(weight.shape,
                                                 dtype=weight.dtype)
        self.weight_orig.set_value(weight)

    def forward(self, x):
        # recompute the normalized weight and hand it to the wrapped layer
        weight = self.spectral_norm(self.weight_orig)
        self.layer.weight = weight
        out = self.layer(x)
        return out
```

When building a network, it is used roughly like this:

```python
sequence = [
    Spectralnorm(
        nn.Conv2D(input_nc,
                  ndf,
                  kernel_size=kw,
                  stride=2,
                  padding=padw)),
    nn.LeakyReLU(0.01)
]
```
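A quick smoke test of the wrapper above (a sketch; the channel counts and image size are arbitrary):

```python
import paddle
import paddle.nn as nn

# assumes the Spectralnorm class from the snippet above is in scope
conv = Spectralnorm(nn.Conv2D(3, 8, kernel_size=3, stride=2, padding=1))
x = paddle.randn([1, 3, 64, 64])
y = conv(x)
print(y.shape)  # expected: [1, 8, 32, 32]
```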
How to use SpectralNorm spectral normalization
Their reply: I found another spectral_norm implementation in PaddleGAN, and this is the one I use now. I put this code in the work directory as an extra module and import it whenever I need it. This one feels more correct (strange that PaddlePaddle doesn't officially provide an interface like this):

```python
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


@paddle.no_grad()
def constant_(x, value):
    temp_value = paddle.full(x.shape, value, x.dtype)
    x.set_value(temp_value)
    return x


@paddle.no_grad()
def normal_(x, mean=0., std=1.):
    temp_value = paddle.normal(mean, std, shape=x.shape)
    x.set_value(temp_value)
    return x


@paddle.no_grad()
def uniform_(x, a=-1., b=1.):
    temp_value = paddle.uniform(min=a, max=b, shape=x.shape)
    x.set_value(temp_value)
    return x


class SpectralNorm(object):
    def __init__(self, name='weight', n_power_iterations=1, dim=0, eps=1e-12):
        self.name = name
        self.dim = dim
        if n_power_iterations <= 0:
            raise ValueError(
                'Expected n_power_iterations to be positive, but '
                'got n_power_iterations={}'.format(n_power_iterations))
        self.n_power_iterations = n_power_iterations
        self.eps = eps

    def reshape_weight_to_matrix(self, weight):
        weight_mat = weight
        if self.dim != 0:
            # transpose dim to front
            weight_mat = weight_mat.transpose([
                self.dim,
                *[d for d in range(weight_mat.dim()) if d != self.dim]
            ])
        height = weight_mat.shape[0]
        return weight_mat.reshape([height, -1])

    def compute_weight(self, layer, do_power_iteration):
        weight = getattr(layer, self.name + '_orig')
        u = getattr(layer, self.name + '_u')
        v = getattr(layer, self.name + '_v')
        weight_mat = self.reshape_weight_to_matrix(weight)

        if do_power_iteration:
            with paddle.no_grad():
                for _ in range(self.n_power_iterations):
                    v.set_value(
                        F.normalize(
                            paddle.matmul(weight_mat,
                                          u,
                                          transpose_x=True,
                                          transpose_y=False),
                            axis=0,
                            epsilon=self.eps,
                        ))
                    u.set_value(
                        F.normalize(
                            paddle.matmul(weight_mat, v),
                            axis=0,
                            epsilon=self.eps,
                        ))
                if self.n_power_iterations > 0:
                    u = u.clone()
                    v = v.clone()

        sigma = paddle.dot(u, paddle.mv(weight_mat, v))
        weight = weight / sigma
        return weight

    def remove(self, layer):
        with paddle.no_grad():
            weight = self.compute_weight(layer, do_power_iteration=False)
        delattr(layer, self.name)
        delattr(layer, self.name + '_u')
        delattr(layer, self.name + '_v')
        delattr(layer, self.name + '_orig')
        layer.add_parameter(self.name, weight.detach())

    def __call__(self, layer, inputs):
        setattr(layer, self.name,
                self.compute_weight(layer, do_power_iteration=layer.training))

    @staticmethod
    def apply(layer, name, n_power_iterations, dim, eps):
        for k, hook in layer._forward_pre_hooks.items():
            if isinstance(hook, SpectralNorm) and hook.name == name:
                raise RuntimeError("Cannot register two spectral_norm hooks on "
                                   "the same parameter {}".format(name))

        fn = SpectralNorm(name, n_power_iterations, dim, eps)
        weight = layer._parameters[name]

        with paddle.no_grad():
            weight_mat = fn.reshape_weight_to_matrix(weight)
            h, w = weight_mat.shape

            # randomly initialize u and v
            u = layer.create_parameter([h])
            u = normal_(u, 0., 1.)
            v = layer.create_parameter([w])
            v = normal_(v, 0., 1.)
            u = F.normalize(u, axis=0, epsilon=fn.eps)
            v = F.normalize(v, axis=0, epsilon=fn.eps)

        # delete fn.name from parameters, otherwise you cannot set the attribute
        del layer._parameters[fn.name]
        layer.add_parameter(fn.name + "_orig", weight)
        # still need to assign weight back as fn.name because all sorts of
        # things may assume that it exists, e.g., when initializing weights.
        # However, we can't directly assign as it could be a Parameter and
        # would get added as a parameter. Instead, we register weight * 1.0
        # as a plain attribute.
        setattr(layer, fn.name, weight * 1.0)
        layer.register_buffer(fn.name + "_u", u)
        layer.register_buffer(fn.name + "_v", v)

        layer.register_forward_pre_hook(fn)
        return fn


def spectral_norm(layer,
                  name='weight',
                  n_power_iterations=1,
                  eps=1e-12,
                  dim=None):
    if dim is None:
        if isinstance(layer, (nn.Conv1DTranspose, nn.Conv2DTranspose,
                              nn.Conv3DTranspose, nn.Linear)):
            dim = 1
        else:
            dim = 0
    SpectralNorm.apply(layer, name, n_power_iterations, dim, eps)
    return layer
```
Loading unlabeled data with DataLoader
Their reply: Changing random_img = paddle.randn((3,64,64)) to random_img = paddle.randn((1,3,64,64)) solves the problem. For a real image, a reshape(1,3,64,64) adds the batch dimension and solves it as well; see the sketch below.
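A minimal sketch of the fix (the 3×64×64 image size comes from the question; paddle.unsqueeze is an equivalent alternative to reshape for adding the batch dimension):

```python
import paddle

# a single CHW image with no label
img = paddle.randn([3, 64, 64])

# add the leading batch dimension the model expects (NCHW)
batched = img.reshape([1, 3, 64, 64])
# equivalently:
batched = paddle.unsqueeze(img, axis=0)
print(batched.shape)  # [1, 3, 64, 64]
```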