How to use SpectralNorm (spectral normalization)
B英俊 · posted 2021-04 · Views: 2231 · Replies: 5

How do I regularize the weights of a convolution layer with spectral normalization?
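The docs example, if I'm reading it right, only normalizes a standalone tensor, roughly like this:

import paddle

# Roughly the Paddle 2.x docs example: SpectralNorm is constructed from the
# weight's shape and then called on the tensor itself.
x = paddle.rand([2, 8, 32, 32])
sn = paddle.nn.SpectralNorm(x.shape, dim=1, power_iters=2)
out = sn(x)

But it's not clear how to attach this to a convolution layer's weight.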

5 replies in total · last reply by 189******30 in 2021-05
#6 189******30 replied 2021-05
Replying to #2 ljjfordownload (quoted reply collapsed; full text at #2 below)

Thanks for the detailed explanation!

#5 ck不思考 replied 2021-04
Replying to #2 ljjfordownload (quoted reply collapsed; full text at #2 below)

I started out implementing it along exactly these lines, but SpectralNorm outputs a Tensor, and when I directly assign layer.weight = new_weight I get a type-mismatch error, because layer.weight is a Parameter (or some such type).

Do you know how to perform this weight-update step?
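(Update: looking at the quoted wrapper again, it seems to dodge this by deleting the name from layer._parameters before assigning; once the Parameter registration is gone, Paddle accepts a plain Tensor as the attribute. A minimal sketch of just that trick:)

import paddle.nn as nn

conv = nn.Conv2D(3, 64, kernel_size=3)
new_w = conv.weight * 1.0          # stand-in for the spectrally normalized Tensor
del conv._parameters['weight']     # drop the Parameter registration first
conv.weight = new_w                # a plain Tensor is now accepted as the attribute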

#4 B英俊 replied 2021-04
Replying to #2 ljjfordownload (quoted reply collapsed; full text at #2 below)

I also find this interface hard to use (facepalm). I pushed ahead with it anyway and then got stuck at the jit.save model-saving step; all in all, it's frustrating to work with.

The rewritten approach you provided works great; it ran straight through. Thanks for your help!

#3 ljjfordownload replied 2021-04

I found another spectral_norm implementation in PaddleGAN, and it's the one I use now. I put the code in my work directory as a separate module and import it wherever it's needed. This version feels more correct. (It's strange that PaddlePaddle doesn't officially ship an interface like this.)

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

@paddle.no_grad()
def constant_(x, value):
    temp_value = paddle.full(x.shape, value, x.dtype)
    x.set_value(temp_value)
    return x

@paddle.no_grad()
def normal_(x, mean=0., std=1.):
    temp_value = paddle.normal(mean, std, shape=x.shape)
    x.set_value(temp_value)
    return x

@paddle.no_grad()
def uniform_(x, a=-1., b=1.):
    temp_value = paddle.uniform(min=a, max=b, shape=x.shape)
    x.set_value(temp_value)
    return x

class SpectralNorm(object):
    def __init__(self, name='weight', n_power_iterations=1, dim=0, eps=1e-12):
        self.name = name
        self.dim = dim
        if n_power_iterations <= 0:
            raise ValueError(
                'Expected n_power_iterations to be positive, but '
                'got n_power_iterations={}'.format(n_power_iterations))
        self.n_power_iterations = n_power_iterations
        self.eps = eps

    def reshape_weight_to_matrix(self, weight):
        weight_mat = weight
        if self.dim != 0:
            # transpose dim to front
            weight_mat = weight_mat.transpose([
                self.dim,
                *[d for d in range(weight_mat.dim()) if d != self.dim]
            ])

        height = weight_mat.shape[0]

        return weight_mat.reshape([height, -1])

    def compute_weight(self, layer, do_power_iteration):
        weight = getattr(layer, self.name + '_orig')
        u = getattr(layer, self.name + '_u')
        v = getattr(layer, self.name + '_v')
        weight_mat = self.reshape_weight_to_matrix(weight)

        if do_power_iteration:
            with paddle.no_grad():
                for _ in range(self.n_power_iterations):
                    v.set_value(
                        F.normalize(
                            paddle.matmul(weight_mat,
                                          u,
                                          transpose_x=True,
                                          transpose_y=False),
                            axis=0,
                            epsilon=self.eps,
                        ))

                    u.set_value(
                        F.normalize(
                            paddle.matmul(weight_mat, v),
                            axis=0,
                            epsilon=self.eps,
                        ))
                if self.n_power_iterations > 0:
                    # clone so sigma below is computed from values decoupled
                    # from the buffers updated in-place above
                    u = u.clone()
                    v = v.clone()

        sigma = paddle.dot(u, paddle.mv(weight_mat, v))
        weight = weight / sigma
        return weight

    def remove(self, layer):
        with paddle.no_grad():
            weight = self.compute_weight(layer, do_power_iteration=False)
        delattr(layer, self.name)
        delattr(layer, self.name + '_u')
        delattr(layer, self.name + '_v')
        delattr(layer, self.name + '_orig')

        layer.add_parameter(self.name, weight.detach())

    def __call__(self, layer, inputs):
        setattr(layer, self.name,
                self.compute_weight(layer, do_power_iteration=layer.training))

    @staticmethod
    def apply(layer, name, n_power_iterations, dim, eps):
        for k, hook in layer._forward_pre_hooks.items():
            if isinstance(hook, SpectralNorm) and hook.name == name:
                raise RuntimeError("Cannot register two spectral_norm hooks on "
                                   "the same parameter {}".format(name))

        fn = SpectralNorm(name, n_power_iterations, dim, eps)
        weight = layer._parameters[name]

        with paddle.no_grad():
            weight_mat = fn.reshape_weight_to_matrix(weight)
            h, w = weight_mat.shape

            # randomly initialize u and v
            u = layer.create_parameter([h])
            u = normal_(u, 0., 1.)
            v = layer.create_parameter([w])
            v = normal_(v, 0., 1.)
            u = F.normalize(u, axis=0, epsilon=fn.eps)
            v = F.normalize(v, axis=0, epsilon=fn.eps)

        # delete fn.name from parameters, otherwise the attribute cannot be set
        del layer._parameters[fn.name]
        layer.add_parameter(fn.name + "_orig", weight)
        # still need to assign weight back as fn.name because all sorts of
        # things may assume that it exists, e.g., when initializing weights.
        # However, we can't assign it directly, since a Parameter assignment
        # would get registered as a parameter again. Instead, we set
        # weight * 1.0 (a plain Tensor) as an ordinary attribute.
        setattr(layer, fn.name, weight * 1.0)
        layer.register_buffer(fn.name + "_u", u)
        layer.register_buffer(fn.name + "_v", v)

        layer.register_forward_pre_hook(fn)
        return fn

def spectral_norm(layer,
                  name='weight',
                  n_power_iterations=1,
                  eps=1e-12,
                  dim=None):

    if dim is None:
        if isinstance(layer, (nn.Conv1DTranspose, nn.Conv2DTranspose,
                              nn.Conv3DTranspose, nn.Linear)):
            dim = 1
        else:
            dim = 0
    SpectralNorm.apply(layer, name, n_power_iterations, dim, eps)
    return layer
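
Usage then mirrors torch.nn.utils.spectral_norm: wrap the layer once, and the forward-pre-hook recomputes the weight on every call. A minimal sketch, assuming the code above is saved as spectral.py under work/ (the file name is my choice):

import paddle
import paddle.nn as nn
from spectral import spectral_norm   # the module above

conv = spectral_norm(nn.Conv2D(3, 64, kernel_size=4, stride=2, padding=1))
y = conv(paddle.rand([1, 3, 32, 32]))   # the pre-hook sets conv.weight = weight_orig / sigma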

 

#2 ljjfordownload replied 2021-04

It can't be used directly as a Layer (even though, oddly enough, it inherits from nn.Layer).

Its usage goes roughly like this:

1. Grab the weight parameter of the preceding Layer; suppose the parameter is named weight.

2. Compute the spectrally normalized weight: new_weight = paddle.nn.SpectralNorm(weight.shape, dim=1, power_iters=2)(weight)

3. Set the preceding Layer's weight to the new weight (a minimal sketch of all three steps follows this list).
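
A minimal sketch of the three steps for a Conv2D (dim=0 here; note the extra del line, which is what step 3 glosses over; see #5 above for what goes wrong without it):

import paddle
import paddle.nn as nn

conv = nn.Conv2D(3, 64, kernel_size=3)
weight = conv.weight                                  # 1. grab the preceding Layer's weight
new_weight = nn.SpectralNorm(weight.shape, dim=0, power_iters=2)(weight)  # 2. normalize
del conv._parameters['weight']                        # 3. swap it in; the del is needed because
conv.weight = new_weight                              #    layer.weight is otherwise a Parameter
out = conv(paddle.rand([1, 3, 8, 8]))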

Whoever designed this SpectralNorm interface must be a "genius"; I've never seen an API this hard to use.

In the open-source PaddleGAN project I found some wrappers around SpectralNorm that basically bypass the official interface and hand-roll a new one directly on top of the C++ operator interface.

import paddle
import paddle.nn as nn


class _SpectralNorm(nn.SpectralNorm):
    def __init__(self,
                 weight_shape,
                 dim=0,
                 power_iters=1,
                 eps=1e-12,
                 dtype='float32'):
        super(_SpectralNorm, self).__init__(weight_shape, dim, power_iters, eps,
                                            dtype)

    def forward(self, weight):
        inputs = {'Weight': weight, 'U': self.weight_u, 'V': self.weight_v}
        out = self._helper.create_variable_for_type_inference(self._dtype)
        _power_iters = self._power_iters if self.training else 0
        self._helper.append_op(type="spectral_norm",
                               inputs=inputs,
                               outputs={
                                   "Out": out,
                               },
                               attrs={
                                   "dim": self._dim,
                                   "power_iters": _power_iters,
                                   "eps": self._eps,
                               })

        return out


class Spectralnorm(paddle.nn.Layer):
    def __init__(self, layer, dim=0, power_iters=1, eps=1e-12, dtype='float32'):
        super(Spectralnorm, self).__init__()
        self.spectral_norm = _SpectralNorm(layer.weight.shape, dim, power_iters,
                                           eps, dtype)
        self.dim = dim
        self.power_iters = power_iters
        self.eps = eps
        self.layer = layer
        weight = layer._parameters['weight']
        del layer._parameters['weight']
        self.weight_orig = self.create_parameter(weight.shape,
                                                 dtype=weight.dtype)
        self.weight_orig.set_value(weight)

    def forward(self, x):
        weight = self.spectral_norm(self.weight_orig)
        self.layer.weight = weight
        out = self.layer(x)
        return out

 

When building a network, it is used roughly like this:

sequence = [
    Spectralnorm(
        nn.Conv2D(input_nc,
                  ndf,
                  kernel_size=kw,
                  stride=2,
                  padding=padw)),
    nn.LeakyReLU(0.01)
]
