Homework Thread | Baidu Deep Learning Camp
AI Studio Education · Other Teacher Training · 798207 953

The Baidu Deep Learning Camp is now officially open. Each stage's homework carries its own rewards. Welcome, and happy learning!

PS: If the thread has expired or your post fails review, first copy your content into a Word document, then follow the prompts to complete identity verification; after refreshing, paste the saved content back in and submit.

Everyone is welcome to sign up!

January 9 homework:

Homework 9-1: Chapter 2 covered how to schedule learning-rate decay; here we suggest piecewise decay with a decay factor of 0.1. Given ResNet's current training behavior, after how many steps should the decay kick in? Set up the learning-rate decay and retrain the ResNet model on the iChallenge-PM eye-disease dataset.

Homework 9-1 reward: 5 students drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 9-1: XXX

Prize-draw deadline: before 12:00 noon, January 13, 2020

Homework 9-2 reward: 5 students drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 9-2: XXX

Prize-draw deadline: before 12:00 noon, January 13, 2020

 

January 7 homework:

Homework 8: If the Sigmoid activations in LeNet's intermediate layers are replaced with ReLU, what results do you get on the fundus screening dataset? Does the loss still converge, and is the difference between ReLU and Sigmoid the cause of any difference in results? Share your view.

Homework 8 reward: 5 students drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 8: XXX

Winners: #820 thunder95, #819 你还说不想我吗, #818 百度用户#0762194095, #817 呵赫 he, #816 星光1dl

January 2 homework

Homework 7-1: Count the total number of multiply and add operations in a convolution

The input shape is [10, 3, 224, 224]; the kernel has kh = kw = 3; there are 64 output channels; stride = 1; padding ph = pw = 1

How many multiply and add operations does this convolution take in total?

Hint: first work out how many multiply and add operations one output pixel takes, then compute the total number of operations
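The hint can be checked with a few lines of plain Python (a quick sanity sketch using the shapes from the prompt, not an official solution):

```python
# Count multiply/add operations for the conv layer in the prompt:
# input [N, C, H, W] = [10, 3, 224, 224], 3x3 kernel, 64 output channels,
# stride 1, padding 1 (so the spatial output stays 224x224).
N, C, H, W = 10, 3, 224, 224
kh = kw = 3
out_channels, stride, pad = 64, 1, 1

out_h = (H + 2 * pad - kh) // stride + 1
out_w = (W + 2 * pad - kw) // stride + 1

# One output pixel: kh*kw*C multiplications; kh*kw*C - 1 additions to sum
# the products, plus 1 addition for the bias.
muls_per_pixel = kh * kw * C
adds_per_pixel = (kh * kw * C - 1) + 1

total_muls = N * out_channels * out_h * out_w * muls_per_pixel
total_adds = N * out_channels * out_h * out_w * adds_per_pixel
print(total_muls, total_adds)  # 867041280 867041280
```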

Submission: reply with the number of multiply and add operations, e.g.: multiplications 1000, additions 1000

Homework 7-1 reward: 5 winners drawn for a custom PaddlePaddle notebook + data cable; deadline before 12:00 noon, January 6, 2020

Reply format: Homework 7-1: XXX

Homework 7-2 reward: 5 winners drawn from the correct answers receive a custom PaddlePaddle notebook + a 50-yuan JD gift card; deadline before 12:00 noon, January 6, 2020

 

December 31 homework

Homework 6-1:

1. Print each layer's output of a plain neural network and inspect it
2. Plot the classification accuracy metric with matplotlib (plt)
3. Use classification accuracy to judge how well models trained with different loss functions perform
4. Plot the model's loss curves on the training and test sets as training progresses
5. Tune the regularization weight, observe how the curves from step 4 change, and explain why
Homework 6-1 reward: 5 winners drawn for a custom PaddlePaddle notebook + data cable. Reply format: Homework 6-1: XXX
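Step 3 above boils down to comparing models by their classification accuracy; a minimal sketch of that metric in plain Python (the predictions and labels below are made-up placeholders for illustration, not real MNIST outputs):

```python
# Compare models trained with different losses by classification accuracy.
def accuracy(preds, labels):
    # fraction of samples whose predicted class matches the label
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

# Hypothetical predicted classes from two models on the same 10 samples
preds_cross_entropy = [7, 2, 1, 0, 4, 1, 4, 9, 5, 9]
preds_square_error  = [7, 2, 1, 0, 4, 1, 4, 8, 5, 9]
labels              = [7, 2, 1, 0, 4, 1, 4, 9, 6, 9]

print(accuracy(preds_cross_entropy, labels))  # 0.9
print(accuracy(preds_square_error, labels))   # 0.8
```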

Homework 6-2:

Run the minimal version of homework 3 from the AI Studio course 《百度架构师手把手教深度学习》 correctly, analyze problems or places worth optimizing in the training process, and optimize along the following lines:

(1) Samples: data augmentation methods

(2) Hypothesis: improve the network model

(3) Loss: try different loss functions

(4) Optimization: try different optimizers and learning rates

Goal: achieve the highest possible classification accuracy on the MNIST test set

Submit the code and model that achieve your best accuracy; we will pick the top 10 results for prizes

Homework 6-2 reward: custom PaddlePaddle notebook + 50-yuan JD gift card

 

December 25 homework

December 23 homework

Homework 4-1: Run homework 2 on AI Studio and complete the house-price prediction model with deep learning

Homework 4-1 reward: custom PaddlePaddle notebook + the textbook 《深度学习导论与应用实践》, awarded to submissions ranked 2, 3, 23, 123, 223, 323, …

Homework 4-2: Answer the question below and reply under this thread:

Having written the house-price predictor in different ways, in what respects are the model written in plain Python and the model built on PaddlePaddle similar or different? For example: program structure, ease of coding, prediction quality, training time, and so on.

Reply format: Homework 4-2: XXX

Homework 4-2 reward: among submissions posted before 12:00 noon on December 27 (this Friday), we will pick the top five and send a Baidu custom data cable + the textbook 《深度学习导论与应用实践》


December 17 homework

Answer the two questions below and post your answers under this thread. Reply format: Homework 3-1 (1) XX (2) XX

Homework reward: among submissions posted before 12:00 noon on December 20, 2019, 5 students will be drawn at random for feedback; the prize is a notebook + data cable

December 12 homework

Winner: 12th place: 飞天雄者

December 10 homework
Homework 1-1: Run the house-price prediction example on AI Studio: https://aistudio.baidu.com/aistudio/education/group/info/888

Homework 1-1 reward: the first 3 students to finish, plus those finishing 6th, 66th, 166th, 266th, 366th, 466th, 566th, and 666th, receive a PaddlePaddle gift pack: PaddlePaddle cap, data cable, and custom-logo pen

The homework 1-1 winners are shown in the image:

Homework 1-2: Answer the two questions below and post your answers under this thread
① By analogy with the Newton's second law example, what problems in your work or life could be framed as supervised learning? What are the hypothesis and parameters? What is the optimization objective?
② Why do AI engineers have good career prospects? How would you explain it from an economics (supply and demand) perspective?
Homework 1-2 reward: the top 5 replies by likes receive the textbook 《深度学习导论与应用实践》 + a custom PaddlePaddle notebook

Top 5 by likes: 1. 飞天雄者  2. God_s_apple  3. 177*******62  4. 学痞龙  5. 故乡237、qq526557820

Homework deadline: January 10, 2020. Only those who finish before then qualify for the final Mac grand-prize draw.

 

How to sign up:

1. Join QQ group 726887660; the class advisor posts study materials, Q&A, prizes, and other activities there

2. Follow this link to enroll in the course and practice: https://aistudio.baidu.com/aistudio/course/introduce/888

Friendly reminder: course recordings are uploaded to the AI Studio course 《百度架构师手把手教深度学习》 within 3 working days

 

All comments (953)
友友的路
#882 · replied 2020-01

I'm away on a business trip and can only catch up on homework at weekends, so this one is a bit late.

Homework 7-1:

One output pixel takes:

9 multiplications

8 additions + 1 bias addition = 9 additions

So:

Total multiplications: 224*224*3*64*10*9 = 867041280

Total additions: 224*224*3*64*10*9 = 867041280

 

Homework 7-2:

AIStudio179297
#883 · replied 2020-01

Homework 9-1:

Results with learning rate 0.001:

start training ...
epoch: 0, batch_id: 0, loss is: [0.68759763]
epoch: 0, batch_id: 10, loss is: [0.66900766]
epoch: 0, batch_id: 20, loss is: [0.63202274]
epoch: 0, batch_id: 30, loss is: [0.6028468]
[validation] accuracy/loss: 0.6825000047683716/0.5756283402442932
epoch: 1, batch_id: 0, loss is: [0.6199329]
epoch: 1, batch_id: 10, loss is: [0.6663334]
epoch: 1, batch_id: 20, loss is: [0.6462129]
epoch: 1, batch_id: 30, loss is: [0.4021377]
[validation] accuracy/loss: 0.7124999761581421/0.5426428318023682
epoch: 2, batch_id: 0, loss is: [0.3464933]
epoch: 2, batch_id: 10, loss is: [0.24527419]
epoch: 2, batch_id: 20, loss is: [0.2241699]
epoch: 2, batch_id: 30, loss is: [0.26528594]
[validation] accuracy/loss: 0.92249995470047/0.25362706184387207
epoch: 3, batch_id: 0, loss is: [0.30517268]
epoch: 3, batch_id: 10, loss is: [1.5472536]
epoch: 3, batch_id: 20, loss is: [1.374891]
epoch: 3, batch_id: 30, loss is: [0.41873083]
[validation] accuracy/loss: 0.9325000643730164/0.20503848791122437
epoch: 4, batch_id: 0, loss is: [0.12756532]
epoch: 4, batch_id: 10, loss is: [0.288566]
epoch: 4, batch_id: 20, loss is: [0.69378877]
epoch: 4, batch_id: 30, loss is: [0.24086866]
[validation] accuracy/loss: 0.8999999761581421/0.35233309864997864

Looking at the loss curve, after about 170 iterations the loss still grows and fluctuates, so that is the point at which to apply the learning-rate decay

fly
#884 · replied 2020-01

Homework 9-1: Chapter 2 covered how to schedule learning-rate decay; here we suggest piecewise decay with a decay factor of 0.1. Given ResNet's current training behavior, after how many steps should the decay kick in? Set up the learning-rate decay and retrain the ResNet model on the iChallenge-PM eye-disease dataset.

Observation suggests that around step 16 is suitable

Using piecewise learning-rate decay with factor 0.1

Homework 9-2: GoogLeNet supplement

Network changes

Training-loop changes

Training status:

学习使我快乐
#885 · replied 2020-01

Homework 9-1: with the base learning rate set to 0.1, repeated experiments show that decaying every two epochs works well; moreover, an initial learning rate of 0.001 turns out to be a better fit

AIStudio179297
#886 · replied 2020-01

Homework 9-2:

The code changes are as follows:

        self.Incep4a_1 = Pool2D(self.full_name(), pool_stride=1, global_pooling=True, pool_type='avg')
        self.Incep4a_2 = FC(self.full_name(), size=128)
        self.Incep4a_3 = FC(self.full_name(), size=1024)
        self.Incep4a_4 = FC(self.full_name(), size=1)

        self.Incep4d_1 = Pool2D(self.full_name(), pool_stride=1, global_pooling=True, pool_type='avg')
        self.Incep4d_2 = FC(self.full_name(), size=128)
        self.Incep4d_3 = FC(self.full_name(), size=1024)
        self.Incep4d_4 = FC(self.full_name(), size=1)

    def forward(self, x, mode=None):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))
        block4_1 = self.block4_1(x)
        x = self.block4_3(self.block4_2(block4_1))
        block4_4 = self.block4_4(x)
        x = self.pool4(self.block4_5(block4_4))
        x = self.pool5(self.block5_2(self.block5_1(x)))
        x = self.fc(x)
        if mode == 'train':
            Inception_4a = self.Incep4a_4(self.Incep4a_3(self.Incep4a_2(self.Incep4a_1(block4_1))))
            Inception_4d = self.Incep4d_4(self.Incep4d_3(self.Incep4d_2(self.Incep4d_1(block4_4))))
            return Inception_4a, Inception_4d, x
        return x

                # Run the forward pass to get predictions
                Incep_4a, Incep_4d, logits = model(img, 'train')
                # Compute the weighted losses
                loss_1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label) * 0.6
                loss_2 = fluid.layers.sigmoid_cross_entropy_with_logits(Incep_4a, label) * 0.2
                loss_3 = fluid.layers.sigmoid_cross_entropy_with_logits(Incep_4d, label) * 0.2
                avg_loss = fluid.layers.mean(loss_1 + loss_2 + loss_3)

Training results:

start training ...
epoch: 0, batch_id: 0, loss is: [0.85753334]
epoch: 0, batch_id: 10, loss is: [0.6495509]
epoch: 0, batch_id: 20, loss is: [0.80458724]
epoch: 0, batch_id: 30, loss is: [0.7380688]
[validation] accuracy/loss: 0.5275000333786011/0.6729762554168701
epoch: 1, batch_id: 0, loss is: [0.7021468]
epoch: 1, batch_id: 10, loss is: [0.6514107]
epoch: 1, batch_id: 20, loss is: [0.70616704]
epoch: 1, batch_id: 30, loss is: [0.6673646]

边陲
#887 · replied 2020-01

Homework 9-1:

Optimizer configuration

Define the optimizer

Training and evaluation

The model does best at epochs 7-8, with test-set accuracy reaching 0.97

边陲
#888 · replied 2020-01

Homework 9-2:

Modified on top of the network structure from the course materials:

Training-loop changes:

Training and evaluation

The model does best at epoch 8, with accuracy around 0.96

tichen858
#889 · replied 2020-01

Homework 9-1:

The minimum appears between training epochs 3 and 4, so the boundary can be set at 3 × the total number of images

Homework 9-2:

The modified code:

class GoogLeNet2(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(GoogLeNet2, self).__init__(name_scope)
        # GoogLeNet has five modules, each followed by a pooling layer
        # The first module contains 1 conv layer
        self.conv1 = Conv2D(self.full_name(), num_filters=64, filter_size=7,
                            padding=3, act='relu')
        # 3x3 max pooling
        self.pool1 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # The second module contains 2 conv layers
        self.conv2_1 = Conv2D(self.full_name(), num_filters=64,
                              filter_size=1, act='relu')
        self.conv2_2 = Conv2D(self.full_name(), num_filters=192,
                              filter_size=3, padding=1, act='relu')
        # 3x3 max pooling
        self.pool2 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # The third module contains 2 Inception blocks
        self.block3_1 = Inception(self.full_name(), 64, (96, 128), (16, 32), 32)
        self.block3_2 = Inception(self.full_name(), 128, (128, 192), (32, 96), 64)
        # 3x3 max pooling
        self.pool3 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # The fourth module contains 5 Inception blocks
        self.block4_1 = Inception(self.full_name(), 192, (96, 208), (16, 48), 64)
        self.block4_2 = Inception(self.full_name(), 160, (112, 224), (24, 64), 64)
        self.block4_3 = Inception(self.full_name(), 128, (128, 256), (24, 64), 64)
        self.block4_4 = Inception(self.full_name(), 112, (144, 288), (32, 64), 64)
        self.block4_5 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        # 3x3 max pooling
        self.pool4 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # The fifth module contains 2 Inception blocks
        self.block5_1 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        self.block5_2 = Inception(self.full_name(), 384, (192, 384), (48, 128), 128)
        # Global pooling: with global_pooling=True, pool_stride has no effect
        self.pool5 = Pool2D(self.full_name(), pool_stride=1,
                            global_pooling=True, pool_type='avg')
        self.fc = FC(self.full_name(), size=1)

        # First auxiliary-output branch
        self.br_Pool = Pool2D(self.full_name(), pool_stride=1, pool_type='avg', global_pooling=True)
        self.br_FC1 = FC(self.full_name(), size=128)
        self.br_FC2 = FC(self.full_name(), size=1024)
        self.br_FC3 = FC(self.full_name(), size=1)
        # Second auxiliary-output branch
        self.br1_Pool = Pool2D(self.full_name(), pool_stride=1, pool_type='avg', global_pooling=True)
        self.br1_FC1 = FC(self.full_name(), size=128)
        self.br1_FC2 = FC(self.full_name(), size=1024)
        self.br1_FC3 = FC(self.full_name(), size=1)

    def forward(self, x):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))
        x = self.block4_1(x)
        b1 = x

        x = self.block4_4(self.block4_3(self.block4_2(x)))
        b2 = x

        x = self.pool4(self.block4_5(x))
        x = self.pool5(self.block5_2(self.block5_1(x)))
        x = self.fc(x)
        out1 = self.br_FC3(self.br_FC2(self.br_FC1(self.br_Pool(b1))))
        out2 = self.br1_FC3(self.br1_FC2(self.br1_FC1(self.br1_Pool(b2))))
        # x = x*0.6 + out1*0.2 + out2*0.2
        return x, out1, out2

# Training procedure
def train(model):
    with fluid.dygraph.guard():
        print('start training ... ')
        model.train()
        epoch_num = 5
        # Define the optimizer
        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
        # Define the data loaders for training and validation
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get predictions
                logits, logits1, logits2 = model(img)
                # Compute the weighted loss over the three outputs
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(logits2, label)
                loss = loss*0.6 + loss1*0.2 + loss2*0.2
                avg_loss = fluid.layers.mean(loss)

                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                # Backprop, update weights, clear gradients
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get predictions
                logits, logits1, logits2 = model(img)
                # Binary classification: threshold the sigmoid output at 0.5
                # Compute sigmoid probabilities, then the loss
                pred = fluid.layers.sigmoid(logits)
                pred1 = fluid.layers.sigmoid(logits1)
                pred2 = fluid.layers.sigmoid(logits2)
                pred = pred*0.6 + pred1*0.2 + pred2*0.2
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(logits2, label)
                loss = loss*0.6 + loss1*0.2 + loss2*0.2
                # Probability of the negative class
                pred_neg = pred * (-1.0) + 1.0
                # Concatenate the two class probabilities along axis 1
                pred = fluid.layers.concat([pred_neg, pred], axis=1)
                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())
            print("[validation] accuracy/loss: {}/{}".format(np.mean(accuracies), np.mean(losses)))
            model.train()

        # save params of model
        fluid.save_dygraph(model.state_dict(), 'mnist')
        # save optimizer state
        fluid.save_dygraph(opt.state_dict(), 'mnist')

with fluid.dygraph.guard():
    model = GoogLeNet2("GoogLeNet2")

train(model)

The run results are:

start training ...
epoch: 0, batch_id: 0, loss is: [0.7137656]
epoch: 0, batch_id: 10, loss is: [0.93023956]
epoch: 0, batch_id: 20, loss is: [0.69743234]
epoch: 0, batch_id: 30, loss is: [0.57077324]
[validation] accuracy/loss: 0.5900000333786011/0.6498130559921265
epoch: 1, batch_id: 0, loss is: [0.7095642]
epoch: 1, batch_id: 10, loss is: [0.59617066]
epoch: 1, batch_id: 20, loss is: [0.57464474]
epoch: 1, batch_id: 30, loss is: [0.50502384]
[validation] accuracy/loss: 0.7350000143051147/0.5207179188728333
epoch: 2, batch_id: 0, loss is: [0.5213567]
epoch: 2, batch_id: 10, loss is: [0.49083018]
epoch: 2, batch_id: 20, loss is: [0.5481979]
epoch: 2, batch_id: 30, loss is: [0.40814772]
[validation] accuracy/loss: 0.857499897480011/0.45265498757362366
epoch: 3, batch_id: 0, loss is: [0.53391224]
epoch: 3, batch_id: 10, loss is: [0.5812551]
epoch: 3, batch_id: 20, loss is: [0.46119028]
epoch: 3, batch_id: 30, loss is: [0.32463118]
[validation] accuracy/loss: 0.949999988079071/0.304992139339447
epoch: 4, batch_id: 0, loss is: [0.3884324]
epoch: 4, batch_id: 10, loss is: [0.7446553]
epoch: 4, batch_id: 20, loss is: [0.31812298]
epoch: 4, batch_id: 30, loss is: [0.5365273]
[validation] accuracy/loss: 0.92249995470047/0.308567076921463

If the three outputs are instead weight-summed directly inside forward, the best accuracy is only 90%. Taking a separate loss for each of the three outputs is necessary

aaaLKgo
#890 · replied 2020-01

Homework 9-2:

1. Code:

import numpy as np
import paddle
import paddle.fluid as fluid
from paddle.fluid.layer_helper import LayerHelper
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, FC
from paddle.fluid.dygraph.base import to_variable

import os
import random

DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'
DATADIR2 = '/home/aistudio/work/palm/PALM-Validation400'
CSVFILE = '/home/aistudio/work/palm/PALM-Validation-GT/labels.csv'

# Define the Inception block
class Inception(fluid.dygraph.Layer):
    def __init__(self, name_scope, c1, c2, c3, c4, **kwargs):
        '''
        Implementation of the Inception module.
        name_scope, module name, string
        c1, output channels of the 1x1 conv on branch 1, int
        c2, output channels of the convs on branch 2, tuple or list,
               where c2[0] is for the 1x1 conv and c2[1] for the 3x3 conv
        c3, output channels of the convs on branch 3, tuple or list,
               where c3[0] is for the 1x1 conv and c3[1] for the 5x5 conv
        c4, output channels of the 1x1 conv on branch 4, int
        '''
        super(Inception, self).__init__(name_scope)
        # Create the operations used on each branch of the Inception block
        self.p1_1 = Conv2D(self.full_name(), num_filters=c1, 
                           filter_size=1, act='relu')
        self.p2_1 = Conv2D(self.full_name(), num_filters=c2[0], 
                           filter_size=1, act='relu')
        self.p2_2 = Conv2D(self.full_name(), num_filters=c2[1], 
                           filter_size=3, padding=1, act='relu')
        self.p3_1 = Conv2D(self.full_name(), num_filters=c3[0], 
                           filter_size=1, act='relu')
        self.p3_2 = Conv2D(self.full_name(), num_filters=c3[1], 
                           filter_size=5, padding=2, act='relu')
        self.p4_1 = Pool2D(self.full_name(), pool_size=3, 
                           pool_stride=1,  pool_padding=1, 
                           pool_type='max')
        self.p4_2 = Conv2D(self.full_name(), num_filters=c4, 
                           filter_size=1, act='relu')

    def forward(self, x):
        # Branch 1: a single 1x1 conv
        p1 = self.p1_1(x)
        # Branch 2: 1x1 conv + 3x3 conv
        p2 = self.p2_2(self.p2_1(x))
        # Branch 3: 1x1 conv + 5x5 conv
        p3 = self.p3_2(self.p3_1(x))
        # Branch 4: max pooling + 1x1 conv
        p4 = self.p4_2(self.p4_1(x))
        # Concatenate the branch outputs' feature maps as the final result
        return fluid.layers.concat([p1, p2, p3, p4], axis=1)  
    
class GoogLeNetFull(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(GoogLeNetFull, self).__init__(name_scope)
        # GoogLeNet has five modules, each followed by a pooling layer
        # The first module contains 1 conv layer
        self.conv1 = Conv2D(self.full_name(), num_filters=64, filter_size=7, 
                            padding=3, act='relu')
        # 3x3 max pooling
        self.pool1 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # The second module contains 2 conv layers
        self.conv2_1 = Conv2D(self.full_name(), num_filters=64, 
                              filter_size=1, act='relu')
        self.conv2_2 = Conv2D(self.full_name(), num_filters=192, 
                              filter_size=3, padding=1, act='relu')
        # 3x3 max pooling
        self.pool2 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # The third module contains 2 Inception blocks
        self.block3_1 = Inception(self.full_name(), 64, (96, 128), (16, 32), 32)
        self.block3_2 = Inception(self.full_name(), 128, (128, 192), (32, 96), 64)
        # 3x3 max pooling
        self.pool3 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                               pool_padding=1, pool_type='max')
        # The fourth module contains 5 Inception blocks
        self.block4_1 = Inception(self.full_name(), 192, (96, 208), (16, 48), 64)
        # self.out1_pool = Pool2D(self.full_name(), pool_size=5, pool_stride=3,  
        #                     pool_padding=1, pool_type='avg')   # 14 * 14
        self.out1_pool = Pool2D(self.full_name(), pool_stride=1, 
                               global_pooling=True, pool_type='avg')
        self.out1_fc1 = FC(self.full_name(), size=128, act='relu')
        self.out1_fc2 = FC(self.full_name(), size=1024, act='relu')
        self.out1_dropout_ratio = 0.7
        self.out1_fc3 = FC(self.full_name(), size=1)
        
        self.block4_2 = Inception(self.full_name(), 160, (112, 224), (24, 64), 64)
        self.block4_3 = Inception(self.full_name(), 128, (128, 256), (24, 64), 64)
        self.block4_4 = Inception(self.full_name(), 112, (144, 288), (32, 64), 64)
        # self.out2_pool = Pool2D(self.full_name(), pool_size=5, pool_stride=3,  
        #                     pool_padding=1, pool_type='avg')   # 14 * 14
        self.out2_pool = Pool2D(self.full_name(), pool_stride=1, 
                               global_pooling=True, pool_type='avg')
        self.out2_fc1 = FC(self.full_name(), size=128, act='relu')
        self.out2_fc2 = FC(self.full_name(), size=1024, act='relu')
        self.out2_dropout_ratio = 0.7
        self.out2_fc3 = FC(self.full_name(), size=1)
        
        self.block4_5 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        # 3x3 max pooling
        self.pool4 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                               pool_padding=1, pool_type='max')
        # The fifth module contains 2 Inception blocks
        self.block5_1 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        self.block5_2 = Inception(self.full_name(), 384, (192, 384), (48, 128), 128)
        # Global pooling: with global_pooling=True, pool_stride has no effect
        self.pool5 = Pool2D(self.full_name(), pool_stride=1, 
                               global_pooling=True, pool_type='avg')
        self.out3_dropout_ratio = 0.7
        self.fc = FC(self.full_name(),  size=1)

    def forward(self, x):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))
        branch1_input = self.block4_1(x)
        branch1 = self.out1_fc2(self.out1_fc1(self.out1_pool(branch1_input)))
        branch1 = fluid.layers.dropout(branch1, self.out1_dropout_ratio)
        branch1 = self.out1_fc3(branch1)
        
        x = self.block4_3(self.block4_2(branch1_input))
        branch2_input = self.block4_4(x)
        branch2 = self.out2_fc2(self.out2_fc1(self.out2_pool(branch2_input)))
        branch2 = fluid.layers.dropout(branch2, self.out2_dropout_ratio)
        branch2 = self.out2_fc3(branch2)
        
        x = self.pool4(self.block4_5(branch2_input))
        x = self.pool5(self.block5_2(self.block5_1(x)))
        x = fluid.layers.dropout(x, self.out3_dropout_ratio)
        branch3 = self.fc(x)
        return branch1, branch2, branch3

# Training procedure
def train_googlenet(model):
    with fluid.dygraph.guard():
        print('start training ... ')
        model.train()
        epoch_num = 5
        # Define the optimizer
        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
        # Define the data loaders for training and validation
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get predictions
                b1, b2, b3 = model(img)
                # Compute the losses
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(b1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(b2, label)
                loss3 = fluid.layers.sigmoid_cross_entropy_with_logits(b3, label)
                loss = 0.2 * loss1 + 0.2 * loss2 + 0.6 * loss3
                avg_loss = fluid.layers.mean(loss)

                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                # Backprop, update weights, clear gradients
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # 运行模型前向计算,得到预测值
                b1, b2, b3 = model(img)
                # Binary classification: threshold the sigmoid output at 0.5
                # Compute sigmoid probabilities, then the loss
                pred1 = fluid.layers.sigmoid(b1)
                pred2 = fluid.layers.sigmoid(b2)
                pred3 = fluid.layers.sigmoid(b3)
                pred = 0.2 * pred1 + 0.2 * pred2 + 0.6 * pred3
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(b1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(b2, label)
                loss3 = fluid.layers.sigmoid_cross_entropy_with_logits(b3, label)
                loss = 0.2 * loss1 + 0.2 * loss2 + 0.6 * loss3
                # Probability of the negative class
                pred4 = pred * (-1.0) + 1.0
                # Concatenate the two class probabilities along axis 1
                pred = fluid.layers.concat([pred4, pred], axis=1)
                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())
            print("[validation] accuracy/loss: {}/{}".format(np.mean(accuracies), np.mean(losses)))
            model.train()

        # save params of model
        fluid.save_dygraph(model.state_dict(), 'mnist')
        # save optimizer state
        fluid.save_dygraph(opt.state_dict(), 'mnist')


if __name__ == '__main__':
    # Create the model
    with fluid.dygraph.guard():
        model = GoogLeNetFull("GoogLeNetFull")

    train_googlenet(model)

 

2. Run results:

aaaLKgo
#891 · replied 2020-01

Homework 9-1:

Decaying at the 7th epoch (i.e., 280 batches) seems most suitable;

The figure below shows the per-epoch average training and validation losses before learning-rate decay. The training loss keeps decreasing, but the validation loss drops to a low around epochs 7-12 and then starts to fluctuate, showing a bit of overfitting, so decaying the learning rate at epoch 7 is worth considering;

# Define the optimizer
boundaries = [280]
values = [1e-3, 1e-4]
opt = fluid.optimizer.Momentum(
    learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries, values=values),
    momentum=0.9
)


The figure below shows the training and validation losses after the decay: the validation loss fluctuates much less and trends downward overall. So epoch 7 is chosen as the decay point

scy
#892 · replied 2020-01

Homework 8:

If the Sigmoid activations in LeNet's intermediate layers are replaced with ReLU, what results do you get on the fundus screening dataset? Does the loss still converge, and is the difference between ReLU and Sigmoid the cause of any difference in results? Share your view.

After switching to ReLU, the loss still converges, and converges faster. Looking at the two functions' properties: sigmoid has a sizable gradient only in a small range around zero, while over most of its domain the gradient is nearly 0, so backpropagation suffers from vanishing gradients. ReLU is different: its gradient is constantly 1 for inputs greater than 0, so no vanishing gradient occurs there.
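The vanishing-gradient point above can be checked numerically; a small standalone sketch (not part of the original reply):

```python
import math

def sigmoid_grad(x):
    # derivative of sigmoid: s(x) * (1 - s(x))
    s = 1.0 / (1.0 + math.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    # derivative of ReLU: 1 for x > 0, else 0
    return 1.0 if x > 0 else 0.0

for x in (0.0, 2.0, 5.0, 10.0):
    print(x, sigmoid_grad(x), relu_grad(x))
# sigmoid's gradient peaks at 0.25 (at x = 0) and is already ~4.5e-5 by
# x = 10, while ReLU's gradient stays 1 for any positive input.
```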

scy
#893 · replied 2020-01

Homework 9-2:

class GoogLeNet(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(GoogLeNet, self).__init__(name_scope)
        # GoogLeNet has five modules, each followed by a pooling layer
        # The first module contains 1 conv layer
        self.conv1 = Conv2D(self.full_name(), num_filters=64, filter_size=7, 
                            padding=3, act='relu')
        # 3x3 max pooling
        self.pool1 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # The second module contains 2 conv layers
        self.conv2_1 = Conv2D(self.full_name(), num_filters=64, 
                              filter_size=1, act='relu')
        self.conv2_2 = Conv2D(self.full_name(), num_filters=192, 
                              filter_size=3, padding=1, act='relu')
        # 3x3 max pooling
        self.pool2 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # The third module contains 2 Inception blocks
        self.block3_1 = Inception(self.full_name(), 64, (96, 128), (16, 32), 32)
        self.block3_2 = Inception(self.full_name(), 128, (128, 192), (32, 96), 64)
        # 3x3 max pooling
        self.pool3 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                               pool_padding=1, pool_type='max')
        # The fourth module contains 5 Inception blocks
        self.block4_1 = Inception(self.full_name(), 192, (96, 208), (16, 48), 64)
        # Auxiliary output 1
        self.out1_pool = Pool2D(self.full_name(),global_pooling=True, pool_type='avg')
        self.out1_fc1 = FC(self.full_name(), size=128, act='relu')
        self.out1_fc2 = FC(self.full_name(), size=1024, act='relu')
        self.out1_dropout_ratio = 0.7
        self.out1_fc3 = FC(self.full_name(), size=1)
        
        self.block4_2 = Inception(self.full_name(), 160, (112, 224), (24, 64), 64)
        self.block4_3 = Inception(self.full_name(), 128, (128, 256), (24, 64), 64)
        self.block4_4 = Inception(self.full_name(), 112, (144, 288), (32, 64), 64)
        
        # Auxiliary output 2
        self.out2_pool = Pool2D(self.full_name(),global_pooling=True, pool_type='avg')
        self.out2_fc1 = FC(self.full_name(), size=128, act='relu')
        self.out2_fc2 = FC(self.full_name(), size=1024, act='relu')
        self.out2_dropout_ratio = 0.7
        self.out2_fc3 = FC(self.full_name(), size=1)
        
        self.block4_5 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        # 3x3 max pooling
        self.pool4 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                               pool_padding=1, pool_type='max')

        self.block5_1 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        self.block5_2 = Inception(self.full_name(), 384, (192, 384), (48, 128), 128)

        self.pool5 = Pool2D(self.full_name(), pool_stride=1, 
                               global_pooling=True, pool_type='avg')
        self.fc = FC(self.full_name(),  size=1)

    def forward(self, x):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))

        x = self.block4_1(x)
        x1 = self.out1_fc2(self.out1_fc1(self.out1_pool(x)))
        x1 = fluid.layers.dropout(x1, self.out1_dropout_ratio)
        x1 = self.out1_fc3(x1)

        x = self.block4_3(self.block4_2(x))
        # Second auxiliary prediction
        x = self.block4_4(x)
        x2 = self.out2_fc2(self.out2_fc1(self.out2_pool(x)))
        x2 = fluid.layers.dropout(x2, self.out2_dropout_ratio)
        x2 = self.out2_fc3(x2)

        x = self.pool4(self.block4_5(x))
        x = self.pool5(self.block5_2(self.block5_1(x)))
        x = self.fc(x)
        return x1, x2, x
def trainInception(model):
    with fluid.dygraph.guard():
        model.train()
        epoch_num = 15
        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                out1, out2, out3 = model(img)
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(out1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(out2, label)
                loss3 = fluid.layers.sigmoid_cross_entropy_with_logits(out3, label)
                loss = 0.2 * loss1 + 0.2 * loss2 + 0.6 * loss3
                avg_loss = fluid.layers.mean(loss)

                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

scy
#894 · replied 2020-01

Homework 9-1:

# Piecewise learning-rate decay with decay factor 0.1
def getDecyOptimizer(iters = 10,epoches=[20,35,45],lr=0.01):
    boundaries = [iters * e for e in epoches]
    values = [lr * (0.1 ** j) for j in range(len(boundaries)+1)]
    print(boundaries,values)
    opt = fluid.optimizer.Momentum(
                    momentum=0.9,
                    learning_rate=fluid.layers.piecewise_decay(boundaries,values),
                    regularization= fluid.regularizer.L2Decay(0.0001)
        )
    return opt
getDecyOptimizer()

Testing shows that after roughly 400 steps the loss settles into stable fluctuation

wangyf童鞋
#895 · replied 2020-01

Homework 9-1:

First set epoch to 100 and plot the training loss:

The figure shows the loss reaching its minimum around step 1000, so the learning rate is reduced starting at step 1000; the code is as follows

boundaries = [1000, 2000]
values = [0.001, 0.0001, 0.00001]
opt = fluid.optimizer.Momentum(
        momentum=0.9,
        learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries, 
        values=values),
        regularization=fluid.regularizer.L2Decay(1e-4))

Training again gives this loss curve:

The change is not obvious, presumably because the initial learning rate is too small; starting at 0.1 and then stepping down to 0.01 and 0.001 is recommended
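The suggested schedule is a step-function lookup over training steps; here is a plain-Python sketch of what `piecewise_decay` computes (the boundaries and values follow the suggestion above and are assumptions, not tuned results):

```python
def piecewise_lr(step, boundaries, values):
    # Return values[i] for the first boundary the step has not yet reached;
    # after the last boundary, return the final value.
    for b, v in zip(boundaries, values):
        if step < b:
            return v
    return values[-1]

boundaries = [1000, 2000]    # decay points, in training steps
values = [0.1, 0.01, 0.001]  # start at 0.1, divide by 10 at each boundary

print(piecewise_lr(0, boundaries, values))     # 0.1
print(piecewise_lr(1500, boundaries, values))  # 0.01
print(piecewise_lr(2500, boundaries, values))  # 0.001
```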

lcl050024
#896 · replied 2020-01

Homework 7-1: Count the total number of multiply and add operations in a convolution

The input shape is [10, 3, 224, 224]; the kernel has kh = kw = 3; there are 64 output channels; stride = 1; padding ph = pw = 1

How many multiply and add operations does this convolution take in total?

Hint: first work out how many multiply and add operations one output pixel takes, then compute the total number of operations

Multiplications: ((224+2-3)/1+1)^2 * 10 * 3 * 64 * 9 = 867041280

Additions: ((224+2-3)/1+1)^2 * 10 * 3 * 64 * (8 + 1) = 867041280

 

wangyf童鞋
#897 · replied 2020-01

Homework 9-2:

Code:

import cv2
import random
import numpy as np

# Preprocess the input image data
def transform_img(img):
    # Resize the image to 224x224
    img = cv2.resize(img, (224, 224))
    # The image is read in [H, W, C] layout
    # Transpose it to [C, H, W]
    img = np.transpose(img, (2,0,1))
    img = img.astype('float32')
    # Scale pixel values to [-1.0, 1.0]
    img = img / 255.
    img = img * 2.0 - 1.0
    return img

# Training-set data loader
def data_loader(datadir, batch_size=10, mode = 'train'):
    # List the files under datadir; every file will be read
    filenames = os.listdir(datadir)
    def reader():
        if mode == 'train':
            # Shuffle the sample order during training
            random.shuffle(filenames)
        batch_imgs = []
        batch_labels = []
        for name in filenames:
            filepath = os.path.join(datadir, name)
            img = cv2.imread(filepath)
            img = transform_img(img)
            if name[0] == 'H' or name[0] == 'N':
                # Filenames starting with H (high myopia) or N (normal vision)
                # are non-pathological, i.e. negative samples, label 0
                label = 0
            elif name[0] == 'P':
                # Filenames starting with P are pathological myopia: positive samples, label 1
                label = 1
            else:
                raise ValueError('Unexpected file name')
            # Append each sample to the batch lists
            batch_imgs.append(img)
            batch_labels.append(label)
            if len(batch_imgs) == batch_size:
                # 当数据列表的长度等于batch_size的时候,
                # 把这些数据当作一个mini-batch,并作为数据生成器的一个输出
                imgs_array = np.array(batch_imgs).astype('float32')
                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
                yield imgs_array, labels_array
                batch_imgs = []
                batch_labels = []

        if len(batch_imgs) > 0:
            # 剩余样本数目不足一个batch_size的数据,一起打包成一个mini-batch
            imgs_array = np.array(batch_imgs).astype('float32')
            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
            yield imgs_array, labels_array

    return reader

# Define the validation-set data loader
def valid_data_loader(datadir, csvfile, batch_size=10, mode='valid'):
    # The training loader infers labels from filenames; the validation loader
    # reads each image's label from csvfile instead.
    # Inspect the unzipped validation labels to see what csvfile contains.
    # Its format is as follows, one sample per line:
    # column 1 is the image id, column 2 the filename, column 3 the label;
    # columns 4 and 5 are the Fovea coordinates, irrelevant to classification
    # ID,imgName,Label,Fovea_X,Fovea_Y
    # 1,V0001.jpg,0,1157.74,1019.87
    # 2,V0002.jpg,1,1285.82,1080.47
    # Open the csvfile containing the validation labels and read its contents
    filelists = open(csvfile).readlines()
    def reader():
        batch_imgs = []
        batch_labels = []
        for line in filelists[1:]:
            line = line.strip().split(',')
            name = line[1]
            label = int(line[2])
            # Load the image by filename and preprocess it
            filepath = os.path.join(datadir, name)
            img = cv2.imread(filepath)
            img = transform_img(img)
            # Append each sample that has been read to the batch lists
            batch_imgs.append(img)
            batch_labels.append(label)
            if len(batch_imgs) == batch_size:
                # When the lists reach batch_size, yield the data
                # as one mini-batch, one output of the generator
                imgs_array = np.array(batch_imgs).astype('float32')
                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
                yield imgs_array, labels_array
                batch_imgs = []
                batch_labels = []

        if len(batch_imgs) > 0:
            # Pack the leftover samples (fewer than batch_size) into a final mini-batch
            imgs_array = np.array(batch_imgs).astype('float32')
            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
            yield imgs_array, labels_array

    return reader
    
# Start training
import os
import random
import paddle
import paddle.fluid as fluid
import numpy as np

DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'
DATADIR2 = '/home/aistudio/work/palm/PALM-Validation400'
CSVFILE = '/home/aistudio/work/palm/PALM-Validation-GT/labels.csv'

# Define the training loop
def train(model):
    with fluid.dygraph.guard():
        print('start training ... ')
        model.train()
        epoch_num = 5
        # Define the optimizer
        opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
        # Create the training and validation data loaders
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get the predictions of the three heads
                logits1, logits2, logits3 = model(img)
                # Compute the loss for each head
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(logits2, label)
                loss3 = fluid.layers.sigmoid_cross_entropy_with_logits(logits3, label)
                loss = 0.2*loss1 + 0.2*loss2 + 0.6*loss3
                avg_loss = fluid.layers.mean(loss)

                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                # Backpropagate, update the weights, clear the gradients
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get predictions
                logits1, logits2, logits3 = model(img)
                logits = 0.2*logits1 + 0.2*logits2 + 0.6*logits3
                # Binary classification: threshold the sigmoid output at 0.5
                # Compute the sigmoid probability and the loss
                pred = fluid.layers.sigmoid(logits)
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                # Probability of the negative class (1 - p)
                pred2 = pred * (-1.0) + 1.0
                # Concatenate the two class probabilities along axis 1
                pred = fluid.layers.concat([pred2, pred], axis=1)
                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())
            print("[validation] accuracy/loss: {}/{}".format(np.mean(accuracies), np.mean(losses)))
            model.train()

        # save params of model
        fluid.save_dygraph(model.state_dict(), 'googlenet')
        # save optimizer state
        fluid.save_dygraph(opt.state_dict(), 'googlenet')

# GoogLeNet model code
import numpy as np
import paddle
import paddle.fluid as fluid
from paddle.fluid.layer_helper import LayerHelper
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, FC
from paddle.fluid.dygraph.base import to_variable

# Define the Inception block
class Inception(fluid.dygraph.Layer):
    def __init__(self, name_scope, c1, c2, c3, c4, **kwargs):
        '''
        Implementation of the Inception block.
        name_scope, module name, a string
        c1,  output channels of the 1x1 conv on the first branch, an int
        c2,  output channels of the convs on the second branch, a tuple or list,
               where c2[0] is for the 1x1 conv and c2[1] for the 3x3 conv
        c3,  output channels of the convs on the third branch, a tuple or list,
               where c3[0] is for the 1x1 conv and c3[1] for the 5x5 conv
        c4,  output channels of the 1x1 conv on the fourth branch, an int
        '''
        super(Inception, self).__init__(name_scope)
        # Create the operations used on each branch of the Inception block
        self.p1_1 = Conv2D(self.full_name(), num_filters=c1, 
                           filter_size=1, act='relu')
        self.p2_1 = Conv2D(self.full_name(), num_filters=c2[0], 
                           filter_size=1, act='relu')
        self.p2_2 = Conv2D(self.full_name(), num_filters=c2[1], 
                           filter_size=3, padding=1, act='relu')
        self.p3_1 = Conv2D(self.full_name(), num_filters=c3[0], 
                           filter_size=1, act='relu')
        self.p3_2 = Conv2D(self.full_name(), num_filters=c3[1], 
                           filter_size=5, padding=2, act='relu')
        self.p4_1 = Pool2D(self.full_name(), pool_size=3, 
                           pool_stride=1,  pool_padding=1, 
                           pool_type='max')
        self.p4_2 = Conv2D(self.full_name(), num_filters=c4, 
                           filter_size=1, act='relu')

    def forward(self, x):
        # Branch 1: a single 1x1 convolution
        p1 = self.p1_1(x)
        # Branch 2: 1x1 conv + 3x3 conv
        p2 = self.p2_2(self.p2_1(x))
        # Branch 3: 1x1 conv + 5x5 conv
        p3 = self.p3_2(self.p3_1(x))
        # Branch 4: max pooling + 1x1 conv
        p4 = self.p4_2(self.p4_1(x))
        # Concatenate the feature maps of every branch as the final output
        return fluid.layers.concat([p1, p2, p3, p4], axis=1)  
    
class GoogLeNet(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(GoogLeNet, self).__init__(name_scope)
        # GoogLeNet has five modules, each followed by a pooling layer
        # Module 1: one convolutional layer
        self.conv1 = Conv2D(self.full_name(), num_filters=64, filter_size=7, 
                            padding=3, act='relu')
        # 3x3 max pooling
        self.pool1 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # Module 2: two convolutional layers
        self.conv2_1 = Conv2D(self.full_name(), num_filters=64, 
                              filter_size=1, act='relu')
        self.conv2_2 = Conv2D(self.full_name(), num_filters=192, 
                              filter_size=3, padding=1, act='relu')
        # 3x3 max pooling
        self.pool2 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # Module 3: two Inception blocks
        self.block3_1 = Inception(self.full_name(), 64, (96, 128), (16, 32), 32)
        self.block3_2 = Inception(self.full_name(), 128, (128, 192), (32, 96), 64)
        # 3x3 max pooling
        self.pool3 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # Module 4: five Inception blocks
        self.block4_1 = Inception(self.full_name(), 192, (96, 208), (16, 48), 64)
        self.block4_2 = Inception(self.full_name(), 160, (112, 224), (24, 64), 64)
        self.block4_3 = Inception(self.full_name(), 128, (128, 256), (24, 64), 64)
        self.block4_4 = Inception(self.full_name(), 112, (144, 288), (32, 64), 64)
        self.block4_5 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        # 3x3 max pooling
        self.pool4 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,  
                            pool_padding=1, pool_type='max')
        # Module 5: two Inception blocks
        self.block5_1 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        self.block5_2 = Inception(self.full_name(), 384, (192, 384), (48, 128), 128)
        # Global pooling: with global_pooling=True, pool_stride has no effect
        self.pool_avg = Pool2D(self.full_name(), pool_stride=1, 
                               global_pooling=True, pool_type='avg')
        self.fc11 = FC(self.full_name(),  size=128)
        self.fc12 = FC(self.full_name(),  size=1024)
        self.fc13 = FC(self.full_name(),  size=1)
        self.fc21 = FC(self.full_name(),  size=128)
        self.fc22 = FC(self.full_name(),  size=1024)
        self.fc23 = FC(self.full_name(),  size=1)
        self.fc33 = FC(self.full_name(),  size=1)
        self.drop_ratio = 0.7

    def forward(self, x):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))
        x = self.block4_1(x)
        out1 = x
        out1 = self.fc12(self.fc11(self.pool_avg(out1)))
        out1 = fluid.layers.dropout(out1, self.drop_ratio)
        out1 = self.fc13(out1)
        x = self.block4_4(self.block4_3(self.block4_2(x)))
        out2 = x
        out2 = self.fc22(self.fc21(self.pool_avg(out2)))
        out2 = fluid.layers.dropout(out2, self.drop_ratio)
        out2 = self.fc23(out2)
        x = self.pool4(self.block4_5(x))
        out3 = x
        out3 = self.pool_avg(self.block5_2(self.block5_1(out3)))
        out3 = fluid.layers.dropout(out3, self.drop_ratio)
        out3 = self.fc33(out3)
        # x = 0.2 * out1 + 0.2 * out2 + 0.6 * out3
        return out1, out2, out3
       
with fluid.dygraph.guard():
    model = GoogLeNet("GoogLeNet")

train(model)

Run results:

lcl050024 · #898 · Replied 2020-01

Homework 8: If the Sigmoid activations in LeNet's intermediate layers are replaced with ReLU, what happens on the fundus screening dataset? Does the loss converge, and is the difference between ReLU and Sigmoid the cause of the different results?

Sigmoid:

ReLU:

Merely changing the activation between the convolutional layers to ReLU makes the loss drop markedly and the accuracy improve substantially; the model actually becomes useful for recognition. The likely reason is that ReLU has a constant gradient over its positive range, so gradient information propagates backward without attenuation. Sigmoid, by contrast, attenuates the gradient noticeably, which causes vanishing gradients on larger, more complex images: useful information can hardly be backpropagated, so the weight updates bring little improvement.
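The gradient argument can be illustrated numerically. A small sketch (the helper names are mine; the 10-layer product is purely illustrative): Sigmoid's derivative peaks at 0.25, so even in the best case it shrinks the gradient fourfold per layer, while ReLU's derivative is exactly 1 over its active region.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # peaks at 0.25 when x == 0

def relu_grad(x):
    return 1.0 if x > 0 else 0.0   # constant 1 over the active region

# Even at its best, Sigmoid shrinks the gradient by 4x per layer;
# through 10 layers the signal is at most 0.25**10.
print(sigmoid_grad(0.0))   # 0.25
print(0.25 ** 10)          # ~9.5e-07
print(relu_grad(3.0))      # 1.0
```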

呵呵xyz1 · #899 · Replied 2020-01

Homework 9-1:

Using piecewise decay with a decay factor of 0.1, based on ResNet's current training behavior:

batch_size = 10

epoch_num = 5
# Define the optimizer
# opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
boundaries = [160, 180]
values = [0.001, 0.0001, 0.0001]
opt = fluid.optimizer.Momentum(learning_rate=fluid.layers.piecewise_decay(boundaries, values), momentum=0.9)

start training ...
epoch: 0, batch_id: 0, loss is: [0.6900862]
epoch: 0, batch_id: 10, loss is: [0.7834173]
epoch: 0, batch_id: 20, loss is: [0.6089634]
epoch: 0, batch_id: 30, loss is: [0.44614416]
[validation] accuracy/loss: 0.7675000429153442/0.4995689392089844
epoch: 1, batch_id: 0, loss is: [0.62597483]
epoch: 1, batch_id: 10, loss is: [0.37226442]
epoch: 1, batch_id: 20, loss is: [0.56305426]
epoch: 1, batch_id: 30, loss is: [0.5852512]
[validation] accuracy/loss: 0.8700000643730164/0.31707146763801575
epoch: 2, batch_id: 0, loss is: [0.34716362]
epoch: 2, batch_id: 10, loss is: [0.29987663]
epoch: 2, batch_id: 20, loss is: [0.40632883]
epoch: 2, batch_id: 30, loss is: [0.23577423]
[validation] accuracy/loss: 0.8725000619888306/0.3043900728225708
epoch: 3, batch_id: 0, loss is: [0.6481099]
epoch: 3, batch_id: 10, loss is: [0.3430357]
epoch: 3, batch_id: 20, loss is: [0.5004776]
epoch: 3, batch_id: 30, loss is: [0.46621162]
[validation] accuracy/loss: 0.925000011920929/0.22478042542934418
epoch: 4, batch_id: 0, loss is: [0.05665208]
epoch: 4, batch_id: 10, loss is: [0.08026433]
epoch: 4, batch_id: 20, loss is: [0.10686707]
epoch: 4, batch_id: 30, loss is: [0.0467202]
[validation] accuracy/loss: 0.9524999856948853/0.17152376472949982

呵呵xyz1 · #900 · Replied 2020-01

Homework 9-2:

# Import the required packages
import paddle
import paddle.fluid as fluid
import numpy as np
import os
import random
import cv2

from paddle.fluid.layer_helper import LayerHelper
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, FC
from paddle.fluid.dygraph.base import to_variable
# from paddle.fluid.dygraph.nn import Conv2D, Pool2D, FC
pathdest = 'data/'

DATADIR = os.path.join(pathdest, 'PALM-Training400')
DATADIR2 = os.path.join(pathdest, 'PALM-Validation400')
CSVFILE = os.path.join(pathdest, 'labels.csv')

 

# Preprocess the image data that has been read in
def transform_img(img):
    # Resize the image to 224x224
    img = cv2.resize(img, (224, 224))
    # The image is read in [H, W, C] format;
    # transpose it to [C, H, W]
    img = np.transpose(img, (2,0,1))
    img = img.astype('float32')
    # Scale the pixel values to the range [-1.0, 1.0]
    img = img / 255.
    img = img * 2.0 - 1.0
    return img

# Define the training-set data loader
def data_loader(datadir, batch_size=10, mode = 'train'):
    # List every file under datadir; each one will be read
    filenames = os.listdir(datadir)
    def reader():
        if mode == 'train':
            # Shuffle the data order during training
            random.shuffle(filenames)
        batch_imgs = []
        batch_labels = []
        for name in filenames:
            filepath = os.path.join(datadir, name)
            img = cv2.imread(filepath)
            img = transform_img(img)
            if name[0] == 'H' or name[0] == 'N':
                # 'H' filenames are high myopia, 'N' filenames are normal vision;
                # both are non-pathological, i.e. negative samples with label 0
                label = 0
            elif name[0] == 'P':
                # 'P' filenames are pathological myopia: positive samples, label 1
                label = 1
            else:
                raise ValueError('Unexpected file name')
            # Append each sample that has been read to the batch lists
            batch_imgs.append(img)
            batch_labels.append(label)
            if len(batch_imgs) == batch_size:
                # When the lists reach batch_size, yield the data
                # as one mini-batch, one output of the generator
                imgs_array = np.array(batch_imgs).astype('float32')
                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
                yield imgs_array, labels_array
                batch_imgs = []
                batch_labels = []

        if len(batch_imgs) > 0:
            # Pack the leftover samples (fewer than batch_size) into a final mini-batch
            imgs_array = np.array(batch_imgs).astype('float32')
            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
            yield imgs_array, labels_array

    return reader

# Define the validation-set data loader
def valid_data_loader(datadir, csvfile, batch_size=10, mode='valid'):
    # The training loader infers labels from filenames; the validation loader
    # reads each image's label from csvfile instead.
    # Inspect the unzipped validation labels to see what csvfile contains.
    # Its format is as follows, one sample per line:
    # column 1 is the image id, column 2 the filename, column 3 the label;
    # columns 4 and 5 are the Fovea coordinates, irrelevant to classification
    # ID,imgName,Label,Fovea_X,Fovea_Y
    # 1,V0001.jpg,0,1157.74,1019.87
    # 2,V0002.jpg,1,1285.82,1080.47
    # Open the csvfile containing the validation labels and read its contents
    filelists = open(csvfile).readlines()
    def reader():
        batch_imgs = []
        batch_labels = []
        for line in filelists[1:]:
            line = line.strip().split(',')
            name = line[1]
            label = int(line[2])
            # Load the image by filename and preprocess it
            filepath = os.path.join(datadir, name)
            img = cv2.imread(filepath)
            img = transform_img(img)
            # Append each sample that has been read to the batch lists
            batch_imgs.append(img)
            batch_labels.append(label)
            if len(batch_imgs) == batch_size:
                # When the lists reach batch_size, yield the data
                # as one mini-batch, one output of the generator
                imgs_array = np.array(batch_imgs).astype('float32')
                labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
                yield imgs_array, labels_array
                batch_imgs = []
                batch_labels = []

        if len(batch_imgs) > 0:
            # Pack the leftover samples (fewer than batch_size) into a final mini-batch
            imgs_array = np.array(batch_imgs).astype('float32')
            labels_array = np.array(batch_labels).astype('float32').reshape(-1, 1)
            yield imgs_array, labels_array

    return reader

# Define the Inception block
class Inception(fluid.dygraph.Layer):
    def __init__(self, name_scope, c1, c2, c3, c4, **kwargs):
        '''
        Implementation of the Inception block.
        name_scope, module name, a string
        c1, output channels of the 1x1 conv on the first branch, an int
        c2, output channels of the convs on the second branch, a tuple or list,
            where c2[0] is for the 1x1 conv and c2[1] for the 3x3 conv
        c3, output channels of the convs on the third branch, a tuple or list,
            where c3[0] is for the 1x1 conv and c3[1] for the 5x5 conv
        c4, output channels of the 1x1 conv on the fourth branch, an int
        '''
        super(Inception, self).__init__(name_scope)
        # Create the operations used on each branch of the Inception block
        self.p1_1 = Conv2D(self.full_name(), num_filters=c1,
                           filter_size=1, act='relu')
        self.p2_1 = Conv2D(self.full_name(), num_filters=c2[0],
                           filter_size=1, act='relu')
        self.p2_2 = Conv2D(self.full_name(), num_filters=c2[1],
                           filter_size=3, padding=1, act='relu')
        self.p3_1 = Conv2D(self.full_name(), num_filters=c3[0],
                           filter_size=1, act='relu')
        self.p3_2 = Conv2D(self.full_name(), num_filters=c3[1],
                           filter_size=5, padding=2, act='relu')
        self.p4_1 = Pool2D(self.full_name(), pool_size=3,
                           pool_stride=1, pool_padding=1,
                           pool_type='max')
        self.p4_2 = Conv2D(self.full_name(), num_filters=c4,
                           filter_size=1, act='relu')

    def forward(self, x):
        # Branch 1: a single 1x1 convolution
        p1 = self.p1_1(x)
        # Branch 2: 1x1 conv + 3x3 conv
        p2 = self.p2_2(self.p2_1(x))
        # Branch 3: 1x1 conv + 5x5 conv
        p3 = self.p3_2(self.p3_1(x))
        # Branch 4: max pooling + 1x1 conv
        p4 = self.p4_2(self.p4_1(x))
        # Concatenate the feature maps of every branch as the final output
        return fluid.layers.concat([p1, p2, p3, p4], axis=1)


class GoogLeNet(fluid.dygraph.Layer):
    def __init__(self, name_scope):
        super(GoogLeNet, self).__init__(name_scope)
        # GoogLeNet has five modules, each followed by a pooling layer
        # Module 1: one convolutional layer
        self.conv1 = Conv2D(self.full_name(), num_filters=64, filter_size=7,
                            padding=3, act='relu')
        # 3x3 max pooling
        self.pool1 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # Module 2: two convolutional layers
        self.conv2_1 = Conv2D(self.full_name(), num_filters=64,
                              filter_size=1, act='relu')
        self.conv2_2 = Conv2D(self.full_name(), num_filters=192,
                              filter_size=3, padding=1, act='relu')
        # 3x3 max pooling
        self.pool2 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # Module 3: two Inception blocks
        self.block3_1 = Inception(self.full_name(), 64, (96, 128), (16, 32), 32)
        self.block3_2 = Inception(self.full_name(), 128, (128, 192), (32, 96), 64)
        # 3x3 max pooling
        self.pool3 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # Module 4: five Inception blocks
        self.block4_1 = Inception(self.full_name(), 192, (96, 208), (16, 48), 64)
        self.block4_2 = Inception(self.full_name(), 160, (112, 224), (24, 64), 64)
        self.block4_3 = Inception(self.full_name(), 128, (128, 256), (24, 64), 64)
        self.block4_4 = Inception(self.full_name(), 112, (144, 288), (32, 64), 64)
        self.block4_5 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        # 3x3 max pooling
        self.pool4 = Pool2D(self.full_name(), pool_size=3, pool_stride=2,
                            pool_padding=1, pool_type='max')
        # Module 5: two Inception blocks
        self.block5_1 = Inception(self.full_name(), 256, (160, 320), (32, 128), 128)
        self.block5_2 = Inception(self.full_name(), 384, (192, 384), (48, 128), 128)
        # Global pooling: with global_pooling=True, pool_stride has no effect
        self.pool5 = Pool2D(self.full_name(), pool_stride=1,
                            global_pooling=True, pool_type='avg')
        self.fc = FC(self.full_name(), size=1)

        # First auxiliary classifier head
        self.pool4_01 = Pool2D(self.full_name(), pool_size=5, pool_stride=3,
                               pool_padding=1, pool_type='avg')
        self.fc401 = FC(self.full_name(), size=128)
        self.fc402 = FC(self.full_name(), size=1024)
        self.fc403 = FC(self.full_name(), size=1)

        # Second auxiliary classifier head
        self.pool5_01 = Pool2D(self.full_name(), pool_size=5, pool_stride=3,
                               pool_padding=1, pool_type='avg')
        self.fc501 = FC(self.full_name(), size=128)
        self.fc502 = FC(self.full_name(), size=1024)
        self.fc503 = FC(self.full_name(), size=1)

    def forward(self, x):
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2_2(self.conv2_1(x)))
        x = self.pool3(self.block3_2(self.block3_1(x)))
        soout1 = self.block4_1(x)
        x = self.block4_3(self.block4_2(soout1))

        soout2 = self.block4_4(x)

        x = self.pool4(self.block4_5(soout2))
        x = self.pool5(self.block5_2(self.block5_1(x)))
        x3 = self.fc(x)

        out1 = self.fc403(self.fc402(self.fc401(self.pool4_01(soout1))))
        out2 = self.fc503(self.fc502(self.fc501(self.pool5_01(soout2))))
        return x3, out2, out1


# Define the training loop
def traingooglenet(model):
    use_gpu = True

    place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()

    with fluid.dygraph.guard(place):
        print('start training ... ')
        model.train()
        epoch_num = 5
        # Define the optimizer
        # opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)
        boundaries = [160, 180]
        values = [0.001, 0.0001, 0.0001]
        opt = fluid.optimizer.Momentum(learning_rate=fluid.layers.piecewise_decay(boundaries, values), momentum=0.9)
        # optimizer = fluid.optimizer.AdamOptimizer(learning_rate=fluid.layers.piecewise_decay(boundaries, values),
        #                                           regularization=fluid.regularizer.L2Decay(regularization_coeff=0.005))
        # opt = fluid.optimizer.AdamOptimizer(0.001)

        # Create the training and validation data loaders
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get the main and auxiliary logits
                logits, logits1, logits2 = model(img)
                # Compute the loss for each head
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(logits2, label)
                avg_cost0 = fluid.layers.mean(x=loss)
                avg_cost1 = fluid.layers.mean(x=loss1)
                avg_cost2 = fluid.layers.mean(x=loss2)

                avg_loss = 0.6 * avg_cost0 + 0.2 * avg_cost1 + 0.2 * avg_cost2

                # avg_loss = fluid.layers.mean(loss)

                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                # Backpropagate, update the weights, clear the gradients
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Run the forward pass to get predictions
                logits, logits1, logits2 = model(img)
                # Compute the loss for each head
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                loss1 = fluid.layers.sigmoid_cross_entropy_with_logits(logits1, label)
                loss2 = fluid.layers.sigmoid_cross_entropy_with_logits(logits2, label)
                avg_cost0 = fluid.layers.mean(x=loss)
                avg_cost1 = fluid.layers.mean(x=loss1)
                avg_cost2 = fluid.layers.mean(x=loss2)

                loss = 0.6 * avg_cost0 + 0.2 * avg_cost1 + 0.2 * avg_cost2
                # Binary classification: threshold the sigmoid output at 0.5
                # Compute the sigmoid probability
                pred = fluid.layers.sigmoid(logits)
                # loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                # Probability of the negative class (1 - p)
                pred2 = pred * (-1.0) + 1.0
                # Concatenate the two class probabilities along axis 1
                pred = fluid.layers.concat([pred2, pred], axis=1)
                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())
            print("[validation] accuracy/loss: {}/{}".format(np.mean(accuracies), np.mean(losses)))
            model.train()

        # save params of model
        fluid.save_dygraph(model.state_dict(), 'mnist')
        # save optimizer state
        fluid.save_dygraph(opt.state_dict(), 'mnist')


with fluid.dygraph.guard():
    model = GoogLeNet("GoogLeNet")

traingooglenet(model)

 

>>>


start training ...
W0113 22:37:14.678062 14085 device_context.cc:235] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 10.1, Runtime API Version: 10.0
W0113 22:37:14.680400 14085 device_context.cc:243] device: 0, cuDNN Version: 7.3.
W0113 22:37:14.680410 14085 device_context.cc:269] WARNING: device: 0. The installed Paddle is compiled with CUDNN 7.4, but CUDNN version in your machine is 7.3, which may cause serious incompatible bug. Please recompile or reinstall Paddle with compatible CUDNN version.
epoch: 0, batch_id: 0, loss is: [0.79400337]
epoch: 0, batch_id: 10, loss is: [0.77001095]
epoch: 0, batch_id: 20, loss is: [0.53227884]
epoch: 0, batch_id: 30, loss is: [0.55032647]
[validation] accuracy/loss: 0.5275000333786011/0.5415353775024414
epoch: 1, batch_id: 0, loss is: [0.6208921]
epoch: 1, batch_id: 10, loss is: [0.6914213]
epoch: 1, batch_id: 20, loss is: [0.36758906]
epoch: 1, batch_id: 30, loss is: [0.54296]
[validation] accuracy/loss: 0.9225000143051147/0.39544814825057983
epoch: 2, batch_id: 0, loss is: [0.50428385]
epoch: 2, batch_id: 10, loss is: [0.46039152]
epoch: 2, batch_id: 20, loss is: [0.1979063]
epoch: 2, batch_id: 30, loss is: [0.34022614]
[validation] accuracy/loss: 0.9125000238418579/0.286837637424469
epoch: 3, batch_id: 0, loss is: [0.35543692]
epoch: 3, batch_id: 10, loss is: [0.53375757]
epoch: 3, batch_id: 20, loss is: [0.4520824]
epoch: 3, batch_id: 30, loss is: [0.289438]
[validation] accuracy/loss: 0.9274999499320984/0.23497848212718964
epoch: 4, batch_id: 0, loss is: [0.23615289]
epoch: 4, batch_id: 10, loss is: [0.36257458]
epoch: 4, batch_id: 20, loss is: [0.27140918]
epoch: 4, batch_id: 30, loss is: [0.13654925]
[validation] accuracy/loss: 0.9624999761581421/0.19405212998390198

FrankFly · #901 · Replied 2020-01

Homework 9-1

Piecewise learning-rate decay is guided mainly by how the loss behaves. If the loss oscillates during training, falling and then rising again, the learning rate should be reduced.

The runs above show oscillation around batch_num = 80 and 160, so the schedule is set as follows:

boundaries = [80, 160]
values = [0.01, 0.001, 0.0005]
opt = fluid.optimizer.Momentum(learning_rate=fluid.layers.piecewise_decay(boundaries=boundaries, values=values), momentum=0.9)
# opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9)  # configuration before the piecewise schedule
After the adjustment:

The loss now decreases steadily.
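The rule of thumb above (put a decay boundary where the loss rebounds) can be sketched as a tiny helper. This is hypothetical code, not from the post; a real loss curve is noisy and should be smoothed (e.g. a moving average) before applying it:

```python
def rebound_steps(losses):
    """Return indices where the loss rose relative to the previous step,
    i.e. candidate boundaries for a piecewise learning-rate decay."""
    return [i for i in range(1, len(losses)) if losses[i] > losses[i - 1]]

# Toy curve: falls, rebounds at step 2, then falls again
print(rebound_steps([0.9, 0.7, 0.8, 0.5, 0.4]))  # [2]
```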
