Homework Thread | Baidu Deep Learning Bootcamp

The Baidu Deep Learning Bootcamp has officially opened. Each stage's homework comes with its own prizes. Welcome, and happy learning!

PS: If a post expires or fails review, first copy your content into a Word document, then follow the prompts to complete real-name verification; after refreshing, paste the copied content back in and submit.

Everyone is welcome to sign up!

Homework for January 9:

Homework 9-1: Chapter 2 covered how to set up learning-rate decay; here we recommend piecewise (step) decay with a decay factor of 0.1. Given how the ResNet training currently behaves, after how many training steps should the decay be applied? Configure the learning-rate decay scheme and retrain the ResNet model on the iChallenge-PM ocular disease dataset.
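For illustration, a minimal sketch of one way piecewise decay could be configured, assuming the fluid.dygraph.PiecewiseDecay scheduler of the Paddle Fluid version used in the course; the total step count, boundary steps, and learning-rate values are placeholders to be tuned against your actual ResNet training curve:

import paddle.fluid as fluid

total_steps = 1200                                                # hypothetical total number of training iterations
boundaries = [int(total_steps * 2 / 3), int(total_steps * 0.9)]   # placeholder decay points
values = [0.01, 0.001, 0.0001]                                    # base lr, multiplied by 0.1 at each boundary

with fluid.dygraph.guard():
    lr = fluid.dygraph.PiecewiseDecay(boundaries, values, begin=0)
    # pass the scheduler in place of a constant learning rate, e.g.:
    # opt = fluid.optimizer.Momentum(learning_rate=lr, momentum=0.9,
    #                                parameter_list=model.parameters())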

Homework 9-1 prize: 5 students will be drawn at random from the submissions to receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 9-1: XXX

Prize-draw deadline: before 12:00 noon on January 13, 2020

Homework 9-2 prize: 5 students will be drawn at random from the submissions to receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 9-2: XXX

Prize-draw deadline: before 12:00 noon on January 13, 2020

 

Homework for January 7:

Homework 8: If the Sigmoid activation in LeNet's intermediate layers is replaced with ReLU, what result do you get on the fundus screening dataset? Does the loss converge, and is the difference between ReLU and Sigmoid the cause of the different results? Share your view.

Homework 8 prize: 5 students will be drawn at random from the submissions to receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 8: XXX

Winners: #820 thunder95, #819 你还说不想我吗, #818 百度用户#0762194095, #817 呵赫 he, #816 星光1dl

Homework for January 2

Homework 7-1: Count how many multiplication and addition operations a convolution performs in total.

The input shape is [10, 3, 224, 224], the kernel size is kh = kw = 3, there are 64 output channels, the stride is 1, and the padding is ph = pw = 1.

How many multiplications and additions are needed in total to complete this convolution?

Hint: first work out how many multiplications and additions are needed to produce a single output pixel, then scale up to the total number of operations.
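For illustration, a small back-of-the-envelope check that follows the hint; the shapes come straight from the assignment, and the bias addition is counted the same way the answers later in this thread count it:

n, c_in, h, w = 10, 3, 224, 224
c_out, kh, kw = 64, 3, 3
out_h = out_w = 224                        # (224 - 3 + 2*1) / 1 + 1

mults_per_pixel = c_in * kh * kw           # 27 multiplications per output pixel
adds_per_pixel = (c_in * kh * kw - 1) + 1  # 26 to sum the products, plus 1 for the bias

total_pixels = n * c_out * out_h * out_w
print("multiplications:", mults_per_pixel * total_pixels)
print("additions:      ", adds_per_pixel * total_pixels)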

Submission: reply with the number of multiplications and additions, e.g., multiplications 1000, additions 1000.

Homework 7-1 prize: 5 winners will be drawn to receive a custom PaddlePaddle notebook + data cable. Deadline: before 12:00 noon on January 6, 2020.

Reply format:  Homework 7-1: XXX

Homework 7-2 prize: 5 students will be drawn from the correct answers to receive a custom PaddlePaddle notebook + a 50 RMB JD gift card. Deadline: before 12:00 noon on January 6, 2020.

 

Homework for December 31

Homework 6-1:

1. Print the output of every layer of the plain neural network model and inspect the contents.
2. Plot the classification-accuracy metric with the matplotlib (plt) library.
3. Using classification accuracy, judge how well models trained with different loss functions perform.
4. Plot and compare the model's loss curves on the training and test sets as training proceeds (see the sketch below).
5. Adjust the regularization weight, observe how the curves from item 4 change, and analyze why.
Homework 6-1 prize: 5 winners will be drawn to receive a custom PaddlePaddle notebook + data cable. Reply format:  Homework 6-1: XXX
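For item 4, a minimal sketch of one way to do the plotting: record the average train/test loss each epoch and draw both curves with matplotlib. The two lists are assumed to be filled in by whatever training and evaluation loop you already have:

import matplotlib.pyplot as plt

train_losses = []   # append the mean training loss after each epoch
test_losses = []    # append the mean test-set loss after each epoch

# ... your training loop fills the two lists here ...

epochs = range(len(train_losses))
plt.plot(epochs, train_losses, label='train loss')
plt.plot(epochs, test_losses, label='test loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()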

Homework 6-2:

Correctly run the minimal version of Homework 3 from the AI Studio course 《百度架构师手把手教深度学习》, analyze problems that may appear during training or aspects worth optimizing, and optimize along the following lines:

(1) Samples: data-augmentation methods

(2) Hypothesis: improve the network model

(3) Loss: try different loss functions

(4) Optimization: try different optimizers and learning rates

Goal: make the model's classification accuracy on the MNIST test set as high as possible.

Submit the code and model that achieve your best classification accuracy; we will shortlist the top 10 results for prizes.

Homework 6-2 prize: custom PaddlePaddle notebook + 50 RMB JD gift card

 

Homework for December 25

Homework for December 23

Homework 4-1: Run Homework 2 on AI Studio and build the house-price prediction model with deep learning.

Homework 4-1 prize: custom PaddlePaddle notebook + the textbook 《深度学习导论与应用实践》; prizes go to the submissions ranked 2nd, 3rd, 23rd, 123rd, 223rd, 323rd, and so on.

Homework 4-2: Answer the question below and post your answer as a reply to this thread:

Having written house-price prediction in different ways (plain Python vs. a deep learning framework), in what respects are the Python-written model and the PaddlePaddle-based model similar or different? For example: program structure, ease of writing, prediction quality, training time, and so on.

Reply format:  Homework 4-2: XXX

Homework 4-2 prize: from the submissions made before 12:00 noon on December 27 (this Friday), we will pick the top 5 and send out a Baidu custom data cable + the textbook 《深度学习导论与应用实践》.


Homework for December 17

Answer the two questions below and reply with your answers under this thread. Reply format: Homework 3-1 (1) XX (2) XX

Homework prize: among submissions made before 12:00 noon on December 20, 2019, 5 students will be drawn at random for feedback; the gift is a notebook + data cable.

Homework for December 12

Winner: 12th place: 飞天雄者

Homework for December 10
Homework 1-1: On the AI Studio platform, https://aistudio.baidu.com/aistudio/education/group/info/888, run the house-price prediction example end to end.

Homework 1-1 prize: the first 3 students to finish, plus those finishing 6th, 66th, 166th, 266th, 366th, 466th, 566th, and 666th, each receive a custom PaddlePaddle gift pack: a PaddlePaddle cap, a PaddlePaddle data cable, and a PaddlePaddle logo pen.

Homework 1-1 winners are shown in the image:

Homework 1-2: Answer the two questions below and post your answers under this thread.
① By analogy with the Newton's second law example, what other problems in your work or daily life could be solved within a supervised-learning framework? What are the hypothesis and the parameters? What is the optimization objective?
② Why do AI engineers have good career prospects? How would you explain it from an economics (market supply and demand) perspective?
Homework 1-2 prize: reply to the thread and rank in the top 5 by likes to receive the textbook 《深度学习导论与应用实践》 + a custom PaddlePaddle notebook.

Top 5 by likes: 1. 飞天雄者  2. God_s_apple  3. 177*******62  4. 学痞龙  5. 故乡237, qq526557820

Homework deadline: January 10, 2020. Only work finished before then is eligible for the final Mac grand-prize selection.

 

How to register:

1. Join QQ group 726887660; the class advisor shares study materials, answers questions, and runs prize activities there.

2. Click this link to enroll in the course and practice: https://aistudio.baidu.com/aistudio/course/introduce/888

Friendly reminder: recordings of the lectures are uploaded to the AI Studio course 《百度架构师手把手教深度学习》 within 3 business days.

 

All comments (953), in chronological order
lsvine_bai
#802 · replied 2020-01

Homework 8:

After replacing the Sigmoid activation in LeNet's intermediate layers with ReLU and training for 5 epochs on the fundus screening dataset, the results are as follows:

start training ...
epoch: 0, batch_id: 0, loss is: [0.57262313]
epoch: 0, batch_id: 10, loss is: [0.6918155]
epoch: 0, batch_id: 20, loss is: [0.6859298]
epoch: 0, batch_id: 30, loss is: [0.7123734]
[validation] accuracy/loss: 0.5275000333786011/0.6834651231765747
epoch: 1, batch_id: 0, loss is: [0.64026356]
epoch: 1, batch_id: 10, loss is: [0.66066235]
epoch: 1, batch_id: 20, loss is: [0.64182264]
epoch: 1, batch_id: 30, loss is: [0.60993457]
[validation] accuracy/loss: 0.5375000238418579/0.623694658279419
epoch: 2, batch_id: 0, loss is: [0.607789]
epoch: 2, batch_id: 10, loss is: [0.64166164]
epoch: 2, batch_id: 20, loss is: [0.6747283]
epoch: 2, batch_id: 30, loss is: [0.55050313]
[validation] accuracy/loss: 0.8075000643730164/0.4566318094730377
epoch: 3, batch_id: 0, loss is: [0.5051197]
epoch: 3, batch_id: 10, loss is: [0.4702305]
epoch: 3, batch_id: 20, loss is: [0.30006894]
epoch: 3, batch_id: 30, loss is: [0.304127]
[validation] accuracy/loss: 0.9325000643730164/0.43645384907722473
epoch: 4, batch_id: 0, loss is: [0.54525477]
epoch: 4, batch_id: 10, loss is: [0.24121094]
epoch: 4, batch_id: 20, loss is: [0.13882813]
epoch: 4, batch_id: 30, loss is: [0.37191218]
[validation] accuracy/loss: 0.9174998998641968/0.20673498511314392

The loss converges, which suggests that the difference between ReLU and Sigmoid is the cause of the different results.

windly4548
#803 · replied 2020-01

Homework 8: If the Sigmoid activation in LeNet's intermediate layers is replaced with ReLU, what result do we get on the fundus screening dataset? Does the loss converge, and is the difference between ReLU and Sigmoid the cause of the different results?

Test results:

start training ...
epoch: 0, batch_id: 0, loss is: [1.1550038]
epoch: 0, batch_id: 10, loss is: [0.55296606]
epoch: 0, batch_id: 20, loss is: [0.2915718]
epoch: 0, batch_id: 30, loss is: [0.3046143]
[validation] accuracy/loss: 0.9174998998641968/0.2766391336917877
epoch: 1, batch_id: 0, loss is: [0.3741205]
epoch: 1, batch_id: 10, loss is: [0.13736041]
epoch: 1, batch_id: 20, loss is: [0.2447811]
epoch: 1, batch_id: 30, loss is: [0.24415164]
[validation] accuracy/loss: 0.9175000190734863/0.25997745990753174
epoch: 2, batch_id: 0, loss is: [0.12214657]
epoch: 2, batch_id: 10, loss is: [0.31817046]
epoch: 2, batch_id: 20, loss is: [0.03187106]
epoch: 2, batch_id: 30, loss is: [0.1490582]
[validation] accuracy/loss: 0.9075000882148743/0.22624513506889343
epoch: 3, batch_id: 0, loss is: [0.36196473]
epoch: 3, batch_id: 10, loss is: [0.30938962]
epoch: 3, batch_id: 20, loss is: [0.10498185]
epoch: 3, batch_id: 30, loss is: [0.10940933]
[validation] accuracy/loss: 0.9049999117851257/0.22799579799175262
epoch: 4, batch_id: 0, loss is: [0.07662266]
epoch: 4, batch_id: 10, loss is: [0.2484332]
epoch: 4, batch_id: 20, loss is: [0.12090935]
epoch: 4, batch_id: 30, loss is: [0.20970507]
[validation] accuracy/loss: 0.9100000262260437/0.2502134442329407

The loss has converged, which shows that the ReLU activation works.

For inputs greater than 0 the ReLU gradient is a constant, so it does not suffer from vanishing gradients; in addition, the ReLU derivative is cheaper to compute, so gradient descent converges much faster than with Sigmoid.

The Sigmoid gradient is close to 0 in both the positive and negative saturation regions, which can lead to vanishing gradients.
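For illustration, a small numpy sketch of the two derivatives described above (the variable names here are ad-hoc): the sigmoid derivative peaks at 0.25 and vanishes for large |x|, while the ReLU derivative is exactly 1 for any positive input.

import numpy as np

x = np.array([-8.0, -2.0, 0.0, 2.0, 8.0])
sigmoid = 1.0 / (1.0 + np.exp(-x))
sigmoid_grad = sigmoid * (1.0 - sigmoid)    # at most 0.25, reached at x = 0
relu_grad = (x > 0).astype(np.float64)      # 1 for x > 0, otherwise 0

print("sigmoid grad:", np.round(sigmoid_grad, 4))
print("relu grad:   ", relu_grad)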

 

CHMBYS
#804 · replied 2020-01

Homework 3-1:

1. Computing the tanh activation with numpy:

import numpy as np
import matplotlib.pyplot as pl

# tanh(x) = (e^x - e^-x) / (e^x + e^-x); equivalent to np.tanh(x)
x = np.arange(-10, 10, 0.5)
y = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
pl.plot(x, y)
pl.text(-7.5, 0.25, "y=tanh(x)", fontsize=12)
pl.show()
Output:

 

2. Counting how many randomly generated elements are greater than zero:

import numpy as np
p = np.random.randn(10, 10)   # 10x10 standard-normal samples
p = p > 0                     # boolean mask of positive entries
p.sum()                       # number of elements greater than zero

傲骨heart
#805 · replied 2020-01

Homework 3-1

# tanh(x) activation function plot
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# set the figure size
plt.figure(figsize=(8, 3))

# x is a 1-D array of reals from -10 to 10, sampled every 0.1
x = np.arange(-10, 10, 0.1)
# compute the tanh function
s = (np.exp(x)-np.exp(-x))/(np.exp(x)+np.exp(-x))

# plot the curve
plt.plot(x, s, color='r')
# add a text label
plt.text(-5., 0.9, r'$y=\tanh(x)$', fontsize=13)

# format the axes
currentAxis = plt.gca()
currentAxis.xaxis.set_label_text('x', fontsize=15)
currentAxis.yaxis.set_label_text('y', fontsize=15)
plt.show()

2. Counting elements greater than 0

import numpy as np

p = np.random.randn(10, 10)
q = (p > 0)    # boolean mask of positive entries
q.sum()        # count of elements greater than zero

傲骨heart
#806 · replied 2020-01

Homework 4-2

Writing house-price prediction in different ways with plain Python and with a deep learning framework, the Python-written model and the PaddlePaddle-based model share the same overall program structure, but they differ considerably in difficulty:

A plain Python implementation has to handle low-level details and a lot of repetitive work, which is laborious; building on PaddlePaddle is comparatively simple. Regarding training time in particular, PaddlePaddle has surely been optimized and is bound to be faster than plain Python, and the development cycle for writing a training program is also much shorter than with plain Python.

cheeryoung79
#807 · replied 2020-01

The loss can converge, and the difference between ReLU and Sigmoid is the cause of the different results. The Sigmoid derivative is close to 0 in its saturation regions, and it is not as fast to compute as the ReLU derivative.

cheeryoung79
#808 · replied 2020-01
Homework 8-1:

The loss can converge, and the difference between ReLU and Sigmoid is the cause of the different results. The Sigmoid derivative is close to 0 in its saturation regions, and it is not as fast to compute as the ReLU derivative.

小公主mini516
#809 · replied 2020-01

Homework 6-1

1. Print the output of every layer of the plain neural network model and inspect the contents.

Network structure:

# convolution layer: 20 output channels, 5x5 kernel, stride 1, padding 2, ReLU activation
self.conv1 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')

# pooling layer: 2x2 max pooling with stride 2
self.pool1 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')

# convolution layer: 20 output channels, 5x5 kernel, stride 1, padding 2, ReLU activation
self.conv2 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')

# pooling layer: 2x2 max pooling with stride 2
self.pool2 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')

# fully connected layer: 10 output nodes, softmax activation
self.fc = FC(name_scope, size=10, act='softmax')
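For illustration, a hypothetical forward() matching the layers defined above, written against the same old fluid.dygraph API; shape printouts like the ones below could be produced along these lines:

def forward(self, inputs):
    x1 = self.conv1(inputs)
    print('conv1 output shape:', x1.shape)
    x2 = self.pool1(x1)
    print('pool1 output shape:', x2.shape)
    x3 = self.conv2(x2)
    print('conv2 output shape:', x3.shape)
    x4 = self.pool2(x3)
    print('pool2 output shape:', x4.shape)
    x5 = self.fc(x4)   # FC flattens the [N, 20, 7, 7] feature map internally
    print('fc output shape:', x5.shape)
    return x5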

Output of each layer:

########## print network layer's superparams ##############

conv1-- kernel_size:[20, 1, 5, 5], padding:[2, 2], stride:[1, 1]

conv2-- kernel_size:[20, 20, 5, 5], padding:[2, 2], stride:[1, 1]

pool1-- pool_type:max, pool_size:[2, 2], pool_stride:[2, 2]

pool2-- pool_type:max, poo2_size:[2, 2], pool_stride:[2, 2]

fc-- weight_size:[980, 10], bias_size_[10], activation:softmax

 

########## print shape of features of every layer ###############

inputs_shape: [100, 1, 28, 28]

outputs1_shape: [100, 20, 28, 28]

outputs2_shape: [100, 20, 14, 14]

outputs3_shape: [100, 20, 14, 14]

outputs4_shape: [100, 20, 7, 7]

outputs5_shape: [100, 10]

epoch: 0, batch: 400, loss is: [0.2531139], acc is [0.92]

 

########## print convolution layer's kernel ###############

conv1 params -- kernel weights: name tmp_7228, dtype: VarType.FP32 shape: [5, 5]       lod: {}

        dim: 5, 5

        layout: NCHW

        dtype: float

        data: [-0.242103 -0.132295 -0.229645 -0.164384 0.45169 -0.0892209 0.488976 -0.272486 0.219852 -0.159725 0.424371 -0.135378 -0.670147 0.355388 -0.197672 -0.330204 -0.148645 0.155917 0.162294 -0.0963658 -0.245995 -0.0880248 0.359809 0.0352617 -0.386941]

conv2 params -- kernel weights: name tmp_7230, dtype: VarType.FP32 shape: [5, 5]       lod: {}

        dim: 5, 5

        layout: NCHW

        dtype: float

        data: [0.0995129 0.0257107 -0.0292947 -0.0651389 -0.113342 -0.000224075 -0.0201527 0.0786458 -0.0611365 0.163853 0.120552 0.111357 0.101217 0.0723503 0.10999 -0.109705 -0.0831034 -0.0293732 0.0642289 -0.00365802 0.000582727 -0.0991549 -0.0122638 0.0422396 0.0500205]

 

 

The 0th channel of conv1 layer:  name tmp_7232, dtype: VarType.FP32 shape: [28, 28]    lod: {}

        dim: 28, 28

        layout: NCHW

        dtype: float

        data: [-0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0903191 -0.15557 -0.236196 -0.241746 -0.153966 -0.147906 0.093906 -0.00604814 -0.337423 -0.239869 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0616008 -0.256736 -0.328175 -0.136404 -0.0204647 -0.171478 -0.337289 -0.0999133 -0.329538 -0.76699 -0.558046 -0.035194 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0857847 -0.354695 -0.368451 -0.0420718 -0.147783 -0.417659 -0.811622 -1.0194 -0.458992 -0.615873 -0.301289 -0.228901 -0.175392 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.242152 -0.452071 -0.113826 0.16639 -0.340027 -0.835887 -1.06918 -0.83477 -0.097862 -0.135306 0.30009 -0.39236 -0.396772 -0.0457641 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.043463 -0.463133 -0.451555 0.378991 0.0152002 -0.629212 -0.602995 -0.539157 -0.733705 -0.755028 -0.177379 -0.0287547 -0.599919 -0.392293 -0.0944498 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.228589 -0.590313 0.0493152 0.621787 -0.331213 -0.617768 -0.420735 -0.297481 -0.335275 -0.854572 -0.228024 -0.374608 -0.547785 -0.292264 -0.0344047 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0661353 -0.447762 -0.348089 0.947329 0.091705 -1.00656 -0.4641 -0.0662305 -0.363908 -0.719332 -0.649174 -0.242592 -0.984078 -0.595517 -0.361504 -0.0456422 -0.0102102 
-0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0755287 -0.55735 -0.0482252 0.569437 -0.678433 -0.86284 -0.440819 -0.5121 -0.405874 -0.242355 -0.20497 -0.708446 -1.26395 -0.580121 -0.486626 -0.0809822 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0515786 -0.475985 0.197246 -0.142554 -0.650923 -0.342367 -0.631649 -0.824911 -0.776686 -0.383496 -0.571917 -0.710002 -0.892012 -0.55959 -0.425448 -0.0627096 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0595487 -0.0950973 0.179741 -0.758113 -0.618805 -0.427763 -1.04213 -0.750655 -0.775042 -0.561815 -0.558212 -0.178421 -0.991445 -0.586376 -0.314202 -0.031147 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.0338596 0.170076 0.0220658 -0.747149 -0.559367 -0.458857 -0.413136 -0.201429 -0.162049 -0.585085 -0.607436 -0.172475 -0.656015 -0.428367 -0.459291 -0.0815457 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.0497798 0.293661 0.0310032 -0.12294 -0.429241 -0.626325 -0.335812 -0.187951 -0.446221 -0.667218 -0.418401 -0.154567 -0.3729 -0.465341 -0.627795 -0.118058 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.227556 0.216743 0.121572 -0.169381 -0.22671 -0.256568 -0.11337 -0.39142 -0.578947 -0.287222 -0.135137 -0.827945 -0.725234 -0.409802 -0.0433821 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.0109627 0.306394 0.276402 0.0845731 -0.0776933 -0.201416 -0.35756 -0.492595 -0.750862 -0.353988 -0.21955 -1.07409 -0.587521 -0.227746 -0.00663609 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.0356645 0.10719 0.0517605 -0.00652822 -0.0693066 -0.233842 -0.353622 -0.412559 0.288775 -0.118183 -0.903326 -0.453154 -0.240989 -0.0146332 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.1801 -0.281056 0.415531 -0.232642 -0.647279 -0.417584 -0.375816 -0.0555574 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.264707 -0.214804 0.489314 -0.37767 -0.695725 -0.563552 -0.362165 -0.0357445 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.38624 -0.144962 0.479498 -0.447529 -0.783253 -0.495923 -0.268771 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.433113 0.0220413 0.210682 -0.520523 -0.66132 -0.495923 -0.268771 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 
-0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.342186 0.146357 -0.230388 -0.302184 -0.443808 -0.495923 -0.268771 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0273796 0.381766 -0.291374 -0.50855 -0.492933 -0.205704 -0.137126 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.211798 0.200349 -0.141189 -0.736831 -0.581221 0.200178 0.0395845 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 0.283118 0.215804 -0.044989 -0.441772 -0.131455 -0.146771 -0.18752 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102 -0.0102102]

 

The 14th channel of conv2 layer:  name tmp_7234, dtype: VarType.FP32 shape: [14, 14]   lod: {}

        dim: 14, 14

        layout: NCHW

        dtype: float

        data: [0.0535989 0.0788964 0.0481762 -0.0127427 -0.0875682 -0.0778051 -0.0748894 -0.00375219 0.0683495 -0.122627 -0.178587 -0.00947256 -0.00727456 -0.000654146 0.155406 0.192647 0.12837 -0.0477296 -0.418024 -0.720157 -0.451491 0.00461896 0.197716 -0.0400457 0.148728 0.317315 0.137825 0.100274 0.173004 0.181874 0.117849 -0.0949116 -0.284971 -0.0640672 0.41343 -0.346835 -0.435864 -0.149204 0.255333 0.506476 0.182654 0.153247 0.16075 0.160948 0.110517 -0.103814 0.553148 0.736652 -0.587417 -1.25845 -0.100226 -1.60417 -1.1819 0.404234 0.210705 0.153247 0.157998 0.0823684 -0.211109 -0.239255 -0.341456 -1.70982 -2.22241 -0.776431 1.07245 -0.0612871 -1.58713 0.0878981 0.23654 0.153247 0.138939 0.151584 -0.326141 -1.38154 -2.66952 -2.50329 -2.89946 -3.35658 -1.8098 0.645049 -0.867552 -0.0401451 0.186365 0.153247 0.116463 -0.185387 -0.748148 -0.988277 -0.0705949 0.321955 -1.81408 -1.82773 -2.69253 -2.13428 -1.46534 -0.29297 0.144464 0.153247 0.0757936 -0.411957 -0.831188 -0.612167 -1.47881 -0.486737 -0.924439 -0.507169 -0.883508 -1.47635 -1.68089 -0.369949 0.136414 0.153247 0.137335 -0.136505 -0.916821 -1.09942 -3.29964 -3.41553 -2.53949 -0.704842 -0.755689 0.396695 -1.03707 -0.354785 0.111083 0.153247 0.158532 0.201268 -0.21405 1.06557 1.55483 -0.74866 -3.18747 -1.81667 -1.54675 -0.270715 -1.03923 -0.278188 0.124056 0.153247 0.167661 0.213861 0.177261 0.362154 1.20194 0.759379 -0.95734 -0.0113857 -0.988643 -0.995747 -1.36422 -0.253504 0.139683 0.153247 0.173004 0.213014 0.246039 0.206465 0.0394087 0.249356 -0.60006 0.200255 -0.481392 -0.419555 -1.5203 -0.320032 0.128428 0.153247 0.151739 0.204765 0.247048 0.247048 -0.186418 -0.620704 -1.88352 -0.716227 -0.437663 0.968115 -0.167258 -0.172998 0.15147 0.120921 0.0663479 0.108882 0.164285 0.164285 -0.107008 -0.597684 -1.1169 -0.278523 -1.92087 0.186292 0.429009 -0.212157 0.114669 0.0875942]

 

The output of last layer: name tmp_7235, dtype: VarType.FP32 shape: [10]       lod: {}

        dim: 10

        layout: NCHW

        dtype: float

        data: [2.37351e-09 3.36688e-09 2.39315e-06 2.07316e-05 0.000129918 2.60889e-05 1.85672e-07 1.4381e-05 3.01259e-05 0.999776]

 

小公主mini516
#810 · replied 2020-01

Homework 6-1

2. Plot the classification-accuracy metric with the plt library. (The y-axis is accuracy; I labeled it incorrectly here.)

cross_entropy

3. Using classification accuracy, judge how well models trained with different loss functions perform.

cross_entropy compared with softmax_with_cross_entropy

 

 

skywalk163
#811 · replied 2020-01

Homework 7-1:

Multiplications: 867041280, additions: 867041280

 

Homework 7-2:

skywalk163
#812 · replied 2020-01

Homework 8: After replacing the Sigmoid activation in LeNet's intermediate layers with ReLU, the loss can converge on the fundus screening dataset, and the difference between ReLU and Sigmoid should be the cause of the different results. In my case the convergence is not particularly good.

start training ...
epoch: 0, batch_id: 0, loss is: [0.61188585]
epoch: 0, batch_id: 10, loss is: [0.8701159]
epoch: 0, batch_id: 20, loss is: [0.58843714]
epoch: 0, batch_id: 30, loss is: [0.18596354]
[validation] accuracy/loss: 0.9125000238418579/0.2410975843667984
epoch: 1, batch_id: 0, loss is: [0.14298041]
epoch: 1, batch_id: 10, loss is: [0.2700767]
epoch: 1, batch_id: 20, loss is: [0.22871165]
epoch: 1, batch_id: 30, loss is: [0.16175634]
[validation] accuracy/loss: 0.9125000238418579/0.21629394590854645
epoch: 2, batch_id: 0, loss is: [0.22155638]
epoch: 2, batch_id: 10, loss is: [0.47624618]
epoch: 2, batch_id: 20, loss is: [0.319356]
epoch: 2, batch_id: 30, loss is: [0.12867372]
[validation] accuracy/loss: 0.9149999618530273/0.22166669368743896
epoch: 3, batch_id: 0, loss is: [0.19493543]
epoch: 3, batch_id: 10, loss is: [0.21872854]
epoch: 3, batch_id: 20, loss is: [0.26570597]
epoch: 3, batch_id: 30, loss is: [0.1590705]
[validation] accuracy/loss: 0.9375/0.1794666349887848
epoch: 4, batch_id: 0, loss is: [0.18692257]
epoch: 4, batch_id: 10, loss is: [0.11870432]
epoch: 4, batch_id: 20, loss is: [0.09862967]
epoch: 4, batch_id: 30, loss is: [0.15813789]
[validation] accuracy/loss: 0.9225000143051147/0.19724367558956146
epoch: 5, batch_id: 0, loss is: [0.26366368]
epoch: 5, batch_id: 10, loss is: [0.08715878]
epoch: 5, batch_id: 20, loss is: [0.06083786]
epoch: 5, batch_id: 30, loss is: [0.01232141]
[validation] accuracy/loss: 0.9375/0.16793064773082733
epoch: 6, batch_id: 0, loss is: [0.04683672]
epoch: 6, batch_id: 10, loss is: [0.0086766]
epoch: 6, batch_id: 20, loss is: [0.01587074]
epoch: 6, batch_id: 30, loss is: [0.2986502]
[validation] accuracy/loss: 0.9099999666213989/0.19084174931049347
epoch: 7, batch_id: 0, loss is: [0.03153861]
epoch: 7, batch_id: 10, loss is: [0.12148639]
epoch: 7, batch_id: 20, loss is: [0.07664253]
epoch: 7, batch_id: 30, loss is: [0.00180655]
[validation] accuracy/loss: 0.9375/0.16267727315425873
epoch: 8, batch_id: 0, loss is: [0.03384463]
epoch: 8, batch_id: 10, loss is: [0.04817336]
epoch: 8, batch_id: 20, loss is: [0.0524444]
epoch: 8, batch_id: 30, loss is: [0.03821645]
[validation] accuracy/loss: 0.9524999856948853/0.15133006870746613
epoch: 9, batch_id: 0, loss is: [0.00356278]
epoch: 9, batch_id: 10, loss is: [0.139805]
epoch: 9, batch_id: 20, loss is: [0.02452004]
epoch: 9, batch_id: 30, loss is: [0.05679499]
[validation] accuracy/loss: 0.9450000524520874/0.1985950618982315
epoch: 10, batch_id: 0, loss is: [0.02260941]
epoch: 10, batch_id: 10, loss is: [0.0143445]
epoch: 10, batch_id: 20, loss is: [0.06397994]
epoch: 10, batch_id: 30, loss is: [0.20992565]
[validation] accuracy/loss: 0.9524999856948853/0.16782602667808533
epoch: 11, batch_id: 0, loss is: [0.01754483]
epoch: 11, batch_id: 10, loss is: [0.00636472]
epoch: 11, batch_id: 20, loss is: [0.00052195]
epoch: 11, batch_id: 30, loss is: [0.02565801]
[validation] accuracy/loss: 0.9399999380111694/0.16493897140026093
epoch: 12, batch_id: 0, loss is: [0.00188129]
epoch: 12, batch_id: 10, loss is: [0.00356475]
epoch: 12, batch_id: 20, loss is: [0.19718799]
epoch: 12, batch_id: 30, loss is: [0.02466022]
[validation] accuracy/loss: 0.9449999928474426/0.19480137526988983
epoch: 13, batch_id: 0, loss is: [0.00663883]
epoch: 13, batch_id: 10, loss is: [0.00742584]
epoch: 13, batch_id: 20, loss is: [0.03527917]
epoch: 13, batch_id: 30, loss is: [0.00233276]
[validation] accuracy/loss: 0.934999942779541/0.1994282752275467
epoch: 14, batch_id: 0, loss is: [0.00048874]
epoch: 14, batch_id: 10, loss is: [0.0025646]
epoch: 14, batch_id: 20, loss is: [0.0041927]
epoch: 14, batch_id: 30, loss is: [0.02140281]
[validation] accuracy/loss: 0.9375/0.21436703205108643

小公主mini516
#813 · replied 2020-01

Homework 6-1

4. Plot and compare the model's loss curves on the training and test sets as training proceeds.

5. Adjust the regularization weight, observe how the curves from item 4 change, and analyze why.

regularization=fluid.regularizer.L2Decay(regularization_coeff=0.1)

Adding the regularization term helps avoid overfitting, but after adding it the loss increased.
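A minimal sketch, assuming the fluid.optimizer/fluid.regularizer API of the Paddle Fluid version used in the course, of how a coefficient like the one shown is typically passed to the optimizer; a larger regularization_coeff penalizes large weights more strongly, which raises the reported training loss:

import paddle.fluid as fluid

reg = fluid.regularizer.L2Decay(regularization_coeff=0.1)
# e.g., in dygraph mode:
# opt = fluid.optimizer.Adam(learning_rate=0.001, regularization=reg,
#                            parameter_list=model.parameters())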

跌路冷
#814 · replied 2020-01

Homework 1-2

1) Judging a watermelon's sweetness from its color and surface texture.

Hypothesis: a function H with parameters theta, taking color and texture as the input X.

Parameters: taking a neural network as an example, theta is the weights.

Optimization objective: make the output of H(theta, X) agree with the ground-truth label Y as closely as possible.

2) AI can empower every industry, for example by improving efficiency, lowering costs, and adding value to products. Demand for AI talent is therefore strong, while universities have only just begun to open AI programs, so people with AI skills are in short supply and in great demand; that is why AI engineers have good career prospects.

小公主mini516
#815 · replied 2020-01

Homework 8

After replacing the Sigmoid activation in LeNet's intermediate layers with ReLU, the loss can converge on the fundus screening dataset.

With sigmoid the gradient shrinks very noticeably, by at least 75% per layer (its derivative is at most 0.25).

With ReLU the backward gradient is not attenuated (the derivative is 1 for positive inputs), so ReLU converges faster during training.

start training ...
epoch: 0, batch_id: 0, loss is: [0.8311712]
epoch: 0, batch_id: 10, loss is: [0.75977224]
epoch: 0, batch_id: 20, loss is: [0.62046874]
epoch: 0, batch_id: 30, loss is: [0.55899584]
[validation] accuracy/loss: 0.7800000309944153/0.4688795208930969
epoch: 1, batch_id: 0, loss is: [0.57008827]
epoch: 1, batch_id: 10, loss is: [0.49123535]
epoch: 1, batch_id: 20, loss is: [0.34217924]
epoch: 1, batch_id: 30, loss is: [0.2729541]
[validation] accuracy/loss: 0.9174998998641968/0.23008708655834198
epoch: 2, batch_id: 0, loss is: [0.14669901]
epoch: 2, batch_id: 10, loss is: [0.44717345]
epoch: 2, batch_id: 20, loss is: [0.4178056]
epoch: 2, batch_id: 30, loss is: [0.04775361]
[validation] accuracy/loss: 0.9175000190734863/0.2236608862876892
epoch: 3, batch_id: 0, loss is: [0.51134837]
epoch: 3, batch_id: 10, loss is: [0.0601957]
epoch: 3, batch_id: 20, loss is: [0.04553626]
epoch: 3, batch_id: 30, loss is: [0.32979023]
[validation] accuracy/loss: 0.9100000262260437/0.20146411657333374
epoch: 4, batch_id: 0, loss is: [0.32185698]
epoch: 4, batch_id: 10, loss is: [0.22855277]
epoch: 4, batch_id: 20, loss is: [0.02208542]
epoch: 4, batch_id: 30, loss is: [0.03730985]
[validation] accuracy/loss: 0.9325000643730164/0.1958758533000946

星光ld1
#816 · replied 2020-01

Homework 8:

If the Sigmoid activation in LeNet's intermediate layers is replaced with ReLU, what result do we get on the fundus screening dataset? Does the loss converge, and is the difference between ReLU and Sigmoid the cause of the different results?

With ReLU the model converges, reaching roughly 94% validation accuracy within about 5 epochs. The results are shown in the figure below:

The loss can converge, and ReLU is the main cause of the difference. Simply making the network deeper while keeping sigmoid activations brings no improvement; replacing only the first layer's sigmoid and using relu for the remaining layers also helps to some extent. My guess at the reason: sigmoid only has a sizable gradient when its input is near 0, and the gradient is small far from 0, so after several sigmoid layers the gradient becomes too small and the weights cannot be updated effectively. With relu activation, any input greater than 0 produces a gradient of 1 and only inputs below 0 produce no gradient, so stacking several layers does not significantly shrink the gradient magnitude and the weight updates are more effective.

呵赫he
#817 · replied 2020-01

Homework 8:

start training ...
epoch: 0, batch_id: 0, loss is: [0.7605928]
epoch: 0, batch_id: 10, loss is: [0.54173887]
epoch: 0, batch_id: 20, loss is: [0.24313883]
epoch: 0, batch_id: 30, loss is: [0.48164058]
[validation] accuracy/loss: 0.9149999618530273/0.25530192255973816
epoch: 1, batch_id: 0, loss is: [0.14735879]
epoch: 1, batch_id: 10, loss is: [0.08558887]
epoch: 1, batch_id: 20, loss is: [0.10213143]
epoch: 1, batch_id: 30, loss is: [0.1050892]
[validation] accuracy/loss: 0.8575000762939453/0.2944568991661072
epoch: 2, batch_id: 0, loss is: [0.4966081]
epoch: 2, batch_id: 10, loss is: [0.2625029]
epoch: 2, batch_id: 20, loss is: [0.24349603]
epoch: 2, batch_id: 30, loss is: [0.35876155]
[validation] accuracy/loss: 0.9225000143051147/0.2158813327550888
epoch: 3, batch_id: 0, loss is: [0.10023664]
epoch: 3, batch_id: 10, loss is: [0.31895268]
epoch: 3, batch_id: 20, loss is: [0.21706752]
epoch: 3, batch_id: 30, loss is: [0.17611277]
[validation] accuracy/loss: 0.9399999380111694/0.19568277895450592
epoch: 4, batch_id: 0, loss is: [0.08944153]
epoch: 4, batch_id: 10, loss is: [0.14101766]
epoch: 4, batch_id: 20, loss is: [0.08374935]
epoch: 4, batch_id: 30, loss is: [0.22869536]
[validation] accuracy/loss: 0.9449999928474426/0.15649732947349548

The above are the results I got by changing only sigmoid to relu while keeping all other parameters the same. After the change the loss still fluctuates, but it shows a clear downward trend, and by the 5th epoch the validation accuracy reaches 0.94. My guess: compared with MNIST, the eye-disease images are much larger and finer-grained features are needed to tell the disease apart, while the sigmoid derivative is at most 0.25 and the function sits in soft saturation over a large region where the gradient is nearly zero, so the parameters may not get updated effectively. The relu function, by contrast, is simple and fast to compute, and its derivative is a constant 1 in the positive interval, so the model parameters can be updated effectively and the model can learn the finer-grained features needed to distinguish the eye disease.

vortual
#818 · replied 2020-01

Homework 8:

start training ... 
epoch: 0, batch_id: 0, loss is: [0.70829755]
epoch: 0, batch_id: 10, loss is: [0.56533426]
epoch: 0, batch_id: 20, loss is: [0.08799349]
epoch: 0, batch_id: 30, loss is: [0.6046146]
[validation] accuracy/loss: 0.7899999618530273/0.532879114151001
epoch: 1, batch_id: 0, loss is: [0.39749104]
epoch: 1, batch_id: 10, loss is: [0.5014836]
epoch: 1, batch_id: 20, loss is: [0.6070895]
epoch: 1, batch_id: 30, loss is: [0.4396693]
[validation] accuracy/loss: 0.8474999666213989/0.3580026626586914
epoch: 2, batch_id: 0, loss is: [0.48417154]
epoch: 2, batch_id: 10, loss is: [0.9675636]
epoch: 2, batch_id: 20, loss is: [0.19961545]
epoch: 2, batch_id: 30, loss is: [1.1551516]
[validation] accuracy/loss: 0.8225000500679016/0.4214121103286743
epoch: 3, batch_id: 0, loss is: [0.38194698]
epoch: 3, batch_id: 10, loss is: [0.46495494]
epoch: 3, batch_id: 20, loss is: [0.09343378]
epoch: 3, batch_id: 30, loss is: [0.23192282]
[validation] accuracy/loss: 0.9275000691413879/0.28732946515083313
epoch: 4, batch_id: 0, loss is: [0.23739669]
epoch: 4, batch_id: 10, loss is: [0.31052268]
epoch: 4, batch_id: 20, loss is: [0.11809415]
epoch: 4, batch_id: 30, loss is: [0.23537043]
[validation] accuracy/loss: 0.8975000381469727/0.3057504892349243

After switching the activation to ReLU the results clearly improved. Moreover, when I changed AlexNet's intermediate layers from ReLU to Sigmoid, AlexNet also stopped converging, so we can conclude that the difference between ReLU and Sigmoid is one of the causes of the different results. My guess at the reason: the handwritten-digit images are small, but even when the eye-disease images are resized to the same 28x28, LeNet still does not fully converge, so image size is not the only factor. Handwritten-digit recognition uses a single input channel while eye-disease recognition uses three, so more input features spread the computed activations over a wider range, more of them land in Sigmoid's two saturated ends where the gradient vanishes, and learning then stalls.

FrankFly
#819 · replied 2020-01

Homework 8:

[left: sigmoid, right: relu]

After switching the activation from sigmoid to relu, the loss drops noticeably and recognition improves. This comes from the different gradient behavior of the two functions: the sigmoid gradient is close to 0 in its saturation regions, and because the images are fairly large the layer outputs easily exceed 1, so vanishing gradients appear; the relu gradient is a constant for inputs greater than 0 and does not vanish, so backpropagation can make larger updates and training works better.

thunder95
#820 · replied 2020-01

Homework 8:

After replacing the Sigmoid activation with ReLU, convergence speeds up noticeably, the loss drops quickly, and the final recognition result is also good. The main reason is that the sigmoid derivative is close to 0 at both ends, and even in the middle the gradient is at most 1/4, which easily causes vanishing gradients and weakens learning. For inputs greater than 0, ReLU has a constant gradient, runs efficiently, learns faster, and gets past learning plateaus more easily; in the end the training accuracy improved by about 6%.

sigmoid:
epoch: 0, batch_id: 0, loss is: [2.5275047]
epoch: 0, batch_id: 1000, loss is: [2.2986321]
epoch: 0, batch_id: 2000, loss is: [2.335277]
epoch: 0, batch_id: 3000, loss is: [2.2777514]
epoch: 0, batch_id: 4000, loss is: [2.2712574]
epoch: 0, batch_id: 5000, loss is: [2.3250005]
[validation] accuracy/loss: 0.3083000183105469/2.269392251968384
epoch: 1, batch_id: 0, loss is: [2.2627172]
epoch: 1, batch_id: 1000, loss is: [2.234743]
epoch: 1, batch_id: 2000, loss is: [2.2577486]
epoch: 1, batch_id: 3000, loss is: [2.0155036]
epoch: 1, batch_id: 4000, loss is: [1.6208599]
epoch: 1, batch_id: 5000, loss is: [1.720129]
[validation] accuracy/loss: 0.6924999952316284/1.1326113939285278
epoch: 2, batch_id: 0, loss is: [0.8663475]
epoch: 2, batch_id: 1000, loss is: [0.7190557]
epoch: 2, batch_id: 2000, loss is: [0.7133648]
epoch: 2, batch_id: 3000, loss is: [0.428047]
epoch: 2, batch_id: 4000, loss is: [0.37107068]
epoch: 2, batch_id: 5000, loss is: [0.6824454]
[validation] accuracy/loss: 0.8634000420570374/0.5227167010307312

relu:
epoch: 0, batch_id: 1000, loss is: [0.06106605]
epoch: 0, batch_id: 2000, loss is: [0.14019644]
epoch: 0, batch_id: 3000, loss is: [0.01926768]
epoch: 0, batch_id: 4000, loss is: [0.01033278]
epoch: 0, batch_id: 5000, loss is: [0.01547813]
[validation] accuracy/loss: 0.9636999368667603/0.10895710438489914
epoch: 1, batch_id: 0, loss is: [0.01023675]
epoch: 1, batch_id: 1000, loss is: [0.01679087]
epoch: 1, batch_id: 2000, loss is: [0.00410097]
epoch: 1, batch_id: 3000, loss is: [0.00113919]
epoch: 1, batch_id: 4000, loss is: [0.01116453]
epoch: 1, batch_id: 5000, loss is: [0.00396175]
[validation] accuracy/loss: 0.9726999402046204/0.08064977079629898
epoch: 2, batch_id: 0, loss is: [0.01500158]
epoch: 2, batch_id: 1000, loss is: [0.01724296]
epoch: 2, batch_id: 2000, loss is: [0.00221792]
epoch: 2, batch_id: 3000, loss is: [0.00031453]
epoch: 2, batch_id: 4000, loss is: [0.02127162]
epoch: 2, batch_id: 5000, loss is: [0.00169308]
[validation] accuracy/loss: 0.9827999472618103/0.05546225607395172

张小黄
#821 · replied 2020-01

Homework 7-1:

Multiplications: 3*3 [kernel] * (224 [image height] - 3 [kernel size] + 1 + 1 [padding] * 2) * (224 - 3 + 1 + 1*2) [same as before, for the width] * 3 [input channels] * 64 [number of kernels] * 10 [number of images] = 867041280

Additions: (3*3 - 1) * (224 - 3 + 1 + 1*2) * (224 - 3 + 1 + 1*2) * 3 * 64 * 10 + 1 * (224 - 3 + 1 + 1*2) * (224 - 3 + 1 + 1*2) * 3 * 64 * 10 [bias term] = 867041280
