Homework Thread | Baidu Deep Learning Bootcamp
AI Studio Education Edition · Other Teacher Training · 793143 views · 953 replies

The Baidu Deep Learning Bootcamp is officially open, and each stage's homework carries its own rewards. Welcome, and happy learning!

PS: If your post expires or fails review, first copy your content into a Word document, then complete real-name verification as prompted, refresh the page, and paste the content back in to submit.

Everyone is welcome to sign up!

Homework for January 9:

Homework 9-1: Chapter 2 covered how to configure learning-rate decay; piecewise (step) decay with a decay factor of 0.1 is recommended here. Given how the ResNet training currently behaves, at which training step(s) should the decay be applied? Configure the learning-rate decay and retrain the ResNet model on the iChallenge-PM eye-disease dataset.
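For reference, a minimal framework-agnostic sketch of piecewise decay with a factor of 0.1; the boundary steps below are placeholder assumptions, not the answer, and should be chosen from where the ResNet loss curve plateaus in your own run:

# Piecewise (step) learning-rate decay: multiply the rate by 0.1 at each boundary.
# The boundaries (2000, 4000) are illustrative placeholders only.
def piecewise_lr(step, base_lr=0.1, boundaries=(2000, 4000), decay=0.1):
    lr = base_lr
    for boundary in boundaries:
        if step >= boundary:
            lr *= decay
    return lr

# piecewise_lr(0) -> 0.1, piecewise_lr(2500) -> 0.01, piecewise_lr(5000) -> 0.001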

Homework 9-1 reward: 5 participants drawn at random receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 9-1: XXX

Prize-draw deadline: before 12:00 noon, January 13, 2020

Homework 9-2 reward: 5 participants drawn at random receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 9-2: XXX

Prize-draw deadline: before 12:00 noon, January 13, 2020

 

Homework for January 7:

Homework 8: If the Sigmoid activations in LeNet's intermediate layers are replaced with ReLU, what results do you get on the fundus-screening dataset? Does the loss converge? Is the difference between ReLU and Sigmoid the cause of the different results? Share your view.
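As a starting point for the discussion, a minimal NumPy sketch comparing the two activations' derivatives (Sigmoid's never exceeds 0.25 and vanishes for large |x|, while ReLU's is exactly 1 for positive inputs):

import numpy as np

x = np.linspace(-6., 6., 13)
sigmoid = 1. / (1. + np.exp(-x))
d_sigmoid = sigmoid * (1. - sigmoid)   # peaks at 0.25 when x = 0, near 0 for |x| > 4
d_relu = (x > 0).astype(float)         # 1 for positive inputs, 0 otherwise

print(d_sigmoid.max())   # 0.25: stacked layers shrink gradients (vanishing gradient)
print(d_relu.max())      # 1.0:  gradients pass through positive units unattenuated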

Homework 8 reward: 5 participants drawn at random receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers

Reply format: Homework 8: XXX

Winners: #820 thunder95, #819 你还说不想我吗, #818 百度用户#0762194095, #817 呵赫 he, #816 星光1dl

Homework for January 2

Homework 7-1: count the total number of multiplications and additions in a convolution

The input has shape [10, 3, 224, 224]; the kernel size is kh = kw = 3, with 64 output channels, stride = 1, and padding ph = pw = 1.

How many multiplications and additions does this convolution take in total?

Hint: first work out how many multiplications and additions one output pixel needs, then scale up to the total number of operations.

Submission: reply with the multiplication and addition counts, e.g.: multiplications 1000, additions 1000
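For reference, a minimal sketch of one common counting convention, where each output pixel costs Cin*kh*kw multiplications and one fewer addition (bias ignored); note that replies in this thread use different conventions, e.g. counting only the kh*kw window per pixel:

# Input [N, Cin, H, W] = [10, 3, 224, 224]; 3x3 kernel, 64 output channels,
# stride 1, padding 1, so the output spatial size stays 224 x 224.
N, Cin, H, W = 10, 3, 224, 224
kh = kw = 3
Cout = 64
Hout = (H + 2 * 1 - kh) // 1 + 1          # 224
Wout = (W + 2 * 1 - kw) // 1 + 1          # 224

outputs = N * Cout * Hout * Wout          # 32112640 output pixels
muls = outputs * (Cin * kh * kw)          # 27 multiplications per output pixel
adds = outputs * (Cin * kh * kw - 1)      # 26 additions to sum the products (no bias)
print(muls, adds)                         # 867041280 834928640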

Homework 7-1 reward: 5 participants drawn to win a PaddlePaddle notebook + data cable; deadline: before 12:00 noon, January 6, 2020

Reply format: Homework 7-1: XXX

Homework 7-2 reward: 5 participants drawn from the correct answers win a PaddlePaddle notebook + 50 RMB JD gift card; deadline: before 12:00 noon, January 6, 2020

 

Homework for December 31

Homework 6-1:

1. Print each layer's output of the plain neural network and inspect the contents.
2. Plot the classification-accuracy metric with the matplotlib (plt) library.
3. Use classification accuracy to judge which loss function trains the more effective model.
4. Plot the model's loss curves on the training and test sets as training progresses (see the sketch below).
5. Adjust the regularization weight, observe how the curves from step 4 change, and analyze why.
Homework 6-1 reward: 5 participants drawn to win a PaddlePaddle notebook + data cable. Reply format: Homework 6-1: XXX
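For item 4 above, a minimal matplotlib sketch; the placeholder lists stand in for the per-epoch losses collected by your own training loop:

import matplotlib.pyplot as plt

# Placeholder values; in the homework these come from your training loop.
train_losses = [2.3, 1.1, 0.7, 0.5, 0.4]
test_losses = [2.4, 1.3, 0.9, 0.8, 0.8]

epochs = range(len(train_losses))
plt.plot(epochs, train_losses, label='train loss')
plt.plot(epochs, test_losses, label='test loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()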

Homework 6-2:

Correctly run the minimal version of Homework 3 from the AI Studio course 《百度架构师手把手教深度学习》, analyze problems or optimization opportunities in the training process, and optimize along these lines:

(1) Data: data-augmentation methods (see the sketch after this assignment)

(2) Hypothesis: improve the network model

(3) Loss: try different loss functions

(4) Optimization: try different optimizers and learning rates

Goal: push the model's classification accuracy on the MNIST test set as high as possible.

Submit the code and model that achieve your best accuracy; the top 10 results will be shortlisted for prizes.
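For option (1), a minimal framework-agnostic sketch of one candidate augmentation, a random pixel shift of a 28x28 MNIST image; the shift range and zero padding are illustrative choices, not the assignment's prescription:

import numpy as np

def random_shift(img, max_shift=2):
    # Translate a 28x28 image by up to max_shift pixels in each direction,
    # zero-padding the exposed border.
    dy, dx = np.random.randint(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(img)
    src = img[max(0, -dy):28 - max(0, dy), max(0, -dx):28 - max(0, dx)]
    shifted[max(0, dy):max(0, dy) + src.shape[0], max(0, dx):max(0, dx) + src.shape[1]] = src
    return shifted

img = np.arange(784, dtype=float).reshape(28, 28)   # stand-in for an MNIST digit
augmented = random_shift(img)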

Homework 6-2 reward: PaddlePaddle notebook + 50 RMB JD gift card

 

Homework for December 25

Homework for December 23

Homework 4-1: Run Homework 2 on AI Studio: build the house-price prediction model with deep learning.

Homework 4-1 reward: PaddlePaddle notebook + the textbook 《深度学习导论与应用实践》, awarded to participants #2, #3, #23, #123, #223, #323, and so on.

Homework 4-2: Answer the following question in a reply below:

Having written the house-price model in different ways with Python and with a deep learning framework, how do the hand-written Python model and the PaddlePaddle model compare, for example in program structure, ease of coding, prediction quality, and training time?

Reply format: Homework 4-2: XXX

Homework 4-2 reward: among submissions received before 12:00 noon on Friday, December 27, the top five receive a Baidu data cable + the textbook 《深度学习导论与应用实践》.


Homework for December 17

Answer the two questions below and reply with your answers. Reply format: Homework 3-1 (1) XX (2) XX

Reward: among submissions before 12:00 noon on December 20, 2019, 5 participants will be drawn for feedback; the prize is a notebook + data cable.

Homework for December 12

Winner: 12th place: 飞天雄者

Homework for December 10
Homework 1-1: Run the house-price prediction case end to end on AI Studio: https://aistudio.baidu.com/aistudio/education/group/info/888

Homework 1-1 reward: the first 3 to finish, plus the 6th, 66th, 166th, 266th, 366th, 466th, 566th, and 666th finishers, each receive a PaddlePaddle gift pack: a PaddlePaddle hat, data cable, and logo pen.

The Homework 1-1 winners are shown in the image:

Homework 1-2: Answer the two questions below and post your answers in a reply.
① By analogy with the Newton's second law case, what other problems in your work or life could be solved with the supervised-learning framework? What are the hypothesis and parameters? What is the optimization objective?
② Why do AI engineers have good career prospects? How would you explain this from an economics (supply and demand) perspective?
Homework 1-2 reward: the top 5 most-liked replies receive the textbook 《深度学习导论与应用实践》 + a PaddlePaddle notebook.

Top-5 most-liked winners: 1. 飞天雄者  2. God_s_apple  3. 177*******62  4. 学痞龙  5. 故乡237, qq526557820

Homework deadline: January 10, 2020; only those who finish before then qualify for the final Mac grand-prize draw.

 

How to sign up:

1. Join QQ group 726887660; the class teacher runs study materials, Q&A, and prize activities in the QQ group.

2. Click this link to enroll in the course and practice: https://aistudio.baidu.com/aistudio/course/introduce/888

Note: course recordings are uploaded to the AI Studio course 《百度架构师手把手教深度学习》 within 3 business days.

 

All comments (953) · chronological order
wsdsoft · #942 · replied 2020-03
Quoting wsdsoft #941:
Because $$H_{out} = \frac{H + 2p_h - k_h}{s_h} + 1$$ and $$W_{out} = \frac{W + 2p_w - k_w}{s_w} + 1$$, the total is: additions $= 9*10*64*224*224 = 289013760$; the multiplication count is the same.

Formulas don't render here... sorry.

aitrust · #943 · replied 2020-03

Homework 1-2: Answer the two questions below and post your answers in a reply.
① By analogy with the Newton's second law case, what other problems in your work or life could be solved with the supervised-learning framework? What are the hypothesis and parameters? What is the optimization objective?

Many everyday regression problems can be solved with supervised learning. For example, we can predict a region's wheat yield for the year from the year's rainfall distribution. The hypothesis is that rainfall distribution and wheat yield follow some functional relationship. The input is each month's rainfall for the year, and the output is the predicted wheat yield. The training data are historical rainfall and yield figures from wheat-growing regions worldwide. The loss function is the difference between the predicted and actual yield, and the optimization objective is to minimize that loss.
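A minimal NumPy sketch of this framing with a linear hypothesis; the data are synthetic stand-ins for the historical rainfall and yield records:

import numpy as np

# Synthetic stand-in data: 100 years x 12 monthly rainfall values -> yield
rng = np.random.default_rng(0)
X = rng.uniform(0, 200, size=(100, 12))          # monthly rainfall (mm)
true_w = rng.uniform(0.01, 0.05, size=12)
y = X @ true_w + rng.normal(0, 1, size=100)      # wheat yield (illustrative units)

# Hypothesis: yield = X @ w + b; parameters: w and b
# Objective: minimize mean squared error, here via the least-squares solution
X1 = np.hstack([X, np.ones((100, 1))])           # append a bias column
w_b, *_ = np.linalg.lstsq(X1, y, rcond=None)
mse = np.mean((X1 @ w_b - y) ** 2)
print(mse)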

② Why do AI engineers have good career prospects? How would you explain this from an economics (supply and demand) perspective?

Much work that used to rely on human labor and experience will shift to machines, especially in the many data-rich fields created by the information explosion, where a trained model's efficiency far exceeds a human's. So we need more AI engineers: on the back end to keep improving models and algorithms, and on the front end to integrate with every industry and raise real-world productivity. As Andrew Ng put it, AI is the second industrial revolution, and deep learning is the growth fuel of the new era.

aitrust · #944 · replied 2020-03

Homework 2-1:

(1)

Backpropagating from right to left, the gradients from top to bottom are:

Third layer: 1.1, 650

Second layer: 1.1, 1.1

First layer: 110, 2.2, 3.3, 165

(2) Deriving backprop was one thing, but turning it into code got confusing; corrections from experts are welcome:

import numpy as np

class Network(object):
    def __init__(self, num_of_weights):
        # Randomly initialize w; fix the random seed so every run is reproducible
        np.random.seed(0)
        self.w = np.random.randn(12, num_of_weights)   # first (hidden) layer: 12 units
        self.b = 0.
        # Added hidden-to-output layer
        self.w1 = np.random.randn(1, 12)
        self.b1 = 0.

    def forward(self, x):
        z = np.dot(self.w, x.T) + self.b        # hidden-layer output, shape (12, N)
        # Added output layer
        z1 = np.dot(self.w1, z) + self.b1       # network output, shape (1, N)
        return self.w, self.w1, z, z1

    def loss(self, z1, y):
        error = z1.T - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        w, w1, z, z1 = self.forward(x)
        gradient_w1 = np.dot((z1 - y.T), z.T) / x.shape[0]
        gradient_w = np.dot(np.dot(w1.T, (z1 - y.T)), x) / x.shape[0]
        gradient_b1 = np.mean(z1 - y.T)
        gradient_b = np.mean(np.dot(w1.T, (z1 - y.T)))

        return gradient_w, gradient_w1, gradient_b, gradient_b1

    def update(self, gradient_w, gradient_w1, gradient_b, gradient_b1, eta=0.01):
        self.w = self.w - eta * gradient_w
        self.b = self.b - eta * gradient_b
        self.w1 = self.w1 - eta * gradient_w1
        self.b1 = self.b1 - eta * gradient_b1

    def train(self, x, y, iterations=100, eta=0.01):
        losses = []
        for i in range(iterations):
            w, w1, z, z1 = self.forward(x)
            L = self.loss(z1, y)   # loss on the network output z1, not the hidden z
            gradient_w, gradient_w1, gradient_b, gradient_b1 = self.gradient(x, y)
            self.update(gradient_w, gradient_w1, gradient_b, gradient_b1, eta)
            losses.append(L)
            if (i + 1) % 10 == 0:
                print('iter {}, loss {}'.format(i, L))
        return losses

import matplotlib.pyplot as plt

# Load the data (load_data() is defined in the course notebook)
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# Create the network
net = Network(13)
num_iterations = 1000
# Start training
losses = net.train(x, y, iterations=num_iterations, eta=0.01)

# Plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()

iter 9, loss 9.3825921080206
iter 19, loss 9.347619340210835
iter 29, loss 9.324036797656442
iter 39, loss 9.307376312619404
iter 49, loss 9.29525552855137
iter 59, loss 9.286221841530988
iter 69, loss 9.279337230816516
iter 79, loss 9.27397745072883
iter 89, loss 9.26971866375798
iter 99, loss 9.266268495454465
iter 109, loss 9.263422406261167
iter 119, loss 9.261035392097929
iter 129, loss 9.259003301907022
iter 139, loss 9.257250332757472
iter 149, loss 9.255720562402686
iter 159, loss 9.254372156841095
iter 169, loss 9.253173370462248
iter 179, loss 9.252099759379124
iter 189, loss 9.2511322231411
iter 199, loss 9.250255616728243
iter 209, loss 9.249457758190983
iter 219, loss 9.24872871282957
iter 229, loss 9.248060272080783
iter 239, loss 9.247445570497401
iter 249, loss 9.246878801392063
iter 259, loss 9.24635500350909
iter 269, loss 9.245869899229895
iter 279, loss 9.245419770473609
iter 289, loss 9.245001362406565
iter 299, loss 9.244611807851346
iter 309, loss 9.244248567248333
iter 319, loss 9.243909380417353
iter 329, loss 9.243592227363267
iter 339, loss 9.243295296085812
iter 349, loss 9.243016955871662
iter 359, loss 9.242755734923504
iter 369, loss 9.242510301456623
iter 379, loss 9.242279447596855
iter 389, loss 9.242062075564606
iter 399, loss 9.241857185742456
iter 409, loss 9.241663866308963
iter 419, loss 9.241481284185795
iter 429, loss 9.241308677094889
iter 439, loss 9.241145346560543
iter 449, loss 9.24099065172123
iter 459, loss 9.240844003839314
iter 469, loss 9.240704861415619
iter 479, loss 9.24057272583057
iter 489, loss 9.240447137445885
iter 499, loss 9.240327672110501
iter 509, loss 9.240213938022558
iter 519, loss 9.240105572906016
iter 529, loss 9.24000224146603
iter 539, loss 9.239903633091924
iter 549, loss 9.239809459780613
iter 559, loss 9.239719454256683
iter 569, loss 9.23963336826825
iter 579, loss 9.23955097104027
iter 589, loss 9.239472047868965
iter 599, loss 9.239396398843112
iter 609, loss 9.239323837679414
iter 619, loss 9.239254190660613
iter 629, loss 9.239187295666332
iter 639, loss 9.239123001287602
iter 649, loss 9.239061166017096
iter 659, loss 9.239001657507888
iter 669, loss 9.238944351894267
iter 679, loss 9.238889133168918
iter 689, loss 9.238835892611236
iter 699, loss 9.238784528262139
iter 709, loss 9.23873494444119
iter 719, loss 9.238687051302275
iter 729, loss 9.238640764424403
iter 739, loss 9.238596004434566
iter 749, loss 9.238552696659877
iter 759, loss 9.238510770806512
iter 769, loss 9.238470160663093
iter 779, loss 9.23843080382655
iter 789, loss 9.238392641448506
iter 799, loss 9.238355618000536
iter 809, loss 9.238319681056726
iter 819, loss 9.23828478109214
iter 829, loss 9.238250871295905
iter 839, loss 9.238217907397747
iter 849, loss 9.23818584750692
iter 859, loss 9.238154651962537
iter 869, loss 9.23812428319445
iter 879, loss 9.23809470559382
iter 889, loss 9.238065885392652
iter 899, loss 9.238037790551644
iter 909, loss 9.238010390655663
iter 919, loss 9.237983656816333
iter 929, loss 9.23795756158117
iter 939, loss 9.237932078848809
iter 949, loss 9.237907183789844
iter 959, loss 9.237882852772902
iter 969, loss 9.237859063295577
iter 979, loss 9.237835793919825
iter 989, loss 9.237813024211583
iter 999, loss 9.23779073468425

 

AI-BAI · #945 · replied 2020-03
Quoting wsdsoft #942: Formulas don't render here... sorry.

You can write it up in a notebook and paste the link here, or render the formula as an image. Written this way it looks odd.

aitrust · #946 · replied 2020-03

Homework 3-1:

(1)

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10., 10, 0.1)
# tanh(x) = (e^x - e^-x) / (e^x + e^-x); equivalent to np.tanh(x)
y = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
plt.figure(figsize=(5, 5))
plt.plot(x, y, color='r')
plt.text(-7.5, 0.5, r'$y=tanh(x)$', fontsize=13)
currentAxis = plt.gca()
currentAxis.xaxis.set_label_text('x', fontsize=15)
currentAxis.yaxis.set_label_text('y', fontsize=15)
plt.show()

 

(2)

# Count the positive entries of a 10x10 standard-normal matrix
p = np.random.randn(10, 10)
q = (p > 0)   # boolean mask; True counts as 1 in the sum
q.sum()
aitrust · #947 · replied 2020-03

Homework 4-2:

In program structure, the deep-learning-framework version is more standardized.

In ease of coding, the framework version is much easier.

In prediction quality, the two are about the same.

In training time, the two are also about the same.

132张z · #948 · replied 2020-03

Homework 3-1

import numpy as np
import matplotlib.pyplot as plt


# Set the figure size
plt.figure(figsize=(8, 3))

x = np.arange(-10, 10, 0.1)
# Compute the tanh function
s = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))


#########################################################
# Plotting code below

# Reserve two subplot panes and draw the tanh curve in the left one
f = plt.subplot(121)
# Draw the curve
plt.plot(x, s, color='r')
# Add a text label
plt.text(-5., 0.9, r'$y=tanh(x)$', fontsize=13)
# Format the axes
currentAxis = plt.gca()
currentAxis.xaxis.set_label_text('x', fontsize=15)
currentAxis.yaxis.set_label_text('y', fontsize=15)

plt.show()

(2)


# Count the positive entries of a 10x10 standard-normal matrix
p = np.random.randn(10, 10)
q = (p > 0)   # boolean mask; True counts as 1 in the sum
q.sum()

wangaaa5 · #949 · replied 2020-04

Implement gradient propagation for a two-layer neural network in code, with a hidden layer of size 13 (the house-price prediction case; the current course version uses a one-layer network).

import numpy as np

class Network(object):
    def __init__(self, num_of_weight, num_of_hidden=13):
        # Randomly initialize the parameters
        # num_of_hidden is the hidden-layer size
        self.w1 = np.random.randn(num_of_weight, num_of_hidden)
        self.b1 = np.zeros((num_of_weight, 1))
        self.w2 = np.random.randn(num_of_hidden, 1)
        self.b2 = 0.

    def forward(self, x):
        z1 = np.dot(x, self.w1) + self.b1.T
        z2 = np.dot(z1, self.w2) + self.b2
        return z1, z2

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y):
        z1, z2 = self.forward(x)
        N = x.shape[0]
        gradient_w2 = 1. / N * np.sum((z2 - y) * z1, axis=0)
        gradient_w2 = gradient_w2[:, np.newaxis]
        gradient_b2 = 1. / N * np.sum(z2 - y)

        gradient_w1 = 1. / N * np.sum((z2 - y) * np.dot(x, self.w2), axis=0)
        gradient_w1 = gradient_w1[:, np.newaxis]
        gradient_b1 = 1. / N * np.sum(z2 - y) * self.w2
        return gradient_w1, gradient_b1, gradient_w2, gradient_b2

    def update(self, gradient_w1, gradient_b1, gradient_w2, gradient_b2, eta=0.01):
        self.w1 = self.w1 - eta * gradient_w1
        self.w2 = self.w2 - eta * gradient_w2
        self.b1 = self.b1 - eta * gradient_b1
        self.b2 = self.b2 - eta * gradient_b2

    def train(self, training_data, num_epoches, batch_size=10, eta=0.01):
        n = len(training_data)
        losses = []
        for epoch_id in range(num_epoches):
            # Shuffle the training data before each epoch,
            # then draw batch_size samples at a time
            np.random.shuffle(training_data)
            # Split the training data into mini-batches of batch_size samples
            mini_batches = [training_data[k : k + batch_size] for k in range(0, n, batch_size)]
            for iter_id, mini_batch in enumerate(mini_batches):
                x = mini_batch[:, :-1]
                y = mini_batch[:, -1:]
                a1, a2 = self.forward(x)
                loss = self.loss(a2, y)
                gradient_w1, gradient_b1, gradient_w2, gradient_b2 = self.gradient(x, y)
                self.update(gradient_w1, gradient_b1, gradient_w2, gradient_b2, eta)
                losses.append(loss)
                print('Epoch {:3d} / iter {:3d}, loss = {:.4f}'.format(epoch_id, iter_id, loss))
        return losses

import matplotlib.pyplot as plt
# Load the data (load_data() is defined in the course notebook)
train_data, test_data = load_data()

# Create the network
net = Network(13)
# Start training
losses = net.train(train_data, num_epoches=50, batch_size=100, eta=0.1)

# Plot the loss curve
plot_x = np.arange(len(losses))
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
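For comparison, a sketch of the textbook chain-rule gradients for this two-layer linear network (z1 = x@w1 + b1, z2 = z1@w2 + b2, MSE loss); it differs from the gradient() implementation above:

import numpy as np

def reference_gradients(x, y, z1, z2, w2):
    # dL/dz2 = 2*(z2 - y)/N for MSE; the constant factor is folded into eta here.
    N = x.shape[0]
    delta2 = (z2 - y) / N            # error at the output, shape (N, 1)
    grad_w2 = z1.T @ delta2          # (hidden, 1)
    grad_b2 = delta2.sum()
    delta1 = delta2 @ w2.T           # error backpropagated to the hidden layer
    grad_w1 = x.T @ delta1           # (inputs, hidden)
    grad_b1 = delta1.sum(axis=0)     # (hidden,)
    return grad_w1, grad_b1, grad_w2, grad_b2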


Epoch 0 / iter 0, loss = 6.0063
Epoch 0 / iter 1, loss = 3.9008
Epoch 0 / iter 2, loss = 2.8839
Epoch 0 / iter 3, loss = 2.0580
Epoch 0 / iter 4, loss = 1.8866
Epoch 1 / iter 0, loss = 1.6941
Epoch 1 / iter 1, loss = 1.0443
Epoch 1 / iter 2, loss = 1.0616
Epoch 1 / iter 3, loss = 0.8426
Epoch 1 / iter 4, loss = 1.9301
Epoch 2 / iter 0, loss = 0.7405
Epoch 2 / iter 1, loss = 0.7418
Epoch 2 / iter 2, loss = 0.5212
Epoch 2 / iter 3, loss = 0.6352
Epoch 2 / iter 4, loss = 0.8516
Epoch 3 / iter 0, loss = 0.7719
Epoch 3 / iter 1, loss = 0.4037
Epoch 3 / iter 2, loss = 0.5405
Epoch 3 / iter 3, loss = 0.4721
Epoch 3 / iter 4, loss = 0.1092
Epoch 4 / iter 0, loss = 0.4165
Epoch 4 / iter 1, loss = 0.5451
Epoch 4 / iter 2, loss = 0.4381
Epoch 4 / iter 3, loss = 0.4561
Epoch 4 / iter 4, loss = 0.4751
Epoch 5 / iter 0, loss = 0.6218
Epoch 5 / iter 1, loss = 0.4024
Epoch 5 / iter 2, loss = 0.3839
Epoch 5 / iter 3, loss = 0.4192
Epoch 5 / iter 4, loss = 0.1826
Epoch 6 / iter 0, loss = 0.4473
Epoch 6 / iter 1, loss = 0.3952
Epoch 6 / iter 2, loss = 0.4334
Epoch 6 / iter 3, loss = 0.3009
Epoch 6 / iter 4, loss = 0.0966
Epoch 7 / iter 0, loss = 0.2935
Epoch 7 / iter 1, loss = 0.3804
Epoch 7 / iter 2, loss = 0.3648
Epoch 7 / iter 3, loss = 0.3375
Epoch 7 / iter 4, loss = 0.0409
Epoch 8 / iter 0, loss = 0.3339
Epoch 8 / iter 1, loss = 0.3186
Epoch 8 / iter 2, loss = 0.3365
Epoch 8 / iter 3, loss = 0.2962
Epoch 8 / iter 4, loss = 0.1815
Epoch 9 / iter 0, loss = 0.2622
Epoch 9 / iter 1, loss = 0.3199
Epoch 9 / iter 2, loss = 0.2625
Epoch 9 / iter 3, loss = 0.3464
Epoch 9 / iter 4, loss = 0.4756
Epoch 10 / iter 0, loss = 0.3407
Epoch 10 / iter 1, loss = 0.3133
Epoch 10 / iter 2, loss = 0.2473
Epoch 10 / iter 3, loss = 0.2802
Epoch 10 / iter 4, loss = 0.1967
Epoch 11 / iter 0, loss = 0.3424
Epoch 11 / iter 1, loss = 0.2601
Epoch 11 / iter 2, loss = 0.2525
Epoch 11 / iter 3, loss = 0.2559
Epoch 11 / iter 4, loss = 0.3054
Epoch 12 / iter 0, loss = 0.3352
Epoch 12 / iter 1, loss = 0.1944
Epoch 12 / iter 2, loss = 0.2578
Epoch 12 / iter 3, loss = 0.2640
Epoch 12 / iter 4, loss = 0.1363
Epoch 13 / iter 0, loss = 0.2577
Epoch 13 / iter 1, loss = 0.2718
Epoch 13 / iter 2, loss = 0.2268
Epoch 13 / iter 3, loss = 0.1996
Epoch 13 / iter 4, loss = 0.2169
Epoch 14 / iter 0, loss = 0.2631
Epoch 14 / iter 1, loss = 0.1760
Epoch 14 / iter 2, loss = 0.2497
Epoch 14 / iter 3, loss = 0.2292
Epoch 14 / iter 4, loss = 0.2817
Epoch 15 / iter 0, loss = 0.4323
Epoch 15 / iter 1, loss = 0.2697
Epoch 15 / iter 2, loss = 0.1862
Epoch 15 / iter 3, loss = 0.1937
Epoch 15 / iter 4, loss = 0.3019
Epoch 16 / iter 0, loss = 0.2487
Epoch 16 / iter 1, loss = 0.2243
Epoch 16 / iter 2, loss = 0.2238
Epoch 16 / iter 3, loss = 0.1705
Epoch 16 / iter 4, loss = 0.0828
Epoch 17 / iter 0, loss = 0.1791
Epoch 17 / iter 1, loss = 0.1712
Epoch 17 / iter 2, loss = 0.1927
Epoch 17 / iter 3, loss = 0.2110
Epoch 17 / iter 4, loss = 0.4230
Epoch 18 / iter 0, loss = 0.1850
Epoch 18 / iter 1, loss = 0.2162
Epoch 18 / iter 2, loss = 0.1621
Epoch 18 / iter 3, loss = 0.1851
Epoch 18 / iter 4, loss = 0.1974
Epoch 19 / iter 0, loss = 0.2490
Epoch 19 / iter 1, loss = 0.2036
Epoch 19 / iter 2, loss = 0.1634
Epoch 19 / iter 3, loss = 0.1759
Epoch 19 / iter 4, loss = 0.2964
Epoch 20 / iter 0, loss = 0.2657
Epoch 20 / iter 1, loss = 0.1705
Epoch 20 / iter 2, loss = 0.1788
Epoch 20 / iter 3, loss = 0.1668
Epoch 20 / iter 4, loss = 0.1888
Epoch 21 / iter 0, loss = 0.2154
Epoch 21 / iter 1, loss = 0.1902
Epoch 21 / iter 2, loss = 0.1292
Epoch 21 / iter 3, loss = 0.1227
Epoch 21 / iter 4, loss = 0.1492
Epoch 22 / iter 0, loss = 0.2150
Epoch 22 / iter 1, loss = 0.1522
Epoch 22 / iter 2, loss = 0.1640
Epoch 22 / iter 3, loss = 0.1478
Epoch 22 / iter 4, loss = 0.5226
Epoch 23 / iter 0, loss = 0.3681
Epoch 23 / iter 1, loss = 0.2176
Epoch 23 / iter 2, loss = 0.1413
Epoch 23 / iter 3, loss = 0.1263
Epoch 23 / iter 4, loss = 0.1355
Epoch 24 / iter 0, loss = 0.1242
Epoch 24 / iter 1, loss = 0.1322
Epoch 24 / iter 2, loss = 0.1601
Epoch 24 / iter 3, loss = 0.1188
Epoch 24 / iter 4, loss = 0.1885
Epoch 25 / iter 0, loss = 0.1434
Epoch 25 / iter 1, loss = 0.1084
Epoch 25 / iter 2, loss = 0.1080
Epoch 25 / iter 3, loss = 0.1328
Epoch 25 / iter 4, loss = 0.1292
Epoch 26 / iter 0, loss = 0.1309
Epoch 26 / iter 1, loss = 0.1165
Epoch 26 / iter 2, loss = 0.1095
Epoch 26 / iter 3, loss = 0.1282
Epoch 26 / iter 4, loss = 0.0612
Epoch 27 / iter 0, loss = 0.1875
Epoch 27 / iter 1, loss = 0.1044
Epoch 27 / iter 2, loss = 0.1162
Epoch 27 / iter 3, loss = 0.1060
Epoch 27 / iter 4, loss = 0.1016
Epoch 28 / iter 0, loss = 0.1396
Epoch 28 / iter 1, loss = 0.1151
Epoch 28 / iter 2, loss = 0.1031
Epoch 28 / iter 3, loss = 0.1109
Epoch 28 / iter 4, loss = 0.0723
Epoch 29 / iter 0, loss = 0.1343
Epoch 29 / iter 1, loss = 0.1323
Epoch 29 / iter 2, loss = 0.0874
Epoch 29 / iter 3, loss = 0.1004
Epoch 29 / iter 4, loss = 0.0330
Epoch 30 / iter 0, loss = 0.0974
Epoch 30 / iter 1, loss = 0.0967
Epoch 30 / iter 2, loss = 0.1011
Epoch 30 / iter 3, loss = 0.1188
Epoch 30 / iter 4, loss = 0.1907
Epoch 31 / iter 0, loss = 0.1053
Epoch 31 / iter 1, loss = 0.1087
Epoch 31 / iter 2, loss = 0.1039
Epoch 31 / iter 3, loss = 0.0874
Epoch 31 / iter 4, loss = 0.1647
Epoch 32 / iter 0, loss = 0.1252
Epoch 32 / iter 1, loss = 0.0964
Epoch 32 / iter 2, loss = 0.1082
Epoch 32 / iter 3, loss = 0.0940
Epoch 32 / iter 4, loss = 0.0685
Epoch 33 / iter 0, loss = 0.0824
Epoch 33 / iter 1, loss = 0.1001
Epoch 33 / iter 2, loss = 0.0855
Epoch 33 / iter 3, loss = 0.0882
Epoch 33 / iter 4, loss = 0.1732
Epoch 34 / iter 0, loss = 0.0903
Epoch 34 / iter 1, loss = 0.0848
Epoch 34 / iter 2, loss = 0.0914
Epoch 34 / iter 3, loss = 0.0832
Epoch 34 / iter 4, loss = 0.0487
Epoch 35 / iter 0, loss = 0.0981
Epoch 35 / iter 1, loss = 0.0693
Epoch 35 / iter 2, loss = 0.0821
Epoch 35 / iter 3, loss = 0.1000
Epoch 35 / iter 4, loss = 0.0486
Epoch 36 / iter 0, loss = 0.1349
Epoch 36 / iter 1, loss = 0.0854
Epoch 36 / iter 2, loss = 0.0674
Epoch 36 / iter 3, loss = 0.0806
Epoch 36 / iter 4, loss = 0.0954
Epoch 37 / iter 0, loss = 0.1437
Epoch 37 / iter 1, loss = 0.0970
Epoch 37 / iter 2, loss = 0.0819
Epoch 37 / iter 3, loss = 0.0731
Epoch 37 / iter 4, loss = 0.0716
Epoch 38 / iter 0, loss = 0.0971
Epoch 38 / iter 1, loss = 0.0747
Epoch 38 / iter 2, loss = 0.0856
Epoch 38 / iter 3, loss = 0.0795
Epoch 38 / iter 4, loss = 0.0522
Epoch 39 / iter 0, loss = 0.0971
Epoch 39 / iter 1, loss = 0.0701
Epoch 39 / iter 2, loss = 0.0773
Epoch 39 / iter 3, loss = 0.0802
Epoch 39 / iter 4, loss = 0.0547
Epoch 40 / iter 0, loss = 0.0876
Epoch 40 / iter 1, loss = 0.0711
Epoch 40 / iter 2, loss = 0.0850
Epoch 40 / iter 3, loss = 0.0678
Epoch 40 / iter 4, loss = 0.0176
Epoch 41 / iter 0, loss = 0.0642
Epoch 41 / iter 1, loss = 0.0654
Epoch 41 / iter 2, loss = 0.0860
Epoch 41 / iter 3, loss = 0.0784
Epoch 41 / iter 4, loss = 0.1004
Epoch 42 / iter 0, loss = 0.0938
Epoch 42 / iter 1, loss = 0.0761
Epoch 42 / iter 2, loss = 0.0664
Epoch 42 / iter 3, loss = 0.0740
Epoch 42 / iter 4, loss = 0.1262
Epoch 43 / iter 0, loss = 0.1119
Epoch 43 / iter 1, loss = 0.0722
Epoch 43 / iter 2, loss = 0.0631
Epoch 43 / iter 3, loss = 0.0695
Epoch 43 / iter 4, loss = 0.0157
Epoch 44 / iter 0, loss = 0.0624
Epoch 44 / iter 1, loss = 0.0602
Epoch 44 / iter 2, loss = 0.0630
Epoch 44 / iter 3, loss = 0.0792
Epoch 44 / iter 4, loss = 0.0533
Epoch 45 / iter 0, loss = 0.0723
Epoch 45 / iter 1, loss = 0.0618
Epoch 45 / iter 2, loss = 0.0685
Epoch 45 / iter 3, loss = 0.0656
Epoch 45 / iter 4, loss = 0.0500
Epoch 46 / iter 0, loss = 0.0611
Epoch 46 / iter 1, loss = 0.0607
Epoch 46 / iter 2, loss = 0.0707
Epoch 46 / iter 3, loss = 0.0614
Epoch 46 / iter 4, loss = 0.0388
Epoch 47 / iter 0, loss = 0.1042
Epoch 47 / iter 1, loss = 0.0559
Epoch 47 / iter 2, loss = 0.0641
Epoch 47 / iter 3, loss = 0.0495
Epoch 47 / iter 4, loss = 0.1000
Epoch 48 / iter 0, loss = 0.1023
Epoch 48 / iter 1, loss = 0.0644
Epoch 48 / iter 2, loss = 0.0672
Epoch 48 / iter 3, loss = 0.0520
Epoch 48 / iter 4, loss = 0.0422
Epoch 49 / iter 0, loss = 0.0745
Epoch 49 / iter 1, loss = 0.0575
Epoch 49 / iter 2, loss = 0.0478
Epoch 49 / iter 3, loss = 0.0629
Epoch 49 / iter 4, loss = 0.0718

朋朋辈辈的故事 · #950 · replied 2020-05

Homework 7-1:

289013760 multiplications, 610142470 additions

朋朋辈辈的故事 · #951 · replied 2020-05

Homework 7-2

Homework 8-1

On the eye-disease screening dataset, replacing LeNet's "sigmoid" activations with "relu" and rerunning on AI Studio raises the accuracy from 0.55 to 0.92. The losses on both the training set and the validation set are still falling, which suggests increasing the number of epochs and watching the loss further. Clearly, the change of activation function is what causes the different results.

dnydoney · #952 · replied 2020-06

cillian · #953 · replied 2020-06

Homework 1-2:

①: By analogy with the Newton's second law case, what other problems in your work or life could be solved with the supervised-learning framework? What are the hypothesis and parameters? What is the optimization objective?

Answer: As a student fresh out of graduation season, shipping my dorm belongings home by courier cost quite a bit. Estimating the courier fee is exactly the kind of problem the supervised-learning framework can handle.

Hypothesis: assume the fee depends on shipping distance, package weight, package volume, the nature of the contents (e.g., special protection for liquids), transport conditions at the destination, declared-value insurance, and so on, and that the hypothesis space is linear in these factors.

Parameters: the weights of the individual influencing factors.

Optimization objective: make the loss in fitting the existing data sufficiently small.

②: Why do AI engineers have good career prospects? How would you explain this from an economics (supply and demand) perspective?

Answer: AI has broad prospects and a wide range of applications, so demand for talent is high. "AI+" is already empowering security, finance, retail, marketing, and other sectors.

For senior AI talent in particular, supply still falls well short of demand.

笑过640 · #954 · replied 2021-04

Use classification accuracy to judge how well models trained with different loss functions perform. Run results for the tasks above:
