Homework Thread | Baidu Deep Learning Camp

The Baidu Deep Learning Camp has officially started. Each stage's homework comes with its own prizes. Happy learning!

PS: If a post expires or fails review, first copy your content into a Word document, then follow the prompts to complete real-name verification, refresh the page, paste the copied content back in, and resubmit.

Everyone is welcome to sign up!

Homework for January 9:

Homework 9-1: Chapter 2 covered how to set up learning-rate decay; here we suggest piecewise decay with a decay factor of 0.1. Given how the ResNet training currently behaves, at roughly which training steps should the decay be applied? Set up the learning-rate decay and retrain the ResNet model on the iChallenge-PM eye-disease dataset.
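For reference, a minimal sketch (not an official solution) of piecewise decay in Paddle 1.x dygraph. The boundary steps below are placeholders; pick them from where your own ResNet loss curve flattens out, and note that newer 1.x versions may also require passing parameter_list=model.parameters() to the optimizer.

import paddle.fluid as fluid

with fluid.dygraph.guard():
    base_lr = 0.01
    # placeholder step boundaries; each stage decays the learning rate by a factor of 0.1
    boundaries = [4000, 8000]
    values = [base_lr, base_lr * 0.1, base_lr * 0.01]
    lr = fluid.dygraph.PiecewiseDecay(boundaries, values, begin=0)
    optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=lr, momentum=0.9)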

Homework 9-1 prize: 5 participants drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 9-1: XXX

Prize-draw deadline: before 12:00 noon on January 13, 2020

Homework 9-2 prize: 5 participants drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 9-2: XXX

Prize-draw deadline: before 12:00 noon on January 13, 2020

 

Homework for January 7:

Homework 8: If the Sigmoid activations in the middle layers of the LeNet model are replaced with ReLU, what results do you get on the fundus screening dataset? Does the loss still converge? Is the difference between ReLU and Sigmoid the reason the results differ? Share your view.
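A minimal sketch of the change the question is asking about, using the same Paddle 1.x dygraph Conv2D layer that appears in the submissions later in this thread; the filter count below is illustrative, only the act argument is the point.

import paddle.fluid as fluid
from paddle.fluid.dygraph.nn import Conv2D

with fluid.dygraph.guard():
    # original LeNet-style middle layer with Sigmoid
    conv_sigmoid = Conv2D("lenet", num_filters=16, filter_size=5, act='sigmoid')
    # the variant the question asks you to try
    conv_relu = Conv2D("lenet", num_filters=16, filter_size=5, act='relu')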

Homework 8 prize: 5 participants drawn at random from the submissions will receive a PaddlePaddle notebook + data cable + PaddlePaddle stickers.

Reply format:  Homework 8: XXX

Winners: #820 thunder95, #819 你还说不想我吗, #818 百度用户#0762194095, #817 呵赫 he, #816 星光1dl

Homework for January 2

Homework 7-1: count how many multiplications and additions one convolution performs in total.

The input shape is [10, 3, 224, 224], the kernel size is kh = kw = 3, there are 64 output channels, the stride is 1, and the padding is ph = pw = 1.

How many multiplications and additions are needed to complete this convolution?

Hint: first work out how many multiplications and additions a single output pixel needs, then compute the total number of operations.
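Following the hint, here is a rough sketch (not the official answer) of how the counting can be organized in a few lines of Python; whether to add one extra addition per output pixel for a bias term is a convention you should state in your answer.

N, C_in, H, W = 10, 3, 224, 224      # input shape
C_out, kh, kw = 64, 3, 3             # output channels and kernel size
stride, pad = 1, 1

H_out = (H + 2 * pad - kh) // stride + 1
W_out = (W + 2 * pad - kw) // stride + 1

muls_per_pixel = C_in * kh * kw       # multiplications per output pixel
adds_per_pixel = C_in * kh * kw - 1   # additions needed to sum those products
total_pixels = N * C_out * H_out * W_out

print("multiplications:", muls_per_pixel * total_pixels)
print("additions:", adds_per_pixel * total_pixels)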

Submission: reply with the number of multiplications and additions, for example: multiplications 1000, additions 1000.

Homework 7-1 prize: 5 people will be drawn to win a PaddlePaddle notebook + data cable; deadline: before 12:00 noon on January 6, 2020.

Reply format:  Homework 7-1: XXX

Homework 7-2 prize: 5 people drawn from the correct answers will receive a PaddlePaddle notebook + a 50 RMB JD gift card; deadline: before 12:00 noon on January 6, 2020.

 

Homework for December 31

Homework 6-1:

1. Print the output of every layer of the plain neural network and inspect it.
2. Plot the classification-accuracy metric with matplotlib.
3. Use classification accuracy to judge how well the model trains under different loss functions.
4. Plot the model's loss curves on the training set and the test set as training progresses (a small plotting sketch follows below).
5. Tune the regularization weight, observe how the curves from point 4 change, and explain why.
Homework 6-1 prize: 5 people will be drawn to win a PaddlePaddle notebook + data cable. Reply format:  Homework 6-1: XXX
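For point 4, a small illustrative matplotlib sketch; train_losses and test_losses are assumed to be lists of per-epoch losses that you record yourself during training (they are not provided by the course code).

import matplotlib.pyplot as plt

def plot_losses(train_losses, test_losses):
    epochs = range(1, len(train_losses) + 1)
    plt.plot(epochs, train_losses, label='train loss')
    plt.plot(epochs, test_losses, label='test loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()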

Homework 6-2:

Get the minimal version of Homework 3 from the AI Studio course《百度架构师手把手教深度学习》running correctly, analyse the problems or optimization opportunities that can arise during training, and optimize along the following lines:

(1) Samples: data-augmentation methods (see the sketch after this list)

(2) Hypothesis: improve the network model

(3) Loss: try different loss functions

(4) Optimization: try different optimizers and learning rates
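A minimal sketch of point (1) using only numpy: a random shift plus a little noise for 28x28 MNIST images. The function name and parameters are illustrative and not part of the course code.

import numpy as np

def augment_batch(imgs, max_shift=2, noise_std=0.05):
    # imgs: float array of shape [N, 1, 28, 28] with values in [0, 1]
    out = np.empty_like(imgs)
    for i, img in enumerate(imgs):
        dx, dy = np.random.randint(-max_shift, max_shift + 1, size=2)
        # shift the digit by a few pixels, then add mild Gaussian noise
        shifted = np.roll(np.roll(img, dy, axis=1), dx, axis=2)
        noisy = shifted + np.random.normal(0.0, noise_std, size=img.shape)
        out[i] = np.clip(noisy, 0.0, 1.0)
    return out.astype('float32')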

Goal: make the model's classification accuracy on the MNIST test set as high as possible.

Submit the code and model that achieve your best accuracy; we will pick the top 10 results for prizes.

Homework 6-2 prize: PaddlePaddle notebook + 50 RMB JD gift card

 

Homework for December 25

Homework for December 23

Homework 4-1: On AI Studio, run Homework 2 and build the house-price prediction model with deep learning.

Homework 4-1 prize: a PaddlePaddle notebook + the textbook《深度学习导论与应用实践》, awarded to the 2nd, 3rd, 23rd, 123rd, 223rd, 323rd (and so on) submissions.

Homework 4-2: answer the question below and post your answer as a reply:

Having written house-price prediction in different ways, with plain Python and with a deep-learning framework, in what respects are the hand-written Python model and the PaddlePaddle model similar or different? For example: program structure, ease of coding, prediction quality, training time, and so on.

Reply format:  Homework 4-2: XXX

Homework 4-2 prize: from the submissions posted before 12:00 noon on December 27 (this Friday), we will pick the five best and send a Baidu-branded data cable + the textbook《深度学习导论与应用实践》.


Homework for December 17

Answer the two questions below and post your answers as a reply, in the format: Homework 3-1 (1) XX (2) XX

Prize: among submissions posted before 12:00 noon on December 20, 2019, 5 people will be drawn at random for feedback; the prize is a notebook + data cable.

Homework for December 12

Winner: 12th place: 飞天雄者

Homework for December 10
Homework 1-1: On the AI Studio platform, run the house-price prediction case at https://aistudio.baidu.com/aistudio/education/group/info/888

Homework 1-1 prize: the first 3 people to finish, plus the 6th, 66th, 166th, 266th, 366th, 466th, 566th and 666th, each receive a PaddlePaddle gift pack: a PaddlePaddle cap, data cable and branded pen.

The Homework 1-1 winners are shown in the image:

Homework 1-2: answer the two questions below and post your answers as a reply.
① By analogy with the Newton's second law example, what other problems in your work or daily life could be solved with a supervised-learning framing? What are the hypothesis and the parameters, and what is the optimization objective?
② Why do AI engineers have good career prospects? How would you explain this from an economics (supply and demand) point of view?
Homework 1-2 prize: the five replies with the most likes receive the textbook《深度学习导论与应用实践》+ a PaddlePaddle notebook.

Top-5 most-liked winners: 1. 飞天雄者  2. God_s_apple  3. 177*******62  4. 学痞龙  5. 故乡237、qq526557820

Homework deadline: January 10, 2020. Only work finished before then is eligible for the final Mac grand-prize draw.

 

How to sign up:

1. Join QQ group 726887660; the class teacher posts study materials, Q&A and prize announcements there.

2. Enrol in the course and practice via this link: https://aistudio.baidu.com/aistudio/course/introduce/888

Friendly reminder: recordings of the lectures are uploaded to the AI Studio course《百度架构师手把手教深度学习》within 3 working days.

 

All comments (953), in chronological order
奔向北方的列车
#642 · replied 2019-12

Homework 2-2

class Network(object):

    def __init__(self, num_of_weights):
        # fix the random seed so every run gives the same result
        np.random.seed(0)
        # first-layer parameters: to keep the second layer's input at (1*13),
        # the first layer's weights must be (13*13): (1*13) * (13*13) = (1*13)
        self.w0 = np.random.randn(num_of_weights, num_of_weights)
        self.b0 = np.zeros(num_of_weights)

        self.w1 = np.random.randn(num_of_weights, 1)
        self.b1 = 0.

    def forward0(self, x):
        print(x.shape)
        x = np.dot(x, self.w0) + self.b0
        return x

    def forward1(self, x):
        x = np.dot(x, self.w1) + self.b1
        return x

    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost

    def gradient(self, x, y, z):
        # z = self.forward_1(x), or the output of the corresponding layer
        N = x.shape[0]
        gradient_w = 1. / N * np.sum((z - y) * x, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = 1. / N * np.sum(z - y)
        return gradient_w, gradient_b

    def update0(self, gradient_w, gradient_b, eta=0.01):
        self.w0 = self.w0 - eta * gradient_w
        self.b0 = self.b0 - eta * gradient_b

    def update1(self, gradient_w, gradient_b, eta=0.01):
        self.w1 = self.w1 - eta * gradient_w
        self.b1 = self.b1 - eta * gradient_b

    def train(self, x, y, iterations=100, eta=0.01):
        losses = []
        for i in range(iterations):
            out0 = self.forward0(x)
            out1 = self.forward1(out0)
            L = self.loss(out1, y)

            gradient_w1, gradient_b1 = self.gradient(out0, y, self.forward1(out0))
            self.update1(gradient_w1, gradient_b1, eta)

            gradient_w0, gradient_b0 = self.gradient(x, out0, self.forward1(x))
            self.update0(gradient_w0, gradient_b0, eta)

            losses.append(L)
            if (i + 1) % 10 == 0:
                print('iter {}, loss {}'.format(i, L))
        return losses


# load the data
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# create the network
net = Network(13)
num_iterations = 300
# start training
losses = net.train(x, y, iterations=num_iterations, eta=0.01)

# plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()

奔向北方的列车
#643 · replied 2019-12

Homework 4-2:

Writing the house-price prediction model in plain Python versus with the PaddlePaddle framework, the overall program flow is basically the same: data processing, model design, training configuration, the training loop, and saving the model.

In terms of coding effort, writing on top of PaddlePaddle is simpler and more direct, and the final prediction quality of the two models is about the same.

Because the framework supports multiple training environments (CPU, GPU, distributed, etc.), training is also much faster and takes less time.

C孤独K患者
#644 · replied 2019-12

Homework 5-1:

1) Randomly read N (N = 100) test images

def random_read_test_imgs(eval_set, set_nums=100):

    # get the sample indices and shuffle them randomly
    index_list = list(range(len(eval_set[0])))
    random.shuffle(index_list)
    # keep one hundred samples
    index_list = index_list[:set_nums]
    print(index_list[:5])
    return index_list
    
datafile = './work/mnist.json.gz'
print('loading mnist dataset from {} ......'.format(datafile))
data = json.load(gzip.open(datafile)) 
# the loaded data is already split into training, validation and test sets
_, _, eval_set = data
set_nums = 100
eval_index_set  = random_read_test_imgs(eval_set, set_nums)

2) Run the test and tally the correct predictions

# define the evaluation pass
res = 0 # counter for the number of correct predictions
with fluid.dygraph.guard():
    # load the trained network model
    model = MNIST("mnist")
    model_dict,_=fluid.load_dygraph("mnist")
    model.load_dict(model_dict)
    
    model.eval()
    for i in eval_index_set:
        test_img = np.reshape(eval_set[0][i], [1, IMG_ROWS, IMG_COLS]).astype('float32')
        label = np.reshape(eval_set[1][i], [1]).astype('float32')
        
        result = model(fluid.dygraph.to_variable(test_img))
        if result.numpy().astype('int32') == label:
            res += 1
    print("准确率为:" ,res/100)
C孤独K患者
#645 · replied 2019-12

Homework 5-2:

Common convolutional neural networks include:

LeNet-5, AlexNet, VGGNet, GoogLeNet, ResNet, DenseNet, DPN, and others.

万国风云
#646 · replied 2019-12

The program shuffles the MNIST test set, draws one hundred samples for prediction, and computes the accuracy; code below:

def load_testdata(size=100):
    testset = paddle.dataset.mnist.test()
    test_reader = paddle.batch(testset, batch_size=1)
    test_batches = [i for i in test_reader()]
    np.random.shuffle(test_batches)
    samples = test_batches[:100]
    features, labels = [i[0][0] for i in samples], [i[0][1] for i in samples]
    return features, labels

with fluid.dygraph.guard():
    model = MNIST("mnist")
    params_file_path = 'mnist'
    model_dict, _ = fluid.load_dygraph("mnist")
    model.load_dict(model_dict)

    model.eval()
    EPOCH_NUM = 100
    sum = 0
    for epoch in range(EPOCH_NUM):
        features, labels = load_testdata()
        x = np.array(features)
        y = np.array(labels).reshape(-1, 1).astype('uint8')

        result = model(fluid.dygraph.to_variable(x))
        sum += np.sum(result.numpy().astype('uint8') == y)
    accu = sum/100/100
    print("Repeated 100 times with 100 random samples each; average accuracy: {}%".format(accu*100))

After running, the program reports an accuracy of 21.58%.

万国风云
#647 · replied 2019-12

Homework 5-1:

(Note: the code in my previous reply got garbled and I forgot to add the homework number, so here it is again.)

The program shuffles the MNIST test set, draws one hundred samples for prediction, and computes the accuracy; code below:

def load_testdata(size=100):
    testset = paddle.dataset.mnist.test()
    test_reader = paddle.batch(testset,batch_size=1)
    test_batches = [i for i in test_reader()]
    np.random.shuffle(test_batches)
    samples = test_batches[:100]
    features , labels = [i[0][0] for i in samples], [i[0][1] for i in samples]
    return features , labels
    
with fluid.dygraph.guard():
    model = MNIST("mnist")
    params_file_path = 'mnist'
    model_dict, _ = fluid.load_dygraph("mnist")
    model.load_dict(model_dict)   
    
    model.eval()
    EPOCH_NUM = 100
    sum = 0
    for epoch in range(EPOCH_NUM):
        features,labels = load_testdata()
        x = np.array(features)
        y = np.array(labels).reshape(-1,1).astype('uint8')
        
        result = model(fluid.dygraph.to_variable(x))
        sum +=np.sum(result.numpy().astype('uint8') == y)
    accu = sum/100/100
    print("重复执行100次,每次随机抽取100个样本进行预测,平均准确率为:{}%".format(accu*100))


After running, the program reports an accuracy of 21.58%.

Mr. Zhou
#648 · replied 2019-12

Homework 5-2:

Common convolutional neural networks: LeNet, AlexNet, VGG, NiN, GoogLeNet, ResNet, DenseNet

壮滑芍李ZoS
#649 · replied 2019-12

Foggy

友友的路
#650 · replied 2020-01

I've been away on a business trip for the past two weeks, so my homework is a bit late.

# Homework 3-1: plot the tanh activation function
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# set the figure size
plt.figure(figsize=(8, 3))

# x is a 1-D array of reals from -10. to 10., sampled every 0.1
x = np.arange(-10, 10, 0.1)
# compute the tanh function
s = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

# compute the ReLU function
y = np.clip(x, a_min = 0., a_max = None)
f = plt.subplot(111)
# plot the curve
plt.plot(x, s, color='r')
# add a text label
plt.text(-5., 0.9, r'$y=tanh(x)$', fontsize=13)
# set the axis labels
currentAxis=plt.gca()
currentAxis.xaxis.set_label_text('x', fontsize=15)
currentAxis.yaxis.set_label_text('y', fontsize=15)
plt.show()


# Homework 3-2: count the elements greater than 0 in a randomly generated matrix
import numpy as np
p = np.random.randn(10,10)
np.sum(p>0)

scy
#651 · replied 2020-01

Homework 5-1:

The network structure used for this homework:
self.conv1 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act="relu")
self.pool1 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')
self.conv2 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act="relu")
self.pool2 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')
self.fc = FC(name_scope, size=10, act='softmax')

# parameter combinations

lrs = np.arange(0.01,0.001,(0.001-0.01)/3) # learning-rate options: [0.01, 0.007, 0.004]
Optimizers = [fluid.optimizer.SGDOptimizer,fluid.optimizer.MomentumOptimizer,
fluid.optimizer.AdagradOptimizer,fluid.optimizer.AdamOptimizer] # optimizer options

With a fixed random sample of 100 test-set images, I ran 12 rounds of accuracy tests over the combinations of the learning rates and optimizers above; results as follows:



Homework 5-2: common convolutional neural networks

LeNet, AlexNet, VGG, GoogLeNet, ResNet, DenseNet

Homework 5-3: handwritten-digit recognition, training loss and accuracy under different optimizers and learning rates:

Overall, on the training set the Adam optimizer is fairly stable, with decent loss and accuracy across different learning rates, while Adagrad performs poorly.

However, judging by test-set accuracy, some combinations look like overfitting: training accuracy reaches 99% while test accuracy is not as good.

 

万国风云
#652 · replied 2020-01

Homework 5-2: common convolutional neural networks: LeNet, AlexNet, VGGNet, ResNet

LeNet: each convolutional block has three parts: convolution, pooling and a nonlinear activation (sigmoid); convolutions extract spatial features; the downsampling layers use average pooling.

AlexNet: trained in parallel on two GPUs, which greatly reduced training time; used ReLU as the activation function, alleviating vanishing gradients in deeper networks; used data augmentation, dropout and LRN layers to prevent overfitting and improve generalization.

VGGNet: generalizes very well and transfers easily to other image-recognition tasks.

ResNet (residual network): the input can be connected directly to the output, so the network only has to learn the residual, which simplifies the learning target; this makes it possible to train extremely deep networks and avoids the accuracy saturation that comes from simply stacking more layers.

武林風灬
#653 · replied 2020-01

#!/usr/bin/env python
#coding=utf-8
import sys
import cv2
import signal
import io
import numpy as np
import os
import urllib
import time
import numpy as np
import argparse
import functools
import matplotlib.pyplot as plt
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
import paddle
import paddle.fluid as fluid
import reader
import re
import base64
from io import BytesIO
from mobilenet_ssd import build_mobilenet_ssd
from utility import add_arguments, print_arguments, check_cuda

rf = None
f = None
global i
parser = argparse.ArgumentParser(description=__doc__)
add_arg = functools.partial(add_arguments, argparser=parser)
# yapf: disable
add_arg('dataset', str, 'pascalvoc', "coco and pascalvoc.")
add_arg('use_gpu', bool, False, "Whether use GPU.")
add_arg('image_path', str, '', "The image used to inference and visualize.")
add_arg('model_dir', str, 'model/best_model', "The model path.")
add_arg('nms_threshold', float, 0.45, "NMS threshold.")
add_arg('confs_threshold', float, 0.5, "Confidence threshold to draw bbox.")
add_arg('resize_h', int, 300, "The resized image height.")
add_arg('resize_w', int, 300, "The resized image height.")
add_arg('mean_value_B', float, 127.5, "Mean value for B channel which will be subtracted.") #123.68
add_arg('mean_value_G', float, 127.5, "Mean value for G channel which will be subtracted.") #116.78
add_arg('mean_value_R', float, 127.5, "Mean value for R channel which will be subtracted.") #103.94
# yapf: enable

class receive:
    def __init__(self):
        read_path = "/tmp/server_in.pipe"
        write_path = "/tmp/server_out.pipe"
        #image=cv2.imread("/home/ros/0.jpg")
        try:
            os.mkfifo(read_path)
            os.mkfifo(write_path)
        except OSError as e:
            print("mkfifo error:", e)
        global rf
        global f
        rf = os.open(read_path, os.O_RDONLY)
        f = os.open(write_path, os.O_SYNC | os.O_CREAT | os.O_RDWR)

    def load(self):
        global rf
        global f
        s = os.read(rf, 1000000)
        #print(len(s))
        image = np.fromstring(s, np.uint8)
        os.write(f, bytes("4asdafa", 'UTF-8'))
        return image

def infer(args, data_args, image_find, model_dir):
    image_shape = [3, 300, 300]
    image_find = Image.open(image_find)
    #print(image_find.size())
    # if image_find.any():
    #image_find=Image.fromarray(image_find) data_args.resize_w
    '''
    if 'coco' in data_args.dataset:
        num_classes = 91
        # cocoapi
        from pycocotools.coco import COCO
        from pycocotools.cocoeval import COCOeval
        label_fpath = os.path.join(data_dir, label_file)
        coco = COCO(label_fpath)
        category_ids = coco.getCatIds()
        label_list = {
            item['id']: item['name']
            for item in coco.loadCats(category_ids)
        }
        label_list[0] = ['background']
    '''
    #if 'pascalvoc' in data_args.dataset:
    num_classes = 21
    label_list = data_args.label_list

    image = fluid.layers.data(name='image', shape=image_shape, dtype='float32')
    #print(image)
    locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes, image_shape)
    nmsed_out = fluid.layers.detection_output(locs, confs, box, box_var, 0.45)
    #print(nmsed_out)
    place = fluid.CUDAPlace(0) if args.use_gpu else fluid.CPUPlace()
    exe = fluid.Executor(place)
    # yapf: disable
    if model_dir:
        def if_exist(var):
            return os.path.exists(os.path.join(model_dir, var.name))
        fluid.io.load_vars(exe, model_dir, predicate=if_exist)
    infer_reader = reader.infer(data_args, image_find)

    feeder = fluid.DataFeeder(place=place, feed_list=[image])
    #print(feeder)
    data = infer_reader()
    #print(data)

    # switch network to test mode (i.e. batch norm test mode) return_numpy=False
    test_program = fluid.default_main_program().clone(for_test=True)
    nmsed_out_v, = exe.run(test_program, feed=feeder.feed([[data]]), fetch_list=[nmsed_out], return_numpy=False)
    #nmsed_out_v=[]
    nmsed_out_v = np.array(nmsed_out_v)
    #image_path=Image.fromarray(image_find)
    pic = draw_bounding_box_on_image(image_find, nmsed_out_v, args.confs_threshold, label_list)
    image_shape = []
    #draw_bounding_box_on_image(image_path, nmsed_out_v, args.confs_threshold,label_list)
    return pic


def draw_bounding_box_on_image(image_find, nms_out, confs_threshold, label_list):
    #cv2.namedWindow("image", cv2.WINDOW_AUTOSIZE)
    #image =Image.fromarray(image_find)
    image = image_find
    draw = ImageDraw.Draw(image)
    im_width, im_height = image.size

    for dt in nms_out:
        if dt[1] < confs_threshold:
            continue
        category_id = dt[0]
        bbox = dt[2:]
        xmin, ymin, xmax, ymax = clip_bbox(dt[2:])
        (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                      ymin * im_height, ymax * im_height)
        draw.line([(left, top), (left, bottom), (right, bottom), (right, top), (left, top)], width=4, fill='red')
        draw.text((left, top), label_list[int(category_id)], (255, 255, 0))
    img = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
    #plt.imshow(image)
    #plt.show()
    #cv2.imshow("image",img)
    #cv2.waitKey(10)
    return img

def clip_bbox(bbox):
    xmin = max(min(bbox[0], 1.), 0.)
    ymin = max(min(bbox[1], 1.), 0.)
    xmax = max(min(bbox[2], 1.), 0.)
    ymax = max(min(bbox[3], 1.), 0.)
    return xmin, ymin, xmax, ymax

def quit(signum, frame):
    print("exit")
    cv2.destroyAllWindows()
    os.close(rf)
    os.close(f)
    sys.exit()

class receive:
    def __init__(self):
        read_path = "/tmp/server_in.pipe"
        write_path = "/tmp/server_out.pipe"
        #image=cv2.imread("/home/ros/0.jpg")
        try:
            os.mkfifo(read_path)
            os.mkfifo(write_path)
        except OSError as e:
            print("mkfifo error:", e)
        global rf
        global f
        rf = os.open(read_path, os.O_RDONLY)
        f = os.open(write_path, os.O_SYNC | os.O_CREAT | os.O_RDWR)

    def load(self):
        global rf
        global f
        s = os.read(rf, 1000000)
        image = np.fromstring(s, np.uint8)
        if len(s) < 1:
            #print(len(s))
            image = []
        #image=Image.open(io.BytesIO(s))
        os.write(f, bytes("4asdafa", 'UTF-8'))
        return image


def main(args):
    cv2.namedWindow("image", cv2.WINDOW_AUTOSIZE)
    signal.signal(signal.SIGINT, quit)
    signal.signal(signal.SIGTERM, quit)
    #bool signalp
    #bb=receive()
    global i

    #while True:
    for i in range(2):
        if i == 0:
            pic1 = '/home/ros/ssd/data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/000022.jpg'
        if i == 1:
            pic1 = '/home/ros/ssd/data/pascalvoc/VOCdevkit/VOC2007/JPEGImages/000025.jpg'
        args = parser.parse_args()
        print_arguments(args)

        check_cuda(args.use_gpu)

        data_dir = 'data/pascalvoc'
        label_file = 'label_list'

        if not os.path.exists(args.model_dir):
            raise ValueError("The model path [%s] does not exist." % (args.model_dir))

        data_args = reader.Settings(
            dataset=args.dataset,
            data_dir=data_dir,
            label_file=label_file,
            resize_h=args.resize_h,
            resize_w=args.resize_w,
            mean_value=[args.mean_value_B, args.mean_value_G, args.mean_value_R],
            apply_distort=False,
            apply_expand=False,
            ap_version='')
        pic = infer(
            args,
            data_args=data_args,
            image_find=pic1,
            model_dir=args.model_dir)
        cv2.imshow("image", pic)
        cv2.waitKey(10)


if __name__ == '__main__':
    #cv2.namedWindow("image", cv2.WINDOW_AUTOSIZE)
    main(sys.argv)

It only runs once; on the second pass it errors out at the highlighted line (the exe.run call in infer).

The error is as follows:

/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/executor.py:779: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "b.py", line 260, in
main(sys.argv)
File "b.py", line 247, in main
model_dir=args.model_dir)
File "b.py", line 121, in infer
nmsed_out_v, = exe.run(test_program,feed=feeder.feed([[data]]),fetch_list=[nmsed_out],return_numpy=False)
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 780, in run
six.reraise(*sys.exc_info())
File "/home/ros/.local/lib/python3.6/site-packages/six.py", line 696, in reraise
raise value
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 775, in run
use_program_cache=use_program_cache)
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 822, in _run_impl
use_program_cache=use_program_cache)
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/executor.py", line 899, in _run_program
fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0 std::string paddle::platform::GetTraceBackString(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::framework::Tensor::type() const
3 paddle::operators::ConvOp::GetExpectedKernelType(paddle::framework::ExecutionContext const&) const
4 paddle::framework::OperatorWithKernel::ChooseKernel(paddle::framework::RuntimeContext const&, paddle::framework::Scope const&, paddle::platform::Place const&) const
5 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
9 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector > const&, bool)

------------------------------------------
Python Call Stacks (More useful to users):
------------------------------------------
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2488, in append_op
attrs=kwargs.get("attrs", None))
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/ros/.local/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 2803, in conv2d
"data_format": data_format,
File "/home/ros/ssd/mobilenet_ssd.py", line 92, in conv_bn
bias_attr=False)
File "/home/ros/ssd/mobilenet_ssd.py", line 27, in ssd_net
tmp = self.conv_bn(self.img, 3, int(32 * scale), 2, 1)
File "/home/ros/ssd/mobilenet_ssd.py", line 137, in build_mobilenet_ssd
return ssd_model.ssd_net()
File "b.py", line 102, in infer
locs, confs, box, box_var = build_mobilenet_ssd(image, num_classes,image_shape)
File "b.py", line 247, in main
model_dir=args.model_dir)
File "b.py", line 260, in
main(sys.argv)

----------------------
Error Message Summary:
----------------------
Error: Tensor not initialized yet when Tensor::type() is called.
[Hint: holder_ should not be null.] at (/paddle/paddle/fluid/framework/tensor.h:139)
[operator < conv2d > error]

 

Could someone more experienced take a look?

qingge
#654 · replied 2020-01

Homework 5-1:

Homework 5-2:

Common convolutional neural networks: LeNet, AlexNet, VGG, NiN, GoogLeNet, ResNet, DenseNet

Homework 5-3:

Adam works best; the optimal learning rate is 0.1.

BADA星
#656 · replied 2020-01

Homework 3-1

(1)

import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as patches

plt.figure(figsize=(8,5))

x = np.arange(-10, 10, 0.1)
y = (np.exp(x) - np.exp(-x))/(np.exp(x) + np.exp(-x))

plt.plot(x, y, color = 'r')
plt.text(-5., 0.9, r'tanh', fontsize=13)
currentAxis=plt.gca()
currentAxis.xaxis.set_label_text('x', fontsize=15)
currentAxis.yaxis.set_label_text('y', fontsize=15)
plt.show()

(2)

import numpy as np

p = np.random.randn(10,10)
q = (p > 0)
print(q)
np.sum(q)

 

 

凌烟阁主人
#657 · replied 2020-01
(quotes the full code and error log from #653)

Why didn't you just post a screenshot? Nobody can read this format. Point out where the error happens; it's much easier to follow the code flow that way.

sljackson
#658 · replied 2020-01

5-1:

# # load PaddlePaddle (fluid) and related libraries
# import paddle
# import paddle.fluid as fluid
# from paddle.fluid.dygraph.nn import FC,Conv2D,Pool2D
# import numpy as np
# import os,json,gzip,random
# from PIL import Image


# load the required libraries
import os
import random
import paddle
import paddle.fluid as fluid
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, FC
import numpy as np
from PIL import Image

import gzip
import json

# define the dataset reader
def load_data(mode='train'):

    # read the data file
    datafile = './work/mnist.json.gz'
    print('loading mnist dataset from {} ......'.format(datafile))
    data = json.load(gzip.open(datafile))
    # split the data into training, validation and test sets
    train_set, val_set, eval_set = data

    # dataset parameters: image height IMG_ROWS, image width IMG_COLS
    IMG_ROWS = 28
    IMG_COLS = 28
    # choose the training, validation or test split according to the mode argument
    if mode == 'train':
        imgs = train_set[0]
        labels = train_set[1]
    elif mode == 'valid':
        imgs = val_set[0]
        labels = val_set[1]
    elif mode == 'eval':
        imgs = eval_set[0]
        labels = eval_set[1]
    # total number of images
    imgs_length = len(imgs)
    # check that the number of images matches the number of labels
    assert len(imgs) == len(labels), \
          "length of train_imgs({}) should be the same as train_labels({})".format(
                  len(imgs), len(labels))

    index_list = list(range(imgs_length))

    # batch size used when reading the data
    BATCHSIZE = 100

    # define the data generator
    def data_generator():
        # in training mode, shuffle the training data
        if mode == 'train':
            random.shuffle(index_list)
        imgs_list = []
        labels_list = []
        # read the samples by index
        for i in index_list:
            # read an image and its label, then reshape and cast them
            img = np.reshape(imgs[i], [1, IMG_ROWS, IMG_COLS]).astype('float32')
            label = np.reshape(labels[i], [1]).astype('int64')
            imgs_list.append(img) 
            labels_list.append(label)
            # once the buffer holds a full batch, yield it
            if len(imgs_list) == BATCHSIZE:
                yield np.array(imgs_list), np.array(labels_list)
                # clear the data buffers
                imgs_list = []
                labels_list = []

        # if fewer than BATCHSIZE samples remain,
        # yield them together as one smaller mini-batch
        if len(imgs_list) > 0:
            yield np.array(imgs_list), np.array(labels_list)

    return data_generator


# define the model structure
class MNIST(fluid.dygraph.Layer):
     def __init__(self, name_scope):
         super(MNIST, self).__init__(name_scope)
         name_scope = self.full_name()
         # conv layer: 20 output channels, kernel size 5, stride 1, padding 2, relu activation
         self.conv1 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
         # pooling layer: kernel 2, max pooling
         self.pool1 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')
         # conv layer: 20 output channels, kernel size 5, stride 1, padding 2, relu activation
         self.conv2 = Conv2D(name_scope, num_filters=20, filter_size=5, stride=1, padding=2, act='relu')
         # pooling layer: kernel 2, max pooling
         self.pool2 = Pool2D(name_scope, pool_size=2, pool_stride=2, pool_type='max')
         # fully connected layer: 10 output nodes, softmax activation
         self.fc = FC(name_scope, size=10, act='softmax')
        #  self.fc = FC(name_scope, size=10, act=None)
         
    # forward computation of the network
     def forward(self, inputs):
         x = self.conv1(inputs)
         x = self.pool1(x)
         x = self.conv2(x)
         x = self.pool2(x)
         x = self.fc(x)
         return x

# only the optimizer setup differs between runs
with fluid.dygraph.guard(fluid.CUDAPlace(0)):
    model = MNIST("mnist")
    model.train()
    # call the data-loading function
    train_loader = load_data('train')
    
    # four optimizer options; try each one and compare the results
    # optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01)
    optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
    # optimizer = fluid.optimizer.AdagradOptimizer(learning_rate=0.01)
    # optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.01)
    
    EPOCH_NUM = 5
    for epoch_id in range(EPOCH_NUM):
        for batch_id, data in enumerate(train_loader()):
            # prepare the data (much more concise now)
            image_data, label_data = data
            image = fluid.dygraph.to_variable(image_data)
            label = fluid.dygraph.to_variable(label_data)
            
            # forward pass
            predict = model(image)
            
            # compute the loss, averaged over the batch
            loss = fluid.layers.cross_entropy(predict, label)
            avg_loss = fluid.layers.mean(loss)
            
            # print the current loss every 200 batches
            if batch_id % 200 == 0:
                print("epoch: {}, batch: {}, loss is: {}".format(epoch_id, batch_id, avg_loss.numpy()))
            
            # backward pass and parameter update
            avg_loss.backward()
            optimizer.minimize(avg_loss)
            model.clear_gradients()

    # save the model parameters
    fluid.save_dygraph(model.state_dict(), 'mnist')

# read a local sample image and convert it to the model's input format
def load_image(img_path):
    # read the image from img_path and convert it to grayscale
    im = Image.open(img_path).convert('L')
    # print(np.array(im))
    im = im.resize((28, 28), Image.ANTIALIAS)
    im = np.array(im).reshape(1, -1).astype(np.float32)
    # normalize the image to match the dataset's value range
    im = 1 - im / 127.5
    return im

# define the prediction pass
with fluid.dygraph.guard():
    model = MNIST("mnist")
    params_file_path = 'mnist'
    img_path = './work/example_6.jpg'
    # load the model parameters
    model_dict, _ = fluid.load_dygraph("mnist")
    model.load_dict(model_dict)
    
    model.eval()
    tensor_img = load_image(img_path)
    img_list = []
    img = np.reshape(tensor_img, [1, 28, 28]).astype('float32')
    img_list.append(img)
    # print(np.array(tensor_img))
    result = model(fluid.dygraph.to_variable(np.array(img_list)))
    
    arr = result[0].numpy().tolist()
    pre = arr.index(np.max(arr))
    # the class with the largest output is the predicted digit
    # print("the predicted digit is", result.numpy().astype('float32'))
    print("the predicted digit is", pre)


5-2:
Common convolutional neural networks: LeNet, AlexNet, VGG, NiN, GoogLeNet, ResNet, DenseNet

5-3:

optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.01)
10000 predictions in total, 270 errors, accuracy: 0.973

optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
10000 predictions in total, 127 errors, accuracy: 0.9873


optimizer = fluid.optimizer.AdagradOptimizer(learning_rate=0.01)
10000 predictions in total, 320 errors, accuracy: 0.968

optimizer = fluid.optimizer.AdamOptimizer(learning_rate=0.01)
10000 predictions in total, 155 errors, accuracy: 0.9845

呵呵xyz1
#659 · replied 2020-01

Homework 4-2:

Writing the network in plain Python demands a deeper grasp of the network internals and of Python itself. Building a network with Paddle does not require understanding the detailed computations or writing complex code: for many APIs it is enough to know what they do in order to build and train your own network. When the model is complex, Paddle has a clear advantage, and it also computes faster.

风车车
#660 · replied 2020-01

Homework 1-2:

① House-price prediction can be solved with a supervised-learning framing. Hypothesis: the relationship between the price and features such as floor area, location, floor number and building age. Optimization objective: minimize the error between the predicted and the actual price.
② AI engineers do have good prospects. From an economics (supply and demand) point of view, market demand for AI engineers will remain huge for decades to come; demand exceeds supply, the field is booming, and there is still a large talent gap.

 

 

漂流寻梦plxm
#661 · replied 2020-01

Homework 1-2 (late submission):
1. In daily life, predicting a website article's page views from its topic, keywords and so on: the hypothesis is a linear model; the parameters are the weights for the topic, the individual keywords and the author; the optimization objective is to bring predictions close to the actual view counts, for example by minimizing the squared error. Likewise, a site's daily number of posts versus the date, historical events, weather and holidays can be modelled linearly, with the weights of those factors as parameters and the squared error against the actual post count as the objective.
2. With the growth of information today, humanity produces an astronomical and ever faster growing amount of data every day. Besides the storage problem, the more important question is how to quickly extract accurate, effective and relevant information from this flood; AI grew out of big data and has a natural advantage here.
