PaddlePaddle 7-Day Camp: My CV Learning Journey

       Recently I completed a seven-day camp on Baidu AI Studio, the open course 《深度学习7日入门-CV疫情特辑》 (a 7-day introduction to deep learning, CV epidemic special edition). I learned a great deal from it: I got to know another major deep learning framework, PaddlePaddle, and gained an initial grasp of how to use it to tackle real projects and problems. Below is a brief review of what the seven days covered.

Day01: COVID-19 Epidemic Visualization
       In the first day's course, the instructor first introduced what image recognition is and the problems it tackles, then covered traditional image recognition methods, and finally reviewed the history of artificial intelligence. The hands-on assignment, "COVID-19 epidemic visualization", asked us to crawl the statistics published by 丁香园 (DXY) on March 31 and, from the cumulative confirmed counts, draw a pie chart of the epidemic distribution with pyecharts. Through this assignment I learned to crawl web content with Python and to use pyecharts, a chart-based data visualization library.

import json
import datetime
from pyecharts import options as opts
from pyecharts.charts import Pie

# Read the raw data file
today = datetime.date.today().strftime('%Y%m%d')  # e.g. '20200331'
datafile = 'data/' + today + '.json'
with open(datafile, 'r', encoding='UTF-8') as file:
    json_array = json.loads(file.read())

# Extract the nationwide confirmed counts: the 'confirmedCount' field
china_data = []
for province in json_array:
    china_data.append((province['provinceShortName'], province['confirmedCount']))
china_data = sorted(china_data, key=lambda x: x[1], reverse=True)  # reverse=True sorts in descending order; False would sort ascending

print(china_data)

# Pie chart of nationwide cumulative confirmed cases
labels = [data[0] for data in china_data]
counts = [data[1] for data in china_data]

p = Pie(init_opts=opts.InitOpts(width="900px", height="500px"))
p.add("累计确诊", [list(z) for z in zip(labels, counts)], center=["50%", "65%"], radius="50%",)

# Series options: element style, text style, label style, line style, etc.
p.set_series_opts(label_opts=opts.LabelOpts(formatter="{b}: {c}"))
# Global options: title, animation, axes, legend, etc.
p.set_global_opts(title_opts=opts.TitleOpts(title="全国累计确诊数据饼图", subtitle='数据来源:丁香园'),
legend_opts=opts.LegendOpts(pos_right="5%", orient="vertical"))
# render() writes a local HTML file, render.html in the current directory by default; a path can also be passed, e.g. p.render("mycharts.html")
p.render(path='/home/aistudio/data/全国累计确诊数据饼图.html')



Day02: Gesture Recognition

       In the second day's course, the instructor briefly introduced the history of deep learning and the basics of digital image processing, and assigned a hands-on task: implement gesture recognition with a fully connected neural network (DNN), build the network structure, and then tune parameters and optimize the network to raise the recognition accuracy on the test set. From this exercise I learned how to write basic network code with PaddlePaddle and how a complete deep learning project is assembled, from data processing to network definition to training and testing, and I got a real sense of the impressive results deep learning delivers on practical problems.

# Define the DNN network
import paddle.fluid as fluid
from paddle.fluid.dygraph import Conv2D, Pool2D, Linear

class MyDNN(fluid.dygraph.Layer):
    def __init__(self, num_classes=10):
        super(MyDNN, self).__init__()
        # Convolution + pooling blocks: each convolution uses a ReLU activation
        # and is followed by a 2x2 max pooling
        self.conv1 = Conv2D(num_channels=3, num_filters=32, filter_size=5, act='relu')
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv2 = Conv2D(num_channels=32, num_filters=64, filter_size=5, act='relu')
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv3 = Conv2D(num_channels=64, num_filters=64, filter_size=5, act='relu')
        self.pool3 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv4 = Conv2D(num_channels=64, num_filters=120, filter_size=4, act='relu')
        self.pool4 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv5 = Conv2D(num_channels=120, num_filters=120, filter_size=3, act='relu')
        # Fully connected layers: the first outputs 64 units, the second outputs
        # one unit per class label
        self.fc1 = Linear(input_dim=120, output_dim=64, act='relu')
        self.fc2 = Linear(input_dim=64, output_dim=num_classes)
    def forward(self, x):
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.conv3(x)
        x = self.pool3(x)
        x = self.conv4(x)
        x = self.pool4(x)
        x = self.conv5(x)
        x = fluid.layers.reshape(x, [x.shape[0], -1])
        x = self.fc1(x)
        x = self.fc2(x)
        return x
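
The assignment also required writing the training loop. Below is a minimal dygraph training sketch under stated assumptions, not the course's exact code: train_reader is a hypothetical reader yielding numpy batches, and the 100x100 RGB input size is inferred from fc1's input_dim=120 (100→96→48→44→22→18→9→6→3→1).

import numpy as np
import paddle.fluid as fluid

# Minimal training-loop sketch (PaddlePaddle 1.x dygraph API).
# Assumptions: train_reader() is a hypothetical reader yielding (images, labels)
# numpy batches, images of shape [N, 3, 100, 100] float32, labels [N, 1] int64.
with fluid.dygraph.guard():
    model = MyDNN()
    model.train()
    opt = fluid.optimizer.AdamOptimizer(learning_rate=0.001,
                                        parameter_list=model.parameters())
    for epoch in range(10):
        for batch_id, (images, labels) in enumerate(train_reader()):
            img = fluid.dygraph.to_variable(images)
            label = fluid.dygraph.to_variable(labels)
            logits = model(img)                                   # [N, 10] raw class scores
            loss = fluid.layers.softmax_with_cross_entropy(logits, label)
            avg_loss = fluid.layers.mean(loss)
            avg_loss.backward()
            opt.minimize(avg_loss)
            model.clear_gradients()
        print('epoch', epoch, 'loss', float(avg_loss.numpy()))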


Day03: License Plate Recognition
       In the third day's course, the instructor first analyzed the problems of fully connected networks (DNN): the structure is inflexible and the parameter count is huge. This motivated convolutional neural networks (CNN), whose structure has three key properties: local connectivity, weight sharing, and downsampling. Together these reduce the number of parameters and speed up training. The instructor then walked step by step through how a convolution kernel is applied across an image, and finally introduced the classic LeNet-5 model. The hands-on assignment was to use this model for license plate recognition, tuning parameters and the network structure to improve recognition accuracy on the test set. From this lesson I learned how to build a simple CNN with PaddlePaddle, including how to use the convolution and fully connected layer APIs and how to write the forward pass.

# Define the network
import paddle.fluid as fluid
from paddle.fluid.dygraph import Conv2D, Pool2D, Linear

class MyLeNet(fluid.dygraph.Layer):
    def __init__(self):
        super(MyLeNet,self).__init__()
        self.conv1 = Conv2D(num_channels=1, num_filters=32, filter_size=5, act='relu')
        self.pool1 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv2 = Conv2D(num_channels=32, num_filters=128, filter_size=3, act='relu')
        self.pool2 = Pool2D(pool_size=2, pool_stride=2, pool_type='max')
        self.conv3 = Conv2D(num_channels=128, num_filters=512, filter_size=3, act='relu')
        self.fc1 = Linear(input_dim=512, output_dim=120, act='relu')
        self.fc2 = Linear(input_dim=120, output_dim=65)
    def forward(self, x):
        x = self.conv1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.pool2(x)
        x = self.conv3(x)
        x = fluid.layers.reshape(x, [x.shape[0], -1])
        x = self.fc1(x)
        x = self.fc2(x)
        return x
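
As a quick check on how the kernel sizes and pooling interact, the sketch below pushes a dummy input through MyLeNet. The 20x20 single-channel input size is my assumption, inferred from fc1's input_dim=512: conv 5x5 takes 20→16, pooling→8, conv 3x3→6, pooling→3, conv 3x3→1, giving 512 channels × 1 × 1.

import numpy as np
import paddle.fluid as fluid

# Shape sanity check for MyLeNet (assumption: 20x20 single-channel character crops).
with fluid.dygraph.guard():
    model = MyLeNet()
    dummy = fluid.dygraph.to_variable(np.zeros((1, 1, 20, 20), dtype='float32'))
    out = model(dummy)
    print(out.shape)   # expected: [1, 65], one score per plate-character class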


Day04: Mask Classification
       In the fourth day's course, the instructor covered the classic convolutional network architectures, including AlexNet, VGG, GoogLeNet, Inception, and ResNet, and reviewed how they evolved and performed on the major datasets and competitions. The hands-on assignment was to implement mask classification with a VGG network and tune parameters to improve the classification accuracy on the test set. Through this assignment I learned how to implement a classic convolutional architecture in PaddlePaddle.

import paddle.fluid as fluid

class ConvPool(fluid.dygraph.Layer):
    '''Convolution + pooling block'''
    def __init__(self, num_channels, num_filters, filter_size, pool_size, pool_stride, groups, pool_padding=0,
                 pool_type='max', conv_stride=1, conv_padding=1, act=None):
        super(ConvPool, self).__init__()  
        self._conv2d_list = []
        for i in range(groups):
            conv2d = self.add_sublayer(   # register the conv layer as a named sublayer and return it
                'bb_%d' % i,
                fluid.dygraph.Conv2D(
                num_channels=num_channels, # input channels
                num_filters=num_filters,   # number of filters
                filter_size=filter_size,   # kernel size
                stride=conv_stride,        # stride
                padding=conv_padding,      # padding, 0 by default
                act=act)
            )
            num_channels = num_filters     # later convs in the group take the previous output as input
            self._conv2d_list.append(conv2d)

        self._pool2d = fluid.dygraph.Pool2D(
            pool_size=pool_size,           # pooling kernel size
            pool_type=pool_type,           # pooling type, max by default
            pool_stride=pool_stride,       # pooling stride
            pool_padding=pool_padding      # pooling padding
        )
 
    def forward(self, inputs):
        x = inputs
        for conv in self._conv2d_list:
            x = conv(x)
        x = self._pool2d(x)
        return x
 
class VGGNet(fluid.dygraph.Layer):
    '''VGG network'''
    def __init__(self):
        super(VGGNet, self).__init__()
        self.convpool01 = ConvPool(3, 64, 3, 2, 2, 2, act="relu")    # 3: in channels, 64: filters, 3: kernel size, 2: pool size, 2: pool stride, 2: convs in the group
        self.convpool02 = ConvPool(64, 128, 3, 2, 2, 2, act="relu")
        self.convpool03 = ConvPool(128, 256, 3, 2, 2, 3, act="relu")
        self.convpool04 = ConvPool(256, 512, 3, 2, 2, 3, act="relu")
        self.convpool05 = ConvPool(512, 512, 3, 2, 2, 3, act="relu")
        self.pool_5_shape = 512 * 7 * 7
        self.fc01 = fluid.dygraph.Linear(self.pool_5_shape, 4096, act = "relu")
        self.drop1_ratio = 0.5
        self.fc02 = fluid.dygraph.Linear(4096, 4096, act = "relu")
        self.drop2_ratio = 0.5
        self.fc03 = fluid.dygraph.Linear(4096, 2, act = "softmax")
 
    def forward(self, inputs, label=None):
        out = self.convpool01(inputs)
        out = self.convpool02(out)
        out = self.convpool03(out)
        out = self.convpool04(out)      
        out = self.convpool05(out) 
 
        out = fluid.layers.reshape(out, shape=[-1, 512 * 7 * 7])
        out = fluid.layers.dropout(self.fc01(out), self.drop1_ratio)
        out = fluid.layers.dropout(self.fc02(out), self.drop2_ratio)
        out = self.fc03(out)
 
        if label is not None:
            acc = fluid.layers.accuracy(input=out, label=label)
            return out, acc
        else:
            return out
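
A quick sanity check of the VGGNet definition. The 224x224 RGB input size is an assumption implied by the 512*7*7 fully connected size (five 2x halvings of 224 give 7); the dummy input is random, just to verify shapes.

import numpy as np
import paddle.fluid as fluid

# Sanity-check sketch: push a dummy 224x224 RGB image through VGGNet.
with fluid.dygraph.guard():
    model = VGGNet()
    dummy = fluid.dygraph.to_variable(np.random.rand(1, 3, 224, 224).astype('float32'))
    probs = model(dummy)
    print(probs.shape)   # expected: [1, 2], mask / no-mask probabilities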



Day05: PaddleHub Hands-On
       In the fifth day's course, the instructor introduced PaddleHub and how to use it (a minimal usage sketch follows), and we then completed a crowd density detection competition: given an image, train a model that counts the total number of people in it. The competition used the CSRNet network structure, shown in the code block after the sketch.
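
A minimal PaddleHub usage sketch: load a pretrained module by name and call its prediction method. The module name "pyramidbox_lite_mobile_mask" and its face_detection() interface are assumptions based on the PaddleHub 1.x module library, not code from the course, and "test.jpg" is a placeholder path.

import paddlehub as hub

# Load a pretrained module by name and run prediction on local images.
# (Module name and method are assumptions; see the note above.)
module = hub.Module(name="pyramidbox_lite_mobile_mask")
results = module.face_detection(data={"image": ["test.jpg"]})
print(results)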

import paddle.fluid as fluid

class CNN(fluid.dygraph.Layer):
    '''CSRNet-style density-map network'''
    def __init__(self):
        super(CNN, self).__init__()
 
        self.conv01_1 = fluid.dygraph.Conv2D(num_channels=3,num_filters=64,filter_size=3,padding=1,act="relu")
        self.conv01_2 = fluid.dygraph.Conv2D(num_channels=64,num_filters=64,filter_size=3,padding=1,act="relu")
        self.pool01=fluid.dygraph.Pool2D(pool_size=2,pool_type='max',pool_stride=2)
 
        self.conv02_1 = fluid.dygraph.Conv2D(num_channels=64,num_filters=128,filter_size=3,padding=1,act="relu")
        self.conv02_2 = fluid.dygraph.Conv2D(num_channels=128,num_filters=128,filter_size=3,padding=1,act="relu")
        self.pool02=fluid.dygraph.Pool2D(pool_size=2,pool_type='max',pool_stride=2)
 
        self.conv03_1 = fluid.dygraph.Conv2D(num_channels=128,num_filters=256,filter_size=3,padding=1,act="relu")
        self.conv03_2 = fluid.dygraph.Conv2D(num_channels=256,num_filters=256,filter_size=3,padding=1,act="relu")
        self.conv03_3 = fluid.dygraph.Conv2D(num_channels=256,num_filters=256,filter_size=3,padding=1,act="relu")
        self.pool03=fluid.dygraph.Pool2D(pool_size=2,pool_type='max',pool_stride=2)
 
        self.conv04_1 = fluid.dygraph.Conv2D(num_channels=256,num_filters=512,filter_size=3,padding=1,act="relu")
        self.conv04_2 = fluid.dygraph.Conv2D(num_channels=512,num_filters=512,filter_size=3,padding=1,act="relu")
        self.conv04_3 = fluid.dygraph.Conv2D(num_channels=512,num_filters=512,filter_size=3,padding=1,act="relu")
 
        self.conv05_1 = fluid.dygraph.Conv2D(num_channels=512,num_filters=512,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_2 = fluid.dygraph.Conv2D(num_channels=512,num_filters=512,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_3 = fluid.dygraph.Conv2D(num_channels=512,num_filters=512,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_4 = fluid.dygraph.Conv2D(num_channels=512,num_filters=256,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_5 = fluid.dygraph.Conv2D(num_channels=256,num_filters=128,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_6 = fluid.dygraph.Conv2D(num_channels=128,num_filters=64,filter_size=3,padding=2,dilation=2,act='relu')
        self.conv05_7 = fluid.dygraph.Conv2D(num_channels=64,num_filters=1,filter_size=1,padding=0,act=None)
 
    def forward(self, inputs, label=None):
        """前向计算"""
        out = self.pool01(self.conv01_2(self.conv01_1(inputs)))
        out = self.pool02(self.conv02_2(self.conv02_1(out)))
        out = self.pool03(self.conv03_3(self.conv03_2(self.conv03_1(out))))
        out = self.conv04_3(self.conv04_2(self.conv04_1(out)))
 
        out = self.conv05_1(out)
        out = self.conv05_2(out)
        out = self.conv05_3(out)
        out = self.conv05_4(out)
        out = self.conv05_5(out)
        out = self.conv05_6(out)
        out = self.conv05_7(out)
        return out
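
The network predicts a density map rather than a count, so the headcount is obtained by summing the map. A minimal inference sketch under stated assumptions: the checkpoint name 'crowd_model' is hypothetical, and the input is assumed to be already resized and normalized to [1, 3, 640, 480] (here replaced by random data so the sketch runs).

import numpy as np
import paddle.fluid as fluid

# Inference sketch: estimated head count = sum of the predicted density map.
with fluid.dygraph.guard():
    model = CNN()
    # Load trained weights (hypothetical checkpoint name saved with fluid.save_dygraph):
    # params, _ = fluid.load_dygraph('crowd_model')
    # model.set_dict(params)
    model.eval()
    img = fluid.dygraph.to_variable(np.random.rand(1, 3, 640, 480).astype('float32'))
    density_map = model(img)                                  # [1, 1, 80, 60] after three 2x poolings
    count = float(fluid.layers.reduce_sum(density_map).numpy())
    print('estimated head count:', round(count))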

Day06: Model Compression with PaddleSlim
       In the sixth day's course, the instructor covered PaddleSlim and model compression, walked through a hands-on case, and finally explained how to use Paddle-Lite for fast deployment. The assignment was to quantize an image classification model: taking MobileNetV1 as the example, use the quantization-aware training API through the following steps: (1) import dependencies; (2) build the model; (3) define the input data; (4) train the model; (5) quantize the model; (6) train and test the quantized model. A condensed sketch of this workflow follows.
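
The sketch below follows the PaddleSlim quantization-aware training quick start with the static-graph (fluid) API; the slim.models.image_classification helper, its arguments, and the [1, 28, 28] input shape come from that quick start and are assumptions here, not the course's exact code.

import paddle.fluid as fluid
import paddleslim as slim

# (1)-(2) import dependencies and build a MobileNet image-classification model
exe, train_program, val_program, inputs, outputs = slim.models.image_classification(
    "MobileNet", [1, 28, 28], 10, use_gpu=False)

# (3)-(4) define the input data and train train_program as usual with exe.run(...)

# (5) insert fake-quantization ops for quantization-aware training
quant_train_program = slim.quant.quant_aware(train_program, exe.place, for_test=False)
quant_val_program = slim.quant.quant_aware(val_program, exe.place, for_test=True)

# (6) keep training/testing on the quantized programs, then convert the
#     evaluation program into a frozen inference program with quantized weights
int8_program = slim.quant.convert(quant_val_program, exe.place)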

Day07: Closing Ceremony
       The last day finally arrived, and with it the closing ceremony of the short seven-day camp! At the ceremony, invited experts first shared their hands-on and competition experience, then the class teacher announced the course leaderboard, and finally previewed the next camp and the upcoming camp activities that Baidu PaddlePaddle (飞桨) plans to run.

 

 
