Out of memory error on GPU

Paddle version: develop

BATCH_SIZE = 1

def reader_flyingchairs(dataset):
    # Sample-level reader: yields one (frame1, frame2, flow) triple per call,
    # each transposed from CHW to HWC layout.
    n = len(dataset)

    def reader():
        for i in range(n):
            a, b = dataset[i]
            yield (a[0][:, 0, :, :].transpose(1, 2, 0),
                   a[0][:, 1, :, :].transpose(1, 2, 0),
                   b[0].transpose(1, 2, 0))
    return reader

reader = reader_flyingchairs(dataset)
train_batch_reader = paddle.batch(
    paddle.reader.shuffle(reader, buf_size=BATCH_SIZE * 2),
    BATCH_SIZE, drop_last=True)

Some setup code omitted; the training loop is:

    with fluid.dygraph.guard(place=fluid.CUDAPlace(0)):
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_batch_reader()):
                start = time.time()
                im1_data = np.array([x[0] for x in data]).astype('float32')
                im2_data = np.array([x[1] for x in data]).astype('float32')
                flo_data = np.array([x[2] for x in data]).astype('float32')
                # Stack the two frames along the channel axis and normalize.
                im_all = np.concatenate((im1_data, im2_data), axis=3) / 255.0
                im_all = np.swapaxes(np.swapaxes(im_all, 1, 2), 1, 3)  # NHWC -> NCHW
                label = flo_data / 20.0
                label = np.swapaxes(np.swapaxes(label, 1, 2), 1, 3)
                print(im_all.shape)
                im_all = fluid.dygraph.to_variable(im_all)
                end = time.time()
                print('read data {} s ----'.format(end - start))
                start = time.time()
                # Forward pass only: no loss, no backward(), no optimizer step.
                res = model(im_all)
                end = time.time()
                model.clear_gradients()
                print('batch {} time: {} s'.format(batch_id, end - start))

After running for a few dozen batches, an out-of-memory error occurs:

read data 0.010515689849853516 s ----
batch 55 time: 0.06910538673400879 s
(1, 6, 384, 512)
read data 0.01302027702331543 s ----
batch 56 time: 0.0730290412902832 s
(1, 6, 384, 512)
read data 0.011269807815551758 s ----
batch 57 time: 0.08451676368713379 s
(1, 6, 384, 512)
read data 0.009588003158569336 s ----
Traceback (most recent call last):
  File "train.py", line 134, in <module>
    main()
  File "train.py", line 119, in main
    flo = model(im_all)
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 181, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/models/dy_model.py", line 196, in forward
    x = fluid.layers.concat(input=[fluid.layers.leaky_relu(self.conv6_4(x), 0.1), x], axis=1)
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 181, in __call__
    outputs = self.forward(*inputs, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/nn.py", line 251, in forward
    'use_mkldnn': False,
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layer_object_helper.py", line 52, in append_op
    stop_gradient=stop_gradient)
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2499, in append_op
    kwargs.get("stop_gradient", False))
  File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/tracer.py", line 47, in trace_op
    not stop_gradient)
RuntimeError:

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::string paddle::platform::GetTraceBackString<std::string>(std::string&&, char const*, int)
1   paddle::memory::detail::GPUAllocator::Alloc(unsigned long*, unsigned long)
2   paddle::memory::detail::BuddyAllocator::RefillPool(unsigned long)
3   paddle::memory::detail::BuddyAllocator::Alloc(unsigned long)
4   void* paddle::memory::legacy::Alloc<paddle::platform::CUDAPlace>(paddle::platform::CUDAPlace const&, unsigned long)
5   paddle::memory::allocation::NaiveBestFitAllocator::AllocateImpl(unsigned long)
6   paddle::memory::allocation::Allocator::Allocate(unsigned long)
7   paddle::memory::allocation::RetryAllocator::AllocateImpl(unsigned long)
8   paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
9   paddle::memory::Alloc(paddle::platform::Place const&, unsigned long)
10  paddle::memory::Alloc(paddle::platform::DeviceContext const&, unsigned long)
11  paddle::platform::CudnnWorkspaceHandle::ReallocWorkspace(unsigned long)
12  paddle::operators::CUDNNConvOpKernel<float>::Compute(paddle::framework::ExecutionContext const&) const
13  std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::CUDNNConvOpKernel<float>, paddle::operators::CUDNNConvOpKernel<double>, paddle::operators::CUDNNConvOpKernel<paddle::platform::float16> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
14  paddle::imperative::PreparedOp::Run(std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const*, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const*, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > > const*)
15  paddle::imperative::OpBase::Run(std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&)
16  paddle::imperative::Tracer::TraceOp(std::string const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::map<std::string, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > >, std::less<std::string>, std::allocator<std::pair<std::string const, std::vector<std::shared_ptr<paddle::imperative::VarBase>, std::allocator<std::shared_ptr<paddle::imperative::VarBase> > > > > > const&, std::unordered_map<std::string, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_>, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::pair<std::string const, boost::variant<boost::blank, int, float, std::string, std::vector<int, std::allocator<int> >, std::vector<float, std::allocator<float> >, std::vector<std::string, std::allocator<std::string> >, bool, std::vector<bool, std::allocator<bool> >, paddle::framework::BlockDesc*, long, std::vector<paddle::framework::BlockDesc*, std::allocator<paddle::framework::BlockDesc*> >, std::vector<long, std::allocator<long> >, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> > > >, paddle::platform::Place const&, bool)

----------------------
Error Message Summary:
----------------------
ResourceExhaustedError:

Out of memory error on GPU 0. Cannot allocate 36.583496MB memory on GPU 0, available memory is only 25.937500MB.

Please check whether there is any other process using GPU 0.
1. If yes, please stop them, or start PaddlePaddle on another GPU.
2. If no, please try one of the following suggestions:
   1) Decrease the batch size of your model.
   2) FLAGS_fraction_of_gpu_memory_to_use is 0.92 now, please set it to a higher value but less than 1.0.
      The command is `export FLAGS_fraction_of_gpu_memory_to_use=xxx`.

All replies (9), in chronological order
AIStudio791823
#2, replied 2019-12
os.environ['FLAGS_fraction_of_gpu_memory_to_use'] = "0.99"
os.environ["FLAGS_eager_delete_tensor_gb"] = "0"

I also tried another reader

train_batch_reader = paddle.batch(reader, BATCH_SIZE, drop_last=True)

same error

AIStudio784472
#3, replied 2019-12

The traceback shows CUDNNConvOpKernel; does your model use conv2d or conv3d?
Also, could you check the detailed log output with GLOG?
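Paddle's C++ side logs through glog, so a verbose log can usually be captured by raising GLOG_v when launching the script (the verbosity level here is only an example):

GLOG_v=3 python train.py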

AIStudio790260
#4, replied 2019-12

Please show your model code.

AIStudio790260
#5, replied 2019-12

Please call model.eval() inside the dygraph guard if your model does not run loss.backward(); refer to the Dygraph guide.
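A minimal sketch of that suggestion, reusing model and train_batch_reader from the original post (in eval mode, dygraph should not keep per-batch backward state alive, which is what appears to exhaust GPU memory here):

import numpy as np
import paddle.fluid as fluid

with fluid.dygraph.guard(fluid.CUDAPlace(0)):
    model.eval()  # forward-only loop: no loss.backward(), so run in eval mode
    for batch_id, data in enumerate(train_batch_reader()):
        im1 = np.array([x[0] for x in data]).astype('float32')
        im2 = np.array([x[1] for x in data]).astype('float32')
        im_all = np.concatenate((im1, im2), axis=3) / 255.0
        im_all = np.swapaxes(np.swapaxes(im_all, 1, 2), 1, 3)  # NHWC -> NCHW
        res = model(fluid.dygraph.to_variable(im_all))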

AIStudio791823
#6, replied 2019-12

That works! Thanks!

继续奋斗啊啊
#7, replied 2021-12

How was this solved?

好好男孩8
#8, replied 2022-03

In which file do I adjust the batch size?

fi_Past
#9, replied 2022-03

The GPU memory usage is probably too high.

fi_Past
#10, replied 2022-03

Tune the batch size.
