【Problem Summary】
Distributed training fails with an error when SyncBatchNorm is used; with the regular BatchNorm layers everything runs fine.
【Code】
Launch command: python -m paddle.distributed.launch --selected_gpus=4,5,6 --log_dir=$log_dir train_large.py
Model definition in train_large.py:
import paddle
from paddle.distributed import fleet

fleet.init(is_collective=True)  # collective mode; assumed to run earlier in train_large.py
model = TimeSeriesTransformer()
# Replace every BatchNorm layer in the model with SyncBatchNorm
model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(model)
optim = paddle.optimizer.Adam(parameters=model.parameters())
model = fleet.distributed_model(model)
optim = fleet.distributed_optimizer(optim)
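
For reference, a minimal sketch of the training step in which the error is triggered; train_loader, the mse_loss choice, and the tensor names x/y are placeholders, not the actual train_large.py code:

# Hypothetical training step (train_loader, x, y, and the loss function are placeholders)
for batch_id, (x, y) in enumerate(train_loader()):
    pred = model(x)                                 # forward pass finishes normally
    loss = paddle.nn.functional.mse_loss(pred, y)
    loss.backward()                                 # <-- crash happens here (SyncBatchNormGradKernel)
    optim.step()
    optim.clear_grad()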
【Error Message】
C++ Traceback (most recent call last):
--------------------------------------
0 paddle::imperative::BasicEngine::Execute()
1 paddle::imperative::PreparedOp::Run(paddle::imperative::NameVariableWrapperMap const&, paddle::imperative::NameVariableWrapperMap const&, paddle::framework::AttributeMap const&, paddle::framework::AttributeMap const&)
2 std::_Function_handler, paddle::operators::SyncBatchNormGradKernel, paddle::operators::SyncBatchNormGradKernel >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
3 paddle::operators::SyncBatchNormGradKernel::Compute(paddle::framework::ExecutionContext const&) const
4 void paddle::operators::SyncBatchNormGradFunctor(paddle::framework::ExecutionContext const&, paddle::experimental::DataLayout, phi::DenseTensor const*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor const*, phi::DenseTensor*, phi::DenseTensor*, phi::DenseTensor const*, phi::DenseTensor const*, double)
5 phi::DenseTensor::mutable_data(phi::Place const&, paddle::experimental::DataType, unsigned long)
6 phi::DenseTensor::set_type(paddle::experimental::DataType)
【Debugging Done So Far】
1) All input data is float32.
2) If the line model = paddle.nn.SyncBatchNorm.convert_sync_batchnorm(model) is commented out, training runs without any error.
3) The error is raised after the forward pass finishes, at loss.backward(), and it reproduces consistently; a dtype-check sketch is attached below.
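
Since the C++ stack ends in phi::DenseTensor::set_type inside the SyncBatchNorm backward, one more thing worth checking is whether any parameter or buffer of the converted layers has an unexpected dtype. The snippet below is only a debugging sketch I would run right after convert_sync_batchnorm (before fleet.distributed_model); it uses standard paddle.nn.Layer APIs and no project-specific names:

# Debugging sketch: list the dtype of every SyncBatchNorm parameter/buffer
# to rule out a float32/float64 (or float16) mismatch in the converted model.
for lname, sublayer in model.named_sublayers():
    if isinstance(sublayer, paddle.nn.SyncBatchNorm):
        for pname, param in sublayer.named_parameters(include_sublayers=False):
            print(lname, pname, param.dtype)
        for bname, buf in sublayer.named_buffers(include_sublayers=False):
            print(lname, bname, buf.dtype)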