If it is convenient, could you please help to collect the logs with
export MKL_VERBOSE=1
run.sh 2>&1 | tee baseline.log
and save them as mkl2019.log and mkl2014.log respectively?
I recall we discussed these problems before. The basic background is explained here:
https://software.intel.com/en-us/articles/introduction-to-the-conditional-numerical-reproducibility-cnr
thanks
Ying
MKL indeed has the Conditional Numerical Reproducibility (CNR) issue:
- Symptom 1: https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/600167
- Symptom 2: https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/391364
Workaround: set the following two environment variables:
export KMP_DETERMINISTIC_REDUCTION=yes
export MKL_CBWR=xxx
The specific value for MKL_CBWR should match the machine model; see:
https://software.intel.com/sites/products/parallelmag/singlearticles/issue11/7080_2_IN_ParallelMag_Issue11_CBWR.PDF
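For reference, here is a minimal sketch (not from the original thread) of setting these variables inside the training script itself, so they are guaranteed to be in place before MKL initializes; exporting them in the shell, as above, works equally well:

```python
# Set the reproducibility variables before paddle (and thus MKL) loads;
# MKL reads them at initialization time.
import os
os.environ['KMP_DETERMINISTIC_REDUCTION'] = 'yes'
os.environ['MKL_CBWR'] = 'COMPATIBLE'  # or a value matching your ISA, e.g. AVX2
os.environ['MKL_VERBOSE'] = '1'        # verbose lines will show the CNR mode in effect

import paddle.fluid as fluid  # import only after the variables are set
```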
Thanks for your reply.
I followed @luotao1's method, set KMP_DETERMINISTIC_REDUCTION=yes and MKL_CBWR=COMPATIBLE, and tested again with 2014MKL and 2019MKL. The results are as follows:
[layers.fc output comparison]
After exporting these variables, the outputs of 2014MKL(set) and 2019MKL(set) are exactly the same.
The experiments showed that the model could converge as well as the baseline under 2014MKL, but under 2019MKL it could not. So I will run further experiments to verify the effect of MKL_CBWR=COMPATIBLE.
Besides, the 2019MKL log (without MKL_CBWR set) is as follows:
MKL_VERBOSE Intel(R) MKL 2019.0 Update 1 Product build 20180928 for Intel(R) 64 architecture Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) enabled processors, Lnx 2.00GHz lp64 intel_thread
MKL_VERBOSE SGEMM(N,N,128,16,384,0x7ffcdc458b28,0x7f9e3c78a080,128,0x7f9e3c7bb080,384,0x7ffcdc458b30,0x7f9e3c7c2080,128) 20.99ms CNR:OFF Dyn:1 FastMM:1 TID:0 NThr:1
The 2019MKL(set) log (with MKL_CBWR=COMPATIBLE) is as follows:
MKL_VERBOSE Intel(R) MKL 2019.0 Update 1 Product build 20180928 for Intel(R) 64 architecture Intel(R) Architecture processors, Lnx 2.00GHz lp64 intel_thread
MKL_VERBOSE SGEMM(N,N,128,16,384,0x7ffdc1b3d3c8,0x7f723e89a080,128,0x7f723e8cb080,384,0x7ffdc1b3d3d0,0x7f723e8d2080,128) 7.04ms CNR:COMPATIBLE Dyn:1 FastMM:1 TID:0 NThr:1
As shown above, with KMP_DETERMINISTIC_REDUCTION=yes and MKL_CBWR=COMPATIBLE set, I compared the outputs of blas.MatMul as called by layers.fc, and they are exactly the same:
[blas.MatMul output comparison]
In my model, besides blas.MatMul, blas.GEMM is also used many times. While investigating, I found that even with KMP_DETERMINISTIC_REDUCTION=yes and MKL_CBWR=COMPATIBLE set, 2019MKL and 2014MKL produce inconsistent outputs for different input shapes, as shown below:
[blas.GEMM output comparison for input shapes [4, 128] and [11, 128]]
Note: fluid.contrib.match_matrix_tensor(x, y, ...) can be used to test the behavior of blas.GEMM. The computation performed by this API is xWy.
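As a sanity check, here is a hedged NumPy sketch of that xWy computation, done per channel with the .bin files produced by the test script below; this reflects my reading of the note above, not the documented semantics of match_matrix_tensor:

```python
# Hypothetical NumPy reference for the per-channel x.W.y product.
import numpy as np
x = np.fromfile('x.bin', dtype=np.float32).reshape(11, 128)
y = np.fromfile('y.bin', dtype=np.float32).reshape(11, 128)
w = np.fromfile('w.bin', dtype=np.float32).reshape(128, 3, 128)
# One [11, 11] similarity map per channel: x @ W[:, c, :] @ y^T.
ref = np.stack([x @ w[:, c, :] @ y.T for c in range(w.shape[1])])
print(ref.shape)  # (3, 11, 11)
```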
How to reproduce?
1. Test script
#coding:utf-8
import paddle.fluid as fluid
import numpy as np
import os

SHAPE = [11, 128]

def fake_data(shape, file_path):
    # Reuse the cached .bin file so every run feeds identical data.
    if os.path.isfile(file_path):
        return np.fromfile(file_path, dtype=np.float32).reshape(shape)
    data = np.random.random(shape).astype('float32')
    data.tofile(file_path)
    return data

place = fluid.CPUPlace()
x = fluid.data(name='x', shape=SHAPE, dtype='float32', lod_level=1)
y = fluid.data(name='y', shape=SHAPE, dtype='float32', lod_level=1)
w_data = fake_data([SHAPE[-1], 3, SHAPE[-1]], 'w.bin')
X = fluid.create_lod_tensor(fake_data(SHAPE, 'x.bin'), [[SHAPE[0]]], place)
Y = fluid.create_lod_tensor(fake_data(SHAPE, 'y.bin'), [[SHAPE[0]]], place)
# match_matrix_tensor exercises blas.GEMM with a fixed weight tensor.
out, _ = fluid.contrib.match_matrix_tensor(
    x, y, channel_num=3,
    param_attr=fluid.initializer.NumpyArrayInitializer(w_data))
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
res = exe.run(feed={'x': X, 'y': Y}, fetch_list=[out], return_numpy=False)
np.savetxt('res.txt', res[0])
2. Details
- whl package: build the paddle whl with the 2014 and the 2019 libmklml_intel.so respectively
- Environment variables: KMP_DETERMINISTIC_REDUCTION=yes, MKL_CBWR=COMPATIBLE
Suggestion: could you try MKL_CBWR=AVX or AVX2, to see whether performance improves while the results stay within an acceptable range?
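A small driver along those lines, assuming the repro script from this thread is saved as test_gemm.py; each setting runs in a fresh process because MKL reads MKL_CBWR only at initialization:

```python
# Re-run the test under each MKL_CBWR mode and keep one output per mode.
import os
import subprocess

for mode in ['AUTO', 'COMPATIBLE', 'AVX', 'AVX2']:
    env = dict(os.environ, MKL_CBWR=mode, KMP_DETERMINISTIC_REDUCTION='yes')
    subprocess.check_call(['python', 'test_gemm.py'], env=env)
    os.rename('res.txt', 'res_%s.txt' % mode)
```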
Floating-point computation on CPUs with out-of-order execution cannot guarantee bit-wise identical results; this should be regarded as inherent to the CPU. For example, depending on evaluation order:
(2^-63 + 1) + (-1) ≈ 1 + (-1) = 0
versus
2^-63 + (1 + (-1)) = 2^-63 + 0 = 2^-63
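The effect is easy to reproduce in a few lines of Python (float32; any addend smaller than half the machine epsilon shows the same behavior):

```python
import numpy as np
tiny = np.float32(2.0 ** -63)
one = np.float32(1.0)
print((tiny + one) + (-one))  # 0.0 -- tiny is absorbed by the 1.0
print(tiny + (one + (-one)))  # ~1.08e-19, i.e. 2**-63 survives
```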
Modern CPU-optimized floating-point code like MKL is especially exposed to this, particularly in deep learning training where extremely small values are common. Fortunately, MKL provides MKL_CBWR, a scheme for trading off result reproducibility against performance. For details, see the MKL user guide:
https://software.intel.com/en-us/onemkl-linux-developer-guide-setting-the-environment-variable-for-conditional-numerical-reproducibility
This capability was further strengthened after the 2019 update 3 release.
So please try:
1) export MKL_CBWR=AVX2
2) Updating to 2019 update 5. MKL download: https://github.com/intel/mkl-dnn/releases/download/v0.21/mklml_lnx_2019.0.5.20190502.tgz
Please also try export MKL_CBWR=AVX2,STRICT
The MKL build we use has Erf and csrmm2 patched in. Does 2019 update 5 already support these two functions? If not, we would need @yinghu5 to help patch them in again.
Paddle/cmake/external/mklml.cmake, lines 34 to 36 in 341dee0
If the current tests do not involve these two functions, please download https://github.com/intel/mkl-dnn/releases/download/v0.21/mklml_lnx_2019.0.5.20190502.tgz. No rebuild is needed: just replace the shared library and give it a try first.
> 1) export MKL_CBWR=AVX2
> 2) Update to 2019 update 5. MKL download: https://github.com/intel/mkl-dnn/releases/download/v0.21/mklml_lnx_2019.0.5.20190502.tgz
> Please also try export MKL_CBWR=AVX2,STRICT
I tried both method 1) and method 2), and compared the blas.GEMM outputs described above in each case:
[blas.GEMM output comparison under MKL_CBWR=AVX2, MKL_CBWR=COMPATIBLE, and MKL_CBWR=AVX2,STRICT]
That is:
- Whether using the 2014, 2019, or 2019 update 5 .so file, only setting MKL_CBWR=COMPATIBLE produces a different output.
- Under the 2014 mkl.so, the output differs from 2019, and with that .so the model converges stably.
- Meanwhile, I built a whl package with the 2019 update 5 .so file and ran model training. So far the results still do not converge to the level of 2014MKL and the baseline (will keep updating).
The machine's cpuinfo (excerpt):
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx avx512f avx512dq rdseed adx smap clflushopt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc
Also, the whl package built with the 2014 .so file raises a segmentation fault when a 3D shape is passed to paddle.fluid.layers.matmul. Could you help look into this issue as well?
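For what it's worth, a minimal hedged repro sketch for that segfault (the shapes here are illustrative, not the exact ones from the failing model):

```python
import numpy as np
import paddle.fluid as fluid

# Batched (3D) matmul: [2, 4, 8] x [2, 8, 4] -> [2, 4, 4].
x = fluid.data(name='x', shape=[2, 4, 8], dtype='float32')
y = fluid.data(name='y', shape=[2, 8, 4], dtype='float32')
out = fluid.layers.matmul(x, y)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
res = exe.run(feed={'x': np.random.rand(2, 4, 8).astype('float32'),
                    'y': np.random.rand(2, 8, 4).astype('float32')},
              fetch_list=[out])
print(res[0].shape)
```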
Paddle patch
diff --git a/paddle/fluid/platform/dynload/mklml.h b/paddle/fluid/platform/dynload/mklml.h
index 839dcd87f5..d9e6a85fe5 100644
--- a/paddle/fluid/platform/dynload/mklml.h
+++ b/paddle/fluid/platform/dynload/mklml.h
@@ -33,10 +33,14 @@ extern void* mklml_dso_handle;
template <typename... Args> \
auto operator()(Args... args) -> DECLARE_TYPE(__name, args...) { \
using mklmlFunc = decltype(&::__name); \
+ static void* p_##_name = dlsym(RTLD_DEFAULT, #__name); \
+ printf("leo p_##_name_0=%p\n", p_##_name); \
+ if (p_##_name) return reinterpret_cast<mklmlFunc>(p_##_name)(args...); \
std::call_once(mklml_dso_flag, []() { \
mklml_dso_handle = paddle::platform::dynload::GetMKLMLDsoHandle(); \
}); \
- static void* p_##_name = dlsym(mklml_dso_handle, #__name); \
+ p_##_name = dlsym(mklml_dso_handle, #__name); \
+ printf("leo not should run here\n"); \
return reinterpret_cast<mklmlFunc>(p_##_name)(args...); \
} \
}; \
When running, prefix the command with LD_PRELOAD set to the absolute path of libmkl2014.so, a colon, and the absolute path of libiomp5.so, e.g.:
LD_PRELOAD=/home/yinghu/libmkl2014.so:/home/leozhao/Paddle/build/python/paddle/libs/libiomp5.so python /home/yinghu/test_gemm.py
I updated libmkl2014.so to add a dependency on libiomp5.so; running the test also requires PR #21471. The command changes to:
LD_PRELOAD=/home/yinghu/libmkl2014.so python /home/yinghu/test_gemm.py
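A quick way to confirm the preload actually took effect is to check which libraries the Python process mapped (Linux only; a sketch, assuming the library names contain "mkl" or "iomp5"):

```python
# With LD_PRELOAD in effect, the preloaded libraries are mapped at startup;
# this prints every mkl/iomp5 mapping of the current process.
with open('/proc/self/maps') as f:
    libs = sorted({line.split()[-1] for line in f
                   if 'mkl' in line or 'iomp5' in line})
for lib in libs:
    print(lib)
```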
The CPU model of the experiment machine was asked about earlier; here it is:
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
stepping : 1
microcode : 0xb00001b
cpu MHz : 2200.000
cache size : 30720 KB
physical id : 1
siblings : 24
core id : 12
cpu cores : 12
apicid : 57
initial apicid : 57
fpu : yes
fpu_exception : yes
cpuid level : 20
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local cat_l3 cdp_l3
bogomips : 4395.42
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
processor : 47
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
stepping : 1
microcode : 0xb00001b
cpu MHz : 2200.000
cache size : 30720 KB
physical id : 1
siblings : 24
core id : 13
cpu cores : 12
apicid : 59
initial apicid : 59
fpu : yes
fpu_exception : yes
cpuid level : 20
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local cat_l3 cdp_l3
bogomips : 4395.42
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
> LD_PRELOAD=/home/yinghu/libmkl2014.so python /home/yinghu/test_gemm.py
I tried it this way; on the cloud machine it reports the following errors:
sh: /lib64/tls/libc.so.6: version `GLIBC_2.6' not found (required by /home/disk1/normandy/maybach/app-user-20191203185711-11915/workspace/env_run/thirdparty/libiomp5.so)
sh: /lib64/tls/libc.so.6: version `GLIBC_2.4' not found (required by /home/disk1/normandy/maybach/app-user-20191203185711-11915/workspace/env_run/thirdparty/libiomp5.so)
The same code runs fine on the test machine, whose glibc version is 2.12. May I ask which glibc version the provided .so files depend on? Could the problem be that the required version is too high?
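One way to check this yourself, a sketch assuming readelf is available on the machine: list the GLIBC version requirements recorded in the .so and compare them with the target's glibc (2.12 here).

```python
# Print the GLIBC_* versions a shared object requires.
import re
import subprocess

out = subprocess.check_output(['readelf', '-V', 'libiomp5.so']).decode()
print(sorted(set(re.findall(r'GLIBC_[0-9.]+', out))))
```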
Is this libiomp5.so the one from inside the paddle whl?
If I only specify LD_PRELOAD=~/libmkl2014.so python test_gemm.py, libiomp5.so is not found by default, and it reports:
python: error while loading shared libraries: libiomp5.so: cannot open shared object file: No such file or directory
So I copied libiomp5.so out of paddle's build directory. On the dev machine, LD_PRELOAD=~/libmkl2014.so:./libiomp5.so python test_gemm.py runs fine; it is on the cloud machine that the problem occurs.
I'm training a model on a CPU machine with paddle, but the precision (PNR) cannot converge up to the baseline on the same training dataset. I found that the result trained with the 2014MKL paddle whl is better than with the 2019MKL one.
- 2019MKL means paddle compiled with the 2019 libmklml_intel.so, which is used by default in Paddle.
- 2014MKL means paddle compiled with the 2014 libmklml_intel.so.
- Baseline is trained by another framework tool based on 2014MKL.
There are many matrix multiplications in my model, such as blas.MatMul or blas.GEMM, and their outputs differ depending on which version of libmklml_intel.so paddle was compiled with. I'm not sure how the differing MatMul or GEMM outputs influence the precision. I would appreciate it very much if the relevant developers could help follow up on this problem. See details as follows:
How to reproduce with Docker:
1. Environment & Version
- Docker image: docker.paddlepaddlehub.com/paddle_manylinux_devel:cuda8.0_cudnn7
- Python: 2.7.15
2. Build paddle
3. Check diff
You can compare the output of fluid.layers.fc by feeding the same data and weights; see the test script above. First, save x and y into .bin files so that the feed data stays identical across runs. Install the paddle whl built with one version of libmklml_intel.so and run the script to save the result into res.txt, then reinstall the paddle whl built with the other version to generate a new res.txt. Then just run the following code to check the difference:
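(The original comparison snippet is not reproduced here; below is a minimal sketch of an equivalent check, assuming the two results were saved as res_2014.txt and res_2019.txt, which are illustrative names.)

```python
# Compare the two saved outputs element-wise.
import numpy as np
a = np.loadtxt('res_2014.txt').astype(np.float32)
b = np.loadtxt('res_2019.txt').astype(np.float32)
diff = np.abs(a - b)
print('max abs diff:', diff.max())
print('mismatched elements:', int((diff > 0).sum()))
```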
Output may be like: