Q&A Thread: 14 Hands-On Tasks of PaddleCamp Session 4
Paddle Framework · Other Deep Learning · 6727 views · 25 replies

Welcome to the 14 enterprise-level hands-on PaddlePaddle tasks of PaddleCamp Session 4. Please post any questions you run into during the course as replies to this thread; instructors will answer weekly.

New learners who want to join can sign up via this link: https://aistudio.baidu.com/aistudio/questionnaire?activityid=567

(Scan the QR code to register and add the class advisor on WeChat; the advisor will arrange payment and add you to the study group.)

                   ------- Class advisor's WeChat QR code ---------

All replies (25)
quyifei2013
#2 · replied 2019-09

Hi teacher, I hit the problem below while working on the FLOWER102 assignment. The code had been debugged and ran normally, with acc reaching 80%, but after stopping and restarting the environment, running it again raises an error. Checking with pdb: image_count=6552, class_dim=102. Before each mini-batch I inspected `data`: it is a list of 24 tuples, where each tuple's first element is the image data and the second is the label, all normal, so reading train.txt works correctly. Please take a look; my project is at https://aistudio.baidu.com/aistudio/projectdetail/120582. The full error message follows:

2019-09-07 14:50:36,084 - [line:557] - INFO: create prog success
2019-09-07 14:50:36,086 - [line:558] - INFO: train config: {'train_batch_size': 24, 'save_persistable_dir': './persistable-params', 'num_epochs': 120, 'image_enhance_strategy': {'saturation_prob': 0.5, 'need_rotate': True, 'need_distort': True, 'contrast_delta': 0.5, 'need_crop': True, 'hue_delta': 18, 'need_flip': True, 'brightness_delta': 0.125, 'saturation_delta': 0.5, 'brightness_prob': 0.5, 'hue_prob': 0.5, 'contrast_prob': 0.5}, 'data_dir': 'work', 'label_file': 'label_list.txt', 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'label_dict': {'92': 94, '3': 25, '36': 32, '85': 86, '97': 99, '42': 39, '46': 43, '29': 24, '96': 98, '41': 38, '78': 78, '47': 44, '54': 52, '52': 50, '87': 88, '22': 17, '64': 63, '19': 13, '38': 34, '43': 40, '21': 16, '95': 97, '11': 5, '9': 91, '23': 18, '31': 27, '18': 12, '30': 26, '5': 47, '1': 0, '44': 41, '15': 9, '35': 31, '65': 64, '39': 35, '16': 10, '24': 19, '93': 95, '40': 37, '100': 2, '67': 66, '84': 85, '25': 20, '76': 76, '79': 79, '32': 28, '74': 74, '99': 101, '86': 87, '77': 77, '12': 6, '102': 4, '26': 21, '88': 89, '59': 57, '73': 73, '94': 96, '7': 69, '51': 49, '17': 11, '45': 42, '70': 70, '75': 75, '82': 83, '60': 59, '90': 92, '27': 22, '33': 29, '49': 46, '4': 36, '72': 72, '10': 1, '61': 60, '80': 81, '50': 48, '91': 93, '69': 68, '53': 51, '83': 84, '13': 7, '66': 65, '20': 15, '68': 67, '37': 33, '89': 90, '58': 56, '14': 8, '34': 30, '55': 53, '62': 61, '57': 55, '8': 80, '101': 3, '48': 45, '98': 100, '63': 62, '2': 14, '28': 23, '6': 58, '56': 54, '81': 82, '71': 71}, 'save_freeze_dir': './freeze-model', 'train_file_list': 'train.txt', 'pretrained': True, 'image_count': 6552, 'pretrained_dir': 'work/Pretrained_Model/ResNet50_pretrained', 'use_gpu': True, 'early_stop': {'good_acc1': 0.86, 'sample_frequency': 50, 'successive_limit': 20}, 'momentum_strategy': 
{'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'sgd_strategy': {'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'input_size': [3, 224, 224], 'adam_strategy': {'learning_rate': 0.002}, 'class_dim': 102, 'continue_train': False}
2019-09-07 14:50:36,087 - [line:559] - INFO: build input custom reader and data feeder
2019-09-07 14:50:36,091 - [line:572] - INFO: build newwork
2019-09-07 14:50:36,475 - [line:546] - INFO: load params from pretrained model
2019-09-07 14:50:36,648 - [line:601] - INFO: current pass: 0, start read image

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input> in <module>()
653 init_log_config()
654 init_train_parameters()
--> 655 train()
in train()
605 loss, acc1, pred_ot = exe.run(main_program,
606 feed=feeder.feed(data),
--> 607 fetch_list=train_fetch_list)
608 t2 = time.time()
609 batch_id += 1
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
648 scope=scope,
649 return_numpy=return_numpy,
--> 650 use_program_cache=use_program_cache)
651 else:
652 if fetch_list and program._is_data_parallel and program._program and (
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in _run(self, program, exe, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
746 self._feed_data(program, feed, feed_var_name, scope)
747 if not use_program_cache:
--> 748 exe.run(program.desc, scope, 0, True, True, fetch_var_name)
749 else:
750 exe.run_cached_prepared_ctx(ctx, scope, False, False, False)
EnforceNotMet: Invoke operator adam error.
Python Callstacks:
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/framework.py", line 1748, in append_op
attrs=kwargs.get("attrs", None))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 1381, in _append_optimize_op
stop_gradient=True)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 386, in _create_optimization_pass
param_and_grad)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 531, in apply_gradients
optimize_ops = self._create_optimization_pass(params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 561, in apply_optimize
optimize_ops = self.apply_gradients(params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 600, in minimize
loss, startup_program=startup_program, params_grads=params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py", line 88, in __impl__
return func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "", line 2, in minimize
File "", line 584, in train
optimizer.minimize(avg_cost)
File "", line 656, in
train()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3265, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3183, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3018, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2843, in _run_cell
return runner(coro)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2817, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1233, in inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/events.py", line 127, in _run
self._callback(*self._args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 1425, in _run_once
handle._run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 421, in run_forever
self._run_once()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
C++ Callstacks:
Enforce failed. Expected param_dims == ctx->GetInputDim("Moment1"), but received param_dims:1000 != ctx->GetInputDim("Moment1"):102.
Param and Moment1 input of AdamOp should have same dimension at [/paddle/paddle/fluid/operators/optimizers/adam_op.cc:64]
PaddlePaddle Call Stacks:
0 0x7ffb5e018808p void paddle::platform::EnforceNotMet::Init(std::string, char const*, int) + 360
1 0x7ffb5e018b57p paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) + 87
2 0x7ffb5f368c5bp paddle::operators::AdamOp::InferShape(paddle::framework::InferShapeContext*) const + 5307
3 0x7ffb5ff73610p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant const&, paddle::framework::RuntimeContext*) const + 304
4 0x7ffb5ff73a31p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant const&) const + 529
5 0x7ffb5ff7102cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant const&) + 332
6 0x7ffb5e1a247ep paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 382
7 0x7ffb5e1a551fp paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector > const&, bool) + 143
8 0x7ffb5e00996dp
9 0x7ffb5e04aca6p
10 0x7ffbe2079199p PyCFunction_Call + 233
11 0x7ffbe21143f9p PyEval_EvalFrameEx + 33545
12 0x7ffbe21164b6p
13 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
14 0x7ffbe21164b6p
15 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
16 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
17 0x7ffbe21164b6p
18 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
19 0x7ffbe21165ebp PyEval_EvalCode + 59
20 0x7ffbe2109c5dp
21 0x7ffbe2079179p PyCFunction_Call + 201
22 0x7ffbe2113dbep PyEval_EvalFrameEx + 31950
23 0x7ffbe204d410p _PyGen_Send + 128
24 0x7ffbe2112953p PyEval_EvalFrameEx + 26723
25 0x7ffbe204d410p _PyGen_Send + 128
26 0x7ffbe2112953p PyEval_EvalFrameEx + 26723
27 0x7ffbe204d410p _PyGen_Send + 128
28 0x7ffbe2113d60p PyEval_EvalFrameEx + 31856
29 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
30 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
31 0x7ffbe21164b6p
32 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
33 0x7ffbe2055c33p
34 0x7ffbe202433ap PyObject_Call + 106
35 0x7ffbe210e6eep PyEval_EvalFrameEx + 9726
36 0x7ffbe21164b6p
37 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
38 0x7ffbe204c6bap
39 0x7ffbe2107af6p
40 0x7ffbe2079179p PyCFunction_Call + 201
41 0x7ffbe2113dbep PyEval_EvalFrameEx + 31950
42 0x7ffbe21164b6p
43 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
44 0x7ffbe204c6bap
45 0x7ffbe2107af6p
46 0x7ffbe2079179p PyCFunction_Call + 201
47 0x7ffbe2113dbep PyEval_EvalFrameEx + 31950
48 0x7ffbe21164b6p
49 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
50 0x7ffbe204c6bap
51 0x7ffbe2107af6p
52 0x7ffbe2079179p PyCFunction_Call + 201
53 0x7ffbe2113dbep PyEval_EvalFrameEx + 31950
54 0x7ffbe21164b6p
55 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
56 0x7ffbe2055b56p
57 0x7ffbe202433ap PyObject_Call + 106
58 0x7ffbe210e6eep PyEval_EvalFrameEx + 9726
59 0x7ffbe204d410p _PyGen_Send + 128
60 0x7ffbe2113d60p PyEval_EvalFrameEx + 31856
61 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
62 0x7ffbe21164b6p
63 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
64 0x7ffbe2055c33p
65 0x7ffbe202433ap PyObject_Call + 106
66 0x7ffbe210e6eep PyEval_EvalFrameEx + 9726
67 0x7ffbe21164b6p
68 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
69 0x7ffbe2055b56p
70 0x7ffbe202433ap PyObject_Call + 106
71 0x7ffbe2189ccap
72 0x7ffbe202433ap PyObject_Call + 106
73 0x7ffbe21104c5p PyEval_EvalFrameEx + 17365
74 0x7ffbe21164b6p
75 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
76 0x7ffbe2055b56p
77 0x7ffbe202433ap PyObject_Call + 106
78 0x7ffbe210e6eep PyEval_EvalFrameEx + 9726
79 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
80 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
81 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
82 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
83 0x7ffbe21141d0p PyEval_EvalFrameEx + 32992
84 0x7ffbe21164b6p
85 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
86 0x7ffbe21164b6p
87 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
88 0x7ffbe21165ebp PyEval_EvalCode + 59
89 0x7ffbe2109c5dp
90 0x7ffbe2079179p PyCFunction_Call + 201
91 0x7ffbe2113dbep PyEval_EvalFrameEx + 31950
92 0x7ffbe21164b6p
93 0x7ffbe21135b5p PyEval_EvalFrameEx + 29893
94 0x7ffbe21164b6p
95 0x7ffbe21165a8p PyEval_EvalCodeEx + 72
96 0x7ffbe2055b56p
97 0x7ffbe202433ap PyObject_Call + 106
98 0x7ffbe2162ba1p
99 0x7ffbe21634a5p Py_Main + 1493
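A hedged reading of the traceback above: AdamOp's shape check found `Param` with dimension 1000 but `Moment1` with dimension 102, i.e. a 1000-class ImageNet FC weight got paired with Adam moment state from the 102-class run. After a restart this typically happens when the pretrained parameters and previously saved persistable parameters (which include Adam's moment tensors) end up loaded into the same scope. Two things worth trying, neither verified against the project itself: delete `./persistable-params` before a fresh run with `continue_train=False`, and only load checkpoint variables whose shape matches the current program. The sketch below is a framework-agnostic version of that shape filter (`loadable_param_names` and the example shapes are hypothetical); in Paddle Fluid the same idea can be expressed as a `predicate` passed to `fluid.io.load_vars`.

```python
def loadable_param_names(program_shapes, checkpoint_shapes):
    """Return names of variables that are safe to restore: present in the
    checkpoint AND shape-identical to the current program. This skips e.g.
    an ImageNet head of shape [2048, 1000] when the program expects
    [2048, 102] -- exactly the 1000-vs-102 mismatch reported by AdamOp."""
    return sorted(
        name for name, shape in program_shapes.items()
        if checkpoint_shapes.get(name) == shape
    )


# Hypothetical shapes illustrating the FLOWER102 case:
program = {"res_conv1.w": [64, 3, 7, 7], "fc.w": [2048, 102]}
checkpoint = {"res_conv1.w": [64, 3, 7, 7], "fc.w": [2048, 1000]}
print(loadable_param_names(program, checkpoint))  # ['res_conv1.w']
```

Loading only the matching backbone weights and letting the new 102-class head initialize fresh avoids the stale-state collision on the next run.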

夜夜夜
#3 · replied 2019-09

Last Friday, when I unzipped the FLOWER102 dataset, there were no files like ._image_07957.jpg; after today's extraction they are all there, and the train.txt I generated includes them too. But when I download these files they turn out to be corrupted files that won't open, which breaks the subsequent batch_reader data loading. For now I can only work around it by string-matching on the filename. What are these ._XXXX.jpg files for, and if they're useless, why not delete them?
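For what it's worth: files named `._xxx.jpg` are almost certainly AppleDouble metadata files, created when the archive was packed on macOS; they hold resource-fork/attribute data rather than image bytes, which is why they look like corrupted JPEGs. Filtering them by filename, as you did, is the standard workaround. A minimal sketch of such a filter when building the file list (`list_real_images` is a hypothetical helper, not part of the course code):

```python
import os

def list_real_images(data_dir):
    # Skip macOS AppleDouble files ("._foo.jpg") and other non-image entries;
    # they are not decodable images and would break batch_reader later.
    return sorted(
        f for f in os.listdir(data_dir)
        if f.lower().endswith('.jpg') and not f.startswith('._')
    )
```

Applying the same `startswith('._')` test while writing train.txt keeps the bad entries out of the list file in the first place.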

夜夜夜
#4 · replied 2019-09
(quoting #3 above)

This problem has been solved; never mind.

wudenggang
#5 · replied 2019-09

The 102-flower classification task; my project is at: https://aistudio.baidu.com/bdvgpu/user/85672/120561/notebooks/120561.ipynb?redirects=1

While it was running, AI Studio just restarted itself. The log is below; could the instructor take a look?

2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success
2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'}
2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder
2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt
2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed
2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork
2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image
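One hedged guess for the silent restarts, not verified against this project: in notebook environments, a kernel that dies right as training begins is often being killed for exhausting GPU or host memory. A cheap first experiment is to shrink the batch size in the config shown above (the `train_parameters` dict here just mirrors the logged config):

```python
# Assumption: the restart is an out-of-memory kill. Halving or quartering the
# batch size is a cheap way to test that; watch nvidia-smi during the first pass.
train_parameters = {'train_batch_size': 24, 'input_size': [3, 224, 224]}
train_parameters['train_batch_size'] = 8  # try 8 instead of the logged 24
```

If training survives at the smaller batch, memory pressure was the likely culprit; if it still restarts, the lines around the last "start read image" log entry are the next place to look.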

quyifei2013
#6 · replied 2019-09

I saw `regularizer=L2Decay(0.)` in the code. Doesn't that amount to having no L2 regularization? Generally regularization is written into the loss function; is that how it is actually done in practice?
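A hedged answer, describing general framework behavior rather than this exact codebase: yes, `L2Decay(0.)` has a zero coefficient, so it adds no penalty; and since a per-parameter regularizer set via `ParamAttr` takes precedence over the optimizer-level one in Paddle, `L2Decay(0.)` is commonly used to exempt particular parameters (biases, BN scales) from a global weight decay. Conceptually L2 regularization is indeed a term added to the loss, `loss + coeff * Σ w²`, although frameworks often implement it as an equivalent weight-decay step inside the optimizer update rather than literally appending it to the loss tensor. A toy illustration of the loss-side view:

```python
def loss_with_l2(data_loss, weights, coeff):
    # L2 regularization adds coeff * sum(w^2) to the objective; with
    # coeff = 0.0 the term vanishes, which is why L2Decay(0.) behaves
    # like "no regularization" for that parameter.
    return data_loss + coeff * sum(w * w for w in weights)


print(loss_with_l2(1.0, [1.0, 2.0], 0.0))  # 1.0 (no penalty)
print(loss_with_l2(1.0, [1.0, 2.0], 0.1))  # 1.5 (penalty 0.1 * (1 + 4))
```

The loss-term view and the weight-decay-in-the-update view give the same gradients for plain SGD, which is why both phrasings appear in the literature.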

vinson9340
#7 · replied 2019-09
构建 102花卉问题,项目地址在:https://aistudio.baidu.com/bdvgpu/user/85672/120561/notebooks/120561.ipynb?redirects=1 运行着ai studio就重启了,log记录如下,老师帮忙看下。 2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success 2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success 2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success 2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success 2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'} 2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 
'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'} 2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 
0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'} 2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 
'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'} 2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder 2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder 2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder 2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder 2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt 2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt 2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt 2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt 2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed 2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed 2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed 2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed 2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork 2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork 2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork 2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork 2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image 2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image 2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image 2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image

Has your restart problem been solved? Mine does the same thing: the environment restarts automatically as soon as training starts.

Vghgreat
#8 Replied 2019-09

I ported the original 5-class flower classification code to the 102-class task without changing any parameters, and it now fails with a label-out-of-range error. Could the teacher point out where the problem is? I'm still a beginner and really can't track it down myself.
Log:

ose': 74, u'buttercup': 48, u'anthurium': 80, u'hard-leaved': 2, u'geranium': 58, u'orange': 59, u'fritillary': 23, u'carnation': 31, u'blanket': 100, u'pincushion': 22, u'cape': 37, u'monkshood': 9, u'alpine': 35, u'columbine': 84, u'morning': 76, u'windflower': 69, u'lenten': 40, u'californian': 65, u'wallflower': 46, u'spring': 67, u'globe': 10, u'japanese': 62, u'hippeastrum': 91, u'barbeton': 41, u'hibiscus': 83, u'desert-rose': 85, u'cyclamen': 88, u'purple': 17, u'magnolia': 87, u'clematis': 82, u'moon': 7, u'passion': 77, u'marigold': 47, u'bird': 8, u'red': 24, u'common': 50, u'corn': 26, u'water': 73, u'pelargonium': 55, u'great': 38, u'oxeye': 49, u'frangipani': 81, u'cautleya': 61, u'canterbury': 3, u'love': 33, u'spear': 14, u'sweet': 4, u'gaura': 57, u'osteospermum': 66, u'poinsettia': 44, u'blackberry': 102, u'bougainvillea': 95, u'siam': 39, u'gazania': 71, u'silverbush': 64, u'fire': 21, u'tiger': 6, u'king': 13, u'daffodil': 42, u'tree': 86, u'bee': 92, u'black-eyed': 63, u'wild': 52, u'bolero': 45, u'ruby-lipped': 36, u'peruvian': 18, u"colt's": 12, u'mallow': 97, u'petunia': 51, u'trumpet': 101, u'bishop': 56, u'garden': 32, u'pink': 1, u'stemless': 28, u'balloon': 19, u'camellia': 96, u'snapdragon': 11, u'sword': 43, u'canna': 90, u'bearded': 68, u'ball': 93, u'toad': 79, u'pink-yellow': 60, u'watercress': 89, u'globe-flower': 16, u'thorn': 75, u'english': 5, u'bromelia': 99, u'primula': 53}, 'image_count': 6552, 'pretrained': True}
2019-09-15 22:07:27,243 - train.py[line:555] - INFO: build input custom reader and data feeder
2019-09-15 22:07:27,244 - train.py[line:569] - INFO: build newwork
W0915 22:07:28.376420 5309 device_context.cc:259] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 9.0, Runtime API Version: 8.0
W0915 22:07:28.378059 5309 device_context.cc:267] device: 0, cuDNN Version: 7.1.
2019-09-15 22:07:28,402 - train.py[line:537] - INFO: load params from retrain model
2019-09-15 22:07:28,640 - train.py[line:598] - INFO: current pass: 0, start read image
2019-09-15 22:07:35,169 - train.py[line:613] - INFO: Pass 0, trainbatch 10, loss 4.66865539551, acc1 0.0, time 0.21 sec
2019-09-15 22:07:40,741 - train.py[line:613] - INFO: Pass 0, trainbatch 20, loss 4.40447187424, acc1 0.0, time 0.21 sec
Exception: /paddle/paddle/fluid/operators/cross_entropy_op.h:159 Assertion `label >= 0 && label < feature_size_` failed (The label is out of the range. 102).
F0915 22:07:43.791612 5309 device_context.cc:333] cudaStreamSynchronize unspecified launch failure errno:4
*** Check failure stack trace: ***
@ 0x7fa5c8e857dd google::LogMessage::Fail()
@ 0x7fa5c8e8928c google::LogMessage::SendToLog()
@ 0x7fa5c8e85303 google::LogMessage::Flush()
@ 0x7fa5c8e8a79e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fa5cae80fcd _ZNSt17_Function_handlerIFvvEZNK6paddle8platform17CUDADeviceContext4WaitEvEUlvE_E9_M_invokeERKSt9_Any_data
@ 0x7fa5cae8ea54 paddle::platform::TemporaryAllocator::Release()
@ 0x7fa5cae83fa1 paddle::platform::CUDADeviceContext::Wait()
@ 0x7fa5caddaa24 paddle::framework::TransDataDevice()
@ 0x7fa5cadd9ade paddle::framework::TransformData()
@ 0x7fa5cadd223d paddle::framework::OperatorWithKernel::PrepareData()
@ 0x7fa5cadd336d paddle::framework::OperatorWithKernel::RunImpl()
@ 0x7fa5cadd3811 paddle::framework::OperatorWithKernel::RunImpl()
@ 0x7fa5cadce3a3 paddle::framework::OperatorBase::Run()
@ 0x7fa5c8ecabae paddle::framework::Executor::RunPreparedContext()
@ 0x7fa5c8ecdc4f paddle::framework::Executor::Run()
@ 0x7fa5c8d2e92d _ZZN8pybind1112cpp_function10initializeIZN6paddle6pybindL22pybind11_init_core_avxERNS_6moduleEEUlRNS2_9framework8ExecutorERKNS6_11ProgramDescEPNS6_5ScopeEibbRKSt6vectorISsSaISsEEE85_vIS8_SB_SD_ibbSI_EINS_4nameENS_9is_methodENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNES10_
@ 0x7fa5c8d70976 pybind11::cpp_function::dispatcher()
@ 0x5573b6 PyEval_EvalFrameEx
@ 0x559921 PyEval_EvalCodeEx
@ 0x551f0d PyEval_EvalFrameEx
@ 0x559921 PyEval_EvalCodeEx
@ 0x550cca PyEval_EvalFrameEx
@ 0x559921 PyEval_EvalCodeEx
@ 0x551f0d PyEval_EvalFrameEx
@ 0x559921 PyEval_EvalCodeEx
@ 0x5af5e2 PyEval_EvalCode
@ 0x57975b (unknown)
@ 0x41a130 PyRun_FileExFlags
@ 0x41ab77 PyRun_SimpleFileExFlags
@ 0x41c3f3 Py_Main
@ 0x7fa627434f45 __libc_start_main
@ 0x5785ee (unknown)
Aborted (core dumped)
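The assertion `label >= 0 && label < feature_size_` in the log above fires when some label equals or exceeds class_dim (102): with class_dim = 102, valid labels are 0..101, but the 102-flower dataset numbers its classes 1..102. A minimal self-contained sanity check, assuming train.txt rows have the form `path label` (the `sample` lines below are made up for illustration):

```python
# Scan label values to see whether they are 1-based (1..102) instead of
# the 0-based range (0..101) that cross_entropy expects for class_dim=102.

def label_range(lines):
    """Return (min, max) over the second whitespace-separated field."""
    labels = [int(line.split()[1]) for line in lines if line.strip()]
    return min(labels), max(labels)

# made-up rows standing in for the real train.txt
sample = ["img/001.jpg 1", "img/002.jpg 57", "img/003.jpg 102"]
lo, hi = label_range(sample)
print(lo, hi)  # 1 102 -> a max of class_dim means the labels need shifting down by 1
```

If the maximum equals class_dim, the labels are 1-based and must be shifted to 0-based before being fed to cross_entropy.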

# -*- coding: utf-8 -*
"""
Train a common vision backbone for a classification task.
Place the training images and the class file label_list.txt in the same folder.
The program first reads train.txt to get the number of classes and the number of images.
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import numpy as np
import time
import math
import paddle
import paddle.fluid as fluid
import codecs
import logging

from paddle.fluid.initializer import MSRA
from paddle.fluid.initializer import Uniform
from paddle.fluid.param_attr import ParamAttr
from PIL import Image
from PIL import ImageEnhance

train_parameters = {
    "input_size": [3, 224, 224],
    "class_dim": -1,  # number of classes, filled in when the custom reader is initialized
    "image_count": -1,  # number of training images, filled in when the custom reader is initialized
    "label_dict": {},
    "data_dir": "data/data12479/hackathon-blossom-flower-classification",  # training data location
    "train_file_list": "train.txt",
    "label_file": "label_list.txt",
    "save_freeze_dir": "./freeze-model",
    "save_persistable_dir": "./persistable-params",
    "continue_train": True,        # resume from the last saved parameters; takes priority over the pretrained model
    "pretrained": True,            # whether to use a pretrained model
    "pretrained_dir": "data/data6487/ResNet50_pretrained", 
    "mode": "train",
    "num_epochs": 120,
    "train_batch_size": 20,
    "mean_rgb": [127.5, 127.5, 127.5],  # per-channel mean; ideally computed from the training data, here simply the midpoint
    "use_gpu": True,
    "image_enhance_strategy": {  # image augmentation settings
        "need_distort": True,  # enable color distortion
        "need_rotate": True,   # enable random rotation
        "need_crop": True,      # enable random cropping
        "need_flip": True,      # enable random horizontal flipping
        "hue_prob": 0.5,
        "hue_delta": 18,
        "contrast_prob": 0.5,
        "contrast_delta": 0.5,
        "saturation_prob": 0.5,
        "saturation_delta": 0.5,
        "brightness_prob": 0.5,
        "brightness_delta": 0.125
    },
    "early_stop": {
        "sample_frequency": 50,
        "successive_limit": 3,
        "good_acc1": 0.92
    },
    "rsm_strategy": {
        "learning_rate": 0.002,
        "lr_epochs": [20, 40, 60, 80, 100],
        "lr_decay": [1, 0.5, 0.25, 0.1, 0.01, 0.002]
    },
    "momentum_strategy": {
        "learning_rate": 0.002,
        "lr_epochs": [20, 40, 60, 80, 100],
        "lr_decay": [1, 0.5, 0.25, 0.1, 0.01, 0.002]
    },
    "sgd_strategy": {
        "learning_rate": 0.002,
        "lr_epochs": [20, 40, 60, 80, 100],
        "lr_decay": [1, 0.5, 0.25, 0.1, 0.01, 0.002]
    },
    "adam_strategy": {
        "learning_rate": 0.002
    }
}


class ResNet(object):
    """
    ResNet network definition
    """

    def __init__(self, layers=50):
        """
        ResNet constructor
        :param layers: number of layers
        """
        self.layers = layers

    def name(self):
        """
        Get the network name
        :return:
        """
        return 'resnet'

    def net(self, input, class_dim=1000):
        """
        Build the network
        :param input: input image
        :param class_dim: number of classes
        :return:
        """
        layers = self.layers
        supported_layers = [50, 101, 152]
        assert layers in supported_layers, \
            "supported layers are {} but input layer is {}".format(supported_layers, layers)

        if layers == 50:
            depth = [3, 4, 6, 3]
        elif layers == 101:
            depth = [3, 4, 23, 3]
        elif layers == 152:
            depth = [3, 8, 36, 3]
        num_filters = [64, 128, 256, 512]

        conv = self.conv_bn_layer(
            input=input,
            num_filters=64,
            filter_size=7,
            stride=2,
            act='relu',
            name="conv1")
        conv = fluid.layers.pool2d(
            input=conv,
            pool_size=3,
            pool_stride=2,
            pool_padding=1,
            pool_type='max')

        for block in range(len(depth)):
            for i in range(depth[block]):
                if layers in [101, 152] and block == 2:
                    if i == 0:
                        conv_name = "res" + str(block + 2) + "a"
                    else:
                        conv_name = "res" + str(block + 2) + "b" + str(i)
                else:
                    conv_name = "res" + str(block + 2) + chr(97 + i)
                conv = self.bottleneck_block(
                    input=conv,
                    num_filters=num_filters[block],
                    stride=2 if i == 0 and block != 0 else 1,
                    name=conv_name)

        pool = fluid.layers.pool2d(input=conv, pool_size=7, pool_type='avg', global_pooling=True)
        stdv = 1.0 / math.sqrt(pool.shape[1] * 1.0)
        out = fluid.layers.fc(input=pool,
                              size=class_dim,
                              act='softmax',
                              param_attr=fluid.param_attr.ParamAttr(initializer=Uniform(-stdv, stdv)))
        return out

    def conv_bn_layer(self,
                      input,
                      num_filters,
                      filter_size,
                      stride=1,
                      groups=1,
                      act=None,
                      name=None):
        """
        Convenience convolution block that includes batch normalization
        :param input: input feature map
        :param num_filters: number of filters
        :param filter_size: filter size
        :param stride: stride
        :param groups: number of groups
        :param act: activation function
        :param name: layer name
        :return:
        """
        conv = fluid.layers.conv2d(
            input=input,
            num_filters=num_filters,
            filter_size=filter_size,
            stride=stride,
            padding=(filter_size - 1) // 2,
            groups=groups,
            act=None,
            param_attr=ParamAttr(name=name + "_weights"),
            bias_attr=False,
            name=name + '.conv2d.output.1')
        if name == "conv1":
            bn_name = "bn_" + name
        else:
            bn_name = "bn" + name[3:]
        return fluid.layers.batch_norm(
            input=conv,
            act=act,
            name=bn_name + '.output.1',
            param_attr=ParamAttr(name=bn_name + '_scale'),
            bias_attr=ParamAttr(bn_name + '_offset'),
            moving_mean_name=bn_name + '_mean',
            moving_variance_name=bn_name + '_variance', )

    def shortcut(self, input, ch_out, stride, name):
        """
        Projection structure that makes the input shape match the output, enabling the final shortcut connection
        :param input:
        :param ch_out:
        :param stride:
        :param name:
        :return:
        """
        ch_in = input.shape[1]
        if ch_in != ch_out or stride != 1:
            return self.conv_bn_layer(input, ch_out, 1, stride, name=name)
        else:
            return input

    def bottleneck_block(self, input, num_filters, stride, name):
        """
        One of ResNet's shortcut blocks: a bottleneck that first reduces the channel dimension, convolves, then expands it back.
        The shortcut branch is projected to the same shape and added element-wise to complete the skip connection.
        :param input:
        :param num_filters:
        :param stride:
        :param name:
        :return:
        """
        conv0 = self.conv_bn_layer(
            input=input,
            num_filters=num_filters,
            filter_size=1,
            act='relu',
            name=name + "_branch2a")
        conv1 = self.conv_bn_layer(
            input=conv0,
            num_filters=num_filters,
            filter_size=3,
            stride=stride,
            act='relu',
            name=name + "_branch2b")
        conv2 = self.conv_bn_layer(
            input=conv1,
            num_filters=num_filters * 4,
            filter_size=1,
            act=None,
            name=name + "_branch2c")

        short = self.shortcut(
            input, num_filters * 4, stride, name=name + "_branch1")

        return fluid.layers.elementwise_add(
            x=short, y=conv2, act='relu', name=name + ".add.output.5")


def init_log_config():
    """
    Initialize logging configuration
    :return:
    """
    global logger
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    logger.handlers = []  # clear old handlers so re-running in a notebook doesn't duplicate every log line
    log_path = os.path.join(os.getcwd(), 'logs')
    if not os.path.exists(log_path):
        os.makedirs(log_path)
    log_name = os.path.join(log_path, 'train.log')
    sh = logging.StreamHandler()
    fh = logging.FileHandler(log_name, mode='w')
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter("%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s")
    fh.setFormatter(formatter)
    sh.setFormatter(formatter)
    logger.addHandler(sh)
    logger.addHandler(fh)


def init_train_parameters():
    """
    Initialize training parameters, mainly the image count and the number of classes
    :return:
    """
    train_file_list = os.path.join(train_parameters['data_dir'], train_parameters['train_file_list'])
    label_list = os.path.join(train_parameters['data_dir'], train_parameters['label_file'])
    index = 0
    with codecs.open(label_list, encoding='utf-8') as flist:
        lines = [line.strip() for line in flist]
        for line in lines:
            parts = line.strip().split()
            train_parameters['label_dict'][parts[1]] = int(parts[0])
            index += 1
        train_parameters['class_dim'] = index
    with codecs.open(train_file_list, encoding='utf-8') as flist:
        lines = [line.strip() for line in flist]
        train_parameters['image_count'] = len(lines)


def resize_img(img, target_size):
    """
    Force-resize an image to target_size
    :param img:
    :param target_size:
    :return:
    """
    img = img.resize((target_size[1], target_size[2]), Image.BILINEAR)
    return img


def random_crop(img, scale=[0.08, 1.0], ratio=[3. / 4., 4. / 3.]):
    aspect_ratio = math.sqrt(np.random.uniform(*ratio))
    w = 1. * aspect_ratio
    h = 1. / aspect_ratio

    bound = min((float(img.size[0]) / img.size[1]) / (w**2),
                (float(img.size[1]) / img.size[0]) / (h**2))
    scale_max = min(scale[1], bound)
    scale_min = min(scale[0], bound)

    target_area = img.size[0] * img.size[1] * np.random.uniform(scale_min,
                                                                scale_max)
    target_size = math.sqrt(target_area)
    w = int(target_size * w)
    h = int(target_size * h)

    i = np.random.randint(0, img.size[0] - w + 1)
    j = np.random.randint(0, img.size[1] - h + 1)

    img = img.crop((i, j, i + w, j + h))
    img = img.resize((train_parameters['input_size'][1], train_parameters['input_size'][2]), Image.BILINEAR)
    return img


def rotate_image(img):
    """
    Augmentation: random rotation angle
    """
    angle = np.random.randint(-14, 15)
    img = img.rotate(angle)
    return img


def random_brightness(img):
    """
    Augmentation: brightness adjustment
    :param img:
    :return:
    """
    prob = np.random.uniform(0, 1)
    if prob < train_parameters['image_enhance_strategy']['brightness_prob']:
        brightness_delta = train_parameters['image_enhance_strategy']['brightness_delta']
        delta = np.random.uniform(-brightness_delta, brightness_delta) + 1
        img = ImageEnhance.Brightness(img).enhance(delta)
    return img


def random_contrast(img):
    """
    Augmentation: contrast adjustment
    :param img:
    :return:
    """
    prob = np.random.uniform(0, 1)
    if prob < train_parameters['image_enhance_strategy']['contrast_prob']:
        contrast_delta = train_parameters['image_enhance_strategy']['contrast_delta']
        delta = np.random.uniform(-contrast_delta, contrast_delta) + 1
        img = ImageEnhance.Contrast(img).enhance(delta)
    return img


def random_saturation(img):
    """
    Augmentation: saturation adjustment
    :param img:
    :return:
    """
    prob = np.random.uniform(0, 1)
    if prob < train_parameters['image_enhance_strategy']['saturation_prob']:
        saturation_delta = train_parameters['image_enhance_strategy']['saturation_delta']
        delta = np.random.uniform(-saturation_delta, saturation_delta) + 1
        img = ImageEnhance.Color(img).enhance(delta)
    return img


def random_hue(img):
    """
    Augmentation: hue adjustment
    :param img:
    :return:
    """
    prob = np.random.uniform(0, 1)
    if prob < train_parameters['image_enhance_strategy']['hue_prob']:
        hue_delta = train_parameters['image_enhance_strategy']['hue_delta']
        delta = np.random.uniform(-hue_delta, hue_delta)
        img_hsv = np.array(img.convert('HSV'))
        img_hsv[:, :, 0] = img_hsv[:, :, 0] + delta
        img = Image.fromarray(img_hsv, mode='HSV').convert('RGB')
    return img


def distort_color(img):
    """
    Probabilistic color augmentation
    :param img:
    :return:
    """
    prob = np.random.uniform(0, 1)
    # Apply different distort order
    if prob < 0.35:
        img = random_brightness(img)
        img = random_contrast(img)
        img = random_saturation(img)
        img = random_hue(img)
    elif prob < 0.7:
        img = random_brightness(img)
        img = random_saturation(img)
        img = random_hue(img)
        img = random_contrast(img)
    return img


def custom_image_reader(file_list, data_dir, mode):
    """
    Custom image reader; the class count and image count are initialized first
    :param file_list:
    :param data_dir:
    :param mode:
    :return:
    """
    with codecs.open(file_list) as flist:
        lines = [line.strip() for line in flist]

    def reader():
        np.random.shuffle(lines)
        for line in lines:
            if mode == 'train' or mode == 'val':
                img_path, label = line.split()
                img = Image.open(img_path)
                try:
                    if img.mode != 'RGB':
                        img = img.convert('RGB')
                    if train_parameters['image_enhance_strategy']['need_distort'] == True:
                        img = distort_color(img)
                    if train_parameters['image_enhance_strategy']['need_rotate'] == True:
                        img = rotate_image(img)
                    if train_parameters['image_enhance_strategy']['need_crop'] == True:
                        img = random_crop(img, train_parameters['input_size'])
                    if train_parameters['image_enhance_strategy']['need_flip'] == True:
                        mirror = int(np.random.uniform(0, 2))
                        if mirror == 1:
                            img = img.transpose(Image.FLIP_LEFT_RIGHT)
                    # HWC--->CHW && normalized
                    img = np.array(img).astype('float32')
                    img -= train_parameters['mean_rgb']
                    img = img.transpose((2, 0, 1))  # HWC to CHW
                    img *= 0.007843                 # normalize pixel values (1/127.5, roughly to [-1, 1])
                    yield img, int(label)
                except Exception as e:
                    logger.warning("skip bad sample %s: %s", img_path, e)  # guard against unreadable images, but log instead of failing silently
            elif mode == 'test':
                img_path = os.path.join(data_dir, line)
                img = Image.open(img_path)
                if img.mode != 'RGB':
                    img = img.convert('RGB')
                img = resize_img(img, train_parameters['input_size'])
                # HWC--->CHW && normalized
                img = np.array(img).astype('float32')
                img -= train_parameters['mean_rgb']
                img = img.transpose((2, 0, 1))  # HWC to CHW
                img *= 0.007843  # normalize pixel values
                yield img

    return reader


def optimizer_momentum_setting():
    """
    Step learning-rate decay, suited to relatively large training sets
    """
    learning_strategy = train_parameters['momentum_strategy']
    batch_size = train_parameters["train_batch_size"]
    iters = train_parameters["image_count"] // batch_size
    lr = learning_strategy['learning_rate']

    boundaries = [i * iters for i in learning_strategy["lr_epochs"]]
    values = [i * lr for i in learning_strategy["lr_decay"]]
    learning_rate = fluid.layers.piecewise_decay(boundaries, values)
    optimizer = fluid.optimizer.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9)
    return optimizer


def optimizer_rms_setting():
    """
    Step learning-rate decay, suited to relatively large training sets
    """
    batch_size = train_parameters["train_batch_size"]
    iters = train_parameters["image_count"] // batch_size
    learning_strategy = train_parameters['rsm_strategy']
    lr = learning_strategy['learning_rate']

    boundaries = [i * iters for i in learning_strategy["lr_epochs"]]
    values = [i * lr for i in learning_strategy["lr_decay"]]

    optimizer = fluid.optimizer.RMSProp(
        learning_rate=fluid.layers.piecewise_decay(boundaries, values))

    return optimizer


def optimizer_sgd_setting():
    """
    Loss decreases relatively slowly but the final result is good; step learning-rate decay suits relatively large training sets
    """
    learning_strategy = train_parameters['sgd_strategy']
    batch_size = train_parameters["train_batch_size"]
    iters = train_parameters["image_count"] // batch_size
    lr = learning_strategy['learning_rate']

    boundaries = [i * iters for i in learning_strategy["lr_epochs"]]
    values = [i * lr for i in learning_strategy["lr_decay"]]
    learning_rate = fluid.layers.piecewise_decay(boundaries, values)
    optimizer = fluid.optimizer.SGD(learning_rate=learning_rate)
    return optimizer


def optimizer_adam_setting():
    """
    Brings the loss down quickly, but tends to lose momentum late in training
    """
    learning_strategy = train_parameters['adam_strategy']
    learning_rate = learning_strategy['learning_rate']
    optimizer = fluid.optimizer.Adam(learning_rate=learning_rate)
    return optimizer


def load_params(exe, program):
    if train_parameters['continue_train'] and os.path.exists(train_parameters['save_persistable_dir']):
        logger.info('load params from retrain model')
        fluid.io.load_persistables(executor=exe,
                                   dirname=train_parameters['save_persistable_dir'],
                                   main_program=program)
    elif train_parameters['pretrained'] and os.path.exists(train_parameters['pretrained_dir']):
        logger.info('load params from pretrained model')
        def if_exist(var):
            return os.path.exists(os.path.join(train_parameters['pretrained_dir'], var.name))

        fluid.io.load_vars(exe, train_parameters['pretrained_dir'], main_program=program,
                           predicate=if_exist)


def train():
    train_prog = fluid.Program()
    train_startup = fluid.Program()
    logger.info("create prog success")
    logger.info("train config: %s", str(train_parameters))
    logger.info("build input custom reader and data feeder")
    file_list = os.path.join(train_parameters['data_dir'], "train.txt")
    mode = train_parameters['mode']
    batch_reader = paddle.batch(custom_image_reader(file_list, train_parameters['data_dir'], mode),
                                batch_size=train_parameters['train_batch_size'],
                                drop_last=True)
                                
    place = fluid.CUDAPlace(0) if train_parameters['use_gpu'] else fluid.CPUPlace()
    # placeholders for the input data
    img = fluid.layers.data(name='img', shape=train_parameters['input_size'], dtype='float32')
    label = fluid.layers.data(name='label', shape=[1], dtype='int64')
    feeder = fluid.DataFeeder(feed_list=[img, label], place=place)

    # pick one of the networks
    logger.info("build network")
    model = ResNet()
    out = model.net(input=img, class_dim=train_parameters['class_dim'])
    cost = fluid.layers.cross_entropy(out, label)
    avg_cost = fluid.layers.mean(x=cost)
    acc_top1 = fluid.layers.accuracy(input=out, label=label, k=1)
    # pick one of the optimizers
    optimizer = optimizer_rms_setting()
    # optimizer = optimizer_momentum_setting()
    # optimizer = optimizer_sgd_setting()
    # optimizer = optimizer_adam_setting()
    optimizer.minimize(avg_cost)
    exe = fluid.Executor(place)

    main_program = fluid.default_main_program()
    exe.run(fluid.default_startup_program())
    train_fetch_list = [avg_cost.name, acc_top1.name, out.name]
    
    load_params(exe, main_program)

    # main body of the training loop
    stop_strategy = train_parameters['early_stop']
    successive_limit = stop_strategy['successive_limit']
    sample_freq = stop_strategy['sample_frequency']
    good_acc1 = stop_strategy['good_acc1']
    successive_count = 0
    stop_train = False
    total_batch_count = 0
    for pass_id in range(train_parameters["num_epochs"]):
        logger.info("current pass: %d, start read image", pass_id)
        batch_id = 0
        for step_id, data in enumerate(batch_reader()):
            t1 = time.time()
            loss, acc1, pred_ot = exe.run(main_program,
                                          feed=feeder.feed(data),
                                          fetch_list=train_fetch_list)
            t2 = time.time()
            batch_id += 1
            total_batch_count += 1
            period = t2 - t1
            loss = np.mean(np.array(loss))
            acc1 = np.mean(np.array(acc1))
            if batch_id % 10 == 0:
                logger.info("Pass {0}, trainbatch {1}, loss {2}, acc1 {3}, time {4}".format(pass_id, batch_id, loss, acc1,
                                                                                            "%2.2f sec" % period))
            # simple early-stopping strategy: stop once the target accuracy is reached several times in a row
            if acc1 >= good_acc1:
                successive_count += 1
                logger.info("current acc1 {0} meets good {1}, successive count {2}".format(acc1, good_acc1, successive_count))
                fluid.io.save_inference_model(dirname=train_parameters['save_freeze_dir'],
                                              feeded_var_names=['img'],
                                              target_vars=[out],
                                              main_program=main_program,
                                              executor=exe)
                if successive_count >= successive_limit:
                    logger.info("end training")
                    stop_train = True
                    break
            else:
                successive_count = 0

            # general checkpointing strategy to limit losses from an unexpected stop
            if total_batch_count % sample_freq == 0:
                logger.info("temp save {0} batch train result, current acc1 {1}".format(total_batch_count, acc1))
                fluid.io.save_persistables(dirname=train_parameters['save_persistable_dir'],
                                           main_program=main_program,
                                           executor=exe)
        if stop_train:
            break
    logger.info("training till last epoch, end training")
    fluid.io.save_persistables(dirname=train_parameters['save_persistable_dir'],
                                           main_program=main_program,
                                           executor=exe)
    fluid.io.save_inference_model(dirname=train_parameters['save_freeze_dir'],
                                              feeded_var_names=['img'],
                                              target_vars=[out],
                                              main_program=main_program.clone(for_test=True),
                                              executor=exe)


if __name__ == '__main__':
    init_log_config()
    init_train_parameters()
    train()
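For the label-out-of-range error earlier in the thread, a minimal sketch of the usual fix, assuming train.txt carries 1-based labels (1..102) as the Oxford 102 flowers dataset does: shift to 0-based where the reader yields the label (the `to_zero_based` helper below is hypothetical, not part of the posted script):

```python
# Hypothetical helper: convert a 1-based dataset label to the 0-based range
# [0, class_dim) that fluid.layers.cross_entropy requires.

def to_zero_based(raw_label, class_dim=102):
    label = int(raw_label) - 1          # 1..102 -> 0..101
    assert 0 <= label < class_dim, "label %d out of range" % label
    return label

print(to_zero_based("102"))  # 101
```

In the reader this would replace `yield img, int(label)` with `yield img, to_zero_based(label)`, with class_dim kept at 102.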

 

misite_J
#9 Replied 2019-09

102-class flower classification. Project link: https://aistudio.baidu.com/bdvgpu/user/50358/124442/notebooks/124442.ipynb?redirects=1

The "fetch_list" is the same as in the 5-class version, but it fails with EnforceNotMet: Invoke operator fetch error. What could be the cause?

The error message is as follows:

2019-09-16 09:13:11,125 - [line:549] - INFO: create prog success
2019-09-16 09:13:11,127 - [line:550] - INFO: train config: {'num_epochs': 40, 'use_gpu': True, 'sgd_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'mean_rgb': [127.5, 127.5, 127.5], 'rsm_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data', 'mode': 'train', 'train_batch_size': 16, 'class_dim': 102, 'input_size': [3, 224, 224], 'momentum_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'image_count': 6552, 'label_dict': {}, 'image_enhance_strategy': {'need_distort': True, 'need_crop': True, 'saturation_prob': 0.5, 'hue_delta': 18, 'saturation_delta': 0.5, 'contrast_delta': 0.5, 'hue_prob': 0.5, 'need_flip': True, 'brightness_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'brightness_delta': 0.125}, 'early_stop': {'sample_frequency': 30, 'good_acc1': 0.85, 'successive_limit': 3}, 'continue_train': False, 'save_freeze_dir': './freeze-model', 'train_file_list': 'train.txt'}
2019-09-16 09:13:11,129 - [line:551] - INFO: build input custom reader and data feeder
2019-09-16 09:13:11,133 - [line:564] - INFO: build newwork
2019-09-16 09:13:13,611 - [line:593] - INFO: current pass: 0, start read image

---------------------------------------------------------------------------EnforceNotMet Traceback (most recent call last) in
645 init_log_config()
646 init_train_parameters()
--> 647 train()
in train()
597 loss, acc1, pred_ot = exe.run(main_program,
598 feed=feeder.feed(data),
--> 599 fetch_list=train_fetch_list)
600 t2 = time.time()
601 batch_id += 1
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
648 scope=scope,
649 return_numpy=return_numpy,
--> 650 use_program_cache=use_program_cache)
651 else:
652 if fetch_list and program._is_data_parallel and program._program and (
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in _run(self, program, exe, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
746 self._feed_data(program, feed, feed_var_name, scope)
747 if not use_program_cache:
--> 748 exe.run(program.desc, scope, 0, True, True, fetch_var_name)
749 else:
750 exe.run_cached_prepared_ctx(ctx, scope, False, False, False)
EnforceNotMet: Invoke operator fetch error.
Python Callstacks:
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/framework.py", line 1748, in append_op
attrs=kwargs.get("attrs", None))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 437, in _add_feed_fetch_ops
attrs={'col': i})
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 744, in _run
fetch_var_name=fetch_var_name)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 650, in run
use_program_cache=use_program_cache)
File "", line 599, in train
fetch_list=train_fetch_list)
File "", line 647, in
train()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3265, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3183, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3018, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2843, in _run_cell
return runner(coro)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2817, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1233, in inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/events.py", line 127, in _run
self._callback(*self._args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 1425, in _run_once
handle._run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 421, in run_forever
self._run_once()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
C++ Callstacks:
cudaMemcpy failed in paddle::platform::GpuMemcpySync (0x7f9bd585cc40 -> 0x7f9b96bff040, length: 4): unspecified launch failure at [/paddle/paddle/fluid/platform/gpu_info.cc:280]
PaddlePaddle Call Stacks:
0 0x7f9f9b7ff2e0p void paddle::platform::EnforceNotMet::Init(char const*, char const*, int) + 352
1 0x7f9f9b7ff659p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2 0x7f9f9d8159ccp paddle::platform::GpuMemcpySync(void*, void const*, unsigned long, cudaMemcpyKind) + 188
3 0x7f9f9b988079p void paddle::memory::Copy(paddle::platform::CPUPlace, void*, paddle::platform::CUDAPlace, void const*, unsigned long, CUstream_st*) + 249
4 0x7f9f9d7b5454p paddle::framework::TensorCopySync(paddle::framework::Tensor const&, boost::variant const&, paddle::framework::Tensor*) + 900
5 0x7f9f9d1f6490p paddle::operators::FetchOp::RunImpl(paddle::framework::Scope const&, boost::variant const&) const + 656
6 0x7f9f9d75802cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant const&) + 332
7 0x7f9f9b98947ep paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 382
8 0x7f9f9b98c51fp paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector > const&, bool) + 143
9 0x7f9f9b7f096dp
10 0x7f9f9b831ca6p
11 0x7fa0218ab199p PyCFunction_Call + 233
12 0x7fa0219463f9p PyEval_EvalFrameEx + 33545
13 0x7fa0219484b6p
14 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
15 0x7fa0219484b6p
16 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
17 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
18 0x7fa0219484b6p
19 0x7fa0219485a8p PyEval_EvalCodeEx + 72
20 0x7fa0219485ebp PyEval_EvalCode + 59
21 0x7fa02193bc5dp
22 0x7fa0218ab179p PyCFunction_Call + 201
23 0x7fa021945dbep PyEval_EvalFrameEx + 31950
24 0x7fa02187f410p _PyGen_Send + 128
25 0x7fa021944953p PyEval_EvalFrameEx + 26723
26 0x7fa02187f410p _PyGen_Send + 128
27 0x7fa021944953p PyEval_EvalFrameEx + 26723
28 0x7fa02187f410p _PyGen_Send + 128
29 0x7fa021945d60p PyEval_EvalFrameEx + 31856
30 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
31 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
32 0x7fa0219484b6p
33 0x7fa0219485a8p PyEval_EvalCodeEx + 72
34 0x7fa021887c33p
35 0x7fa02185633ap PyObject_Call + 106
36 0x7fa0219406eep PyEval_EvalFrameEx + 9726
37 0x7fa0219484b6p
38 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
39 0x7fa02187e6bap
40 0x7fa021939af6p
41 0x7fa0218ab179p PyCFunction_Call + 201
42 0x7fa021945dbep PyEval_EvalFrameEx + 31950
43 0x7fa0219484b6p
44 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
45 0x7fa02187e6bap
46 0x7fa021939af6p
47 0x7fa0218ab179p PyCFunction_Call + 201
48 0x7fa021945dbep PyEval_EvalFrameEx + 31950
49 0x7fa0219484b6p
50 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
51 0x7fa02187e6bap
52 0x7fa021939af6p
53 0x7fa0218ab179p PyCFunction_Call + 201
54 0x7fa021945dbep PyEval_EvalFrameEx + 31950
55 0x7fa0219484b6p
56 0x7fa0219485a8p PyEval_EvalCodeEx + 72
57 0x7fa021887b56p
58 0x7fa02185633ap PyObject_Call + 106
59 0x7fa0219406eep PyEval_EvalFrameEx + 9726
60 0x7fa02187f410p _PyGen_Send + 128
61 0x7fa021945d60p PyEval_EvalFrameEx + 31856
62 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
63 0x7fa0219484b6p
64 0x7fa0219485a8p PyEval_EvalCodeEx + 72
65 0x7fa021887c33p
66 0x7fa02185633ap PyObject_Call + 106
67 0x7fa0219406eep PyEval_EvalFrameEx + 9726
68 0x7fa0219484b6p
69 0x7fa0219485a8p PyEval_EvalCodeEx + 72
70 0x7fa021887b56p
71 0x7fa02185633ap PyObject_Call + 106
72 0x7fa0219bbccap
73 0x7fa02185633ap PyObject_Call + 106
74 0x7fa0219424c5p PyEval_EvalFrameEx + 17365
75 0x7fa0219484b6p
76 0x7fa0219485a8p PyEval_EvalCodeEx + 72
77 0x7fa021887b56p
78 0x7fa02185633ap PyObject_Call + 106
79 0x7fa0219406eep PyEval_EvalFrameEx + 9726
80 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
81 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
82 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
83 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
84 0x7fa0219461d0p PyEval_EvalFrameEx + 32992
85 0x7fa0219484b6p
86 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
87 0x7fa0219484b6p
88 0x7fa0219485a8p PyEval_EvalCodeEx + 72
89 0x7fa0219485ebp PyEval_EvalCode + 59
90 0x7fa02193bc5dp
91 0x7fa0218ab179p PyCFunction_Call + 201
92 0x7fa021945dbep PyEval_EvalFrameEx + 31950
93 0x7fa0219484b6p
94 0x7fa0219455b5p PyEval_EvalFrameEx + 29893
95 0x7fa0219484b6p
96 0x7fa0219485a8p PyEval_EvalCodeEx + 72
97 0x7fa021887b56p
98 0x7fa02185633ap PyObject_Call + 106
99 0x7fa021994ba1p

TigerLogician
#10, 2019-09
On Friday, unpacking the FLOWER 10 dataset did not yet produce files like ._image_07957.jpg, but today's extraction does, and the train.txt I generate includes them. When I download those files they turn out to be corrupt and unopenable, which breaks the subsequent batch_reader. For now I can only filter them out by string matching. What are these ._XXXX.jpg files for, and if they are useless, why not delete them?

Thanks for the suggestion. These files are an artifact of compression on macOS and do not matter much; they can be cleaned up later.
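If cleaning them up by hand is inconvenient, they can simply be skipped when the file list is built. A minimal sketch (the helper name and the set of extensions are illustrative, not part of the course code):

```python
import os

def list_images(data_dir):
    """Collect image paths under data_dir, skipping macOS AppleDouble
    files ("._*") and other hidden files that are not real images."""
    images = []
    for root, _, files in os.walk(data_dir):
        for name in sorted(files):
            if name.startswith("."):
                continue  # covers "._image_07957.jpg" and hidden files
            if name.lower().endswith((".jpg", ".jpeg", ".png")):
                images.append(os.path.join(root, name))
    return images
```

The same `name.startswith(".")` test can be applied when writing train.txt, so the corrupt entries never reach batch_reader.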

hahasoar
#11, 2019-09

Help needed: Experiment 2 (flower classification). se_ResNeXt training never produces any output. Running on CPU; no other code was changed.

2019-09-16 11:16:05,737 - [line:589] - INFO: create prog success
2019-09-16 11:16:05,741 - [line:590] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'contrast_prob': 0.5, 'saturation_prob': 0.5, 'need_crop': True, 'brightness_delta': 0.125, 'brightness_prob': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_flip': True, 'need_rotate': True, 'need_distort': True, 'saturation_delta': 0.5}, 'early_stop': {'sample_frequency': 50, 'good_acc1': 0.92, 'successive_limit': 3}, 'input_size': [3, 224, 224], 'class_dim': 5, 'pretrained_dir': 'data/data6595/SE_ResNext50_32x4d_pretrained', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'train_batch_size': 30, 'use_gpu': False, 'num_epochs': 120, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'label_file': 'label_list.txt', 'pretrained': True, 'mode': 'train', 'dropout_seed': None, 'adam_strategy': {'learning_rate': 0.002}, 'data_dir': 'data/data2815', 'save_freeze_dir': './freeze-model', 'label_dict': {'sunflowers': 3, 'daisy': 0, 'tulips': 4, 'dandelion': 1, 'roses': 2}, 'train_file_list': 'train.txt', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'image_count': 2979}
2019-09-16 11:16:05,743 - [line:591] - INFO: build input custom reader and data feeder
2019-09-16 11:16:05,748 - [line:605] - INFO: build newwork
2019-09-16 11:16:08,974 - [line:578] - INFO: load params from pretrained model
2019-09-16 11:16:10,179 - [line:634] - INFO: current pass: 0, start read image

misite_J
#12, 2019-09
102-class flower classification. Project: https://aistudio.baidu.com/bdvgpu/user/50358/124442/notebooks/124442.ipynb?redirects=1. The fetch_list is the same as in the 5-class task, but training fails with "EnforceNotMet: Invoke operator fetch error." What could be the cause? Key excerpts from the output:

2019-09-16 09:13:11,125 - [line:549] - INFO: create prog success
2019-09-16 09:13:11,127 - [line:550] - INFO: train config: {'num_epochs': 40, 'use_gpu': True, 'sgd_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'mean_rgb': [127.5, 127.5, 127.5], 'rsm_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data', 'mode': 'train', 'train_batch_size': 16, 'class_dim': 102, 'input_size': [3, 224, 224], 'momentum_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'image_count': 6552, 'label_dict': {}, 'image_enhance_strategy': {'need_distort': True, 'need_crop': True, 'saturation_prob': 0.5, 'hue_delta': 18, 'saturation_delta': 0.5, 'contrast_delta': 0.5, 'hue_prob': 0.5, 'need_flip': True, 'brightness_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'brightness_delta': 0.125}, 'early_stop': {'sample_frequency': 30, 'good_acc1': 0.85, 'successive_limit': 3}, 'continue_train': False, 'save_freeze_dir': './freeze-model', 'train_file_list': 'train.txt'}
2019-09-16 09:13:13,611 - [line:593] - INFO: current pass: 0, start read image
EnforceNotMet: Invoke operator fetch error.
C++ Callstacks:
cudaMemcpy failed in paddle::platform::GpuMemcpySync (0x7f9bd585cc40 -> 0x7f9b96bff040, length: 4): unspecified launch failure at [/paddle/paddle/fluid/platform/gpu_info.cc:280]

The full Python and C++ call stack is identical to the trace quoted in #2 above.

Project URL:

https://aistudio.baidu.com/aistudio/projectdetail/124442

TigerLogician
#13, 2019-09
hahasoar #11 (quoted above)

Try switching to a GPU environment and see if it runs.

TigerLogician
#14, 2019-09
I ported the original 5-class flower classification code to the 102-class task without changing any parameters, and it fails with an error that the label is out of range. Could the teacher point out where the problem is? I could not track it down myself; I am still a beginner. Log: [code] [code]

Did you change the number of classes (class_dim)? Please paste the project URL so it is easier to take a look.
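A common cause of this error: if any label in train.txt is greater than or equal to class_dim, the cross-entropy op reports an out-of-range label. A quick sanity check can catch that before training starts (a sketch under the assumption that each line looks like path<TAB>label, as in the course scripts; the helper name is made up):

```python
def check_labels(lines, class_dim):
    """Verify every label in a train-list falls in [0, class_dim).
    Each line is expected to look like "path/to/img.jpg\t<label>".
    Returns a list of offending (line_no, label) pairs."""
    bad = []
    for line_no, line in enumerate(lines, start=1):
        label = int(line.strip().split("\t")[-1])
        if not 0 <= label < class_dim:
            bad.append((line_no, label))
    return bad
```

Running it over the generated train.txt with class_dim=102 immediately shows whether the 5-class label mapping was left in place.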

TigerLogician
#15, 2019-09
I see regularizer=L2Decay(0.) in the code. Isn't that equivalent to having no L2 regularization at all? Normally regularization is written into the loss function; is that how it actually works here?

It is generally passed in through the regularizer argument; the framework then applies the penalty during the parameter update rather than you adding it to the loss by hand.
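Mathematically the two views coincide: adding (lambda/2)*||w||^2 to the loss contributes lambda*w to the gradient, which is exactly what the framework folds into the update. A plain-Python sketch (names are made up, not Paddle API) makes it clear that a coefficient of 0 — as in L2Decay(0.) — reduces to plain SGD:

```python
def sgd_step(w, grad, lr, l2_coeff):
    """One SGD update with L2 regularization folded into the gradient:
    effective gradient = grad + l2_coeff * w, i.e. the gradient of
    loss + (l2_coeff / 2) * ||w||^2."""
    return [wi - lr * (gi + l2_coeff * wi) for wi, gi in zip(w, grad)]

w = [1.0, -2.0]
grad = [0.5, 0.5]
no_reg = sgd_step(w, grad, lr=0.1, l2_coeff=0.0)    # L2Decay(0.): plain SGD
with_reg = sgd_step(w, grad, lr=0.1, l2_coeff=0.01)  # weights pulled toward 0
```

So yes, L2Decay(0.) amounts to no L2 penalty; a nonzero coefficient shrinks each weight slightly toward zero on every step.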

TigerLogician
#16, 2019-09
Building the 102-flower classifier. Project: https://aistudio.baidu.com/bdvgpu/user/85672/120561/notebooks/120561.ipynb?redirects=1. While the job is running, AI Studio simply restarts. Could the teacher take a look at the log?

2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success
2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'}
2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder
2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt
2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed
2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork
2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image

Check whether the batch size is too large.
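As a rough sanity check (a minimal sketch, not tied to the project's code), you can estimate how much input-tensor memory one batch of float32 images costs; halving `train_batch_size` halves it, and activations and gradients add a large multiple on top:

```python
# Rough estimate of input-tensor memory per training batch.
# Assumes float32 (4 bytes) images shaped [3, 224, 224], as in the
# train config shown in the logs above.
def batch_input_bytes(batch_size, shape=(3, 224, 224), dtype_bytes=4):
    n = dtype_bytes
    for d in shape:
        n *= d
    return batch_size * n

print(batch_input_bytes(24) // 1024)  # KB for the train_batch_size=24 in the log
```

If this runs out of GPU memory only after a restart, also check that no stale process is still holding the device.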

FlorianJin
#17 Replied 2019-09

I joined late and my fundamentals are weak. If I can't finish the assignments before the deadline, can I still do them afterwards?

kazenofeng
#18 Replied 2019-10

Recurrent neural network model: sentiment analysis

What does this code mean?

# Define the LSTM network
def lstm_net(ipt, input_dim):
    # Embed the input word IDs into 128-dim vectors
    emb = fluid.layers.embedding(input=ipt, size=[input_dim, 128], is_sparse=True)
    # First fully connected layer
    fc1 = fluid.layers.fc(input=emb, size=128)
    # Run a dynamic LSTM over the fully connected output
    lstm1, _ = fluid.layers.dynamic_lstm(input=fc1, size=128)
    # Max sequence pooling over the fully connected output
    fc2 = fluid.layers.sequence_pool(input=fc1, pool_type='max')
    # Max sequence pooling over the LSTM output
    lstm2 = fluid.layers.sequence_pool(input=lstm1, pool_type='max')
    # Final fully connected layer with softmax, size 2 (positive/negative)
    out = fluid.layers.fc(input=[fc2, lstm2], size=2, act='softmax')
    return out
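On what `sequence_pool(pool_type='max')` does conceptually: it collapses a variable-length sequence of feature vectors into a single vector by taking the element-wise maximum over time. A minimal NumPy sketch, purely for illustration (not the fluid implementation):

```python
import numpy as np

def sequence_pool_max(seq):
    # seq: [T, D] array, one D-dim feature vector per time step.
    # Returns a single [D] vector: the element-wise max over time.
    return seq.max(axis=0)

seq = np.array([[1.0, 5.0],
                [3.0, 2.0],
                [0.0, 4.0]])
print(sequence_pool_max(seq))  # [3. 5.]
```

The network then concatenates the two pooled 128-dim vectors (`fc2` and `lstm2`) as input to the final size-2 softmax layer.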

liuzz
#19 Replied 2019-10

I ported the original 5-class flower classification code to the 102-class task; the only parameters I changed were the dataset path and class_dim, but this line now errors: optimizer.minimize(avg_cost). I haven't found the cause in my project. Detailed log below:

2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 
'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,420 - [line:555] - INFO: build input custom reader and data feeder
2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork

------------
name: "cross_entropy2_9.tmp_0"
type {
type: LOD_TENSOR
lod_tensor {
tensor {
data_type: FP32
dims: -1
dims: 1
}
lod_level: 0
}
}
persistable: false

------------

---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
in
654 init_log_config()
655 init_train_parameters()
--> 656 train()
in train()
580 print(cost)
581 print("------------")
--> 582 optimizer.minimize(avg_cost)
583 exe = fluid.Executor(place)
584
in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs)
23 def __impl__(func, *args, **kwargs):
24 wrapped_func = decorator_func(func)
---> 25 return wrapped_func(*args, **kwargs)
26
27 return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py in __impl__(*args, **kwargs)
86 def __impl__(*args, **kwargs):
87 with _switch_tracer_mode_guard_(is_train=False):
---> 88 return func(*args, **kwargs)
89
90 return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
591 startup_program=startup_program,
592 parameter_list=parameter_list,
--> 593 no_grad_set=no_grad_set)
594
595 if grad_clip is not None and framework.in_dygraph_mode():
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
491 with program_guard(program, startup_program):
492 params_grads = append_backward(loss, parameter_list,
--> 493 no_grad_set, callbacks)
494 # Note: since we can't use all_reduce_op now,
495 # dgc_op should be the last op of one grad.
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
568 grad_to_var,
569 callbacks,
--> 570 input_grad_names_set=input_grad_names_set)
571
572 # Because calc_gradient may be called multiple times,
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in _append_backward_ops_(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
308 # Getting op's corresponding grad_op
309 grad_op_desc, op_grad_to_var = core.get_grad_op_desc(
--> 310 op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)
311
312 # If input_grad_names_set is not None, extend grad_op_descs only when
EnforceNotMet: grad_op_maker_ should not be null
Operator GradOpMaker has not been registered. at [/paddle/paddle/fluid/framework/op_info.h:69]
PaddlePaddle Call Stacks:
0 0x7f4188b04808p void paddle::platform::EnforceNotMet::Init(std::string, char const*, int) + 360
1 0x7f4188b04b57p paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) + 87
2 0x7f4188b05b1cp paddle::framework::OpInfo::GradOpMaker() const + 108
3 0x7f4188afd21ep
4 0x7f4188b36ca6p
5 0x7f420cb59199p PyCFunction_Call + 233
6 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950
7 0x7f420cbf64b6p
8 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
9 0x7f420cbf64b6p
10 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
11 0x7f420cbf64b6p
12 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
13 0x7f420cbf64b6p
14 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
15 0x7f420cb35c33p
16 0x7f420cb0433ap PyObject_Call + 106
17 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
18 0x7f420cbf64b6p
19 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
20 0x7f420cb35c33p
21 0x7f420cb0433ap PyObject_Call + 106
22 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
23 0x7f420cbf64b6p
24 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
25 0x7f420cbf64b6p
26 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
27 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
28 0x7f420cbf64b6p
29 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
30 0x7f420cbf65ebp PyEval_EvalCode + 59
31 0x7f420cbe9c5dp
32 0x7f420cb59179p PyCFunction_Call + 201
33 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950
34 0x7f420cb2d410p _PyGen_Send + 128
35 0x7f420cbf2953p PyEval_EvalFrameEx + 26723
36 0x7f420cb2d410p _PyGen_Send + 128
37 0x7f420cbf2953p PyEval_EvalFrameEx + 26723
38 0x7f420cb2d410p _PyGen_Send + 128
39 0x7f420cbf3d60p PyEval_EvalFrameEx + 31856
40 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
41 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
42 0x7f420cbf64b6p
43 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
44 0x7f420cb35c33p
45 0x7f420cb0433ap PyObject_Call + 106
46 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
47 0x7f420cbf64b6p
48 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
49 0x7f420cb2c6bap
50 0x7f420cbe7af6p
51 0x7f420cb59179p PyCFunction_Call + 201
52 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950
53 0x7f420cbf64b6p
54 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
55 0x7f420cb2c6bap
56 0x7f420cbe7af6p
57 0x7f420cb59179p PyCFunction_Call + 201
58 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950
59 0x7f420cbf64b6p
60 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
61 0x7f420cb2c6bap
62 0x7f420cbe7af6p
63 0x7f420cb59179p PyCFunction_Call + 201
64 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950
65 0x7f420cbf64b6p
66 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
67 0x7f420cb35b56p
68 0x7f420cb0433ap PyObject_Call + 106
69 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
70 0x7f420cb2d410p _PyGen_Send + 128
71 0x7f420cbf3d60p PyEval_EvalFrameEx + 31856
72 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
73 0x7f420cbf64b6p
74 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
75 0x7f420cb35c33p
76 0x7f420cb0433ap PyObject_Call + 106
77 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
78 0x7f420cbf64b6p
79 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
80 0x7f420cb35b56p
81 0x7f420cb0433ap PyObject_Call + 106
82 0x7f420cc69ccap
83 0x7f420cb0433ap PyObject_Call + 106
84 0x7f420cbf04c5p PyEval_EvalFrameEx + 17365
85 0x7f420cbf64b6p
86 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
87 0x7f420cb35b56p
88 0x7f420cb0433ap PyObject_Call + 106
89 0x7f420cbee6eep PyEval_EvalFrameEx + 9726
90 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
91 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
92 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
93 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
94 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992
95 0x7f420cbf64b6p
96 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893
97 0x7f420cbf64b6p
98 0x7f420cbf65a8p PyEval_EvalCodeEx + 72
99 0x7f420cbf65ebp PyEval_EvalCode + 59

liuzz
#20 Replied 2019-10
liuzz #19
I ported the original 5-class flower classification code to the 102-class flower task, changing only the dataset path and class_dim in the parameters. It errors at this line: optimizer.minimize(avg_cost). I haven't found the cause in my project; the detailed log is the same as the one posted above.
18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork 2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork ------------ name: "cross_entropy2_9.tmp_0" type { type: LOD_TENSOR lod_tensor { tensor { data_type: FP32 dims: -1 dims: 1 } lod_level: 0 } } persistable: false ------------ ---------------------------------------------------------------------------EnforceNotMet Traceback (most recent call last) in 654 init_log_config() 655 init_train_parameters() --> 656 train() in train() 580 print(cost) 581 print("------------") --> 582 optimizer.minimize(avg_cost) 583 exe = fluid.Executor(place) 584 in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip) /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs) 23 def __impl__(func, *args, **kwargs): 24 wrapped_func = decorator_func(func) ---> 25 return wrapped_func(*args, **kwargs) 26 27 return __impl__ /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py in __impl__(*args, **kwargs) 86 def __impl__(*args, **kwargs): 87 with _switch_tracer_mode_guard_(is_train=False): ---> 88 return func(*args, **kwargs) 89 90 return __impl__ /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip) 591 startup_program=startup_program, 592 parameter_list=parameter_list, --> 593 no_grad_set=no_grad_set) 594 595 if grad_clip is not None and framework.in_dygraph_mode(): /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in backward(self, loss, 
startup_program, parameter_list, no_grad_set, callbacks) 491 with program_guard(program, startup_program): 492 params_grads = append_backward(loss, parameter_list, --> 493 no_grad_set, callbacks) 494 # Note: since we can't use all_reduce_op now, 495 # dgc_op should be the last op of one grad. /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks) 568 grad_to_var, 569 callbacks, --> 570 input_grad_names_set=input_grad_names_set) 571 572 # Because calc_gradient may be called multiple times, /opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in _append_backward_ops_(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set) 308 # Getting op's corresponding grad_op 309 grad_op_desc, op_grad_to_var = core.get_grad_op_desc( --> 310 op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list) 311 312 # If input_grad_names_set is not None, extend grad_op_descs only when EnforceNotMet: grad_op_maker_ should not be null Operator GradOpMaker has not been registered. 
at [/paddle/paddle/fluid/framework/op_info.h:69] PaddlePaddle Call Stacks: 0 0x7f4188b04808p void paddle::platform::EnforceNotMet::Init(std::string, char const*, int) + 360 1 0x7f4188b04b57p paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) + 87 2 0x7f4188b05b1cp paddle::framework::OpInfo::GradOpMaker() const + 108 3 0x7f4188afd21ep 4 0x7f4188b36ca6p 5 0x7f420cb59199p PyCFunction_Call + 233 6 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950 7 0x7f420cbf64b6p 8 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 9 0x7f420cbf64b6p 10 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 11 0x7f420cbf64b6p 12 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 13 0x7f420cbf64b6p 14 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 15 0x7f420cb35c33p 16 0x7f420cb0433ap PyObject_Call + 106 17 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 18 0x7f420cbf64b6p 19 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 20 0x7f420cb35c33p 21 0x7f420cb0433ap PyObject_Call + 106 22 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 23 0x7f420cbf64b6p 24 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 25 0x7f420cbf64b6p 26 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 27 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 28 0x7f420cbf64b6p 29 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 30 0x7f420cbf65ebp PyEval_EvalCode + 59 31 0x7f420cbe9c5dp 32 0x7f420cb59179p PyCFunction_Call + 201 33 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950 34 0x7f420cb2d410p _PyGen_Send + 128 35 0x7f420cbf2953p PyEval_EvalFrameEx + 26723 36 0x7f420cb2d410p _PyGen_Send + 128 37 0x7f420cbf2953p PyEval_EvalFrameEx + 26723 38 0x7f420cb2d410p _PyGen_Send + 128 39 0x7f420cbf3d60p PyEval_EvalFrameEx + 31856 40 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 41 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 42 0x7f420cbf64b6p 43 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 44 0x7f420cb35c33p 45 0x7f420cb0433ap PyObject_Call + 106 46 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 47 0x7f420cbf64b6p 48 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 49 0x7f420cb2c6bap 50 0x7f420cbe7af6p 51 
0x7f420cb59179p PyCFunction_Call + 201 52 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950 53 0x7f420cbf64b6p 54 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 55 0x7f420cb2c6bap 56 0x7f420cbe7af6p 57 0x7f420cb59179p PyCFunction_Call + 201 58 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950 59 0x7f420cbf64b6p 60 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 61 0x7f420cb2c6bap 62 0x7f420cbe7af6p 63 0x7f420cb59179p PyCFunction_Call + 201 64 0x7f420cbf3dbep PyEval_EvalFrameEx + 31950 65 0x7f420cbf64b6p 66 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 67 0x7f420cb35b56p 68 0x7f420cb0433ap PyObject_Call + 106 69 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 70 0x7f420cb2d410p _PyGen_Send + 128 71 0x7f420cbf3d60p PyEval_EvalFrameEx + 31856 72 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 73 0x7f420cbf64b6p 74 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 75 0x7f420cb35c33p 76 0x7f420cb0433ap PyObject_Call + 106 77 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 78 0x7f420cbf64b6p 79 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 80 0x7f420cb35b56p 81 0x7f420cb0433ap PyObject_Call + 106 82 0x7f420cc69ccap 83 0x7f420cb0433ap PyObject_Call + 106 84 0x7f420cbf04c5p PyEval_EvalFrameEx + 17365 85 0x7f420cbf64b6p 86 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 87 0x7f420cb35b56p 88 0x7f420cb0433ap PyObject_Call + 106 89 0x7f420cbee6eep PyEval_EvalFrameEx + 9726 90 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 91 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 92 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 93 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 94 0x7f420cbf41d0p PyEval_EvalFrameEx + 32992 95 0x7f420cbf64b6p 96 0x7f420cbf35b5p PyEval_EvalFrameEx + 29893 97 0x7f420cbf64b6p 98 0x7f420cbf65a8p PyEval_EvalCodeEx + 72 99 0x7f420cbf65ebp PyEval_EvalCode + 59

Problem solved. It seems the cause was that I was loading the wrong pretrained model.
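Since the crash above only appeared after restarting the environment, a mismatched pretrained snapshot is a plausible culprit. Below is a minimal, framework-free sketch of the usual guard: before handing parameters to something like `fluid.io.load_vars`, check which of the network's parameter names actually have files in the pretrained directory. The `check_pretrained` helper and the flat one-file-per-parameter layout are illustrative assumptions, not the project's actual code:

```python
import os

def check_pretrained(var_names, pretrained_dir):
    """Partition the network's parameter names into those that have a
    matching file in pretrained_dir and those that do not. A long
    'missing' list is a strong hint the wrong snapshot is being loaded."""
    found = [n for n in var_names
             if os.path.exists(os.path.join(pretrained_dir, n))]
    missing = [n for n in var_names
               if not os.path.exists(os.path.join(pretrained_dir, n))]
    return found, missing
```

This existence check is essentially what the common `if_exist` predicate passed to `fluid.io.load_vars` does in the fluid API, so newly added layers (for example a fresh 102-way FC head) are skipped rather than overwritten by incompatible weights.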

liuzz
#21 Replied 2019-10

Teacher, I have a question. Take a five-class flower classification problem: how do I add an "other" class? For example, if I upload an image of a flower that is not one of the five known kinds, I want it predicted as "other". How should I set up the training for that?
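One common recipe is to collect images that belong to none of the five flowers, label them all as a sixth "other" class, and retrain with class_dim set to 6. A lighter-weight alternative, sketched below in plain Python, keeps the five-way model and maps low-confidence softmax outputs to "other" at prediction time; the class index 5 and the 0.6 cutoff are illustrative assumptions to be tuned on validation data:

```python
OTHER_CLASS = 5   # hypothetical index for the extra "other" bucket
THRESHOLD = 0.6   # assumed confidence cutoff; tune on a validation set

def predict_with_other(probs, threshold=THRESHOLD):
    """Given a softmax probability list over the 5 known flower classes,
    return the argmax when the model is confident enough, otherwise
    the synthetic OTHER_CLASS label."""
    top = max(range(len(probs)), key=lambda i: probs[i])
    return top if probs[top] >= threshold else OTHER_CLASS
```

Thresholding is cheap but crude: images outside the known classes can still produce confident (wrong) softmax outputs, so training an explicit sixth class on real negative samples usually generalizes better.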
