Q&A thread: the 14 enterprise-grade hands-on tasks of PaddleCamp Season 4
Welcome to the 14 enterprise-grade hands-on tasks of PaddleCamp Season 4 with PaddlePaddle. Please post any questions you run into while studying as replies to this thread; the instructors will answer them weekly.
New students who want to join can register via this link: https://aistudio.baidu.com/aistudio/questionnaire?activityid=567
(Scan the QR code to register and add the class supervisor on WeChat; the supervisor will arrange payment and add you to the group.)
------- Class supervisor's WeChat -------
Hello teacher, I ran into the problem below while working on the FLOWER102 assignment. The code had been debugged and ran normally, reaching an accuracy of 80%, but after I stopped and restarted the environment, running it again raises an error. Using pdb I checked that image_count=6552 and class_dim=102, and before each training mini-batch I inspected data: it is a list of 24 tuples, where the first element of each tuple is the image data and the second is the label, all normal, which shows that train.txt is also being read correctly. Please help me take a look; my project is at https://aistudio.baidu.com/aistudio/projectdetail/120582. The detailed error message is below:
2019-09-07 14:50:36,084 - [line:557] - INFO: create prog success
2019-09-07 14:50:36,086 - [line:558] - INFO: train config: {'train_batch_size': 24, 'save_persistable_dir': './persistable-params', 'num_epochs': 120, 'image_enhance_strategy': {'saturation_prob': 0.5, 'need_rotate': True, 'need_distort': True, 'contrast_delta': 0.5, 'need_crop': True, 'hue_delta': 18, 'need_flip': True, 'brightness_delta': 0.125, 'saturation_delta': 0.5, 'brightness_prob': 0.5, 'hue_prob': 0.5, 'contrast_prob': 0.5}, 'data_dir': 'work', 'label_file': 'label_list.txt', 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'label_dict': {'92': 94, '3': 25, '36': 32, '85': 86, '97': 99, '42': 39, '46': 43, '29': 24, '96': 98, '41': 38, '78': 78, '47': 44, '54': 52, '52': 50, '87': 88, '22': 17, '64': 63, '19': 13, '38': 34, '43': 40, '21': 16, '95': 97, '11': 5, '9': 91, '23': 18, '31': 27, '18': 12, '30': 26, '5': 47, '1': 0, '44': 41, '15': 9, '35': 31, '65': 64, '39': 35, '16': 10, '24': 19, '93': 95, '40': 37, '100': 2, '67': 66, '84': 85, '25': 20, '76': 76, '79': 79, '32': 28, '74': 74, '99': 101, '86': 87, '77': 77, '12': 6, '102': 4, '26': 21, '88': 89, '59': 57, '73': 73, '94': 96, '7': 69, '51': 49, '17': 11, '45': 42, '70': 70, '75': 75, '82': 83, '60': 59, '90': 92, '27': 22, '33': 29, '49': 46, '4': 36, '72': 72, '10': 1, '61': 60, '80': 81, '50': 48, '91': 93, '69': 68, '53': 51, '83': 84, '13': 7, '66': 65, '20': 15, '68': 67, '37': 33, '89': 90, '58': 56, '14': 8, '34': 30, '55': 53, '62': 61, '57': 55, '8': 80, '101': 3, '48': 45, '98': 100, '63': 62, '2': 14, '28': 23, '6': 58, '56': 54, '81': 82, '71': 71}, 'save_freeze_dir': './freeze-model', 'train_file_list': 'train.txt', 'pretrained': True, 'image_count': 6552, 'pretrained_dir': 'work/Pretrained_Model/ResNet50_pretrained', 'use_gpu': True, 'early_stop': {'good_acc1': 0.86, 'sample_frequency': 50, 'successive_limit': 20}, 'momentum_strategy': {'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'sgd_strategy': {'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100], 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'input_size': [3, 224, 224], 'adam_strategy': {'learning_rate': 0.002}, 'class_dim': 102, 'continue_train': False}
2019-09-07 14:50:36,087 - [line:559] - INFO: build input custom reader and data feeder
2019-09-07 14:50:36,091 - [line:572] - INFO: build newwork
2019-09-07 14:50:36,475 - [line:546] - INFO: load params from pretrained model
2019-09-07 14:50:36,648 - [line:601] - INFO: current pass: 0, start read image
---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
in
653 init_log_config()
654 init_train_parameters()
--> 655 train()
in train()
605 loss, acc1, pred_ot = exe.run(main_program,
606 feed=feeder.feed(data),
--> 607 fetch_list=train_fetch_list)
608 t2 = time.time()
609 batch_id += 1
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
648 scope=scope,
649 return_numpy=return_numpy,
--> 650 use_program_cache=use_program_cache)
651 else:
652 if fetch_list and program._is_data_parallel and program._program and (
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in _run(self, program, exe, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
746 self._feed_data(program, feed, feed_var_name, scope)
747 if not use_program_cache:
--> 748 exe.run(program.desc, scope, 0, True, True, fetch_var_name)
749 else:
750 exe.run_cached_prepared_ctx(ctx, scope, False, False, False)
EnforceNotMet: Invoke operator adam error.
Python Callstacks:
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/framework.py", line 1748, in append_op
attrs=kwargs.get("attrs", None))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 1381, in _append_optimize_op
stop_gradient=True)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 386, in _create_optimization_pass
param_and_grad)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 531, in apply_gradients
optimize_ops = self._create_optimization_pass(params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 561, in apply_optimize
optimize_ops = self.apply_gradients(params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py", line 600, in minimize
loss, startup_program=startup_program, params_grads=params_grads)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py", line 88, in __impl__
return func(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "", line 2, in minimize
File "", line 584, in train
optimizer.minimize(avg_cost)
File "", line 656, in
train()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3265, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3183, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3018, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2843, in _run_cell
return runner(coro)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2817, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1233, in inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/events.py", line 127, in _run
self._callback(*self._args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 1425, in _run_once
handle._run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 421, in run_forever
self._run_once()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
C++ Callstacks:
Enforce failed. Expected param_dims == ctx->GetInputDim("Moment1"), but received param_dims:1000 != ctx->GetInputDim("Moment1"):102.
Param and Moment1 input of AdamOp should have same dimension at [/paddle/paddle/fluid/operators/optimizers/adam_op.cc:64]
PaddlePaddle Call Stacks:
0 0x7ffb5e018808p void paddle::platform::EnforceNotMet::Init(std::string, char const*, int) + 360
1 0x7ffb5e018b57p paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int) + 87
2 0x7ffb5f368c5bp paddle::operators::AdamOp::InferShape(paddle::framework::InferShapeContext*) const + 5307
3 0x7ffb5ff73610p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant const&, paddle::framework::RuntimeContext*) const + 304
4 0x7ffb5ff73a31p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant const&) const + 529
5 0x7ffb5ff7102cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant const&) + 332
6 0x7ffb5e1a247ep paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 382
7 0x7ffb5e1a551fp paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector > const&, bool) + 143
On Friday, after unpacking FLOWER 10, there were no files like ._image_07957.jpg yet; today, after unpacking the dataset, they are everywhere, and the train.txt I generated also includes them. When I download them they turn out to be corrupted files that cannot be opened, which breaks the subsequent batch_reader when it reads the data. For now I can only work around it by checking the filename string. I'd like to ask what these ._XXXX.jpg files are for, and if they are useless, why not delete them from the dataset.
This problem has been solved; moving on.
Building the 102-flower classifier; project address: https://aistudio.baidu.com/bdvgpu/user/85672/120561/notebooks/120561.ipynb?redirects=1
While it was running, AI Studio restarted itself. The log is below; please take a look, teacher.
2019-09-14 14:48:17,505 - [line:582] - INFO: create prog success
2019-09-14 14:48:17,508 - [line:583] - INFO: train config: {'label_dict': {}, 'pretrained_dir': 'work/SE_ResNext50_32x4d_pretrained', 'train_batch_size': 24, 'early_stop': {'good_acc1': 0.93, 'sample_frequency': 50, 'successive_limit': 3}, 'image_enhance_strategy': {'contrast_prob': 0.5, 'contrast_delta': 0.5, 'need_rotate': True, 'saturation_delta': 0.5, 'saturation_prob': 0.5, 'need_flip': True, 'need_crop': True, 'hue_prob': 0.5, 'need_distort': True, 'hue_delta': 18, 'brightness_delta': 0.125, 'brightness_prob': 0.5}, 'pretrained': True, 'label_file': 'work/hackathon-blossom-flower-classification/cat_to_name.json', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'save_persistable_dir': './persistable-params', 'use_gpu': True, 'train_file_list': 'train.txt', 'num_epochs': 200, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'input_size': [3, 224, 224], 'dropout_seed': None, 'save_freeze_dir': './freeze-model', 'class_dim': 102, 'mode': 'train', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'image_count': 6552, 'data_dir': 'work/hackathon-blossom-flower-classification/flower_data'}
2019-09-14 14:48:17,510 - [line:584] - INFO: build input custom reader and data feeder
2019-09-14 14:48:17,513 - [line:586] - INFO: file_list:work/hackathon-blossom-flower-classification/flower_data/train.txt
2019-09-14 14:48:17,519 - [line:592] - INFO: batch readed
2019-09-14 14:48:17,522 - [line:600] - INFO: build newwork
2019-09-14 14:48:19,639 - [line:629] - INFO: current pass: 0, start read image
In the code I see regularizer=L2Decay(0.); isn't that equivalent to having no L2 regularization? Normally regularization is written into the loss function; is that how it is actually done here?
Has your restart problem been solved? Mine does the same thing: it restarts automatically as soon as training starts.
I ported the original 5-class flower-classification code to the 102-class task without changing any parameters, and it now reports a label-out-of-range error. Could the teacher point out where the problem is? I really can't figure it out myself; I'm still a beginner.
Log:
102-class flower classification; project address: https://aistudio.baidu.com/bdvgpu/user/50358/124442/notebooks/124442.ipynb?redirects=1
The fetch_list is the same as in the 5-class version, but it reports EnforceNotMet: Invoke operator fetch error. What could be the cause?
The error message is as follows:
2019-09-16 09:13:11,125 - [line:549] - INFO: create prog success
2019-09-16 09:13:11,127 - [line:550] - INFO: train config: {'num_epochs': 40, 'use_gpu': True, 'sgd_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'adam_strategy': {'learning_rate': 0.002}, 'mean_rgb': [127.5, 127.5, 127.5], 'rsm_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data', 'mode': 'train', 'train_batch_size': 16, 'class_dim': 102, 'input_size': [3, 224, 224], 'momentum_strategy': {'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'learning_rate': 0.002, 'lr_epochs': [20, 40, 60, 80, 100]}, 'image_count': 6552, 'label_dict': {}, 'image_enhance_strategy': {'need_distort': True, 'need_crop': True, 'saturation_prob': 0.5, 'hue_delta': 18, 'saturation_delta': 0.5, 'contrast_delta': 0.5, 'hue_prob': 0.5, 'need_flip': True, 'brightness_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'brightness_delta': 0.125}, 'early_stop': {'sample_frequency': 30, 'good_acc1': 0.85, 'successive_limit': 3}, 'continue_train': False, 'save_freeze_dir': './freeze-model', 'train_file_list': 'train.txt'}
2019-09-16 09:13:11,129 - [line:551] - INFO: build input custom reader and data feeder
2019-09-16 09:13:11,133 - [line:564] - INFO: build newwork
2019-09-16 09:13:13,611 - [line:593] - INFO: current pass: 0, start read image
---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
in
645 init_log_config()
646 init_train_parameters()
--> 647 train()
in train()
597 loss, acc1, pred_ot = exe.run(main_program,
598 feed=feeder.feed(data),
--> 599 fetch_list=train_fetch_list)
600 t2 = time.time()
601 batch_id += 1
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in run(self, program, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
648 scope=scope,
649 return_numpy=return_numpy,
--> 650 use_program_cache=use_program_cache)
651 else:
652 if fetch_list and program._is_data_parallel and program._program and (
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py in _run(self, program, exe, feed, fetch_list, feed_var_name, fetch_var_name, scope, return_numpy, use_program_cache)
746 self._feed_data(program, feed, feed_var_name, scope)
747 if not use_program_cache:
--> 748 exe.run(program.desc, scope, 0, True, True, fetch_var_name)
749 else:
750 exe.run_cached_prepared_ctx(ctx, scope, False, False, False)
EnforceNotMet: Invoke operator fetch error.
Python Callstacks:
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/framework.py", line 1748, in append_op
attrs=kwargs.get("attrs", None))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 437, in _add_feed_fetch_ops
attrs={'col': i})
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 744, in _run
fetch_var_name=fetch_var_name)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/executor.py", line 650, in run
use_program_cache=use_program_cache)
File "", line 599, in train
fetch_list=train_fetch_list)
File "", line 647, in
train()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3265, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3183, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 3018, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2843, in _run_cell
return runner(coro)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/IPython/core/interactiveshell.py", line 2817, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1147, in run
yielded = self.gen.send(value)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/gen.py", line 1233, in inner
self.run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/ioloop.py", line 758, in _run_callback
ret = callback()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/events.py", line 127, in _run
self._callback(*self._args)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 1425, in _run_once
handle._run()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/asyncio/base_events.py", line 421, in run_forever
self._run_once()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
C++ Callstacks:
cudaMemcpy failed in paddle::platform::GpuMemcpySync (0x7f9bd585cc40 -> 0x7f9b96bff040, length: 4): unspecified launch failure at [/paddle/paddle/fluid/platform/gpu_info.cc:280]
PaddlePaddle Call Stacks:
0 0x7f9f9b7ff2e0p void paddle::platform::EnforceNotMet::Init(char const*, char const*, int) + 352
1 0x7f9f9b7ff659p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2 0x7f9f9d8159ccp paddle::platform::GpuMemcpySync(void*, void const*, unsigned long, cudaMemcpyKind) + 188
3 0x7f9f9b988079p void paddle::memory::Copy(paddle::platform::CPUPlace, void*, paddle::platform::CUDAPlace, void const*, unsigned long, CUstream_st*) + 249
4 0x7f9f9d7b5454p paddle::framework::TensorCopySync(paddle::framework::Tensor const&, boost::variant const&, paddle::framework::Tensor*) + 900
5 0x7f9f9d1f6490p paddle::operators::FetchOp::RunImpl(paddle::framework::Scope const&, boost::variant const&) const + 656
6 0x7f9f9d75802cp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant const&) + 332
7 0x7f9f9b98947ep paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool) + 382
8 0x7f9f9b98c51fp paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector > const&, bool) + 143
Thanks for your suggestion. Those files are an artifact of how the archive was compressed on macOS; they have little impact and can be cleaned up afterwards.
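A minimal sketch of one way to filter them out when building the file list (the helper below is hypothetical, not part of the course code): skip AppleDouble files such as ._image_07957.jpg so that train.txt and the batch reader never see them.
import os

def list_valid_images(image_dir):
    valid = []
    for name in sorted(os.listdir(image_dir)):
        # macOS AppleDouble metadata files start with "._" and are not real JPEGs
        if name.startswith('._') or not name.lower().endswith('.jpg'):
            continue
        valid.append(os.path.join(image_dir, name))
    return valid
Deleting them once from a terminal (for example with find work -name "._*" -delete) before generating train.txt achieves the same thing.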
Help: in Task 2 (flower classification), training SE-ResNeXt never produces any output. I'm running on the CPU and haven't modified any other code.
2019-09-16 11:16:05,737 - INFO - create prog success
2019-09-16 11:16:05,737 - [line:589] - INFO: create prog success
2019-09-16 11:16:05,741 - INFO - train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'contrast_prob': 0.5, 'saturation_prob': 0.5, 'need_crop': True, 'brightness_delta': 0.125, 'brightness_prob': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_flip': True, 'need_rotate': True, 'need_distort': True, 'saturation_delta': 0.5}, 'early_stop': {'sample_frequency': 50, 'good_acc1': 0.92, 'successive_limit': 3}, 'input_size': [3, 224, 224], 'class_dim': 5, 'pretrained_dir': 'data/data6595/SE_ResNext50_32x4d_pretrained', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'train_batch_size': 30, 'use_gpu': False, 'num_epochs': 120, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'label_file': 'label_list.txt', 'pretrained': True, 'mode': 'train', 'dropout_seed': None, 'adam_strategy': {'learning_rate': 0.002}, 'data_dir': 'data/data2815', 'save_freeze_dir': './freeze-model', 'label_dict': {'sunflowers': 3, 'daisy': 0, 'tulips': 4, 'dandelion': 1, 'roses': 2}, 'train_file_list': 'train.txt', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'image_count': 2979}
2019-09-16 11:16:05,741 - [line:590] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'contrast_prob': 0.5, 'saturation_prob': 0.5, 'need_crop': True, 'brightness_delta': 0.125, 'brightness_prob': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_flip': True, 'need_rotate': True, 'need_distort': True, 'saturation_delta': 0.5}, 'early_stop': {'sample_frequency': 50, 'good_acc1': 0.92, 'successive_limit': 3}, 'input_size': [3, 224, 224], 'class_dim': 5, 'pretrained_dir': 'data/data6595/SE_ResNext50_32x4d_pretrained', 'continue_train': False, 'sgd_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'mean_rgb': [127.5, 127.5, 127.5], 'train_batch_size': 30, 'use_gpu': False, 'num_epochs': 120, 'momentum_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'label_file': 'label_list.txt', 'pretrained': True, 'mode': 'train', 'dropout_seed': None, 'adam_strategy': {'learning_rate': 0.002}, 'data_dir': 'data/data2815', 'save_freeze_dir': './freeze-model', 'label_dict': {'sunflowers': 3, 'daisy': 0, 'tulips': 4, 'dandelion': 1, 'roses': 2}, 'train_file_list': 'train.txt', 'rsm_strategy': {'learning_rate': 0.001, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002], 'lr_epochs': [20, 40, 60, 80, 100]}, 'save_persistable_dir': './persistable-params', 'image_count': 2979}
2019-09-16 11:16:05,743 - INFO - build input custom reader and data feeder
2019-09-16 11:16:05,743 - [line:591] - INFO: build input custom reader and data feeder
2019-09-16 11:16:05,748 - INFO - build newwork
2019-09-16 11:16:05,748 - [line:605] - INFO: build newwork
2019-09-16 11:16:08,974 - INFO - load params from pretrained model
2019-09-16 11:16:08,974 - [line:578] - INFO: load params from pretrained model
2019-09-16 11:16:10,179 - INFO - current pass: 0, start read image
2019-09-16 11:16:10,179 - [line:634] - INFO: current pass: 0, start read image
Project address:
https://aistudio.baidu.com/aistudio/projectdetail/124442
Try switching to a GPU environment and running it again.
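For reference, a minimal sketch of what the switch amounts to inside the training script (assuming the fluid API and the train_parameters config dict shown in the logs above):
import paddle.fluid as fluid

train_parameters['use_gpu'] = True   # set to False to fall back to CPU
place = fluid.CUDAPlace(0) if train_parameters['use_gpu'] else fluid.CPUPlace()
exe = fluid.Executor(place)
The AI Studio project itself must also be running in a GPU environment; the flag alone does not change the hardware.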
Did you change the number of classes (class_dim)? Paste your project address so it's easier to take a look.
It is generally passed in through the regularizer parameter.
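A minimal sketch, assuming the standard fluid API rather than the course's exact code: L2 weight decay can be attached per parameter through ParamAttr, or globally through the optimizer's regularization argument; the framework then folds it into the gradient updates, so it is not written into the loss by hand, and L2Decay(0.) (a coefficient of zero) is indeed effectively no L2 regularization.
import paddle.fluid as fluid
from paddle.fluid.regularizer import L2Decay

image = fluid.layers.data(name='image', shape=[3, 224, 224], dtype='float32')
# per-parameter weight decay, attached through ParamAttr
fc = fluid.layers.fc(input=image, size=102,
                     param_attr=fluid.ParamAttr(regularizer=L2Decay(1e-4)))
# or a global setting passed to the optimizer (a per-parameter regularizer takes precedence)
optimizer = fluid.optimizer.Adam(learning_rate=0.002, regularization=L2Decay(1e-4))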
Check whether the batch size is too large.
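If GPU memory is the suspect, a minimal illustration (train_parameters is the config dict shown in the logs above; adjust it before building the reader and program):
train_parameters['train_batch_size'] = 8   # the log above shows 16; try a smaller value and rerun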
I joined late and my fundamentals aren't strong. If I can't finish an assignment before the deadline, can I still do it afterwards?
Recurrent neural network model: sentiment analysis
What does this code do?
import paddle.fluid as fluid

# Define the LSTM-based sentiment network
def lstm_net(ipt, input_dim):
    # The input is a sequence of word IDs, looked up in an embedding table
    emb = fluid.layers.embedding(input=ipt, size=[input_dim, 128], is_sparse=True)
    # First fully connected layer, projecting the embeddings
    fc1 = fluid.layers.fc(input=emb, size=128)
    # LSTM layer over the projected sequence
    lstm1, _ = fluid.layers.dynamic_lstm(input=fc1, size=128)
    # Max pooling over time of the FC outputs
    fc2 = fluid.layers.sequence_pool(input=fc1, pool_type='max')
    # Max pooling over time of the LSTM outputs
    lstm2 = fluid.layers.sequence_pool(input=lstm1, pool_type='max')
    # Final FC layer with softmax output of size 2 (positive / negative),
    # taking both pooled vectors as input
    out = fluid.layers.fc(input=[fc2, lstm2], size=2, act='softmax')
    return out
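For context, a minimal sketch of how such a network is usually wired into a program (assumed fluid usage consistent with the snippet above, not necessarily the course's exact training code): the words input is an int64 LoDTensor of word IDs with lod_level=1, and the 2-way softmax output is trained with cross-entropy.
import paddle.fluid as fluid

words = fluid.layers.data(name='words', shape=[1], dtype='int64', lod_level=1)
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

prediction = lstm_net(words, input_dim=5149)   # 5149 is a hypothetical vocabulary size
cost = fluid.layers.cross_entropy(input=prediction, label=label)
avg_cost = fluid.layers.mean(cost)
accuracy = fluid.layers.accuracy(input=prediction, label=label)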
I ported the original 5-class flower-classification code to the 102-class task, changing only the dataset path and class_dim, and the line optimizer.minimize(avg_cost) now reports an error. I haven't been able to find the cause in my project; the detailed log is as follows:
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,412 - [line:554] - INFO: train config: {'image_enhance_strategy': {'contrast_delta': 0.5, 'brightness_delta': 0.125, 'need_crop': True, 'saturation_delta': 0.5, 'hue_prob': 0.5, 'hue_delta': 18, 'need_distort': True, 'brightness_prob': 0.5, 'saturation_prob': 0.5, 'contrast_prob': 0.5, 'need_rotate': True, 'need_flip': True}, 'save_persistable_dir': './persistable-params', 'label_dict': {'32': 28, '74': 74, '15': 9, '52': 50, '85': 86, '93': 95, '68': 67, '45': 42, '66': 65, '3': 25, '89': 90, '27': 22, '18': 12, '56': 54, '42': 39, '75': 75, '64': 63, '35': 31, '96': 98, '41': 38, '79': 79, '12': 6, '1': 0, '54': 52, '36': 32, '86': 87, '62': 61, '61': 60, '2': 14, '63': 62, '51': 49, '47': 44, '99': 101, '40': 37, '28': 23, '73': 73, '95': 97, '60': 59, '83': 84, '8': 80, '94': 96, '34': 30, '82': 83, '22': 17, '10': 1, '7': 69, '5': 47, '71': 71, '37': 33, '50': 48, '13': 7, '38': 34, '65': 64, '101': 3, '14': 8, '31': 27, '19': 13, '76': 76, '23': 18, '69': 68, '84': 85, '33': 29, '100': 2, '55': 53, '90': 92, '21': 16, '53': 51, '30': 26, '46': 43, '43': 40, '98': 100, '97': 99, '11': 5, '9': 91, '24': 19, '17': 11, '4': 36, '25': 20, '91': 93, '102': 4, '16': 10, '49': 46, '20': 15, '92': 94, '77': 77, '88': 89, '80': 81, '44': 41, '81': 82, '59': 57, '29': 24, '70': 70, '67': 66, '26': 21, '58': 56, '48': 45, '87': 88, '57': 55, '78': 78, '6': 58, '39': 35, '72': 72}, 'image_count': 5194, 'label_file': 'label_list.txt', 'early_stop': {'good_acc1': 0.92, 'sample_frequency': 50, 'successive_limit': 3}, 'class_dim': 102, 'input_size': [3, 224, 224], 'train_file_list': 'train.txt', 'rsm_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'save_freeze_dir': './freeze-model', 'adam_strategy': {'learning_rate': 0.002}, 'train_batch_size': 24, 'momentum_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'continue_train': True, 'pretrained': True, 'mode': 'train', 'pretrained_dir': 'ResNet50_pretrained', 'data_dir': 'data/data12479/hackathon-blossom-flower-classification/flower_data/train', 'sgd_strategy': {'lr_epochs': [20, 40, 60, 80, 100], 'learning_rate': 0.002, 'lr_decay': [1, 0.5, 0.25, 0.1, 0.01, 0.002]}, 'mean_rgb': [127.5, 127.5, 127.5], 'num_epochs': 120, 'use_gpu': True}
2019-10-08 18:42:42,420 - [line:555] - INFO: build input custom reader and data feeder
2019-10-08 18:42:42,429 - [line:568] - INFO: build newwork
------------
name: "cross_entropy2_9.tmp_0"
type {
type: LOD_TENSOR
lod_tensor {
tensor {
data_type: FP32
dims: -1
dims: 1
}
lod_level: 0
}
}
persistable: false
------------
---------------------------------------------------------------------------
EnforceNotMet                             Traceback (most recent call last)
<ipython-input> in <module>()
654 init_log_config()
655 init_train_parameters()
--> 656 train()
<ipython-input> in train()
580 print(cost)
581 print("------------")
--> 582 optimizer.minimize(avg_cost)
583 exe = fluid.Executor(place)
584
<decorator-gen> in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs)
23 def __impl__(func, *args, **kwargs):
24 wrapped_func = decorator_func(func)
---> 25 return wrapped_func(*args, **kwargs)
26
27 return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py in __impl__(*args, **kwargs)
86 def __impl__(*args, **kwargs):
87 with _switch_tracer_mode_guard_(is_train=False):
---> 88 return func(*args, **kwargs)
89
90 return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
591 startup_program=startup_program,
592 parameter_list=parameter_list,
--> 593 no_grad_set=no_grad_set)
594
595 if grad_clip is not None and framework.in_dygraph_mode():
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
491 with program_guard(program, startup_program):
492 params_grads = append_backward(loss, parameter_list,
--> 493 no_grad_set, callbacks)
494 # Note: since we can't use all_reduce_op now,
495 # dgc_op should be the last op of one grad.
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
568 grad_to_var,
569 callbacks,
--> 570 input_grad_names_set=input_grad_names_set)
571
572 # Because calc_gradient may be called multiple times,
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in _append_backward_ops_(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
308 # Getting op's corresponding grad_op
309 grad_op_desc, op_grad_to_var = core.get_grad_op_desc(
--> 310 op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)
311
312 # If input_grad_names_set is not None, extend grad_op_descs only when
EnforceNotMet: grad_op_maker_ should not be null
Operator GradOpMaker has not been registered. at [/paddle/paddle/fluid/framework/op_info.h:69]
PaddlePaddle Call Stacks:
(top frames: paddle::platform::EnforceNotMet::Init, paddle::platform::EnforceNotMet::EnforceNotMet, paddle::framework::OpInfo::GradOpMaker; the remaining ~100 CPython interpreter frames of hex addresses are omitted here)
The problem is solved; it seems it was caused by loading the wrong pretrained model.
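For reference, the usual pattern that avoids this error is to build the network and loss in the current program, call optimizer.minimize on that freshly built loss, and only afterwards copy in the matching pretrained ResNet50 parameters, rather than calling minimize on a loss taken from a loaded frozen/inference model. Below is a minimal sketch in the fluid 1.x style; the tiny stub network, the ResNet50_pretrained directory name, and the if_exist helper are illustrative stand-ins for the real project code, not the course's official implementation.

import os
import paddle.fluid as fluid

# toy stand-in for the project's real ResNet50 builder (illustrative only)
img = fluid.layers.data(name='img', shape=[3, 224, 224], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
feat = fluid.layers.pool2d(img, pool_type='avg', global_pooling=True)
logits = fluid.layers.fc(input=feat, size=102, act='softmax')
avg_cost = fluid.layers.mean(fluid.layers.cross_entropy(input=logits, label=label))

# minimize is called on a loss that was built in this program,
# so every op in the graph has a registered gradient op
optimizer = fluid.optimizer.SGD(learning_rate=0.002)
optimizer.minimize(avg_cost)

place = fluid.CUDAPlace(0) if fluid.is_compiled_with_cuda() else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())   # initialize all parameters first

pretrained_dir = 'ResNet50_pretrained'      # folder of per-parameter files

def if_exist(var):
    # only overwrite parameters that actually exist in the pretrained folder
    return os.path.exists(os.path.join(pretrained_dir, var.name))

fluid.io.load_vars(exe, pretrained_dir,
                   main_program=fluid.default_main_program(),
                   predicate=if_exist)

The predicate makes load_vars skip any parameter that has no matching file in the pretrained folder, so a newly added 102-way fc layer keeps its fresh initialization instead of failing to load.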
Teacher, I have a question. For a five-class flower classification task, how can I add an "other" class? For example, if I upload an image that does not belong to any of the five flower types, I want the model to predict "other". How should I set up the training for that?
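Two common ways to handle this, offered here only as illustration and not as the official course answer: either add a sixth "other" class and collect varied negative images for it in train.txt, or keep the five classes and reject low-confidence predictions at inference time. Below is a minimal numpy sketch of the second idea; class_names, threshold, and predict_with_other are made-up names, and probs would be the trained model's softmax output for one image.

import numpy as np

class_names = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']  # illustrative
threshold = 0.6  # tune on a validation set that also contains non-flower images

def predict_with_other(probs, class_names, threshold):
    """Return a class name, or 'other' when the model is not confident enough."""
    probs = np.asarray(probs, dtype='float32')
    top = int(np.argmax(probs))
    return class_names[top] if probs[top] >= threshold else 'other'

# a flat, low-confidence softmax output is rejected as 'other'
print(predict_with_other([0.25, 0.22, 0.20, 0.18, 0.15], class_names, threshold))

If you go the sixth-class route instead, set class_dim to 6, add an "other" entry to label_list.txt, and make the negative images as varied as possible so the extra class does not collapse onto a few background types.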