What should I do about an error in the text classification example code on AI Studio?
AI Studio platform usage · Q&A · Other · 1762 views · 1 reply

I get an error running this tutorial: https://github.com/PaddlePaddle/PaddleHub/wiki/PaddleHub%E6%96%87%E6%9C%AC%E5%88%86%E7%B1%BB%E8%BF%81%E7%A7%BB%E6%95%99%E7%A8%8B

EnforceNotMet: Input ShapeTensor cannot be found in Op reshape2 at [/paddle/paddle/fluid/framework/op_desc.cc:306]. The first run finished without errors, but every run after that fails with this message.

Traceback (most recent call last) in 
     43     num_classes=dataset.num_labels,
     44     config=config)
---> 45 cls_task.finetune_and_eval()
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in finetune_and_eval(self)
    504 
    505     def finetune_and_eval(self):
--> 506         return self.finetune(do_eval=True)
    507 
    508     def finetune(self, do_eval=False):
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in finetune(self, do_eval)
    509         # Start to finetune
    510         with self.phase_guard(phase="train"):
--> 511             self.init_if_necessary()
    512             self._finetune_start_event()
    513             run_states = []
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in init_if_necessary(self)
    166         if not self.is_checkpoint_loaded:
    167             self.is_checkpoint_loaded = True
--> 168             if not self.load_checkpoint():
    169                 self.exe.run(self._base_startup_program)
    170 
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in load_checkpoint(self)
    487             self.config.checkpoint_dir,
    488             self.exe,
--> 489             main_program=self.main_program)
    490 
    491         return is_load_successful
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in main_program(self)
    331     def main_program(self):
    332         if not self.env.is_inititalized:
--> 333             self._build_env()
    334         return self.env.main_program
    335 
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/task.py in _build_env(self)
    244                 with fluid.unique_name.guard(self.env.UNG):
    245                     self.config.strategy.execute(
--> 246                         self.loss, self._base_data_reader, self.config)
    247 
    248         if self.is_train_phase:
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/strategy.py in execute(self, loss, data_reader, config)
    132         scheduled_lr = adam_weight_decay_optimization(
    133             loss, warmup_steps, max_train_steps, self.learning_rate,
--> 134             main_program, self.weight_decay, self.lr_scheduler)
    135 
    136         return scheduled_lr
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddlehub/finetune/optimization.py in adam_weight_decay_optimization(loss, warmup_steps, num_train_steps, learning_rate, main_program, weight_decay, scheduler)
     77         param_list[param.name].stop_gradient = True
     78 
---> 79     _, param_grads = optimizer.minimize(loss)
     80 
     81     if weight_decay > 0:
 in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/wrapped_decorator.py in __impl__(func, *args, **kwargs)
     23     def __impl__(func, *args, **kwargs):
     24         wrapped_func = decorator_func(func)
---> 25         return wrapped_func(*args, **kwargs)
     26 
     27     return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/dygraph/base.py in __impl__(*args, **kwargs)
     85     def __impl__(*args, **kwargs):
     86         with _switch_tracer_mode_guard_(is_train=False):
---> 87             return func(*args, **kwargs)
     88 
     89     return __impl__
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in minimize(self, loss, startup_program, parameter_list, no_grad_set, grad_clip)
    592             startup_program=startup_program,
    593             parameter_list=parameter_list,
--> 594             no_grad_set=no_grad_set)
    595 
    596         if grad_clip is not None and framework.in_dygraph_mode():
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/optimizer.py in backward(self, loss, startup_program, parameter_list, no_grad_set, callbacks)
    491             with program_guard(program, startup_program):
    492                 params_grads = append_backward(loss, parameter_list,
--> 493                                                no_grad_set, callbacks)
    494                 # Note: since we can't use all_reduce_op now,
    495                 #  dgc_op should be the last op of one grad.
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in append_backward(loss, parameter_list, no_grad_set, callbacks)
    569         grad_to_var,
    570         callbacks,
--> 571         input_grad_names_set=input_grad_names_set)
    572 
    573     # Because calc_gradient may be called multiple times,
/opt/conda/envs/python35-paddle120-env/lib/python3.5/site-packages/paddle/fluid/backward.py in _append_backward_ops_(block, ops, target_block, no_grad_dict, grad_to_var, callbacks, input_grad_names_set)
    308         # Getting op's corresponding grad_op
    309         grad_op_desc, op_grad_to_var = core.get_grad_op_desc(
--> 310             op.desc, cpt.to_text(no_grad_dict[block.idx]), grad_sub_block_list)
    311 
EnforceNotMet: Input ShapeTensor cannot be found in Op reshape2 at [/paddle/paddle/fluid/framework/op_desc.cc:306]
PaddlePaddle Call Stacks: 
0       0x7f20a9e3b750p void paddle::platform::EnforceNotMet::Init(char const*, char const*, int) + 352
1       0x7f20a9e3bac9p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2       0x7f20a9facd7fp paddle::framework::OpDesc::Input(std::string const&) const + 207
3       0x7f20aa47c62cp paddle::framework::details::OpInfoFiller::operator()(char const*, paddle::framework::OpInfo*) const::{lambda(paddle::framework::OpDesc const&, std::unordered_set, std::equal_to, std::allocator > const&, std::unordered_map, std::equal_to, std::allocator > >*, std::vector > const&)#1}::operator()(paddle::framework::OpDesc const&, std::unordered_set, std::equal_to, std::allocator > const&, std::unordered_map, std::equal_to, std::allocator > >*, std::vector > const&) const + 540
4       0x7f20aa47cba4p std::_Function_handler >, std::allocator > > > (paddle::framework::OpDesc const&, std::unordered_set, std::equal_to, std::allocator > const&, std::unordered_map, std::equal_to, std::allocator > >*, std::vector > const&), paddle::framework::details::OpInfoFiller::operator()(char const*, paddle::framework::OpInfo*) const::{lambda(paddle::framework::OpDesc const&, std::unordered_set, std::equal_to, std::allocator > const&, std::unordered_map, std::equal_to, std::allocator > >*, std::vector > const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::OpDesc const&, std::unordered_set, std::equal_to, std::allocator > const&, std::unordered_map, std::equal_to, std::allocator > >*, std::vector > const&) + 20
5       0x7f20a9e346bap
6       0x7f20a9e6e066p
7       0x7f212e5d6199p PyCFunction_Call + 233
8       0x7f212e670dbep PyEval_EvalFrameEx + 31950
9       0x7f212e6734b6p
10      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
11      0x7f212e6734b6p
12      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
13      0x7f212e6734b6p
14      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
15      0x7f212e6734b6p
16      0x7f212e6735a8p PyEval_EvalCodeEx + 72
17      0x7f212e5b2c33p
18      0x7f212e58133ap PyObject_Call + 106
19      0x7f212e66b6eep PyEval_EvalFrameEx + 9726
20      0x7f212e6734b6p
21      0x7f212e6735a8p PyEval_EvalCodeEx + 72
22      0x7f212e5b2c33p
23      0x7f212e58133ap PyObject_Call + 106
24      0x7f212e66b6eep PyEval_EvalFrameEx + 9726
25      0x7f212e6734b6p
26      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
27      0x7f212e6734b6p
28      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
29      0x7f212e6734b6p
30      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
31      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
32      0x7f212e6734b6p
33      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
34      0x7f212e6734b6p
35      0x7f212e6735a8p PyEval_EvalCodeEx + 72
36      0x7f212e5b2b56p
37      0x7f212e58133ap PyObject_Call + 106
38      0x7f212e59f172p
39      0x7f212e5d951cp _PyObject_GenericGetAttrWithDict + 124
40      0x7f212e66dd2ap PyEval_EvalFrameEx + 19514
41      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
42      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
43      0x7f212e6734b6p
44      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
45      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
46      0x7f212e6734b6p
47      0x7f212e6735a8p PyEval_EvalCodeEx + 72
48      0x7f212e6735ebp PyEval_EvalCode + 59
49      0x7f212e666c5dp
50      0x7f212e5d6179p PyCFunction_Call + 201
51      0x7f212e670dbep PyEval_EvalFrameEx + 31950
52      0x7f212e5aa410p _PyGen_Send + 128
53      0x7f212e66f953p PyEval_EvalFrameEx + 26723
54      0x7f212e5aa410p _PyGen_Send + 128
55      0x7f212e66f953p PyEval_EvalFrameEx + 26723
56      0x7f212e5aa410p _PyGen_Send + 128
57      0x7f212e670d60p PyEval_EvalFrameEx + 31856
58      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
59      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
60      0x7f212e6734b6p
61      0x7f212e6735a8p PyEval_EvalCodeEx + 72
62      0x7f212e5b2c33p
63      0x7f212e58133ap PyObject_Call + 106
64      0x7f212e66b6eep PyEval_EvalFrameEx + 9726
65      0x7f212e6734b6p
66      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
67      0x7f212e5a96bap
68      0x7f212e664af6p
69      0x7f212e5d6179p PyCFunction_Call + 201
70      0x7f212e670dbep PyEval_EvalFrameEx + 31950
71      0x7f212e6734b6p
72      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
73      0x7f212e5a96bap
74      0x7f212e664af6p
75      0x7f212e5d6179p PyCFunction_Call + 201
76      0x7f212e670dbep PyEval_EvalFrameEx + 31950
77      0x7f212e6734b6p
78      0x7f212e6705b5p PyEval_EvalFrameEx + 29893
79      0x7f212e5a96bap
80      0x7f212e664af6p
81      0x7f212e5d6179p PyCFunction_Call + 201
82      0x7f212e670dbep PyEval_EvalFrameEx + 31950
83      0x7f212e6734b6p
84      0x7f212e6735a8p PyEval_EvalCodeEx + 72
85      0x7f212e5b2b56p
86      0x7f212e58133ap PyObject_Call + 106
87      0x7f212e66b6eep PyEval_EvalFrameEx + 9726
88      0x7f212e5aa410p _PyGen_Send + 128
89      0x7f212e670d60p PyEval_EvalFrameEx + 31856
90      0x7f212e6711d0p PyEval_EvalFrameEx + 32992
91      0x7f212e6734b6p
92      0x7f212e6735a8p PyEval_EvalCodeEx + 72
93      0x7f212e5b2c33p
94      0x7f212e58133ap PyObject_Call + 106
95      0x7f212e66b6eep PyEval_EvalFrameEx + 9726
96      0x7f212e6734b6p
97      0x7f212e6735a8p PyEval_EvalCodeEx + 72
98      0x7f212e5b2b56p
99      0x7f212e58133ap PyObject_Call + 106
All comments (1)
Sorted by time
龙Dradon77
#2 · Replied 2019-10

This is covered in the FAQ at https://github.com/PaddlePaddle/PaddleHub#%E6%95%99%E7%A8%8B
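For reference: since the traceback fails inside load_checkpoint (the first run succeeds, later runs fail), the usual workaround for this class of error is to clear the stale checkpoint directory before re-running, so finetuning starts fresh instead of restoring a checkpoint that no longer matches the rebuilt program. A minimal sketch, assuming the RunConfig in the notebook was created with a checkpoint_dir like the hypothetical `ckpt_text_cls` below (use whatever path your own config set):

```python
import os
import shutil

# Hypothetical path; match it to the checkpoint_dir your hub.RunConfig uses.
ckpt_dir = "ckpt_text_cls"

# The first run succeeds because no checkpoint exists yet. Later runs try to
# restore the saved checkpoint into a freshly built program, which can surface
# as errors like the reshape2 EnforceNotMet above. Deleting the directory
# forces finetune_and_eval() to start from scratch.
if os.path.isdir(ckpt_dir):
    shutil.rmtree(ckpt_dir)
```

Alternatively, pointing `checkpoint_dir` at a new, empty directory in the RunConfig has the same effect without deleting anything.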
