I previously used TensorFlow for leather texture segmentation, but U-Net didn't give enough accuracy and I didn't have the compute for DeepLabv3+, so I moved over to Baidu's platform.
I modified the data based on the tutorial "10-Minute Quick Start with PaddleX — DeepLabV3+ Semantic Segmentation" and got the error below.
The dataset has only 15 images in total — could the problem simply be too little data?
return np.array(f1score_list)
UnboundLocalError: local variable 'f1score' referenced before assignment
train:
label:
2020-12-08 15:18:48,652-INFO: If regularizer of a Parameter has been set by 'fluid.ParamAttr' or 'fluid.WeightNormParamAttr' already. The Regularization[L2Decay, regularization_coeff=0.000040] in Optimizer will not take effect, and it will only be applied to other Parameters!
2020-12-08 15:18:49 [INFO] Connecting PaddleHub server to get pretrain weights...
2020-12-08 15:18:52 [INFO] Load pretrain weights from output/deeplab/pretrain/MobileNetV2_x1.0.
2020-12-08 15:18:52,830-WARNING: output/deeplab/pretrain/MobileNetV2_x1.0.pdparams not found, try to load model file saved with [ save_params, save_persistables, save_vars ]
2020-12-08 15:18:53 [INFO] There are 260 varaibles in output/deeplab/pretrain/MobileNetV2_x1.0 are loaded.
2020-12-08 15:18:56 [INFO] [TRAIN] Epoch=1/4300, Step=2/2, loss=2.285694, lr=0.009999, time_each_step=1.87s, eta=8:56:11
2020-12-08 15:18:56 [INFO] [TRAIN] Epoch 1 finished, loss=2.318256, lr=0.009999 .
2020-12-08 15:18:56 [INFO] Start to evaluating(total_samples=8, total_steps=2)...
100%|██████████| 2/2 [00:01<00:00, 1.11it/s]
2020-12-08 15:18:58 [INFO] [EVAL] Finished, Epoch=1, miou=0.045804,
category_iou=[0.54965085 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.], oacc=0.549625,
category_acc=[0.54963854 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.], kappa=0.0,
category_F1-score=[0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867 0.7093867] .
2020-12-08 15:18:59 [INFO] Model saved in output/deeplab/best_model.
2020-12-08 15:18:59 [INFO] Model saved in output/deeplab/epoch_1.
2020-12-08 15:18:59 [INFO] Current evaluated best model in eval_dataset is epoch_1, miou=0.045804237248920854
2020-12-08 15:19:00 [INFO] [TRAIN] Epoch=2/4300, Step=2/2, loss=1.709327, lr=0.009997, time_each_step=1.3s, eta=7:29:53
2020-12-08 15:19:00 [INFO] [TRAIN] Epoch 2 finished, loss=1.846357, lr=0.009997 .
2020-12-08 15:19:00 [INFO] Start to evaluating(total_samples=8, total_steps=2)...
100%|██████████| 2/2 [00:01<00:00, 1.09it/s]
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
in
10 save_interval_epochs=1,
11 save_dir='output/deeplab',
---> 12 use_vdl=True)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/deeplabv3p.py in train(self, num_epochs, train_dataset, train_batch_size, eval_dataset, save_interval_epochs, log_interval_steps, save_dir, pretrain_weights, optimizer, learning_rate, lr_decay_power, use_vdl, sensitivities_file, eval_metric_loss, early_stop, early_stop_patience, resume_checkpoint)
356 use_vdl=use_vdl,
357 early_stop=early_stop,
--> 358 early_stop_patience=early_stop_patience)
359
360 def evaluate(self,
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/base.py in train_loop(self, num_epochs, train_dataset, train_batch_size, eval_dataset, save_interval_epochs, log_interval_steps, save_dir, use_vdl, early_stop, early_stop_patience)
559 batch_size=eval_batch_size,
560 epoch_id=i + 1,
--> 561 return_details=True)
562 logging.info('[EVAL] Finished, Epoch={}, {} .'.format(
563 i + 1, dict2str(self.eval_metrics)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/deeplabv3p.py in evaluate(self, eval_dataset, batch_size, epoch_id, return_details)
438 category_iou, miou = conf_mat.mean_iou()
439 category_acc, oacc = conf_mat.accuracy()
--> 440 category_f1score = conf_mat.f1_score()
441
442 metrics = OrderedDict(
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/utils/seg_eval.py in f1_score(self)
174 else:
175 f1score = 2 * precision * recall / (recall + precision)
--> 176 f1score_list.append(f1score)
177 return np.array(f1score_list)
UnboundLocalError: local variable 'f1score' referenced before assignment
Start by checking the f1score variable — see whether there is some branch where you (or the library) forgot to assign it.
If data is scarce, you can use data augmentation tricks.
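PaddleX ships its own transforms module for this, but as a library-independent illustration, here is a minimal NumPy sketch of geometric augmentation for segmentation data (augment_pair is a hypothetical helper, not a PaddleX API). The key point for segmentation is that every geometric transform must be applied to the image and its label mask identically:

```python
import numpy as np

def augment_pair(image, label):
    """Expand one (image, label) pair via simple geometric transforms.

    Returns 6 pairs: the original, horizontal flip, vertical flip,
    and 90/180/270-degree rotations. Each transform is applied to the
    image and the label mask with the same parameters, so pixel/label
    alignment is preserved.
    """
    pairs = [(image, label)]
    pairs.append((np.fliplr(image), np.fliplr(label)))   # horizontal flip
    pairs.append((np.flipud(image), np.flipud(label)))   # vertical flip
    for k in (1, 2, 3):                                  # 90/180/270 rotations
        pairs.append((np.rot90(image, k), np.rot90(label, k)))
    return pairs
```

With 15 originals this alone yields 90 training samples at no extra labeling cost; color jitter and random crops can stretch that further.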
The error is raised during validation, while computing the accuracy metrics. Try disabling valid in the YAML file first and see whether training runs through.
If you confirm the failure is in valid while train is fine, then check whether the validation-set data processing is the problem.
From the traceback, the error happens during evaluation — check your evaluation dataset.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/utils/seg_eval.py — this file may not have been updated due to a version mismatch. Either way, both f1_score and f1score appear around lines 174-176; try patching it by hand.
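To illustrate the suggested hand-patch: the traceback shows f1score is only assigned inside the else branch, so when a category's precision + recall is zero, the append on line 176 sees an unbound variable. The sketch below reconstructs that pattern from the traceback and fixes it (the actual seg_eval.py source may differ; the confusion-matrix layout here is an assumption):

```python
import numpy as np

def f1_score(confusion_matrix):
    """Per-category F1 from a square confusion matrix (rows = ground truth)."""
    cm = np.asarray(confusion_matrix, dtype=np.float64)
    f1score_list = []
    for i in range(cm.shape[0]):
        tp = cm[i, i]
        precision = tp / cm[:, i].sum() if cm[:, i].sum() > 0 else 0.0
        recall = tp / cm[i, :].sum() if cm[i, :].sum() > 0 else 0.0
        if precision + recall == 0:
            f1score = 0.0  # the assignment missing in the failing branch
        else:
            f1score = 2 * precision * recall / (recall + precision)
        f1score_list.append(f1score)
    return np.array(f1score_list)
```

With only 15 images, categories absent from the tiny eval set get zero precision and recall, which is exactly the branch that triggers the UnboundLocalError — so small data and the crash are connected.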
Yes, it's the f1 part I was looking at.
Make the project public so we can take a look.