paddle.metric.Precision() [Solved]
Paddle Framework · Model Training · 639 views · 5 replies

paddle.metric.Precision() raises an error when used with Paddle's high-level API.

The error is as follows:

2021-07-16 20:59:40,635 - INFO - unique_endpoints {''}
2021-07-16 20:59:40,636 - INFO - File /home/aistudio/.cache/paddle/hapi/weights/resnet50.pdparams md5 checking...
2021-07-16 20:59:40,981 - INFO - Found /home/aistudio/.cache/paddle/hapi/weights/resnet50.pdparams

The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/50

/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dataloader/dataloader_iter.py:89: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if isinstance(slot[0], (np.ndarray, np.bool, numbers.Number)):
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:648: UserWarning: When training, we now always track global mean and variance.
"When training, we now always track global mean and variance.")

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last) in
20 save_dir="/home/aistudio/lup", # save the model and optimizer parameters to a custom folder
21 save_freq=20, # save the model and optimizer parameters every N epochs
---> 22 log_freq=100 # logging frequency
23 )
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in fit(self, train_data, eval_data, batch_size, epochs, eval_freq, log_freq, save_dir, save_freq, verbose, drop_last, shuffle, num_workers, callbacks)
1493 for epoch in range(epochs):
1494 cbks.on_epoch_begin(epoch)
-> 1495 logs = self._run_one_epoch(train_loader, cbks, 'train')
1496 cbks.on_epoch_end(epoch, logs)
1497
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in _run_one_epoch(self, data_loader, callbacks, mode, logs)
1800 if mode != 'predict':
1801 outs = getattr(self, mode + '_batch')(data[:len(self._inputs)],
-> 1802 data[len(self._inputs):])
1803 if self._metrics and self._loss:
1804 metrics = [[l[0] for l in outs[0]]]
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in train_batch(self, inputs, labels)
939 print(loss)
940 """
--> 941 loss = self._adapter.train_batch(inputs, labels)
942 if fluid.in_dygraph_mode() and self._input_info is None:
943 self._update_inputs()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model.py in train_batch(self, inputs, labels)
667 for metric in self.model._metrics:
668 metric_outs = metric.compute(*(to_list(outputs) + labels))
--> 669 m = metric.update(* [to_numpy(m) for m in to_list(metric_outs)])
670 metrics.append(m)
671
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/metric/metrics.py in update(self, preds, labels)
427 pred = preds[i]
428 label = labels[i]
--> 429 if pred == 1:
430 if pred == label:
431 self.tp += 1
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Could the official team please fix this?

Otherwise we have to patch the source code ourselves.
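For context, the failing check in paddle/metric/metrics.py (line 429, "if pred == 1:") puts a whole numpy array into a boolean context, which NumPy refuses to resolve once the per-sample prediction has more than one element. A minimal sketch of the same ambiguity (the array values here are made up):

import numpy as np

pred = np.array([0.0, 1.0])   # e.g. one row of a two-class prediction

# "if pred == 1:" raises exactly this ValueError: the element-wise comparison
# yields an array ([False, True]) that has no single truth value
if np.any(pred == 1):         # reduce to a single bool first
    print("at least one element equals 1")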

 

All comments (5)
UnseenMe
#2 · Replied 2021-07

Looks like this is worth filing as an issue on GitHub.

TC.Long
#3 · Replied 2021-07

Could you provide a minimal reproduction and then file an issue on GitHub? The official team will follow up. Issue link: https://github.com/PaddlePaddle/Paddle/issues
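For reference, a minimal reproduction along these lines should trigger the same error (the toy network, InputSpec shapes and random data are stand-ins, not from the original post; any model whose per-sample output has more than one element will do):

import paddle

# toy stand-in for the original resnet50 setup: a 2-way classifier on 8 random features
net = paddle.nn.Linear(8, 2)
model = paddle.Model(net,
                     inputs=paddle.static.InputSpec([None, 8], 'float32', 'x'),
                     labels=paddle.static.InputSpec([None, 1], 'int64', 'label'))
model.prepare(optimizer=paddle.optimizer.Adam(parameters=model.parameters()),
              loss=paddle.nn.CrossEntropyLoss(),
              metrics=paddle.metric.Precision())

x = paddle.rand([32, 8])
y = paddle.randint(0, 2, [32, 1])
# Precision.update() receives a (batch, 2) prediction array and fails on "if pred == 1"
model.fit(paddle.io.TensorDataset([x, y]), epochs=1, batch_size=8)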

大佬别优化头秃
#4 · Replied 2021-07
TC.Long #3
Could you provide a minimal reproduction and then file an issue on GitHub? The official team will follow up. Issue link: https://github.com/PaddlePaddle/Paddle/issues

Thanks for the reply.

大佬别优化头秃
#5 · Replied 2021-07
UnseenMe #2: Looks like this is worth filing as an issue on GitHub.

Thanks for the reply.

 

大佬别优化头秃
#6 · Replied 2021-07

Solved:

import numpy as np
from paddle.metric import Metric

# Custom precision metric (the commented-out lines below show F1 / recall variants)
class PrecisionScore(Metric):
    """
    Precision (also called positive predictive value) is the fraction of
    relevant instances among the retrieved instances. Refer to
    https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers
    Noted that this class manages the precision score only for binary
    classification task.
    ......
    """
 
    def __init__(self, name='precision', *args, **kwargs):
        super(PrecisionScore, self).__init__(*args, **kwargs)
        self.tp = 0  # true positive
        self.fp = 0  # false positive
        self.fn = 0  # false negative
        self._name = name
 
    def update(self, preds, labels):
        """
        Update the states based on the current mini-batch prediction results.
        Args:
            preds (numpy.ndarray): The prediction result, usually the output
                of a two-class sigmoid function. It should be a vector (column
                vector or row vector) with data type: 'float64' or 'float32'.
            labels (numpy.ndarray): The ground truth (labels),
                the shape should keep the same as preds.
                The data type is 'int32' or 'int64'.
        """
        # if isinstance(preds, paddle.Tensor):
        #     preds = preds.numpy()
        # elif not _is_numpy_(preds):
        #     raise ValueError("The 'preds' must be a numpy ndarray or Tensor.")
        # if isinstance(labels, paddle.Tensor):
        #     labels = labels.numpy()
        # elif not _is_numpy_(labels):
        #     raise ValueError("The 'labels' must be a numpy ndarray or Tensor.")
 
        sample_num = labels.shape[0]
        preds = np.floor(preds + 0.5).astype("int64")
 
        for i in range(sample_num):
            pred = preds[i]
            label = labels[i]
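            # np.any() reduces the element-wise comparison to a single bool,
            # avoiding the "truth value of an array ... is ambiguous" ValueError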
            if np.any(pred == 1):
                if np.any(pred == label):
                    self.tp += 1
                else:
                    self.fp += 1
            elif np.any(pred == 0):
                if np.any(pred == label):
                    pass
                else:
                    self.fn += 1
 
    def reset(self):
        """
        Resets all of the metric state.
        """
        self.tp = 0
        self.fp = 0
        self.fn = 0
 
    def accumulate(self):
        """
        Calculate the final precision.
        Returns:
            A scalar float: the calculated precision.
        """
        ap = self.tp + self.fp
        #pr = self.tp/ap if ap != 0 else .0
        #bp = self.tp + self.fn
        #re = self.tp/bp if bp != 0 else .0
        #roc = pr + re
        #f1 = (2*pr*re)/roc if roc != 0 else .0
        #return float(f1)  # (return F1)
        return float(self.tp) / ap if ap != 0 else .0  # (return precision)
        # return float(self.tp) / bp if bp != 0 else .0  # (return recall)
 
    def name(self):
        """
        Returns metric name
        """
        return self._name
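Wiring the custom metric into the high-level API might look roughly like this (a minimal sketch with a placeholder sigmoid binary classifier and random data, not the original resnet50 setup):

import paddle

# placeholder binary classifier whose output is a sigmoid probability,
# matching what the metric's docstring expects
net = paddle.nn.Sequential(paddle.nn.Linear(8, 1), paddle.nn.Sigmoid())
model = paddle.Model(net,
                     inputs=paddle.static.InputSpec([None, 8], 'float32', 'x'),
                     labels=paddle.static.InputSpec([None, 1], 'float32', 'label'))
model.prepare(optimizer=paddle.optimizer.Adam(parameters=model.parameters()),
              loss=paddle.nn.BCELoss(),
              metrics=PrecisionScore())  # the custom metric defined above

x = paddle.rand([32, 8])
y = paddle.randint(0, 2, [32, 1]).astype("float32")  # BCELoss expects float labels
model.fit(paddle.io.TensorDataset([x, y]), epochs=2, batch_size=8, verbose=1)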