Description
Following on from #17853.
I'm interested in the mean of the signed error, which I'm calling bias (whether the model over- or under-predicts on average). I can do this as a one-liner in NumPy (`np.average(y_pred - y_true)`), but I would prefer to stay within scikit-learn.
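For reference, a tiny NumPy illustration of that one-liner (the numbers are made up); a positive result means the model over-predicts on average:

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([3.5, 5.0, 3.0])

# Signed residuals keep the direction of the error,
# unlike mean_absolute_error, which takes np.abs first.
print(np.average(y_pred - y_true))  # 0.333... -> over-predicting
```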
Describe the workflow you want to enable
```python
bias(y_true, y_pred)
```
Describe your proposed solution
This has mostly been implemented already in `mean_absolute_error`: https://github.com/scikit-learn/scikit-learn/blob/fd237278e/sklearn/metrics/_regression.py#L181

One would just have to change

```python
output_errors = np.average(np.abs(y_pred - y_true),
                           weights=sample_weight, axis=0)
```

to

```python
output_errors = np.average(y_pred - y_true,
                           weights=sample_weight, axis=0)
```
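For illustration, here is a minimal standalone sketch of what the proposed metric could look like, following the shape of `mean_absolute_error`. The function name, the bare NumPy input handling, and the `multioutput` behaviour are my assumptions; scikit-learn's own version would go through its internal `_check_reg_targets` validation instead:

```python
import numpy as np

def bias(y_true, y_pred, sample_weight=None, multioutput="uniform_average"):
    """Mean signed error (hypothetical sketch, not scikit-learn API).

    Positive values mean the model over-predicts on average,
    negative values mean it under-predicts.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Same as mean_absolute_error, but without np.abs on the residuals.
    output_errors = np.average(y_pred - y_true,
                               weights=sample_weight, axis=0)
    if multioutput == "raw_values":
        return output_errors
    # "uniform_average": average the per-output biases into one number.
    return np.average(output_errors)
```

Usage would mirror the other regression metrics, e.g. `bias([1, 2, 3], [2, 3, 3])` returns `0.666...` (over-prediction).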
Describe alternatives you've considered, if relevant
Additional context
There is discussion of whether this is a proper error metric at #17853 (comment).