ENH add CAP curve #28972

Open: wants to merge 113 commits into base: main
Conversation

@JosephBARBIERDARNAL (Contributor) commented May 7, 2024

This PR is a duplicate of #28752. I'm recreating the PR because I accidentally deleted the fork holding the old one.

Reference Issue

fix for #10003

What does this implement/fix?

Creation of a CumulativeAccuracyDisplay class for plotting CAP curves.

"The CAP of a model represents the cumulative number of positive outcomes along the y-axis versus the corresponding cumulative number of a classifying parameter along the x-axis. The output is called a CAP curve.[1] The CAP is distinct from the receiver operating characteristic (ROC) curve, which plots the true-positive rate against the false-positive rate." (wikipedia definition)

It's mainly inspired by the RocCurveDisplay class.

other

It's currently a work in progress.

TODO

Binary classification

  • raise a ValueError in from_estimator if the estimator is not fitted or is a classifier that was fitted with more than 2 classes;
  • fix pos_label handling for the positive class;
  • add/update a test to check that we have the same result for response_method="decision_function" and response_method="predict_proba" for a LogisticRegression classifier fit with string labels and for all 3 possible values of pos_label;
    • note that this is different from what is already tested in test_display_from_estimator_and_from_prediction;
    • CAP curves should be invariant under order-preserving transformations of the predictions. This is why plotting CAP from the unnormalized logits or from the logistic-sigmoid-scaled probabilistic predictions should result in the same curve: the logistic sigmoid is strictly monotonic, hence order preserving (see the sketch after this list).
      • joseph --> should be good
  • add an option (enabled by default) to draw CAP curve of the "perfect"/"oracle" model;
  • update the tests to check that the model curves always lie between the "chance level" and "perfect"/"oracle" curves.
  • add a test to check that the display array attributes y_true_cumulative and cumulative_total have the same dtype as y_pred in the test about from_predictions. We can test for y_pred passed either as np.float32 or np.float64.
    • joseph: should be good
  • test that CAPCurveDisplay.from_estimator(LinearSVC().fit(X, y), ...) works (even if it does not have a predict_proba method). This should cover one of the lines reported as uncovered by codecov.
  • leverage test_common_curve_display.py to reuse some generic tests on CAPCurveDisplay and maybe remove redundant tests on invalid inputs from test_cap_curve_display.py if any;
    • joseph: should be good
  • add despine argument?
    • olivier: I would rather not do that specifically for that PR but maybe consider a cross-display PR that does that for all *Display classes in scikit-learn. Feel free to open an issue to discuss this with screen shots e.g. on ROC or PR curves and your analysis of pros and cons.
    • joseph: sure. I suggested it here because the ROC and PR curves already have it (see ENH despine keyword for ROC and PR curves #26367). I'm not sure it makes much sense for ConfusionMatrixDisplay (?). I'll open an issue (once this PR is merged) for CAPCurveDisplay, PredictionErrorDisplay and DetCurveDisplay because I think they're the only ones that don't have this option.
    • should be good --> I added the argument, as suggested below in the discussion
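A sketch of the invariance check described above (self-contained and not tied to the PR's internals): since the logistic sigmoid is strictly increasing, ranking by logits and ranking by sigmoid-scaled probabilities give identical orderings, hence identical CAP curves.

import numpy as np
from scipy.special import expit  # logistic sigmoid, strictly monotonic

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
logits = rng.normal(size=200)  # decision_function-style scores

# An order-preserving transform leaves the sample ranking unchanged,
# so the resulting CAP curves must be identical.
order_logits = np.argsort(logits)[::-1]
order_proba = np.argsort(expit(logits))[::-1]

y_from_logits = np.cumsum(y_true[order_logits]) / y_true.sum()
y_from_proba = np.cumsum(y_true[order_proba]) / y_true.sum()
np.testing.assert_allclose(y_from_logits, y_from_proba)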

Regression

  • update the docstrings to make it explicit that either regressors (with positive outcomes) or binary classifiers are accepted and anything else is rejected.
    • joseph: I made a first pass, but I guess it can be improved
  • raise a ValueError with an informative error message if y_true has negative values;
    • joseph: it is now tested here
  • raise a ValueError if all y_true are zeros (the plot would be degenerate and would raise a low-level divide-by-zero warning with normalize_scale=True);
    • In theory, this should not happen, since if all the y_true are zeros, it will be considered a case of classification
  • add tests with continuous outputs on positive observation data (e.g. using PoissonRegressor) and check that the regressor curve lies between the "chance level" and "perfect" curves (sketched after this list);
    • joseph: it is now tested here
  • update the insurance model examples (examples/linear_model/plot_tweedie_regression_insurance_claims.py and examples/linear_model/plot_poisson_regression_non_normal_loss.py) to use the CAPCurveDisplay class instead of manually plotting the Lorenz curves.
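A minimal sketch of the regression property from the list above, assuming a plain cumulative-gains formulation (the helper below is illustrative, not the PR's implementation):

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = rng.poisson(lam=np.exp(X[:, 0]))  # positive count outcomes

def cumulative_gains(y_true, ranking):
    order = np.argsort(ranking)[::-1]
    return np.cumsum(y_true[order]) / y_true.sum()

model = cumulative_gains(y, PoissonRegressor().fit(X, y).predict(X))
oracle = cumulative_gains(y, y)  # the "perfect" model ranks by the true outcome
chance = np.arange(1, len(y) + 1) / len(y)  # the diagonal

# No ranking can beat the oracle on any prefix (rearrangement argument),
# and an informative model should sit above the diagonal on average.
assert np.all(model <= oracle + 1e-12)
assert model.mean() >= chance.mean()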

Other

  • document the new feature with a new entry under doc/whats_new/upcoming_changes/
  • update the user guide (doc/visualization) to reference this new tool.
    • done here.
    • joseph: One thing I find strange is that the CAP curve is the only one where all the letters of the acronym are in capitals. Is this intentional? There's DetCurveDisplay, RocCurveDisplay and then CAPCurveDisplay, which seems inconsistent.

Nice to have

github-actions bot commented May 7, 2024

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: 6ec1f40. Link to the linter CI: here

@lorentzenchr changed the title from "add cap curve main class and tests" to "ENH add CAP curve" on May 11, 2024
@lorentzenchr (Member) commented:

@JosephBARBIERDARNAL Could you please address all the comments from #28752?

@JosephBARBIERDARNAL (Contributor, Author) commented:

Yes, I'm ready to do it, but I won't be able to get back to it quickly (2 to 3 months), unfortunately. Hope that's okay.

@JosephBARBIERDARNAL (Contributor, Author) commented Oct 6, 2024

Working on this PR again. Here are the current updates:

Some (likely non-exhaustive) issues:

  • Should the figure be square when normalize_scale=False? Currently, it does self.ax_.set([...], aspect="equal") in both cases, so the figure dimensions are "random" (i.e., unpredictable):
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

from sklearn.metrics import CapCurveDisplay

X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
y_scores = clf.decision_function(X_test)

# Plot the two settings side by side to compare the resulting aspect ratios
fig, ax = plt.subplots(ncols=2, dpi=300, figsize=(12, 12))

display = CapCurveDisplay.from_predictions(
    ax=ax[0],
    y_true=y_test,
    y_pred=y_scores,
    name='normalize_scale=False',
    normalize_scale=False,
    plot_chance_level=True
)

display = CapCurveDisplay.from_predictions(
    ax=ax[1],
    y_true=y_test,
    y_pred=y_scores,
    name='normalize_scale=True',
    normalize_scale=True,
    plot_chance_level=True
)

plt.show()

[Screenshot: CAP curves with normalize_scale=False (left) and normalize_scale=True (right)]

@glemaitre self-requested a review on November 3, 2024
@glemaitre (Member) commented:

I'll come back to this PR after the release. This will be one of my priorities to merge for 1.7.

@glemaitre added this to the 1.7 milestone on Nov 3, 2024
@JosephBARBIERDARNAL (Contributor, Author) commented:

> I'll come back to this PR after the release. This will be one of my priorities to merge for 1.7.

Ok cool. There are still a few things I need to do anyway before a review (coverage test and adding the #30023 check).


# compute cumulative sums for true positives and all cases
y_true_cumulative = np.cumsum(y_true_sorted * sample_weight_sorted)
cumulative_total = np.cumsum(sample_weight_sorted)
@ogrisel (Member) commented on Nov 25, 2024:

For information, there was a concurrent PR to fix the missing cumsum of the sample_weight used to define the x-axis of the Lorenz curves in one of our regression examples. To confirm that this was the correct fix, we ran a quick check on synthetic data with integer-valued sample weights, verifying an exact equivalence between repeating data points and reweighting them by exposure:

#30198 (comment)

Maybe this PR could be expanded to test that this property also holds for CapCurveDisplay.from_predictions with non-constant, integer-valued sample_weight.

EDIT: I am not entirely sure how to write such a test, but possibly we could np.interp the CAP curve computed on the repeated data at the x-axis locations of the points of the weighted CAP curve, and check that the two curves match up to a small eps at those locations.
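One possible shape for such a test, assuming the cumulative sums quoted above define the curve (the cap_curve helper here is hypothetical, not the PR's code):

import numpy as np

def cap_curve(y_true, y_pred, sample_weight):
    # hypothetical helper mirroring the cumsum logic quoted above
    order = np.argsort(y_pred)[::-1]
    yt, sw = y_true[order], sample_weight[order]
    return np.cumsum(sw) / np.sum(sw), np.cumsum(yt * sw) / np.sum(yt * sw)

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=100)
y_pred = rng.normal(size=100)
w = rng.integers(1, 5, size=100)  # non-constant, integer-valued weights

# curve computed with weights vs. curve computed on repeated data points
xw, yw = cap_curve(y_true, y_pred, w.astype(float))
xr, yr = cap_curve(np.repeat(y_true, w), np.repeat(y_pred, w), np.ones(w.sum()))

# interpolate the repeated-data curve at the weighted curve's x locations
np.testing.assert_allclose(yw, np.interp(xw, xr, yr), atol=1e-12)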

@ogrisel (Member) left a comment:

There are several review comments in #28752 that have not yet been addressed.

I would like to make sure that we do not forget about them when iterating on this PR.

@JosephBARBIERDARNAL to sort things out, please reply in the threads of the review of #28752 and make it explicit which comments have been addressed in #28972 and how, and then mark those threads as resolved.

Also, we need some tests, and we should update an existing example to compare ROC, DET and CAP curves on the same classifier. I suppose this example is the best candidate:

https://scikit-learn.org/stable/auto_examples/model_selection/plot_det.html

@JosephBARBIERDARNAL (Contributor, Author) commented Nov 27, 2024

@ogrisel That sounds good to me. I won't be able to work on it immediately, but I’ll definitely be able to get to it within the next few weeks.

I'll ping you and/or @glemaitre for review

@lorentzenchr (Member) commented:

Two things are important for me:

  • Even if this PR focuses on binary classification, it should be implemented in a way such that (positive) regression can be added easily, in particular
    • without API hassle (note that it is predict_proba for classification but predict for regression; see the sketch after this list)
    • with the same class/function
  • The plot should contain the lines for "perfect" and "random" as in the screenshot from the paper in FEA add CumulativeAccuracyDisplay #28752 (comment).
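To make the first point concrete, here is a hypothetical sketch of how a single entry point could serve both estimator types; the helper name and the fallback order are assumptions, not this PR's actual code:

from sklearn.base import is_classifier

def _ranking_scores(estimator, X):
    # Classifiers rank by decision_function (or predict_proba as a fallback),
    # regressors rank by predict, so one display class can serve both.
    if is_classifier(estimator):
        if hasattr(estimator, "decision_function"):
            return estimator.decision_function(X)
        return estimator.predict_proba(X)[:, 1]  # assumes positive class in column 1
    return estimator.predict(X)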

@JosephBARBIERDARNAL (Contributor, Author) commented:

> The plot should contain the lines for "perfect" and "random" as in the screenshot from the paper in FEA add CumulativeAccuracyDisplay #28752 (comment).

  • Will this actually be visible? I'm not entirely sure how broad the possible shape of such a line could be. In my mind, it would resemble the "perfect" line in ROC curves, which suggests that this line would almost always span the spines of the Matplotlib Axes (except perhaps in cases with very low sample sizes?). However, we could adjust the x/y axis limits to accommodate this scenario.
  • If we decide to implement it, what should we name it, and what should its default style be?

@JosephBARBIERDARNAL (Contributor, Author) commented Jan 11, 2025

> @JosephBARBIERDARNAL to sort things out, please reply in the threads of the review of #28752 and make it explicit which comments have been addressed in #28972 and how, and then mark those threads as resolved.

I resolved most of the conversations there. I didn't touch some of them if I wasn't sure. Feel free to ping me if any changes are needed.

I'm just not sure what "make it explicit which comments have been addressed in https://github.com/scikit-learn/scikit-learn/pull/28972 and how" means

JosephBARBIERDARNAL and others added 12 commits on May 3, 2025 (co-authored by Olivier Grisel <olivier.grisel@ensta.org>)
@glemaitre (Member) left a comment:

Just a really quick review. I'll continue tomorrow.

sklearn/metrics/_plot/cap_curve.py (review thread resolved)
if sample_weight is None:
    sample_weight = np.ones_like(y_true, dtype=np.float64)

sorted_indices = np.argsort(y_pred)
Member:

I could not see this comment in the GitHub frontend, but it seems important to address.

sklearn/metrics/_plot/cap_curve.py (review thread resolved)
sklearn/metrics/_plot/cap_curve.py (review thread resolved)
@ArturoAmorQ (Member) left a comment:

Thanks for the PR @JosephBARBIERDARNAL. This new feature is definitely very valuable. Here's a batch of comments regarding the modified example.

@@ -4,17 +4,22 @@
====================================
@ArturoAmorQ (Member) commented on May 7, 2025:

I don't think this name for the example would be pertinent anymore. Maybe we can go with something like "Compare ROC, DET and CAP curves". Then we would have to rename the section a bit further below.

Contributor (Author):

Sounds good to me! Let me know when this is approved by others.

@ogrisel (Member) commented on May 13, 2025:

I agree the title needs to be updated.

I am not so sure about using the infinitive form of a verb in a title, though. Maybe "Comparing ROC, DET and CAP curves". Or "Visual assessment of the performance of binary classifiers with ROC, DET and CAP curves".

I also think we should add the Precision-Recall curve to that example in a follow-up PR.

Contributor (Author):

"Comparing ROC, DET and CAP curves" seems good to me. That what I've put, but no strong opinion.

examples/model_selection/plot_det.py (outdated review thread, resolved)
examples/model_selection/plot_det.py (outdated review thread, resolved)
ax=ax_roc,
name=name,
pos_label=pos_label,
plot_chance_level=is_last,
Member:

As we already have the DummyClassifier, this line overlaps with its curve. I think back in the day we decided that we'd better avoid mentioning "chance level" if possible.

Suggested change: remove the line plot_chance_level=is_last,

Contributor (Author):

see my comment

Member:

I would like to keep plot_chance_level=is_last for CAP and ROC.

It's interesting to see that the theoretical/expected "chance level" curve only approximately matches the empirical constant predictor curve for CAP, while the two match exactly for ROC (and DET, although we do not have that option yet for DET).

@ogrisel (Member) commented on May 13, 2025:

> we decided that we'd better avoid mentioning "chance level" if possible

Only in the text of the headers and paragraphs: I prefer discussing the concept with more precise phrasing first, and then defining what the thing we name "chance level" in the API and in the legend of the plot actually is.

examples/model_selection/plot_det.py (outdated review thread, resolved)
Comment on lines +178 to +180
# The diagonal black-dotted lines named "chance level" in the plots above
# correspond to the expected value of a non-informative classifier on an
# infinite evaluation set.
Member:

If we go with avoiding the term "chance level", I would rather remove this whole paragraph and only mention, in the paragraph about the DummyClassifier below, that non-informative classifiers can be used as a baseline.

Member:

This paragraph was added intentionally. The two curves (expected curve vs finite sample curve) do not match exactly for the CAP case (contrary to ROC and DET), hence the decision to make this explicit in the discussion.

Member:

The important word above is "expected" (the empirical curve converges to the expected curve in the large sample limit).

@JosephBARBIERDARNAL (Contributor, Author) commented May 7, 2025

A few things that I think need to be addressed:

  • One thing I find strange is that the CAP curve is the only one where all the letters of the acronym are in capitals. There's DetCurveDisplay, RocCurveDisplay and then CAPCurveDisplay, which seems inconsistent.
  • I'm not entirely sure I understand what is intended for chance_level naming in scikit-learn in general. I've seen a few threads mentioning avoiding using it or renaming it. If that's the case, wouldn't now be a good time to not use that name in this display?

@ogrisel (Member) commented May 13, 2025

> One thing I find strange is that the CAP curve is the only one where all the letters of the acronym are in capitals. There's DetCurveDisplay, RocCurveDisplay and then CAPCurveDisplay, which seems inconsistent.

I think we should rename to DETCurveDisplay and ROCCurveDisplay in a follow-up PR with backward compat aliases:

class ROCCurveDisplay(...):
    ...


# Backward compat alias to keep code implemented for scikit-learn 1.7 and earlier working.
RocCurveDisplay = ROCCurveDisplay

We could introduce a deprecation warning, but I am worried that this will break educational resources (e.g. code in blog posts, tutorials, books) for little benefit, so I would rather go with "soft" deprecation: rename and keep a long term backward compat alias without warnings.

But let's not do that as part of this PR and rather do the renaming/deprecation in a follow-up instead.

@ogrisel (Member) commented May 13, 2025

> I'm not entirely sure I understand what is intended for chance_level naming in scikit-learn in general. I've seen a few threads mentioning avoiding using it or renaming it. If that's the case, wouldn't now be a good time to not use that name in this display?

It's already part of the API, so we have to stick with it for the API. My earlier comment is not to put it in section headers or documentation paragraphs without explanations. I prefer introducing the concept with more precise terms such as "The expected curve of a non-informative or constant classifier whose predictions do not depend on the input features".

Note that for ROC and DET, the expected curve of the non-informative classifier in the infinite sample limit (what we name "chance level" in the API) matches exactly the curve of the constant predictor obtained on a finite-size test set. For CAP it's more subtle: the "chance level" line only approximately matches the curve of the constant predictor obtained on a finite-size test set; the two would only match in the large sample size limit. I think this is an interesting point and I would like to highlight it in this section of the example, as sketched below.
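A small self-contained sketch of that subtlety: for a constant predictor on a finite test set, the empirical CAP curve deviates from the diagonal, and the gap shrinks as the sample grows (illustrative only, not the example's code):

import numpy as np

rng = np.random.default_rng(0)
for n in (50, 500, 50_000):
    y_true = rng.integers(0, 2, size=n)
    order = np.argsort(np.zeros(n))[::-1]  # constant scores: arbitrary order
    x = np.arange(1, n + 1) / n
    y = np.cumsum(y_true[order]) / y_true.sum()
    print(n, np.max(np.abs(y - x)))  # max gap to the diagonal shrinks with n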

@JosephBARBIERDARNAL (Contributor, Author) commented:

I'm a bit short of time at the moment, but to sum up the TODO:

  • Process this comment. I think this requires one or more decisions, as well as corrections (x/y labels, plot, ...). It's not obvious to me why it would be better for the Lorenz curve to have the same order as the CAP curve by default, but I feel like I'm missing something and I think I need some time to look into it more closely.
  • Make sure the examples are fully updated and clear
  • Add a test to check the sample weight/repetition equivalence property ENH add CAP curve #28972 (comment)

Feel free to add any other missing elements.

Projects: In Progress
6 participants