
A bug in the inverse_transform of KernelPCA #18902

Closed
@kstoneriv3

Description


I have a question about the following line in KernelPCA. It appears in the reprojection of sample points in inverse_transform, but it seems unnecessary to me. Kernel ridge regression is used to (approximately) reproject the transformed samples (i.e., the coefficients of the principal components) back to the original space, using precomputed dual coefficients. Although we need to add alpha to the diagonal elements of the kernel matrix when computing the dual coefficients (as in _fit_inverse_transform), as far as I understand, we do not need to add alpha to the kernel at prediction time.

# adds alpha to the diagonal of the kernel matrix K built at prediction time
K.flat[::n_samples + 1] += self.alpha

Is this line really necessary? If so, I would appreciate it if someone could explain why this line is needed.
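For context, here is a minimal, self-contained sketch of standard kernel ridge regression as used for this kind of reprojection. It is not scikit-learn's actual KernelPCA code; the variable names, the RBF kernel, and the gamma/alpha values are illustrative. The point it shows is that alpha enters only when fitting the dual coefficients, while prediction uses the plain cross-kernel.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(20, 5)               # original samples
X_transformed = rng.randn(20, 2)   # stand-in for the KernelPCA projections

# Fitting: alpha is added to the diagonal of the training kernel matrix
# before solving for the dual coefficients (ridge regularization).
alpha = 1.0
K = rbf_kernel(X_transformed, X_transformed, gamma=0.1)
n_samples = K.shape[0]
K.flat[::n_samples + 1] += alpha
dual_coef = np.linalg.solve(K, X)  # dual coefficients of the regression

# Prediction: reprojecting new transformed points only needs the
# cross-kernel against the training points; no alpha is added here.
X_new_transformed = rng.randn(5, 2)
K_pred = rbf_kernel(X_new_transformed, X_transformed, gamma=0.1)
X_reconstructed = K_pred @ dual_coef
```

Under this reading, adding alpha to K in the prediction step perturbs the reconstruction of the training points themselves, which is what the question above is getting at.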

