Commit 9b7ff27

DOC improve learning-rate AdaBoost estimator (#19919)
1 parent dd7b7e5 commit 9b7ff27

File tree

1 file changed

sklearn/ensemble/_weight_boosting.py
+6 -6 lines changed (6 additions & 6 deletions)
@@ -313,9 +313,9 @@ class AdaBoostClassifier(ClassifierMixin, BaseWeightBoosting):
         In case of perfect fit, the learning procedure is stopped early.
 
     learning_rate : float, default=1.
-        Learning rate shrinks the contribution of each classifier by
-        ``learning_rate``. There is a trade-off between ``learning_rate`` and
-        ``n_estimators``.
+        Weight applied to each classifier at each boosting iteration. A higher
+        learning rate increases the contribution of each classifier. There is
+        a trade-off between the `learning_rate` and `n_estimators` parameters.
 
     algorithm : {'SAMME', 'SAMME.R'}, default='SAMME.R'
         If 'SAMME.R' then use the SAMME.R real boosting algorithm.
@@ -898,9 +898,9 @@ class AdaBoostRegressor(RegressorMixin, BaseWeightBoosting):
         In case of perfect fit, the learning procedure is stopped early.
 
     learning_rate : float, default=1.
-        Learning rate shrinks the contribution of each regressor by
-        ``learning_rate``. There is a trade-off between ``learning_rate`` and
-        ``n_estimators``.
+        Weight applied to each classifier at each boosting iteration. A higher
+        learning rate increases the contribution of each classifier. There is
+        a trade-off between the `learning_rate` and `n_estimators` parameters.
 
     loss : {'linear', 'square', 'exponential'}, default='linear'
         The loss function to use when updating the weights after each
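The reworded docstring describes learning_rate as a per-iteration weight on each estimator's contribution, traded off against n_estimators. A minimal sketch of that trade-off follows; the synthetic dataset and the parameter pairs are illustrative assumptions of this example, not part of the commit.

# Sketch of the learning_rate / n_estimators trade-off described in the
# updated docstring. Dataset and parameter values are illustrative
# assumptions, not part of this commit.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# A higher learning_rate applies a larger weight to each classifier at each
# boosting iteration, so fewer estimators are usually needed; a lower
# learning_rate typically needs more estimators to reach a similar fit.
for learning_rate, n_estimators in [(1.0, 50), (0.5, 100), (0.1, 500)]:
    clf = AdaBoostClassifier(
        n_estimators=n_estimators,
        learning_rate=learning_rate,
        random_state=0,
    )
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"learning_rate={learning_rate}, n_estimators={n_estimators}: "
          f"mean CV accuracy = {score:.3f}")

The same trade-off applies to the learning_rate parameter of AdaBoostRegressor, which this commit rewords in the second hunk.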

0 commit comments
