Commit 2b97ac5

DOC Correct typos in 10.3.3.2 Robustness (scikit-learn#30827)

1 parent 5c95ebe commit 2b97ac5
1 file changed: +2 -2 lines changed

doc/common_pitfalls.rst (2 additions, 2 deletions)
@@ -549,10 +549,10 @@ When we evaluate a randomized estimator performance by cross-validation, we
 want to make sure that the estimator can yield accurate predictions for new
 data, but we also want to make sure that the estimator is robust w.r.t. its
 random initialization. For example, we would like the random weights
-initialization of a :class:`~sklearn.linear_model.SGDClassifier` to be
+initialization of an :class:`~sklearn.linear_model.SGDClassifier` to be
 consistently good across all folds: otherwise, when we train that estimator
 on new data, we might get unlucky and the random initialization may lead to
-bad performance. Similarly, we want a random forest to be robust w.r.t the
+bad performance. Similarly, we want a random forest to be robust w.r.t. the
 set of randomly selected features that each tree will be using.

 For these reasons, it is preferable to evaluate the cross-validation
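
The passage above argues that a randomized estimator should score consistently not only across folds but also across random initializations. A minimal sketch of that check, not part of the commit, assuming a synthetic dataset and an illustrative range of seeds:

# Probe the robustness of SGDClassifier to its random initialization by
# repeating cross-validation under several explicit seeds.
# The dataset and the seed range are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

for seed in range(5):
    clf = SGDClassifier(random_state=seed)
    scores = cross_val_score(clf, X, y, cv=5)
    # A robust estimator shows a small spread across folds and across seeds.
    print(f"random_state={seed}: {scores.mean():.3f} +/- {scores.std():.3f}")

A small spread of mean scores across seeds supports reporting a single cross-validation estimate; a large spread suggests the estimator's performance depends heavily on its random initialization.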
