Commit 559609f

MAINT Fix several typos in src and doc files (#26187)

1 parent: 463d166
12 files changed: +15 −15 lines changed

‎doc/computing/computational_performance.rst

(1 addition, 1 deletion)

@@ -195,7 +195,7 @@ support vectors.
 .. centered:: |nusvr_model_complexity|

 For :mod:`sklearn.ensemble` of trees (e.g. RandomForest, GBT,
-ExtraTrees etc) the number of trees and their depth play the most
+ExtraTrees, etc.) the number of trees and their depth play the most
 important role. Latency and throughput should scale linearly with the number
 of trees. In this case we used directly the ``n_estimators`` parameter of
 :class:`~ensemble.GradientBoostingRegressor`.
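
The paragraph fixed above also claims that prediction latency scales
linearly with the number of trees. A minimal sketch of how one might
check that claim (dataset sizes and the values of ``n_estimators`` are
illustrative, not taken from the commit)::

    import time

    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    X, y = make_regression(n_samples=1000, n_features=20, random_state=0)

    # Prediction latency should grow roughly linearly with n_estimators.
    for n_estimators in (10, 100, 1000):
        model = GradientBoostingRegressor(n_estimators=n_estimators).fit(X, y)
        start = time.perf_counter()
        model.predict(X)
        print(n_estimators, time.perf_counter() - start)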

‎doc/developers/contributing.rst

(2 additions, 2 deletions)

@@ -548,8 +548,8 @@ message, the following actions are taken.
 [cd build gh] CD is run only for GitHub Actions
 [cd build cirrus] CD is run only for Cirrus CI
 [lint skip] Azure pipeline skips linting
-[scipy-dev] Build & test with our dependencies (numpy, scipy, etc ...) development builds
-[nogil] Build & test with the nogil experimental branches of CPython, Cython, NumPy, SciPy...
+[scipy-dev] Build & test with our dependencies (numpy, scipy, etc.) development builds
+[nogil] Build & test with the nogil experimental branches of CPython, Cython, NumPy, SciPy, ...
 [pypy] Build & test with PyPy
 [azure parallel] Run Azure CI jobs in parallel
 [float32] Run float32 tests by setting `SKLEARN_RUN_FLOAT32_TESTS=1`. See :ref:`environment_variable` for more details
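
These markers are parsed from the latest commit message, so a
contributor could, for example, push a commit whose message ends in
"MAINT tweak CI configuration [scipy-dev] [nogil]" (an illustrative
message, not one from this commit) to run both the scipy-dev and the
nogil builds.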

‎doc/getting_started.rst

(2 additions, 2 deletions)

@@ -37,8 +37,8 @@ The :term:`fit` method generally accepts 2 inputs:
   represented as rows and features are represented as columns.
 - The target values :term:`y` which are real numbers for regression tasks, or
   integers for classification (or any other discrete set of values). For
-  unsupervized learning tasks, ``y`` does not need to be specified. ``y`` is
-  usually 1d array where the ``i`` th entry corresponds to the target of the
+  unsupervised learning tasks, ``y`` does not need to be specified. ``y`` is
+  usually a 1d array where the ``i`` th entry corresponds to the target of the
   ``i`` th sample (row) of ``X``.

 Both ``X`` and ``y`` are usually expected to be numpy arrays or equivalent
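
The corrected passage describes ``y`` as a 1d array aligned row-by-row
with ``X``. A minimal sketch of that contract (the estimator choice is
illustrative)::

    from sklearn.ensemble import RandomForestClassifier

    X = [[1, 2, 3],    # 2 samples, 3 features
         [11, 12, 13]]
    y = [0, 1]  # 1d: y[i] is the target of the i-th row of X

    clf = RandomForestClassifier(random_state=0)
    clf.fit(X, y)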

‎doc/modules/cross_decomposition.rst

(1 addition, 1 deletion)

@@ -28,7 +28,7 @@ PLS draws similarities with `Principal Component Regression
 <https://en.wikipedia.org/wiki/Principal_component_regression>`_ (PCR), where
 the samples are first projected into a lower-dimensional subspace, and the
 targets `y` are predicted using `transformed(X)`. One issue with PCR is that
-the dimensionality reduction is unsupervized, and may lose some important
+the dimensionality reduction is unsupervised, and may lose some important
 variables: PCR would keep the features with the most variance, but it's
 possible that features with a small variances are relevant from predicting
 the target. In a way, PLS allows for the same kind of dimensionality
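
The fixed sentence contrasts PCR, whose projection ignores the target,
with PLS, whose projection uses it. A small sketch of that contrast
(the synthetic data and component counts are assumptions for
illustration)::

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.RandomState(0)
    X = rng.normal(size=(100, 5))
    y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

    # PCR projects X without looking at y; PLS picks directions that
    # also explain y, so it can keep low-variance but predictive features.
    pcr = make_pipeline(PCA(n_components=1), LinearRegression()).fit(X, y)
    pls = PLSRegression(n_components=1).fit(X, y)
    print(pcr.score(X, y), pls.score(X, y))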

‎doc/modules/feature_extraction.rst

(1 addition, 1 deletion)

@@ -846,7 +846,7 @@ Note that the dimensionality does not affect the CPU training time of
 algorithms which operate on CSR matrices (``LinearSVC(dual=True)``,
 ``Perceptron``, ``SGDClassifier``, ``PassiveAggressive``) but it does for
 algorithms that work with CSC matrices (``LinearSVC(dual=False)``, ``Lasso()``,
-etc).
+etc.).

 Let's try again with the default setting::
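
The note above separates estimators that consume CSR matrices from
those that need CSC. A minimal sketch showing that the hashing trick
emits CSR and that CSC work implies a format conversion whose cost
grows with the dimensionality (``n_features`` is an illustrative
value)::

    from sklearn.feature_extraction.text import HashingVectorizer

    corpus = ["the quick brown fox", "jumped over the lazy dog"]

    # The hashing trick is stateless: transform works without fit.
    X_csr = HashingVectorizer(n_features=2**10).transform(corpus)
    print(X_csr.shape, X_csr.format)  # (2, 1024) csr

    # CSC-based estimators (LinearSVC(dual=False), Lasso(), etc.)
    # require converting first.
    X_csc = X_csr.tocsc()
    print(X_csc.format)  # csc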

‎doc/modules/lda_qda.rst

(1 addition, 1 deletion)

@@ -137,7 +137,7 @@ Mathematical formulation of LDA dimensionality reduction
 First note that the K means :math:`\mu_k` are vectors in
 :math:`\mathcal{R}^d`, and they lie in an affine subspace :math:`H` of
 dimension at most :math:`K - 1` (2 points lie on a line, 3 points lie on a
-plane, etc).
+plane, etc.).

 As mentioned above, we can interpret LDA as assigning :math:`x` to the class
 whose mean :math:`\mu_k` is the closest in terms of Mahalanobis distance,
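
The statement above bounds the discriminant subspace by :math:`K - 1`.
A small sketch on a 3-class dataset (the dataset choice is
illustrative)::

    from sklearn.datasets import load_iris
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X, y = load_iris(return_X_y=True)  # K = 3 classes, d = 4 features

    # The 3 class means span an affine subspace of dimension at most
    # K - 1 = 2, so LDA can project onto at most 2 components.
    lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
    print(lda.transform(X).shape)  # (150, 2)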

‎sklearn/ensemble/_hist_gradient_boosting/splitting.pyx

(2 additions, 2 deletions)

@@ -499,9 +499,9 @@ cdef class Splitter:
             split_infos[split_info_idx].feature_idx = feature_idx

             # For each feature, find best bin to split on
-            # Start with a gain of -1 (if no better split is found, that
+            # Start with a gain of -1 if no better split is found, that
             # means one of the constraints isn't respected
-            # (min_samples_leaf, etc) and the grower will later turn the
+            # (min_samples_leaf, etc.) and the grower will later turn the
             # node into a leaf.
             split_infos[split_info_idx].gain = -1
             split_infos[split_info_idx].is_categorical = is_categorical[feature_idx]
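
The reworded comment describes a sentinel gain of -1 that marks "no
valid split". A simplified pure-Python paraphrase of that pattern (this
is not the Cython implementation; the function name and the
``min_gain`` stand-in for the real constraints are invented for
illustration)::

    def best_split_gain(candidate_gains, min_gain=0.0):
        """Return the best gain, or the -1 sentinel when nothing qualifies."""
        best = -1  # sentinel: no better split found yet
        for gain in candidate_gains:
            # Keep a candidate only if it beats the current best and
            # respects the constraints (min_samples_leaf, etc.).
            if gain > best and gain >= min_gain:
                best = gain
        return best  # a grower would turn the node into a leaf on -1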

‎sklearn/metrics/_classification.py

(1 addition, 1 deletion)

@@ -316,7 +316,7 @@ def confusion_matrix(
            [0, 0, 1],
            [1, 0, 2]])

-    In the binary case, we can extract true positives, etc as follows:
+    In the binary case, we can extract true positives, etc. as follows:

    >>> tn, fp, fn, tp = confusion_matrix([0, 1, 0, 1], [1, 1, 1, 0]).ravel()
    >>> (tn, fp, fn, tp)

‎sklearn/model_selection/tests/test_search.py

(1 addition, 1 deletion)

@@ -379,7 +379,7 @@ def test_no_refit():
        and hasattr(grid_search, "best_params_")
    )

-    # Make sure the functions predict/transform etc raise meaningful
+    # Make sure the functions predict/transform etc. raise meaningful
    # error messages
    for fn_name in (
        "predict",

‎sklearn/neural_network/_multilayer_perceptron.py

(1 addition, 1 deletion)

@@ -360,7 +360,7 @@ def _backprop(self, X, y, activations, deltas, coef_grads, intercept_grads):
         return loss, coef_grads, intercept_grads

     def _initialize(self, y, layer_units, dtype):
-        # set all attributes, allocate weights etc for first call
+        # set all attributes, allocate weights etc. for first call
         # Initialize parameters
         self.n_iter_ = 0
         self.t_ = 0
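
``_initialize`` is internal, but the bookkeeping attributes it sets are
public after ``fit``. A small sketch (the data and hyperparameters are
illustrative)::

    from sklearn.neural_network import MLPClassifier

    X = [[0.0, 0.0], [1.0, 1.0]]
    y = [0, 1]

    # On the first call to fit, _initialize sets n_iter_ and t_ and
    # allocates the weight matrices exposed as coefs_.
    clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=50).fit(X, y)
    print(clf.n_iter_, clf.t_, len(clf.coefs_))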

‎sklearn/utils/tests/test_class_weight.py

(1 addition, 1 deletion)

@@ -274,7 +274,7 @@ def test_compute_sample_weight_more_than_32():
    assert_array_almost_equal(weight, np.ones(y.shape[0]))


-def test_class_weight_does_not_contains_more_classses():
+def test_class_weight_does_not_contains_more_classes():
    """Check that class_weight can contain more labels than in y.

    Non-regression test for #22413
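
Per the docstring, ``class_weight`` may name classes that never occur
in ``y``. A minimal sketch in the spirit of the test (the data values
are illustrative)::

    from sklearn.tree import DecisionTreeClassifier

    X = [[0, 0, 1], [1, 0, 1], [1, 2, 0]]
    y = [0, 0, 1]  # class 2 never appears in y

    # Must not raise even though class_weight mentions class 2.
    tree = DecisionTreeClassifier(class_weight={0: 1, 1: 10, 2: 20})
    tree.fit(X, y)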

‎sklearn/utils/tests/test_estimator_html_repr.py

(1 addition, 1 deletion)

@@ -205,7 +205,7 @@ def test_estimator_html_repr_pipeline():


 @pytest.mark.parametrize("final_estimator", [None, LinearSVC()])
-def test_stacking_classsifer(final_estimator):
+def test_stacking_classifier(final_estimator):
     estimators = [
         ("mlp", MLPClassifier(alpha=0.001)),
         ("tree", DecisionTreeClassifier()),
