Commit a1cd90c

STYLE: Fix emphasis vs definitions style guide compliance (Issue #721) (#723)
* Fix emphasis vs definitions in linear_algebra.md
  - Changed all first-use technical terms from italic to bold
  - Complies with QuantEcon style guide: bold for definitions, italic for emphasis
  - Related to issue #721

* Fix emphasis vs definitions in linear_models.md
  - Changed technical terms from italic to bold on first definition
  - Fixed: probability distributions, companion matrix, vector autoregression, deterministic/indeterministic seasonal, linear time trend, moving average, martingale with drift, unconditional mean/variance-covariance matrix, ensemble, cross-sectional average, autocovariance function, stationary distribution, covariance stationary, ergodicity, Markov property, conditional covariance matrix, discrete Lyapunov
  - Related to issue #721

* Fix emphasis vs definitions in lln_clt.md
  - Changed Kolmogorov's strong law from italic to bold (named theorem)
  - Changed variance-covariance matrix from italic to bold (definition)
  - Related to issue #721

* Fix emphasis vs definitions in markov_asset.md
  - Changed Gordon formula from italic to bold (named formula)
  - Changed Lucas tree model terms (tree, fruit, shares, dividend) to bold
  - Changed infinite horizon, call option, strike price to bold (definitions)
  - Note: kept 'exercises' and 'not to exercise' as italic (emphasis of choice)
  - Related to issue #721

* Fix emphasis vs definitions in markov_perf.md
  - Changed 'Markov perfect equilibrium' from italic to bold in formal definition
  - Related to issue #721

* Fix emphasis vs definitions in mccall_model.md
  - Changed 'values' from italic to bold when introducing the value functions concept
  - Note: 'Bellman equation', 'policy', 'reservation wage' already use bold
  - Related to issue #721

* Fix emphasis vs definitions in mle.md
  - Changed 'parametric class' from italic to bold (technical concept)
  - Changed 'Poisson regression' from italic to bold (named model)
  - Changed 'cumulative normal distribution' from italic to bold (technical term)
  - Note: 'maximum likelihood estimates' already uses bold
  - Related to issue #721

* Fix emphasis vs definitions in ols.md
  - Changed 'exogenous' from italic to bold (key econometric term)
  - Changed 'marginal effect' from italic to bold (technical definition)
  - Changed 'the sum of squared residuals' from italic to bold (OLS definition)
  - Note: 'omitted variable bias', 'multivariate regression model', 'endogeneity', 'two-stage least squares', 'instrument' already use bold
  - Related to issue #721

* Fix emphasis vs definitions in rational_expectations.md
  Changes per #721: rational expectations equilibrium (first introduction), perceived law of motion, actual law of motion, belief function, Euler equation, transversality condition, recursive competitive equilibrium, planning problem. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in re_with_feedback.md
  Changes per #721: backward shift (operator definition), lag (operator definition), forward shift (operator definition), explosive solution. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in samuelson.md
  Changes per #721: second-order linear difference equation, national output identity, consumption function, accelerator, accelerator coefficient, aggregate demand, aggregate supply, business cycles, stochastic linear difference equation, marginal propensity to consume, steady state, random, stochastic, shocks, disturbances, second-order scalar linear stochastic difference equation, characteristic polynomial, zeros, roots. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in sir_model.md
  Changes per #721: transmission rate, infection rate, recovery rate, effective reproduction number. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in uncertainty_traps.md
  Changes per #721: propagation mechanism. Term changed from italic to bold as it is a definition per style guide.

* Fix emphasis vs definitions in von_neumann_model.md
  Changes per #721: activities, goods, input matrix, output matrix, intensity, goods used in production, total outputs, productive, cost, revenue, costs, revenues, irreducibility. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in ak_aiyagari.md
  Changes per #721: Lifecycle patterns, Within-cohort heterogeneity, Cross-cohort interactions. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in ak2.md
  Changes per #721: numeraire. Term changed from italic to bold as it is a definition per style guide.

* Fix emphasis vs definitions in cake eating lectures
  Changes per #721: exogenous (cake_eating_egm.md), adapted (cake_eating_stochastic.md), state (cake_eating_stochastic.md), control (cake_eating_stochastic.md), topologically conjugate (cake_eating_time_iter.md). All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in career and cass_koopmans_1
  Changes per #721: career (career.md), job (career.md), aggregation theory (cass_koopmans_1.md). All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in likelihood_bayes.md
  Changes per #721: recursion, multiplicative decomposition. Terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in morris_learn.md
  Changes per #721: prior distributions, posterior distributions, speculative behavior, ex dividend, Short sales are prohibited, Harsanyi Common Priors Doctrine. All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in odu and opt_transport
  Changes per #721: reservation wage (odu.md), reservation wage functional equation (odu.md), matrix (opt_transport.md), vector (opt_transport.md). All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in kalman and ifp_advanced
  Changes per #721: prior (kalman.md), filtering distribution (kalman.md), predictive (kalman.md), Kalman gain (kalman.md), predictive distribution (kalman.md), savings (ifp_advanced.md). All terms changed from italic to bold as they are definitions per style guide.

* Fix emphasis vs definitions in cass_fiscal.md
  Changes per #721: Household (changed from italic to bold; also fixed typo 'Frim' to 'Firm'), Firm. Terms changed from italic to bold as they are economic agents being defined per style guide.

* Fix emphasis vs definitions in exchangeable.md
  Changes per #721: conditionally (as part of 'conditionally independently and identically distributed'). Term changed from italic to bold as it is a definition per style guide.

* Revert incorrect emphasis changes back to italic
  Per review feedback on #721:
  - ifp_advanced.md: 'savings' - emphasis on grid type, not a definition
  - exchangeable.md: 'conditionally' - alternate description, not a definition
  - cass_koopmans_1.md: 'aggregation theory' - referencing the theory, not defining it
  These should remain italic for emphasis, not bold for definitions.

* Revert incorrect emphasis-to-bold changes (batch 2)
  Reverted changes in 3 files where emphasis/contrast was incorrectly changed to bold:
  - opt_transport.md: matrix/vector (contrast between types)
  - cake_eating_egm.md: exogenous (contrast with endogenous)
  - ak_aiyagari.md: section headers (organizational emphasis)
  These are not formal definitions, so they should remain italic per style guide.

* Revert checked emphasis comments back to italic (batch 3)
  Based on PR review feedback with checked [x] emphasis comments, reverted the following terms from bold back to italic (emphasis, not definitions):
  - linear_models.md: ergodicity (concept emphasis)
  - markov_asset.md: tree, fruit, shares, dividend (metaphorical emphasis)
  - mccall_model.md: values (concept emphasis)
  - mle.md: parametric class (emphasis, not a definition)
  - morris_learn.md: prior/posterior distributions, speculative behavior, ex dividend, Short sales, Harsanyi Common Priors Doctrine (emphasis)
  - ols.md: exogenous, marginal effect (emphasis, not definitions)
  - rational_expectations.md: rational expectations equilibrium, perceived/actual law of motion (intro emphasis; formal definitions come later)
  - samuelson.md: second-order linear difference equation, national output identity, consumption function, accelerator, accelerator coefficient, aggregate demand/supply, random, stochastic, shocks, disturbances (emphasis, not definitions)
  These are emphasis on concepts or references, not formal definitions.

* Fix typos introduced during formatting changes
  - linear_algebra.md: removed extra '.*.' after 'square' and 'symmetric'
  - linear_algebra.md: removed extra '.l.' after 'diagonal'
  - sir_model.md: removed extra 'd)' after 'infected)'
  - von_neumann_model.md: removed extra '.).' after 'consumed)'
  - von_neumann_model.md: removed extra '****' after 'outputs'
  - von_neumann_model.md: fixed 'activitieses' to 'activities'
1 parent 96d7d7f commit a1cd90c
21 files changed: +112 -112 lines
lectures/ak2.md (+1 -1)

@@ -209,7 +209,7 @@ Units of the rental rates are:
 * for $r_t$, output at time $t$ per unit of capital at time $t$

-We take output at time $t$ as *numeraire*, so the price of output at time $t$ is one.
+We take output at time $t$ as **numeraire**, so the price of output at time $t$ is one.

 The firm's profits at time $t$ are

lectures/cake_eating_stochastic.md (+3 -3)

@@ -164,13 +164,13 @@ In summary, the agent's aim is to select a path $c_0, c_1, c_2, \ldots$ for cons
 1. nonnegative,
 1. feasible in the sense of {eq}`outcsdp0`,
 1. optimal, in the sense that it maximizes {eq}`texs0_og2` relative to all other feasible consumption sequences, and
-1. *adapted*, in the sense that the action $c_t$ depends only on
+1. **adapted**, in the sense that the action $c_t$ depends only on
    observable outcomes, not on future outcomes such as $\xi_{t+1}$.

 In the present context

-* $x_t$ is called the *state* variable --- it summarizes the "state of the world" at the start of each period.
-* $c_t$ is called the *control* variable --- a value chosen by the agent each period after observing the state.
+* $x_t$ is called the **state** variable --- it summarizes the "state of the world" at the start of each period.
+* $c_t$ is called the **control** variable --- a value chosen by the agent each period after observing the state.

 ### The Policy Function Approach

lectures/cake_eating_time_iter.md (+1 -1)

@@ -237,7 +237,7 @@ whenever $\sigma \in \mathscr P$.
 It is possible to prove that there is a tight relationship between iterates of
 $K$ and iterates of the Bellman operator.

-Mathematically, the two operators are *topologically conjugate*.
+Mathematically, the two operators are **topologically conjugate**.

 Loosely speaking, this means that if iterates of one operator converge then
 so do iterates of the other, and vice versa.

lectures/career.md (+2 -2)

@@ -66,8 +66,8 @@ from matplotlib import cm

 In what follows we distinguish between a career and a job, where

-* a *career* is understood to be a general field encompassing many possible jobs, and
-* a *job* is understood to be a position with a particular firm
+* a **career** is understood to be a general field encompassing many possible jobs, and
+* a **job** is understood to be a position with a particular firm

 For workers, wages can be decomposed into the contribution of job and career

lectures/cass_fiscal.md (+2 -2)

@@ -147,8 +147,8 @@ $$ (eq:gov_budget)

 Given a budget-feasible government policy $\{g_t\}_{t=0}^\infty$ and $\{\tau_{ct}, \tau_{kt}, \tau_{nt}, \tau_{ht}\}_{t=0}^\infty$ subject to {eq}`eq:gov_budget`,

-- *Household* chooses $\{c_t\}_{t=0}^\infty$, $\{n_t\}_{t=0}^\infty$, and $\{k_{t+1}\}_{t=0}^\infty$ to maximize utility{eq}`eq:utility` subject to budget constraint{eq}`eq:house_budget`, and
-- *Frim* chooses sequences of capital $\{k_t\}_{t=0}^\infty$ and $\{n_t\}_{t=0}^\infty$ to maximize profits
+- **Household** chooses $\{c_t\}_{t=0}^\infty$, $\{n_t\}_{t=0}^\infty$, and $\{k_{t+1}\}_{t=0}^\infty$ to maximize utility{eq}`eq:utility` subject to budget constraint{eq}`eq:house_budget`, and
+- **Firm** chooses sequences of capital $\{k_t\}_{t=0}^\infty$ and $\{n_t\}_{t=0}^\infty$ to maximize profits

 $$
 \sum_{t=0}^\infty q_t [F(k_t, n_t) - \eta_t k_t - w_t n_t]

lectures/kalman.md (+5 -5)

@@ -85,7 +85,7 @@ One way to summarize our knowledge is a point prediction $\hat x$
 * Then it is better to summarize our initial beliefs with a bivariate probability density $p$
 * $\int_E p(x)dx$ indicates the probability that we attach to the missile being in region $E$.

-The density $p$ is called our *prior* for the random variable $x$.
+The density $p$ is called our **prior** for the random variable $x$.

 To keep things tractable in our example, we assume that our prior is Gaussian.

@@ -317,7 +317,7 @@ We have obtained probabilities for the current location of the state (missile) g
 This is called "filtering" rather than forecasting because we are filtering
 out noise rather than looking into the future.

-* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the *filtering distribution*
+* $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ is called the **filtering distribution**

 But now let's suppose that we are given another task: to predict the location of the missile after one unit of time (whatever that may be) has elapsed.

@@ -331,7 +331,7 @@ Let's suppose that we have one, and that it's linear and Gaussian. In particular
 x_{t+1} = A x_t + w_{t+1}, \quad \text{where} \quad w_t \sim N(0, Q)
 ```

-Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new *predictive* distribution for the location in one unit of time.
+Our aim is to combine this law of motion and our current distribution $p(x \,|\, y) = N(\hat x^F, \Sigma^F)$ to come up with a new **predictive** distribution for the location in one unit of time.

 In view of {eq}`kl_xdynam`, all we have to do is introduce a random vector $x^F \sim N(\hat x^F, \Sigma^F)$ and work out the distribution of $A x^F + w$ where $w$ is independent of $x^F$ and has distribution $N(0, Q)$.

@@ -356,7 +356,7 @@ $$
 $$

 The matrix $A \Sigma G' (G \Sigma G' + R)^{-1}$ is often written as
-$K_{\Sigma}$ and called the *Kalman gain*.
+$K_{\Sigma}$ and called the **Kalman gain**.

 * The subscript $\Sigma$ has been added to remind us that $K_{\Sigma}$ depends on $\Sigma$, but not $y$ or $\hat x$.

@@ -373,7 +373,7 @@ Our updated prediction is the density $N(\hat x_{new}, \Sigma_{new})$ where
 \end{aligned}
 ```

-* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the *predictive distribution*
+* The density $p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$ is called the **predictive distribution**

 The predictive distribution is the new density shown in the following figure, where
 the update has used parameters.
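The Kalman gain defined in the hunk above is a plain matrix expression, so it can be checked numerically. The sketch below uses hypothetical values for $A$, $G$, $Q$, $R$, and $\Sigma$ (none are from the lecture), and the covariance update applied at the end is the standard predictive-step formula, which the excerpt elides:

```python
import numpy as np

# Hypothetical parameter values for illustration only
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # state transition matrix
G = np.array([[1.0, 0.0]])        # observation matrix
Q = 0.1 * np.eye(2)               # state noise covariance
R = np.array([[0.5]])             # measurement noise covariance
Sigma = np.eye(2)                 # current state covariance

# Kalman gain: K_Sigma = A Sigma G' (G Sigma G' + R)^{-1}
K = A @ Sigma @ G.T @ np.linalg.inv(G @ Sigma @ G.T + R)

# Standard predictive covariance update (assumed, not shown in the excerpt):
# Sigma_new = A Sigma A' - K G Sigma A' + Q
Sigma_new = A @ Sigma @ A.T - K @ G @ Sigma @ A.T + Q

print(K)          # gain depends on Sigma only, not on y or x_hat
print(Sigma_new)  # covariance of the predictive distribution
```

Note that, as the lecture's bullet point says, the gain is a function of $\Sigma$ alone: neither the observation $y$ nor the point estimate $\hat x$ appears in the formula.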

lectures/likelihood_bayes.md (+2 -2)

@@ -129,8 +129,8 @@ $$
 where we use the conventions
 that $f(w^t) = f(w_1) f(w_2) \ldots f(w_t)$ and $g(w^t) = g(w_1) g(w_2) \ldots g(w_t)$.

-Notice that the likelihood process satisfies the *recursion* or
-*multiplicative decomposition*
+Notice that the likelihood process satisfies the **recursion** or
+**multiplicative decomposition**

 $$
 L(w^t) = \ell (w_t) L (w^{t-1}) .
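The multiplicative decomposition above is easy to verify numerically: the cumulative likelihood ratio $L(w^t)$ is just the running product of the per-period ratios $\ell(w_t) = f(w_t)/g(w_t)$. A minimal sketch, with $f$ and $g$ chosen as illustrative normal densities (not the lecture's particular choices):

```python
import numpy as np
from scipy.stats import norm

# Illustrative density pair; any two densities with common support work
f = norm(loc=0.0, scale=1.0).pdf
g = norm(loc=0.5, scale=1.0).pdf

rng = np.random.default_rng(0)
w = rng.normal(size=10)            # a sample path w_1, ..., w_10

ell = f(w) / g(w)                  # per-period likelihood ratios ell(w_t)
L = np.cumprod(ell)                # L(w^t) built via the recursion

# Direct computation agrees: L(w^t) = f(w^t) / g(w^t)
L_direct = np.cumprod(f(w)) / np.cumprod(g(w))
assert np.allclose(L, L_direct)
```

The equality holds term by term because multiplying one more factor $\ell(w_t)$ onto $L(w^{t-1})$ is exactly what `np.cumprod` does at each step.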

lectures/linear_algebra.md (+32 -32)

@@ -85,7 +85,7 @@ from scipy.linalg import inv, solve, det, eig
 ```{index} single: Linear Algebra; Vectors
 ```

-A *vector* of length $n$ is just a sequence (or array, or tuple) of $n$ numbers, which we write as $x = (x_1, \ldots, x_n)$ or $x = [x_1, \ldots, x_n]$.
+A **vector** of length $n$ is just a sequence (or array, or tuple) of $n$ numbers, which we write as $x = (x_1, \ldots, x_n)$ or $x = [x_1, \ldots, x_n]$.

 We will write these sequences either horizontally or vertically as we please.

@@ -225,15 +225,15 @@ x + y
 ```{index} single: Vectors; Norm
 ```

-The *inner product* of vectors $x,y \in \mathbb R ^n$ is defined as
+The **inner product** of vectors $x,y \in \mathbb R ^n$ is defined as

 $$
 x' y := \sum_{i=1}^n x_i y_i
 $$

-Two vectors are called *orthogonal* if their inner product is zero.
+Two vectors are called **orthogonal** if their inner product is zero.

-The *norm* of a vector $x$ represents its "length" (i.e., its distance from the zero vector) and is defined as
+The **norm** of a vector $x$ represents its "length" (i.e., its distance from the zero vector) and is defined as

 $$
 \| x \| := \sqrt{x' x} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}

@@ -273,7 +273,7 @@ np.linalg.norm(x) # Norm of x, take three

 Given a set of vectors $A := \{a_1, \ldots, a_k\}$ in $\mathbb R ^n$, it's natural to think about the new vectors we can create by performing linear operations.

-New vectors created in this manner are called *linear combinations* of $A$.
+New vectors created in this manner are called **linear combinations** of $A$.

 In particular, $y \in \mathbb R ^n$ is a linear combination of $A := \{a_1, \ldots, a_k\}$ if

@@ -282,9 +282,9 @@ y = \beta_1 a_1 + \cdots + \beta_k a_k
 \text{ for some scalars } \beta_1, \ldots, \beta_k
 $$

-In this context, the values $\beta_1, \ldots, \beta_k$ are called the *coefficients* of the linear combination.
+In this context, the values $\beta_1, \ldots, \beta_k$ are called the **coefficients** of the linear combination.

-The set of linear combinations of $A$ is called the *span* of $A$.
+The set of linear combinations of $A$ is called the **span** of $A$.

 The next figure shows the span of $A = \{a_1, a_2\}$ in $\mathbb R ^3$.

@@ -349,7 +349,7 @@ plt.show()
 If $A$ contains only one vector $a_1 \in \mathbb R ^2$, then its
 span is just the scalar multiples of $a_1$, which is the unique line passing through both $a_1$ and the origin.

-If $A = \{e_1, e_2, e_3\}$ consists of the *canonical basis vectors* of $\mathbb R ^3$, that is
+If $A = \{e_1, e_2, e_3\}$ consists of the **canonical basis vectors** of $\mathbb R ^3$, that is

 $$
 e_1 :=

@@ -399,8 +399,8 @@ The condition we need for a set of vectors to have a large span is what's called

 In particular, a collection of vectors $A := \{a_1, \ldots, a_k\}$ in $\mathbb R ^n$ is said to be

-* *linearly dependent* if some strict subset of $A$ has the same span as $A$.
-* *linearly independent* if it is not linearly dependent.
+* **linearly dependent** if some strict subset of $A$ has the same span as $A$.
+* **linearly independent** if it is not linearly dependent.

 Put differently, a set of vectors is linearly independent if no vector is redundant to the span and linearly dependent otherwise.

@@ -469,19 +469,19 @@ Often, the numbers in the matrix represent coefficients in a system of linear eq

 For obvious reasons, the matrix $A$ is also called a vector if either $n = 1$ or $k = 1$.

-In the former case, $A$ is called a *row vector*, while in the latter it is called a *column vector*.
+In the former case, $A$ is called a **row vector**, while in the latter it is called a **column vector**.

-If $n = k$, then $A$ is called *square*.
+If $n = k$, then $A$ is called **square**.

-The matrix formed by replacing $a_{ij}$ by $a_{ji}$ for every $i$ and $j$ is called the *transpose* of $A$ and denoted $A'$ or $A^{\top}$.
+The matrix formed by replacing $a_{ij}$ by $a_{ji}$ for every $i$ and $j$ is called the **transpose** of $A$ and denoted $A'$ or $A^{\top}$.

-If $A = A'$, then $A$ is called *symmetric*.
+If $A = A'$, then $A$ is called **symmetric**.

-For a square matrix $A$, the $i$ elements of the form $a_{ii}$ for $i=1,\ldots,n$ are called the *principal diagonal*.
+For a square matrix $A$, the $i$ elements of the form $a_{ii}$ for $i=1,\ldots,n$ are called the **principal diagonal**.

-$A$ is called *diagonal* if the only nonzero entries are on the principal diagonal.
+$A$ is called **diagonal** if the only nonzero entries are on the principal diagonal.

-If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then $A$ is called the *identity matrix* and denoted by $I$.
+If, in addition to being diagonal, each element along the principal diagonal is equal to 1, then $A$ is called the **identity matrix** and denoted by $I$.

 ### Matrix Operations

@@ -641,9 +641,9 @@ See [here](https://python-programming.quantecon.org/numpy.html#matrix-multiplica

 Each $n \times k$ matrix $A$ can be identified with a function $f(x) = Ax$ that maps $x \in \mathbb R ^k$ into $y = Ax \in \mathbb R ^n$.

-These kinds of functions have a special property: they are *linear*.
+These kinds of functions have a special property: they are **linear**.

-A function $f \colon \mathbb R ^k \to \mathbb R ^n$ is called *linear* if, for all $x, y \in \mathbb R ^k$ and all scalars $\alpha, \beta$, we have
+A function $f \colon \mathbb R ^k \to \mathbb R ^n$ is called **linear** if, for all $x, y \in \mathbb R ^k$ and all scalars $\alpha, \beta$, we have

 $$
 f(\alpha x + \beta y) = \alpha f(x) + \beta f(y)

@@ -773,7 +773,7 @@ In particular, the following are equivalent
 1. The columns of $A$ are linearly independent.
 1. For any $y \in \mathbb R ^n$, the equation $y = Ax$ has a unique solution.

-The property of having linearly independent columns is sometimes expressed as having *full column rank*.
+The property of having linearly independent columns is sometimes expressed as having **full column rank**.

 #### Inverse Matrices

@@ -788,7 +788,7 @@ solution is $x = A^{-1} y$.
 A similar expression is available in the matrix case.

 In particular, if square matrix $A$ has full column rank, then it possesses a multiplicative
-*inverse matrix* $A^{-1}$, with the property that $A A^{-1} = A^{-1} A = I$.
+**inverse matrix** $A^{-1}$, with the property that $A A^{-1} = A^{-1} A = I$.

 As a consequence, if we pre-multiply both sides of $y = Ax$ by $A^{-1}$, we get $x = A^{-1} y$.

@@ -800,11 +800,11 @@ This is the solution that we're looking for.
 ```

 Another quick comment about square matrices is that to every such matrix we
-assign a unique number called the *determinant* of the matrix --- you can find
+assign a unique number called the **determinant** of the matrix --- you can find
 the expression for it [here](https://en.wikipedia.org/wiki/Determinant).

 If the determinant of $A$ is not zero, then we say that $A$ is
-*nonsingular*.
+**nonsingular**.

 Perhaps the most important fact about determinants is that $A$ is nonsingular if and only if $A$ is of full column rank.

@@ -929,8 +929,8 @@ $$
 A v = \lambda v
 $$

-then we say that $\lambda$ is an *eigenvalue* of $A$, and
-$v$ is an *eigenvector*.
+then we say that $\lambda$ is an **eigenvalue** of $A$, and
+$v$ is an **eigenvector**.

 Thus, an eigenvector of $A$ is a vector such that when the map $f(x) = Ax$ is applied, $v$ is merely scaled.

@@ -1034,7 +1034,7 @@ to one.

 ### Generalized Eigenvalues

-It is sometimes useful to consider the *generalized eigenvalue problem*, which, for given
+It is sometimes useful to consider the **generalized eigenvalue problem**, which, for given
 matrices $A$ and $B$, seeks generalized eigenvalues
 $\lambda$ and eigenvectors $v$ such that

@@ -1076,10 +1076,10 @@ $$
 $$

 The norms on the right-hand side are ordinary vector norms, while the norm on
-the left-hand side is a *matrix norm* --- in this case, the so-called
-*spectral norm*.
+the left-hand side is a **matrix norm** --- in this case, the so-called
+**spectral norm**.

-For example, for a square matrix $S$, the condition $\| S \| < 1$ means that $S$ is *contractive*, in the sense that it pulls all vectors towards the origin [^cfn].
+For example, for a square matrix $S$, the condition $\| S \| < 1$ means that $S$ is **contractive**, in the sense that it pulls all vectors towards the origin [^cfn].

 (la_neumann)=
 #### {index}`Neumann's Theorem <single: Neumann's Theorem>`

@@ -1112,7 +1112,7 @@ $$
 \rho(A) = \lim_{k \to \infty} \| A^k \|^{1/k}
 $$

-Here $\rho(A)$ is the *spectral radius*, defined as $\max_i |\lambda_i|$, where $\{\lambda_i\}_i$ is the set of eigenvalues of $A$.
+Here $\rho(A)$ is the **spectral radius**, defined as $\max_i |\lambda_i|$, where $\{\lambda_i\}_i$ is the set of eigenvalues of $A$.

 As a consequence of Gelfand's formula, if all eigenvalues are strictly less than one in modulus,
 there exists a $k$ with $\| A^k \| < 1$.

@@ -1128,8 +1128,8 @@ Let $A$ be a symmetric $n \times n$ matrix.

 We say that $A$ is

-1. *positive definite* if $x' A x > 0$ for every $x \in \mathbb R ^n \setminus \{0\}$
-1. *positive semi-definite* or *nonnegative definite* if $x' A x \geq 0$ for every $x \in \mathbb R ^n$
+1. **positive definite** if $x' A x > 0$ for every $x \in \mathbb R ^n \setminus \{0\}$
+1. **positive semi-definite** or **nonnegative definite** if $x' A x \geq 0$ for every $x \in \mathbb R ^n$

 Analogous definitions exist for negative definite and negative semi-definite matrices.
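Several of the definitions bolded in this file (inner product, orthogonality, norm, symmetry, spectral radius, nonsingularity) can be checked numerically in a few lines. A sketch with small illustrative vectors and matrices of my own choosing, not examples from the lecture:

```python
import numpy as np
from numpy.linalg import norm, eig, inv

# Inner product and orthogonality: x'y = sum_i x_i y_i
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 2.0, -3.0])
print(x @ y)          # inner product; here 2 + 4 - 6 = 0, so x and y are orthogonal
print(norm(x))        # norm = sqrt(x'x) = sqrt(1 + 4 + 4) = 3

# A symmetric matrix (A == A'), its eigenvalues, and its spectral radius
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = eig(A)
rho = max(abs(eigvals))              # spectral radius: max_i |lambda_i|
print(sorted(eigvals.real))          # eigenvalues of A are 1 and 3

# A has nonzero determinant, so it is nonsingular and A A^{-1} = I
print(np.allclose(A @ inv(A), np.eye(2)))
```

Each printed quantity mirrors one of the bolded definitions: the zero inner product witnesses orthogonality, the eigenvalue with largest modulus gives the spectral radius, and the inverse check confirms nonsingularity.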
