The following is from High-Dimensional Statistics: A Non-Asymptotic Viewpoint by Wainwright.

Throughout, all matrices are symmetric matrices in $\mathbb{R}^{d \times d}$. For a matrix $A$, let $\lVert A \rVert_2$ denote its largest singular value. We write $A \preceq B$ if $B - A$ is positive semi-definite. For a symmetric matrix $A$ with eigendecomposition $A = U D U^\top$ and a function $f \colon \mathbb{R} \to \mathbb{R}$, define $f(A) = U f(D) U^\top$, where $f(D)$ is the diagonal matrix obtained by applying $f$ to the diagonal entries of $D$.
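
As a quick sanity check on this matrix-function definition (a minimal NumPy sketch of my own; `matrix_function` is just an illustrative name, not from the book):

```python
import numpy as np
from scipy.linalg import expm

def matrix_function(A, f):
    """Apply a scalar function f to a symmetric matrix A via its
    eigendecomposition A = U D U^T, returning U f(D) U^T."""
    eigvals, U = np.linalg.eigh(A)   # eigh assumes A is symmetric
    return U @ np.diag(f(eigvals)) @ U.T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # symmetrize

# The eigendecomposition definition of exp(A) should agree with
# scipy's matrix exponential.
print(np.allclose(matrix_function(A, np.exp), expm(A)))  # True
```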

A zero-mean symmetric random matrix $Q \in \mathcal{S}^{d \times d}$ is sub-Gaussian with matrix parameter $V \in \mathcal{S}^{d \times d}_+$ if

$$\Psi_Q(\lambda) := \mathbb{E}\left[e^{\lambda Q}\right] \preceq e^{\frac{\lambda^2 V}{2}}, \quad \forall \lambda \in \mathbb{R}.$$
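
For concreteness, one standard example of this definition (not from the excerpt above, but I believe it is correct): if $\varepsilon$ is a Rademacher sign and $A \in \mathcal{S}^{d \times d}$ is fixed, then $Q = \varepsilon A$ is zero-mean and

$$\mathbb{E}\left[e^{\lambda \varepsilon A}\right] = \frac{e^{\lambda A} + e^{-\lambda A}}{2} = \cosh(\lambda A) \preceq e^{\frac{\lambda^2 A^2}{2}},$$

since $\cosh(\lambda A)$ and $e^{\lambda^2 A^2 / 2}$ are diagonalized by the same eigenbasis as $A$ and $\cosh(x) \leq e^{x^2/2}$ for all real $x$; hence $Q$ is sub-Gaussian with matrix parameter $V = A^2$.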

We want to prove the following theorem:

Let $Q_1, \ldots, Q_n$ be a sequence of zero-mean independent symmetric random matrices that satisfy the sub-Gaussian condition with parameters $V_1, \ldots, V_n$. Then, for all $\delta > 0$, we have the upper tail bound

\begin{align*} \mathbb{P}\left( \left\| \frac{1}{n} \sum_{i=1}^n Q_i \right\|_2 \geq \delta \right) \leq 2 \cdot \text{rank}\left( \sum_{i=1}^n V_i \right) \cdot \exp\left(-\frac{n\delta^2}{2\sigma^2}\right) \leq 2d \cdot \exp\left(-\frac{n\delta^2}{2\sigma^2}\right), \end{align*}

where $\sigma^2 = \left\| \frac{1}{n} \sum_{i=1}^n V_i \right\|_2$.
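
As a side note, here is a small Monte Carlo sanity check of this bound that I wrote, using the Rademacher construction $Q_i = \varepsilon_i A_i$ with $V_i = A_i^2$ from above (the dimensions and thresholds are arbitrary choices of mine; the empirical tail should always sit below the theorem's bound):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, trials, delta = 5, 200, 2000, 0.5

def sym(B):
    return (B + B.T) / 2

# Q_i = eps_i * A_i is sub-Gaussian with matrix parameter V_i = A_i^2.
A = [sym(rng.standard_normal((d, d))) for _ in range(n)]
V = sum(Ai @ Ai for Ai in A)
sigma2 = np.linalg.norm(V / n, 2)    # sigma^2 = ||(1/n) sum_i V_i||_2

norms = []
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)
    Q = sum(e * Ai for e, Ai in zip(eps, A)) / n
    norms.append(np.linalg.norm(Q, 2))

empirical = np.mean(np.array(norms) >= delta)
bound = 2 * np.linalg.matrix_rank(V) * np.exp(-n * delta**2 / (2 * sigma2))
print(empirical, bound)              # empirical tail <= theorem's bound
```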

I understand the proof in the case when $\sum_{i=1}^n V_i$ has full rank, but not the reduction used when $\operatorname{rank}\left(\sum_{i=1}^n V_i\right) < d$. The argument in the book extending the full-rank case to the rank-deficient case goes as follows:

Now suppose that the matrix $V =\sum_{i=1}^n V_i$ is not full-rank, say of rank $r < d$. In this case, an eigendecomposition yields $V = UDU^\top$, where $U \in \mathbb{R}^{d \times r}$ has orthonormal columns. Introducing the shorthand $Q = \sum_{i=1}^n Q_i$, the $r$-dimensional matrix $\tilde{Q} = U^\top QU$ then captures all randomness in $Q$, and in particular we have $\lVert \tilde{Q} \rVert_2 = \lVert Q\rVert_2$. We can thus apply the same argument to bound $\lVert \tilde{Q} \rVert_2$, leading to a pre-factor of $r$ instead of $d$.

First, it is not clear to me that $\lVert \tilde{Q} \rVert_2 = \lVert Q \rVert_2$ always holds. For example, take $U = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $Q = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 4 \end{pmatrix}$; then $\tilde{Q} = U^\top Q U = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, so $\lVert \tilde{Q} \rVert_2 = 3$ while $\lVert Q \rVert_2 = 4$. Second, how do we prove that the random matrix $U^\top Q_i U$ is sub-Gaussian with matrix parameter $U^\top V_i U$?
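
(For completeness, a quick numerical check of that example:)

```python
import numpy as np

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
Q = np.diag([2.0, 3.0, 4.0])

Q_tilde = U.T @ Q @ U                # = diag(2, 3)
print(np.linalg.norm(Q, 2))          # 4.0
print(np.linalg.norm(Q_tilde, 2))    # 3.0, not 4.0
```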

Thank you.

