What is their "Hilbert space" and how strong are the assumptions? #10
Replies: 1 comment 2 replies
Hey! The covariance is probably best defined as on the Wikipedia page https://en.wikipedia.org/wiki/Covariance_operator, so that one only deals with Lebesgue integrals; this was the first point of confusion for me. Then self-adjointness is immediate (and boundedness seems to be tacitly assumed). Regarding the operator, I do agree that it seems to impose strong requirements on the components of `z`.

I have a more basic question related to this that we also did not have time to get to yesterday: when is the infinite-dimensional case actually relevant in practice? I am vaguely familiar with kernel methods, where I know infinite-dimensional Hilbert spaces can show up, but is that related to the setup of the authors (infinite-dimensional covariates)?
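A minimal numerical sketch of that Wikipedia-style definition (my own, not from the paper): discretize functions on a grid so the Lebesgue integral becomes a Riemann sum, form the empirical covariance kernel, and check that the discretized operator is symmetric with a nonnegative spectrum and an orthonormal eigenbasis. The sample curves and grid here are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)   # grid on [0, 1]
dt = t[1] - t[0]

# Toy random functions x(t) = sum_k xi_k * sqrt(2) sin(k pi t), xi_k ~ N(0, 1/k^2).
n_samples, n_modes = 2000, 20
k = np.arange(1, n_modes + 1)
basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, t))   # (n_modes, n_grid)
xi = rng.normal(size=(n_samples, n_modes)) / k          # coefficients with decaying variance
X = xi @ basis                                          # sample curves, (n_samples, n_grid)

# Empirical covariance kernel c(s, u) = E[x(s) x(u)] (the mean is zero here).
C = X.T @ X / n_samples

# The covariance operator acts as (Kf)(s) = \int c(s, u) f(u) du ~ (C @ f) * dt.
f = np.cos(np.pi * t)
Kf = (C @ f) * dt

# Discretized operator: symmetric PSD, so real nonnegative spectrum
# and an orthonormal eigenbasis (the finite analogue of the compact
# self-adjoint case discussed in this thread).
evals, V = np.linalg.eigh(C * dt)
assert np.allclose(C, C.T)       # self-adjoint after discretization
assert evals.min() >= -1e-10     # spectrum nonnegative up to float error
```

This only illustrates the finite/discretized analogue; it says nothing about which infinite-dimensional assumptions the paper actually needs.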
Following up on my confusion from the seminar today: I just can't seem to understand how the paper is supposed to work out in the generality the authors claim.
So I am writing this post in the hope that someone can chime in or tell me where I am confused.
So we have random elements `x` in some Hilbert space. For every such element we can construct the tensor product `x x^T`, which by definition is the linear operator taking `y \mapsto x <x, y>`, where `<., .>` is the inner product of the Hilbert space. This operator is self-adjoint, since `<y, x x^T z> = <x, y><x, z> = <x x^T y, z>`. Its operator norm is bounded via the Cauchy-Schwarz inequality: `|x x^T v| / |v| <= |x|^2`. Then we compute its expectation via some suitable integral, `E[x x^T]`. I think (hope) that the result is still self-adjoint and bounded, though those details are beyond me. Boundedness alone does not give compactness, but if `E|x|^2 < \infty` the operator is trace class and hence compact. Since it is compact and self-adjoint, it has a countable spectrum with a corresponding orthonormal sequence of eigenvectors. The details can be found in many places, e.g. Theorem 2.39 in [1]. So I interpret the notation in the paper as if
`V^T` is a linear operator from the Hilbert space to its coordinate representation in `l^2`, the Hilbert space of square-summable real sequences. So the vectors `z` in the article are random elements of `l^2`.

Don't we have very strong assumptions about them now? Every realization of `z` must be square-summable, and its components must at the same time be independent.

I can come up with the following case: let the `n`th component of `z` have support in `[-1/n, 1/n]`. Then, at worst, a realization of `z` has squared norm `\sum_n 1/n^2`, which is finite. But this says a lot about how strongly the different eigenspaces of `E[x x^T]` are excited.
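To make that toy case concrete, here is a quick numerical check (my own sketch; the truncation level `N` is arbitrary) that a realization of such a `z` is square-summable, with squared norm bounded by `\sum_n 1/n^2 = \pi^2/6`:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                      # truncate the sequence at N terms
n = np.arange(1, N + 1)

# z_n drawn independently, uniform on [-1/n, 1/n]
z = rng.uniform(-1.0 / n, 1.0 / n)

sq_norm = np.sum(z**2)
bound = np.pi**2 / 6             # sum_{n>=1} 1/n^2 (Basel sum)
assert sq_norm <= bound          # holds for every realization, since |z_n| <= 1/n
```

The bound holds deterministically, not just on average, which is exactly why this example says so much about how the eigenspaces can be excited.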
Furthermore, they use the notation `\lambda^T z`, which means that `\lambda^T` and `z` live in dual spaces. And since `z` must be in `l^2`, `\lambda` must be too; and that is, according to the text, the Hilbert space `H`.

Notably, the Wikipedia text on separable Hilbert spaces notes that it used to be common to always refer to `l^2` as *the* Hilbert space. So I guess this is what they mean in the article...

tl;dr
- Do they just mean `l^2` when they say "Hilbert space"?
- Aren't there strong assumptions on `x`, coming from the fact that the components of `z` are sub-Gaussian and independent, while `z` itself must lie in `l^2`?

Footnotes
1. M. Carrasco, J.-P. Florens, and E. Renault, "Linear Inverse Problems in Structural Econometrics: Estimation Based on Spectral Decomposition and Regularization," Chapter 77 in Handbook of Econometrics, vol. 6, pp. 5633–5751, Elsevier, 2007.