Modern Stochastics: Theory and Applications
Malliavin–Stein method: a survey of some recent developments
Volume 8, Issue 2 (2021), pp. 141–177
Ehsan Azmoodeh, Giovanni Peccati, Xiaochuan Yang

https://doi.org/10.15559/21-VMSTA184
Pub. online: 22 June 2021 · Type: Research Article · Open Access

Received: 12 February 2021 · Revised: 28 May 2021 · Accepted: 10 June 2021 · Published: 22 June 2021

Abstract

Initiated around the year 2007, the Malliavin–Stein approach to probabilistic approximations combines Stein’s method with infinite-dimensional integration by parts formulae based on the use of Malliavin-type operators. In the last decade, Malliavin–Stein techniques have allowed researchers to establish new quantitative limit theorems in a variety of domains of theoretical and applied stochastic analysis. The aim of this survey is to illustrate some of the latest developments of the Malliavin–Stein method, with specific emphasis on extensions and generalizations in the framework of Markov semigroups and of random point measures.

1 Introduction and overview

The Malliavin–Stein method for probabilistic approximations was initiated in the paper [64], with the aim of providing a quantitative counterpart to the (one- and multi-dimensional) central limit theorems for random variables living in the Wiener chaos of a general separable Gaussian field. As formally discussed in the sections to follow, the basic idea of the approach initiated in [64] is that, in order to assess the discrepancy between some target law (Normal or Gamma, for instance), and the distribution of a nonlinear functional of a Gaussian field, one can fruitfully apply infinite-dimensional integration by parts formulae from the Malliavin calculus of variations [57, 66, 77, 78] to the general bounds associated with the so-called Stein’s method for probabilistic approximations [66, 23]. In particular, the Malliavin–Stein approach captures and amplifies the essence of [21], where Stein’s method was combined with finite-dimensional integration by parts formulae for Gaussian vectors, in order to deduce second order Poincaré inequalities – as applied to random matrix models with Gaussian-subordinated entries (see also [70, 96]).
We recall that, as initiated by P. Malliavin in the path-breaking reference [56], the Malliavin calculus is an infinite-dimensional differential calculus, whose operators act on smooth nonlinear functionals of Gaussian fields (or of more general probabilistic objects). As vividly described in the classical references [57, 77], as well as in the more recent books [66, 78], since its inception such a theory has generated a staggering number of applications, ranging, e.g., from mathematical physics to stochastic differential equations, and from mathematical finance to stochastic geometry (in particular, models involving stabilization, but also hyperplane, flat or cylinder processes), analysis on manifolds and mathematical statistics. On the other hand, the similarly successful and popular Stein’s method (as created by Ch. Stein in the classical reference [92] – see also the 1986 monograph [93]) is a collection of analytical techniques, allowing one to estimate the distance between the distributions of two random objects, by using characterizing differential operators (or difference operator in the case where the random variables of interest are discrete). The discovery in [64] that the two theories can be fruitfully combined has been a major breakthrough in the domain of probabilistic limit theorems and approximations.
Since the publication of [64], the Malliavin–Stein method has generated several hundreds of papers, with ramifications in many (often unexpected) directions, including functional inequalities, random matrix theory, stochastic geometry, noncommutative probability and computer sciences. Many of these developments largely exceed the scope of the present survey, and we invite the interested reader to consult the following general references (i)–(iii) for a more detailed presentation: (i) the webpage [1] is a constantly updated resource, listing all existing papers written around the Malliavin–Stein method; (ii) the monograph [66], written in 2012, contains a self-contained presentation of Malliavin calculus and Stein’s method, as applied to functionals of general Gaussian fields, with specific emphasis on random variables belonging to a fixed Wiener chaos; (iii) the text [81] is a collection of surveys, containing an in-depth presentation of variational techniques on the Poisson space (including the Malliavin–Stein method), together with their application to asymptotic problems arising in stochastic geometry. The following more specific references (a)–(c) point to some recent developments that we find particularly exciting and ripe for further developments: (a) the papers [58, 59, 68, 82, 85, 88, 94] provide a representative overview of applications of Malliavin–Stein techniques to the study of nodal sets associated with Gaussian random fields on two-dimensional manifolds; (b) the papers [62, 74] – and many of the references therein – display a pervasive use of Malliavin–Stein techniques to determine rates of convergence in total variation in the Breuer–Major Theorem; (c) references [19, 61] deal with the problem of tightness and functional convergence in the Breuer–Major theorem evoked at Point (b).
The aim of the present survey is twofold. On the one hand, we aim at presenting the essence of the Malliavin–Stein method for functionals of Gaussian fields, by discussing the crucial elements of Malliavin calculus and Stein’s method together with their interaction (see Section 2 and Section 3). On the other hand, we aim at introducing the reader to some of the most recent developments of the theory, with specific focus on the general theory of Markov semigroups in a diffusive setting (following the seminal references [52, 5], as well as [73, 53, 54]), and on integration by parts formulae (and associated operators) in the context of functionals of a random point measure [37, 38, 55, 49, 48, 90]. This corresponds to the content of Section 4 and Section 5, respectively. Finally, Section 6 deals with some recent results (and open problems) concerning ${\chi ^{2}}$ approximations.
From now on, every random object will be defined on a suitable common probability space $(\Omega ,\mathcal{F},\mathbb{P})$, with $\mathbb{E}$ indicating mathematical expectation with respect to $\mathbb{P}$. Throughout the paper, the symbol $\mathcal{N}(\mu ,{\sigma ^{2}})$ will be a shorthand for the one-dimensional Gaussian distribution with mean $\mu \in \mathbb{R}$ and variance ${\sigma ^{2}}>0$. In particular, $X\sim \mathcal{N}(\mu ,{\sigma ^{2}})$ if and only if
\[ \mathbb{P}[X\in A]={\int _{A}}{e^{-\frac{{(x-\mu )^{2}}}{2{\sigma ^{2}}}}}\frac{dx}{\sqrt{2\pi {\sigma ^{2}}}},\]
for every Borel set $A\subset \mathbb{R}$.

2 Elements of Stein’s method for normal approximations

In this section, we briefly introduce the main ingredients of Stein’s method for normal approximations in dimension one. The approximation will be performed with respect to the total variation and 1-Wasserstein distances between the distributions of two random variables; more detailed information about these distances can be found in [66, Appendix C] and the references therein.
The crucial intuition behind Stein’s method lies in the following heuristic reasoning: it is a well-known fact (see, e.g., Lemma 2.1-(e) below) that a random variable X has the standard $\mathcal{N}(0,1)$ distribution if and only if
(2.1)
\[ \mathbb{E}[Xf(X)-{f^{\prime }}(X)]=0,\]
for every smooth mapping $f:\mathbb{R}\to \mathbb{R}$; heuristically, it follows that, if X is a random variable such that the quantity $\mathbb{E}[Xf(X)-{f^{\prime }}(X)]$ is close to zero for a large class of test functions f, then the distribution of X should be close to Gaussian.
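As a quick numerical illustration of this heuristic, the following Python sketch (assuming NumPy; the test function $f=\tanh$ is an arbitrary convenient choice) estimates $\mathbb{E}[Xf(X)-{f^{\prime }}(X)]$ by Monte Carlo for a standard Gaussian sample and for a centered exponential sample: the first estimate is close to zero, while the second is markedly nonzero.

# Monte Carlo illustration of the Stein identity (2.1): the quantity
# E[X f(X) - f'(X)] vanishes for X ~ N(0,1) and is markedly nonzero for a
# non-Gaussian (here: centered exponential) random variable.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
f = np.tanh
f_prime = lambda x: 1.0 - np.tanh(x)**2

samples = {
    "standard Gaussian": rng.standard_normal(n),
    "centered exponential": rng.exponential(1.0, n) - 1.0,
}
for name, x in samples.items():
    print(f"{name:>22}: E[X f(X) - f'(X)] ~ {np.mean(x * f(x) - f_prime(x)):+.4f}")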
The fact that such a heuristic argument can be made rigorous and applied in a wide array of probabilistic models was the main discovery of Stein’s original contribution [92], where the foundations of Stein’s method were first laid. The reader is referred to Stein’s monograph [93], as well as the books [23, 66], for an exhaustive presentation of the theory and its applications (in particular, for extensions to multidimensional approximations).
We recall that the total variation distance, between the laws of two real-valued random variables F and G, is defined by
(2.2)
\[ {d_{TV}}(F,G):=\underset{B\in \mathcal{B}(\mathbb{R})}{\sup }\Big|\mathbb{P}[F\in B]-\mathbb{P}[G\in B]\Big|.\]
One has to note that the topology induced by the distance ${d_{TV}}$ – on the set of all probability measures on $\mathbb{R}$ – is stronger than the topology of convergence in distribution; one sometimes uses the following equivalent representation of ${d_{TV}}$ (see, e.g., [66, p. 213]):
(2.3)
\[\begin{array}{cc}& \displaystyle {d_{TV}}(F,G)\\ {} & \displaystyle =\frac{1}{2}\sup \Big\{\big|\mathbb{E}[h(F)]-\mathbb{E}[h(G)]\big|\hspace{0.1667em}:\hspace{0.1667em}h\hspace{2.5pt}\text{is Borel measurable and}\hspace{2.5pt}\| h{\| _{\infty }}\le 1\Big\}.\end{array}\]
The 1-Wasserstein distance ${d_{W}}$, between the distributions of two real-valued integrable random variables F and G, is given by
(2.4)
\[ {d_{W}}(F,G):=\underset{h\in \mathrm{Lip}(\mathrm{1})}{\sup }\Big|\mathbb{E}[h(F)]-\mathbb{E}[h(G)]\Big|,\]
where $\mathrm{Lip}(\mathrm{K})$, $K>0$, stands for the class of all Lipschitz mappings $h:\mathbb{R}\to \mathbb{R}$ such that h has a Lipschitz constant $\le K$. As for total variation, the topology induced by ${d_{W}}$ – on the set of all probability measures on $\mathbb{R}$ having a finite absolute first moment – is stronger than the topology of convergence in distribution; it is also interesting to recall the dual representation
(2.5)
\[ {d_{W}}(F,G)=\inf \mathbb{E}\hspace{0.1667em}\big|X-Y\big|,\]
where the infimum is taken over all couplings $(X,Y)$ of F and G; see, e.g., [97, p. 95] for a discussion of this fact.
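For one-dimensional empirical distributions, the representation (2.5) can be evaluated directly; the short Python sketch below (assuming SciPy) compares the sample-based 1-Wasserstein distance between draws from $\mathcal{N}(0,1)$ and $\mathcal{N}(0.3,1)$ with the exact value $0.3$ obtained from the shift coupling $Y=X+0.3$.

# Empirical illustration of (2.4)-(2.5): scipy.stats.wasserstein_distance computes
# the 1-Wasserstein distance between two one-dimensional empirical measures.
# For a pure location shift, the coupling Y = X + 0.3 is optimal, so d_W = 0.3.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)          # samples from N(0,1)
y = rng.standard_normal(200_000) + 0.3    # samples from N(0.3,1)
print(wasserstein_distance(x, y))         # close to the exact value 0.3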
The following classical result, whose complete proof can be found, e.g., in [66, p. 64 and p. 67], contains all the elements of Stein’s method that are needed for our discussion; as for many fundamental findings in the area, this result can be traced back to [92].
Lemma 2.1.
Let $N\sim \mathcal{N}(0,1)$ be a standard Gaussian random variable.
  • (a) Fix $h:\mathbb{R}\to [0,1]$, a Borel-measurable function. Define ${f_{h}}:\mathbb{R}\to \mathbb{R}$ as
    (2.6)
    \[ {f_{h}}(x):={e^{\frac{{x^{2}}}{2}}}{\int _{-\infty }^{x}}\{h(y)-\mathbb{E}[h(N)]\}{e^{-\frac{{y^{2}}}{2}}}dy,\hspace{1em}x\in \mathbb{R}.\]
    Then, ${f_{h}}$ is continuous on $\mathbb{R}$ with $\| {f_{h}}{\| _{\infty }}\le \sqrt{\frac{\pi }{2}}$ and ${f_{h}}\in \mathrm{Lip}(2)$. Moreover, there exists a version of ${f^{\prime }_{h}}$ verifying
    (2.7)
    \[ {f^{\prime }_{h}}(x)-x{f_{h}}(x)=h(x)-\mathbb{E}[h(N)],\hspace{1em}\textit{for all}\hspace{2.5pt}x\in \mathbb{R}.\]
  • (b) Consider $h:\mathbb{R}\to \mathbb{R}$ in $\mathrm{Lip}(1)$, and define ${f_{h}}:\mathbb{R}\to \mathbb{R}$ as in (2.6). Then, ${f_{h}}$ is of class ${C^{1}}$ on $\mathbb{R}$, with $\| {f^{\prime }_{h}}{\| _{\infty }}\le 1$ and ${f^{\prime }_{h}}\in \mathrm{Lip}(2)$, and ${f_{h}}$ solves (2.7).
  • (c) Let X be an integrable random variable. Then
    \[ {d_{TV}}(X,N)\le \underset{f}{\sup }\Big|\mathbb{E}\big[f(X)X-{f^{\prime }}(X)\big]\Big|\]
    where the supremum is taken over all pairs $(f,{f^{\prime }})$ such that f is a Lipschitz function whose absolute value is bounded by $\sqrt{\frac{\pi }{2}}$, and ${f^{\prime }}$ is a version of the derivative of f satisfying $\| {f^{\prime }}{\| _{\infty }}\le 2$.
  • (d) Let X be an integrable random variable. Then,
    \[ {d_{W}}(X,N)\le \underset{f}{\sup }\Big|\mathbb{E}\big[f(X)X-{f^{\prime }}(X)\big]\Big|\]
    where the supremum is taken over all ${C^{1}}$ functions $f:\mathbb{R}\to \mathbb{R}$ such that $\| {f^{\prime }}{\| _{\infty }}\le 2$ and ${f^{\prime }}\in \mathrm{Lip}(2)$.
  • (e) Let X be a general random variable. Then $X\sim \mathcal{N}(0,1)$ if and only if $\mathbb{E}[{f^{\prime }}(X)-Xf(X)]=0$ for every absolutely continuous function f such that $\mathbb{E}|{f^{\prime }}(N)|<+\infty $.
Sketch of the proof. Points (a) and (b) can be verified by a direct computation. Point (c) and Point (d) follow by plugging the left-hand side of (2.7) into (2.3) and (2.4), respectively. Finally, the fact that the relation $\mathbb{E}[{f^{\prime }}(X)-Xf(X)]=0$ implies that $X\sim \mathcal{N}(0,1)$ is a direct consequence of Point (c), whereas the reverse implication follows by an integration by parts argument.  □
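To make Lemma 2.1-(a) concrete, the following Python sketch (assuming NumPy/SciPy) builds ${f_{h}}$ from (2.6) for the Borel function $h={\mathbf{1}_{(-\infty ,0]}}$ (so that $\mathbb{E}[h(N)]=1/2$), using the normal CDF to evaluate the inner Gaussian integral, and checks the Stein equation (2.7) by finite differences away from the discontinuity point of h.

# Numerical check of Lemma 2.1-(a): construct f_h from (2.6) for h = 1_{(-inf,0]}
# and verify f_h'(x) - x f_h(x) = h(x) - 1/2 away from the jump of h at 0.
import numpy as np
from scipy.stats import norm

x = np.linspace(-5, 5, 20001)
h = (x <= 0).astype(float)
# int_{-inf}^{x} (h(y) - 1/2) e^{-y^2/2} dy = sqrt(2*pi) * (Phi(min(x,0)) - Phi(x)/2)
inner = np.sqrt(2 * np.pi) * (norm.cdf(np.minimum(x, 0.0)) - 0.5 * norm.cdf(x))
f_h = np.exp(x**2 / 2) * inner

lhs = np.gradient(f_h, x) - x * f_h                  # a version of f_h'(x) - x f_h(x)
rhs = h - 0.5                                        # h(x) - E[h(N)]
mask = (np.abs(x) < 4.5) & (np.abs(x) > 0.01)        # avoid the kink of f_h at 0
print(np.max(np.abs(lhs - rhs)[mask]))               # small discretization error
print(np.max(np.abs(f_h)) <= np.sqrt(np.pi / 2))     # the bound ||f_h||_inf <= sqrt(pi/2)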

3 Normal approximation with Stein’s method and Malliavin calculus

The first part of the present section contains some elements of Gaussian analysis and Malliavin calculus. The reader can consult, for instance, the references [66, 77, 57, 78] for further details. In Section 3.2 we will shortly explore the connection between Malliavin calculus and the version of Stein’s method presented in Section 2.

3.1 Isonormal processes, multiple integrals, and the Malliavin operators

Let $\mathfrak{H}$ be a real separable Hilbert space. For any $q\ge 1$, we write ${\mathfrak{H}^{\otimes q}}$ and ${\mathfrak{H}^{\odot q}}$ to indicate, respectively, the qth tensor power and the qth symmetric tensor power of $\mathfrak{H}$; we also set by convention ${\mathfrak{H}^{\otimes 0}}={\mathfrak{H}^{\odot 0}}=\mathbb{R}$. When $\mathfrak{H}={L^{2}}(A,\mathcal{A},\mu )=:{L^{2}}(\mu )$, where μ is a σ-finite and nonatomic measure on the measurable space $(A,\mathcal{A})$, then ${\mathfrak{H}^{\otimes q}}\simeq {L^{2}}({A^{q}},{\mathcal{A}^{q}},{\mu ^{q}})=:{L^{2}}({\mu ^{q}})$, and ${\mathfrak{H}^{\odot q}}\simeq {L_{s}^{2}}({A^{q}},{\mathcal{A}^{q}},{\mu ^{q}}):={L_{s}^{2}}({\mu ^{q}})$, where ${L_{s}^{2}}({\mu ^{q}})$ stands for the subspace of ${L^{2}}({\mu ^{q}})$ composed of those functions that are ${\mu ^{q}}$-almost everywhere symmetric. We denote by $W=\{W(h):h\in \mathfrak{H}\}$ an isonormal Gaussian process over $\mathfrak{H}$. This means that W is a centered Gaussian family with a covariance structure given by the relation $\mathbb{E}\left[W(h)W(g)\right]={\langle h,g\rangle _{\mathfrak{H}}}$. Without loss of generality, we can also assume that $\mathcal{F}=\sigma (W)$, that is, $\mathcal{F}$ is generated by W, and use the shorthand notation ${L^{2}}(\Omega ):={L^{2}}(\Omega ,\mathcal{F},\mathbb{P})$.
For every $q\ge 1$, the symbol ${C_{q}}$ stands for the qth Wiener chaos of W, defined as the closed linear subspace of ${L^{2}}(\Omega )$ generated by the family $\{{H_{q}}(W(h)):h\in \mathfrak{H},{\left\| h\right\| _{\mathfrak{H}}}=1\}$, where ${H_{q}}$ is the qth Hermite polynomial, defined as follows:
(3.1)
\[ {H_{q}}(x)={(-1)^{q}}{e^{\frac{{x^{2}}}{2}}}\frac{{d^{q}}}{d{x^{q}}}\big({e^{-\frac{{x^{2}}}{2}}}\big).\]
We write by convention ${C_{0}}=\mathbb{R}$. For any $q\ge 1$, the mapping ${I_{q}}({h^{\otimes q}})={H_{q}}(W(h))$ can be extended to a linear isometry between the symmetric tensor product ${\mathfrak{H}^{\odot q}}$ (equipped with the modified norm $\sqrt{q!}{\left\| \cdot \right\| _{{\mathfrak{H}^{\otimes q}}}}$) and the qth Wiener chaos ${C_{q}}$. For $q=0$, we write by convention ${I_{0}}(c)=c$, $c\in \mathbb{R}$.
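The normalization $\sqrt{q!}$ in the isometry above reflects the orthogonality relation $\mathbb{E}[{H_{p}}(N){H_{q}}(N)]={\delta _{p,q}}\hspace{0.1667em}p!$ for $N\sim \mathcal{N}(0,1)$; the following SymPy sketch verifies it symbolically for small p and q, generating the Hermite polynomials directly from the Rodrigues formula (3.1).

# Symbolic check of E[H_p(N) H_q(N)] = delta_{p,q} * p! for N ~ N(0,1),
# with H_q defined through the Rodrigues formula (3.1).
import sympy as sp

x = sp.symbols('x')
gauss = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)

def H(q):
    return sp.simplify((-1)**q * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2 / 2), x, q))

for p in range(5):
    for q in range(5):
        moment = sp.integrate(H(p) * H(q) * gauss, (x, -sp.oo, sp.oo))
        target = sp.factorial(p) if p == q else 0
        assert sp.simplify(moment - target) == 0
print("orthogonality verified for p, q = 0, ..., 4")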
It is well known that ${L^{2}}(\Omega )$ can be decomposed into the infinite orthogonal sum of the spaces ${C_{q}}$: this means that any square-integrable random variable $F\in {L^{2}}(\Omega )$ admits the following Wiener–Itô chaotic expansion
(3.2)
\[ F={\sum \limits_{q=0}^{\infty }}{I_{q}}({f_{q}}),\]
where the series converges in ${L^{2}}(\Omega )$, ${f_{0}}=E[F]$, and the kernels ${f_{q}}\in {\mathfrak{H}^{\odot q}}$, $q\ge 1$, are uniquely determined by F. For every $q\ge 0$, we denote by ${J_{q}}$ the orthogonal projection operator on the qth Wiener chaos. In particular, if $F\in {L^{2}}(\Omega )$ has the form (3.2), then ${J_{q}}F={I_{q}}({f_{q}})$ for every $q\ge 0$.
Let $\{{e_{k}},\hspace{0.1667em}k\ge 1\}$ be a complete orthonormal system in $\mathfrak{H}$. Given $f\in {\mathfrak{H}^{\odot p}}$ and $g\in {\mathfrak{H}^{\odot q}}$, for every $r=0,\dots ,p\wedge q$, the contraction of f and g of order r is the element of ${\mathfrak{H}^{\otimes (p+q-2r)}}$ defined by
(3.3)
\[ f{\otimes _{r}}g={\sum \limits_{{i_{1}},\dots ,{i_{r}}=1}^{\infty }}{\langle f,{e_{{i_{1}}}}\otimes \cdots \otimes {e_{{i_{r}}}}\rangle _{{\mathfrak{H}^{\otimes r}}}}\otimes {\langle g,{e_{{i_{1}}}}\otimes \cdots \otimes {e_{{i_{r}}}}\rangle _{{\mathfrak{H}^{\otimes r}}}}.\]
Notice that the definition of $f{\otimes _{r}}g$ does not depend on the particular choice of $\{{e_{k}},\hspace{0.1667em}k\ge 1\}$, and that $f{\otimes _{r}}g$ is not necessarily symmetric; we denote its symmetrization by $f{\widetilde{\otimes }_{r}}g\in {\mathfrak{H}^{\odot (p+q-2r)}}$. Moreover, $f{\otimes _{0}}g=f\otimes g$ equals the tensor product of f and g while, for $p=q$, $f{\otimes _{q}}g={\langle f,g\rangle _{{\mathfrak{H}^{\otimes q}}}}$. When $\mathfrak{H}={L^{2}}(A,\mathcal{A},\mu )$ and $r=1,\dots ,p\wedge q$, the contraction $f{\otimes _{r}}g$ is the element of ${L^{2}}({\mu ^{p+q-2r}})$ given by
(3.4)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle f{\otimes _{r}}g({x_{1}},\dots ,{x_{p+q-2r}})\\ {} & & \displaystyle ={\int _{{A^{r}}}}f({x_{1}},\dots ,{x_{p-r}},{a_{1}},\dots ,{a_{r}})\times \\ {} & & \displaystyle \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\times g({x_{p-r+1}},\dots ,{x_{p+q-2r}},{a_{1}},\dots ,{a_{r}})d\mu ({a_{1}})...d\mu ({a_{r}}).\end{array}\]
It is a standard fact of Gaussian analysis that the following multiplication formula holds: if $f\in {\mathfrak{H}^{\odot p}}$ and $g\in {\mathfrak{H}^{\odot q}}$, then
(3.5)
\[ {I_{p}}(f){I_{q}}(g)={\sum \limits_{r=0}^{p\wedge q}}r!\left(\genfrac{}{}{0pt}{}{p}{r}\right)\left(\genfrac{}{}{0pt}{}{q}{r}\right){I_{p+q-2r}}(f{\widetilde{\otimes }_{r}}g).\]
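In the special case $f={h^{\otimes p}}$, $g={h^{\otimes q}}$ with ${\left\| h\right\| _{\mathfrak{H}}}=1$, all contractions reduce to tensor powers of h and (3.5) becomes the classical Hermite linearization identity ${H_{p}}(x){H_{q}}(x)={\textstyle\sum _{r=0}^{p\wedge q}}r!\binom{p}{r}\binom{q}{r}{H_{p+q-2r}}(x)$; the SymPy sketch below checks this polynomial identity for small p and q.

# Symbolic check of the Hermite product formula, i.e. of the multiplication
# formula (3.5) specialized to f = h^{(x)p}, g = h^{(x)q} with ||h|| = 1.
import sympy as sp

x = sp.symbols('x')

def H(q):
    # probabilists' Hermite polynomial, Rodrigues formula (3.1)
    return sp.simplify((-1)**q * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2 / 2), x, q))

for p in range(1, 5):
    for q in range(1, 5):
        rhs = sum(sp.factorial(r) * sp.binomial(p, r) * sp.binomial(q, r) * H(p + q - 2*r)
                  for r in range(min(p, q) + 1))
        assert sp.expand(H(p) * H(q) - rhs) == 0
print("product formula verified for p, q = 1, ..., 4")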
We now introduce some basic elements of the Malliavin calculus with respect to the isonormal Gaussian process W.
Let $\mathcal{S}$ be the set of all cylindrical random variables of the form
(3.6)
\[ F=g\left(W({\varphi _{1}}),\dots ,W({\varphi _{n}})\right),\]
where $n\ge 1$, $g:{\mathbb{R}^{n}}\to \mathbb{R}$ is an infinitely differentiable function such that its partial derivatives have polynomial growth, and ${\varphi _{i}}\in \mathfrak{H}$, $i=1,\dots ,n$. The Malliavin derivative of F with respect to W is the element of ${L^{2}}(\Omega ,\mathfrak{H})$ defined as
\[ DF\hspace{0.2778em}=\hspace{0.2778em}{\sum \limits_{i=1}^{n}}\frac{\partial g}{\partial {x_{i}}}\left(W({\varphi _{1}}),\dots ,W({\varphi _{n}})\right){\varphi _{i}}.\]
In particular, $DW(h)=h$ for every $h\in \mathfrak{H}$. By iteration, one can define the mth derivative ${D^{m}}F$, which is an element of ${L^{2}}(\Omega ,{\mathfrak{H}^{\odot m}})$, for every $m\ge 2$. For $m\ge 1$ and $p\ge 1$, ${\mathbb{D}^{m,p}}$ denotes the closure of $\mathcal{S}$ with respect to the norm $\| \cdot {\| _{m,p}}$, defined by the relation
\[ \| F{\| _{m,p}^{p}}\hspace{0.2778em}=\hspace{0.2778em}\mathbb{E}\left[|F{|^{p}}\right]+{\sum \limits_{i=1}^{m}}\mathbb{E}\left[\| {D^{i}}F{\| _{{\mathfrak{H}^{\otimes i}}}^{p}}\right].\]
We often use the (canonical) notation ${\mathbb{D}^{\infty }}:={\textstyle\bigcap _{m\ge 1}}{\textstyle\bigcap _{p\ge 1}}{\mathbb{D}^{m,p}}$. For example, it is a well-known fact that any random variable F that is a finite linear combination of multiple Wiener–Itô integrals is an element of ${\mathbb{D}^{\infty }}$. The Malliavin derivative D obeys the following chain rule. If $\phi :{\mathbb{R}^{n}}\to \mathbb{R}$ is continuously differentiable with bounded partial derivatives and if $F=({F_{1}},\dots ,{F_{n}})$ is a vector of elements of ${\mathbb{D}^{1,2}}$, then $\phi (F)\in {\mathbb{D}^{1,2}}$ and
(3.7)
\[ D\hspace{0.1667em}\phi (F)={\sum \limits_{i=1}^{n}}\frac{\partial \phi }{\partial {x_{i}}}(F)D{F_{i}}.\]
Note also that a random variable F as in (3.2) is in ${\mathbb{D}^{1,2}}$ if and only if ${\textstyle\sum _{q=1}^{\infty }}q\| {J_{q}}F{\| _{{L^{2}}(\Omega )}^{2}}<\infty $ and in this case one has the following explicit relation:
\[ \mathbb{E}\left[\| DF{\| _{\mathfrak{H}}^{2}}\right]={\sum \limits_{q=1}^{\infty }}q\| {J_{q}}F{\| _{{L^{2}}(\Omega )}^{2}}.\]
If $\mathfrak{H}={L^{2}}(A,\mathcal{A},\mu )$ (with μ nonatomic), then the derivative of a random variable F as in (3.2) can be identified with the element of ${L^{2}}(A\times \Omega )$ given by
(3.8)
\[ {D_{t}}F={\sum \limits_{q=1}^{\infty }}q{I_{q-1}}\left({f_{q}}(\cdot ,t)\right),\hspace{1em}t\in A.\]
The operator L, defined as $\mathbf{L}={\textstyle\sum _{q=0}^{\infty }}-q{J_{q}}$, is the infinitesimal generator of the Ornstein–Uhlenbeck semigroup. The domain of L is
\[ \mathrm{Dom}\mathbf{L}=\{F\in {L^{2}}(\Omega ):{\sum \limits_{q=1}^{\infty }}{q^{2}}{\left\| {J_{q}}F\right\| _{{L^{2}}(\Omega )}^{2}}<\infty \}={\mathbb{D}^{2,2}}\text{.}\]
For any $F\in {L^{2}}(\Omega )$, we define ${\mathbf{L}^{-1}}F={\textstyle\sum _{q=1}^{\infty }}-\frac{1}{q}{J_{q}}(F)$. The operator ${\mathbf{L}^{-1}}$ is called the pseudoinverse of L. Indeed, for any $F\in {L^{2}}(\Omega )$, we have that ${\mathbf{L}^{-1}}F\in \mathrm{Dom}\mathbf{L}={\mathbb{D}^{2,2}}$, and
(3.9)
\[ \mathbf{L}{\mathbf{L}^{-1}}F=F-\mathbb{E}(F).\]
The following infinite dimensional Malliavin integration by parts formula plays a crucial role in the analysis (see, for instance, [66, Section 2.9] for a proof).
Lemma 3.1.
Suppose that $F\in {\mathbb{D}^{1,2}}$ and $G\in {L^{2}}(\Omega )$. Then, ${\mathbf{L}^{-1}}G\in {\mathbb{D}^{2,2}}$ and
(3.10)
\[ \mathbb{E}[FG]=\mathbb{E}[F]\mathbb{E}[G]+\mathbb{E}[{\langle DF,-D{\mathbf{L}^{-1}}G\rangle _{\mathfrak{H}}}].\]
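Formula (3.10) can be checked by hand in the simplest isonormal setting $\mathfrak{H}=\mathbb{R}$, $W(h)=hN$ with $N\sim \mathcal{N}(0,1)$, where the qth chaos is spanned by ${H_{q}}(N)$. Taking $F={H_{2}}(N)={N^{2}}-1$ and $G={N^{4}}={H_{4}}(N)+6{H_{2}}(N)+3$, one gets $DF=2N$ and $-D{\mathbf{L}^{-1}}G={H_{3}}(N)+6{H_{1}}(N)={N^{3}}+3N$, so that both sides of (3.10) equal 12; the following Monte Carlo sketch (assuming NumPy, with these hand-computed derivatives) confirms this numerically.

# Monte Carlo check of the integration by parts formula (3.10) for
# F = N^2 - 1 and G = N^4 in the one-dimensional isonormal setting H = R.
# The expressions for DF and -D L^{-1} G are computed by hand from the
# chaos expansion of G (see the lead-in text); both sides equal 12.
import numpy as np

rng = np.random.default_rng(2)
N = rng.standard_normal(2_000_000)

F, G = N**2 - 1, N**4
DF = 2 * N                     # D H_2(N) = 2 H_1(N)
minus_DLinvG = N**3 + 3 * N    # -D L^{-1} G = H_3(N) + 6 H_1(N)

lhs = np.mean(F * G)                                           # E[F G] = 12
rhs = np.mean(F) * np.mean(G) + np.mean(DF * minus_DLinvG)     # E[F]E[G] + E[<DF, -DL^{-1}G>]
print(lhs, rhs)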
Inspired by the Malliavin integration by parts formula appearing in Lemma 3.1, we now introduce a class of iterated Gamma operators. We will need such operators in Section 6.
Definition 3.2 (See Chapter 8 in [66]).
Let $F\in {\mathbb{D}^{\infty }}$; the sequence of random variables ${\{{\Gamma _{i}}(F)\}_{i\ge 0}}\subset {\mathbb{D}^{\infty }}$ is recursively defined as follows. Set ${\Gamma _{0}}(F)=F$ and, for every $i\ge 1$,
\[ {\Gamma _{i}}(F)={\langle DF,-D{\mathbf{L}^{-1}}{\Gamma _{i-1}}(F)\rangle _{\mathfrak{H}}}.\]
Definition 3.3 (Cumulants).
Let F be a real-valued random variable such that $\mathbb{E}|F{|^{m}}<\infty $ for some integer $m\ge 1$, and write ${\varphi _{F}}(t)=\mathbb{E}[{e^{itF}}]$, $t\in \mathbb{R}$, for the characteristic function of F. Then, for $r=1,\dots ,m$, the rth cumulant of F, denoted by ${\kappa _{r}}(F)$, is given by
(3.11)
\[ {\kappa _{r}}(F)={(-i)^{r}}\frac{{d^{r}}}{d{t^{r}}}\log {\varphi _{F}}(t){|_{t=0}}.\]
Remark 3.4.
When $\mathbb{E}(F)=0$, then the first four cumulants of F are the following: ${\kappa _{1}}(F)=\mathbb{E}[F]=0$, ${\kappa _{2}}(F)=\mathbb{E}[{F^{2}}]=\operatorname{Var}(F)$, ${\kappa _{3}}(F)=\mathbb{E}[{F^{3}}]$, and
\[ {\kappa _{4}}(F)=\mathbb{E}[{F^{4}}]-3\mathbb{E}{[{F^{2}}]^{2}}.\]
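As a numerical illustration of these identities, the Python sketch below (assuming SciPy, whose k-statistics are unbiased estimators of cumulants) estimates the first four cumulants of the centered second-chaos variable $F={N^{2}}-1$, for which ${\kappa _{2}}(F)=2$, ${\kappa _{3}}(F)=\mathbb{E}[{F^{3}}]=8$ and ${\kappa _{4}}(F)=\mathbb{E}[{F^{4}}]-3\mathbb{E}{[{F^{2}}]^{2}}=60-12=48$.

# Sample cumulants of F = N^2 - 1 via scipy.stats.kstat, compared with the
# moment formulas of Remark 3.4: (kappa_1, ..., kappa_4) = (0, 2, 8, 48).
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(3)
F = rng.standard_normal(2_000_000)**2 - 1

print([round(kstat(F, r), 1) for r in (1, 2, 3, 4)])       # approx [0.0, 2.0, 8.0, 48.0]
print(round(np.mean(F**4) - 3 * np.mean(F**2)**2, 1))      # kappa_4 via Remark 3.4, approx 48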
The following statement explicitly connects the expectation of the random variables ${\Gamma _{r}}(F)$ to the cumulants of F.
Proposition 3.5 (See Chapter 8 in [66]).
Let $F\in {\mathbb{D}^{\infty }}$. Then ${\kappa _{r}}(F)=(r-1)!\mathbb{E}[{\Gamma _{r-1}}(F)]$ for every $r\ge 1$.
As announced, in the next subsection we show how to use the above Malliavin machinery in order to study the Stein bounds presented in Section 2.

3.2 Connection with Stein’s method

Let $F\in {\mathbb{D}^{1,2}}$ with $\mathbb{E}[F]=0$ and $\mathbb{E}[{F^{2}}]=1$. Take a ${C^{1}}$ function f such that $\| f{\| _{\infty }}\le \sqrt{\frac{\pi }{2}}$ and $\| {f^{\prime }}{\| _{\infty }}\le 2$. Using the Malliavin integration by parts formula stated in Lemma 3.1 together with the chain rule (3.7), we can write
(3.12)
\[ \begin{aligned}{}\Big|\mathbb{E}[{f^{\prime }}(F)-Ff(F)]\Big|& =\Big|\mathbb{E}[{f^{\prime }}(F)\left(1-{\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}\right)]\Big|\\ {} & \le 2\hspace{0.1667em}\mathbb{E}\Big|1-{\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}\Big|.\end{aligned}\]
If we furthermore assume that $F\in {\mathbb{D}^{1,4}}$, then the random variable $1-{\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}$ is square-integrable, and the Cauchy–Schwarz inequality yields
\[ \Big|\mathbb{E}[{f^{\prime }}(F)-Ff(F)]\Big|\le 2\sqrt{\operatorname{Var}\left({\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}\right)}.\]
Note that above we used the fact that $\mathbb{E}[{\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}]=\mathbb{E}[{F^{2}}]=1$. The above arguments combined with Lemma 2.1 immediately yield the next crucial statement, originally proved in [64].
Theorem 3.6.
Let $F\in {\mathbb{D}^{1,2}}$ be a generic random element with $\mathbb{E}[F]=0$ and $\mathbb{E}[{F^{2}}]=1$. Let $N\sim \mathcal{N}(0,1)$. Assume further that F has a density with respect to the Lebesgue measure. Then,
\[ {d_{TV}}(F,N)\le 2\hspace{0.1667em}\mathbb{E}\Big|1-{\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}\Big|.\]
Moreover, assume that $F\in {\mathbb{D}^{1,4}}$, then
\[ {d_{TV}}(F,N)\le 2\sqrt{\operatorname{Var}\left({\langle DF,-D{\mathbf{L}^{-1}}F\rangle _{\mathfrak{H}}}\right)}.\]
In the particular case where $F={I_{q}}(f)$ belongs to the Wiener chaos of order $q\ge 2$, one has
(3.13)
\[ {d_{TV}}(F,N)\le 2\sqrt{\frac{q-1}{3q}\Big(\mathbb{E}[{F^{4}}]-3\Big)}.\]
Note that, by virtue of Lemma 2.1, similar bounds can be immediately obtained for the Wasserstein distance ${d_{W}}$ (and many more – see [66, Chapter 5]). In particular, the previous statement allows one to recover the following central limit theorem for chaotic random variables, first proved in [80].
Corollary 3.7 (Fourth Moment Theorem).
Let ${\{{F_{n}}\}_{n\ge 1}}={\{{I_{q}}({f_{n}})\}_{n\ge 1}}$ be a sequence of random elements in a fixed Wiener chaos of order $q\ge 2$ such that $\mathbb{E}[{F_{n}^{2}}]=q!\| {f_{n}}{\| ^{2}}=1$. Assume that $N\sim \mathcal{N}(0,1)$. Then, as n tends to infinity, the following assertions are equivalent.
(I)
${F_{n}}\longrightarrow N$ in distribution.
(II)
$\mathbb{E}[{F_{n}^{4}}]\longrightarrow 3\hspace{0.1667em}(=\mathbb{E}[{N^{4}}])$.
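The content of Theorem 3.6 and Corollary 3.7 can be visualized on the explicit second-chaos sequence ${F_{n}}={(2n)^{-1/2}}{\textstyle\sum _{i=1}^{n}}({N_{i}^{2}}-1)$, with ${N_{1}},{N_{2}},\dots $ i.i.d. standard Gaussian: one has $\mathbb{E}[{F_{n}^{2}}]=1$ and $\mathbb{E}[{F_{n}^{4}}]-3=12/n$, so that the bound (3.13) with $q=2$ gives ${d_{TV}}({F_{n}},N)\le 2\sqrt{2/n}$. The following Monte Carlo sketch (assuming NumPy) estimates the fourth moments and the corresponding bounds.

# Monte Carlo illustration of the fourth moment bound (3.13) for the second-chaos
# sequence F_n = (2n)^{-1/2} sum_{i<=n} (N_i^2 - 1), for which E[F_n^4] - 3 = 12/n.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 200_000
for n in (2, 10, 100):
    N = rng.standard_normal((n_samples, n))
    F = (N**2 - 1).sum(axis=1) / np.sqrt(2 * n)
    m4 = np.mean(F**4)
    bound = 2 * np.sqrt(max(m4 - 3, 0.0) / 6)             # (3.13) with q = 2
    print(f"n={n:4d}  E[F^4] ~ {m4:6.3f}  (theory {3 + 12/n:6.3f})  d_TV bound ~ {bound:.3f}")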
As demonstrated by the webpage [1], the ‘fourth moment theorem’ stated in Corollary 3.7 has been the starting point of a very active line of research, composed of several hundred papers connected with disparate applications. In the next section, we will implicitly provide a general version of Theorem 3.6 (with the 1-Wasserstein distance replacing the total variation distance), whose proof relies only on the spectral properties of the Ornstein–Uhlenbeck generator L and on the so-called Γ calculus (see, e.g., [18]).

4 The Markov triple approach

In this section, we introduce a general framework for studying and generalizing the fourth moment phenomenon appearing in the statement of Corollary 3.7. The forthcoming approach was first introduced in [52] by M. Ledoux, and then further developed and generalised in [5, 8].

4.1 Diffusive fourth moment structures

We start with the definition of our general setup.
Definition 4.1.
A diffusive fourth moment structure is a triple $(E,\mu ,\mathbf{L})$ such that:
  • (a) $(E,\mu )$ is a probability space;
  • (b) L is a symmetric unbounded operator defined on some dense subset of ${L^{2}}(E,\mu )$, that we denote by $\mathcal{D}(\mathbf{L})$ (the set $\mathcal{D}(\mathbf{L})$ is called the domain of L);
  • (c) the associated carré-du-champ operator Γ is a symmetric bilinear operator, and is defined by
    (4.1)
    \[ 2\Gamma \left[X,Y\right]:=\mathbf{L}\left[XY\right]-X\mathbf{L}\left[Y\right]-Y\mathbf{L}\left[X\right];\]
  • (d) the operator L is diffusive, meaning that, for any ${\mathcal{C}_{b}^{2}}$ function $\varphi :\mathbb{R}\to \mathbb{R}$, any $X\in \mathcal{D}(\mathbf{L})$, it holds that $\varphi (X)\in \mathcal{D}(\mathbf{L})$ and
    (4.2)
    \[ \mathbf{L}\left[\varphi (X)\right]={\varphi ^{\prime }}(X)\mathbf{L}[X]+{\varphi ^{\prime\prime }}(X)\Gamma [X,X];\]
    Note that $\mathbf{L}[1]=0$ (by taking $\varphi =1\in {\mathcal{C}_{b}^{2}}$). The latter property is equivalent to saying that the operator Γ satisfies the chain rule:
    \[ \Gamma \left[\varphi (X),X\right]={\varphi ^{\prime }}(X)\Gamma [X,X];\]
  • (e) the operator $-\mathbf{L}$ diagonalizes the space ${L^{2}}(E,\mu )$ with $\textbf{sp}(-\mathbf{L})=\mathbb{N}$, meaning that
    \[ {L^{2}}(E,\mu )={\underset{i=0}{\overset{\infty }{\bigoplus }}}\textbf{Ker}(\mathbf{L}+i\textbf{Id});\]
  • (f) for any pair of eigenfunctions $(X,Y)$ of the operator $-\mathbf{L}$ associated with the eigenvalues $({p_{1}},{p_{2}})$,
    (4.3)
    \[ XY\in \underset{i\le {p_{1}}+{p_{2}}}{\bigoplus }\textbf{Ker}\left(\mathbf{L}+i\textbf{Id}\right).\]
In this context, we usually write $\Gamma [X]$ instead of $\Gamma [X,X]$, and $\mathbb{E}$ denotes integration against the probability measure μ.
Remark 4.2.
  • (1) Property (d), together with the symmetry of the operator L, determines a functional calculus through the following fundamental integration by parts formula: for any X, Y in $\mathcal{D}(\mathbf{L})$ and $\varphi \in {\mathcal{C}_{b}^{2}}$,
    (4.4)
    \[ \mathbb{E}\left[{\varphi ^{\prime }}(X)\Gamma \left[X,Y\right]\right]=-\mathbb{E}\left[\varphi (X)\mathbf{L}\left[Y\right]\right]=-\mathbb{E}\left[Y\mathbf{L}\left[\varphi (X)\right]\right].\]
  • (2) The results in this section can be stated under the weaker assumption that $\textbf{sp}(-\mathbf{L})=\{0={\lambda _{0}}<{\lambda _{1}}<\cdots <{\lambda _{k}}<\cdots \hspace{0.1667em}\}\subset {\mathbb{R}_{+}}$ is discrete. However, to keep a transparent presentation, we restrict ourselves to the assumption $\textbf{sp}(-\mathbf{L})=\mathbb{N}$. The reader is referred to [5] for further details.
  • (3) We point out that, by a recursive argument, assumption (4.3) yields that for any $X\in \textbf{Ker}(\mathbf{L}+p\textbf{Id})$ and any polynomial P of degree m, we have
    (4.5)
    \[ P(X)\in \underset{i\le mp}{\bigoplus }\textbf{Ker}\left(\mathbf{L}+i\textbf{Id}\right).\]
  • (4) The eigenspaces of a diffusive fourth moment structure are hypercontractive (see [10] for details and sufficient conditions), that is, there exists a constant $C(M,k)$ such that for any $X\in {\textstyle\bigoplus _{i\le M}}\textbf{Ker}\left(\mathbf{L}+i\textbf{Id}\right)$:
    (4.6)
    \[ \mathbb{E}({X^{2k}})\le C(M,k)\hspace{2.5pt}{\big(\mathbb{E}[{X^{2}}]\big)^{k}}.\]
  • (5) Property (f) in the previous definition roughly implies that eigenfunctions of L in a diffusive fourth moment structure behave like orthogonal polynomials with respect to multiplication.
For further details on our setup, we refer the reader to [18] as well as [5, 8]. The next example describes some diffusive fourth moment structures. The reader can consult [8, Section 2.2] for two classical methods for building further diffusive fourth moment structures starting from known ones.
Example 4.3.
  • (a) Finite-Dimensional Gaussian Structures: Let $d\ge 1$ and denote by ${\gamma _{d}}$ the d-dimensional standard Gaussian measure on ${\mathbb{R}^{d}}$. It is well known (see, for example, [18]), that ${\gamma _{d}}$ is the invariant measure of the Ornstein–Uhlenbeck generator, defined for any test function φ by
    (4.7)
    \[ \mathbf{L}\varphi (x)=\Delta \varphi -{\sum \limits_{i=1}^{d}}{x_{i}}{\partial _{i}}\varphi (x).\]
    Its spectrum is given by $-{\mathbb{N}_{0}}$ and the eigenspaces are of the form
    \[ \textbf{Ker}(\mathbf{L}+k\textbf{Id})=\left\{\sum \limits_{{i_{1}}+{i_{2}}+\cdots +{i_{d}}=k}\alpha ({i_{1}},\dots ,{i_{d}}){\prod \limits_{j=1}^{d}}{H_{{i_{j}}}}({x_{j}})\right\},\]
    where ${H_{n}}$ denotes the Hermite polynomial of order n. Since the eigenfunctions of L are multivariate polynomials, it is straightforward to see that assumption (f) is also verified (a symbolic check of the corresponding eigenvalue relations in dimension one is sketched right after this example).
  • (b) Wiener space and isonormal processes: Letting $d\to \infty $ in the setup of the previous item (a), one recovers the infinite-dimensional generator of the Ornstein–Uhlenbeck semigroup for isonormal processes, as defined in Section 3.1. It is easily verified, in particular by using (3.5), that $(\Omega ,\mathbb{P},\mathbf{L})$ is also a diffusive fourth moment structure.
  • (c) Laguerre Structure: Let $\nu \ge -1$, and ${\pi _{1,\nu }}(dx)={x^{\nu -1}}\frac{{\mathrm{e}^{-x}}}{\Gamma (\nu )}{\textbf{1}_{(0,\infty )}}\mathrm{d}x$ be the Gamma distribution with parameter ν on ${\mathbb{R}_{+}}$. The associated Laguerre generator is defined for any test function φ (in dimension one) by
    (4.8)
    \[ {\mathbf{L}_{1,\nu }}(\varphi )=x{\varphi ^{\prime\prime }}(x)+(\nu +1-x){\varphi ^{\prime }}(x).\]
    By a classical tensorization procedure, we obtain the Laguerre generator in dimension d associated with the measure
    \[ {\pi _{d,\nu }}(\mathrm{d}x)={\pi _{1,\nu }}(\mathrm{d}{x_{1}}){\pi _{1,\nu }}(\mathrm{d}{x_{2}})\cdots {\pi _{1,\nu }}(\mathrm{d}{x_{d}}),\]
    where $x=({x_{1}},{x_{2}},\dots ,{x_{d}})$:
    (4.9)
    \[ {\mathbf{L}_{d,\nu }}(\varphi )={\sum \limits_{i=1}^{d}}\Big({x_{i}}{\partial _{i,i}}\varphi +(\nu +1-{x_{i}}){\partial _{i}}\varphi \Big).\]
    It is also classical that (see, for example, [18]) the spectrum of ${\mathbf{L}_{d,\nu }}$ is given by $-{\mathbb{N}_{0}}$ and moreover that
    (4.10)
    \[ \textbf{Ker}({\mathbf{L}_{d,\nu }}+k\textbf{Id})=\left\{\sum \limits_{{i_{1}}+{i_{2}}+\cdots +{i_{d}}=k}\alpha ({i_{1}},\dots ,{i_{d}}){\prod \limits_{j=1}^{d}}{L_{{i_{j}}}^{(\nu )}}({x_{j}})\right\},\]
    where ${L_{n}^{(\nu )}}$ stands for the Laguerre polynomial of order n with parameter ν which is defined by
    \[ {L_{n}^{(\nu )}}(x)=\frac{{x^{-\nu }}{e^{x}}}{n!}\frac{{d^{n}}}{d{x^{n}}}\left({e^{-x}}{x^{n+\nu }}\right).\]
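As announced in item (a) of Example 4.3, the eigenvalue relations underlying items (a) and (c) can be checked symbolically in dimension one. The SymPy sketch below generates the Hermite and Laguerre polynomials from their Rodrigues-type formulas and verifies that $\mathbf{L}{H_{k}}=-k{H_{k}}$ for the generator (4.7) and ${\mathbf{L}_{1,\nu }}{L_{k}^{(\nu )}}=-k{L_{k}^{(\nu )}}$ for the generator (4.8); the value $\nu =3/2$ used below is an arbitrary test choice.

# Symbolic check, in dimension d = 1, of the eigenvalue relations of Example 4.3:
# the Ornstein-Uhlenbeck generator (4.7) on Hermite polynomials, and the Laguerre
# generator (4.8) on generalized Laguerre polynomials.
import sympy as sp

x = sp.symbols('x', positive=True)
nu = sp.Rational(3, 2)                      # arbitrary test value of the parameter

def hermite(k):                             # Rodrigues formula (3.1)
    return sp.simplify((-1)**k * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2 / 2), x, k))

def laguerre(k, nu):                        # Rodrigues-type formula of Example 4.3-(c)
    return sp.simplify(x**(-nu) * sp.exp(x) / sp.factorial(k)
                       * sp.diff(sp.exp(-x) * x**(k + nu), x, k))

L_ou = lambda phi: sp.diff(phi, x, 2) - x * sp.diff(phi, x)                    # (4.7), d = 1
L_lag = lambda phi: x * sp.diff(phi, x, 2) + (nu + 1 - x) * sp.diff(phi, x)    # (4.8)

for k in range(1, 6):
    Hk, Lk = hermite(k), laguerre(k, nu)
    assert sp.simplify(L_ou(Hk) + k * Hk) == 0
    assert sp.simplify(L_lag(Lk) + k * Lk) == 0
print("eigenvalue relations verified for k = 1, ..., 5")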
In the next subsection, we demonstrate how a diffusive fourth moment structure can be combined with the tools of Γ calculus, in order to deduce substantial generalizations of Theorem 3.6.

4.2 Connection with Γ calculus

Throughout this section, we assume that $(E,\mu ,\mathbf{L})$ is a diffusive fourth moment structure. Our principal aim is to prove a fourth moment criterion analogous to that of (3.13) for eigenfunctions of the operator L. To do this, we assume that $X\in \textbf{Ker}(\mathbf{L}+q\textbf{Id})$ for some $q\ge 1$ with $\mathbb{E}[{X^{2}}]=1$. The arguments implemented in the proof will clearly demonstrate that requirements (d) and (f) in Definition 4.1 are the most crucial elements in order to establish our estimates.
Proposition 4.4.
Let $q\ge 1$. Assume that $X\in \textbf{\textit{Ker}}(\mathbf{L}+q\textbf{\textit{Id}})$ with $\mathbb{E}[{X^{2}}]=1$. Then,
\[ \operatorname{Var}\left(\Gamma [X]\right)\le \frac{{q^{2}}}{3}\left\{\mathbb{E}[{X^{4}}]-3\right\}.\]
Proof.
First note that, by using the integration by parts formula (4.4), we have $\mathbb{E}[\Gamma [X]]=-\mathbb{E}[X\mathbf{L}X]=q\mathbb{E}[{X^{2}}]=q$. Secondly, by using the definition of the carré-du-champ operator Γ and the fact that $\mathbf{L}X=-qX$, one easily verifies that
\[ \Gamma [X]-q=\frac{1}{2}\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1).\]
Next, taking into account properties (e) and (f), we can conclude that
\[ {X^{2}}-1\in \underset{1\le i\le 2q}{\bigoplus }\textbf{Ker}\left(\mathbf{L}+i\textbf{Id}\right).\]
For the rest of the proof, we use the notation ${J_{i}}$ to denote the projection of a square-integrable element X onto the eigenspace $\textbf{Ker}\left(\mathbf{L}+i\textbf{Id}\right)$. Now,
\[\begin{aligned}{}& \operatorname{Var}\left(\Gamma [X]\right)\\ {} & =\mathbb{E}\left[{\left(\Gamma [X]-q\right)^{2}}\right]=\frac{1}{4}\mathbb{E}\left[\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\times \left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\right]\\ {} & =\frac{1}{4}\mathbb{E}\left[\mathbf{L}({X^{2}}-1)\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\right]\hspace{-0.1667em}+\hspace{-0.1667em}\frac{q}{2}\mathbb{E}\left[({X^{2}}-1)\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\right]\\ {} & =\frac{1}{4}\hspace{-0.1667em}\hspace{-0.1667em}\sum \limits_{1\le i\le 2q}(-i)(2q-i)\mathbb{E}\left[{\left({J_{i}}({X^{2}}-1)\right)^{2}}\right]\hspace{-0.1667em}+\hspace{-0.1667em}\frac{q}{2}\mathbb{E}\left[({X^{2}}-1)\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\right]\\ {} & \le \frac{q}{2}\mathbb{E}\left[({X^{2}}-1)\left(\mathbf{L}+2q\textbf{Id}\right)({X^{2}}-1)\right]\\ {} & =q\mathbb{E}\left[({X^{2}}-1)(\Gamma [X]-q)\right]=q\mathbb{E}\left[({X^{2}}-1)\Gamma [X]\right]\\ {} & =q\mathbb{E}\left[\Gamma [\frac{{X^{3}}}{3}-X,X]\right]=-q\mathbb{E}\left[\left(\frac{{X^{3}}}{3}-X\right)\mathbf{L}X\right]\\ {} & ={q^{2}}\mathbb{E}\left[X\left(\frac{{X^{3}}}{3}-X\right)\right]={q^{2}}\mathbb{E}\left[\frac{{X^{4}}}{3}-{X^{2}}\right]\\ {} & =\frac{{q^{2}}}{3}\left\{\mathbb{E}[{X^{4}}]-3\right\},\end{aligned}\]
thus yielding the desired conclusion.  □
In order to avoid some technicalities, we now present a quantitative bound in the 1-Wasserstein distance ${d_{W}}$ (and not in the more challenging total variation distance ${d_{TV}}$) for eigenfunctions of the operator L. This requires adapting the Stein’s method machinery presented in Section 2 to our setting, as a direct application of the integration by parts formula (4.4). The arguments below are borrowed in particular from [52, Proposition 1].
Proposition 4.5.
Let $(E,\mu ,\mathbf{L})$ be a diffusive fourth moment structure. Assume that $X\in \textbf{\textit{Ker}}(\mathbf{L}+q\textbf{\textit{Id}})$ for some $q\ge 1$ with $\mathbb{E}[{X^{2}}]=1$. Let $N\sim \mathcal{N}(0,1)$. Then,
\[ {d_{W}}(X,N)\le \frac{2}{q}\operatorname{Var}{\left(\Gamma [X]\right)^{\frac{1}{2}}}.\]
Proof.
According to Part (b) of Lemma 2.1, it is enough to show that, for every function f of class ${C^{1}}$ on $\mathbb{R}$ with $\| {f^{\prime }}{\| _{\infty }}\le 1$ and ${f^{\prime }}\in \mathrm{Lip}(2)$,
\[ \Big|\mathbb{E}\left[{f^{\prime }}(X)-Xf(X)\right]\Big|\le \frac{2}{q}\operatorname{Var}{\left(\Gamma [X]\right)^{\frac{1}{2}}}.\]
Using the relation $\mathbf{L}X=-qX$, the diffusivity of the operator Γ and the integration by parts formula (4.4), one can write
\[\begin{aligned}{}\mathbb{E}\left[{f^{\prime }}(X)-Xf(X)\right]& =\mathbb{E}\left[{f^{\prime }}(X)+\frac{1}{q}\mathbf{L}(X)f(X)\right]=\mathbb{E}\left[{f^{\prime }}(X)-\frac{1}{q}\Gamma [f(X),X]\right]\\ {} & =\mathbb{E}\left[{f^{\prime }}(X)-\frac{1}{q}{f^{\prime }}(X)\Gamma [X]\right]\\ {} & =\frac{1}{q}\mathbb{E}\left[{f^{\prime }}(X)\left(q-\Gamma [X]\right)\right].\end{aligned}\]
Now, the claim follows at once by using the Cauchy–Schwarz inequality and noting that $\mathbb{E}[\Gamma [X]]=q\hspace{0.1667em}\mathbb{E}[{X^{2}}]=q$.  □
We end this section with the following general version of the fourth moment theorem for eigenfunctions of the operator L, obtained by combining Propositions 4.4 and 4.5.
Theorem 4.6.
Let $(E,\mu ,\mathbf{L})$ be a diffusive fourth moment structure. Assume that $X\in \textbf{\textit{Ker}}(\mathbf{L}+q\textbf{\textit{Id}})$ for some $q\ge 1$ with $\mathbb{E}[{X^{2}}]=1$. Let $N\sim \mathcal{N}(0,1)$. Then,
\[ {d_{W}}(X,N)\le \frac{2}{\sqrt{3}}\hspace{0.1667em}\sqrt{\mathbb{E}[{X^{4}}]-3}.\]
It follows that, if ${\{{X_{n}}\}_{n\ge 1}}$ is a sequence of eigenfunctions in a fixed eigenspace $\textbf{\textit{Ker}}(\mathbf{L}+q\textbf{\textit{Id}})$, where $q\ge 1$ and $\mathbb{E}[{X_{n}^{2}}]=1$ for all $n\ge 1$, then the following equivalence holds: $\mathbb{E}[{X_{n}^{4}}]\to 3$ if and only if ${X_{n}}$ converges in distribution towards the standard Gaussian random variable N.
Remark 4.7.
The fact that the condition $\mathbb{E}[{X_{n}^{4}}]\to 3$ is necessary for convergence to the Gaussian random variable is a direct consequence of the hypercontractive estimate (4.6).

4.3 Transport distances, Stein discrepancy and Γ calculus

The general setting of Markov triples, together with Γ calculus, provides a suitable framework to study functional inequalities such as the classical logarithmic Sobolev inequality or the celebrated Talagrand quadratic transportation cost inequality. For simplicity, here we restrict ourselves to the Wiener structure, with the Gaussian measure as our reference measure. The reader may consult references [53, 54] for a presentation of the general setting, and [72, 73] for some previous references connecting fourth moment theorems and entropic estimates.
Let $d\ge 1$, and let $d\gamma (x)={(2\pi )^{-\frac{d}{2}}}{e^{-\frac{|x{|^{2}}}{2}}}dx$ be the standard Gaussian measure on ${\mathbb{R}^{d}}$. Assume that $d\nu =hd\gamma $ is a probability measure on ${\mathbb{R}^{d}}$ with a (smooth) density function $h:{\mathbb{R}^{d}}\to {\mathbb{R}_{+}}$ with respect to the Gaussian measure γ. Inspired by the Gaussian integration by parts formula, we first introduce the crucial notion of a Stein kernel ${\tau _{\nu }}$ associated with the probability measure ν and, then, the concept of Stein discrepancy.
Definition 4.8.
(a) A measurable matrix-valued map ${\tau _{\nu }}$ on ${\mathbb{R}^{d}}$ is called a Stein kernel for the centered probability measure ν if for every smooth test function $\phi :{\mathbb{R}^{d}}\to \mathbb{R}$,
\[ {\int _{{\mathbb{R}^{d}}}}x\cdot \nabla \phi d\nu ={\int _{{\mathbb{R}^{d}}}}{\langle {\tau _{\nu }},\operatorname{Hess}(\phi )\rangle _{\operatorname{HS}}}d\nu ,\]
where $\operatorname{Hess}(\phi )$ stands for the Hessian of ϕ, and ${\langle \cdot ,\cdot \rangle _{\operatorname{HS}}}$ and $\| \cdot {\| _{\operatorname{HS}}}$ denote the usual Hilbert–Schmidt scalar product and norm, respectively.
(b) The Stein discrepancy of ν with respect to γ is defined as
\[ \operatorname{S}(\nu ,\gamma )=\inf {\Big({\int _{{\mathbb{R}^{d}}}}\| {\tau _{\nu }}-\textbf{Id}{\| _{\operatorname{HS}}^{2}}d\nu \Big)^{\frac{1}{2}}}\]
where the infimum is taken over all Stein kernels of ν, and takes the value $+\infty $ if a Stein kernel for ν does not exist.
We recall that the Stein kernel ${\tau _{\nu }}$ is uniquely defined in dimension $d=1$, and that uniqueness may fail in higher dimensions $d\ge 2$, see [73, Appendix A]. Also, ${\tau _{\gamma }}={\textbf{Id}_{d\times d}}$ is the identity matrix. We further refer to [40, 25] for the existence of Stein kernels in general settings. The interest of the Stein discrepancy comes, e.g., from the fact that, in dimension $d=1$ and as a simple application of Stein's method,
\[ {d_{TV}}(\nu ,\gamma )\le 2{\int _{\mathbb{R}}}|{\tau _{\nu }}-1|d\nu \le 2{\Big({\int _{\mathbb{R}}}|{\tau _{\nu }}-1{|^{2}}d\nu \Big)^{\frac{1}{2}}},\]
yielding that ${d_{TV}}(\nu ,\gamma )\le 2\operatorname{S}(\nu ,\gamma )$; see [53] for further details.
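For a concrete one-dimensional example (worked out here only for illustration), take ν to be the centered uniform distribution on $[-\sqrt{3},\sqrt{3}]$ (unit variance): solving ${\tau _{\nu }}(x)p(x)={\textstyle\int _{x}^{\infty }}yp(y)dy$ gives ${\tau _{\nu }}(x)=(3-{x^{2}})/2$, and a direct computation yields $\operatorname{S}{(\nu ,\gamma )^{2}}=1/5$. The Python sketch below (assuming SciPy) recovers these values by numerical integration; here the resulting total variation bound $2\operatorname{S}(\nu ,\gamma )\approx 0.89$ is crude, but the same quantity S enters the sharper HSI and WSH inequalities of Theorem 4.9 below.

# Stein kernel and Stein discrepancy of the centered uniform distribution on
# [-sqrt(3), sqrt(3)] with respect to the standard Gaussian (dimension 1).
import numpy as np
from scipy.integrate import quad

a = np.sqrt(3.0)
p = 1.0 / (2.0 * a)                                     # uniform density on [-a, a]
tau = lambda x: (3.0 - x**2) / 2.0                      # Stein kernel: tau(x) p(x) = int_x^a y p dy

S2, _ = quad(lambda x: (tau(x) - 1.0)**2 * p, -a, a)    # squared Stein discrepancy
print(np.sqrt(S2))          # = 1/sqrt(5) ~ 0.4472
print(2.0 * np.sqrt(S2))    # the d_TV bound 2*S(nu, gamma) ~ 0.894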
Next, we need the notion of Wasserstein distance. Let $p\ge 1$. Given two probability measures ν and μ on the Borel sets of ${\mathbb{R}^{d}}$, whose marginals have finite moments of order p, we define the p-Wasserstein distance between ν and μ as
\[ {\operatorname{W}_{p}}(\nu ,\mu )=\underset{\pi }{\inf }{\Big({\int _{{\mathbb{R}^{d}}\times {\mathbb{R}^{d}}}}|x-y{|^{p}}d\pi (x,y)\Big)^{\frac{1}{p}}}\]
where the infimum is taken over all probability measures π of ${\mathbb{R}^{d}}\times {\mathbb{R}^{d}}$ with marginals ν and μ; note that ${\mathrm{W}_{1}}={d_{W}}$, as defined in Section 2.
We recall that, for a measure $\nu =h\gamma $ with a smooth density function h on ${\mathbb{R}^{d}}$,
\[ \operatorname{H}(\nu ,\gamma ):={\int _{{\mathbb{R}^{d}}}}h\log hd\gamma ={\operatorname{Ent}_{\gamma }}(h)\]
is the relative entropy of the measure ν with respect to γ, and
\[ \operatorname{I}(\nu ,\gamma ):={\int _{{\mathbb{R}^{d}}}}\frac{|\nabla h{|^{2}}}{h}d\gamma \]
is the Fisher information of ν with respect to γ. After having established these notions, we can state two popular probabilistic/entropic functional inequalities:
  • (i) [Logarithmic Sobolev inequality]: $\hspace{2em}\operatorname{H}(\nu ,\gamma )\le \frac{1}{2}\operatorname{I}(\nu ,\gamma )$.
  • (ii) [Talagrand quadratic transportation cost inequality]:
    \[ \hspace{2em}{\operatorname{W}_{2}^{2}}(\nu ,\gamma )\le 2\operatorname{H}(\nu ,\gamma ).\]
The next theorem is borrowed from [53], and represents a significant improvement of the previous logarithmic Sobolev and Talagrand inequalities based on the use of Stein discrepancies: the techniques used in the proof are based on an interpolation argument along the Ornstein–Uhlenbeck semigroup. The theorem establishes connections between the relative entropy H, the Stein discrepancy S, the Fisher information I, and the Wasserstein distance W, customarily called the HSI and the WSH inequalities. The reader is also referred to the recent works [40, 25, 89] for related estimates of the Stein discrepancy based on the use of Poincaré inequalities, as well as on optimal transport techniques. See [15] for a further amplification of the approach of [53], with applications to the quantitative multidimensional CLT in the 2-Wasserstein distance. See also [33].
Theorem 4.9.
Let $d\nu =hd\gamma $ be a centered probability measure on ${\mathbb{R}^{d}}$ with smooth density function h with respect to the standard Gaussian measure γ.
  • (1) Then the following Gaussian HSI inequality holds:
    \[ \operatorname{H}(\nu ,\gamma )\le \frac{1}{2}{\operatorname{S}^{2}}(\nu ,\gamma )\log \Big(1+\frac{\operatorname{I}(\nu ,\gamma )}{{\operatorname{S}^{2}}(\nu ,\gamma )}\Big).\]
  • (2) Assume further that $\operatorname{S}(\nu ,\gamma )$ and $\operatorname{H}(\nu ,\gamma )$ are both positive and finite. Then, the following Gaussian WSH inequality holds:
    \[ {\operatorname{W}_{2}}(\nu ,\gamma )\le \operatorname{S}(\nu ,\gamma )\arccos \left({e^{-\frac{\operatorname{H}(\nu ,\gamma )}{{\operatorname{S}^{2}}(\nu ,\gamma )}}}\right).\]
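In the simplest nontrivial case $d=1$ and $\nu =\mathcal{N}(0,{s^{2}})$, all the quantities appearing in Theorem 4.9 are explicit: ${\tau _{\nu }}={s^{2}}$ (so that ${\operatorname{S}^{2}}(\nu ,\gamma )={({s^{2}}-1)^{2}}$), $\operatorname{H}(\nu ,\gamma )=({s^{2}}-1-\log {s^{2}})/2$, $\operatorname{I}(\nu ,\gamma )={({s^{2}}-1)^{2}}/{s^{2}}$ and ${\operatorname{W}_{2}}(\nu ,\gamma )=|s-1|$ (these elementary closed forms are recalled here only for illustration). The Python sketch below checks the HSI and WSH inequalities on a few values of ${s^{2}}$.

# Closed-form check of the HSI and WSH inequalities of Theorem 4.9 for
# nu = N(0, s^2) on the real line, where all quantities are explicit.
import numpy as np

for s2 in (0.25, 0.5, 2.0, 5.0):
    S2 = (s2 - 1.0)**2                      # squared Stein discrepancy (tau_nu = s^2)
    H = 0.5 * (s2 - 1.0 - np.log(s2))       # relative entropy H(nu, gamma)
    I = (s2 - 1.0)**2 / s2                  # Fisher information I(nu, gamma)
    W2 = abs(np.sqrt(s2) - 1.0)             # quadratic Wasserstein distance
    hsi = 0.5 * S2 * np.log1p(I / S2)
    wsh = np.sqrt(S2) * np.arccos(np.exp(-H / S2))
    print(f"s^2={s2:4}:  H={H:.4f} <= {hsi:.4f} (HSI),   W_2={W2:.4f} <= {wsh:.4f} (WSH)")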
The next subsection deals with the challenging problem of quantitative probabilistic approximations in infinite dimension.

4.4 Functional approximations and Dirichlet structures

Although Stein’s method has already been successfully used for quantifying functional limit theorems of the Donsker type (see [11, 12], as well as [34, 35, 45, 91] for a discussion of recent developments), the general problem of assessing the discrepancy between probability distributions on infinite-dimensional spaces (e.g., on classes of smooth functions or on the Skorohod space) is essentially open.
In recent years a new direction of research has emerged, where the ideas behind the Malliavin–Stein approach are applied in the framework of Dirichlet structures, in order to deal with quantitative estimates on the probabilistic approximation of Hilbert space-valued random variables. A general (and impressive!) contribution on the matter is the recent work by Bourguin and Campese [17], where the authors are able to retrieve several Hilbert space counterparts of the finite-dimensional results discussed in Section 3 above. Bourguin and Campese’s approach (whose discussion requires preliminaries that go beyond the scope of our survey) represents a substantial addition to a line of investigation initiated by L. Coutin and L. Decreusefond in the seminal works [26, 29, 27, 28, 30].
As a quick illustration, we conclude the section with two representative statements, taken from [26, 30] and [29], respectively.
Theorem 4.10 (See [26] and Section 3.2 in [30]).
Let $({N_{\lambda }}(t):t\ge 0)$ be a Poisson process with intensity λ. Then, as $\lambda \to \infty $,
\[ \left(\frac{{N_{\lambda }}(t)-\lambda t}{\sqrt{\lambda }}:t\ge 0\right)\hspace{0.2778em}\Longrightarrow \hspace{0.2778em}\left(B(t):t\ge 0\right)\]
where the convergence takes place weakly in the Skorohod space. Moreover, for every $\beta <\frac{1}{2}$ consider the so-called Besov–Liouville space ${I_{\beta ,2}}$,
\[ {I_{\beta ,2}}=\Big\{f\hspace{0.1667em}:\hspace{0.1667em}\exists \hspace{0.1667em}\dot{f},\hspace{0.1667em}f(x)=\frac{1}{\Gamma (\beta )}{\int _{0}^{x}}{(x-t)^{\beta -1}}\dot{f}(t)dt\Big\}.\]
Let ${\mu _{\beta }}$ denote the Wiener measure on the space ${I_{\beta ,2}}$, and ${Q_{\lambda }}$ be the probability measure induced by $\left({N_{\lambda }}(t):t\ge 0\right)$ . Then, there exists a constant ${c_{\beta }}$ such that
\[ \underset{\| F{\| _{{C_{b}^{2}}({I_{\beta ,2}},\mathbb{R})}}\le 1}{\sup }\Big|\int Fd{Q_{\lambda }}-\int Fd{\mu _{\beta }}\Big|\le \frac{{c_{\beta }}}{\sqrt{\lambda }}\]
where ${C_{b}^{2}}({I_{\beta ,2}},\mathbb{R})$ is the set of twice Fréchet differentiable functionals on ${I_{\beta ,2}}$.
The next result provides a rate of convergence in the Donsker theorem in the Wasserstein distance. Let $\eta \in (0,1)$, $p\ge 1$. Define the fractional Sobolev space ${W_{\eta ,p}}$ as the closure of the space ${C^{1}}$ with respect to the norm
\[ \| f{\| _{\eta ,p}^{p}}:={\int _{0}^{1}}|f(t){|^{p}}dt+{\int _{0}^{1}}{\int _{0}^{1}}\frac{|f(t)-f(s){|^{p}}}{|t-s{|^{1+p\eta }}}dsdt.\]
Also, for $n\ge 1$, define ${\mathcal{A}^{n}}=\{(k,j)\hspace{0.1667em}:\hspace{0.1667em}1\le k\le d,\hspace{0.1667em}0\le j\le n-1\}$, and let
\[ {S^{n}}=\sum \limits_{(k,j)\in {\mathcal{A}^{n}}}{X_{(k,j)}}{h_{(k,j)}^{n}},\hspace{1em}{h_{(k,j)}^{n}}(t)=\sqrt{n}{\int _{0}^{t}}{\textbf{1}_{[j/n,(j+1)/n]}}(s)ds\hspace{0.1667em}{e_{k}}\]
where $({e_{k}}:1\le k\le d)$ is the canonical basis of ${\mathbb{R}^{d}}$, and $({X_{(k,j)}},(k,j)\in {\mathcal{A}^{n}})$ is a family of independent, identically distributed, ${\mathbb{R}^{d}}$-valued random variables such that $\mathbb{E}[X]=0$ and $\mathbb{E}\| X{\| _{{\mathbb{R}^{d}}}^{2}}=1$, where X denotes a random variable with their common distribution.
Theorem 4.11 (See Section 3 in [29]).
Let $W={W_{\eta ,p}}\left([0,1],{\mathbb{R}^{d}}\right)$, and ${\mu _{\eta ,p}}$ be the law of the d-dimensional Brownian motion B on the space W. Then, there exists a constant c such that for $X\in {L^{p}}(W;{\mathbb{R}^{d}},{\mu _{\eta ,p}})$ with $p\ge 3$,
\[ \underset{F\in {\textit{Lip}_{1}}({W_{\eta ,p}})}{\sup }\Big|\mathbb{E}[F({S^{n}})]-\mathbb{E}[F(B)]\Big|\le c\hspace{0.1667em}\| X{\| _{{L^{p}}}^{p}}\hspace{0.1667em}{n^{-\frac{1}{6}+\frac{\eta }{3}}}\ln n\]
where
\[ {\textit{Lip}_{1}}({W_{\eta ,p}}):=\Big\{F:{W_{\eta ,p}}\to {\mathbb{R}^{d}}:\| F(x)-F(y){\| _{{\mathbb{R}^{d}}}}\le \| x-y{\| _{\eta ,p}},\hspace{0.1667em}\forall x,y\in {W_{\eta ,p}}\Big\}.\]
Further applications of the Malliavin–Stein techniques in the framework of Dirichlet structures are contained in [32, 31]. The next section focuses on a discrete Markov structure for which exact fourth moment estimates are available.

5 Bounds on the Poisson space: fourth moments, second-order Poincaré estimates and two-scale stabilization

We will now describe a nondiffusive Markov triple for which a fourth moment result analogous to Proposition 4.5 holds. Such a Markov triple is associated with the space of square-integrable functionals of a Poisson measure on a general pair $(Z,\mathcal{Z})$, where Z is a Polish space and $\mathcal{Z}$ is the associated Borel σ-field. The requirement that Z is Polish – together with several other assumptions adopted in the present section – is made in order to simplify the discussion; the reader is referred to [37, 38] for statements and proofs in the most general setting. See also [50, 51] for an exhaustive presentation of tools of stochastic analysis for functionals of Poisson processes, as well as [81] for a discussion of the relevance of variational techniques in the framework of modern stochastic geometry.

5.1 Setup

Let μ be a nonatomic σ-finite measure on $(Z,\mathcal{Z})$, and set ${\mathcal{Z}_{\mu }}:=\{B\in \mathcal{Z}\hspace{0.1667em}:\hspace{0.1667em}\mu (B)<\infty \}$. In what follows, we will denote by
\[ \eta =\{\eta (B)\hspace{0.1667em}:\hspace{0.1667em}B\in \mathcal{Z}\}\]
a Poisson measure on $(Z,\mathcal{Z})$ with control (or intensity) μ. This means that η is a random field indexed by the elements of $\mathcal{Z}$, satisfying the following two properties: (i) for every finite collection ${B_{1}},\dots ,{B_{m}}\in \mathcal{Z}$ of pairwise disjoint sets, the random variables $\eta ({B_{1}}),\dots ,\eta ({B_{m}})$ are stochastically independent, and (ii) for every $B\in \mathcal{Z}$, the random variable $\eta (B)$ has the Poisson distribution with mean $\mu (B)$. Whenever $B\in {\mathcal{Z}_{\mu }}$, we also write $\hat{\eta }(B):=\eta (B)-\mu (B)$ and denote by
\[ \hat{\eta }=\{\hat{\eta }(B)\hspace{0.1667em}:\hspace{0.1667em}B\in {\mathcal{Z}_{\mu }}\}\]
the compensated Poisson measure associated with η. Throughout this section, we assume that $\mathcal{F}=\sigma (\eta )$.
It is a well-known fact that one can regard the Poisson measure η as a random element taking values in the space ${\mathbf{N}_{\sigma }}={\mathbf{N}_{\sigma }}(Z)$ of all σ-finite point measures χ on $(Z,\mathcal{Z})$ that satisfy $\chi (B)\in {\mathbb{N}_{0}}\cup \{+\infty \}$ for all $B\in \mathcal{Z}$. Such a space is equipped with the smallest σ-field ${\mathcal{N}_{\sigma }}:={\mathcal{N}_{\sigma }}(Z)$ such that, for each $B\in \mathcal{Z}$, the mapping ${\mathbf{N}_{\sigma }}\ni \chi \mapsto \chi (B)\in [0,+\infty ]$ is measurable. In view of our assumptions on Z and following, e.g., [51, Section 6.1], throughout the paper we can assume without loss of generality that η is proper, in the sense that η can be P-a.s. represented in the form
(5.1)
\[ \eta ={\sum \limits_{n=1}^{\eta (Z)}}{\delta _{{X_{n}}}},\]
where $\{{X_{n}}:n\ge 1\}$ is a countable collection of random elements with values in Z and where we write ${\delta _{z}}$ for the Dirac measure at z. Since we assume μ to be nonatomic, one has that ${X_{k}}\ne {X_{n}}$ for every $k\ne n$, P-a.s.
Now denote by $\mathbf{F}({\mathbf{N}_{\sigma }})$ the class of all measurable functions $\mathfrak{f}:{\mathbf{N}_{\sigma }}\to \mathbb{R}$ and by ${\mathcal{L}^{0}}(\Omega ):={\mathcal{L}^{0}}(\Omega ,\mathcal{F})$ the class of real-valued, measurable functions F on Ω. Note that, as $\mathcal{F}=\sigma (\eta )$, each $F\in {\mathcal{L}^{0}}(\Omega )$ has the form $F=\mathfrak{f}(\eta )$ for some measurable function $\mathfrak{f}$. This $\mathfrak{f}$, called a representative of F, is ${P_{\eta }}$-a.s. uniquely defined, where ${P_{\eta }}=P\circ {\eta ^{-1}}$ is the image measure of P under η. Using a representative $\mathfrak{f}$ of F, one can introduce the add-one-cost operator ${D^{+}}={({D_{z}^{+}})_{z\in \mathcal{Z}}}$ on ${\mathcal{L}^{0}}(\Omega )$ as follows:
(5.2)
\[ {D_{z}^{+}}F:=\mathfrak{f}(\eta +{\delta _{z}})-\mathfrak{f}(\eta )\hspace{0.1667em},\hspace{1em}z\in \mathcal{Z}.\]
Similarly, we define ${D^{-}}$ on ${\mathcal{L}^{0}}(\Omega )$ as
(5.3)
\[ {D_{z}^{-}}F:=\mathfrak{f}(\eta )-\mathfrak{f}(\eta -{\delta _{z}})\hspace{0.1667em},\hspace{0.1667em}\hspace{0.1667em}\hspace{2.5pt}\text{if}\hspace{5pt}z\in \mathrm{supp}(\eta )\hspace{0.1667em},\hspace{0.1667em}\hspace{0.1667em}\text{and}\hspace{5pt}{D_{z}^{-}}F:=0,\hspace{0.1667em}\hspace{0.1667em}\text{otherwise,}\]
where $\mathrm{supp}(\chi ):=\big\{z\in Z\hspace{0.1667em}:\hspace{0.1667em}\chi (A)\ge 1\hspace{2.5pt}\text{for all}\hspace{2.5pt}A\in \mathcal{Z}\hspace{2.5pt}\text{such that}\hspace{2.5pt}z\in A\big\}$ is the support of the measure $\chi \in {\mathbf{N}_{\sigma }}$. We call $-{D^{-}}$ the remove-one-cost operator associated with η. We stress that the definitions of ${D^{+}}F$ and ${D^{-}}F$ are, respectively, $P\otimes \mu $-a.e. and P-a.s. independent of the choice of the representative $\mathfrak{f}$ – see, e.g., the discussion in [37, Section 2] and the references therein. Note that the operator ${D^{+}}$ can be straightforwardly iterated as follows: set ${D^{(1)}}:={D^{+}}$ and, for $n\ge 2$ and ${z_{1}},\dots ,{z_{n}}\in Z$ and $F\in {\mathcal{L}^{0}}(\Omega )$, recursively define
\[ {D_{{z_{1}},\dots ,{z_{n}}}^{(n)}}F:={D_{{z_{1}}}^{+}}\big({D_{{z_{2}},\dots ,{z_{n}}}^{(n-1)}}F\big).\]
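As a simple illustration of (5.2), consider $Z={[0,1]^{2}}$, μ a multiple of the Lebesgue measure, and the functional F counting the pairs of points of η at distance smaller than some radius r: then ${D_{z}^{+}}F$ is just the number of points of η within distance r of z. The Python sketch below (assuming NumPy; the intensity, the radius and the point z are arbitrary choices made for the example) simulates one configuration and evaluates the add-one cost directly from the definition.

# Illustration of the add-one-cost operator (5.2): eta is a Poisson process on
# [0,1]^2 and F = f(eta) counts the unordered pairs of points at distance < r.
import numpy as np

rng = np.random.default_rng(5)
lam, r = 200.0, 0.05

def f(points):
    # a representative of F: number of unordered pairs at distance < r
    if len(points) < 2:
        return 0
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return int(np.sum(d[np.triu_indices(len(points), k=1)] < r))

n_pts = rng.poisson(lam)                      # eta([0,1]^2) ~ Poisson(lam)
eta = rng.uniform(size=(n_pts, 2))            # points of the proper representation (5.1)

z = np.array([0.5, 0.5])
D_plus_F = f(np.vstack([eta, z])) - f(eta)    # D_z^+ F = f(eta + delta_z) - f(eta)
print(D_plus_F, int(np.sum(np.linalg.norm(eta - z, axis=1) < r)))   # the two agree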

5.2 ${L^{1}}$ integration by parts

One of the most fundamental formulae in the theory of Poisson processes is the so-called Mecke formula stating that, for each measurable function $h:{\mathbf{N}_{\sigma }}\times Z\to [0,+\infty ]$, the identity
(5.4)
\[ \mathbb{E}\bigg[{\int _{Z}}h(\eta +{\delta _{z}},z)\mu (dz)\bigg]=\mathbb{E}\bigg[{\int _{Z}}h(\eta ,z)\eta (dz)\bigg]\]
holds true. In fact, the equation (5.4) characterizes the Poisson process, see [51, Chapter 4] for a detailed discussion. Such a formula can be used in order to define an (approximate) integration by parts formula on the Poisson space.
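The Mecke formula is easy to test by simulation. For $Z=[0,1]$, $\mu =\lambda \cdot \mathrm{Lebesgue}$ and the (arbitrarily chosen) test function $h(\chi ,z)=z\hspace{0.1667em}\chi ([0,1])$, both sides of (5.4) equal $\lambda (\lambda +1)/2$; the following Monte Carlo sketch (assuming NumPy) confirms this.

# Monte Carlo check of the Mecke formula (5.4) on Z = [0,1] with intensity
# lam * Lebesgue and h(chi, z) = z * chi([0,1]); both sides equal lam*(lam+1)/2.
import numpy as np

rng = np.random.default_rng(6)
lam, n_rep = 3.0, 100_000

lhs = np.empty(n_rep)          # estimates of E[ int_Z h(eta + delta_z, z) mu(dz) ]
rhs = np.empty(n_rep)          # estimates of E[ int_Z h(eta, z) eta(dz) ]
for i in range(n_rep):
    n = rng.poisson(lam)                    # eta([0,1])
    pts = rng.uniform(size=n)               # the points of eta
    lhs[i] = lam * (n + 1) / 2              # int_0^1 z * (n + 1) * lam dz
    rhs[i] = n * pts.sum()                  # sum over the points z of eta of z * n
print(lhs.mean(), rhs.mean(), lam * (lam + 1) / 2)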
For random variables $F,G\in {\mathcal{L}^{0}}(\Omega )$ such that ${D^{+}}F\hspace{0.1667em}{D^{+}}G\in {L^{1}}(P\otimes \mu )$, we define
(5.5)
\[ {\Gamma _{0}}(F,G):=\frac{1}{2}\left\{{\int _{Z}}({D_{z}^{+}}F{D_{z}^{+}}G)\hspace{0.1667em}\mu (dz)+{\int _{Z}}({D_{z}^{-}}F{D_{z}^{-}}G)\hspace{0.1667em}\eta (dz)\right\}\]
which verifies $\mathbb{E}[|{\Gamma _{0}}(F,G)|]<\infty $, and $\mathbb{E}[{\Gamma _{0}}(F,G)]=\mathbb{E}[{\textstyle\int _{Z}}({D_{z}^{+}}F{D_{z}^{+}}G)\hspace{0.1667em}\mu (dz)]$, in view of the Mecke formula. The following statement, taken from [37], can be regarded as an integration by parts formula in the framework of Poisson random measures, playing a role similar to that of Lemma 3.1 in the setting of Gaussian fields. It is an almost direct consequence of (5.4).
Lemma 5.1 (${L^{1}}$ integration by parts).
Let $G,H\in {\mathcal{L}^{0}}(\Omega )$ be such that
\[ G{D^{+}}H,\hspace{0.1667em}\hspace{0.1667em}{D^{+}}G\hspace{0.1667em}{D^{+}}H\in {L^{1}}(P\otimes \mu ).\]
Then,
(5.6)
\[ \mathbb{E}\left[G\left({\int _{Z}}{D_{z}^{+}}H\hspace{0.1667em}\mu (dz)-{\int _{Z}}{D_{z}^{-}}H\hspace{0.1667em}\eta (dz)\right)\right]=-\mathbb{E}[{\Gamma _{0}}(G,H)].\]

5.3 Multiple integrals

For an integer $p\ge 1$ we denote by ${L^{2}}({\mu ^{p}})$ the Hilbert space of all square-integrable and real-valued functions on ${Z^{p}}$ and we write ${L_{s}^{2}}({\mu ^{p}})$ for the subspace of those functions in ${L^{2}}({\mu ^{p}})$ which are ${\mu ^{p}}$-a.e. symmetric. Moreover, for ease of notation, we denote by $\| \cdot {\| _{2}}$ and ${\langle \cdot ,\cdot \rangle _{2}}$ the usual norm and scalar product on ${L^{2}}({\mu ^{p}})$ for every value of p. We further define ${L^{2}}({\mu ^{0}}):=\mathbb{R}$. For $f\in {L^{2}}({\mu ^{p}})$, we denote by ${I_{p}}(f)$ the multiple Wiener–Itô integral of f with respect to $\hat{\eta }$. If $p=0$, then, by convention, ${I_{0}}(c):=c$ for each $c\in \mathbb{R}$. Now let $p,q\ge 0$ be integers. The following basic properties are proved, e.g., in [50], and are analogous to the properties of multiple integrals in a Gaussian framework, as discussed in Section 3.1:
  • 1. ${I_{p}}(f)={I_{p}}(\tilde{f})$, where $\tilde{f}$ denotes the canonical symmetrization of $f\in {L^{2}}({\mu ^{p}})$;
  • 2. ${I_{p}}(f)\in {L^{2}}(P)$, and $\mathbb{E}\big[{I_{p}}(f){I_{q}}(g)\big]={\delta _{p,q}}\hspace{0.1667em}p!\hspace{0.1667em}{\langle \tilde{f},\tilde{g}\rangle _{2}}$, where ${\delta _{p,q}}$ denotes the Kronecker delta symbol.
As in the Gaussian framework of Section 3.1, for $p\ge 0$ the Hilbert space consisting of all random variables ${I_{p}}(f)$, $f\in {L^{2}}({\mu ^{p}})$, is called the p-th Wiener chaos associated with η, and is customarily denoted by ${C_{p}}$. It is a crucial fact that every $F\in {L^{2}}(P)$ admits a unique representation
(5.7)
\[ F=\mathbb{E}[F]+{\sum \limits_{p=1}^{\infty }}{I_{p}}({f_{p}})\hspace{0.1667em},\]
where ${f_{p}}\in {L_{s}^{2}}({\mu ^{p}})$, $p\ge 1$, are suitable symmetric kernel functions, and the series converges in ${L^{2}}(P)$. Identity (5.7) is the analogue of relation (3.2), and is once again referred to as the chaotic decomposition of the functional $F\in {L^{2}}(P)$.
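In the simplest case $p=q=1$, the isometry recalled at point 2 above can be checked by direct simulation: ${I_{1}}(f)={\textstyle\sum _{x\in \eta }}f(x)-{\textstyle\int _{Z}}f\hspace{0.1667em}d\mu $ and $\mathbb{E}[{I_{1}}(f){I_{1}}(g)]={\langle f,g\rangle _{2}}$. The following Python sketch does so for the illustrative choices $Z={[0,1]^{2}}$, $\mu =t\cdot \text{Lebesgue}$, $f(x,y)=x$ and $g(x,y)=x+y$.

import numpy as np

rng = np.random.default_rng(3)
t = 80.0                                       # mu = t * Lebesgue on Z = [0, 1]^2
f = lambda p: p[:, 0]                          # f(x, y) = x
g = lambda p: p[:, 0] + p[:, 1]                # g(x, y) = x + y
f_leb, g_leb, fg_leb = 0.5, 1.0, 7.0 / 12.0    # Lebesgue integrals of f, g and f*g on [0, 1]^2

def I1(h, pts, h_leb):
    """I_1(h) = sum_{x in eta} h(x) - int_Z h dmu."""
    return h(pts).sum() - t * h_leb

prods = []
for _ in range(20000):
    pts = rng.uniform(size=(rng.poisson(t), 2))
    prods.append(I1(f, pts, f_leb) * I1(g, pts, g_leb))
print("Monte Carlo E[I_1(f) I_1(g)] =", np.mean(prods))
print("exact <f, g>_{L^2(mu)}       =", t * fg_leb)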
The multiple integrals discussed in this section also enjoy multiplicative properties similar to formula (3.5) above – see, e.g., [50, Proposition 5] for a precise statement. One consequence of such product formulae is that, if $F\in {C_{p}}$ and $G\in {C_{q}}$ are such that $FG$ is square-integrable, then
(5.8)
\[ FG\in {\underset{r=0}{\overset{p+q}{\bigoplus }}}{C_{r}},\]
which can be seen as a property analogous to (4.3).

5.4 Malliavin operators

We now briefly discuss Malliavin operators on the Poisson space.
  • 1. The domain $\mathrm{dom}\hspace{0.1667em}D$ of the Malliavin derivative operator D is the set of all $F\in {L^{2}}(P)$ such that the chaotic decomposition (5.7) of F satisfies ${\textstyle\sum _{p=1}^{\infty }}p\hspace{0.1667em}p!\| {f_{p}}{\| _{2}^{2}}<\infty $. For such an F, the random function $Z\ni z\mapsto {D_{z}}F\in {L^{2}}(P)$ is defined via
    (5.9)
    \[ {D_{z}}F={\sum \limits_{p=1}^{\infty }}p{I_{p-1}}\big({f_{p}}(z,\cdot )\big)\hspace{0.1667em},\]
    whenever z is such that the series is converging in ${L^{2}}(P)$ (this happens μ-a.e.), and set to zero otherwise; note that ${f_{p}}(z,\cdot )$ is an a.e. symmetric function on ${Z^{p-1}}$. Hence, $DF={({D_{z}}F)_{z\in \mathcal{Z}}}$ is indeed an element of ${L^{2}}\big(P\otimes \mu \big)$. It is well-known that $F\in \mathrm{dom}\hspace{0.1667em}D$ if and only if ${D^{+}}F\in {L^{2}}\big(P\otimes \mu \big)$, and in this case
    (5.10)
    \[ {D_{z}}F={D_{z}^{+}}F,\hspace{1em}P\otimes \mu \text{-a.e.}\]
  • 2. The domain $\mathrm{dom}\hspace{0.1667em}\mathbf{L}$ of the Ornstein–Uhlenbeck generator L is the set of those $F\in {L^{2}}(P)$ whose chaotic decomposition (5.7) verifies the condition ${\textstyle\sum _{p=1}^{\infty }}{p^{2}}\hspace{0.1667em}p!\| {f_{p}}{\| _{2}^{2}}<\infty $ (so that $\mathrm{dom}\hspace{0.1667em}\mathbf{L}\subset \mathrm{dom}\hspace{0.1667em}D$) and, for $F\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$, one defines
    (5.11)
    \[ \mathbf{L}F=-{\sum \limits_{p=1}^{\infty }}p{I_{p}}({f_{p}})\hspace{0.1667em}.\]
    By definition, $\mathbb{E}[\mathbf{L}F]=0$; also, from (5.11) it is easy to see that L is symmetric, in the sense that
    \[ \mathbb{E}\big[(\mathbf{L}F)G\big]=\mathbb{E}\big[F(\mathbf{L}G)\big]\]
    for all $F,G\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$. Note that, from (5.11), it is immediate that the spectrum of $-\mathbf{L}$ is given by the nonnegative integers and that $F\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$ is an eigenfunction of $-\mathbf{L}$ with corresponding eigenvalue p if and only if $F={I_{p}}({f_{p}})$ for some ${f_{p}}\in {L_{s}^{2}}({\mu ^{p}})$, that is:
    \[ {C_{p}}=\mathrm{Ker}(\mathbf{L}+pI).\]
    The following identity corresponds to formula (65) in [50]: if $F\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$ is such that ${D^{+}}F\in {L^{1}}(P\otimes \mu )$, then
    (5.12)
    \[ \mathbf{L}F={\int _{Z}}\big({D_{z}^{+}}F\big)\mu (dz)-{\int _{Z}}\big({D_{z}^{-}}F\big)\eta (dz)\hspace{0.1667em}.\]
    Define for any $F\in {L^{2}}(P)$ the pseudoinverse ${\mathbf{L}^{-1}}$ by
    \[ {\mathbf{L}^{-1}}F=-{\sum \limits_{p=1}^{\infty }}\frac{1}{p}{I_{p}}({f_{p}}).\]
    Recall [50, Section 8] the covariance identity
    (5.13)
    \[ \operatorname{Cov}(F,G)=-\int \mathbb{E}[{D_{z}}G{D_{z}}{\mathbf{L}^{-1}}F]\mu (dz).\]
  • 3. For suitable random variables $F,G\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$ such that $FG\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$, we introduce the carré du champ operator Γ associated with L by
    (5.14)
    \[ \Gamma (F,G):=\frac{1}{2}\big(\mathbf{L}(FG)-F\mathbf{L}G-G\mathbf{L}F\big)\hspace{0.1667em}.\]
    The symmetry of L implies immediately the crucial integration by parts formula
    (5.15)
    \[ \mathbb{E}\big[(\mathbf{L}F)G\big]=\mathbb{E}\big[F(\mathbf{L}G)\big]=-\mathbb{E}\big[\Gamma (F,G)\big];\]
    we will see below that, for many random variables F, G, relation (5.15) coincides with the identity appearing in Lemma 5.1.
The following result – proved in [37] – provides an explicit representation of the carré-du-champ operator Γ in terms of ${\Gamma _{0}}$, as introduced in (5.5).
Proposition 5.2.
For all $F,G\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$ such that $FG\in \mathrm{dom}\hspace{0.1667em}\mathbf{L}$ and
\[ DF,\hspace{0.1667em}DG,\hspace{0.1667em}FDG,\hspace{0.1667em}GDF\in {L^{1}}(P\otimes \mu ),\]
we have that $DF={D^{+}}F$, $DG={D^{+}}G$, in such a way that $DF\hspace{0.1667em}DG\hspace{-0.1667em}=\hspace{-0.1667em}{D^{+}}F\hspace{0.1667em}{D^{+}}G\hspace{-0.1667em}\in \hspace{-0.1667em}{L^{1}}(P\otimes \mu )$, and
(5.16)
\[ \Gamma (F,G)={\Gamma _{0}}(F,G),\]
where ${\Gamma _{0}}$ is defined in (5.5).
One crucial consequence of this result is that the operator Γ is not diffusive, in the sense that the triple $(\Omega ,P,\mathbf{L})$ is not a diffusive fourth moment structure, as introduced in Definition 4.1; it follows in particular that the machinery of Section 4 cannot be directly applied.

5.5 Fourth moment theorems

Starting at least from the reference [83] (where Malliavin calculus and Stein’s method were first combined on the Poisson space), establishing a fourth moment bound similar to Theorem 4.6 on the Poisson space remained an open problem for several years. As recalled above, the main difficulty in achieving such a result is the discrete nature of the add-one-cost and remove-one-cost operators, preventing in particular the triple $(\Omega ,P,\mathbf{L})$ from enjoying a diffusive property.
The next statement contains one of the main bounds proved in [38], and shows that a quantitative fourth moment bound is available on the Poisson space. Such a bound (which also has a multidimensional extension) is proved by a clever combination of Malliavin-type techniques with an infinitesimal version of the exchangeable pairs approach toward Stein’s method – see, e.g., [23].
Theorem 5.3.
For $q\ge 2$, let $F={I_{q}}({f_{q}})$ be a multiple integral of order q with respect to $\hat{\eta }$, and assume that $\mathbb{E}[{F^{2}}]=1$. Then,
\[ {d_{W}}(F,N)\le \left(\sqrt{\frac{2}{\pi }}+\frac{4}{3}\right)\sqrt{\mathbb{E}[{F^{4}}]-3}.\]
One should notice that the first bound of this type was proved in [37] under slightly more restrictive assumptions; also, reference [37] contains analogous bounds in the Kolmogorov distance, that are not achievable by using exchangeable pairs. In particular, one of the key estimates used in [37] is the following remarkable equality and bound
\[ \frac{1}{2q}{\int _{Z}}\mathbb{E}\big[|{D_{z}^{+}}F{|^{4}}\big]\mu (dz)=\frac{3}{q}\mathbb{E}\big[{F^{2}}\Gamma (F,F)\big]-\mathbb{E}\big[{F^{4}}\big]\le \frac{4q-3}{2q}\Big(\mathbb{E}\big[{F^{4}}\big]-3\mathbb{E}{\big[{F^{2}}\big]^{2}}\Big),\]
that are valid for every $F\in {C_{q}}$, $q\ge 2$, such that the mapping $z\mapsto {D_{z}^{+}}F$ verifies some minimal integrability conditions.

5.6 Second-order Poincaré estimates

What one calls second-order Poincaré inequalities is a collection of analytic estimates (first established on the Poisson space in [55]) where the Wasserstein and Kolmogorov distances, between a given function of η and a Gaussian random variable, are bounded by integrated moments of iterated add-one-cost operators on the Poisson space. The rationale behind such a name is the following. Just as the Poincaré inequality
(5.17)
\[ \operatorname{Var}(F)\le {\int _{Z}}\mathbb{E}[{({D_{z}^{+}}F)^{2}}]\mu (dz),\]
controls the variance of a random variable F by means of integrated moments of the add-one-cost (see [51, Section 18.3]), the integrated moments of the second-order add-one-cost ${D_{x}^{+}}{D_{y}^{+}}F:={D_{x,y}^{2}}F$ control the discrepancy between the distribution of F and that of a Gaussian random variable – a phenomenon already observed in the Gaussian setting [21, 70, 96], where gradients typically replace add-one-cost operators.
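As a toy numerical check of (5.17), take $F=\eta {(A)^{2}}$ for a fixed set A with $\lambda =\mu (A)$: then ${D_{z}^{+}}F={\mathbf{1}_{A}}(z)(2\eta (A)+1)$, both sides of (5.17) reduce to moments of a Poisson(λ) random variable, and the inequality reads $\lambda +6{\lambda ^{2}}+4{\lambda ^{3}}\le \lambda +8{\lambda ^{2}}+4{\lambda ^{3}}$. The following Python sketch estimates both sides by simulation.

import numpy as np

rng = np.random.default_rng(4)
lam = 3.0                                          # lambda = mu(A)
N = rng.poisson(lam, size=200000).astype(float)    # eta(A) ~ Poisson(lambda)
var_F = np.var(N ** 2)                             # Var(F); exact value: lam + 6 lam^2 + 4 lam^3
bound = lam * np.mean((2 * N + 1) ** 2)            # int_Z E[(D_z^+ F)^2] mu(dz); exact: lam + 8 lam^2 + 4 lam^3
print("Var(F)                ~", var_F)
print("Poincare bound (5.17) ~", bound)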
For the rest of the section, we exclusively consider square-integrable random variables F such that $F\in \mathrm{dom}\hspace{0.1667em}D$, in such a way that ${D^{+}}F=DF$ (up to negligible sets). The starting point for proving second-order Poincaré estimates is the covariance identity (5.13), which can be proved as in the Gaussian setting by means of chaos expansions. When one combines Stein’s method with such a formula, it is however not possible to deduce the existence of a Stein kernel as in the Gaussian setting (see (3.12)), since Malliavin operators on a Poisson space do not enjoy an exact chain rule such as (3.7). Indeed, we have that, for a sufficiently smooth mapping $f:\mathbb{R}\to \mathbb{R}$,
\[\begin{aligned}{}\operatorname{Cov}(F,f(F))& =-\int \mathbb{E}[{D_{z}}(f(F)){D_{z}}{\mathbf{L}^{-1}}F]\mu (dz)\\ {} & =:-\int \mathbb{E}[{f^{\prime }}(F){D_{z}}F{D_{z}}{\mathbf{L}^{-1}}F]\mu (dz)+R\end{aligned}\]
where we approximate ${D_{z}}(f(F))=f(F+{D_{z}}F)-f(F)$ by ${f^{\prime }}(F){D_{z}}F$ with the error term
\[ {D_{z}}F{\int _{0}^{1}}[{f^{\prime }}(F+t{D_{z}}F)-{f^{\prime }}(F)]dt\]
appearing in the implicit definition of R; notice that, in general, $R\ne 0$, thus the previous computations do not yield the existence of a Stein kernel. Selecting f as in Lemma 2.1-(d), one can bound the error term in the aforementioned calculation by $|{D_{z}}F{|^{2}}$. Therefore, for F such that $\mathbb{E}[F]=0$ and $\operatorname{Var}[F]=1$, one has the bound
\[ {d_{W}}(F,N)\le \sqrt{\operatorname{Var}\Big[\int {D_{z}}F\hspace{0.1667em}{D_{z}}{\mathbf{L}^{-1}}F\hspace{0.1667em}\mu (dz)\Big]}+\int \mathbb{E}[|{D_{z}}F{|^{2}}|{D_{z}}{\mathbf{L}^{-1}}F|]\mu (dz).\]
Applying the Poincaré inequality (5.17) to the variance term, as well as the contraction bound [55, Lemma 3.4] for the add-one-cost
\[ \mathbb{E}[|{D_{z}}{\mathbf{L}^{-1}}F{|^{p}}]\le \mathbb{E}[|{D_{z}}F{|^{p}}],\hspace{1em}p\ge 1,\]
and analogous estimates for the iterated add-one-cost, leads to the following theorem.
Theorem 5.4 (Second-order Poincaré estimates [55]).
Let $F\in \mathrm{dom}\hspace{0.1667em}D$ be such that $\mathbb{E}[F]=0$ and $\operatorname{Var}[F]=1$, and let N be a standard Gaussian random variable. Then,
\[ {d_{W}}(F,N)\le {\gamma _{1}}+{\gamma _{2}}+{\gamma _{3}},\]
where
\[\begin{aligned}{}{\gamma _{1}}& :=2{\Big[\iiint \mathbb{E}{[{({D_{x}}F{D_{y}}F)^{2}}]^{1/2}}\mathbb{E}{[{({D_{x,z}^{2}}F{D_{y,z}^{2}}F)^{2}}]^{1/2}}{\mu ^{3}}(dxdydz)\Big]^{1/2}},\\ {} {\gamma _{2}}& :={\Big[\iiint \mathbb{E}[{({D_{x,z}^{2}}F{D_{y,z}^{2}}F)^{2}}]{\mu ^{3}}(dxdydz)\Big]^{1/2}},\\ {} {\gamma _{3}}& :=\int \mathbb{E}[|{D_{x}}F{|^{3}}]\mu (dx).\end{aligned}\]
As mentioned above, second-order Poincaré techniques are equally useful for obtaining bounds in the Kolmogorov distance – see [55], as well as [90] for a powerful extension to the framework of multivariate normal approximations.
An example of a successful application of second-order Poincaré estimates from [55] (to which we refer the reader for a discussion of the associated literature) is the derivation of presumably optimal Berry–Esseen bounds for the total edge length of the Poisson-based nearest neighbor graph. More precisely, let ${\eta _{t}}$ be a Poisson point process with intensity $t>0$ on a convex compact set $H\subset {\mathbb{R}^{d}}$. We consider the graph with vertex set $\operatorname{supp}{\eta _{t}}$ and with an edge $\{x,y\}\subset \operatorname{supp}{\eta _{t}}$ whenever x is the nearest neighbor of y or vice versa. Consider the total edge length of the graph so obtained, denoted by ${L_{t}}$. Then we have
\[ {d_{W}}\Big(\frac{{L_{t}}-\mathbb{E}[{L_{t}}]}{\sqrt{\operatorname{Var}[{L_{t}}]}},N\Big)\le \frac{C}{\sqrt{t}},\]
where C depends only on H. We refer the reader to [55, Theorem 7.1] for a far more general statement, and to [49] for a collection of presumably optimal bounds on the normal approximation of exponentially stabilizing random variables (see the next subsection).
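The following Python sketch gives a purely numerical illustration (not a proof) of the above bound: it simulates the total edge length ${L_{t}}$ of the nearest neighbor graph over a Poisson process of intensity t on the unit square (an illustrative choice of H), standardizes the replications, and reports the Kolmogorov–Smirnov distance to the standard Gaussian as an informal diagnostic of approximate normality.

import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import kstest

rng = np.random.default_rng(5)

def nng_total_length(pts):
    """Total edge length of the nearest neighbor graph on pts."""
    if len(pts) < 2:
        return 0.0
    dist, idx = cKDTree(pts).query(pts, k=2)          # column 0 is the point itself
    edges = {}
    for i, (d, j) in enumerate(zip(dist[:, 1], idx[:, 1])):
        edges[tuple(sorted((i, int(j))))] = d         # each undirected edge stored once
    return sum(edges.values())

t, n_rep = 300.0, 2000
L = np.array([nng_total_length(rng.uniform(size=(rng.poisson(t), 2)))
              for _ in range(n_rep)])
L_std = (L - L.mean()) / L.std()
print("KS distance to N(0, 1):", kstest(L_std, "norm").statistic)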

5.7 Stabilization theory and two-scale bounds

While the second-order Poincaré estimates can provide sharp Berry–Esseen bounds, they are not always applicable. This is the case, for instance, for certain combinatorial optimization statistics or connectivity functionals of the underlying Poisson process. The problem is typically that the iterated add-one-costs of the functional, although well defined almost surely, are not computationally tractable, e.g., for the purpose of obtaining moment estimates.
In this section, we present an alternative collection of analytic inequalities, called the two-scale stabilization bounds, which avoid the use of iterated add-one-costs – they are one of the main findings from [48]; see also [22] for several related estimates obtained by a discretization procedure. As their name suggests, these bounds are closely related to the stabilization theory of Penrose and Yukich [87, 86]. Such a theory originated from the ground-breaking central limit theorem of Kesten and Lee [46] for the total edge weight ${M_{n}}$ of Euclidean minimal spanning trees (MST) built over stationary Poisson points ${\eta _{n}}$ in a ball of radius $n\in \mathbb{N}$. Recall that the MST is the connected graph over the vertex set ${\eta _{n}}$ that minimizes the total length. Without referring to the stochastic analysis on the Poisson space, Kesten and Lee already performed a fine study of the add-one-cost of ${M_{n}}$ (and not of the iterated add-one-cost), yielding in particular some moment estimates for ${D_{x}}{M_{n}}$. Penrose and Yukich [87] extrapolated the high-level ideas from [46] and transformed them into a general theory applicable to (nonquantitative) central limit theorems for a plethora of problems in stochastic geometry. The theory was further extended to multivariate normal approximation by Penrose [86]. A variant of the theory using score functionals was put forward by Baryshnikov and Yukich [13].
We now define properly the notions of strong and weak stabilization. We assume for concreteness that the ambient space is ${\mathbb{R}^{d}}$ and η is a Poisson process of unit intensity. A Poisson functional $F=F(\eta )$ is strongly stabilizing if there exists an almost surely finite random variable R, called the stabilization radius, such that
\[ {D_{0}}F(\eta {|_{{B_{R}}}})={D_{0}}F(\eta ),\]
where ${B_{R}}$ stands for a ball with radius R centered at the origin. Here is a simple example. Fix $r>0$ and connect two points of ${\eta _{n}}:=\eta {|_{{B_{n}}}}$ by an edge whenever they are within distance r of each other. The graph $G({\eta _{n}},r)$ so obtained is known as the Gilbert graph or the random geometric graph. Then, the number $F({\eta _{n}})$ of edges within a finite window containing the origin has stabilization radius $R=r$ almost surely, since ${D_{0}}F(\eta )$ is the number of edges incident to the origin in $G(\eta +{\delta _{0}},r)$. In many problems of stochastic geometry, proving strong stabilization relies on combinatorial and geometric arguments; see [87] for a list of examples. In general situations, R is genuinely random, in contrast to the simple example given above.
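A minimal numerical sketch of this example (with an illustrative finite window and intensity) checks that the add-one-cost at the origin for the Gilbert edge count is unchanged when η is restricted to the ball of radius r around the origin, in agreement with the stabilization radius $R=r$.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
r = 0.3
pts = rng.uniform(-5.0, 5.0, size=(rng.poisson(100.0), 2))   # Poisson sample in the window [-5, 5]^2

def edge_count(points, r):
    return len(cKDTree(points).query_pairs(r)) if len(points) > 1 else 0

def D0(points, r):
    """Add-one-cost at the origin: f(chi + delta_0) - f(chi) for the edge count f."""
    return edge_count(np.vstack([points, [0.0, 0.0]]), r) - edge_count(points, r)

restricted = pts[np.linalg.norm(pts, axis=1) <= r]           # eta restricted to B_R with R = r
print("D_0 F(eta)        =", D0(pts, r))
print("D_0 F(eta|_{B_r}) =", D0(restricted, r))              # equal: the functional stabilizes at R = r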
To obtain central limit theorems, it actually suffices to show a weaker version of stabilization. We say that F is weakly stabilizing if for any sequence of measurable sets ${E_{n}}$ satisfying $\liminf {E_{n}}={\mathbb{R}^{d}}$, we have the almost sure convergence
\[ {D_{0}}F(\eta {|_{{E_{n}}}})\to \Delta \]
where Δ is a random variable. It is clear that a strongly stabilizing functional is also weakly stabilizing with $\Delta ={D_{0}}F(\eta )$.
Theorem 5.5 ( See [87, Theorem 3.1]).
Suppose that F is weakly stabilizing and satisfies the moment condition
\[ \underset{A}{\sup }\mathbb{E}[|{D_{0}}F(\eta {|_{A}}){|^{4}}]<\infty ,\]
where the supremum is taken over all balls A that contain 0. Then there exists ${\sigma ^{2}}\ge 0$ such that
\[ \frac{1}{\sqrt{\mathrm{Vol}({B_{n}})}}(F({\eta _{n}})-\mathbb{E}[F({\eta _{n}})])\stackrel{d}{\to }N(0,{\sigma ^{2}}).\]
It is remarkable how few assumptions one needs in order to obtain a CLT. Notice that the limiting variance ${\sigma ^{2}}$ could be 0. In [87], it was shown that ${\sigma ^{2}}>0$ whenever Δ is not a constant. Theorem 5.5 was proved by a martingale method and does not offer insights on how fast the normalized sequence converges to normal. The latter question was addressed in a recent preprint by Lachièze-Rey, Peccati and Yang [48]. Under slightly strengthened conditions on the functionals, they assessed the rate of normal approximation in Theorem 5.5. To state one of the bounds that can be deduced from [48], we consider again the ball ${B_{n}}$ of radius n centered at the origin, and introduce the key quantity
\[ {\psi _{n}}={\psi _{n}}({A_{n,\cdot }}):=\underset{x\in {B_{n}}}{\sup }\mathbb{E}[|{D_{x}}F(\eta {|_{{B_{n}}}})-{D_{x}}F(\eta {|_{{A_{n,x}}}})|],\hspace{1em}n\ge 1,\]
where ${A_{n,x}}$ is any measurable set indexed by n and x. In practice, we take ${A_{n,x}}={B_{{b_{n}}}}(x)=\{y:|x-y|\le {b_{n}}\}$ with $1\ll {b_{n}}\ll n$, which plays the role of a local window around x compared to the scale of ${B_{n}}$. In what follows, we adopt this choice and call ${\psi _{n}}$ a two-scale discrepancy in view of this interpretation. The following result, taken from [48], can be applied in many concrete problems in stochastic geometry.
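The quantity ${\psi _{n}}$ can be estimated by simulation. The following rough Python sketch does so for the total edge length of the nearest neighbor graph from Section 5.6, evaluating the discrepancy only at the single location x given by the centre of the window (a simplification: the definition takes a supremum over $x\in {B_{n}}$) and with the ball ${B_{n}}$ replaced by a box, for a few values of ${b_{n}}$.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)

def nng_length(pts):
    """Total edge length of the nearest neighbor graph on pts."""
    if len(pts) < 2:
        return 0.0
    dist, idx = cKDTree(pts).query(pts, k=2)
    return sum({tuple(sorted((i, int(j)))): d
                for i, (d, j) in enumerate(zip(dist[:, 1], idx[:, 1]))}.values())

def add_one_cost(pts, x):
    return nng_length(np.vstack([pts, x])) - nng_length(pts)

n, x = 10.0, np.array([0.0, 0.0])              # window [-n, n]^2, discrepancy evaluated at its centre
for b in (0.5, 1.0, 2.0):                      # local windows B_{b_n}(x)
    diffs = []
    for _ in range(200):
        pts = rng.uniform(-n, n, size=(rng.poisson((2 * n) ** 2), 2))   # unit intensity
        local = pts[np.linalg.norm(pts - x, axis=1) <= b]
        diffs.append(abs(add_one_cost(pts, x) - add_one_cost(local, x)))
    print(f"b_n = {b}:  discrepancy at x ~ {np.mean(diffs):.4f}")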
Theorem 5.6 ([48, Corollary 1.3]).
Let ${\hat{F}_{n}}=\operatorname{Var}{[F({\eta _{n}})]^{-1/2}}(F({\eta _{n}})-\mathbb{E}[F({\eta _{n}})])$ with ${\eta _{n}}=\eta {|_{{B_{n}}}}$ as before. Suppose that
\[ \underset{n\in \mathbb{N},x\in {B_{n}}}{\sup }\mathbb{E}[|{D_{x}}F({\eta _{n}}){|^{p}}]<\infty \]
for some $p>4$ and also that there exists an absolute constant $b>0$ such that $\operatorname{Var}[F({\eta _{n}})]\ge b|{B_{n}}|$. Then there exists a finite positive constant c such that
\[ \frac{1}{c}{d_{W}}({\hat{F}_{n}},N(0,1))\le {\psi _{n}^{\frac{1}{2}(1-\frac{4}{p})}}+{\Big(\frac{{b_{n}}}{n}\Big)^{\frac{d}{2}}}.\]
This theorem simplifies and extends some arguments in the proof of a quantitative CLT for the minimal spanning trees by Chatterjee and Sen [22]. Analogous Kolmogorov bounds for univariate normal approximation, and bounds for multivariate normal approximation are also considered in [48]. More remarks are in order.
Remark 5.7.
  • i) The sequence $({b_{n}})$ serves as a free parameter in the bound. One should keep track of the dependence of ${\psi _{n}}$ on ${b_{n}}$ and optimize over ${b_{n}}$ at the end.
  • ii) For any fixed $x\in {\mathbb{R}^{d}}$, applying the weak stabilization condition for F with two sequences $({B_{n}})$ and $({B_{{b_{n}}}}(x))$ (together with the translation invariance of η and the moment assumption for the add-one-cost) yields the following convergence
    \[ \mathbb{E}[|{D_{x}}F(\eta {|_{{B_{n}}}})-{D_{x}}F(\eta {|_{{B_{{b_{n}}}}(x)}})|]\to 0.\]
    As such, Theorem 5.6 quantifies Theorem 5.5 after uniformly strengthening the assumptions of Theorem 5.5.
  • iii) When the functional is strongly stabilizing, this bound takes an even simpler form. More precisely, we say that ${R_{x}}$ is a stabilization radius at x if
    \[ {D_{x}}F(\eta {|_{{B_{{R_{x}}}}(x)}})={D_{x}}F(\eta ).\]
    Then, applying Hölder’s inequality and the uniform moment condition for the add-one-cost leads to the existence of a finite positive constant c such that
    \[ {\psi _{n}}\le c\underset{x\in {B_{n}}}{\sup }\mathbb{P}{[{R_{x}}\ge {b_{n}}]^{1-\frac{1}{p}}}.\]
    Hence, the upper tail of ${R_{x}}$ is relevant for the rate of normal approximation. One may further classify the stabilization condition with regard to the decay of this upper tail. For instance, we say that the functional F is exponentially stabilizing if ${R_{x}}$ has a sub-exponential upper tail.
  • iv) There are some general methods for obtaining lower bounds on the variance. For example, one can partition the space into nonoverlapping cubes of appropriate size and then use projection methods for functions of independent random variables, such as the Hoeffding decomposition. Another method, based on chaos expansions, is given in [55, Section 5].
We mention one application where the second order Poincaré estimates do not apply but the two-scale stabilization bounds do. Fix $r>0$ and consider the number ${K_{n}}$ of components in the Gilbert graph $G({\eta _{n}},2r)$ (or equivalently the Boolean model ${O_{r,n}}={\cup _{x\in {\eta _{n}}}}B(x,r)$) as $n\to \infty $. This corresponds to the so-called thermodynamic regime, where the family of random sets ${O_{r}}={\cup _{x\in \eta }}B(x,r)$ (unbounded analogue of ${O_{r,n}}$) indexed by r exhibits a phase transition at some ${r^{\ast }}\in (0,\infty )$ defined as
\[ {r^{\ast }}=\inf \{r:\mathbb{P}[0\hspace{2.5pt}\text{is connected to infinity in}\hspace{2.5pt}{O_{r}}]>0\}.\]
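For concreteness, the component count ${K_{n}}$ can be computed from a Poisson sample by building the sparse adjacency structure of the Gilbert graph and extracting its connected components; the following Python sketch does so on an illustrative square window.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(8)
r, n = 0.5, 20.0
pts = rng.uniform(0.0, n, size=(rng.poisson(n ** 2), 2))    # unit-intensity Poisson sample in [0, n]^2

pairs = np.array(list(cKDTree(pts).query_pairs(2 * r)))     # edges of G(eta_n, 2r)
m = len(pts)
if len(pairs) > 0:
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(m, m))
else:
    adj = coo_matrix((m, m))
K_n = connected_components(adj, directed=False)[0]
print("points:", m, "  components K_n:", K_n)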
We stress that the analysis of ${K_{n}}$ is relatively involved in the critical phase due to the co-existence of the unbounded occupied component and the unbounded vacant component (in ${O_{r}^{c}}$). However, the following estimate was obtained in [48] for all $r>0$ in dimension 2 using the strong stabilization bound:
\[ {d_{W}}(({K_{n}}-\mathbb{E}[{K_{n}}])/\sqrt{\operatorname{Var}[{K_{n}}]},N(0,1))\le \frac{C}{{n^{\beta }}},\]
where C and β are finite positive constants. In $d\ge 3$, a polylogarithmic rate was obtained. The bottleneck of these estimates is given by the two-arm exponents of the critical Boolean models, which are hard to improve.
More generally, when one considers higher-dimensional topological statistics of the Boolean model, such as the Betti numbers, it may occur that strong stabilization does not hold [98, 95, 20]. In such cases, the two-scale weak stabilization bound might be well suited for obtaining quantitative CLTs.

6 Malliavin–Stein method for targets in the second Wiener chaos

In this section, we present a short overview of some recent developments of the Malliavin–Stein approach for target distributions in the second Gaussian Wiener chaos. We also formulate some complementary conjectures. We adopt the same notation as in Section 3.1 above. Let W stand for an isonormal Gaussian process on a separable Hilbert space $\mathfrak{H}$. Recall that the elements in the second Wiener chaos are random variables having the general form $F={I_{2}}(f)$, with $f\in {\mathfrak{H}^{\odot 2}}$. With any kernel $f\in {\mathfrak{H}^{\odot 2}}$, we associate the following Hilbert–Schmidt operator
\[ {A_{f}}:\mathfrak{H}\to \mathfrak{H};\hspace{1em}g\mapsto f{\otimes _{1}}g.\]
We also write ${\{{\alpha _{f,k}}\}_{k\ge 1}}$ and ${\{{e_{f,k}}\}_{k\ge 1}}$, respectively, to indicate the (not necessarily distinct) eigenvalues of ${A_{f}}$ and the corresponding eigenvectors. The next proposition gathers together some relevant properties of the elements of the second Wiener chaos associated with W.
Proposition 6.1 (See Section 2.7.4 in [66]).
Let $F={I_{2}}(f)$, $f\in {\mathfrak{H}^{\odot 2}}$, be a generic element of the second Wiener chaos of W.
  • 1. The following equality holds: $F={\textstyle\sum _{k\ge 1}}{\alpha _{f,k}}\big({N_{k}^{2}}-1\big)$, where ${\{{N_{k}}\}_{k\ge 1}}$ is a sequence of i.i.d. $\mathcal{N}(0,1)$ random variables that are elements of the isonormal process W, and the series converges in ${L^{2}}(\Omega )$ and almost surely.
  • 2. For any $r\ge 2$,
    \[ {\kappa _{r}}(F)={2^{r-1}}(r-1)!\sum \limits_{k\ge 1}{\alpha _{f,k}^{r}}.\]
  • 3. The law of the random variable F is determined by its moments, or equivalently, by its cumulants.
For the rest of the section, to avoid unnecessary complication, we consider target distributions in the second Wiener chaos of the form
(6.1)
\[ {F_{\infty }}={\sum \limits_{i=1}^{d}}{\alpha _{\infty ,i}}({N_{i}^{2}}-1)\]
where ${N_{i}}\sim \mathcal{N}(0,1)$ are i.i.d., the coefficients $({\alpha _{\infty ,i}}:i=1,\dots ,d)$ are distinct, and ${\alpha _{\infty ,i}}=0$ for $i\ge d+1$. We also work under the normalization assumption $\mathbb{E}[{F_{\infty }^{2}}]=1$. We highlight the following particular cases: (i) ${\alpha _{\infty ,i}}=1$ for $i=1,\dots ,d$, for which the target random variable ${F_{\infty }}$ reduces to a centered chi-squared distribution with d degrees of freedom (here, the Malliavin–Stein method has been successfully implemented in a series of papers [63, 36, 64, 71, 6]); (ii) $d=2$, and ${\alpha _{\infty ,1}}\times {\alpha _{\infty ,2}}<0$, in which case the target random variable ${F_{\infty }}$ belongs to the so-called Variance–Gamma class of probability distributions. We refer to [41–43, 39, 7] for developments of Stein and Malliavin–Stein methods for the Variance–Gamma distributions.
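The cumulant formula in Proposition 6.1 makes targets of the form (6.1) easy to handle numerically. The following Python sketch samples ${F_{\infty }}$ for an illustrative choice of coefficients with $d=2$ (normalized so that $\mathbb{E}[{F_{\infty }^{2}}]=1$) and compares the formula ${\kappa _{r}}={2^{r-1}}(r-1)!{\textstyle\sum _{i}}{\alpha _{\infty ,i}^{r}}$ with empirical cumulants.

import numpy as np
from math import factorial
from scipy.stats import kstat

rng = np.random.default_rng(9)
alpha = np.array([0.6, -0.8])
alpha = alpha / np.sqrt(2 * np.sum(alpha ** 2))       # normalize so that E[F_infty^2] = 1
N = rng.standard_normal(size=(500000, len(alpha)))
F = (alpha * (N ** 2 - 1)).sum(axis=1)                # F_infty = sum_i alpha_i (N_i^2 - 1), as in (6.1)

for r in (2, 3, 4):
    exact = 2 ** (r - 1) * factorial(r - 1) * np.sum(alpha ** r)
    print(f"kappa_{r}: formula = {exact:.4f}   sample = {kstat(F, r):.4f}")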
To any target distribution ${F_{\infty }}$ of the form (6.1) we attach the following polynomial
(6.2)
\[ Q(x)={\big(P(x)\big)^{2}}:={\Big(x{\prod \limits_{i=1}^{d}}(x-{\alpha _{\infty ,i}})\Big)^{2}}.\]
It turns out that the polynomials P and Q play a major role in quantitative limit theorems in this setup. The next result provides a (suitable) Stein operator for target distributions ${F_{\infty }}$ in the second Wiener chaos. We also mention that the stability phenomenon of the weak convergence of sequences in the second Wiener chaos is studied in [69] using tools from complex analysis.
Theorem 6.2 (Stein characterization [3]).
Let ${F_{\infty }}$ be an element of the second Wiener chaos of the form (6.1). Assume that F is a generic centered random variable living in a finite sum of Wiener chaoses (hence smooth in the sense of Malliavin calculus). Then, $F={F_{\infty }}$ (equality in distribution) if and only if $\mathbb{E}\left[{\mathcal{A}_{\infty }}f(F)\right]=0$ where the differential operator ${\mathcal{A}_{\infty }}$ is given by
(6.3)
\[ {\mathcal{A}_{\infty }}f(x):={\sum \limits_{l=2}^{d+1}}({b_{l}}-{a_{l-1}}x){f^{(d+2-l)}}(x)-{a_{d+1}}xf(x),\]
for all functions $f:\mathbb{R}\to \mathbb{R}$ such that ${\mathcal{A}_{\infty }}f(F)\in {L^{1}}(\Omega )$, with coefficients
(6.4)
\[\begin{aligned}{}& {a_{l}}:=\frac{{P^{(l)}}(0)}{l!{2^{l-1}}},\hspace{1em}1\le l\le d+1,\end{aligned}\]
(6.5)
\[\begin{aligned}{}& {b_{l}}:={\sum \limits_{r=l}^{d+1}}\frac{{a_{r}}}{(r-l+1)!}{\kappa _{r-l+2}}({F_{\infty }}),\hspace{1em}2\le l\le d+1.\end{aligned}\]
The polynomials P and Q are given by relation (6.2).
The next conjecture puts forward a non-Gaussian counterpart to Stein’s Lemma 2.1.
Conjecture 6.3 (Stein Universality Lemma).
Let $\mathcal{H}$ denote an appropriate separating (see [66, Definition C.1.1]) class of test functions. For every given test function $h\in \mathcal{H}$ consider the associated Stein equation
(6.6)
\[ {\mathcal{A}_{\infty }}f(x)=h(x)-\mathbb{E}[h({F_{\infty }})].\]
Then, equation (6.6) admits a bounded d times differentiable solution ${f_{h}}$ such that $\| {f_{h}^{(r)}}{\| _{\infty }}<+\infty $ for all $r=1,\dots ,d$ and the bounds are independent of the given test function h.
The rest of the section is devoted to several quantitative estimates involving target distributions in the second Wiener chaos. The first estimate is stated in terms of the 2-Wasserstein transport distance ${\operatorname{W}_{2}}$ (see Section 4.3 for definition). See also [47] for several related results of a quantitative nature.
Theorem 6.4 ([2]).
Let $({F_{n}}:n\ge 1)$ be a sequence of random variables belonging to the second Wiener chaos associated to the isonormal process W so that $\mathbb{E}[{F_{n}^{2}}]=1$ for all $n\ge 1$. Assume that the target random variable ${F_{\infty }}$ takes the form (6.1). Define
(6.7)
\[ \Delta ({F_{n}}):={\sum \limits_{r=2}^{\textit{deg}(Q)}}\frac{{Q^{(r)}}(0)}{r!}\frac{{\kappa _{r}}({F_{n}})}{(r-1)!{2^{r-1}}},\]
where the polynomial Q is given by (6.2). Then, there exists a constant $C>0$ (possibly depending only on the target random variable ${F_{\infty }}$ but independent of n) such that
(6.8)
\[ {\mathrm{W}_{2}}({F_{n}},{F_{\infty }})\le \hspace{0.1667em}C\hspace{0.1667em}\bigg(\sqrt{\Delta ({F_{n}})}+{\sum \limits_{r=2}^{d+1}}|{\kappa _{r}}({F_{n}})-{\kappa _{r}}({F_{\infty }})|\bigg).\]
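The quantities entering the bound (6.8) are explicitly computable. The following Python sketch builds the polynomials P and Q of (6.2), the coefficients ${a_{l}}$ of (6.4) and the quantity $\Delta (F)$ of (6.7) for a second-chaos random variable F with given eigenvalues, using the cumulant formula of Proposition 6.1. As a sanity check, $\Delta ({F_{\infty }})=0$: indeed, for a second-chaos F with eigenvalues ${\beta _{i}}$ one has $\Delta (F)={\textstyle\sum _{i}}P{({\beta _{i}})^{2}}\ge 0$, which vanishes exactly when the ${\beta _{i}}$ are roots of P.

import numpy as np
from math import factorial

alpha = np.array([0.6, -0.8])
alpha = alpha / np.sqrt(2 * np.sum(alpha ** 2))       # target eigenvalues, E[F_infty^2] = 1

P = np.poly1d([0.0] + list(alpha), r=True)            # P(x) = x * prod_i (x - alpha_i), cf. (6.2)
Q = P * P                                             # Q = P^2
a = [P.deriv(l)(0.0) / (factorial(l) * 2 ** (l - 1)) for l in range(1, P.order + 1)]
print("coefficients a_l of (6.4):", np.round(a, 4))

def Delta(beta):
    """Delta(F) of (6.7) for F = sum_i beta_i (N_i^2 - 1) in the second chaos."""
    total = 0.0
    for s in range(2, Q.order + 1):
        kappa_s = 2 ** (s - 1) * factorial(s - 1) * np.sum(beta ** s)   # Proposition 6.1-(2)
        total += Q.deriv(s)(0.0) / factorial(s) * kappa_s / (factorial(s - 1) * 2 ** (s - 1))
    return total

print("Delta(F_infty)     =", round(Delta(alpha), 10))        # vanishes at the target
print("Delta(perturbed F) =", round(Delta(alpha + 0.05), 6))  # strictly positive away from it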
Example 6.5.
Consider the target random variable ${F_{\infty }}$ of the form (6.1) with $d=2$ and ${\alpha _{\infty ,1}}=-{\alpha _{\infty ,2}}=1/2$. Hence ${F_{\infty }}={N_{1}}\times {N_{2}}$ in distribution, where ${N_{1}},{N_{2}}\sim \mathcal{N}(0,1)$ are independent, and ${F_{\infty }}$ belongs to the class of Variance–Gamma distributions $V{G_{c}}(r,\theta ,\sigma )$ with parameters $r=\sigma =1$ and $\theta =0$. Then, [39, Corollary 5.10, part (a)] reads
(6.9)
\[ {d_{W}}({F_{n}},{F_{\infty }})\le C\hspace{0.1667em}\sqrt{\Delta ({F_{n}})+1/4\hspace{0.1667em}{\kappa _{3}^{2}}({F_{n}})}\]
which is in line with the estimate (6.8); note that ${\kappa _{3}}({F_{\infty }})=0$.
The next result provides a quantitative bound in the Kolmogorov distance. The proof relies on the classical Berry–Essen estimate in terms of bounding the difference of the characteristic functions. We recall that for two real-valued random variables X and Y the Kolmogorov distance is defined as
\[ {d_{\text{Kol}}}(X,Y):=\underset{x\in \mathbb{R}}{\sup }\Big|\mathbb{P}[X\le x]-\mathbb{P}[Y\le x]\Big|.\]
Theorem 6.6 ([4]).
Let the target random variable ${F_{\infty }}$ in the second Wiener chaos be of the form (6.1). Assume that $({F_{n}}:n\ge 1)$ is a sequence of centered random elements living in a finite sum of Wiener chaoses. Then, there exists a constant C (possibly depending on the sequence $({F_{n}})$, but not on n) such that
(6.10)
\[ \begin{aligned}{}& {d_{\textit{Kol}}}({F_{n}},{F_{\infty }})\\ {} & \le C\sqrt{\mathbb{E}\left[\Big|{\sum \limits_{r=1}^{d+1}}{a_{r}}\left({\Gamma _{r-1}}({F_{n}})-\mathbb{E}[{\Gamma _{r-1}}({F_{n}})]\right)\Big|\right]+{\sum \limits_{r=2}^{d+1}}|{\kappa _{r}}({F_{n}})-{\kappa _{r}}({F_{\infty }})|}\\ {} & \le C\sqrt{\sqrt{\operatorname{Var}\left({\sum \limits_{r=1}^{d+1}}{a_{r}}{\Gamma _{r-1}}({F_{n}})\right)}+{\sum \limits_{r=2}^{d+1}}|{\kappa _{r}}({F_{n}})-{\kappa _{r}}({F_{\infty }})|}\end{aligned}\]
where the coefficients $({a_{r}}:r=1,\dots ,d+1)$ are given by relation (6.4). In the particular case when the sequence $({F_{n}}:n\ge 1)$ belongs to the second Wiener chaos, it holds that
\[ \operatorname{Var}\left({\sum \limits_{r=1}^{d+1}}{a_{r}}{\Gamma _{r-1}}({F_{n}})\right)=\Delta ({F_{n}})\]
where the quantity $\Delta ({F_{n}})$ is as in Theorem 6.4, and the estimate (6.10) takes the form (compare with the estimate (6.8))
\[ {d_{\textit{Kol}}}({F_{n}},{F_{\infty }})\le C\sqrt{\sqrt{\Delta ({F_{n}})}+{\sum \limits_{r=2}^{d+1}}|{\kappa _{r}}({F_{n}})-{\kappa _{r}}({F_{\infty }})|}.\]
We end the section with the following conjecture, whose object is the control of the iterated Gamma operators of Malliavin calculus appearing in the RHS of the estimate (6.10) by means of finitely many cumulants. Lastly, we point out that the forthcoming estimate (6.12) has to be compared with the famous estimate $\operatorname{Var}({\Gamma _{1}}(F))\le C\hspace{0.1667em}{\kappa _{4}}(F)$ in the normal approximation setting, when F is a chaotic random variable.
Conjecture 6.7.
Let ${F_{\infty }}$ be the target random variable in the second Wiener chaos of the form (6.1). Assume that $F={I_{q}}(f)$ is a chaotic random variable in the q-th Wiener chaos with $q\ge 2$. Then, there exists a general constant C (possibly depending on q and d) such that
(6.11)
\[ \operatorname{Var}\left({\sum \limits_{r=1}^{d+1}}\frac{{P^{(r)}}(0)}{r!{2^{r-1}}}\hspace{0.1667em}{\Gamma _{r-1}}(F)\right)\le C\hspace{0.1667em}{\sum \limits_{r=2}^{\textit{deg}(Q)}}\frac{{Q^{(r)}}(0)}{r!}\frac{{\kappa _{r}}(F)}{(r-1)!{2^{r-1}}}\]
where the polynomials P and Q are given by equation (6.2). In the particular case of the normal product target distribution, i.e., $d=2$, and ${\alpha _{\infty ,1}}=-{\alpha _{\infty ,2}}=1/2$, the estimate (6.11) boils down to
(6.12)
\[ \operatorname{Var}\left({\Gamma _{2}}(F)-F\right)\le C\left\{\frac{{\kappa _{6}}(F)}{5!}-2\frac{{\kappa _{4}}(F)}{3!}+{\kappa _{2}}(F)\right\},\]
where C is an absolute constant.

Footnotes

2 This is not completely accurate: attention has indeed to be paid to the fact that the function ${f_{h}}$ in (2.7) is only almost everywhere differentiable, and F does not necessarily have a density – see [60, Theorem 5.2] for a detailed proof based on the Lusin theorem.
3 Here, we adopt the usual convention of identifying a Poisson random variable with mean zero (resp. with infinite mean) with an a.s. zero (resp. infinite) random variable.

References

[1] 
A webpage about Stein’s method and Malliavin calculus (by I. Nourdin). https://sites.google.com/site/malliavinstein/home
[2] 
Arras, B., Azmoodeh, E., Poly, G., Swan, Y.: A bound on the 2-Wasserstein distance between linear combinations of independent random variables. Stoch. Process. Appl. 129(7), 2341–2375 (2019). MR3958435. https://doi.org/10.1016/j.spa.2018.07.009
[3] 
Arras, B., Azmoodeh, E., Poly, G., Swan, Y.: Stein characterizations for linear combinations of gamma random variables. Braz. J. Probab. Stat. 34(2), 394–413 (2020). MR4093265. https://doi.org/10.1214/18-BJPS420
[4] 
Arras, B., Mijoule, G., Poly, G., Swan, Y.: A new approach to the Stein-Tikhomirov method: with applications to the second Wiener chaos and Dickman convergence (2016). arXiv:1605.06819
[5] 
Azmoodeh, E., Campese, S., Poly, G.: Fourth Moment Theorems for Markov diffusion generators. J. Funct. Anal. 266(4), 2341–2359 (2014). MR3150163. https://doi.org/10.1016/j.jfa.2013.10.014
[6] 
Azmoodeh, E., Eichelsbacher, P., Knichel, L.: Optimal gamma approximation on Wiener space. ALEA Lat. Am. J. Probab. Math. Stat. 17(1), 101–132 (2020). MR4057185. https://doi.org/10.30757/alea.v17-05
[7] 
Azmoodeh, E., Gasbarra, D.: New moments criteria for convergence towards normal product/tetilla laws (2017). arXiv:1708.07681
[8] 
Azmoodeh, E., Malicet, D., Mijoule, G., Poly, G.: Generalization of the Nualart-Peccati criterion. Ann. Probab. 44(2), 924–954 (2016). MR3474463. https://doi.org/10.1214/14-AOP992
[9] 
Azmoodeh, E., Peccati, G., Poly, G.: Convergence towards linear combinations of chi-squared random variables: a Malliavin-based approach. In: In memoriam Marc Yor—Séminaire de Probabilités XLVII. Lecture Notes in Math., vol. 2137, pp. 339–367. Springer, Cham (2015). MR3444306. https://doi.org/10.1007/978-3-319-18585-9_16
[10] 
Bakry, D.: L‘hypercontractivité et son utilisation en théorie des semigroupes. In: Lectures on probability theory, Saint-Flour, 1992. Lecture Notes in Math., vol. 1581, pp. 1–114. Springer, Berlin (1994). MR1307413. https://doi.org/10.1007/BFb0073872
[11] 
Barbour, A.D.: Stein’s method for diffusion approximations. Probab. Theory Relat. Fields 84(3), 297–322 (1990). MR1035659. https://doi.org/10.1007/BF01197887
[12] 
Barbour, A.D., Janson, S.: A functional combinatorial central limit theorem. Electron. J. Probab. 14, 2352–2370 (2009). MR2556014. https://doi.org/10.1214/EJP.v14-709
[13] 
Baryshnikov, Yu., Yukich, J.E.: Gaussian limits for random measures in geometric probability. Ann. Appl. Probab. 15(1A), 213–253 (2005). MR2115042. https://doi.org/10.1214/105051604000000594
[14] 
Biermé, H., Bonami, A., Nourdin, I., Peccati, G.: Optimal Berry-Esseen rates on the Wiener space: the barrier of third and fourth cumulants. ALEA Lat. Am. J. Probab. Math. Stat. 9(2), 473–500 (2012). MR3069374
[15] 
Bonis, Th.: Stein’s method for normal approximation in Wasserstein distances with application to the multivariate central limit theorem. Probab. Theory Relat. Fields 178, 827–860 (2020). MR4168389. https://doi.org/10.1007/s00440-020-00989-4
[16] 
Bourguin, S., Campese, S., Leonenko, N., Taqqu, M.S.: Four moments theorems on Markov chaos. Ann. Probab. 47(3), 1417–1446 (2019). MR3945750. https://doi.org/10.1214/18-AOP1287
[17] 
Bourguin, S., Campese, S.: Approximation of Hilbert-Valued Gaussians on Dirichlet structures. Electron. J. Probab. 25, 150 (2020), 30 pp. MR4193891. https://doi.org/10.1214/20-ejp551
[18] 
Bakry, D., Gentil, I., Ledoux, M.: Analysis and Geometry of Markov Diffusion Operators. Springer (2014). MR3155209. https://doi.org/10.1007/978-3-319-00227-9
[19] 
Campese, S., Nourdin, I., Nualart, D.: Continuous Breuer-Major theorem: tightness and non-stationarity. Ann. Probab. 48(1), 147–177 (2020). MR4079433. https://doi.org/10.1214/19-AOP1357
[20] 
Can, V.H., Trinh, K.D.: Random connection models in the thermodynamic regime: central limit theorems for add-one cost stabilizing functionals arXiv:2004.06313 (2020). MR4049088. https://doi.org/10.1214/19-ecp279
[21] 
Chatterjee, S.: Fluctuations of eigenvalues and second order Poincaré inequalities. Probab. Theory Relat. Fields 143, 1–40 (2009). MR2449121. https://doi.org/10.1007/s00440-007-0118-6
[22] 
Chatterjee, S., Sen, S.: Minimal spanning trees and Stein’s method. Ann. Appl. Probab. 27(3), 1588–1645 (2017). MR3678480. https://doi.org/10.1214/16-AAP1239
[23] 
Chen, L.H.Y., Goldstein, L., Shao, Q.M.: Normal approximation by Stein’s method. Springer (2010). MR2732624. https://doi.org/10.1007/978-3-642-15007-4
[24] 
Chen, L.H., Poly, G.: Stein’s method, Malliavin calculus, Dirichlet forms and the fourth moment theorem. In: Festschrift of Masatoshi Fukushima. Interdisciplinary Mathematical Sciences, vol. 17, pp. 107–130 (2015). MR3379337. https://doi.org/10.1142/9789814596534_0006
[25] 
Courtade, T.A., Fathi, M., Pananjady, A.: Existence of Stein kernels under a spectral gap, and discrepancy bounds. Ann. Inst. Henri Poincaré Probab. Stat. 55(2), 777–790 (2019). MR3949953. https://doi.org/10.1214/18-aihp898
[26] 
Coutin, L., Decreusefond, L.: Stein’s method for Brownian approximations. Commun. Stoch. Anal. 7(3), 349–372 (2014). MR3167403. https://doi.org/10.31390/cosa.7.3.01
[27] 
Coutin, L., Decreusefond, L.: Higher order approximations via Stein’s method. Commun. Stoch. Anal. 8(2), 155–168 (2014). MR3269842. https://doi.org/10.31390/cosa.8.2.02
[28] 
Coutin, L., Decreusefond, L.: Stein’s method for rough paths. Potential Anal. 53(2), 387–406 (2020). MR4125096. https://doi.org/10.1007/s11118-019-09773-z
[29] 
Coutin, L., Decreusefond, L.: Donsker’s theorem in Wasserstein-1 distance. Electron. Commun. Probab. 25, 27 (2020), 13 pp. MR4089734. https://doi.org/10.1214/20-ecp308
[30] 
Decreusefond, L.: The Stein-Dirichlet-Malliavin method. ESAIM Proc. Surveys, vol. 51. EDP Sci., Les Ulis (2015). MR3440790. https://doi.org/10.1051/proc/201551003
[31] 
Decreusefond, L., Halconruy, H.: Malliavin and Dirichlet structures for independent random variables. Stoch. Process. Appl. 129(8), 2611–2653 (2019). MR3980139. https://doi.org/10.1016/j.spa.2018.07.019
[32] 
Decreusefond, L., Schulte, M., Thäle, Ch.: Functional Poisson approximation in Kantorovich-Rubinstein distance with applications to U-statistics and stochastic geometry. Ann. Probab. 44(3), 2147–2197 (2016). MR3502603. https://doi.org/10.1214/15-AOP1020
[33] 
Döbler, C.: Normal approximation via non-linear exchangeable pairs (2020). arXiv:2008.02272
[34] 
Döbler, C., Kasprzak, M.: Stein’s method of exchangeable pairs in multivariate functional approximations. Elec. J. Probab. (2021, to appear). MR4235479. https://doi.org/10.18287/2541-7525-2020-26-2-23-49
[35] 
Döbler, C., Kasprzak, M., Peccati, G.: Functional convergence of sequential U-processes with size-dependent kernels, arXiv:2008.02272 (2019).
[36] 
Döbler, C., Peccati, G.: The Gamma Stein equation and non-central de Jong theorems. Bernoulli 24(4B), 3384–3421 (2018). MR3788176. https://doi.org/10.3150/17-BEJ963
[37] 
Döbler, C., Peccati, G.: The fourth moment theorem on the Poisson space. Ann. Probab. 46(4), 1878–1916 (2018). MR3813981. https://doi.org/10.1214/17-AOP1215
[38] 
Döbler, C., Vidotto, A., Zheng, G.: Fourth moment theorems on The Poisson space in any dimension. Electron. J. Probab. 23, 36 (2018), 27 pp. MR3798246. https://doi.org/10.1214/18-EJP168
[39] 
Eichelsbacher, P., Thäle, C.: Malliavin-Stein method for Variance-Gamma approximation on Wiener space. Electron. J. Probab. 20(123), 1–28 (2014). MR3425543. https://doi.org/10.1214/EJP.v20-4136
[40] 
Fathi, M.: Stein kernels and moment maps. Ann. Probab. 47(4), 2172–2185 (2019). MR3980918. https://doi.org/10.1214/18-AOP1305
[41] 
Gaunt, R.E.: Variance-Gamma approximation via Stein’s method. Electron. J. Probab. 19(38), 1–33 (2014)
[42] 
Gaunt, R.E.: Wasserstein and Kolmogorov error bounds for variance-gamma approximation via Stein’s method I. J. Theor. Probab. 33, 465–505 (2020). https://doi.org/10.1007/s10959-018-0867-4
[43] 
Gaunt, R.E.: Stein factors for variance-gamma approximation in the Wasserstein and Kolmogorov distances (2020). arXiv:2008.06088. MR4064309. https://doi.org/10.1007/s10959-018-0867-4
[44] 
Gaunt, R.E., Mijoule, G., Swan, Y.: An algebra of Stein operators. J. Math. Anal. Appl. 469(1), 260–279 (2019). MR3857522. https://doi.org/10.1016/j.jmaa.2018.09.015
[45] 
Kasprzak, M.: Stein’s method for multivariate Brownian approximations of sums under dependence. Stoch. Process. Appl. 130(8), 4927–4967 (2020). MR4108478. https://doi.org/10.1016/j.spa.2020.02.006
[46] 
Kesten, H., Lee, S.: The central limit theorem for weighted minimal spanning trees on random points. Ann. Appl. Probab. 6(2), 495–527 (1996). MR1398055. https://doi.org/10.1214/aoap/1034968141
[47] 
Krein, Ch.: Weak convergence on Wiener space: targeting the first two chaoses. ALEA Lat. Am. J. Probab. Math. Stat. 16(1), 85–139 (2019). MR3903026. https://doi.org/10.30757/ALEA.v16-05
[48] 
Lachièze-Rey, R., Peccati, G., Yang, X.: Quantitative two-scale stabilization on the Poisson space (2020). arXiv:2010.13362. MR4108865. https://doi.org/10.1090/proc/14964
[49] 
Lachièze-Rey, R., Schulte, M., Yukich, J.: Normal approximation for stabilizing functionals. Ann. Appl. Probab. 29(2), 931–993 (2019). MR3910021. https://doi.org/10.1214/18-AAP1405
[50] 
Last, G.: Stochastic analysis for Poisson processes. In: Stochastic Analysis for Poisson Point Processes. Malliavin Calculus, Wiener-Itô Chaos Expansions and Stochastic Geometry, pp. 1–36. Springer (2016). MR3585396. https://doi.org/10.1007/978-3-319-05233-5_1
[51] 
Last, G., Penrose, M.: Lectures on the Poisson Process. Cambridge University Press (2017). MR3791470
[52] 
Ledoux, M.: Chaos of a Markov operator and the fourth moment condition. Ann. Probab. 40(6), 2439–2459 (2012). MR3050508. https://doi.org/10.1214/11-AOP685
[53] 
Ledoux, M., Nourdin, I., Peccati, G.: Stein’s method, logarithmic Sobolev and transport inequalities. Geom. Funct. Anal. 25, 256–306 (2015). MR3320893. https://doi.org/10.1007/s00039-015-0312-0
[54] 
Ledoux, M., Nourdin, I., Peccati, G.: A Stein deficit for the logarithmic Sobolev inequality. Sci. China Math. 60(7), 1163–1180 (2016). MR3665794. https://doi.org/10.1007/s11425-016-0134-7
[55] 
Last, G., Peccati, G., Schulte, M.: Normal approximation on Poisson spaces: Mehler’s formula, second order Poincaré inequalities and stabilization. Probab. Theory Relat. Fields 165(3–4), 667–723 (2016). MR3520016. https://doi.org/10.1007/s00440-015-0643-7
[56] 
Malliavin, P.: Stochastic calculus of variations and hypoelliptic operators. In: Proceedings of the International Symposium on Stochastic Differential Equations, Kyoto, pp. 195–263. Wiley, New York (1976). 1978. MR0536013
[57] 
Malliavin, P.: Stochastic Analysis. Springer (1997). MR1450093. https://doi.org/10.1007/978-3-642-15074-6
[58] 
Marinucci, D., Rossi, M., Vidotto, A.: Non-universal fluctuations of the empirical measure for isotropic stationary fields on ${S^{2}}\times R$. Ann. Appl. Probab. (2021, to appear).
[59] 
Notarnicola, M.: Fluctuations of nodal sets on the 3-torus and general cancellation phenomena (2020). arXiv:2004.04990
[60] 
Nourdin, I.: In: Lectures on Gaussian approximations with Malliavin calculus. Sém. Probab., XLV, pp. 3–89 (2012). MR3185909. https://doi.org/10.1017/CBO9781139084659
[61] 
Nourdin, I., Nualart, D.: The functional Breuer-Major theorem. Probab. Theory Relat. Fields 176, 203–218 (2020). MR4055189. https://doi.org/10.1007/s00440-019-00917-1
[62] 
Nourdin, I., Nualart, D., Peccati, G.: The Breuer-Major theorem in total variation: improved rates under minimal regularity. Stoch. Process. Appl. 131, 1–20 (2021). MR4151212. https://doi.org/10.1016/j.spa.2020.08.007
[63] 
Nourdin, I., Peccati, G.: Noncentral convergence of multiple integrals. Ann. Probab. 37(4), 1412–1426 (2009). MR2546749. https://doi.org/10.1214/08-AOP435
[64] 
Nourdin, I., Peccati, G.: Stein’s method on Wiener chaos. Probab. Theory Relat. Fields 145(1–2), 75–118 (2009). MR2520122. https://doi.org/10.1007/s00440-008-0162-x
[65] 
Nourdin, I., Peccati, G.: Cumulants on the Wiener space. J. Funct. Anal. 258(11), 3775–3791 (2010). MR2606872. https://doi.org/10.1016/j.jfa.2009.10.024
[66] 
Nourdin, I., Peccati, G.: Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality. Cambridge Tracts in Mathematics. Cambridge University Press (2012). MR2962301. https://doi.org/10.1017/CBO9781139084659
[67] 
Nourdin, I., Peccati, G.: The optimal fourth moment theorem. Proc. Am. Math. Soc. 143(7), 3123–3133 (2015). MR3336636. https://doi.org/10.1090/S0002-9939-2015-12417-3
[68] 
Nourdin, I., Peccati, G., Rossi, M.: Nodal statistics of planar random waves. Commun. Math. Phys. 369(1), 99–151 (2019). MR3959555. https://doi.org/10.1007/s00220-019-03432-5
[69] 
Nourdin, I., Poly, G.: Convergence in law in the second Wiener/Wigner chaos. Electron. Commun. Probab. 17(36), 1–12 (2012). MR2970700. https://doi.org/10.1214/ecp.v17-2023
[70] 
Nourdin, I., Peccati, G., Reinert, G.: Second order Poincaré inequalities and CLTs on Wiener space. J. Funct. Anal. 257(4), 1005–1041 (2009). MR2527030. https://doi.org/10.1016/j.jfa.2008.12.017
[71] 
Nourdin, I., Peccati, G., Reinert, G.: Invariance principles for homogeneous sums: universality of Gaussian Wiener chaos. Ann. Probab. 38(5), 1947–1985 (2010). MR2722791. https://doi.org/10.1214/10-AOP531
[72] 
Nourdin, I., Peccati, G., Swan, Y.: Entropy and the fourth moment phenomenon. J. Funct. Anal. 266(5), 3170–3207 (2014). MR3158721. https://doi.org/10.1016/j.jfa.2013.09.017
[73] 
Nourdin, I., Peccati, G., Swan, Y.: Integration by parts and representation of information functionals. In: Proceedings of the 2014 IEEE International Symposium on Information Theory (ISIT), Honolulu, HI, pp. 2217–2221 (2015)
[74] 
Nourdin, I., Peccati, G., Yang, X.: Berry-Esseen bounds in the Breuer-Major CLT and Gebelein’s inequality. Electron. Commun. Probab. 24, 34 (2019). MR3978683. https://doi.org/10.1214/19-ECP241
[75] 
Nourdin, I., Peccati, G., Yang, X.:. Multivariate normal approximation on the Wiener space: new bounds in the convex distance (2020). arxiv:2001.02188. MR2606872. https://doi.org/10.1016/j.jfa.2009.10.024
[76] 
Nourdin, I., Rosinski, J.: Asymptotic independence of multiple Wiener-Ito integrals and the resulting limit laws. Ann. Probab. 42(2), 497–526 (2014). MR3178465. https://doi.org/10.1214/12-AOP826
[77] 
Nualart, D.: The Malliavin Calculus and Related Topics. Probability and its Applications, 2 edn. Springer, Berlin and Heidelberg and New York (2006). MR2200233
[78] 
Nualart, D.: Malliavin Calculus and Its Applications. CBMS Regional Conference Series in Mathematics, vol. 110 (2009). MR2498953. https://doi.org/10.1090/cbms/110
[79] 
Nualart, D., Ortiz-Latorre, S.: Central limit theorems for multiple stochastic integrals and Malliavin calculus. Stoch. Process. Appl. 118(4), 614–628 (2008). MR2394845. https://doi.org/10.1016/j.spa.2007.05.004
[80] 
Nualart, D., Peccati, G.: Central limit theorems for sequences of multiple stochastic integrals. Ann. Probab. 33(1), 177–193 (2005). MR2118863. https://doi.org/10.1214/009117904000000621
[81] 
Peccati, G., Reitzner, M. (eds.): Stochastic Analysis for Poisson Point Processes. Malliavin Calculus, Wiener-Itô Chaos Expansions and Stochastic Geometry Springer (2016). MR3444831. https://doi.org/10.1007/978-3-319-05233-5
[82] 
Peccati, G., Rossi, M.: Quantitative limit theorems for local functionals of arithmetic random waves. In: Combinatorics in Dynamics, Stochastics and Control, The Abel Symposium, Rosendal, Norway, August 2016, vol. 13, pp. 659–689 (2018). MR3967400. https://doi.org/10.1007/978-3-030-01593-0_23
[83] 
Peccati, G., Solé, J.L., Utzet, F., Taqqu, M.S.: Stein’s method and normal approximation of Poisson functionals. Ann. Probab. 38, 443–478 (2010). MR2642882. https://doi.org/10.1214/10-AOP531
[84] 
Peccati, G., Taqqu, M.S.: Wiener Chaos: Moments, Cumulants and Diagrams. Springer (2010). MR2791919. https://doi.org/10.1007/978-88-470-1679-8
[85] 
Peccati, G., Vidotto, A.: Gaussian random measures generated by Berry’s nodal sets. J. Stat. Phys. 178(4), 443–478 (2020). MR4064212. https://doi.org/10.1007/s10955-019-02477-z
[86] 
Penrose, M.D.: Multivariate spatial central limit theorems with applications to percolation and spatial graphs. Ann. Probab. 33(5), 1945–1991 (2005). MR2165584. https://doi.org/10.1214/009117905000000206
[87] 
Penrose, M.D., Yukich, J.E.: Central limit theorems for some graphs in computational geometry. Ann. Appl. Probab. 11(4), 1005–1041 (2001). MR1878288. https://doi.org/10.1214/aoap/1015345393
[88] 
Rossi, M.: Random nodal lengths and Wiener chaos. In: Probabilistic Methods in Geometry, Topology and Spectral Theory. Contemporary Mathematics Series, vol. 739, pp. 155–169 (2019). MR4033918. https://doi.org/10.1090/conm/739/14898
[89] 
Saumard, A.: Weighted Poincaré inequalities, concentration inequalities and tail bounds related to the behavior of the Stein kernel in dimension one. Bernoulli 25(4B), 3978–4006 (2019). MR4010979. https://doi.org/10.3150/19-BEJ1117
[90] 
Schulte, M., Yukich, J.E.: Multivariate second order Poincaré inequalities for Poisson functionals. Electron. J. Probab. 24, 130 (2019), 42 pp. MR4040990. https://doi.org/10.1214/19-ejp386
[91] 
Shih, H.-H.: On Stein’s method for infinite-dimensional Gaussian approximation in abstract Wiener spaces. J. Funct. Anal. 261(5), 1236–1283 (2011). MR2807099. https://doi.org/10.1016/j.jfa.2011.04.016
[92] 
Stein, C.: A bound for the error in the normal approximation to the distribution of a sum of dependent random variables. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, II, pp. 583–602 (1972). MR0402873
[93] 
Stein, C.: Approximate computation of expectations. In: IMS. Lecture Notes-Monograph Series, vol. 7 (1986). MR0882007
[94] 
Todino, A.P.: A quantitative central limit theorem for the excursion area of random spherical harmonics over subdomains of ${S^{2}}$. J. Math. Phys. 60(2), 023505 (2019). MR3916834. https://doi.org/10.1063/1.5048976
[95] 
Trinh, K.D.: On central limit theorems in stochastic geometry for add-one cost stabilizing functionals. Electron. Commun. Probab. 24, 76 (2019), 15 pp. MR4049088. https://doi.org/10.1214/19-ecp279
[96] 
Vidotto, A.: An improved second order Poincaré inequality for functionals of Gaussian fields (2017). arXiv:1706.06985. MR4064306. https://doi.org/10.1007/s10959-019-00883-3
[97] 
Villani, C.: Optimal Transport. Old and New. Springer (2009). MR2459454. https://doi.org/10.1007/978-3-540-71050-9
[98] 
Yogeshwaran, D., Subag, E., Adler, R.J.: Random geometric complexes in the thermodynamic regime. Probab. Theory Relat. Fields 167(1–2), 107–142 (2017). MR3602843. https://doi.org/10.1007/s00440-015-0678-9