Modern Stochastics: Theory and Applications


Optimal estimation of the local time and the occupation time measure for an α-stable Lévy process
Volume 11, Issue 2 (2024), pp. 149–168
Chiara Amorino, Arturo Jaramillo, Mark Podolskij

https://doi.org/10.15559/24-VMSTA243
Pub. online: 6 February 2024. Type: Research Article. Open Access.

Received: 6 May 2023
Revised: 31 October 2023
Accepted: 2 January 2024
Published: 6 February 2024

Abstract

A novel theoretical result on estimation of the local time and the occupation time measure of an α-stable Lévy process with $\alpha \in (1,2)$ is presented. The approach is based upon computing the conditional expectation of the desired quantities given high frequency data, which is an ${L^{2}}$-optimal statistic by construction. The corresponding stable central limit theorems are proved and a statistical application is discussed. In particular, this work extends the results of [20], which investigated the case of the Brownian motion.

1 Introduction

1.1 The setting and overview

In this paper we consider a pure jump α-stable Lévy process $X={\{{X_{t}}\}_{t\ge 0}}$ with $\alpha \in (1,2)$, defined on a filtered probability space $(\Omega ,\mathcal{F},{\{{\mathcal{F}_{t}}\}_{t\ge 0}},\mathbb{P})$. The distribution law of the process X is uniquely determined by the characteristic function
(1.1)
\[\begin{aligned}{}\mathbb{E}\big[\exp (\mathbf{i}\xi {X_{t}})\big]& =\exp \bigg(-\mathbf{i}\xi t\eta -\sigma t|\xi {|^{\alpha }}\bigg(1-\mathbf{i}\beta \tan \bigg(\frac{\pi \alpha }{2}\bigg)\text{sgn}(\xi )\bigg)\bigg)\end{aligned}\]
for $\xi \in \mathbb{R}$, $t\ge 0$, $\eta \in \mathbb{R}$, $\beta \in [-1,1]$ and $\sigma \gt 0$. We will focus on estimation of the occupation time measure and the local time of X given high frequency data ${\{{X_{i/n}}\}_{1\le i\le \lfloor nT\rfloor }}$, with $T\gt 0$ being fixed and $n\to \infty $. We recall that for $a\lt b$ the occupation time of X at the set $(x,\infty )$ over the interval $[a,b]$, denoted by ${O_{[a,b]}}(x)$, is defined as
\[ {O_{[a,b]}}(x):={\int _{a}^{b}}{1_{(x,\infty )}}({X_{s}})ds.\]
The local time of X at point $x\in \mathbb{R}$ over the interval $[a,b]$ denoted as ${L_{[a,b]}}(x)$ is defined implicitly via the occupation density formula:
(1.2)
\[ {O_{[a,b]}}(x)={\int _{x}^{\infty }}{L_{[a,b]}}(y)dy\hspace{1em}\mathbb{P}\text{-a.s.}\]
(if it exists). Throughout the paper we use the abbreviation ${L_{t}}:={L_{[0,t]}}$ and ${O_{t}}:={O_{[0,t]}}$. The existence and smoothness properties of local times of stochastic processes have been extensively studied in the 70s and 80s; we refer to articles [3–6, 10, 11] among many others. In particular, in the setting of pure jump α-stable Lévy processes, the local time exists only for $\alpha \in (1,2)$ (cf. [21]). Furthermore, there exists a version of the local time, which is continuous in space and time. Indeed, according to [4, Theorems 2 and 3] and [2, Theorem 4.3], there exists a version of the local time with property
\[ {L_{t}}(\cdot )\hspace{2.5pt}\text{is}\hspace{2.5pt}\mathbb{P}\text{-a.s. locally H\"{o}lder continuous with index}\hspace{2.5pt}(\alpha -1)/2-\varepsilon ,\]
for any $\varepsilon \in (0,(\alpha -1)/2)$. Moreover, for all $\varepsilon \in (0,1-1/\alpha )$ and $T\gt 0$, there exists a deterministic constant ${C_{T}}\gt 0$, such that
(1.3)
\[ \underset{0\le s\le t\le T}{\sup }|{L_{t}}(x)-{L_{s}}(x)|\le {C_{T}}|t-s{|^{(1-1/\alpha )-\varepsilon }}\hspace{1em}\mathbb{P}\text{-a.s.}\]
In particular, ${L_{(\cdot )}}(x)$ is $\mathbb{P}$-a.s. locally Hölder continuous with index $(1-1/\alpha )-\varepsilon $ for any $\varepsilon \in (0,1-1/\alpha )$. Throughout the paper we consider the aforementioned Hölder continuous version of the local time.
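To make the occupation time concrete, the following is a minimal numerical sketch (not part of the original analysis) of the naive Riemann-sum approximation of ${O_{[a,b]}}(x)$ from high frequency samples ${X_{i/n}}$. The symmetric case $\beta =0$, $\eta =0$, $\sigma =1$ and the Chambers–Mallows–Stuck sampler are assumptions made purely for illustration:

```python
import math
import random

def stable_increment(alpha, dt, rng):
    # Chambers-Mallows-Stuck sampler for a symmetric alpha-stable increment
    # over a time step dt (beta = 0, zero drift, unit scale assumed).
    U = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    X = (math.sin(alpha * U) / math.cos(U) ** (1 / alpha)
         * (math.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))
    return dt ** (1 / alpha) * X  # self-similarity: X_dt =d dt^{1/alpha} X_1

def sample_path(alpha, n, T, rng):
    # High frequency observations X_{i/n}, i = 0, ..., floor(nT), with X_0 = 0.
    path = [0.0]
    for _ in range(int(n * T)):
        path.append(path[-1] + stable_increment(alpha, 1.0 / n, rng))
    return path

def occupation_riemann(path, n, a, b, x):
    # Riemann-sum approximation of O_{[a,b]}(x) = int_a^b 1_{(x,inf)}(X_s) ds.
    return sum(1.0 / n for i in range(int(a * n), int(b * n)) if path[i] > x)
```

This plug-in approximation is the obvious benchmark; the conditional-expectation estimators studied in this paper are, by construction, the ${L^{2}}$-optimal refinement of such discretisations.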
The goal of this article is to introduce the ${L^{2}}$-optimal estimators of ${L_{t}}(x)$ and ${O_{t}}(x)$, and study their asymptotic properties. For this purpose we define the σ-algebra ${\mathcal{A}_{n}}:=\sigma (\{{X_{i/n}};\hspace{2.5pt}i\in \mathbb{N}\})$ and construct the estimators via
\[ {S_{L,t}^{n}}(x)=\mathbb{E}\big[{L_{t}}(x)\mid {\mathcal{A}_{n}}\big]\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}\text{and}\hspace{2.5pt}\hspace{2.5pt}\hspace{2.5pt}{S_{O,t}^{n}}(x)=\mathbb{E}\big[{O_{t}}(x)\mid {\mathcal{A}_{n}}\big].\]
By construction, ${S_{L,t}^{n}}$ (resp. ${S_{O,t}^{n}}$) is an ${L^{2}}$-optimal approximation of ${L_{t}}(x)$ (resp. ${O_{t}}(x)$). We will show a functional stable convergence for both statistics, which exhibit a mixed normal limit. Our main tool is Jacod’s stable central limit theorem for partial sums of high frequency data stated in [13].

1.2 Related literature

Functional limit theorems for estimators of local times have been studied for numerous stochastic models. In the setting of α-stable Lévy processes and related models, such estimators often take the form $G{(x,\phi )^{n}}=\{G{(x,\phi )_{t}^{n}};\hspace{2.5pt}t\ge 0\}$, with
(1.4)
\[ G{(x,\phi )_{t}^{n}}={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}\phi \big({n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-x)\big),\]
where $\phi \in {L^{1}}(\mathbb{R})$. Consistency and asymptotic mixed normality for statistics $G{(x,\phi )^{n}}$ and related functionals have been investigated in [7, 14] in the setting of Brownian motion and continuous stochastic differential equations. Some optimality results for estimation of the local time and the occupation time measure of a Brownian motion can be found in [20]. Estimation errors for occupation time functionals of stationary Markov processes have been studied in [1]. Limit theorems for statistics of the form (1.4) in the case of the fractional Brownian motion are discussed in [15, 17, 18, 23], although the complete weak limit theory is far from being understood.
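As a sketch (not from the paper), the statistic (1.4) is straightforward to evaluate once discrete observations are available; the law-free choice $\phi ={1_{[-1,1]}}$, mentioned later in the paper, needs no knowledge of the stable parameters:

```python
def G_statistic(path, n, t, x, alpha, phi):
    # G(x, phi)^n_t = n^{1/alpha - 1} * sum_{i=1}^{floor(nt)}
    #                 phi(n^{1/alpha} * (X_{(i-1)/n} - x))
    s = sum(phi(n ** (1 / alpha) * (path[i - 1] - x))
            for i in range(1, int(n * t) + 1))
    return n ** (1 / alpha - 1) * s

# Law-free kernel: the indicator of [-1, 1].
indicator = lambda y: 1.0 if -1.0 <= y <= 1.0 else 0.0
```

By Theorem 1.1 below, $G{(x,{1_{[-1,1]}})_{t}^{n}}$ then converges in ${L^{2}}(\Omega )$ to $2{L_{t}}(x)$.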
While estimation of the local time and the occupation time measure is of interest in its own right, accurate estimation of these objects can be useful for related statistical problems. For example, nonparametric estimators of the diffusion coefficient in a continuous stochastic differential equation often involve local times in the mixed normal limit, see, e.g., [9]. In a similar spirit, local times and mixed normal local times appear as fundamental limits for additive functionals of a variety of Gaussian processes (see Papanicolaou, Stroock and Varadhan [22], Jaramillo, Nourdin, Nualart and Peccati [16], as well as Hong, Liu and Xu [12]), thus serving as simplifying probabilistic models for otherwise complex objects. Efficient estimation of local times and occupation times is very beneficial for statistical inference in this framework.
Our main result on the asymptotic theory for local times is most closely related to the articles [17–20, 26]. The paper [17] addresses consistency in the case of linear fractional stable motions, a class which in particular includes stable Lévy processes.
Theorem 1.1 (Theorem 4 in [17]).
Suppose that $\phi :\mathbb{R}\to \mathbb{R}$ is a function satisfying $\phi ,{\phi ^{2}}\in {L^{1}}(\mathbb{R};\mathbb{R})$. Then, for every $t\gt 0$,
\[\begin{aligned}{}G{(x,\phi )_{t}^{n}}& \stackrel{{L^{2}}(\Omega )}{\to }{\int _{\mathbb{R}}}\phi (y)dy\cdot {L_{t}}(x)\hspace{2em}\textit{as}\hspace{2.5pt}n\to \infty .\end{aligned}\]
In [18, 19, 26] the authors prove the asymptotic mixed normality for a continuous version of the functional $G{(x,\phi )^{n}}$, but only in the zero energy setting, i.e. ${\textstyle\int _{\mathbb{R}}}\phi (y)dy=0$. From the statistical point of view this case is less interesting, as we would like to use statistics of the type (1.4) as estimators of the local time ${L_{t}}(x)$. More importantly, in the setting ${\textstyle\int _{\mathbb{R}}}\phi (y)dy\ne 0$ the methods developed in [18, 19] do not apply.
We will use stable convergence theorems for high frequency statistics introduced in [13] to show asymptotic mixed normality of standardised versions of ${S_{L,t}^{n}}(x)$ and ${S_{O,t}^{n}}(x)$. These results are strongly related to an earlier work [20], which considers the same problem in the setting of the Brownian motion. While their limit theorems are also based on the general results of [13], the technical aspects of the proof are more involved in the case of pure jump Lévy processes. The details will be highlighted in the proof section.

1.3 Outline of the paper

The rest of the paper is organised as follows. Section 2 presents some preliminaries, main results and an application. Proofs of the main results are collected in Section 3. Some technical statements are proved in Section 4.

2 Preliminaries and main results

In this section we present the spectral representation of local times, which will be useful in the sequel. Furthermore, we introduce the notion of stable convergence and establish the asymptotic theory for the estimators ${S_{L}^{n}}$ and ${S_{O}^{n}}$.

2.1 Local times and stable convergence

The analysis of occupation times and local times is an integral part of the theory of stochastic processes, which has found manifold applications in probability during the past decades. We recall that, for $t\gt 0$ and $x\in \mathbb{R}$, the local time of the α-stable Lévy process X, $\alpha \in (1,2)$, up to time t at x can be formally defined as ${L_{t}}(x):={\textstyle\int _{0}^{t}}{\delta _{0}}({X_{s}}-x)ds$, where ${\delta _{0}}$ denotes the Dirac delta function. A rigorous definition of the local time is obtained by replacing ${\delta _{0}}$ by the Gaussian kernel ${\phi _{\epsilon }}(x):={(2\pi \epsilon )^{-\frac{1}{2}}}\exp (-\frac{1}{2\epsilon }{x^{2}})$ and taking the limit in probability as $\epsilon \to 0$. For our purposes we will systematically use the following spectral representation of ${L_{t}}(x)$. The proof of the representation gathered in the lemma below can be found in, e.g., [17, Proposition 11]; for completeness, we give the proof of the boundedness of the moments in Section 4.
Lemma 2.1.
For every $t\gt 0$ and $x\in \mathbb{R}$, the sequence
\[ {\Bigg\{{\int _{[-m,m]}}{\int _{0}^{t}}{e^{-\mathbf{i}\xi ({X_{s}}-x)}}dsd\xi \Bigg\}_{m\in \mathbb{N}}}\]
converges in ${L^{2}}(\Omega )$. The limit as $m\to \infty $, which we will denote by
\[ {\int _{\mathbb{R}}}{\int _{0}^{t}}{e^{-\mathbf{i}\xi ({X_{s}}-x)}}dsd\xi ,\]
satisfies
\[\begin{aligned}{}{L_{t}}(x)& =\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{0}^{t}}{e^{-\mathbf{i}\xi ({X_{s}}-x)}}dsd\xi .\end{aligned}\]
Moreover, for any $p\in \mathbb{N}$, $\mathbb{E}[{({L_{t}}(x))^{p}}]\lt \infty $.
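The spectral representation in Lemma 2.1 can be discretised directly. The following sketch (an illustration, not part of the paper) truncates the frequency integral at a level m and replaces both integrals by Riemann sums over grids that are tuning choices:

```python
import cmath
import math

def local_time_fourier(path, n, t, x, m=50.0, K=2000):
    # Discretisation of L_t(x) ~ (1/2 pi) * int_{-m}^{m} int_0^t
    # exp(-i xi (X_s - x)) ds dxi, with the time integral replaced by a
    # Riemann sum over the observation grid i/n and a midpoint rule in xi.
    dxi = 2.0 * m / K
    total = 0.0 + 0.0j
    for k in range(K):
        xi = -m + (k + 0.5) * dxi
        time_int = sum(cmath.exp(-1j * xi * (path[i] - x))
                       for i in range(int(n * t))) / n
        total += time_int * dxi
    return total.real / (2.0 * math.pi)
```

The imaginary part vanishes in the limit, so only the real part is returned; larger m sharpens the approximation of the Dirac delta at the cost of more oscillatory summands.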
Recall furthermore that, by the definition of the occupation time ${O_{t}}(x)$, for any $p\in \mathbb{N}$, $\mathbb{E}[{({O_{t}}(x))^{p}}]\le {t^{p}}$.
In what follows we will use the notion of stable convergence, which is originally due to Rényi [24]. Let $(\mathcal{S},\delta )$ be a Polish space. Let ${\{{Y_{n}}\}_{n\ge 1}}$ be a sequence of $\mathcal{S}$-valued and $\mathcal{F}$-measurable random variables defined on $(\Omega ,\mathcal{F},\mathbb{P})$ and Y a random variable defined on an enlarged probability space $({\Omega ^{\prime }},{\mathcal{F}^{\prime }},{\mathbb{P}^{\prime }})$. We say that ${Y_{n}}$ converges stably to Y if and only if for any continuous bounded function $g:\mathcal{S}\to \mathbb{R}$ and any bounded $\mathcal{F}$-measurable random variable F, it holds that
\[ \underset{n\to \infty }{\lim }\mathbb{E}\big[Fg({Y_{n}})\big]={\mathbb{E}^{\prime }}\big[Fg(Y)\big].\]
In this case we write ${Y_{n}}\stackrel{\mathrm{Stably}}{\to }Y$. In this paper we deal with the space of càdlàg functions $D([0,T])$ equipped with the Skorokhod ${J_{1}}$-topology.

2.2 Main results

The following proposition provides an explicit expression of the statistics ${S_{L,t}^{n}}$ and ${S_{O,t}^{n}}$. The proof of Proposition 2.2 is based upon the Markov property of X and the linearity in time of our objects.
Proposition 2.2.
For $i\ge 1$, define the increments ${\Delta _{i}^{n}}X:={X_{\frac{i}{n}}}-{X_{\frac{i-1}{n}}}$. Consider the function $f:{\mathbb{R}^{2}}\to \mathbb{R}$ given by $f(x,y):=\mathbb{E}[{L_{[0,1]}}(x)\mid {X_{1}}=y]$ and $F(x,y):={\textstyle\int _{x}^{\infty }}f(r,y)dr$. Then we obtain the identities
(2.5)
\[ \begin{aligned}{}{S_{L,t}^{n}}(x)& ={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)+{\mathcal{E}_{L,t}^{n}}(x),\\ {} {S_{O,t}^{n}}(x)& =\frac{1}{n}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}F\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)+{\mathcal{E}_{O,t}^{n}}(x)\end{aligned}\]
where ${\mathcal{E}_{L,t}^{n}}(x)$ and ${\mathcal{E}_{O,t}^{n}}(x)$ are defined as
\[\begin{aligned}{}{\mathcal{E}_{L,t}^{n}}(x)& :=\mathbb{E}\big[{L_{[\lfloor nt\rfloor /n,t]}}(x)\mid {\mathcal{A}_{n}}\big],\\ {} {\mathcal{E}_{O,t}^{n}}(x)& :=\mathbb{E}\big[{O_{[\lfloor nt\rfloor /n,t]}}(x)\mid {\mathcal{A}_{n}}\big].\end{aligned}\]
Moreover, the processes ${\{{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}{\mathcal{E}_{L,t}^{n}}(x)\}_{t\ge 0}}$ and ${\{{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}{\mathcal{E}_{O,t}^{n}}(x)\}_{t\ge 0}}$ converge to zero in probability uniformly on compact intervals as $n\to \infty $.
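The principal sums in (2.5) are easy to evaluate once f and F are available. For a stable Lévy process these functions have no simple closed form; the sketch below (not from the paper) therefore assumes numerical approximations of $f(u,v)=\mathbb{E}[{L_{[0,1]}}(u)\mid {X_{1}}=v]$ and $F(u,v)={\textstyle\int _{u}^{\infty }}f(r,v)dr$ are supplied as callables:

```python
def S_L_principal(path, n, t, x, alpha, f):
    # Principal term of S^n_{L,t}(x) in (2.5); f(u, v) approximates
    # E[L_{[0,1]}(u) | X_1 = v] and must be supplied by the user (it
    # depends on the parameters of the stable law).
    na = n ** (1 / alpha)
    return n ** (1 / alpha - 1) * sum(
        f(na * (x - path[i - 1]), na * (path[i] - path[i - 1]))
        for i in range(1, int(n * t) + 1))

def S_O_principal(path, n, t, x, alpha, F):
    # Principal term of S^n_{O,t}(x) in (2.5); F(u, v) = int_u^inf f(r, v) dr,
    # again supplied as a numerical approximation.
    na = n ** (1 / alpha)
    return (1.0 / n) * sum(
        F(na * (x - path[i - 1]), na * (path[i] - path[i - 1]))
        for i in range(1, int(n * t) + 1))
```

The edge terms ${\mathcal{E}_{L,t}^{n}}(x)$ and ${\mathcal{E}_{O,t}^{n}}(x)$ are omitted since, by Proposition 2.2, they are asymptotically negligible at the relevant rates.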
The proof of Proposition 2.2 will be given in Section 4. The next theorem is a functional limit result for the error of the approximation of the occupation and local times by their ${L^{2}}$-optimal estimators.
Theorem 2.3.
Fix $x\in \mathbb{R}$ and define the processes
\[\begin{aligned}{}{W_{L,t}^{n}}& :={n^{\frac{1}{2}(1-\frac{1}{\alpha })}}\big({S_{L,t}^{n}}(x)-{L_{t}}(x)\big),\\ {} {W_{O,t}^{n}}& :={n^{\frac{1}{2}(1+\frac{1}{\alpha })}}\big({S_{O,t}^{n}}(x)-{O_{t}}(x)\big).\end{aligned}\]
The functional stable convergence holds with respect to ${J_{1}}$ topology:
\[ {W_{L}^{n}}\stackrel{\mathrm{Stably}}{\to }{k_{L}}{B_{L(x)}},\hspace{2em}{W_{O}^{n}}\stackrel{\mathrm{Stably}}{\to }{k_{O}}{B_{L(x)}},\hspace{2em}\textit{as}\hspace{2.5pt}n\to \infty ,\]
where B is a standard Brownian motion defined on an extended space and independent of $\mathcal{F}$. The constants ${k_{L}}$ and ${k_{O}}$ are defined as
\[\begin{aligned}{}{k_{L}^{2}}& :={\int _{\mathbb{R}}}\mathbb{E}\big[{\big(\mathbb{E}\big[{L_{1}}(y)\mid {X_{1}}\big]-{L_{1}}(y)\big)^{2}}\big]dy,\\ {} {k_{O}^{2}}& :={\int _{\mathbb{R}}}\mathbb{E}\big[{\big(\mathbb{E}\big[{O_{1}}(z)\mid {X_{1}}\big]-{O_{1}}(z)\big)^{2}}\big]dz.\end{aligned}\]
Remark 2.4.
By the Dambis–Dubins–Schwarz theorem (cf. [25, Theorem 1.6, Section 5.1]), Theorem 2.3 implies the stable convergence
\[ {W_{L}^{n}}\stackrel{\mathrm{Stably}}{\to }{k_{L}}{\int _{0}^{\cdot }}\sqrt{{L_{s}}(x)}B(ds),\hspace{2em}{W_{O}^{n}}\stackrel{\mathrm{Stably}}{\to }{k_{O}}{\int _{0}^{\cdot }}\sqrt{{L_{s}}(x)}B(ds),\]
as $n\to \infty $.  □
Remark 2.5.
The statement above can be directly used to construct confidence regions for ${L_{t}}(x)$, ${O_{t}}(x)$ if the law of the Lévy process X is known. Indeed, by properties of stable convergence, it holds for any fixed $t\gt 0$ as $n\to \infty $:
\[ \frac{{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}({S_{L,t}^{n}}(x)-{L_{t}}(x))}{{k_{L}}\sqrt{{S_{L,t}^{n}}(x)}}\stackrel{d}{\to }\mathcal{N}(0,1)\]
and
\[ \frac{{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}({S_{O,t}^{n}}(x)-{O_{t}}(x))}{{k_{O}}\sqrt{{S_{L,t}^{n}}(x)}}\stackrel{d}{\to }\mathcal{N}(0,1).\]
Asymptotic confidence sets for ${L_{t}}(x)$ and ${O_{t}}(x)$ readily follow from the above central limit theorem.  □
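Solving the two central limit theorems in Remark 2.5 for ${L_{t}}(x)$ and ${O_{t}}(x)$ gives explicit asymptotic confidence intervals. A hedged sketch (illustrative only; the constants ${k_{L}}$, ${k_{O}}$ must be computed separately from the known law of X, and both intervals are studentised with the local time estimate, as in the displays above):

```python
import math

def confidence_interval_L(S_L, k_L, n, alpha, z=1.96):
    # Interval for L_t(x): S_L +/- z * k_L * sqrt(S_L) / n^{(1 - 1/alpha)/2},
    # where z = 1.96 corresponds to asymptotic level 0.95.
    half = z * k_L * math.sqrt(S_L) / n ** (0.5 * (1 - 1 / alpha))
    return (S_L - half, S_L + half)

def confidence_interval_O(S_O, S_L, k_O, n, alpha, z=1.96):
    # Interval for O_t(x): the limiting variance is k_O^2 * L_t(x) in both
    # cases, so the half-width uses the local time estimate S_L.
    half = z * k_O * math.sqrt(S_L) / n ** (0.5 * (1 + 1 / alpha))
    return (S_O - half, S_O + half)
```

Note the faster rate ${n^{\frac{1}{2}(1+\frac{1}{\alpha })}}$ for the occupation time, reflected in the much narrower interval for the same sample size.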
Remark 2.6.
The setting of α-stable Lévy processes, $\alpha \in (1,2)$, is rather convenient as we often use self-similarity of X in our arguments. However, it might not be a necessary assumption. We conjecture that similar results can be shown for locally α-stable Lévy processes although some bias effects may appear in the limit theory.  □
From the statistical point of view Theorem 2.3 provides lower bounds for estimation of the path functionals ${L_{t}}(x)$ and ${O_{t}}(x)$. In particular, it shows that statistics $G{(x,\phi )^{n}}$ introduced in (1.4) do not produce ${L^{2}}$-optimal estimates. On the other hand, $G{(x,\phi )^{n}}$ can be computed from data even when the exact law of X is unknown (use, e.g., $\phi ={1_{[-1,1]}}$) in contrast to ${L^{2}}$-optimal statistics (recall that the functions f and F depend on the parameters of the stable distribution). We remark however that the limit theory for statistics $G{(x,\phi )^{n}}$ is expected to be much more involved; see the proofs in [14] for more details in the case of the Brownian motion. Hence, we postpone the discussion to future research.

3 Proof of main results

This section is devoted to the proof of the central limit theorem stated in the previous section. Throughout the proofs we denote by $C\gt 0$ a generic constant, which may change from line to line. In the proof of our main results the following lemma will be repeatedly used. Its proof can be found in Section 4.
Lemma 3.1.
Let us introduce ${\varphi _{1}}(y):=\mathbb{E}[{L_{[0,1]}^{p}}(y)]$ and ${\varphi _{2}}(y):=\mathbb{E}[{O_{[0,1]}^{p}}(y)]$. Then, for any $x\in \mathbb{R}$, $p\in \mathbb{N}$ and $i\in \{1,\dots ,n\}$, the following identities hold true.
  • (a) $\mathbb{E}[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{p}}(x)\mid {\mathcal{F}_{\frac{i-1}{n}}}]={n^{p(\frac{1}{\alpha }-1)}}{\varphi _{1}}({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}))$,
  • (b) $\mathbb{E}[{O_{[\frac{i-1}{n},\frac{i}{n}]}^{p}}(x)\mid {\mathcal{F}_{\frac{i-1}{n}}}]={n^{-p}}\hspace{0.1667em}{\varphi _{2}}({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}))$.
Moreover, define the functions ${\varphi _{3}}(y):=\mathbb{E}[\mathbb{E}[{L_{[0,1]}}(y)|{X_{1}}]{L_{[0,1]}}(y)]$ and ${\varphi _{4}}(y):=\mathbb{E}[\mathbb{E}[{O_{[0,1]}}(y)|{X_{1}}]{O_{[0,1]}}(y)]$ and recall that $f(x,y)=\mathbb{E}[{L_{1}}(x)\mid {X_{1}}=y]$ and $F(x,y)={\textstyle\int _{x}^{\infty }}f(r,y)dr$. Then,
  • (c) $\mathbb{E}[f({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X){L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {\mathcal{F}_{\frac{i-1}{n}}}]$ $={n^{\frac{1}{\alpha }-1}}{\varphi _{3}}({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}))$,
  • (d) $\mathbb{E}[F({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X){O_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {\mathcal{F}_{\frac{i-1}{n}}}]$ $={n^{-1}}{\varphi _{4}}({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}))$.
Regarding the well-posedness of the elements mentioned above, it is worth noting that ${\varphi _{1}}$ and ${\varphi _{2}}$ are finite, as shown in Lemma 2.1 and the comment below it. It is also easy to verify that ${\varphi _{3}}$ and ${\varphi _{4}}$ are finite. In particular, demonstrating the finiteness of ${\varphi _{3}}$ requires the use of the Cauchy–Schwarz and Jensen inequalities, in combination with the boundedness of the moments of ${L_{1}}$ as established in Lemma 2.1.

3.1 Proof of Theorem 2.3

Proof.
Our argument is based on a martingale approach. We first deal with the estimation of the local time. Recall that $f(x,y)=\mathbb{E}[{L_{1}}(x)\mid {X_{1}}=y]$. Because of Proposition 2.2 and the linearity of the local time (in time), we can write
(3.6)
\[ {W_{L,t}^{n}}={n^{\frac{1}{2}(1-\frac{1}{\alpha })}}\big({S_{L,t}^{n}}(x)-{L_{[0,t]}}(x)\big)={\sum \limits_{i=1}^{\lfloor nt\rfloor }}{Z_{in}^{L}}-{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}{N_{L,t}^{n}}+{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}{\mathcal{E}_{L,t}^{n}}(x),\]
where
\[ {Z_{in}^{L}}:={n^{\frac{1}{2}(\frac{1}{\alpha }-1)}}f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)-{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}{L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\]
and
\[ {N_{L,t}^{n}}={L_{[\lfloor nt\rfloor /n,t]}}(x).\]
The term ${N_{L}^{n}}$ appears due to the edge effect and does not contribute to the limit, as stated in the following lemma, whose proof can be found in the next section and follows immediately from (1.3).
Lemma 3.2.
The process ${N_{L}^{n}}=\{{N_{L,t}^{n}};\hspace{2.5pt}t\ge 0\}$ satisfies the following convergence in probability uniformly over compact sets:
\[ \underset{n}{\lim }{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}{N_{L}^{n}}=0.\]
Due to Proposition 2.2 and Lemma 3.2 we can write (3.6) as
\[ {W_{L,t}^{n}}={\sum \limits_{i=1}^{\lfloor nt\rfloor }}{Z_{in}^{L}}+{o_{\mathbb{P}}}(1).\]
We introduce here the notation ${\mathbb{E}_{\frac{i-1}{n}}}[\cdot ]$ for $\mathbb{E}[\cdot \mid {\mathcal{F}_{\frac{i-1}{n}}}]$, which will be useful in the sequel.
In the next step we will apply Theorem 3-2 of [13]. In our setting, ${\mathbb{E}_{\frac{i-1}{n}}}[{Z_{in}^{L}}]=0$ because of Equation (4.18) below. Then, it suffices to show the following conditions:
(3.7)
\[\begin{aligned}{}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}\big]& \xrightarrow{\mathbb{P}}{k_{L}^{2}}{L_{t}}(x)\hspace{2em}\forall t\in [0,1],\end{aligned}\]
(3.8)
\[\begin{aligned}{}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[{Z_{in}^{L}}{\Delta _{i}^{n}}M\big]& \xrightarrow{\mathbb{P}}0\hspace{2em}\forall t\in [0,1],\end{aligned}\]
(3.9)
\[\begin{aligned}{}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}{1_{\{|{Z_{in}^{L}}|\gt \epsilon \}}}\big]& \xrightarrow{\mathbb{P}}0\hspace{2em}\forall \epsilon \gt 0,\end{aligned}\]
where the condition (3.8) should hold for all square integrable continuous martingales M. We emphasise that we have summarised the two conditions (3.12) and (3.14) from [13, Theorem 3-2] into our constraint (3.8). Indeed, conditions (3.12) and (3.14) in [13, Theorem 3-2] are formulated with respect to some continuous martingale M, which has to be chosen according to the problem at hand. The convergence in (3.8) holding for all square integrable continuous martingales implies them both. It entails, in particular, that the continuous process G of [13, Theorem 3-2] is in our case identically zero.
We start by showing condition (3.7). Due to the identity $f(x,y)=\mathbb{E}[{L_{[0,1]}}(x)\mid {X_{1}}=y]$ we can check that
(3.10)
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}\big]={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}\psi \big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\]
where ψ is given by
\[ \psi (q):=\mathbb{E}\big[{\big(\mathbb{E}\big[{L_{[0,1]}}(q)\mid {X_{1}}\big]-{L_{[0,1]}}(q)\big)^{2}}\big].\]
Indeed, from the definition of ${Z_{in}^{L}}$ we have
\[\begin{aligned}{}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}\big]& ={\sum \limits_{i=1}^{\lfloor nt\rfloor }}\big[{n^{\frac{1}{\alpha }-1}}{\mathbb{E}_{\frac{i-1}{n}}}\big[{f^{2}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)\big]\\ {} & \hspace{1em}+{n^{1-\frac{1}{\alpha }}}{\mathbb{E}_{\frac{i-1}{n}}}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{2}}(x)\big]\\ {} & \hspace{1em}-2{\mathbb{E}_{\frac{i-1}{n}}}\big[f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big){L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\big]\big]\\ {} & =:{I_{1}}+{I_{2}}+{I_{3}}.\end{aligned}\]
We start studying ${I_{1}}$. It is easy to see that
\[\begin{aligned}{}{I_{1}}& ={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[{f^{2}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)\big]\\ {} & ={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{g_{1}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\end{aligned}\]
with ${g_{1}}(q):=\mathbb{E}[{f^{2}}(q,{X_{1}})]$. From Lemma 3.1(a) the following is straightforward:
\[\begin{aligned}{}{I_{2}}& ={n^{1-\frac{1}{\alpha }}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{2}}(x)\big]\\ {} & ={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{g_{2}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\end{aligned}\]
where ${g_{2}}(q):=\mathbb{E}[{L_{[0,1]}}{(q)^{2}}]$. We are left to study ${I_{3}}$. From Lemma 3.1(c) we directly obtain
\[ {I_{3}}={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}{g_{3}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\]
with ${g_{3}}(q):=-2\mathbb{E}[{L_{[0,1]}}(q)\mathbb{E}[{L_{[0,1]}}(q)\mid {X_{1}}]]$. Putting everything together we get (3.10) with $\psi (q)={g_{1}}(q)+{g_{2}}(q)+{g_{3}}(q)$.
To apply Theorem 1.1 we need to check that $\psi ,{\psi ^{2}}\in {L^{1}}(\mathbb{R})$. By the Minkowski and Jensen inequalities, $\psi (q)\le 4\mathbb{E}[{L_{[0,1]}^{2}}(q)]$. We therefore aim at showing that the function $\mathbb{E}[{L_{[0,1]}^{2}}(\cdot )]$ belongs to ${L^{1}}(\mathbb{R})$. Let ${\tau _{q}}$ denote the first passage time of X over the level q. Conditioning on ${\tau _{q}}$ and using the additivity of the local time, we deduce the inequality
\[ \mathbb{E}\big[{L_{[0,1]}^{2}}(q)\big]\le \mathbb{E}\big[{L_{[0,1]}^{2}}(0)\big]\mathbb{P}({\tau _{q}}\lt 1).\]
Moreover, by the Fourier representation of the local time (as stated in Lemma 2.1), $\mathbb{E}[{L_{[0,1]}^{2}}(0)]$ is bounded. On the other hand,
(3.11)
\[ {\int _{0}^{\infty }}\mathbb{P}[{\tau _{q}}\lt 1]dq={\int _{0}^{\infty }}\mathbb{P}\Big[\underset{s\le 1}{\sup }{X_{s}}\gt q\Big]dq=\mathbb{E}\Big[\underset{s\le 1}{\sup }{X_{s}}\Big].\]
We recall that $\mathbb{E}[{({\sup _{s\le 1}}{X_{s}})^{p}}]$ is bounded for any $p\in (0,\alpha )$ (see Corollary II.1.6 and Theorem II.1.7 in [25]). As $\alpha \in (1,2)$, this implies the boundedness of $\mathbb{E}[{\sup _{s\le 1}}{X_{s}}]$. Hence, by symmetry, $\psi \in {L^{1}}(\mathbb{R})$ and ${k_{L}^{2}}\lt \infty $. Applying the same argument it is easy to show that also ${\psi ^{2}}\in {L^{1}}(\mathbb{R})$, thanks to (3.11) and the boundedness of $\mathbb{E}[{L_{[0,1]}^{4}}(q)]$. By Theorem 1.1 we conclude that
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}\big]\xrightarrow{\mathbb{P}}{k_{L}^{2}}{L_{t}}(x),\]
where ${k_{L}^{2}}={\textstyle\int _{\mathbb{R}}}\psi (y)dy\lt \infty $.
In the next step we show condition (3.8). Let M be any continuous square integrable martingale and denote by μ (resp. $\widetilde{\mu }$) the random measure (resp. compensated random measure) associated with the pure jump Lévy process X. The martingale representation theorem for jump measures investigated in Lemma 3 (ii) and Theorem 6 of [8] implies that ${Z_{in}^{L}}$ has an integral representation
\[ {Z_{in}^{L}}={\int _{\frac{i-1}{n}}^{\frac{i}{n}}}{\int _{\mathbb{R}}}{\eta _{i}^{n}}(x,t)\widetilde{\mu }(dx,dt)\]
for some predictable square integrable process ${\eta _{i}^{n}}$. Using that the covariation between any continuous martingale and any pure jump martingale is zero, we conclude that
\[ {\mathbb{E}_{\frac{i-1}{n}}}\big[{Z_{in}^{L}}{\Delta _{i}^{n}}M\big]=0.\]
Thus, we obtain (3.8).
Finally, we show condition (3.9). The Cauchy–Schwarz inequality ensures that
\[ {\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{2}}{1_{\{|{Z_{in}^{L}}|\gt \epsilon \}}}\big]\le {\epsilon ^{-2}}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{4}}\big].\]
Then, by the Markov inequality, it suffices to prove that
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{4}}\big]\xrightarrow{\mathbb{P}}0.\]
We have that
\[\begin{aligned}{}& {\mathbb{E}_{\frac{i-1}{n}}}\big[{\big({Z_{in}^{L}}\big)^{4}}\big]\\ {} & \hspace{1em}\le C\big({n^{2(\frac{1}{\alpha }-1)}}{\mathbb{E}_{\frac{i-1}{n}}}\big[{f^{4}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)\big]+{n^{2(1-\frac{1}{\alpha })}}{\mathbb{E}_{\frac{i-1}{n}}}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{4}}(x)\big]\big).\end{aligned}\]
From Lemma 3.1(a) we conclude that
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{L}}{|^{4}}\big]\le C{n^{2(\frac{1}{\alpha }-1)}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}h\big({n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-x)\big),\]
where $h(y)=\mathbb{E}[{f^{4}}(y,{X_{1}})+{L_{[0,1]}}{(y)^{4}}]$. It is easy to check that $h\in {L^{1}}(\mathbb{R})$. Indeed, similarly to the proof of $\psi \in {L^{1}}(\mathbb{R})$, the Minkowski and Jensen inequalities imply
\[ h(y)\le C\mathbb{E}\big[{L_{[0,1]}^{4}}(y)\big]\le C\mathbb{E}\big[{L_{[0,1]}^{4}}(0)\big]\mathbb{P}[{\tau _{y}}\lt 1].\]
Then the boundedness of moments of the local time in 0 together with (3.11) provides $h\in {L^{1}}(\mathbb{R})$. As ${h^{2}}(y)\le C\mathbb{E}[{L_{[0,1]}^{8}}(y)]$, following the same route it is easy to see that ${h^{2}}\in {L^{1}}(\mathbb{R})$. Since $\alpha \gt 1$, we deduce by Theorem 1.1 that
\[ {n^{2(\frac{1}{\alpha }-1)}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}h\big({n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-x)\big)\xrightarrow{\mathbb{P}}0.\]
This concludes the proof of Theorem 2.3 for the local time.
Now we proceed to the analysis of the occupation time. As for the local time case, the proof is based on a martingale approach. The definition of ${W_{O,t}^{n}}$ together with the approximation of ${S_{O,t}^{n}}$ as in Proposition 2.2 provides
(3.12)
\[ {W_{O,t}^{n}}={n^{\frac{1}{2}(1+\frac{1}{\alpha })}}\big({S_{O,t}^{n}}-{O_{[0,t]}}(x)\big)={\sum \limits_{i=1}^{\lfloor nt\rfloor }}{Z_{in}^{O}}-{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}{N_{O,t}^{n}}+{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}{\mathcal{E}_{O,t}^{n}}(x),\]
where ${Z_{in}^{O}}$ is the principal term, given by
\[ {Z_{in}^{O}}:={n^{\frac{1}{2}(\frac{1}{\alpha }-1)}}F\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)-{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}{O_{[\frac{i-1}{n},\frac{i}{n}]}}(x),\]
while
\[ {N_{O,t}^{n}}:={O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x).\]
In a similar way as for ${N_{L,t}^{n}}$ we can show that ${N_{O,t}^{n}}$ is negligible. Indeed, the following lemma holds true. Its proof can be found in the next section and is a direct consequence of (1.3).
Lemma 3.3.
The process ${N_{O}^{n}}=\{{N_{O,t}^{n}};\hspace{2.5pt}t\ge 0\}$ satisfies the following convergence in probability, uniformly over compact sets:
\[ \underset{n}{\lim }{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}{N_{O}^{n}}=0.\]
Due to Proposition 2.2 and Lemma 3.3 we can write (3.12) as
\[ {W_{O,t}^{n}}={\sum \limits_{i=1}^{\lfloor nt\rfloor }}{Z_{in}^{O}}+{o_{\mathbb{P}}}(1).\]
We are dealing with martingale differences. Indeed, ${\mathbb{E}_{\frac{i-1}{n}}}[{Z_{in}^{O}}]=0$ because of Lemma 3.1(b) with $p=1$, the definition of F and the independence of the increments of the process X. Therefore, similarly as before, our proof is based on [13, Theorem 3-2]. In particular, we want to show the following convergence statements:
(3.13)
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{O}}{|^{2}}\big]\xrightarrow{\mathbb{P}}{k_{O}^{2}}{L_{[0,t]}}(x)\hspace{2em}\forall t\in [0,1],\]
(3.14)
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[{Z_{in}^{O}}{\Delta _{i}^{n}}M\big]\xrightarrow{\mathbb{P}}0\hspace{2em}\forall t\in [0,1],\]
(3.15)
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{O}}{|^{2}}{1_{\{|{Z_{in}^{O}}|\gt \epsilon \}}}\big]\xrightarrow{\mathbb{P}}0\hspace{2em}\forall \epsilon \gt 0.\]
The condition expressed in (3.14) should hold for all square integrable continuous martingales.
We start by proving (3.13). Similarly as for ${Z_{in}^{O}}$, from the definition of F and Lemma 3.1 (b) and (d), it follows
(3.16)
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{O}}{|^{2}}\big]={n^{\frac{1}{\alpha }-1}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}\tilde{\psi }\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\]
with $\tilde{\psi }$ given by
\[\begin{aligned}{}\tilde{\psi }(z)& :=\mathbb{E}\big[{\big(\mathbb{E}\big[{O_{[0,1]}}(z)|{X_{1}}\big]-{O_{[0,1]}}(z)\big)^{2}}\big].\end{aligned}\]
We want to show that $\tilde{\psi }\in {L^{1}}(\mathbb{R})$. By the Jensen inequality, it is enough to show the integrability in z of $\mathbb{E}[{({O_{[0,1]}}(z)-{c_{z}})^{2}}]$ for some deterministic constant ${c_{z}}$ depending only on z. Take ${c_{z}}=0$ for $z\ge 0$. Proceeding as in the case of the local time, it is easy to see that
\[ \mathbb{E}\big[{O_{[0,1]}^{2}}(z)\big]\le C\mathbb{P}({\tau _{z}}\lt 1)\in {L^{1}}(\mathbb{R}).\]
For $z\lt 0$ we choose instead ${c_{z}}=1$. Noting that $\mathbb{E}[{(1-{O_{[0,1]}}(z))^{2}}]\le \mathbb{P}({\tau _{z}}\lt 1)$, we obtain the same conclusion. It is then straightforward to prove that ${\tilde{\psi }^{2}}\in {L^{1}}(\mathbb{R})$ as well, since it is enough to show the integrability of $\mathbb{E}[{({O_{[0,1]}}(z)-{c_{z}})^{4}}]$. We can then apply Theorem 1.1, which implies (3.13) with ${k_{O}^{2}}={\textstyle\int _{\mathbb{R}}}\tilde{\psi }(z)dz$.
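The tail-decay argument above can also be seen by simulation: $\mathbb{E}[{O_{[0,1]}^{2}}(z)]$ should shrink as the level z grows. A rough Monte Carlo sketch using a Chambers–Mallows–Stuck sampler for symmetric α-stable increments; the sampler, seed and all parameters are our own illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_paths, n_steps = 1.5, 4000, 200

# Chambers-Mallows-Stuck sampler for symmetric alpha-stable random variables.
U = rng.uniform(-np.pi / 2, np.pi / 2, size=(n_paths, n_steps))
W = rng.exponential(1.0, size=(n_paths, n_steps))
incr = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
        * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
# Self-similarity: increments over mesh 1/n_steps scale as n_steps^{-1/alpha}.
X = np.cumsum(incr, axis=1) * n_steps ** (-1 / alpha)

# Riemann approximation of O_{[0,1]}(z) = \int_0^1 1{X_s > z} ds, squared and averaged.
levels = [0.0, 1.0, 2.0, 4.0]
occ2 = [float(np.mean(np.mean(X > z, axis=1) ** 2)) for z in levels]
print(occ2)  # second moments shrink as the level z grows
```

The decay in z mirrors the bound $\mathbb{E}[{O_{[0,1]}^{2}}(z)]\le C\mathbb{P}({\tau _{z}}\lt 1)$, though a Riemann sum over a coarse grid is of course only a crude proxy for the occupation time.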
Condition (3.14) is, as for the local time, a consequence of the martingale representation for jump measures in Lemma 3 (ii) and Theorem 6 of [8].
Finally, we show condition (3.15). By the Cauchy–Schwarz and Markov inequalities it is enough to prove that
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{O}}{|^{4}}\big]\xrightarrow{\mathbb{P}}0.\]
By the definition of ${Z_{in}^{O}}$ and Lemma 3.1, following the same argument that led to (3.16), it holds that
\[ {\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\mathbb{E}_{\frac{i-1}{n}}}\big[|{Z_{in}^{O}}{|^{4}}\big]={n^{2(\frac{1}{\alpha }-1)}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}\tilde{h}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\]
where $\tilde{h}(z):=\mathbb{E}[{(\mathbb{E}[{O_{[0,1]}}(z)\mid {X_{1}}]-{O_{[0,1]}}(z))^{4}}]$. Regarding the integrability of $\tilde{h}$ and ${\tilde{h}^{2}}$, proceeding as for $\tilde{\psi }$, it is sufficient to show the result for $\mathbb{E}[{({O_{[0,1]}}(z)-{c_{z}})^{4}}]$ and $\mathbb{E}[{({O_{[0,1]}}(z)-{c_{z}})^{8}}]$, respectively, with ${c_{z}}$ arbitrary. As before, the result follows by choosing ${c_{z}}=0$ for $z\ge 0$ and ${c_{z}}=1$ for $z\lt 0$, and using (3.11). We deduce by Theorem 1.1 that
\[ {n^{2(\frac{1}{\alpha }-1)}}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}\tilde{h}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\xrightarrow{\mathbb{P}}0,\]
which concludes the proof of our main theorem.  □

4 Proof of the auxiliary results

This section is devoted to the proofs of the technical lemmas and the proposition stated and used above in order to obtain our main result, Theorem 2.3. Before we proceed, we introduce the function $r(y):={e^{-\mathbf{i}y}}$, which will be useful in the sequel.

4.1 Proof of Lemma 2.1

Proof.
The Fourier representation is proven in [17, Proposition 11]; here we prove the finiteness of the moments of ${L_{t}}(x)$. From the Fourier representation stated in the first part of this lemma we have, for any $p\in \mathbb{N}$,
(4.17)
\[\begin{aligned}{}& \mathbb{E}\big[{\big({L_{[0,1]}}(x)\big)^{p}}\big]\\ {} & \hspace{1em}=\mathbb{E}\bigg[\bigg(\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{[0,1]}}r\big({\xi _{1}}({X_{{u_{1}}}}-x)\big)d{u_{1}}d{\xi _{1}}\bigg)\times \cdots \\ {} & \hspace{2em}\times \bigg(\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{[0,1]}}r\big({\xi _{p}}({X_{{u_{p}}}}-x)\big)d{u_{p}}d{\xi _{p}}\bigg)\bigg]\\ {} & \hspace{1em}=\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(-({\xi _{1}}+\cdots +{\xi _{p}})x\big)\mathbb{E}\big[r({\xi _{1}}{X_{{u_{1}}}})\times \cdots \times r({\xi _{p}}{X_{{u_{p}}}})\big]dud\xi .\end{aligned}\]
The Fubini theorem, applied in the last identity, will be justified once we show that $|\mathbb{E}[r({\xi _{1}}{X_{{u_{1}}}})\times \cdots \times r({\xi _{p}}{X_{{u_{p}}}})]|$ is integrable, which we prove next. We may assume, without loss of generality, that p is even and ${u_{1}}\le {u_{2}}\le \cdots \le {u_{p}}$. Then the expectation above can be written as
\[\begin{aligned}{}& \mathbb{E}\big[r\big(({\xi _{1}}\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\xi _{p}}){X_{{u_{1}}}}\big)r\big(({\xi _{2}}\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\xi _{p}})({X_{{u_{2}}}}-{X_{{u_{1}}}})\big)\times \cdots \times r\big({\xi _{p}}({X_{{u_{p}}}}-{X_{{u_{p-1}}}})\big)\big]\\ {} & \hspace{1em}=\mathbb{E}\big[r\big(({\xi _{1}}+\cdots +{\xi _{p}}){X_{{u_{1}}}}\big)\big]\times \cdots \times \mathbb{E}\big[r\big({\xi _{p}}({X_{{u_{p}}}}-{X_{{u_{p-1}}}})\big)\big]\end{aligned}\]
having employed the self-similarity of X and the stationarity and independence of its increments. Relation (1.1) then implies that
\[\begin{aligned}{}& \big|\mathbb{E}\big[r\big(({\xi _{1}}\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\xi _{p}}){X_{{u_{1}}}}\big)r\big(({\xi _{2}}\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\xi _{p}})({X_{{u_{2}}}}\hspace{-0.1667em}-\hspace{-0.1667em}{X_{{u_{1}}}})\big)\times \cdots \times r\big({\xi _{p}}({X_{{u_{p}}}}\hspace{-0.1667em}-\hspace{-0.1667em}{X_{{u_{p-1}}}})\big)\big]\big|\\ {} & \hspace{1em}={e^{-|{\xi _{1}}+\cdots +{\xi _{p}}{|^{\alpha }}{u_{1}}}}\times \cdots \times {e^{-|{\xi _{p}}{|^{\alpha }}({u_{p}}-{u_{p-1}})}}.\end{aligned}\]
Now, applying the change of variables ${\eta _{j}}:={\xi _{j}}+\cdots +{\xi _{p}}$ for $j=1,\dots ,p$, it is easy to check that the absolute value of the integrand in (4.17) is integrable, thus yielding the finiteness of the p-th moment of ${L_{t}}$.  □
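After the change of variables, the integrability reduces to factors of the form ${e^{-|\eta |^{\alpha }}}$ being integrable in η. As a numerical sanity check, one can verify the classical identity ${\textstyle\int _{0}^{\infty }}{e^{-{\eta ^{\alpha }}}}d\eta =\Gamma (1+1/\alpha )$, obtained by the substitution $t={\eta ^{\alpha }}$; the value of α below is an arbitrary illustrative choice:

```python
import math

alpha = 1.5  # arbitrary stability index in (1, 2)

# Trapezoidal approximation of \int_0^\infty exp(-eta^alpha) d eta;
# the integrand decays so fast that truncating at eta = 50 is negligible.
h, upper = 1e-3, 50.0
m = int(upper / h)
ys = [math.exp(-((i * h) ** alpha)) for i in range(m + 1)]
integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

exact = math.gamma(1.0 + 1.0 / alpha)  # value from the substitution t = eta^alpha
print(integral, exact)
```

The same comparison works for any $\alpha \gt 0$, which is why each exponential factor in the product above contributes a finite integral.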

4.2 Proof of Proposition 2.2

Proof.
Recall that ${\mathcal{A}_{n}}:=\sigma (\{{X_{\frac{i}{n}}};\hspace{2.5pt}i\in \mathbb{N}\})\subset \mathcal{F}$. We first deal with the analysis of the local time. Observe that the problem is reduced to proving the following two claims:
  • (i) First, show that
    (4.18)
    \[ \mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {X_{\frac{i-1}{n}}},{\Delta _{i}^{n}}X\big]={n^{\frac{1}{\alpha }-1}}f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big),\]
    for $f(x,y)=\mathbb{E}[{L_{[0,1]}}(x)\mid {X_{1}}=y]$.
  • (ii) Then, prove that ${\mathcal{E}_{L}^{(n)}}(x)$ satisfies
    (4.19)
    \[ \underset{0\le t\le T}{\sup }{n^{\frac{1}{2}(1-\frac{1}{\alpha })}}\big|{\mathcal{E}_{L,t}^{(n)}}(x)\big|\xrightarrow{\mathbb{P}}0,\]
    for all $T\gt 0$.
If (i) and (ii) are satisfied, the statement of Proposition 2.2 concerning the local time follows. To prove this reduction we observe that, by the independent increments property of X,
\[\begin{aligned}{}{S_{L,t}^{(n)}}(x)& =\mathbb{E}\big[{L_{[\lfloor nt\rfloor /n,t]}}(x)\mid {\mathcal{A}_{n}}\big]+{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big[{L_{[\frac{k-1}{n},\frac{k}{n}]}}(x)\mid {\mathcal{A}_{n}}\big]\\ {} & =\mathbb{E}\big[{L_{[\lfloor nt\rfloor /n,t]}}(x)\mid {\mathcal{A}_{n}}\big]+{\sum \limits_{k=1}^{\lfloor nt\rfloor }}\mathbb{E}\big[{L_{[\frac{k-1}{n},\frac{k}{n}]}}(x)\mid {X_{\frac{k-1}{n}}},{\Delta _{k}^{n}}X\big],\end{aligned}\]
so that relation (4.18) implies the desired result. We now proceed with the proof of (4.18) and (4.19). In order to show (4.18), we consider the approximated local time ${L_{t}^{\varepsilon }}(x)$, defined by
\[ {L_{t}^{\varepsilon }}(x):={\int _{0}^{t}}{\phi _{\varepsilon }}({X_{s}}-x)ds,\]
where ${\phi _{\varepsilon }}(x):={(2\pi \varepsilon )^{-1/2}}\exp \{-\frac{1}{2\varepsilon }{x^{2}}\}$. By [17], we have $\| {L_{t}}(x)-{L_{t}^{\varepsilon }}(x){\| _{{L^{2}}(\Omega )}}\to 0$ as $\varepsilon \to 0$, which implies that
\[\begin{aligned}{}\mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {X_{\frac{i-1}{n}}},{\Delta _{i}^{n}}X\big]& =\underset{\varepsilon \to 0}{\lim }\mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{\varepsilon }}(x)\mid {X_{\frac{i-1}{n}}},{\Delta _{i}^{n}}X\big],\end{aligned}\]
where ${L_{[a,b]}^{\varepsilon }}(x):={L_{b}^{\varepsilon }}(x)-{L_{a}^{\varepsilon }}(x)$. By the self-similarity of X, for every $a,b\in \mathbb{R}$,
\[\begin{aligned}{}& \mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{\varepsilon }}(x)\mid {X_{\frac{i-1}{n}}}=a,{\Delta _{i}^{n}}X=b\big]\\ {} & =\mathbb{E}\Bigg[{\int _{\frac{i-1}{n}}^{\frac{i}{n}}}{\phi _{\varepsilon }}({X_{s}}-x)ds\mid {X_{\frac{i-1}{n}}}=a,{\Delta _{i}^{n}}X=b\Bigg]\\ {} & =\mathbb{E}\Bigg[{\int _{\frac{i-1}{n}}^{\frac{i}{n}}}{\phi _{\varepsilon }}\big({n^{-1/\alpha }}{X_{ns}}-x\big)ds\mid {n^{-1/\alpha }}{X_{i-1}}=a,{n^{-1/\alpha }}({X_{i}}-{X_{i-1}})=b\Bigg]\\ {} & ={\int _{\frac{i-1}{n}}^{\frac{i}{n}}}\mathbb{E}\big[{\phi _{\varepsilon }}\big({n^{-1/\alpha }}({X_{ns}}\hspace{-0.1667em}-\hspace{-0.1667em}{X_{i-1}})\hspace{-0.1667em}+\hspace{-0.1667em}a\hspace{-0.1667em}-\hspace{-0.1667em}x\big)\hspace{-0.1667em}\mid \hspace{-0.1667em}{n^{-1/\alpha }}{X_{i-1}}\hspace{-0.1667em}=\hspace{-0.1667em}a,{n^{-1/\alpha }}({X_{i}}\hspace{-0.1667em}-\hspace{-0.1667em}{X_{i-1}})\hspace{-0.1667em}=\hspace{-0.1667em}b\big]ds,\end{aligned}\]
where the last identity follows by the Fubini theorem, which is applicable due to the fact that ${\phi _{\varepsilon }}$ is bounded and the domain of integration is compact. The independent increments property of X implies that ${X_{i-1}}$ is independent of the vector $({X_{s}}-{X_{i-1}},{X_{i}}-{X_{i-1}})$. Moreover, by the stationarity of the increments of X, we have that
\[ ({X_{s}}-{X_{i-1}},{X_{i}}-{X_{i-1}})\stackrel{\mathrm{Law}}{=}({X_{s-i+1}},{X_{1}}).\]
This identity combined with a suitable change of variables yields
\[\begin{aligned}{}& \mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{\varepsilon }}(x)\mid {X_{\frac{i-1}{n}}}=a,{\Delta _{i}^{n}}X=b\big]\\ {} & \hspace{1em}={n^{-1}}{\int _{0}^{1}}\mathbb{E}\big[{\phi _{\varepsilon }}\big({n^{-1/\alpha }}{X_{u}}+a-x\big)\mid {X_{1}}={n^{1/\alpha }}b\big]du\\ {} & \hspace{1em}={n^{1/\alpha -1}}{\int _{0}^{1}}\mathbb{E}\big[{\phi _{\varepsilon {n^{2/\alpha }}}}\big({X_{u}}+a{n^{1/\alpha }}-x{n^{1/\alpha }}\big)\mid {X_{1}}={n^{1/\alpha }}b\big]du\\ {} & \hspace{1em}={n^{1/\alpha -1}}\mathbb{E}\Bigg[{\int _{0}^{1}}{\phi _{\varepsilon {n^{2/\alpha }}}}\big({X_{u}}+a{n^{1/\alpha }}-x{n^{1/\alpha }}\big)du\mid {X_{1}}={n^{1/\alpha }}b\Bigg].\end{aligned}\]
Using the fact that
\[ {\int _{0}^{1}}{\phi _{\varepsilon {n^{2/\alpha }}}}\big({X_{u}}+a{n^{1/\alpha }}-x{n^{1/\alpha }}\big)du\stackrel{{L^{2}}(\Omega )}{\to }{L_{1}}\big({n^{1/\alpha }}(x-a)\big),\]
as $\varepsilon \to 0$, we obtain
\[\begin{aligned}{}& \underset{\varepsilon \to 0}{\lim }\mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{\varepsilon }}(x)\mid {X_{\frac{i-1}{n}}}=a,{\Delta _{i}^{n}}X=b\big]\\ {} & \hspace{1em}={n^{1/\alpha -1}}\mathbb{E}\big[{L_{1}}\big({n^{1/\alpha }}(x-a)\big)\mid {X_{1}}={n^{1/\alpha }}b\big].\end{aligned}\]
This finishes the proof of (4.18).
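The mollified local time used above has an immediate discrete counterpart: replace the time integral by a Riemann sum along a simulated path. Since each Gaussian kernel ${\phi _{\varepsilon }}$ integrates to one in the space variable, integrating ${L_{1}^{\varepsilon }}(x)$ over x must return the total time. A self-contained sketch; the CMS simulator, seed and tolerances are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n_steps, eps = 1.5, 2000, 1e-2

# Chambers-Mallows-Stuck increments of a symmetric alpha-stable process on [0, 1].
U = rng.uniform(-np.pi / 2, np.pi / 2, n_steps)
W = rng.exponential(1.0, n_steps)
incr = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
        * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
X = np.cumsum(incr) * n_steps ** (-1 / alpha)

def L_eps(x):
    """Riemann-sum version of the mollified local time L_1^eps(x)."""
    phi = np.exp(-(X - x) ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)
    return phi.mean()  # (1/n) * sum of phi_eps(X_{i/n} - x)

# Each Gaussian kernel integrates to 1 in x, so integrating L_1^eps over a
# level grid covering the path (plus 10 kernel widths) returns the total time 1.
grid = np.linspace(X.min() - 1.0, X.max() + 1.0, 4000)
vals = np.array([L_eps(x) for x in grid])
total = float(np.sum((vals[1:] + vals[:-1]) * np.diff(grid) / 2))
print(total)  # close to 1
```

This occupation-density normalization holds path by path, so the check does not depend on the particular seed.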
We are left with the problem of showing (4.19).
Due to (1.3), for every $T\gt 0$ and $x\in \mathbb{R}$, we have
\[ \underset{0\le t\le T}{\sup }|{L_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)|\le C{n^{-1+\frac{1}{\alpha }+\epsilon }},\]
for any $\epsilon \in (0,1-1/\alpha )$. Choosing ϵ small enough, relation (4.19) follows.
Next we deal with identity (2.5). By (1.2) and (4.18), we have that
\[\begin{aligned}{}\mathbb{E}\big[{O_{t}}(x)\mid {\mathcal{A}_{n}}\big]& ={\sum \limits_{i=1}^{\lfloor nt\rfloor }}\mathbb{E}\big[{O_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {\mathcal{A}_{n}}\big]+\mathbb{E}\big[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}\big]\\ {} & ={\sum \limits_{i=1}^{\lfloor nt\rfloor }}{\int _{x}^{\infty }}\mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}}(y)\mid {\mathcal{A}_{n}}\big]dy+\mathbb{E}\big[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}\big]\\ {} & ={n^{\frac{1}{\alpha }-1}}\hspace{-0.1667em}{\int _{x}^{\infty }}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}f\big({n^{\frac{1}{\alpha }}}(y\hspace{-0.1667em}-\hspace{-0.1667em}{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)dy\hspace{-0.1667em}+\hspace{-0.1667em}\mathbb{E}\big[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}\big]\\ {} & =\frac{1}{n}{\sum \limits_{i=1}^{\lfloor nt\rfloor }}F\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)+\mathbb{E}\big[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}\big],\end{aligned}\]
where the last identity follows from a suitable change of variables. Consequently, we deduce (2.5). In addition, by the definition of the occupation time ${O_{[a,b]}}(x)$ we have $|\mathbb{E}[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}]|\le {n^{-1}}$, and therefore
\[ \underset{0\le t\le T}{\sup }{n^{\frac{1}{2}(1+\frac{1}{\alpha })}}\mathbb{E}\big[{O_{[\frac{\lfloor nt\rfloor }{n},t]}}(x)\mid {\mathcal{A}_{n}}\big]\le {n^{\frac{1}{2}(1/\alpha -1)}},\]
which converges towards zero. The proof is now complete.  □

4.3 Proof of Lemma 3.1

Proof.
Part (a)
Equation (4.18) yields the desired result for $p=1$, since the increments are independent and thus, in particular, ${X_{\frac{i-1}{n}}}$ is independent of ${\Delta _{i}^{n}}X$. We therefore focus on the case $p\ge 2$. The representation of the local time in Lemma 2.1 leads us to
\[\begin{aligned}{}& {\mathbb{E}_{\frac{i-1}{n}}}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}^{p}}(x)\big]\\ {} & \hspace{1em}={\mathbb{E}_{\frac{i-1}{n}}}\bigg[\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[\frac{i-1}{n},\frac{i}{n}]^{p}}}}r\big({\xi _{1}}({X_{{s_{1}}}}-x)\big)\cdots r\big({\xi _{p}}({X_{{s_{p}}}}-x)\big)dsd\xi \bigg],\end{aligned}\]
where we recall that ${\mathbb{E}_{\frac{i-1}{n}}}[\cdot ]=\mathbb{E}[\cdot \mid {\mathcal{F}_{\frac{i-1}{n}}}]$ and $r(y)={e^{-\mathbf{i}y}}$.
The change of variables ${s_{j}}=:\frac{i-1}{n}+\frac{{u_{j}}}{n}$ for $j=1,\dots ,p$ yields
\[\begin{aligned}{}& \frac{1}{{n^{p}}}\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(({\xi _{1}}+\cdots +{\xi _{p}})({X_{\frac{i-1}{n}}}-x)\big)\\ {} & \hspace{2em}\times {\mathbb{E}_{\frac{i-1}{n}}}\big[r\big({\xi _{1}}({X_{\frac{i-1}{n}+\frac{{u_{1}}}{n}}}-{X_{\frac{i-1}{n}}})\big)\times \cdots \times r\big({\xi _{p}}({X_{\frac{i-1}{n}+\frac{{u_{p}}}{n}}}-{X_{\frac{i-1}{n}}})\big)\big]dud\xi \\ {} & \hspace{1em}={n^{-p}}\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(({\xi _{1}}+\cdots +{\xi _{p}})({X_{\frac{i-1}{n}}}-x)\big)\\ {} & \hspace{2em}\times \mathbb{E}\big[r({\xi _{1}}{X_{\frac{{u_{1}}}{n}}})\times \cdots \times r({\xi _{p}}{X_{\frac{{u_{p}}}{n}}})\big]dud\xi ,\end{aligned}\]
using also the independence and stationarity of the increments of X, as well as the Fubini theorem, which is justified as in the proof of Lemma 2.1. As the process X is self-similar, we obtain
\[\begin{aligned}{}& {n^{-p}}\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(({\xi _{1}}+\cdots +{\xi _{p}})({X_{\frac{i-1}{n}}}-x)\big)\\ {} & \hspace{1em}\times \mathbb{E}\big[r\big({\xi _{1}}{n^{-\frac{1}{\alpha }}}{X_{{u_{1}}}}\big)\times \cdots \times r\big({\xi _{p}}{n^{-\frac{1}{\alpha }}}{X_{{u_{p}}}}\big)\big]dud\xi .\end{aligned}\]
Applying the change of variables ${\tilde{\xi }_{j}}={n^{-\frac{1}{\alpha }}}{\xi _{j}}$ for $j=1,\dots ,p$, we get
\[\begin{aligned}{}& {n^{p(\frac{1}{\alpha }-1)}}\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(({\tilde{\xi }_{1}}+\cdots +{\tilde{\xi }_{p}}){n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-x)\big)\\ {} & \hspace{2em}\times \mathbb{E}\big[r({\tilde{\xi }_{1}}{X_{{u_{1}}}})\times \cdots \times r({\tilde{\xi }_{p}}{X_{{u_{p}}}})\big]dud\mathbf{\tilde{\xi }}\\ {} & \hspace{1em}={n^{p(\frac{1}{\alpha }-1)}}{\varphi _{1}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big).\end{aligned}\]
Indeed, we can see
\[\begin{aligned}{}& \frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big(-({\xi _{1}}+\cdots +{\xi _{p}})y\big)\mathbb{E}\big[r({\xi _{1}}{X_{{u_{1}}}})\times \cdots \times r({\xi _{p}}{X_{{u_{p}}}})\big]dud\xi \\ {} & \hspace{1em}=\mathbb{E}\big[{\big({L_{[0,1]}}(y)\big)^{p}}\big]={\varphi _{1}}(y),\end{aligned}\]
because of (4.17). It is important to remark that, in the proof above, it is possible to obtain the desired result, as we consider the conditional expectation with respect to ${\mathcal{F}_{\frac{i-1}{n}}}$, and we used several times the independence of the increments.
Part (b)
The definition of the occupation time provides
\[\begin{aligned}{}& {\mathbb{E}_{\frac{i-1}{n}}}\big[{O_{[\frac{i-1}{n},\frac{i}{n}]}^{p}}(x)\big]\\ {} & \hspace{1em}={\mathbb{E}_{\frac{i-1}{n}}}\Bigg[\Bigg({\int _{x}^{\infty }}{L_{[\frac{i-1}{n},\frac{i}{n}]}}({y_{1}})d{y_{1}}\Bigg)\times \cdots \times \Bigg({\int _{x}^{\infty }}{L_{[\frac{i-1}{n},\frac{i}{n}]}}({y_{p}})d{y_{p}}\Bigg)\Bigg]\\ {} & \hspace{1em}={\int _{x}^{\infty }}\cdots {\int _{x}^{\infty }}{\mathbb{E}_{\frac{i-1}{n}}}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}}({y_{1}})\times \cdots \times {L_{[\frac{i-1}{n},\frac{i}{n}]}}({y_{p}})\big]dy.\end{aligned}\]
The change in the order of integration is justified by the Tonelli theorem, since ${L_{[a,b]}}(x)\ge 0$ for all $a\le b$ and $x\in \mathbb{R}$.
Arguing as in the proof of part (a), it is then easy to check that
\[\begin{aligned}{}& {\mathbb{E}_{\frac{i-1}{n}}}\big[{O_{[\frac{i-1}{n},\frac{i}{n}]}^{p}}(x)\big]\\ {} & \hspace{1em}=\frac{{n^{p(\frac{1}{\alpha }-1)}}}{{(2\pi )^{p}}}{\int _{{[x,\infty ]^{p}}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r\big({\tilde{\xi }_{1}}{n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-{y_{1}})\big)\times \cdots \times r\big({\tilde{\xi }_{p}}{n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}-{y_{p}})\big)\\ {} & \hspace{1em}\hspace{1em}\times \mathbb{E}\big[r({\tilde{\xi }_{1}}{X_{{u_{1}}}})\times \cdots \times r({\tilde{\xi }_{p}}{X_{{u_{p}}}})\big]dud\mathbf{\tilde{\xi }}dy.\end{aligned}\]
To conclude the analysis we apply the change of variable ${\tilde{y}_{j}}:={n^{\frac{1}{\alpha }}}({y_{j}}-{X_{\frac{i-1}{n}}})$ for $j=1,\dots ,p$. We get
\[\begin{aligned}{}& \frac{{n^{-p}}}{{(2\pi )^{p}}}{\int _{{[{n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),\infty ]^{p}}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}r(-{\tilde{\xi }_{1}}{\tilde{y}_{1}})\times \cdots \times r(-{\tilde{\xi }_{p}}{\tilde{y}_{p}})\\ {} & \hspace{1em}\hspace{1em}\times \mathbb{E}\big[r({\tilde{\xi }_{1}}{X_{{u_{1}}}})\times \cdots \times r({\tilde{\xi }_{p}}{X_{{u_{p}}}})\big]dud\mathbf{\tilde{\xi }}d\mathbf{\tilde{y}}\\ {} & \hspace{1em}={n^{-p}}{\varphi _{2}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\end{aligned}\]
with
\[\begin{aligned}{}& {\varphi _{2}}(z)\\ {} & \hspace{1em}={\int _{{[z,\infty ]^{p}}}}\frac{1}{{(2\pi )^{p}}}{\int _{{\mathbb{R}^{p}}}}{\int _{{[0,1]^{p}}}}\mathbb{E}\big[r\big({\xi _{1}}({X_{{u_{1}}}}-{y_{1}})\big)\hspace{-0.1667em}\times \hspace{-0.1667em}\cdots \hspace{-0.1667em}\times \hspace{-0.1667em}r\big({\xi _{p}}({X_{{u_{p}}}}\hspace{-0.1667em}-\hspace{-0.1667em}{y_{p}})\big)\big]dud\xi dy\\ {} & \hspace{1em}={\int _{{[z,\infty ]^{p}}}}\mathbb{E}\big[{L_{[0,1]}}({y_{1}})\times \cdots \times {L_{[0,1]}}({y_{p}})\big]d{y_{1}}\cdots d{y_{p}}=\mathbb{E}\big[{O_{[0,1]}^{p}}(z)\big],\end{aligned}\]
where the penultimate identity is justified by the Tonelli theorem. This completes the proof of part (b).
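The identity ${O_{[0,1]}}(x)={\textstyle\int _{x}^{\infty }}{L_{[0,1]}}(y)dy$ driving part (b) can likewise be checked on a discretized path: integrating the Gaussian-mollified local time over levels $y\gt x$ has the closed form $\frac{1}{n}{\textstyle\sum _{i}}\Phi (({X_{i/n}}-x)/\sqrt{\varepsilon })$, which should be close to the Riemann sum for the occupation time. A sketch under our own illustrative simulator and parameters:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
alpha, n_steps, eps, x = 1.5, 2000, 1e-4, 0.0

# Chambers-Mallows-Stuck increments of a symmetric alpha-stable process on [0, 1].
U = rng.uniform(-np.pi / 2, np.pi / 2, n_steps)
W = rng.exponential(1.0, n_steps)
incr = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
        * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
X = np.cumsum(incr) * n_steps ** (-1 / alpha)

# Occupation time above level x: Riemann sum of the indicator.
occ = float(np.mean(X > x))

# \int_x^infty phi_eps(X_i - y) dy = Phi((X_i - x)/sqrt(eps)); averaging over the
# path integrates the mollified local time over all levels y > x in closed form.
smoothed = float(np.mean([0.5 * math.erfc((x - xi) / math.sqrt(2 * eps))
                          for xi in X]))
print(occ, smoothed)  # the two approximations agree up to the mollification error
```

Only path points within a few $\sqrt{\varepsilon }$ of the level x contribute to the discrepancy, so the agreement tightens as ε shrinks.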
Part (c)
According to (4.18), one has
\[ f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)={n^{1-\frac{1}{\alpha }}}\mathbb{E}\big[{L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {X_{\frac{i-1}{n}}},{\Delta _{i}^{n}}X\big].\]
Then, following the proof of part (a), one can easily obtain
(4.20)
\[ f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)=\mathbb{E}\big[{L_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mid {X_{1}}\big].\]
From the representation of the local time as in Lemma 2.1 we conclude the identity
\[\begin{aligned}{}& {\mathbb{E}_{\frac{i-1}{n}}}\big[f\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big){L_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\big]\\ {} & \hspace{1em}=\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{\frac{i-1}{n}}^{\frac{i}{n}}}{\mathbb{E}_{\frac{i-1}{n}}}\big[\mathbb{E}\big[{L_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mid {X_{1}}\big]r\big(\xi ({X_{s}}-x)\big)\big]dsd\xi .\end{aligned}\]
The change of variable $s:=\frac{i-1}{n}+\frac{u}{n}$ shows that the quantity above equals
\[\begin{aligned}{}& \frac{1}{n}\frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{0}^{1}}r\big(\xi ({X_{\frac{i-1}{n}}}-x)\big)\\ {} & \hspace{2em}\times {\mathbb{E}_{\frac{i-1}{n}}}\big[\mathbb{E}\big[{L_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mid {X_{1}}\big]r\big(\xi ({X_{\frac{i-1}{n}+\frac{u}{n}}}-{X_{\frac{i-1}{n}}})\big)\big]dud\xi \\ {} & \hspace{1em}={n^{\frac{1}{\alpha }-1}}\frac{1}{2\pi }\hspace{-0.1667em}{\int _{\mathbb{R}}}{\int _{0}^{1}}\hspace{-0.1667em}\hspace{-0.1667em}r\big(\tilde{\xi }{n^{\frac{1}{\alpha }}}({X_{\frac{i-1}{n}}}\hspace{-0.1667em}-\hspace{-0.1667em}x)\big)\mathbb{E}\big[\mathbb{E}\big[{L_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x\hspace{-0.1667em}-\hspace{-0.1667em}{X_{\frac{i-1}{n}}})\big)\hspace{-0.1667em}\mid \hspace{-0.1667em}{X_{1}}\big]r(\tilde{\xi }{X_{u}})\big]dud\tilde{\xi },\end{aligned}\]
where the independence of the increments, the self-similarity of the process X and the change of variable $\tilde{\xi }:={n^{-\frac{1}{\alpha }}}\xi $ are used. The proof is concluded once we observe that this is ${n^{\frac{1}{\alpha }-1}}{\varphi _{3}}({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}))$, as
\[\begin{aligned}{}& \frac{1}{2\pi }{\int _{\mathbb{R}}}{\int _{0}^{1}}\mathbb{E}\big[\mathbb{E}\big[{L_{[0,1]}}(z)\mid {X_{1}}\big]r\big(\tilde{\xi }({X_{u}}-z)\big)\big]dud\tilde{\xi }\\ {} & \hspace{1em}=\mathbb{E}\big[\mathbb{E}\big[{L_{[0,1]}}(z)\mid {X_{1}}\big]{L_{[0,1]}}(z)\big]={\varphi _{3}}(z).\end{aligned}\]
Part (d)
We remark that, from the definition of F, Equation (4.20) and a suitable change of variable, the following holds:
\[\begin{aligned}{}F\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)& ={n^{\frac{1}{\alpha }}}{\int _{x}^{\infty }}f\big({n^{\frac{1}{\alpha }}}(y-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big)dy\\ {} & ={n^{\frac{1}{\alpha }}}{\int _{x}^{\infty }}\mathbb{E}\big[{L_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(y-{X_{\frac{i-1}{n}}})\big)|{X_{1}}\big]dy\\ {} & =\mathbb{E}\big[{O_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mid {X_{1}}\big].\end{aligned}\]
Then, following the same route as in the proofs of parts (b) and (c), it is easy to check that
\[\begin{aligned}{}& \mathbb{E}\big[F\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}}),{n^{\frac{1}{\alpha }}}{\Delta _{i}^{n}}X\big){O_{[\frac{i-1}{n},\frac{i}{n}]}}(x)\mid {\mathcal{F}_{\frac{i-1}{n}}}\big]\\ {} & \hspace{1em}={n^{-1}}\mathbb{E}\big[{O_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mathbb{E}\big[{O_{[0,1]}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big)\mid {X_{1}}\big]\big]\\ {} & \hspace{1em}={n^{-1}}{\varphi _{4}}\big({n^{\frac{1}{\alpha }}}(x-{X_{\frac{i-1}{n}}})\big),\end{aligned}\]
as required.  □

4.4 Proof of Lemmas 3.2 and 3.3

Proof.
Both lemmas follow by arguments analogous to those used for claim (ii) in the proof of Proposition 2.2. In particular, the statement of Lemma 3.2 is a straightforward consequence of the Hölder property in (1.3).  □

References

[1] 
Altmeyer, R., Chorowski, J.: Estimation error for occupation time functionals of stationary Markov processes. Stoch. Process. Appl. 128(6), 1830–1848 (2018). MR3797645. https://doi.org/10.1016/j.spa.2017.08.013
[2] 
Ayache, A., Xiao, Y.: Harmonizable fractional stable fields: Local nondeterminism and joint continuity of the local times. Stoch. Process. Appl. 126(1), 171–185 (2016). MR3426515. https://doi.org/10.1016/j.spa.2015.08.001
[3] 
Barlow, M.T.: Continuity of local times for Lévy processes. Z. Wahrscheinlichkeitstheor. Verw. Geb. 69, 23–35 (1985). MR0775850. https://doi.org/10.1007/BF00532583
[4] 
Barlow, M.T.: Necessary and sufficient conditions for the continuity of local times of Lévy processes. Ann. Probab. 16(4), 1389–1427 (1988). MR0958195
[5] 
Berman, S.M.: Local times and sample function properties of stationary Gaussian processes. Trans. Am. Math. Soc. 137, 277–299 (1969). MR0239652. https://doi.org/10.2307/1994804
[6] 
Berman, S.M.: Gaussian processes with stationary increments: Local times and sample function properties. Ann. Math. Stat. 41(4), 1260–1272 (1970). MR0272035. https://doi.org/10.1214/aoms/1177696901
[7] 
Borodin, A.N.: On the character of convergence to Brownian local time. I, II. Probab. Theory Relat. Fields 72(2), 231–250, 251–277 (1986). MR0836277. https://doi.org/10.1007/BF00699106
[8] 
Cohen, S.N.: A martingale representation theorem for a class of jump processes (2013). arXiv preprint. arXiv:1310.6286
[9] 
Florens-Zmirou, D.: On estimating the diffusion coefficient from discrete observations. J. Appl. Probab. 30, 790–804 (1993). MR1242012. https://doi.org/10.2307/3214513
[10] 
Geman, D., Horowitz, J.: Occupation densities. Ann. Probab. 8(1), 1–67 (1980). MR0556414. https://doi.org/10.1214/aop/1176994824
[11] 
Getoor, R.K., Kesten, H.: Continuity of local times of Markov processes. Compos. Math. 24, 277–303 (1972). MR0310977
[12] 
Hong, M., Liu, H., Xu, F.: Limit theorems for additive functionals of some self-similar Gaussian processes (2023). Preprint
[13] 
Jacod, J.: On continuous conditional Gaussian martingales and stable convergence in law. Séminaire de Probabilités XXXI, vol. 232. Springer, Berlin, Heidelberg (1997). MR1478732. https://doi.org/10.1007/BFb0119308
[14] 
Jacod, J.: Rates of convergence to the local time of a diffusion. Ann. Inst. Henri Poincaré B, Probab. Stat. 34(4), 505–544 (1998). MR1632849. https://doi.org/10.1016/S0246-0203(98)80026-5
[15] 
Jaramillo, A., Nourdin, I., Peccati, G.: Approximation of fractional local times: Zero energy and derivatives. Ann. Appl. Probab. 31(5), 2143–2191 (2021). MR4332693. https://doi.org/10.1214/20-aap1643
[16] 
Jaramillo, A., Nourdin, I., Nualart, D., Peccati, G.: Limit theorems for additive functionals of the fractional Brownian motion. Ann. Probab. 51(3), 1066–1111 (2023). MR4583063. https://doi.org/10.1214/22-aop1612
[17] 
Jeganathan, P.: Convergence of functionals of sums of r.v.s to local times of fractional stable motions. Ann. Probab. 32(3), 1771–1795 (2004). MR2073177. https://doi.org/10.1214/009117904000000658
[18] 
Jeganathan, P.: Limit laws for the local times of fractional Brownian and stable motions (2006). Working paper, available at: https://www.isibang.ac.in/~statmath/eprints/2006/11.pdf
[19] 
Jeganathan, P.: Limit theorems for functionals of sums that converge to fractional Brownian and stable motions (2008). Working paper, available at: https://cowles.yale.edu/sites/default/files/files/pub/d16/d1649.pdf
[20] 
Ivanovs, J., Podolskij, M.: Optimal estimation of the supremum and occupation times of a self-similar Lévy process. Electron. J. Stat. 16(1), 892–934 (2022). MR4372664. https://doi.org/10.1214/21-ejs1928
[21] 
Kesten, H.: Hitting probabilities of single points for processes with stationary independent increments. Mem. Amer. Math. Soc. 93 (1969). MR0272059
[22] 
Papanicolaou, G.C., Stroock, D., Varadhan, S.R.S.: Martingale approach to some limit theorems. Papers from the Duke Turbulence Conference 6, ii+120 (1976). MR0461684
[23] 
Podolskij, M., Rosenbaum, M.: Comment on: Limit of random measures associated with the increments of a Brownian semimartingale. Asymptotic behavior of local times related statistics for fractional Brownian motion. J. Financ. Econom. 16(4), 588–598 (2018). https://doi.org/10.1093/jjfinec/nbx036
[24] 
Rényi, A.: On stable sequences of events. Sankhya, Ser. A 25, 293–302 (1963). MR0170385
[25] 
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Springer (1999). MR1725357. https://doi.org/10.1007/978-3-662-06400-9
[26] 
Rosen, J.: Second order limit laws for the local times of stable processes. In: Séminaire de Probabilités, XXV. Lecture Notes in Mathematics, vol. 1485, pp. 407–424. Springer, Berlin (1991). MR1187796. https://doi.org/10.1007/BFb0100872

Copyright
© 2024 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
High frequency data, local time, mixed normal distribution, occupation time, stable Lévy processes

MSC2010
62E17, 60F05, 11N60

Funding
The authors gratefully acknowledge financial support of ERC Consolidator Grant 815703 “STAMFORD: Statistical Methods for High Dimensional Diffusions.”

