1 Introduction and main results
Let ${({\xi _{k}},{\eta _{k}})_{k\ge 1}}$ be independent copies of a random vector $(\xi ,\eta )$ with positive, arbitrarily dependent components. Put
\[ {S_{0}}:=0,\hspace{1em}{S_{k}}:={\xi _{1}}+\cdots +{\xi _{k}},\hspace{1em}k\in \mathbb{N}:=\{1,2,\dots \}\]
and then
\[ {T_{k}}:={S_{k-1}}+{\eta _{k}},\hspace{1em}k\in \mathbb{N}.\]
The random sequences $S:={({S_{k}})_{k\ge 0}}$ and $T:={({T_{k}})_{k\ge 1}}$ are known in the literature as the standard random walk and a (globally) perturbed random walk, respectively. A survey of various results for perturbed random walks so defined can be found in the book [9]. Put
\[ Y(t)\hspace{3.33333pt}:=\hspace{3.33333pt}\sum \limits_{k\ge 1}{1_{\{{T_{k}}\le t\}}},\hspace{1em}t\ge 0.\]
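For orientation, the counting process $Y$ admits a direct numerical illustration. The following Python sketch (ours, not part of the paper's argument; the function names are hypothetical) builds the points ${T_{k}}={S_{k-1}}+{\eta _{k}}$ from given increments and evaluates $Y(t)$ on deterministic toy data.

```python
def perturbed_walk_points(xi, eta):
    """Return [T_1, ..., T_n] with T_k = S_{k-1} + eta_k, where
    S_0 = 0 and S_k = xi_1 + ... + xi_k."""
    s, points = 0.0, []
    for x, e in zip(xi, eta):
        points.append(s + e)  # T_k = S_{k-1} + eta_k
        s += x                # advance to S_k
    return points

def Y(points, t):
    """Y(t) = #{k >= 1 : T_k <= t}."""
    return sum(1 for T in points if T <= t)

# Deterministic toy data: xi_k = 1 and eta_k = 0.5 give T_k = k - 0.5.
print(Y(perturbed_walk_points([1.0] * 5, [0.5] * 5), 3.0))  # -> 3
```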
A law of the iterated logarithm (LIL) for $Y(t)$, properly normalized and centered, was proved as $t\to \infty $ along integers in Proposition 2.3 of [11] under the assumptions that $\mathbb{E}{\eta ^{a}}\lt \infty $ for some $a\gt 0$ and ${\sigma ^{2}}:=\mathrm{Var}\hspace{0.1667em}\xi \in (0,\infty )$. We improve the aforementioned result by showing that the assumption $\mathbb{E}{\eta ^{a}}\lt \infty $ for some $a\gt 0$ can be dispensed with and also that the LIL holds as $t\to \infty $ along reals, thereby obtaining an ultimate version of the LIL for $Y(t)$. For a family $({x_{t}})$ of real numbers denote by $C(({x_{t}}))$ the set of its limit points.
Theorem 1.
Assume that ${\sigma ^{2}}=\mathrm{Var}\hspace{0.1667em}\xi \in (0,\infty )$. Then
\[ C\bigg(\bigg(\frac{Y(t)-{\mu ^{-1}}{\textstyle\textstyle\int _{0}^{t}}\mathbb{P}\{\eta \le y\}\mathrm{d}y}{{(2{\sigma ^{2}}{\mu ^{-3}}t\log \log t)^{1/2}}}\hspace{0.1667em}:\hspace{0.1667em}t\gt \mathrm{e}\bigg)\bigg)=[-1,1]\hspace{1em}\textit{a.s.},\]
where $\mu :=\mathbb{E}\xi \lt \infty $.
Next, we consider a general branching process generated by the random sequence ${({T_{k}})_{k\ge 1}}$. Thus, the random variables ${T_{1}},{T_{2}},\dots $ are interpreted as the birth times of the first generation individuals. The first generation produces the second generation. The shifts of birth times of the second generation individuals with respect to their mothers’ birth times are distributed according to copies of T, and for different mothers these copies are independent. The second generation produces the third one, and so on.
Let ${Y_{j}}(t)$ be the number of the jth generation individuals with birth times $\le t$. Following [3], we call the sequence of processes ${({({Y_{j}}(t))_{t\ge 0}})_{j\ge 2}}$ an iterated perturbed random walk. Note that, for $t\ge 0$, ${Y_{1}}(t)=Y(t)$ and the following decomposition holds:
(1)
\[ {Y_{j}}(t)=\sum \limits_{r\ge 1}{Y_{j-1}^{(r)}}(t-{T_{r}}){1_{\{{T_{r}}\le t\}}},\hspace{1em}j\ge 2,\]
where ${Y_{j-1}^{(r)}}(t)$ is the number of the jth generation individuals who are descendants of the first generation individual with birth time ${T_{r}}$. Put ${V_{j}}(t):=\mathbb{E}{Y_{j}}(t)$ for $j\in \mathbb{N}$ and $t\ge 0$, and $V(t):={V_{1}}(t)=\mathbb{E}Y(t)$ for $t\ge 0$. Taking expectations in (1) we infer, for $j\ge 2$ and $t\ge 0$,
(2)
\[ {V_{j}}(t)={\int _{[0,\hspace{0.1667em}t]}}{V_{j-1}}(t-y)\mathrm{d}V(y).\]
The iterated perturbed random walks are interesting objects on their own, see [14, 16]. Also, these are the main auxiliary tool in investigations of nested infinite occupancy schemes in random environment. Details can be found in the papers [4–6, 15]. Attention was also paid to iterated standard random walks, a rather particular instance of the iterated perturbed random walks which corresponds to $\eta =\xi $. An LIL for the iterated standard random walks was recently proved in [12]. Continuing this line of investigation we formulate and prove an LIL for ${Y_{j}}(t)$, properly normalized and centered, as $t\to \infty $.
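Decomposition (1) can be exercised in the simplest deterministic case $\xi =\eta \equiv 1$, where every individual gives birth at integer lags, the jth generation birth times are the sums ${k_{1}}+\cdots +{k_{j}}$ with ${k_{i}}\in \mathbb{N}$, and hence ${Y_{j}}(t)=\binom{\lfloor t\rfloor }{j}$. The Python sketch below (ours, purely illustrative) implements (1) directly by recursion.

```python
from math import comb, floor

def Y_iter(j, t):
    """Deterministic toy case xi = eta = 1, so T_k = k for every individual.
    Counts jth generation birth times <= t via decomposition (1):
    Y_j(t) = sum_{k <= t} Y_{j-1}(t - k)."""
    if t < 1:
        return 0
    if j == 1:
        return floor(t)                      # Y(t) = #{k >= 1 : k <= t}
    return sum(Y_iter(j - 1, t - k) for k in range(1, floor(t) + 1))

# Birth times in generation j are sums k_1 + ... + k_j <= t, so Y_j(t) = binom(floor(t), j).
assert [Y_iter(j, 6.0) for j in (1, 2, 3)] == [comb(6, 1), comb(6, 2), comb(6, 3)]
```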
Theorem 2.
Assume that ${\sigma ^{2}}=\mathrm{Var}\hspace{0.1667em}\xi \in (0,\infty )$. Then, for $j\ge 2$,
(3)
\[ C\bigg(\bigg(\frac{{Y_{j}}(t)-{V_{j}}(t)}{{(2{((2j-1)(j-1)!)^{-1}}{\sigma ^{2}}{\mu ^{-2j-1}}{t^{2j-1}}\log \log t)^{1/2}}}\hspace{0.1667em}:\hspace{0.1667em}t\gt \mathrm{e}\bigg)\bigg)=[-1,1]\hspace{1em}\textit{a.s.},\]
where $\mu =\mathbb{E}\xi \lt \infty $.

Although the beginning of our proof of Theorem 2 is similar to that of Theorem 1.1 in [12], the subsequent technical details are essentially different. The main difficulty is that the distribution of η is arbitrary. Imposing a moment assumption on the distribution of η would greatly simplify the argument.
2 Proof of Theorem 1
We shall denote by n an integer argument and by t a real argument. For $t\in \mathbb{R}$, put $F(t):=\mathbb{P}\{\eta \le t\}$ and
(4)
\[ \nu (t):=\sum \limits_{k\ge 0}{1_{\{{S_{k}}\le t\}}}\]
and observe that $F(t)=0$ and $\nu (t)=0$ for $t\lt 0$. For $t\gt \mathrm{e}$, write
\[\begin{array}{c}\displaystyle Y(t)-{\mu ^{-1}}{\int _{0}^{t}}F(y)\mathrm{d}y=Y(t)-{\int _{[0,\hspace{0.1667em}t]}}F(t-y)\mathrm{d}\nu (y)\\ {} \displaystyle +{\int _{[0,\hspace{0.1667em}t]}}F(t-y)\mathrm{d}(\nu (y)-{\mu ^{-1}}y)=:X(t)+Z(t)\end{array}\]
and put $a(t):={(2{\sigma ^{2}}{\mu ^{-3}}t\log \log t)^{1/2}}$. It is shown in the proof of Proposition 2.3 in [11] that
(5)
\[ C\big(\big(Z(n)/a(n)\hspace{3.33333pt}:\hspace{3.33333pt}n\ge 3\big)\big)=[-1,1]\hspace{1em}\text{a.s.}\]
This result holds irrespective of whether $\mathbb{E}{\eta ^{a}}\lt \infty $ for some $a\gt 0$ or $\mathbb{E}{\eta ^{a}}=\infty $ for all $a\gt 0$. We intend to show that (5) entails
(6)
\[ C\big(\big(Z(t)/a(t)\hspace{3.33333pt}:\hspace{3.33333pt}t\gt \mathrm{e}\big)\big)=[-1,1]\hspace{1em}\text{a.s.}\]
Given $t\ge 4$ there exists $n\in \mathbb{N}$ such that $t\in (n-1,n]$. Hence, by monotonicity,
\[ \frac{Z(t)}{a(t)}\le \frac{Z(n)+{\mu ^{-1}}{\textstyle\textstyle\int _{n-1}^{n}}F(y)\mathrm{d}y}{a(n-1)}\le \frac{Z(n)+{\mu ^{-1}}}{a(n-1)}\hspace{1em}\text{a.s.}\]
Analogously,
\[ \frac{Z(t)}{a(t)}\ge \frac{Z(n-1)-{\mu ^{-1}}{\textstyle\textstyle\int _{n-1}^{n}}F(y)\mathrm{d}y}{a(n)}\ge \frac{Z(n-1)-{\mu ^{-1}}}{a(n)}\hspace{1em}\text{a.s.}\]
We conclude that (6) does indeed hold.

It is known (see the proof of Theorem 3.2 in [1]) that
\[ \underset{n\to \infty }{\lim }\hspace{0.1667em}{n^{-1/2}}\bigg(Y(n)-{\int _{[0,\hspace{0.1667em}n]}}F(n-y)\mathrm{d}\nu (y)\bigg)=0\hspace{1em}\text{a.s.}\]
whenever $\mathbb{E}{\eta ^{a}}\lt \infty $ for some $a\gt 0$. We note that the latter limit relation may fail to hold if $\mathbb{E}{\eta ^{a}}=\infty $ for all $a\gt 0$. For instance, it follows from Remark 4.4 in [13] that the upper limit in the last displayed formula is equal to $+\infty $ a.s. whenever $\mathbb{P}\{\xi =c\}=1$ for some $c\gt 0$ and ${\lim \nolimits_{t\to \infty }}(\log \log t)(1-F(t))=1$.

The proof of Theorem 3.2 in [1] operates with power moments and relies heavily upon the assumption $\mathbb{E}{\eta ^{a}}\lt \infty $ for some $a\gt 0$. Without such an assumption another argument is needed, which operates with exponential rather than power moments. In the remainder of the proof we present such an argument, which enables us to prove that
(7)
\[ \underset{t\to \infty }{\lim }\frac{X(t)}{b(t)}=0\hspace{1em}\text{a.s.},\]
thereby completing the proof of the theorem; here, $b(t):={(t\log \log t)^{1/2}}$ for $t\gt \mathrm{e}$.
Fix any $u\ne 0$ and $t\gt 0$. Put ${W_{0}}:=1$ and, for $j\in \mathbb{N}$,
\[\begin{aligned}{}{W_{j}}:=\exp \Big(& u{\sum \limits_{k=0}^{j-1}}({1_{\{{\eta _{k+1}}+{S_{k}}\le t\}}}-F(t-{S_{k}}){1_{\{{S_{k}}\le t\}}})\\ {} & -({u^{2}}{\mathrm{e}^{|u|}}/2){\sum \limits_{k=0}^{j-1}}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}\Big),\end{aligned}\]
and denote by ${\mathcal{G}_{0}}$ the trivial σ-algebra and, for $j\in \mathbb{N}$, by ${\mathcal{G}_{j}}$ the σ-algebra generated by ${({\xi _{k}},{\eta _{k}})_{1\le k\le j}}$. Observe that the variable ${W_{j}}$ is ${\mathcal{G}_{j}}$-measurable for $j\in {\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$. Now we prove that ${({W_{j}},{\mathcal{G}_{j}})_{j\ge 0}}$ is a positive supermartingale. Indeed, writing ${\mathbb{E}_{j}}(\cdot )$ for $\mathbb{E}(\cdot |{\mathcal{G}_{j}})$ and using the inequality ${\mathrm{e}^{x}}\le 1+x+{x^{2}}{\mathrm{e}^{|x|}}/2$ for $x\in \mathbb{R}$ in combination with
\[ {\mathbb{E}_{j-1}}\big({1_{\{{\eta _{j}}+{S_{j-1}}\le t\}}}-F(t-{S_{j-1}}){1_{\{{S_{j-1}}\le t\}}}\big)=0\hspace{1em}\text{a.s.}\]
we infer
\[\begin{aligned}{}& {\mathbb{E}_{j-1}}\exp \big(u({1_{\{{\eta _{j}}+{S_{j-1}}\le t\}}}-F(t-{S_{j-1}}){1_{\{{S_{j-1}}\le t\}}})\big)\\ {} & \hspace{1em}\le 1+({u^{2}}/2){\mathbb{E}_{j-1}}{({1_{\{{T_{j}}\le t\}}}-F(t-{S_{j-1}}))^{2}}\\ {} & \hspace{2em}\times \exp (|u({1_{\{{T_{j}}\le t\}}}-F(t-{S_{j-1}}){1_{\{{S_{j-1}}\le t\}}})|){1_{\{{S_{j-1}}\le t\}}}.\end{aligned}\]
In view of $|{1_{\{{T_{j}}\le t\}}}-F(t-{S_{j-1}}){1_{\{{S_{j-1}}\le t\}}}|\le 1$ a.s., the right-hand side does not exceed
\[\begin{aligned}{}& 1+({u^{2}}{\mathrm{e}^{|u|}}/2)F(t-{S_{j-1}})(1-F(t-{S_{j-1}})){1_{\{{S_{j-1}}\le t\}}}\\ {} & \hspace{1em}\le 1+({u^{2}}{\mathrm{e}^{|u|}}/2)(1-F(t-{S_{j-1}})){1_{\{{S_{j-1}}\le t\}}}\\ {} & \hspace{1em}\le \exp (({u^{2}}{\mathrm{e}^{|u|}}/2)(1-F(t-{S_{j-1}})){1_{\{{S_{j-1}}\le t\}}}).\end{aligned}\]
For the latter inequality we have used $1+x\le {\mathrm{e}^{x}}$ for $x\ge 0$. Thus, we have proved that, for $j\in \mathbb{N}$, ${\mathbb{E}_{j-1}}({W_{j}}/{W_{j-1}})\le 1$ a.s. and thereupon ${\mathbb{E}_{j-1}}{W_{j}}\le {W_{j-1}}$ a.s., that is, ${({W_{j}},{\mathcal{G}_{j}})_{j\ge 0}}$ is indeed a positive supermartingale. As a consequence, the a.s. limit
\[ \underset{j\to \infty }{\lim }{W_{j}}=:{W_{\infty }}=\exp \Big(uX(t)-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}\Big)\]
satisfies $\mathbb{E}{W_{\infty }}\le \mathbb{E}{W_{0}}=1$. In other words, with $u\in \mathbb{R}$ and $t\gt 0$ fixed,
(8)
\[ \mathbb{E}\exp \Big(uX(t)-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}\Big)\le 1.\]
We shall also need another auxiliary result, namely,
(9)
\[ \underset{t\to \infty }{\lim }\hspace{0.1667em}{t^{-1}}\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}=0\hspace{1em}\text{a.s.}\]
Proof.
To prove (9), write, for fixed $a\gt 0$ and $t\gt a$,
\[\begin{aligned}{}& \sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}=\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t-a\}}}\\ {} & \hspace{1em}+\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{t-a\lt {S_{k}}\le t\}}}\le (1-F(a))\nu (t)+(\nu (t)-\nu (t-a)).\end{aligned}\]
By the strong law of large numbers for renewal processes, ${\lim \nolimits_{t\to \infty }}{t^{-1}}\nu (t)={\mu ^{-1}}$ a.s. and ${\lim \nolimits_{t\to \infty }}{t^{-1}}(\nu (t)-\nu (t-a))={\mu ^{-1}}-{\mu ^{-1}}=0$ a.s. Hence, for each fixed $a\gt 0$,
\[ \underset{t\to \infty }{\limsup }{t^{-1}}\sum \limits_{k\ge 0}(1-F(t-{S_{k}})){1_{\{{S_{k}}\le t\}}}\le {\mu ^{-1}}(1-F(a))\hspace{1em}\text{a.s.}\]
Letting $a\to \infty $ we arrive at (9). □

Fix any $\varepsilon \gt 0$ and put ${t_{n}}:=\exp ({n^{3/4}})$ for $n\in \mathbb{N}$. We intend to prove that
(10)
\[ \underset{n\to \infty }{\lim }\frac{X({t_{n}})}{b({t_{n}})}=0\hspace{1em}\text{a.s.}\]
To this end, for $n\ge 3$, define the event
\[ {A_{n}}:=\{X({t_{n}})\gt \varepsilon b({t_{n}})\}.\]
In view of (9), for large n,
\[ \sum \limits_{k\ge 0}(1-F({t_{n}}-{S_{k}})){1_{\{{S_{k}}\le {t_{n}}\}}}\le ({\varepsilon ^{2}}/8){t_{n}}.\]
Using this we obtain, for any $u\gt 0$ and large n,
\[\begin{aligned}{}{A_{n}}& =\{uX({t_{n}})-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F({t_{n}}-{S_{k}})){1_{\{{S_{k}}\le {t_{n}}\}}}\\ {} & \hspace{1em}\gt \varepsilon ub({t_{n}})-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F({t_{n}}-{S_{k}})){1_{\{{S_{k}}\le {t_{n}}\}}}\}\\ {} & \subseteq \{uX({t_{n}})-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F({t_{n}}-{S_{k}})){1_{\{{S_{k}}\le {t_{n}}\}}}\\ {} & \hspace{1em}\gt \varepsilon ub({t_{n}})-({\varepsilon ^{2}}/8)({u^{2}}{\mathrm{e}^{|u|}}/2){t_{n}}\}=:{B_{n}}.\end{aligned}\]
Invoking Markov’s inequality in combination with (8) we infer
\[\begin{aligned}{}\mathbb{P}\{{B_{n}}\}& \le \exp \Big(-\varepsilon ub({t_{n}})+({\varepsilon ^{2}}/8)({u^{2}}{\mathrm{e}^{|u|}}/2){t_{n}}\Big)\\ {} & \hspace{1em}\times \mathbb{E}\exp \Big(uX({t_{n}})-({u^{2}}{\mathrm{e}^{|u|}}/2)\sum \limits_{k\ge 0}(1-F({t_{n}}-{S_{k}})){1_{\{{S_{k}}\le {t_{n}}\}}}\Big)\\ {} & \le \exp \Big(-\varepsilon ub({t_{n}})+({\varepsilon ^{2}}/8)({u^{2}}{\mathrm{e}^{|u|}}/2){t_{n}}\Big).\end{aligned}\]
Let $\rho \gt 0$ satisfy $\exp (8{\varepsilon ^{-1}}\rho )=3/2$. For large $x\gt 0$, ${({x^{-1}}\log \log x)^{1/2}}\le \rho $. Put
\[ u={u_{n}}:=8{\varepsilon ^{-1}}{({t_{n}^{-1}}\log \log {t_{n}})^{1/2}}=8{\varepsilon ^{-1}}{t_{n}^{-1}}b({t_{n}}).\]
Then, for large n,
\[ -\varepsilon ub({t_{n}})+({\varepsilon ^{2}}/8)({u^{2}}{\mathrm{e}^{|u|}}/2){t_{n}}\le -8\log \log {t_{n}}+4{\mathrm{e}^{8{\varepsilon ^{-1}}\rho }}\log \log {t_{n}}=-2\log \log {t_{n}}.\]
Hence, by the Borel–Cantelli lemma, ${\limsup _{n\to \infty }}(X({t_{n}})/b({t_{n}}))\le 0$ a.s. The converse inequality for the lower limit follows analogously. We start with ${A_{n}^{\ast }}:=\{-X({t_{n}})\gt \varepsilon b({t_{n}})\}$ and show, by the same reasoning as above, that ${A_{n}^{\ast }}\subseteq {B_{n}^{\ast }}$, where ${B_{n}^{\ast }}$ only differs from ${B_{n}}$ by the term $-uX({t_{n}})$ in place of $uX({t_{n}})$.

It remains to show that (10) can be lifted to (7). To this end, it suffices to prove that
(11)
\[ \underset{n\to \infty }{\lim }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}|X(u)-X({t_{n}})|}{{t_{n}^{1/2}}}=0\hspace{1em}\text{a.s.}\]
Indeed, (11) in combination with (10) entails
\[ \underset{n\to \infty }{\lim }\frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}|X(u)|}{b({t_{n}})}=0\hspace{1em}\text{a.s.}\]
This ensures (7) because, for large enough n,
\[ \frac{|X(t)|}{b(t)}\le \frac{{\sup _{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}}\hspace{0.1667em}|X(u)|}{b({t_{n}})}\hspace{1em}\text{a.s.}\]
whenever $t\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]$.

We denote by $I={I_{n}}$ a sequence of positive integers to be chosen later. For $j\in {\mathbb{N}_{0}}$ and $n\in \mathbb{N}$, put
\[ {v_{j,\hspace{0.1667em}m}}(n):={t_{n}}+m{2^{-j}}({t_{n+1}}-{t_{n}}),\hspace{1em}0\le m\le {2^{j}},\hspace{1em}\text{and}\hspace{1em}{F_{j}}(n):=\{{v_{j,\hspace{0.1667em}m}}(n):0\le m\le {2^{j}}\}.\]
In what follows, we write ${v_{j,\hspace{0.1667em}m}}$ for ${v_{j,\hspace{0.1667em}m}}(n)$. Observe that ${F_{j}}(n)\subseteq {F_{j+1}}(n)$. For any $u\in [{t_{n}},{t_{n+1}}]$, put
\[ {u_{j}}:=\max \{v\in {F_{j}}(n):v\le u\}={t_{n}}+{2^{-j}}({t_{n+1}}-{t_{n}})\left\lfloor \frac{{2^{j}}(u-{t_{n}})}{{t_{n+1}}-{t_{n}}}\right\rfloor .\]
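The dyadic approximants ${u_{j}}$ can be exercised numerically. The Python sketch below (ours, with a toy interval in place of $[{t_{n}},{t_{n+1}}]$) computes ${u_{j}}$ from the displayed formula and checks that consecutive approximants differ by at most one dyadic step of level j.

```python
import math

def u_j(u, j, t_lo, t_hi):
    """Dyadic approximant of u in [t_lo, t_hi]:
    u_j = t_lo + 2^{-j}(t_hi - t_lo) * floor(2^j (u - t_lo)/(t_hi - t_lo))."""
    step = (t_hi - t_lo) / 2 ** j
    return t_lo + step * math.floor((u - t_lo) / step)

# Toy interval of dyadic length and a generic interior point u.
t_lo, t_hi, u = 10.0, 26.0, 17.3
for j in range(1, 8):
    prev, cur = u_j(u, j - 1, t_lo, t_hi), u_j(u, j, t_lo, t_hi)
    # u_{j-1} equals u_j or the left dyadic neighbour of u_j at level j.
    assert prev == cur or prev == cur - (t_hi - t_lo) / 2 ** j
```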
An important observation is that either ${u_{j-1}}={u_{j}}$ or ${u_{j-1}}={u_{j}}-{2^{-j}}({t_{n+1}}-{t_{n}})$. Necessarily, ${u_{j}}={v_{j,m}}$ for some $0\le m\le {2^{j}}$, so that either ${u_{j-1}}={v_{j,m}}$ or ${u_{j-1}}={v_{j,m-1}}$. Write
\[\begin{aligned}{}& \underset{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}{\sup }|X(u)-X({t_{n}})|\\ {} & \hspace{1em}=\underset{0\le j\le {2^{I}}-1}{\max }\underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }|(X({v_{I,\hspace{0.1667em}j}})-X({t_{n}}))+(X({v_{I,\hspace{0.1667em}j}}+z)-X({v_{I,\hspace{0.1667em}j}}))|\\ {} & \hspace{1em}\le \underset{0\le j\le {2^{I}}-1}{\max }|X({v_{I,\hspace{0.1667em}j}})-X({t_{n}})|\\ {} & \hspace{2em}+\underset{0\le j\le {2^{I}}-1}{\max }\underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }|X({v_{I,\hspace{0.1667em}j}}+z)-X({v_{I,\hspace{0.1667em}j}})|\hspace{1em}\text{a.s.}\end{aligned}\]
For $u\in {F_{I}}(n)$,
\[\begin{aligned}{}|X(u)-X({t_{n}})|& =\Big|{\sum \limits_{j=1}^{I}}(X({u_{j}})-X({u_{j-1}}))+X({u_{0}})-X({t_{n}})\Big|\\ {} & \le {\sum \limits_{j=0}^{I}}\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|.\end{aligned}\]
With this at hand, we obtain
(12)
\[\begin{aligned}{}& \underset{u\in [{t_{n}},\hspace{0.1667em}{t_{n+1}}]}{\sup }|X(u)-X({t_{n}})|\le {\sum \limits_{j=0}^{I}}\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|\\ {} & \hspace{1em}+\underset{0\le j\le {2^{I}}-1}{\max }\underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }|X({v_{I,\hspace{0.1667em}j}}+z)-X({v_{I,\hspace{0.1667em}j}})|\hspace{1em}\text{a.s.}\end{aligned}\]
We first show that, for all $\varepsilon \gt 0$,
(13)
\[ \sum \limits_{n\ge 1}\mathbb{P}\Big\{{\sum \limits_{j=0}^{I}}\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon {t_{n}^{1/2}}\Big\}\lt \infty .\]
Let $\ell \in \mathbb{N}$. As a preparation, we derive an appropriate upper bound for $\mathbb{E}{(X(u)-X(v))^{2\ell }}$ for $u,v\gt 0$, $u\gt v$. Observe that $X(u)-X(v)$ is equal to the a.s. limit ${\lim \nolimits_{j\to \infty }}R(j,u,v)$, where ${(R(j,u,v),{\mathcal{G}_{j}})_{j\ge 0}}$ is a martingale defined by
\[ R(0,u,v):=0,\hspace{1em}R(j,u,v):={\sum \limits_{k=0}^{j-1}}({1_{\{v\lt {\eta _{k+1}}+{S_{k}}\le u\}}}-F(u-{S_{k}})+F(v-{S_{k}})),\hspace{1em}j\in \mathbb{N},\]
and, as before, ${\mathcal{G}_{0}}$ denotes the trivial σ-algebra and, for $j\in \mathbb{N}$, ${\mathcal{G}_{j}}$ denotes the σ-algebra generated by ${({\xi _{k}},{\eta _{k}})_{1\le k\le j}}$. Recall that $F(t)=0$ for $t\lt 0$. By the Burkholder–Davis–Gundy inequality, see, for instance, Theorem 11.3.2 in [7],
\[\begin{aligned}{}& \mathbb{E}{(X(u)-X(v))^{2\ell }}\le C\Big(\mathbb{E}\Big(\sum \limits_{k\ge 0}\mathbb{E}{\big({(R(k+1,u,v)-R(k,u,v))^{2}}|{\mathcal{G}_{k}}\big)\Big)^{\ell }}\\ {} & \hspace{2em}+\sum \limits_{k\ge 0}\mathbb{E}{(R(k+1,u,v)-R(k,u,v))^{2\ell }}\Big)\\ {} & \hspace{1em}=C\Big(\mathbb{E}{\Big(\sum \limits_{k\ge 0}(F(u-{S_{k}})-F(v-{S_{k}}))(1-F(u-{S_{k}})+F(v-{S_{k}}))\Big)^{\ell }}\\ {} & \hspace{2em}+\sum \limits_{k\ge 0}\mathbb{E}{({1_{\{v\lt {\eta _{k+1}}+{S_{k}}\le u\}}}-F(u-{S_{k}})+F(v-{S_{k}}))^{2\ell }}\Big)=:C(A(u,v)+B(u,v))\end{aligned}\]
for a positive constant C. Let $f:[0,\infty )\to [0,\infty )$ be a locally bounded function. It is shown in the proof of Lemma A.3 in [1] that $\mathbb{E}{(\nu (1))^{\ell }}\lt \infty $ and that
(14)
\[ \mathbb{E}{\Big({\int _{[0,\hspace{0.1667em}t]}}f(t-y)\mathrm{d}\nu (y)\Big)^{\ell }}\le \mathbb{E}{(\nu (1))^{\ell }}{\Big({\sum \limits_{n=0}^{\lfloor t\rfloor }}\underset{y\in [n,\hspace{0.1667em}n+1)}{\sup }\hspace{0.1667em}f(y)\Big)^{\ell }}.\]
Further,
\[\begin{aligned}{}& A(u,v)=\mathbb{E}\Big({\int _{(v,\hspace{0.1667em}u]}}F(u-y)(1-F(u-y))\mathrm{d}\nu (y)\\ {} & \hspace{2em}+{\int _{[0,\hspace{0.1667em}v]}}(F(u-y)-F(v-y))(1-F(u-y)+F(v-y))\mathrm{d}\nu (y){\Big)^{\ell }}\\ {} & \hspace{1em}\le {2^{\ell -1}}\Big(\mathbb{E}{\Big({\int _{(v,\hspace{0.1667em}u]}}F(u-y)(1-F(u-y))\mathrm{d}\nu (y)\Big)^{\ell }}\\ {} & \hspace{2em}+\mathbb{E}{\Big({\int _{[0,\hspace{0.1667em}v]}}(F(u-y)-F(v-y))(1-F(u-y)+F(v-y))\mathrm{d}\nu (y)\Big)^{\ell }}\\ {} & \hspace{1em}\le {2^{\ell -1}}\Big(\mathbb{E}{\Big({\int _{[0,\hspace{0.1667em}u]}}{1_{[0,\hspace{0.1667em}u-v)}}(u-y)\mathrm{d}\nu (y)\Big)^{\ell }}\\ {} & \hspace{2em}+\mathbb{E}{\Big({\int _{[0,\hspace{0.1667em}v]}}(F(u-y)-F(v-y))\mathrm{d}\nu (y)\Big)^{\ell }}\Big)\\ {} & \hspace{1em}=:{2^{\ell -1}}({A_{1}}(u,v)+{A_{2}}(u,v)).\end{aligned}\]
Using (14) with $t=u$ and $f(y)={1_{[0,\hspace{0.1667em}u-v)}}(y)$ and then with $t=v$ and $f(y)=F(u-v+y)-F(y)$ we infer
\[ {A_{1}}(u,v)\le \mathbb{E}{(\nu (1))^{\ell }}{\Big({\sum \limits_{n=0}^{\lfloor u\rfloor }}\underset{y\in [n,\hspace{0.1667em}n+1)}{\sup }\hspace{0.1667em}{1_{[0,\hspace{0.1667em}u-v)}}(y)\Big)^{\ell }}=\mathbb{E}{(\nu (1))^{\ell }}{(\lceil u-v\rceil )^{\ell }},\]
where $x\mapsto \lceil x\rceil $ is the ceiling function, and
\[\begin{aligned}{}{A_{2}}(u,v)& \le \mathbb{E}{(\nu (1))^{\ell }}{\Big({\sum \limits_{n=0}^{\lfloor v\rfloor }}\underset{y\in [n,\hspace{0.1667em}n+1)}{\sup }\hspace{0.1667em}(F(u-v+y)-F(y))\Big)^{\ell }}\\ {} & \le \mathbb{E}{(\nu (1))^{\ell }}{\Big({\sum \limits_{n=0}^{\lfloor v\rfloor }}(F(\lceil u-v\rceil +n+1)-F(n))\Big)^{\ell }}\\ {} & =\mathbb{E}{(\nu (1))^{\ell }}{\Big({\sum \limits_{n=0}^{\lceil u-v\rceil }}(F(\lfloor v\rfloor +1+n)-F(n))\Big)^{\ell }}\le \mathbb{E}{(\nu (1))^{\ell }}{(\lceil u-v\rceil +1)^{\ell }}.\end{aligned}\]
Finally,
\[\begin{aligned}{}B(u,v)& \le \sum \limits_{k\ge 0}\mathbb{E}{({1_{\{v\lt {\eta _{k+1}}+{S_{k}}\le u\}}}-F(u-{S_{k}})+F(v-{S_{k}}))^{2}}\\ {} & \le 2\mathbb{E}\nu (1)(\lceil u-v\rceil +1)\le 2\mathbb{E}\nu (1){(\lceil u-v\rceil +1)^{\ell }}\end{aligned}\]
and thereupon
(15)
\[ \mathbb{E}{(X(u)-X(v))^{2\ell }}\le {C_{1}}{(\lceil u-v\rceil +1)^{\ell }}\]
for a positive constant ${C_{1}}$ which depends only on $\ell $.
Note that ${v_{j,\hspace{0.1667em}m}}-{v_{j,\hspace{0.1667em}m-1}}={2^{-j}}({t_{n+1}}-{t_{n}})$. Put $I={I_{n}}:=\lfloor {\log _{2}}({2^{-1}}({t_{n+1}}-{t_{n}}))\rfloor $. We claim that there exists a constant ${C_{2}}\gt 0$ such that ${C_{1}}{(\lceil {2^{-j}}({t_{n+1}}-{t_{n}})\rceil +1)^{\ell }}\le {C_{2}}{2^{-j\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}$ whenever $j\in \mathbb{N}$, $j\le I$. Indeed,
\[\begin{aligned}{}{(\lceil {2^{-j}}({t_{n+1}}-{t_{n}})\rceil +1)^{\ell }}& \le {({2^{-j}}({t_{n+1}}-{t_{n}})+2)^{\ell }}\le {2^{\ell -1}}({2^{-j\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}+{2^{\ell }})\\ {} & \le {2^{\ell }}{2^{-j\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}\end{aligned}\]
having utilized ${2^{-j}}({t_{n+1}}-{t_{n}})\ge 2$ for $j\le I$. Invoking (15) we then obtain, for nonnegative integer $j\le I$,
(16)
\[ \mathbb{E}{(X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}}))^{2\ell }}\le {C_{1}}{(\lceil {2^{-j}}({t_{n+1}}-{t_{n}})\rceil +1)^{\ell }}\le {C_{2}}{2^{-j\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}\]
and thereupon
\[\begin{aligned}{}\mathbb{E}\big(\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}{(X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}}))^{2\ell }}\big)& \le {\sum \limits_{m=1}^{{2^{j}}}}\mathbb{E}{(X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}}))^{2\ell }}\\ {} & \le {C_{2}}{2^{-j(\ell -1)}}{({t_{n+1}}-{t_{n}})^{\ell }}.\end{aligned}\]
By the triangle inequality for the ${L_{2\ell }}$-norm,
\[\begin{aligned}{}& \mathbb{E}{\Big({\sum \limits_{j=0}^{I}}\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|\Big)^{2\ell }}\\ {} & {\hspace{1em}\le \Big({\sum \limits_{j=0}^{I}}\big(\mathbb{E}{\big(\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}{(X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}}))^{2\ell }}\big)\big)^{1/(2\ell )}}\Big)^{2\ell }}\\ {} & \hspace{1em}\le {C_{2}}{({t_{n+1}}-{t_{n}})^{\ell }}{\big(\sum \limits_{j\ge 0}{2^{-j(\ell -1)/(2\ell )}}\big)^{2\ell }}=:{C_{3}}{({t_{n+1}}-{t_{n}})^{\ell }}.\end{aligned}\]
By Markov’s inequality,
\[ \mathbb{P}\Big\{{\sum \limits_{j=0}^{I}}\underset{1\le m\le {2^{j}}}{\max }\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|\gt \varepsilon {t_{n}^{1/2}}\Big\}\le {C_{3}}{\varepsilon ^{-2\ell }}{t_{n}^{-\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}.\]
Since ${t_{n}^{-1}}({t_{n+1}}-{t_{n}})\sim (3/4){n^{-1/4}}$ as $n\to \infty $, (13) follows upon setting $\ell =6$, say. Invoking the Borel–Cantelli lemma we infer
\[ \underset{n\to \infty }{\lim }\frac{{\textstyle\sum _{j=0}^{I}}{\max _{1\le m\le {2^{j}}}}\hspace{0.1667em}|X({v_{j,\hspace{0.1667em}m}})-X({v_{j,\hspace{0.1667em}m-1}})|}{{t_{n}^{1/2}}}=0\hspace{1em}\text{a.s.}\]
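The spacing claim ${t_{n}^{-1}}({t_{n+1}}-{t_{n}})\sim (3/4){n^{-1/4}}$ for ${t_{n}}=\exp ({n^{3/4}})$ is easy to check numerically without ever forming the huge numbers ${t_{n}}$, since ${t_{n}^{-1}}({t_{n+1}}-{t_{n}})=\exp ({(n+1)^{3/4}}-{n^{3/4}})-1$. The sketch below (ours, purely illustrative) does exactly this.

```python
import math

def relative_spacing(n):
    """t_n^{-1}(t_{n+1} - t_n) for t_n = exp(n^{3/4}), computed via expm1
    so that the astronomically large t_n are never formed."""
    return math.expm1((n + 1) ** 0.75 - n ** 0.75)

# The ratio to the claimed asymptotic (3/4) n^{-1/4} should approach 1.
n = 10 ** 8
assert abs(relative_spacing(n) / (0.75 * n ** -0.25) - 1) < 1e-2
```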
Now we proceed with the analysis of the second summand in (12). Put $M(t):={\textstyle\int _{[0,\hspace{0.1667em}t]}}F(t-y)\mathrm{d}\nu (y)$ for $t\ge 0$. Using the equality $X(t)=N(t)-M(t)$, where $N(t):=Y(t)$, and a.s. monotonicity of N and M we infer
\[\begin{aligned}{}& \underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }|X({v_{I,\hspace{0.1667em}j}}+z)-X({v_{I,\hspace{0.1667em}j}})|\\ {} & \hspace{1em}\le \underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }(N({v_{I,\hspace{0.1667em}j}}+z)-N({v_{I,\hspace{0.1667em}j}}))\\ {} & \hspace{2em}+\underset{z\in [0,\hspace{0.1667em}{v_{I,\hspace{0.1667em}j+1}}-{v_{I,\hspace{0.1667em}j}}]}{\sup }\hspace{0.1667em}(M({v_{I,\hspace{0.1667em}j}}+z)-M({v_{I,\hspace{0.1667em}j}}))\\ {} & \hspace{1em}=(N({v_{I,\hspace{0.1667em}j+1}})-N({v_{I,\hspace{0.1667em}j}}))+(M({v_{I,\hspace{0.1667em}j+1}})-M({v_{I,\hspace{0.1667em}j}})).\end{aligned}\]
Observe that
\[\begin{aligned}{}\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}(N({v_{I,\hspace{0.1667em}j+1}})-N({v_{I,\hspace{0.1667em}j}}))& \le \underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}|X({v_{I,\hspace{0.1667em}j+1}})-X({v_{I,\hspace{0.1667em}j}})|\\ {} & \hspace{1em}+\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}(M({v_{I,\hspace{0.1667em}j+1}})-M({v_{I,\hspace{0.1667em}j}})).\end{aligned}\]
Hence, according to the Borel–Cantelli lemma, it is enough to prove that, for all $\varepsilon \gt 0$,
(17)
\[ \sum \limits_{n\ge 1}\mathbb{P}\big\{\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}(M({v_{I,\hspace{0.1667em}j+1}})-M({v_{I,\hspace{0.1667em}j}}))\gt \varepsilon {t_{n}^{1/2}}\big\}\lt \infty \]
and
(18)
\[ \sum \limits_{n\ge 1}\mathbb{P}\big\{\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}|X({v_{I,\hspace{0.1667em}j+1}})-X({v_{I,\hspace{0.1667em}j}})|\gt \varepsilon {t_{n}^{1/2}}\big\}\lt \infty .\]
Arguing as above we conclude that, for $u,v\gt 0$, $u\gt v$,
\[\begin{aligned}{}\mathbb{E}{(M(u)-M(v))^{\ell }}& =\mathbb{E}{\Big({\int _{(v,\hspace{0.1667em}u]}}F(u-y)\mathrm{d}\nu (y)+{\int _{[0,\hspace{0.1667em}v]}}(F(u-y)-F(v-y))\mathrm{d}\nu (y)\Big)^{\ell }}\\ {} & \le {2^{\ell -1}}\mathbb{E}{(\nu (1))^{\ell }}{(\lceil u-v\rceil +1)^{\ell }}.\end{aligned}\]
As a consequence, for nonnegative integer $j\le I$ and a constant ${C_{4}}\gt 0$,
\[ \mathbb{E}{(M({v_{I,\hspace{0.1667em}j+1}})-M({v_{I,\hspace{0.1667em}j}}))^{\ell }}\le {C_{4}}{2^{-I\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}.\]
By Markov’s inequality and our choice of I,
\[\begin{aligned}{}\mathbb{P}\{\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}(M({v_{I,\hspace{0.1667em}j+1}})-M({v_{I,\hspace{0.1667em}j}}))\gt \varepsilon {t_{n}^{1/2}}\}& \le {C_{4}}{\varepsilon ^{-\ell }}{2^{-I(\ell -1)}}{t_{n}^{-\ell /2}}{({t_{n+1}}-{t_{n}})^{\ell }}\\ {} & \le {C_{4}}{\varepsilon ^{-\ell }}{2^{2(\ell -1)}}{t_{n}^{-\ell /2}}({t_{n+1}}-{t_{n}}).\end{aligned}\]
Hence, (17) follows upon choosing $\ell \gt 2$. To prove (18), we invoke (16) which enables us to conclude that
\[\begin{aligned}{}\mathbb{P}\{\underset{0\le j\le {2^{I}}-1}{\max }\hspace{0.1667em}|X({v_{I,\hspace{0.1667em}j+1}})-X({v_{I,\hspace{0.1667em}j}})|\gt \varepsilon {t_{n}^{1/2}}\}& \le {C_{2}}{\varepsilon ^{-2\ell }}{2^{-I(\ell -1)}}{t_{n}^{-\ell }}{({t_{n+1}}-{t_{n}})^{\ell }}\\ {} & \le {C_{2}}{\varepsilon ^{-2\ell }}{2^{2(\ell -1)}}{t_{n}^{-\ell }}({t_{n+1}}-{t_{n}}).\end{aligned}\]
Choosing $\ell \gt 1$ we arrive at (18). The proof of Theorem 1 is complete.
3 Auxiliary results
To prove Theorem 2, we need some auxiliary results on the iterated perturbed random walks. Lemma 1 is a known result, see Assertion 1 in [16].
Lemma 1.
For $k\in \mathbb{N}$,
(19)
\[ \underset{t\to \infty }{\lim }\frac{{V_{k}}(t)}{{t^{k}}}=\frac{1}{k!\hspace{0.1667em}{\mu ^{k}}}.\]
Put
(20)
\[ U(t):=\sum \limits_{k\ge 0}\mathbb{P}\{{S_{k}}\le t\}=\mathbb{E}\nu (t),\hspace{1em}t\ge 0,\]
so that U is the renewal function.
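In the simplest deterministic case $\xi \equiv c$ the renewal function is explicit, $U(t)=\lfloor t/c\rfloor +1$ for $t\ge 0$, which makes properties such as the subadditivity $U(x+h)\le U(x)+U(h)$, invoked below via Theorem 1.7 in [17], easy to check numerically. A short Python sketch (ours, purely illustrative):

```python
import math

def U(t, c=1.0):
    """Renewal function of the walk with xi = c a.s.:
    U(t) = E nu(t) = #{k >= 0 : k c <= t} = floor(t/c) + 1 for t >= 0."""
    return math.floor(t / c) + 1 if t >= 0 else 0

# Subadditivity U(x + h) <= U(x) + U(h) of the renewal function.
grid = [0.25 * k for k in range(100)]
assert all(U(x + h) <= U(x) + U(h) for x in grid for h in grid)
```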
Lemma 2.
For all $x,h\ge 0$ and $k\in \mathbb{N}$,
(21)
\[ {V_{k}}(x+h)-{V_{k}}(x)\le U(h){(V(x+h))^{k-1}}.\]
Proof.
We use mathematical induction. For $k=1$, write
(22)
\[\begin{aligned}{}& V(x+h)-V(x)={\int _{[0,\hspace{0.1667em}x+h]}}U(x+h-y)\mathrm{d}F(y)-{\int _{[0,\hspace{0.1667em}x]}}U(x-y)\mathrm{d}F(y)\\ {} & \hspace{1em}={\int _{(x,\hspace{0.1667em}x+h]}}U(x+h-y)\mathrm{d}F(y)+{\int _{[0,\hspace{0.1667em}x]}}(U(x+h-y)-U(x-y))\mathrm{d}F(y)\\ {} & \hspace{1em}\le U(h)(F(x+h)-F(x))+U(h)F(x)\le U(h).\end{aligned}\]
The penultimate inequality is justified by subadditivity of the renewal function U, see Theorem 1.7 on p. 10 in [17], and its monotonicity.

Assume that inequality (21) holds for $k\le l-1$. Note that (2) implies that ${V_{l-1}}(h)\le {(V(h))^{l-1}}\le U(h){(V(x+h))^{l-2}}$ for $l\ge 2$ and $h\ge 0$. Using this and the induction assumption, we have
\[\begin{aligned}{}& {V_{l}}(x+h)-{V_{l}}(x)={\int _{[0,\hspace{0.1667em}x]}}({V_{l-1}}(x+h-y)-{V_{l-1}}(x-y))\mathrm{d}V(y)\\ {} & \hspace{2em}+{\int _{(x,x+h]}}{V_{l-1}}(x+h-y)\mathrm{d}V(y)\\ {} & \hspace{1em}\le U(h){\int _{[0,\hspace{0.1667em}x]}}{(V(x+h-y))^{l-2}}\mathrm{d}V(y)+{V_{l-1}}(h)(V(x+h)-V(x))\\ {} & \hspace{1em}\le U(h){(V(x+h))^{l-2}}\cdot V(x)+U(h){(V(x+h))^{l-2}}(V(x+h)-V(x))\\ {} & \hspace{1em}=U(h){(V(x+h))^{l-1}}.\end{aligned}\]
□

Lemma 3.
For $k\in \mathbb{N}$ put ${a_{k}}(t):=\mathrm{Var}\hspace{0.1667em}{Y_{k}}(t)=\mathbb{E}{({Y_{k}}(t)-{V_{k}}(t))^{2}}$ for $t\ge 0$. Then
(23)
\[ {a_{k}}(t)=O({t^{2k-1}}),\hspace{1em}t\to \infty .\]
Proof.
We use mathematical induction. For $k=1$, write
\[\begin{aligned}{}{Y_{1}}(t)-{V_{1}}(t)& =\sum \limits_{j\ge 1}({1_{\{{\eta _{j}}+{S_{j-1}}\le t\}}}-F(t-{S_{j-1}}))+(\sum \limits_{j\ge 0}F(t-{S_{j}})-{V_{1}}(t))\\ {} & =:{I_{1}}(t)+{I_{2}}(t).\end{aligned}\]
Note that $\mathrm{Var}\hspace{0.1667em}{Y_{1}}(t)=\mathbb{E}{({Y_{1}}(t)-{V_{1}}(t))^{2}}\le 2(\mathbb{E}{({I_{1}}(t))^{2}}+\mathbb{E}{({I_{2}}(t))^{2}})$. Let U be as in (20). We have
\[ \mathbb{E}{({I_{1}}(t))^{2}}={\int _{[0,\hspace{0.1667em}t]}}F(t-y)(1-F(t-y))\mathrm{d}U(y)\le {\int _{[0,\hspace{0.1667em}t]}}(1-F(t-y))\mathrm{d}U(y).\]
If $\mathbb{E}\eta =\infty $, then Lemma 6.2.9 in [9] with ${r_{1}}=0$ and ${r_{2}}=1$ yields
\[ {\int _{[0,\hspace{0.1667em}t]}}(1-F(t-y))\mathrm{d}U(y)\sim \frac{1}{\mu }{\int _{0}^{t}}(1-F(y))\mathrm{d}y=o(t),\hspace{1em}t\to \infty ,\]
where $\mu =\mathbb{E}\xi \lt \infty $. If $\mathbb{E}\eta \lt \infty $, then ${\textstyle\int _{[0,\hspace{0.1667em}t]}}(1-F(t-y))\mathrm{d}U(y)=O(1)$ as $t\to \infty $ by the key renewal theorem. Thus, in any case, $\mathbb{E}{({I_{1}}(t))^{2}}=o(t)$ as $t\to \infty $.

In the proof of Lemma 4.2 in [8] it is shown that
(24)
\[ \mathbb{E}\underset{s\in [0,\hspace{0.1667em}t]}{\sup }{(\nu (s)-U(s))^{2}}=O(t),\hspace{1em}t\to \infty ,\]
where $\nu (s)$ is the same as in (4). Therefore, almost surely
(25)
\[\begin{aligned}{}|{I_{2}}(t)|& =\Big|{\int _{[0,\hspace{0.1667em}t]}}F(t-y)\mathrm{d}(\nu (y)-U(y))\Big|=\Big|{\int _{[0,\hspace{0.1667em}t]}}(\nu (t-y)-U(t-y))\mathrm{d}F(y)\Big|\\ {} & \le {\int _{[0,\hspace{0.1667em}t]}}|\nu (t-y)-U(t-y)|\mathrm{d}F(y)\le \underset{s\in [0,\hspace{0.1667em}t]}{\sup }|\nu (s)-U(s)|\cdot F(t)\\ {} & \le \underset{s\in [0,\hspace{0.1667em}t]}{\sup }|\nu (s)-U(s)|.\end{aligned}\]
Consequently, according to (24), $\mathbb{E}{({I_{2}}(t))^{2}}\le \mathbb{E}{\sup _{s\in [0,\hspace{0.1667em}t]}}{(\nu (s)-U(s))^{2}}=O(t)$ as $t\to \infty $. We have proved that ${a_{1}}(t)=O(t)$ as $t\to \infty $.

Assume that relation (23) holds for $k\le l-1$. We shall use the representation
\[ {Y_{l}}(t)-{V_{l}}(t)=\sum \limits_{r\ge 1}\big({Y_{l-1}^{(r)}}(t-{T_{r}})-{V_{l-1}}(t-{T_{r}})\big)+\big(\sum \limits_{r\ge 1}{V_{l-1}}(t-{T_{r}})-{V_{l}}(t)\big)=:{J_{l}}(t)+{K_{l}}(t),\]
which particularly entails
\[ {a_{l}}(t)=\mathbb{E}{({Y_{l}}(t)-{V_{l}}(t))^{2}}=\mathbb{E}{({J_{l}}(t))^{2}}+\mathbb{E}{({K_{l}}(t))^{2}}.\]
Note that, according to the induction assumption, there exist $A\gt 0$ and ${t_{0}}\gt 0$ such that ${a_{l-1}}(t)\le A{t^{2l-3}}$ for all $t\ge {t_{0}}$. Therefore, using (19) and (22),
(26)
\[\begin{aligned}{}\mathbb{E}{({J_{l}}(t))^{2}}& ={\int _{[0,\hspace{0.1667em}t]}}{a_{l-1}}(t-y)\mathrm{d}V(y)\\ {} & ={\int _{[0,\hspace{0.1667em}t-{t_{0}}]}}{a_{l-1}}(t-y)\mathrm{d}V(y)+{\int _{(t-{t_{0}},\hspace{0.1667em}t]}}{a_{l-1}}(t-y)\mathrm{d}V(y)\\ {} & \le A{\int _{[0,\hspace{0.1667em}t]}}{(t-y)^{2l-3}}\mathrm{d}V(y)+\underset{s\in [0,{t_{0}}]}{\sup }{a_{l-1}}(s)(V(t)-V(t-{t_{0}}))\\ {} & \le A{t^{2l-3}}V(t)+O(1)=O({t^{2l-2}}),\hspace{1em}t\to \infty .\end{aligned}\]
Further,
\[\begin{aligned}{}{K_{l}}(t)& =\sum \limits_{r\ge 1}\big({V_{l-1}}(t-{T_{r}})-({V_{l-1}}\ast F)(t-{S_{r-1}})\big)\\ {} & \hspace{2em}+\big(\sum \limits_{r\ge 1}({V_{l-1}}\ast F)(t-{S_{r-1}})-{V_{l}}(t)\big)\\ {} & =:{K_{l1}}(t)+{K_{l2}}(t).\end{aligned}\]
Using ${V_{l}}={V_{l-1}}\ast F\ast U$ and the same reasoning as in (25) we obtain
\[ |{K_{l2}}(t)|=\Big|{\int _{[0,\hspace{0.1667em}t]}}({V_{l-1}}\ast F)(t-y)\mathrm{d}(\nu (y)-U(y))\Big|\le \underset{s\in [0,\hspace{0.1667em}t]}{\sup }|\nu (s)-U(s)|\cdot {V_{l-1}}(t)\hspace{3.33333pt}\hspace{2.5pt}\text{a.s.}\]
Therefore, in view of (19) and (24),
\[ \mathbb{E}{({K_{l2}}(t))^{2}}\le {({V_{l-1}}(t))^{2}}\hspace{0.1667em}\mathbb{E}\underset{s\in [0,\hspace{0.1667em}t]}{\sup }{(\nu (s)-U(s))^{2}}=O({t^{2l-2}})\cdot O(t)=O({t^{2l-1}}),\hspace{1em}t\to \infty .\]
Finally,
\[\begin{aligned}{}\mathbb{E}{({K_{l1}}(t))^{2}}& =\sum \limits_{r\ge 1}\mathbb{E}{\big({V_{l-1}}(t-{T_{r}})-({V_{l-1}}\ast F)(t-{S_{r-1}})\big)^{2}}\\ {} & \le \sum \limits_{r\ge 1}\Big[\mathbb{E}{\big({V_{l-1}}(t-{T_{r}})\big)^{2}}+\mathbb{E}{\big(({V_{l-1}}\ast F)(t-{S_{r-1}})\big)^{2}}\Big]\\ {} & ={\int _{[0,\hspace{0.1667em}t]}}{\big({V_{l-1}}(t-y)\big)^{2}}\mathrm{d}V(y)+{\int _{[0,\hspace{0.1667em}t]}}{\big(({V_{l-1}}\ast F)(t-y)\big)^{2}}\mathrm{d}U(y)\\ {} & \le {\big({V_{l-1}}(t)\big)^{2}}\cdot V(t)+{\big(({V_{l-1}}\ast F)(t)\big)^{2}}\cdot U(t)=O({t^{2l-1}}),\hspace{1em}t\to \infty .\end{aligned}\]
For the last equality we have used $({V_{l-1}}\ast F)(t)\le {V_{l-1}}(t)$ for $t\ge 0$. The proof of Lemma 3 is complete. □

We shall also need two results on the standard random walks. The next lemma is a consequence of formula (33) in [1], with $\eta =\xi $.
Lemma 4.
For all $c\gt 0$,
\[ \underset{t\to \infty }{\lim }\hspace{0.1667em}{t^{-c}}\underset{s\in [0,\hspace{0.1667em}t]}{\sup }(\nu (s+1)-\nu (s))=0\hspace{1em}\textit{a.s.}\]
Lemma 5.
Let ${K_{1}},\hspace{3.57777pt}{K_{2}}\hspace{3.57777pt}:\hspace{3.57777pt}[0,\infty )\to [0,\infty )$ be nondecreasing functions and ${K_{1}}(t)\ge {K_{2}}(t)$ for $t\ge 0$. Assume that
(27)
\[ \underset{t\to \infty }{\limsup }\frac{{K_{1}}(t)+{K_{2}}(t)}{{\textstyle\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y}\le \lambda \in (0,\infty ).\]
Then, for all $c\gt 0$,
(28)
\[ \underset{t\to \infty }{\lim }\frac{{\textstyle\textstyle\int _{[0,\hspace{0.1667em}t]}}({K_{1}}(t-y)-{K_{2}}(t-y))\mathrm{d}\nu (y)}{{t^{c}}{\textstyle\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y}=0\hspace{1em}\textit{a.s.}\]
Proof.
We use the decomposition
\[ {\int _{[0,\hspace{0.1667em}t]}}({K_{1}}(t-y)-{K_{2}}(t-y))\mathrm{d}\nu (y)={\int _{[0,\hspace{0.1667em}\lfloor t\rfloor ]}}...+{\int _{[\lfloor t\rfloor ,\hspace{0.1667em}t]}}...=:{I_{1}}(t)+{I_{2}}(t).\]
For ${I_{2}}(t)$ we have
\[ {I_{2}}(t)\le {\int _{[\lfloor t\rfloor ,\hspace{0.1667em}t]}}{K_{1}}(t-y)\mathrm{d}\nu (y)\le {K_{1}}(t-\lfloor t\rfloor )(\nu (t)-\nu (\lfloor t\rfloor ))\le {K_{1}}(1)(\nu (t)-\nu (t-1)).\]
Hence, by Lemma 4, for all $c\gt 0$, ${\lim \nolimits_{t\to \infty }}{t^{-c}}{I_{2}}(t)=0$ a.s. It remains to consider ${I_{1}}(t)$:
\[\begin{aligned}{}{I_{1}}(t)& ={K_{1}}(t)-{K_{2}}(t)+{\sum \limits_{j=0}^{\lfloor t\rfloor -1}}{\int _{(j,\hspace{0.1667em}j+1]}}({K_{1}}(t-y)-{K_{2}}(t-y))\mathrm{d}\nu (y)\\ {} & \le {K_{1}}(t)-{K_{2}}(t)+{\sum \limits_{j=0}^{\lfloor t\rfloor -1}}({K_{1}}(t-j)-{K_{2}}(t-j-1))(\nu (j+1)-\nu (j))\\ {} & \le {K_{1}}(t)+\underset{s\in [0,\hspace{0.1667em}\lfloor t\rfloor ]}{\sup }(\nu (s+1)-\nu (s)){\sum \limits_{j=0}^{\lfloor t\rfloor -1}}({K_{1}}(t-j)-{K_{2}}(t-j-1))\\ {} & \le {K_{1}}(t)+\underset{s\in [0,\lfloor t\rfloor ]}{\sup }(\nu (s+1)-\nu (s)){\sum \limits_{j=0}^{\lfloor t\rfloor -1}}({K_{1}}(\lfloor t\rfloor +1-j)-{K_{2}}(\lfloor t\rfloor -1-j))\\ {} & =\underset{s\in [0,\lfloor t\rfloor ]}{\sup }(\nu (s+1)-\nu (s))\left({\int _{2}^{\lfloor t\rfloor }}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y+O({K_{1}}(t)+{K_{2}}(t))\right).\end{aligned}\]
Another application of Lemma 4 yields
\[ \underset{t\to \infty }{\lim }\frac{{I_{1}}(t)}{{t^{c}}{\textstyle\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y}=0\hspace{1em}\hspace{2.5pt}\text{a.s.}\]
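Combining this with the estimate ${\lim \nolimits_{t\to \infty }}{t^{-c}}{I_{2}}(t)=0$ a.s. gives the assertion of the lemma; as a sketch (assuming additionally that ${\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y$ is eventually bounded away from zero, the case ${K_{1}}\equiv {K_{2}}$ being degenerate),
\[ \underset{t\to \infty }{\lim }\frac{{I_{1}}(t)+{I_{2}}(t)}{{t^{c}}{\textstyle\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y}=0\hspace{1em}\text{a.s.},\hspace{1em}\text{for all}\hspace{2.5pt}c\gt 0.\]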
The proof of Lemma 5 is complete. □

4 Proof of Theorem 2
We use the decomposition
(29)
\[ {Y_{j}}(t)-{V_{j}}(t)\hspace{0.1667em}=\hspace{0.1667em}\sum \limits_{k\ge 1}\big({Y_{j-1}^{(k)}}(t-{T_{k}})-{V_{j-1}}(t-{T_{k}})\big)+\sum \limits_{k\ge 1}{V_{j-1}}(t-{T_{k}})-{V_{j}}(t),\hspace{1em}j\hspace{0.1667em}\ge \hspace{0.1667em}2,\hspace{3.33333pt}t\hspace{0.1667em}\ge \hspace{0.1667em}0.\]
The first term of the decomposition is treated in Proposition 1. We first prove Theorem 2 with the help of Proposition 1. Afterwards, a proof of Proposition 1 will be given.
Proof of Theorem 2.
By Proposition 1, the contribution of the first term in (29) normalized by ${({t^{2j-1}}\log \log t)^{1/2}}$ vanishes as $t\to \infty $.
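For completeness, note that the rate furnished by Proposition 1 (its proof below establishes ${\lim \nolimits_{t\to \infty }}{Z_{j}}(t)/{t^{j-1/2}}=0$ a.s.) is indeed sufficient, since $\log \log t\to \infty $ entails
\[ {t^{j-1/2}}={({t^{2j-1}})^{1/2}}=o\big({({t^{2j-1}}\log \log t)^{1/2}}\big),\hspace{1em}t\to \infty .\]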
For the second term in (29), write
\[\begin{aligned}{}& \sum \limits_{k\ge 1}{V_{j-1}}(t-{T_{k}})-{V_{j}}(t)={\int _{[0,\hspace{0.1667em}t]}}Y(t-x)\mathrm{d}{V_{j-1}}(x)-{V_{j}}(t)\\ {} & \hspace{1em}={\int _{[0,\hspace{0.1667em}t]}}\big(Y(t-x)-(F\ast \nu )(t-x)\big)\mathrm{d}{V_{j-1}}(x)\\ {} & \hspace{2em}+\Big({\int _{[0,\hspace{0.1667em}t]}}(F\ast \nu )(t-x)\mathrm{d}{V_{j-1}}(x)-{V_{j}}(t)\Big)=:{A_{1}}(t)+{A_{2}}(t).\end{aligned}\]
According to (7), ${\lim \nolimits_{t\to \infty }}\frac{Y(t)-(F\ast \nu )(t)}{{(t\log \log t)^{1/2}}}=0$ a.s., whence
\[ \underset{t\to \infty }{\lim }\underset{z\in [0,\hspace{0.1667em}t]}{\sup }\frac{|Y(z)-(F\ast \nu )(z)|}{{(t\log \log t)^{1/2}}}=0\hspace{1em}\text{a.s.}\]
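This uniform version follows from (7) by a standard splitting argument: for fixed ${t_{0}}\gt \mathrm{e}$ and $t\ge {t_{0}}$, monotonicity of $z\mapsto {(z\log \log z)^{1/2}}$ on $[{t_{0}},\infty )$ gives
\[ \underset{z\in [0,\hspace{0.1667em}t]}{\sup }\frac{|Y(z)-(F\ast \nu )(z)|}{{(t\log \log t)^{1/2}}}\le \frac{{\sup _{z\in [0,\hspace{0.1667em}{t_{0}}]}}|Y(z)-(F\ast \nu )(z)|}{{(t\log \log t)^{1/2}}}+\underset{z\in [{t_{0}},\hspace{0.1667em}t]}{\sup }\frac{|Y(z)-(F\ast \nu )(z)|}{{(z\log \log z)^{1/2}}},\]
where the first summand vanishes as $t\to \infty $ for every fixed ${t_{0}}$, and the second is a.s. small for large ${t_{0}}$ by (7).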
With this at hand,
\[\begin{aligned}{}& \frac{|{A_{1}}(t)|}{{t^{j-1/2}}{(\log \log t)^{1/2}}}\le \underset{z\in [0,\hspace{0.1667em}t]}{\sup }\frac{|Y(z)-(F\ast \nu )(z)|}{{(t\log \log t)^{1/2}}}\cdot \frac{{V_{j-1}}(t)}{{t^{j-1}}}\longrightarrow 0\hspace{1em}\text{a.s.,}\hspace{1em}t\to \infty ,\\ {} & \hspace{1em}\text{using}\hspace{2.5pt}\frac{{V_{j-1}}(t)}{{t^{j-1}}}\longrightarrow \frac{1}{(j-1)!{\mu ^{j-1}}},\hspace{1em}t\to \infty .\end{aligned}\]
Further, since ${V_{j}}={V_{j-1}}\ast F\ast U$,
\[ {A_{2}}(t)=(F\ast {V_{j-1}}\ast \nu )(t)-(F\ast {V_{j-1}}\ast U)(t).\]
Recall that the distribution of η is arbitrary. Now we show that in the subsequent proof F can be replaced with an absolutely continuous distribution function that has a directly Riemann integrable (dRi) density.

Put $G(x):=1-{\mathrm{e}^{-x}}$ for $x\ge 0$. The function $H:=F\ast G$ is absolutely continuous with the density $h(x)={\textstyle\int _{[0,\hspace{0.1667em}x]}}{\mathrm{e}^{-(x-y)}}\mathrm{d}F(y)$ for $x\ge 0$. Since $x\mapsto {\mathrm{e}^{-x}}$ is dRi on $[0,\infty )$, so is h as a Lebesgue–Stieltjes convolution of a dRi function and a distribution function, see Lemma 6.2.1 (c) in [9]. Note that $H(x)\le F(x)$ for $x\ge 0$. To show that we can work with H instead of F, it suffices to check that
(30)
\[ \underset{t\to \infty }{\lim }\frac{(F\ast {V_{j-1}}\ast \nu )(t)-(H\ast {V_{j-1}}\ast \nu )(t)}{{t^{j-1/2}}}=0\hspace{1em}\text{a.s.}\]
and
(31)
\[ \underset{t\to \infty }{\lim }\frac{(F\ast {V_{j-1}}\ast U)(t)-(H\ast {V_{j-1}}\ast U)(t)}{{t^{j-1/2}}}=0.\]
For (31), write
\[\begin{aligned}{}& (F\ast {V_{j-1}}\ast U)(t)-(H\ast {V_{j-1}}\ast U)(t)={\int _{[0,\hspace{0.1667em}t]}}(1-G(t-x))\mathrm{d}{V_{j}}(x)\\ {} & \hspace{1em}={\int _{[0,\hspace{0.1667em}t]}}{\mathrm{e}^{-(t-x)}}\mathrm{d}{V_{j}}(x)\hspace{0.1667em}\sim \hspace{0.1667em}\Big({\int _{0}^{\infty }}{\mathrm{e}^{-y}}\mathrm{d}y\Big){V_{j-1}}(t)\\ {} & \hspace{1em}={V_{j-1}}(t)\hspace{0.1667em}\sim \hspace{0.1667em}\frac{{t^{j-1}}}{(j-1)!{\mu ^{j-1}}},\hspace{1em}t\hspace{0.1667em}\to \hspace{0.1667em}\infty ,\end{aligned}\]
where the asymptotic equalities are justified by (19) and Theorem 2 in [16]. This proves (31).

To prove (30), we use Lemma 5 with ${K_{1}}(t)=(F\ast {V_{j-1}})(t)$ and ${K_{2}}(t)=(H\ast {V_{j-1}})(t)$ for $t\ge 0$. Note that ${K_{2}}(t)=\mathbb{E}{K_{1}}(t-\theta ){1_{\{\theta \le t\}}}$, where θ is a random variable with the distribution function G, and that
\[\begin{aligned}{}0\le {\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y& ={\int _{0}^{t}}({K_{1}}(y)-\mathbb{E}{K_{1}}(y-\theta ){1_{\{\theta \le y\}}})\mathrm{d}y\\ {} & ={\int _{0}^{t}}{K_{1}}(y)\mathrm{d}y\cdot {\mathrm{e}^{-t}}+\mathbb{E}{\int _{t-\theta }^{t}}{K_{1}}(y)\mathrm{d}y{1_{\{\theta \le t\}}}.\end{aligned}\]
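For the last equality, one may argue as follows (recall that θ has the distribution function G, so $\mathbb{P}\{\theta \gt t\}={\mathrm{e}^{-t}}$): a change of variables gives
\[ \mathbb{E}{\int _{0}^{t}}{K_{1}}(y-\theta ){1_{\{\theta \le y\}}}\mathrm{d}y=\mathbb{E}{1_{\{\theta \le t\}}}{\int _{0}^{t-\theta }}{K_{1}}(y)\mathrm{d}y,\]
whence
\[ {\int _{0}^{t}}{K_{1}}(y)\mathrm{d}y-\mathbb{E}{1_{\{\theta \le t\}}}{\int _{0}^{t-\theta }}{K_{1}}(y)\mathrm{d}y={\int _{0}^{t}}{K_{1}}(y)\mathrm{d}y\cdot \mathbb{P}\{\theta \gt t\}+\mathbb{E}{1_{\{\theta \le t\}}}{\int _{t-\theta }^{t}}{K_{1}}(y)\mathrm{d}y.\]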
Using the Laplace transforms and (19), we have
\[ {K_{1}}(t)\sim {K_{2}}(t)\sim {V_{j-1}}(t)\sim \frac{{t^{j-1}}}{(j-1)!{\mu ^{j-1}}},\hspace{1em}t\to \infty .\]
Therefore,
\[ \underset{t\to \infty }{\lim }\frac{{\textstyle\textstyle\int _{0}^{t}}{K_{1}}(y)\mathrm{d}y\cdot {\mathrm{e}^{-t}}}{{K_{1}}(t)}=0\]
and, in view of monotonicity,
\[ \frac{\mathbb{E}\theta {K_{1}}(t-\theta ){1_{\{\theta \le t\}}}}{{K_{1}}(t)}\le \frac{\mathbb{E}{\textstyle\textstyle\int _{t-\theta }^{t}}{K_{1}}(y)\mathrm{d}y{1_{\{\theta \le t\}}}}{{K_{1}}(t)}\le \mathbb{E}\theta =1.\]
By Lebesgue’s dominated convergence theorem
\[ {\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y\sim {K_{1}}(t)\sim \frac{{t^{j-1}}}{(j-1)!{\mu ^{j-1}}},\hspace{1em}t\to \infty .\]
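From these asymptotics, together with ${K_{1}}(t)\sim {K_{2}}(t)$, the constant in (27) can be read off directly:
\[ \underset{t\to \infty }{\limsup }\frac{{K_{1}}(t)+{K_{2}}(t)}{{\textstyle\textstyle\int _{0}^{t}}({K_{1}}(y)-{K_{2}}(y))\mathrm{d}y}=\underset{t\to \infty }{\lim }\frac{2{K_{1}}(t)}{{K_{1}}(t)}=2.\]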
Thus, condition (27) holds with $\lambda =2$. Consequently, (30) holds by (28) with $c=1/2$.

As a consequence of (30) and (31), we can and do investigate ${\hat{A}_{2}}(t)={\textstyle\int _{[0,\hspace{0.1667em}t]}}(\nu (t-x)-U(t-x))\mathrm{d}(H\ast {V_{j-1}})(x)$ in place of ${A_{2}}(t)$. By Lemma 3.1 in [12], there exists a standard Brownian motion ${(W(t))_{t\ge 0}}$ such that
(32)
\[ \underset{t\to \infty }{\lim }\frac{{\sup _{z\in [0,\hspace{0.1667em}t]}}|\nu (z)-U(z)-\sigma {\mu ^{-3/2}}W(z)|}{{(t\log \log t)^{1/2}}}=0\hspace{1em}\text{a.s.}\]
With this specific ${(W(t))_{t\ge 0}}$, write
\[\begin{aligned}{}{\hat{A}_{2}}(t)& ={\int _{[0,\hspace{0.1667em}t]}}\Big(\nu (t-x)-U(t-x)-\sigma {\mu ^{-3/2}}W(t-x)\Big)\mathrm{d}(H\ast {V_{j-1}})(x)\\ {} & \hspace{1em}+\sigma {\mu ^{-3/2}}{\int _{[0,\hspace{0.1667em}t]}}W(t-x)\mathrm{d}(H\ast {V_{j-1}})(x)=:{B_{1}}(t)+\sigma {\mu ^{-3/2}}{B_{2}}(t).\end{aligned}\]
Then, using (32) and (19), we have
\[\begin{aligned}{}|{B_{1}}(t)|& \le \underset{z\in [0,\hspace{0.1667em}t]}{\sup }|\nu (z)-U(z)-\sigma {\mu ^{-3/2}}W(z)|\cdot (H\ast {V_{j-1}})(t)\\ {} & \le \underset{z\in [0,\hspace{0.1667em}t]}{\sup }|\nu (z)-U(z)-\sigma {\mu ^{-3/2}}W(z)|\cdot {V_{j-1}}(t)\\ {} & =o({({t^{2j-1}}\log \log t)^{1/2}}),\hspace{1em}t\to \infty .\end{aligned}\]
We are left with showing that
\[ C\bigg(\bigg(\frac{(j-1)!{\mu ^{j-1}}{B_{2}}(t)}{{(2{(2j-1)^{-1}}{t^{2j-1}}\log \log t)^{1/2}}}\hspace{0.1667em}:\hspace{0.1667em}t\gt \mathrm{e}\bigg)\bigg)=[-1,1]\hspace{1em}\text{a.s.}\]
Since H is absolutely continuous with a dRi density, the function $H\ast {V_{j-1}}$ is almost everywhere differentiable with
\[ {(H\ast {V_{j-1}})^{\prime }}(x)={\int _{[0,\hspace{0.1667em}x]}}h(x-y)\mathrm{d}{V_{j-1}}(y)\hspace{1em}\text{for almost every}\hspace{2.5pt}x\ge 0.\]
Consequently,
\[ {\int _{[0,\hspace{0.1667em}t]}}W(t-x)\mathrm{d}(H\ast {V_{j-1}})(x)={\int _{[0,\hspace{0.1667em}t]}}W(t-x){(H\ast {V_{j-1}})^{\prime }}(x)\mathrm{d}x.\]
By Theorem 2 in [16], for $j\ge 2$,
(33)
\[ {\int _{[0,\hspace{0.1667em}x]}}h(x-y)\mathrm{d}{V_{j-1}}(y)\sim {\int _{0}^{\infty }}h(y)\mathrm{d}y\cdot \frac{{x^{j-2}}}{(j-2)!{\mu ^{j-1}}}=\frac{{x^{j-2}}}{(j-2)!{\mu ^{j-1}}},\hspace{1em}x\to \infty .\]
In particular, ${(H\ast {V_{j-1}})^{\prime }}$ varies regularly at infinity with index $j-2$, and Proposition 2.4 in [10] yields
\[ C\bigg(\bigg(\frac{(j-1)!{\mu ^{j-1}}}{{t^{j-1}}}\frac{{\textstyle\int _{[0,\hspace{0.1667em}t]}}W(t-x){(H\ast {V_{j-1}})^{\prime }}(x)\mathrm{d}x}{{(2{(2j-1)^{-1}}t\log \log t)^{1/2}}}\hspace{0.1667em}:\hspace{0.1667em}t\gt \mathrm{e}\bigg)\bigg)=[-1,1]\hspace{1em}\text{a.s.}\]
Here, we have used that (33) entails
\[ (H\ast {V_{j-1}})(t)={\int _{0}^{t}}{(H\ast {V_{j-1}})^{\prime }}(x)\mathrm{d}x\sim \frac{{t^{j-1}}}{(j-1)!{\mu ^{j-1}}},\hspace{1em}t\to \infty ,\]
see Proposition 1.5.8 in [2]. The proof of Theorem 2 is complete. □

Finally, we prove Proposition 1.
Proof of Proposition 1.
Put ${Z_{j}}(t)={\textstyle\sum _{k\ge 1}}\big({Y_{j-1}^{(k)}}(t-{T_{k}})-{V_{j-1}}(t-{T_{k}})\big)$ for $t\ge 0$. Relation (23) implies that there exist ${t_{0}}\gt 0$ and $A\gt 0$ such that ${a_{j-1}}(t)\le A{t^{2j-3}}$ for all $t\ge {t_{0}}$. Using the same reasoning as in (26), we have
(34)
\[ \mathbb{E}{({Z_{j}}(t))^{2}}={\int _{[0,\hspace{0.1667em}t]}}{a_{j-1}}(t-x)\mathrm{d}V(x)=O({t^{2j-2}}),\hspace{1em}t\to \infty .\]
By Markov’s inequality and (34), for all $\varepsilon \gt 0$,
\[ \sum \limits_{n\ge 1}\mathbb{P}\left\{\frac{|{Z_{j}}({n^{3/2}})|}{{n^{(3/2)(j-1/2)}}}\gt \varepsilon \right\}\le \sum \limits_{n\ge 1}\frac{\mathbb{E}{({Z_{j}}({n^{3/2}}))^{2}}}{{\varepsilon ^{2}}{n^{3(j-1/2)}}}\lt \infty ,\]
since, by (34), $\mathbb{E}{({Z_{j}}({n^{3/2}}))^{2}}=O({n^{3(j-1)}})$, so that the general term of the majorizing series is $O({n^{-3/2}})$. Hence, by the Borel–Cantelli lemma,
(35)
\[ \underset{n\to \infty }{\lim }\frac{{Z_{j}}({n^{3/2}})}{{n^{(3/2)(j-1/2)}}}=0\hspace{1em}\text{a.s.}\]
It remains to pass from an integer argument to a continuous one. For any $t\ge 0$ there exists $n\in {\mathbb{N}_{0}}$ such that $t\in [{n^{3/2}},{(n+1)^{3/2}})$. By monotonicity,
\[\begin{aligned}{}& \frac{{Z_{j}}(t)}{{t^{j-1/2}}}\le \frac{{Z_{j}}({(n+1)^{3/2}})}{{n^{(3/2)(j-1/2)}}}\\ {} & \hspace{1em}+\frac{{\textstyle\int _{[0,\hspace{0.1667em}{(n+1)^{3/2}}]}}{V_{j-1}}({(n+1)^{3/2}}-x)\mathrm{d}Y(x)-{\textstyle\int _{[0,\hspace{0.1667em}{n^{3/2}}]}}{V_{j-1}}({n^{3/2}}-x)\mathrm{d}Y(x)}{{n^{(3/2)(j-1/2)}}}.\end{aligned}\]
Relation (35) implies that the first summand on the right-hand side converges to 0 a.s. as $n\to \infty $. The second summand is equal to
\[\begin{aligned}{}& {\int _{({n^{3/2}},{(n+1)^{3/2}}]}}{V_{j-1}}({(n+1)^{3/2}}-x)\mathrm{d}Y(x)\\ {} & \hspace{1em}+{\int _{[0,{n^{3/2}}]}}({V_{j-1}}({(n+1)^{3/2}}-x)-{V_{j-1}}({n^{3/2}}-x))\mathrm{d}Y(x)=:{X_{j,1}}(n)+{X_{j,2}}(n).\end{aligned}\]
By monotonicity, for $j\ge 2$, as $n\to \infty $, a.s.
\[\begin{aligned}{}{X_{j,1}}(n)& \le {V_{j-1}}({(n+1)^{3/2}}-{n^{3/2}})(Y({(n+1)^{3/2}})-Y({n^{3/2}}))\\ {} & =O({n^{(j-1)/2}}\cdot {n^{3/2}})=O({n^{j/2+1}})=o({n^{(3/2)(j-1/2)}}).\end{aligned}\]
Here, the penultimate equality is justified by the inequality $Y(t)\le \nu (t)$ for $t\ge 0$, the strong law of large numbers for renewal processes ${\lim \nolimits_{n\to \infty }}{n^{-1}}\nu (n)={\mu ^{-1}}$ a.s. and ${V_{j-1}}({(n+1)^{3/2}}-{n^{3/2}})=O({n^{(j-1)/2}})$ as $n\to \infty $, which holds true by (19).

Using (21), we infer
\[\begin{aligned}{}{X_{j,2}}(n)& \le U({(n+1)^{3/2}}-{n^{3/2}}){(V({(n+1)^{3/2}}))^{j-2}}Y({n^{3/2}})\\ {} & =O({n^{1/2}}\cdot {n^{(3/2)(j-2)}}\cdot {n^{3/2}})=O({n^{(3/2)(j-2/3)}})=o({n^{(3/2)(j-1/2)}})\end{aligned}\]
a.s. as $n\to \infty $. The penultimate equality is secured by the elementary renewal theorem, the strong law of large numbers for renewal processes and the inequality $Y(t)\le \nu (t)$ for $t\ge 0$.

We have shown that
\[ \underset{t\to \infty }{\limsup }\frac{{Z_{j}}(t)}{{t^{j-1/2}}}\le 0\hspace{1em}\text{a.s.}\]
An analogous argument proves the converse inequality for the lower limit. The proof of Proposition 1 is complete. □