1 Introduction
Over the past several decades, fractional Brownian motion (fBm) has remained a widely used stochastic process for modeling various phenomena that exhibit memory, as well as long- or short-range dependence. Initially proposed as a model for turbulence, it now finds applications in diverse fields such as mathematical finance (e.g., modeling stochastic volatility), hydrology, telecommunications (e.g., network traffic with self-similarity), image processing, and physics (e.g., anomalous diffusion). A comprehensive mathematical foundation and stochastic calculus for fBm and related processes can be found in the monograph [25].
To enhance its modeling capabilities in specific contexts and better adapt it to particular problems, numerous generalizations and related processes have been proposed. These include multifractional Brownian motion [31], mixed fractional Brownian motion [34], bifractional Brownian motion [15], subfractional Brownian motion [5], mixed subfractional Brownian motion [34], submixed fractional Brownian motion [14], and generalized fractional Brownian motions [35], among others. A comprehensive description of various types of mixed fractional Gaussian processes can be found in [30].
In recent years, there has been growing interest in tempered versions of fractional Brownian motion. Several types of tempered fractional Brownian motions have been introduced in the literature. In this work, we focus on two prominent examples: tempered fractional Brownian motion (tfBm) and tempered fractional Brownian motion of the second kind (tfBmII), introduced in [23] and [32], respectively. Both processes are based on the Mandelbrot–van Ness representation of fBm, modified by incorporating exponential tempering.
The incorporation of exponential tempering into fBm is primarily motivated by the need to overcome the limitation of classical fBm, which exhibits long-range dependence for $H\gt 1/2$. In many applications, such persistent correlations at all time scales are not realistic: one often observes strong dependence at short or moderate lags, but a much faster decay at longer horizons. Tempered versions of fBm provide a natural remedy, as the exponential factor effectively truncates long memory, yielding processes that are short-range dependent for all $H\gt 0$. At the same time, they retain long-memory-like features at shorter scales and are therefore sometimes described as exhibiting semi-long-range dependence. This intermediate structure makes tfBm and tfBmII attractive in diverse modeling contexts, including finance, where asset returns may display rough or persistent short-term dynamics but limited long-term memory, and in physics or hydrology, where tempered fractional models arise in anomalous diffusion and geophysical time series. The second kind of tempered fBm further broadens this framework by offering alternative covariance structures and scaling limits, which can be useful when fitting models to empirical data.
In addition to tfBm and tfBmII, other related processes include tempered fractional Brownian motion of the third kind [21], tempered multifractional Brownian motion, mixed tempered fractional Brownian motion, tempered fractional Brownian motion with two indices [20], and the general tempered fractional Brownian motion [24].
Due to tempering, tfBm and tfBmII exhibit several additional advantages over classical fBm. In particular, they are well defined for all positive values of the Hurst parameter H, whereas fBm is restricted to the interval $(0,1)$. Tempering modifies the global properties of the process while preserving local sample-path regularity and p-variation. As a result, the asymptotic behavior of the variance changes: for tfBm, the variance converges to a finite limit as $t\to \infty $, while for tfBmII, it grows linearly with t, see [1]. Furthermore, tempering also affects the structure of central limit theorems, leading to modifications of higher-order cumulants such as those appearing in the Breuer–Major theorem. The increments of tfBm and tfBmII, known as tempered fractional Gaussian noise, are stationary and exhibit correlation decay that is significantly faster than in the classical case, which has important consequences for statistical inference.
Additional motivation for studying these processes comes from their successful applications in modeling empirical data. For example, [6] demonstrates that tfBm provides a better fit for turbulent supercritical flow in the Red Cedar River than fBm, and [23] argues for the suitability of tempered fractional Gaussian noise in modeling wind speed data. These real-world applications highlight the importance of parameter estimation and hypothesis testing. In this context, [6] proposes wavelet-based estimators for tfBm parameters and introduces a statistical test to distinguish between fBm and tfBm.
In this paper, we study Langevin-type equations driven by either tfBm or tfBmII, and we develop statistical estimators for the drift parameter of these models.
The Langevin equation has a rich history of investigation (see the Appendix in [8]), and various types of stochastic processes have been considered as driving noise. In particular, [8] proposed a version of the Langevin equation driven by fBm, leading to the so-called fractional Ornstein–Uhlenbeck (fOU) process.
Today, this model has numerous applications and has been well studied from a statistical perspective. In particular, a large body of literature is devoted to the estimation of the drift parameter; see the survey [26] and the monograph [18] for comprehensive overviews. Estimation methods for the Hurst parameter and diffusion coefficient of the fOU process are also available, see, for example, [4, 7, 17, 18].
In the present paper, we estimate the drift parameter of tempered fractional Ornstein–Uhlenbeck processes using a least squares approach. We begin by analyzing an estimator based on continuous-time observations of the entire trajectory of the process. The main focus of our study, however, is on its discrete-time counterpart.
In the continuous-time setting, we adapt the least squares estimator originally proposed in [3] for the fOU process, and later generalized to Ornstein–Uhlenbeck processes driven by more general noise structures in [12, 13, 29]. We then consider a discretized version of this estimator, building upon the approach introduced in [19] and further developed in [18].
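To fix ideas before turning to the theory, here is a minimal numerical sketch of the discretized least squares estimator for the drift of a Langevin-type model $d{X_{t}}=-\theta {X_{t}}\hspace{0.1667em}dt+d{B_{t}}$. For simplicity the sketch drives the equation by standard Brownian motion as a stand-in for the tempered fractional noise, and the helper names (`simulate_ou`, `lse_drift`) are ours, not from the cited works.

```python
import numpy as np

def simulate_ou(theta=1.0, T=200.0, n=200_000, seed=1):
    # Euler scheme for dX_t = -theta * X_t dt + dW_t; standard Brownian
    # noise is used here purely as a stand-in for the tempered driver.
    dt = T / n
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal(n) * np.sqrt(dt)
    X = np.empty(n + 1)
    X[0] = 0.0
    for i in range(n):
        X[i + 1] = X[i] - theta * X[i] * dt + dW[i]
    return X, dt

def lse_drift(X, dt):
    # Discretized least squares estimator of the drift parameter:
    # theta_hat = - sum_i X_i (X_{i+1} - X_i) / (dt * sum_i X_i^2).
    return -np.sum(X[:-1] * np.diff(X)) / (dt * np.sum(X[:-1] ** 2))

X, dt = simulate_ou(theta=1.0)
print(lse_drift(X, dt))  # close to the true value theta = 1
```

As the observation horizon T grows, the estimate concentrates around the true drift; the results of this paper establish the analogous strong consistency when the driver is tfBm or tfBmII.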
The primary contribution of this work is the establishment of strong consistency for both the continuous-time and discrete-time estimators. To prove the strong consistency of the continuous-time estimator, we utilize almost sure bounds for the growth rate of tfBm and tfBmII, recently developed in [27]. These results extend the corresponding bounds for fBm, initially established in [16]. To establish the strong consistency of the discretized least squares estimator, we derive almost sure upper bounds for the increments of both tfBm and tfBmII, employing techniques from [12].
The structure of the paper is as follows. Section 2 introduces the definitions and essential properties of tfBm and tfBmII. In Section 3, we establish almost sure upper bounds for the increments of these processes. The main theoretical results concerning the strong consistency of drift parameter estimators for tempered fractional Ornstein–Uhlenbeck processes are presented in Section 4. Finally, Section 5 is devoted to numerical simulations.
2 Preliminaries
In this section, we recall the definitions and basic properties of tempered fractional Brownian motion (tfBm) and tempered fractional Brownian motion of the second kind (tfBmII), which are required for the subsequent analysis. Both processes are constructed by modifying the Mandelbrot–van Ness representation [22] of fractional Brownian motion (fBm) through the introduction of exponential tempering factors.
In contrast to classical fBm ${B_{H}}=\{{B_{H}}(t),t\ge 0\}$, their tempered counterparts, ${B_{H,\lambda }^{I}}=\{{B_{H,\lambda }^{I}}(t),t\ge 0\}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}=\{{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),t\ge 0\}$, are well defined for all positive values of the Hurst index H. These processes also depend on a tempering parameter $\lambda \gt 0$; for $H\in (0,1)$, setting $\lambda =0$ recovers fBm up to a normalizing constant. In particular,
\[ {B_{H,0}^{I}}\stackrel{\triangle }{=}{B_{H,0}^{I\hspace{-0.1667em}I}}\stackrel{\triangle }{=}{C_{H}}{B_{H}},\hspace{1em}\text{where}\hspace{3.33333pt}{C_{H}}=\frac{\Gamma (H+\frac{1}{2})}{\sqrt{2H\sin (\pi H)\Gamma (2H)}},\]
see, e.g., [27, Remark 1].

Let $W=\{{W_{t}},t\in \mathbb{R}\}$ be a two-sided Wiener process.
Definition 1 ([23]).
The tempered fractional Brownian motion (tfBm) is the stochastic process ${B_{H,\lambda }^{I}}=\{{B_{H,\lambda }^{I}}(t),t\ge 0\}$, defined by the Wiener integral
(1)
\[ {B_{H,\lambda }^{I}}(t):={\int _{-\infty }^{t}}{g_{H,\lambda ,t}^{I}}(s)\hspace{0.1667em}d{W_{s}},\]
where
\[ {g_{H,\lambda ,t}^{I}}(s):={e^{-\lambda {(t-s)_{+}}}}{(t-s)_{+}^{H-\frac{1}{2}}}-{e^{-\lambda {(-s)_{+}}}}{(-s)_{+}^{H-\frac{1}{2}}},\hspace{1em}s\in \mathbb{R},\]
and ${x_{+}}:=\max \{x,0\}$.
Definition 2 ([32]).
The tempered fractional Brownian motion of the second kind (tfBmII) is the stochastic process ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}=\{{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),t\ge 0\}$, defined by the Wiener integral
(2)
\[ {B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t):={\int _{-\infty }^{t}}{g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)\hspace{0.1667em}d{W_{s}},\]
where
\[\begin{aligned}{}{g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)& :={e^{-\lambda {(t-s)_{+}}}}{(t-s)_{+}^{H-\frac{1}{2}}}-{e^{-\lambda {(-s)_{+}}}}{(-s)_{+}^{H-\frac{1}{2}}}\\ {} & \hspace{1em}+\lambda {\int _{0}^{t}}{(u-s)_{+}^{H-\frac{1}{2}}}{e^{-\lambda {(u-s)_{+}}}}\hspace{0.1667em}du,\hspace{1em}s\in \mathbb{R}.\end{aligned}\]
Both processes, ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, are centered Gaussian processes with stationary increments and share the following scaling property: for any scaling factor $c\gt 0$,
(3)
\[ \{{X_{H,\lambda }}(ct),t\ge 0\}\stackrel{\triangle }{=}\{{c^{H}}{X_{H,c\lambda }}(t),t\ge 0\},\]
where ${X_{H,\lambda }}$ denotes either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, and $\stackrel{\triangle }{=}$ denotes equality of all finite-dimensional distributions.

The covariance functions of the tempered processes take the following forms.
• For tfBm (1) with parameters $H\gt 0$ and $\lambda \gt 0$, the covariance function is given by
(4)
\[ \operatorname{Cov}\left[{B_{H,\lambda }^{I}}(t),{B_{H,\lambda }^{I}}(s)\right]=\frac{1}{2}\left[{({C_{t}^{I}})^{2}}|t{|^{2H}}+{({C_{s}^{I}})^{2}}|s{|^{2H}}-{({C_{t-s}^{I}})^{2}}|t-s{|^{2H}}\right],\]
where
\[ {({C_{t}^{I}})^{2}}:=\frac{2\Gamma (2H)}{{(2\lambda |t|)^{2H}}}-\frac{2\Gamma (H+\frac{1}{2})}{\sqrt{\pi }}\frac{1}{{(2\lambda |t|)^{H}}}{K_{H}}(\lambda |t|),\hspace{1em}t\gt 0,\]
and ${K_{\nu }}(z)$ denotes the modified Bessel function of the second kind.
• For tfBmII (2) with parameters $H\gt 0$ and $\lambda \gt 0$, the covariance function is
(5)
\[ \operatorname{Cov}\left[{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t),{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(s)\right]=\frac{1}{2}\left[{({C_{t}^{I\hspace{-0.1667em}I}})^{2}}|t{|^{2H}}+{({C_{s}^{I\hspace{-0.1667em}I}})^{2}}|s{|^{2H}}-{({C_{t-s}^{I\hspace{-0.1667em}I}})^{2}}|t-s{|^{2H}}\right],\]
where
\[\begin{aligned}{}{({C_{t}^{I\hspace{-0.1667em}I}})^{2}}& :=\frac{(1-2H)\Gamma (H+\frac{1}{2})\Gamma (H)}{\sqrt{\pi }{(\lambda t)^{2H}}}\\ {} & \hspace{1em}\hspace{0.2778em}\times \left[1-{\hspace{0.1667em}_{2}}{F_{3}}\left(\left\{1,-\frac{1}{2}\right\},\left\{1-H,\frac{1}{2},1\right\},\frac{{\lambda ^{2}}{t^{2}}}{4}\right)\right]\\ {} & \hspace{1em}\hspace{0.2778em}+\frac{\Gamma (1-H)\Gamma (H+\frac{1}{2})}{\sqrt{\pi }H{2^{2H}}}{\hspace{0.1667em}_{2}}{F_{3}}\left(\left\{1,H-\frac{1}{2}\right\},\left\{1,H+1,H+\frac{1}{2}\right\},\frac{{\lambda ^{2}}{t^{2}}}{4}\right)\hspace{-0.1667em},\end{aligned}\]
and ${_{2}}{F_{3}}$ denotes the generalized hypergeometric function.
The behavior of the covariance functions (4) and (5) was investigated in [1], where it was shown that, unlike fBm, both tfBm and tfBmII exhibit short-range dependence for all $H\gt 0$.
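Both regimes can be checked numerically from the tfBm formula (4): near zero the variance grows like a constant multiple of ${t^{2H}}$, while as $t\to \infty $ it converges to the finite value $2\Gamma (2H)/{(2\lambda )^{2H}}$, which one reads off from (4) because ${K_{H}}$ decays exponentially. A short Python sketch, assuming SciPy's `kv` for ${K_{\nu }}$ (the function name `var_tfbm` is ours):

```python
import numpy as np
from scipy.special import gamma, kv

def var_tfbm(t, H, lam):
    # Variance of tfBm at time t: (C_t^I)^2 * t^(2H), with (C_t^I)^2
    # taken from the covariance formula (4).
    x = 2.0 * lam * t
    ct2 = (2.0 * gamma(2.0 * H) / x ** (2.0 * H)
           - 2.0 * gamma(H + 0.5) / np.sqrt(np.pi) * kv(H, lam * t) / x ** H)
    return ct2 * t ** (2.0 * H)

H, lam = 0.7, 1.0
# Near zero: Var ~ C * t^(2H) with C = Gamma(H+1/2)^2 / (2H sin(pi H) Gamma(2H)).
C = gamma(H + 0.5) ** 2 / (2.0 * H * np.sin(np.pi * H) * gamma(2.0 * H))
print(var_tfbm(1e-3, H, lam) / (C * 1e-3 ** (2.0 * H)))   # ratio close to 1
# At infinity: Var tends to the finite limit 2 * Gamma(2H) / (2 lam)^(2H).
print(var_tfbm(50.0, H, lam), 2.0 * gamma(2.0 * H) / (2.0 * lam) ** (2.0 * H))
```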
With regard to sample path properties, it is known from [27] that the trajectories of both tfBm and tfBmII are γ-Hölder continuous for any $\gamma \in (0,H)$ if $H\in (0,1]$, and are continuously differentiable if and only if $H\gt 1$. Furthermore, [27] establishes almost sure upper bounds for the asymptotic growth of these processes.
• For tfBm, it is shown in [27, Theorem 3] that for any $\delta \gt 0$, there exists a nonnegative random variable $\xi =\xi (\delta )$ such that for all $t\gt 0$,
(6)
\[ \underset{s\in [0,t]}{\sup }|{B_{H,\lambda }^{I}}(s)|\le ({t^{\delta }}\vee 1)\hspace{0.1667em}\xi \hspace{1em}\text{a.s.},\]
and there exist constants ${C_{1}}={C_{1}}(\delta )\gt 0$, ${C_{2}}={C_{2}}(\delta )\gt 0$ such that for all $u\gt 0$,
(7)
\[ \mathsf{P}\left\{\xi \gt u\right\}\le {C_{1}}\exp \left\{-{C_{2}}{u^{2}}\right\}.\]
• For tfBmII, it is shown in [27, Theorem 5] that for any $p\gt 1$, there exists a nonnegative random variable $\xi =\xi (p)$ such that for all $t\gt 0$,
(8)
\[ \underset{s\in [0,t]}{\sup }|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(s)|\le \left({t^{\frac{1}{2}}}{({\log ^{+}}t)^{p}}\vee 1\right)\xi \hspace{1em}\text{a.s.},\]
and the tail bound (7) holds for some constants ${C_{1}},{C_{2}}\gt 0$, which may depend on p.
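Since these bounds describe pathwise growth, it may help to see trajectories. Because tfBm is Gaussian with the explicit covariance (4), it can be sampled exactly on a grid via a Cholesky factorization; the sketch below (Python, assuming SciPy's `kv`; the helper names are ours) does this and checks the empirical variance against the theoretical one.

```python
import numpy as np
from scipy.special import gamma, kv

def var_tfbm(t, H, lam):
    # Variance (C_t^I)^2 * t^(2H) of tfBm, computed from formula (4).
    t = np.asarray(t, dtype=float)
    x = 2.0 * lam * t
    return (2.0 * gamma(2.0 * H) / x ** (2.0 * H)
            - 2.0 * gamma(H + 0.5) / np.sqrt(np.pi) * kv(H, x / 2.0) / x ** H
            ) * t ** (2.0 * H)

def sample_tfbm(T=10.0, n=100, H=0.7, lam=1.0, n_paths=5000, seed=0):
    # Exact Gaussian sampling on a grid: build the covariance matrix from
    # Cov(t, s) = (Var(t) + Var(s) - Var(|t-s|)) / 2, then factorize.
    t = np.linspace(T / n, T, n)
    V = var_tfbm(t, H, lam)
    G = np.abs(t[:, None] - t[None, :])
    VG = np.where(G > 0, var_tfbm(np.where(G > 0, G, 1.0), H, lam), 0.0)
    Cov = 0.5 * (V[:, None] + V[None, :] - VG)
    L = np.linalg.cholesky(Cov + 1e-12 * np.eye(n))  # tiny jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal((n, n_paths))

t, X = sample_tfbm()
# Sanity check: empirical variance at the endpoint vs the theoretical value.
print(X[-1].var(), float(var_tfbm(t[-1], 0.7, 1.0)))
```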
These growth bounds (6) and (8) have been applied in [27, 28] to statistical problems related to the estimation of the drift parameter in tempered Vasicek-type models under continuous-time observations.
To handle discrete-time observations, however, we also require almost sure bounds for the increments of the tempered fractional processes in addition to the bounds for the processes themselves. This issue will be addressed in Section 3.
We conclude this section with auxiliary results that provide lower and upper bounds for the variances of tfBm and tfBmII on the interval $[0,1]$. These results will be used in Section 3 to establish inequalities for the increments of these processes.
Lemma 1.
For all $H\gt 0$, $\lambda \gt 0$, and $t\in [0,1]$,
\[ \mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I}}(t)\big)^{2}}\right]\ge C{t^{2H}}\hspace{1em}\textit{and}\hspace{1em}\mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)\big)^{2}}\right]\ge C{t^{2H}},\]
where $C={e^{-2\lambda }}/(2H)$.
Proof.
By Definition 1, for tfBm we have
\[\begin{aligned}{}\mathsf{E}\hspace{-0.1667em}\left[{\big({B_{H,\lambda }^{I}}(t)\big)^{2}}\right]& ={\int _{-\infty }^{0}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}-{e^{-\lambda (-s)}}{(-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \hspace{1em}+{\int _{0}^{t}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \ge {\int _{0}^{t}}\hspace{-0.1667em}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}\right)^{2}}ds\\ {} & \ge {e^{-2\lambda t}}{\int _{0}^{t}}{(t-s)^{2H-1}}ds\ge \frac{{e^{-2\lambda }}}{2H}\hspace{0.1667em}{t^{2H}}.\end{aligned}\]
For tfBmII, Definition 2 yields a similar representation, namely,
\[\begin{aligned}{}& \mathsf{E}\left[{\left({B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)\right)^{2}}\right]={\int _{-\infty }^{0}}{\left({g_{H,\lambda ,t}^{I\hspace{-0.1667em}I}}(s)\right)^{2}}ds\\ {} & \hspace{2em}+{\int _{0}^{t}}{\left({e^{-\lambda (t-s)}}{(t-s)^{H-\frac{1}{2}}}+\lambda {\int _{s}^{t}}{(u-s)^{H-\frac{1}{2}}}{e^{-\lambda (u-s)}}\hspace{0.1667em}du\right)^{2}}ds,\end{aligned}\]
and applying the same estimate to the second integral gives the same lower bound. □

Lemma 2.
Let ${X_{H,\lambda }}$ denote either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$. For all $H\gt 0$, $\lambda \gt 0$, and $\kappa \in (0,2)$, there exist constants ${C_{1}},{C_{2}}\gt 0$ such that for all $t\in [0,1]$,
\[\begin{aligned}{}{C_{1}}{t^{2H}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2H}},\hspace{1em}H\in (0,1),\\ {} {C_{1}}{t^{2}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{1,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2-\kappa }},\hspace{1em}H=1,\\ {} {C_{1}}{t^{2}}& \le \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\le {C_{2}}{t^{2}},\hspace{1em}H\gt 1.\end{aligned}\]
Proof.
According to [27, Lemmas 1 and 3], the variances of both processes are continuous on $[0,1]$ and exhibit similar behavior as $t\downarrow 0$, namely,
(9)
\[ \mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]\sim \left\{\begin{array}{l@{\hskip10.0pt}l}C{t^{2H}},\hspace{1em}& H\in (0,1),\\ {} C{t^{2}}\left|\log t\right|,\hspace{1em}& H=1,\\ {} C{t^{2}},\hspace{1em}& H\gt 1,\end{array}\right.\]
where
\[ C=C(H,\lambda )=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\Gamma {(H+\frac{1}{2})^{2}}}{2H\sin (\pi H)\Gamma (2H)},\hspace{1em}& H\in (0,1),\\ {} \frac{1}{4},\hspace{1em}& H=1,\\ {} \frac{\Gamma (2H)}{{2^{2H+1}}{\lambda ^{2H-2}}(H-1)},\hspace{1em}& H\gt 1,\hspace{3.33333pt}{X_{H,\lambda }}={B_{H,\lambda }^{I}},\\ {} \frac{(2H-1)\Gamma (2H)}{{2^{2H+1}}{\lambda ^{2H-2}}(H-1)},\hspace{1em}& H\gt 1,\hspace{3.33333pt}{X_{H,\lambda }}={B_{H,\lambda }^{I\hspace{-0.1667em}I}}.\end{array}\right.\]
Moreover, by Lemma 1, the variance $\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(1))^{2}}\right]$ is strictly positive.
Define
\[ {f_{1}}(t)=\frac{{t^{2(H\wedge 1)}}}{\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)\big)^{2}}\right]},\hspace{2em}{f_{2}}(t)=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(t))^{2}}\right]}{{t^{2(H\wedge 1)}}},\hspace{1em}& H\gt 0,\hspace{3.33333pt}H\ne 1,\\ {} \frac{\mathsf{E}\hspace{-0.1667em}\left[{({X_{H,\lambda }}(t))^{2}}\right]}{{t^{2-\kappa }}},\hspace{1em}& H=1.\end{array}\right.\]
Both ${f_{1}}$ and ${f_{2}}$ are nonnegative and continuous on $(0,1]$. Using the asymptotic relation (9), they can be extended continuously to $[0,1]$. Hence, ${f_{1}}$ and ${f_{2}}$ are bounded on $[0,1]$, which yields the claim. □
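For tfBm the variance is explicit via (4), so the asymptotics (9) can also be probed numerically; the sketch below (Python, assuming SciPy's `kv`, with our helper name `var_tfbm`) checks the case $H\gt 1$ against the constant $\Gamma (2H)/({2^{2H+1}}{\lambda ^{2H-2}}(H-1))$ from (9):

```python
import numpy as np
from scipy.special import gamma, kv

def var_tfbm(t, H, lam):
    # Variance of tfBm from formula (4): (C_t^I)^2 * t^(2H).
    x = 2.0 * lam * t
    return (2.0 * gamma(2.0 * H) / x ** (2.0 * H)
            - 2.0 * gamma(H + 0.5) / np.sqrt(np.pi) * kv(H, lam * t) / x ** H
            ) * t ** (2.0 * H)

H, lam, t = 1.5, 1.0, 1e-3
# Constant C(H, lam) for H > 1 in (9), tfBm case.
C = gamma(2.0 * H) / (2.0 ** (2.0 * H + 1.0) * lam ** (2.0 * H - 2.0) * (H - 1.0))
print(var_tfbm(t, H, lam) / (C * t ** 2))  # ratio close to 1
```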
Taking into account the stationarity of increments of both tfBm and tfBmII, Lemma 2 immediately yields the following result, which can be regarded as an extension of [1, Theorem 2.7], where only the case $H\in (0,1)$ was considered.
Corollary 1.
Let ${X_{H,\lambda }}$ denote either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ with $H\gt 0$ and $\lambda \gt 0$, and let $\kappa \in (0,1)$. Then there exist constants ${K_{1}},{K_{2}}\gt 0$ such that, for all $t,s\in {\mathbb{R}_{+}}$ with $\left|t-s\right|\le 1$, the following two-sided bounds hold:
\[\begin{array}{r@{\hskip0pt}l@{\hskip0pt}r@{\hskip0pt}l}\displaystyle {K_{1}}{\left|t-s\right|^{H}}& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}{\left|t-s\right|^{H}},\hspace{2em}& & \displaystyle H\in (0,1),\\ {} \displaystyle {K_{1}}\left|t-s\right|& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}{\left|t-s\right|^{1-\kappa }},\hspace{2em}& & \displaystyle H=1,\\ {} \displaystyle {K_{1}}\left|t-s\right|& \displaystyle \le {\left(\mathsf{E}\hspace{-0.1667em}\left[{\big({X_{H,\lambda }}(t)-{X_{H,\lambda }}(s)\big)^{2}}\right]\right)^{1/2}}\le {K_{2}}\left|t-s\right|,\hspace{2em}& & \displaystyle H\gt 1.\end{array}\]
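The reduction behind these bounds is worth spelling out: by stationarity of increments and ${X_{H,\lambda }}(0)=0$,

```latex
\[
\mathsf{E}\left[\big(X_{H,\lambda}(t)-X_{H,\lambda}(s)\big)^{2}\right]
=\mathsf{E}\left[\big(X_{H,\lambda}(|t-s|)\big)^{2}\right],
\qquad |t-s|\le 1,
\]
```

so Lemma 2 applies with t replaced by the lag $|t-s|$, and the constants are simply ${K_{i}}=\sqrt{{C_{i}}}$, $i=1,2$.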
Remark 1.
The bounds established in Corollary 1 imply that both tempered fractional Brownian motions of the first and second kind, ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$, possess Hölder continuous trajectories of any order $\gamma \lt H\wedge 1$. This follows from standard arguments based on the Kolmogorov–Čentsov theorem or, more generally, from the necessary and sufficient conditions for Hölder continuity of Gaussian processes given in [2].
Furthermore, it is known that ${B_{H,\lambda }^{I}}$ and ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ have the same local regularity despite their different kernel structures; see [23, Theorem 5.1 and Remark 5.2] and [32, Remark 2.3]. Indeed, the latter paper shows that
\[ {B_{H,\lambda }^{I\hspace{-0.1667em}I}}(t)={B_{H,\lambda }^{I}}(t)+\lambda {\int _{0}^{t}}{B_{H,\lambda }^{I}}(s)\hspace{0.1667em}ds+\lambda \hspace{0.1667em}{\eta _{H,\lambda }}t,\]
with ${\eta _{H,\lambda }}={\textstyle\int _{-\infty }^{0}}{(-s)^{H-\frac{1}{2}}}{e^{-\lambda (-s)}}\hspace{0.1667em}d{W_{s}}$, so ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$ differs from ${B_{H,\lambda }^{I}}$ only by smoother additive terms (an integrated process and a linear function of t). Consequently, both processes share the same local Hölder exponent.

3 Almost sure asymptotic growth of increments of tfBm and tfBmII
In this section, we establish almost sure upper bounds for the increments of tfBm and tfBmII over small intervals. These bounds are essential for proving the strong consistency of discretized estimators for Ornstein–Uhlenbeck-type processes, which will be studied in Section 4.
The derivation is based on verifying the assumptions of general theorems from [12, 18], which provide analogous upper bounds for a broad class of Gaussian processes. These assumptions are formulated in terms of the variogram of the underlying process and are summarized in a convenient form in the appendix for ease of reference.
Following [12], consider an increasing sequence $\{{b_{k}},k\ge 0\}$ with ${b_{0}}=0$ and increments satisfying ${b_{k+1}}-{b_{k}}\ge 1$ for all k. Let $a:[0,\infty )\to (0,\infty )$ be an increasing continuous function such that $a(t)\to \infty $ as $t\to \infty $, and define the sequence $\{{a_{k}},k\ge 0\}$ by ${a_{k}}=a({b_{k}})$.
For $\Delta \in (0,1]$, define
\[ {\mathbf{T}_{\Delta }}:=\left\{\mathbf{t}=({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+\Delta \right\},\hspace{2em}{\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}:=\left\{\mathbf{t}\in {\mathbf{T}_{\Delta }}:{b_{k}}\le {t_{1}}\le {b_{k+1}}\right\},\]
and introduce the metric
\[ d(\mathbf{t},\mathbf{s}):=\max \left\{\left|{t_{1}}-{s_{1}}\right|,\left|{t_{2}}-{s_{2}}\right|\right\},\hspace{1em}\mathbf{t}=({t_{1}},{t_{2}}),\hspace{0.2778em}\mathbf{s}=({s_{1}},{s_{2}}).\]
In this section, we analyze both processes simultaneously. Let
\[ X={X_{H,\lambda }}\hspace{2.5pt}\text{be either}\hspace{2.5pt}{B_{H,\lambda }^{I}},\hspace{2.5pt}\text{or}\hspace{2.5pt}{B_{H,\lambda }^{I\hspace{-0.1667em}I}},\]
and define the increment of ${X_{H,\lambda }}$ as
\[ Z(\mathbf{t}):={X_{H,\lambda }}({t_{1}})-{X_{H,\lambda }}({t_{2}}),\hspace{1em}\mathbf{t}=({t_{1}},{t_{2}}).\]
We begin with auxiliary results leading to the main theorem of this section.
Proposition 1.
Assume the following conditions hold:
(13)
\[ {L_{1}}:={\sum \limits_{k=0}^{\infty }}\frac{1}{{a_{k}}}\lt \infty ,\hspace{1em}{L_{2}}:={\sum \limits_{k=0}^{\infty }}\frac{\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\lt \infty .\]
In the bounds below, ${K_{1}}$ and ${K_{2}}$ denote the constants from Corollary 1.
(i) If $H\ne 1$, then for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, $\Delta \in (0,1]$, and $\rho \gt 0$, the following inequality holds:\[\begin{aligned}{}\mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}& \le \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\},\end{aligned}\]where $\beta =H\wedge 1$.
(ii) If $H=1$, $\beta \in (0,1)$ is arbitrary, then for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, $\kappa \in (0,1)$, $\Delta \in (0,1]$, and $\rho \gt 0$, the following inequality holds:\[\begin{aligned}{}\mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}& \le \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{{K_{2}}{\Delta ^{\beta -1}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)\right\}.\end{aligned}\]
Proof.
We verify that $Z(\mathbf{t})$ satisfies the conditions of [18, Theorem B.34] (see Theorem 4 in Appendix).
(i) By Corollary 1 (applied with $\kappa =1-\beta $ for $H=1$),
(14)
\[ {m_{k}}:=\underset{\mathbf{t}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}{\sup }{\left(\mathsf{E}Z{(\mathbf{t})^{2}}\right)^{1/2}}\le {K_{2}}{\Delta ^{\beta }}.\]
Hence, condition (i) of Theorem 4 is satisfied.

(ii) Using Minkowski’s inequality and Corollary 1, for any $h\in (0,1)$,
\[\begin{aligned}{}& \underset{\substack{d(\mathbf{t},\mathbf{s})\le h\\ {} \mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}}{\sup }{\left(\mathsf{E}{\left|Z(\mathbf{t})-Z(\mathbf{s})\right|^{2}}\right)^{1/2}}\\ {} & \hspace{1em}\le \hspace{-0.1667em}\hspace{-0.1667em}\underset{\substack{d(\mathbf{t},\mathbf{s})\le h\\ {} \mathbf{t},\mathbf{s}\in {\mathbf{T}_{{b_{k}},{b_{k+1}},\Delta }}}}{\sup }\hspace{-0.1667em}\hspace{-0.1667em}\left[{\left(\mathsf{E}{\left|{X_{H,\lambda }}({t_{1}})-{X_{H,\lambda }}({s_{1}})\right|^{2}}\right)^{\frac{1}{2}}}+{\left(\mathsf{E}{\left|{X_{H,\lambda }}({t_{2}})-{X_{H,\lambda }}({s_{2}})\right|^{2}}\right)^{\frac{1}{2}}}\right]\\ {} & \hspace{1em}\le 2{K_{2}}{h^{\beta }}.\end{aligned}\]
Thus, $Z(\mathbf{t})$ satisfies condition (ii) of Theorem 4 with ${c_{k}}=2{K_{2}}$, $k\ge 0$.

(iii) The convergence of the first two series in condition (iii) of Theorem 4 follows from the upper bound (14), together with the summability assumptions in (13):
(15)
\[ A:={\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}}{{a_{k}}}\le {K_{2}}{L_{1}}{\Delta ^{\beta }}\lt \infty ,\hspace{2em}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\le {K_{2}}{L_{2}}{\Delta ^{\beta }}\lt \infty .\]
Choosing $\gamma :=\beta /2$ and using the fact that ${c_{k}}=2{K_{2}}$, $k\ge 0$, we obtain that the third series in condition (iii) can be written as
(16)
\[ {\sum \limits_{k=0}^{\infty }}\frac{{c_{k}}}{{a_{k}}}=2{K_{2}}{L_{1}}\lt \infty .\]
Thus, all assumptions of Theorem 4 are satisfied. Consequently, applying this theorem we obtain that, for any $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$,
\[ \mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{A^{2}}}{2{(1-\vartheta )^{2}}}\right\}\cdot {A_{0}}\hspace{-0.1667em}\left(\vartheta ,\frac{\beta }{2},\varepsilon \right),\]
where
\[\begin{aligned}{}{A_{0}}\hspace{-0.1667em}\left(\vartheta ,\frac{\beta }{2},\varepsilon \right)& =\frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{1}{A}{\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}\right\}\\ {} & \hspace{1em}\times \exp \left\{\frac{2{\Delta ^{\beta }}}{\beta A{\left(1-\varepsilon /\beta \right)^{\beta /\varepsilon }}\vartheta \hspace{0.1667em}{4^{\beta }}}{\sum \limits_{k=0}^{\infty }}\frac{{c_{k}}}{{a_{k}}}\right\}.\end{aligned}\]
Next, using the bounds (15), the identity (16), and estimating A in the denominators from below as
\[ A={\sum \limits_{k=0}^{\infty }}\frac{{m_{k}}}{{a_{k}}}\ge {K_{1}}{L_{1}}{\Delta ^{H\wedge 1}},\]
we arrive at the inequality
\[\begin{aligned}{}& \mathsf{E}\exp \left\{\rho \underset{\mathbf{t}\in {\mathbf{T}_{\Delta }}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{\Delta ^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\\ {} & \hspace{2em}\times \frac{{2^{2/\varepsilon +2}}}{\Delta }\exp \left\{\frac{{K_{2}}{L_{2}}{\Delta ^{\beta -H\wedge 1}}}{{K_{1}}{L_{1}}}\right\}\exp \left\{\frac{4{K_{2}}{L_{1}}{\Delta ^{\beta -H\wedge 1}}}{\beta {K_{1}}{L_{1}}{\left(1-\varepsilon /\beta \right)^{\beta /\varepsilon }}\vartheta \hspace{0.1667em}{4^{\beta }}}\right\}.\end{aligned}\]
Finally, since $\beta -H\wedge 1=0$ for $H\ne 1$ and $\beta -H\wedge 1=\beta -1$ for $H=1$, we recover the bounds stated in the proposition. □

Now, introduce a strictly decreasing sequence $\{{d_{k}},k\ge 0\}$ with ${d_{0}}=1$ and ${d_{k}}\to 0$ as $k\to \infty $, a continuous function $g:(0,1]\to (0,\infty )$, and a sequence $\{{g_{k}},k\ge 0\}$ such that $0\lt {g_{k}}\le {\min _{{d_{k+1}}\le t\le {d_{k}}}}g(t)$.
The following theorem and corollary are derived from Proposition 1 in the same manner as Theorem B.57 and Corollary B.58 in [18], respectively.
Proposition 2.
Assume the conditions of Proposition 1.
(i) Let $H\ne 1$ and set $\beta =H\wedge 1$. Assume additionally that
(17)
\[ {D_{1}}:={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}|\log {d_{k}}|}{{g_{k}}}\lt \infty .\]
Then for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$,
\[ \mathsf{E}\exp \left\{\rho \underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ),\]
where $B={\textstyle\sum _{k=0}^{\infty }}\frac{{d_{k}^{\beta }}}{{g_{k}}}$ and ${A_{1}}(\vartheta ,\varepsilon ):={2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )+\frac{{D_{1}}}{B}\right\}$, with ${A_{2}}(\vartheta ,\varepsilon ):=\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right)$.
(ii) Let $H=1$ and $\beta \in (0,1)$. Assume that the stronger condition
(18)
\[ {D_{2}}:={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{2\beta -1}}|\log {d_{k}}|}{{g_{k}}}\lt \infty \]
is satisfied. Then for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $\rho \gt 0$, the bound of part (i) holds with ${A_{1}}(\vartheta ,\varepsilon )$ replaced by
\[ {\widetilde{A}_{1}}(\vartheta ,\varepsilon ):={2^{2/\varepsilon +2}}\exp \left\{\frac{{D_{1}}}{B}+\frac{{A_{2}}(\vartheta ,\varepsilon )}{B}{\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{2\beta -1}}}{{g_{k}}}\right\}.\]
Proof.
Define
\[ {\mathbf{T}^{(k)}}=\left\{({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:{d_{k+1}}\lt {t_{1}}-{t_{2}}\le {d_{k}}\right\}.\]
Clearly, ${\mathbf{T}^{(k)}}\subset {\mathbf{T}_{{d_{k}}}}$, and
\[ \left\{({t_{1}},{t_{2}})\in {\mathbb{R}_{+}^{2}}:0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1\right\}={\bigcup \limits_{k=0}^{\infty }}{\mathbf{T}^{(k)}}.\]
Hence,
\[\begin{aligned}{}\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}& \le {\sum \limits_{k=0}^{\infty }}\underset{\mathbf{t}\in {\mathbf{T}^{(k)}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\\ {} & \le {\sum \limits_{k=0}^{\infty }}\frac{1}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}^{(k)}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\\ {} & \le {\sum \limits_{k=0}^{\infty }}\frac{1}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}.\end{aligned}\]
Let ${\{{r_{k}}\}_{k\ge 0}}$ be positive numbers such that ${\textstyle\sum _{k=0}^{\infty }}\frac{1}{{r_{k}}}=1$. Then
\[\begin{aligned}{}I(\rho )& :=\mathsf{E}\exp \left\{\rho \underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\\ {} & \le \mathsf{E}\exp \left\{{\sum \limits_{k=0}^{\infty }}\frac{\rho }{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\le {\prod \limits_{k=0}^{\infty }}{\left(\mathsf{E}\exp \left\{\frac{\rho {r_{k}}}{{g_{k}}}\underset{\mathbf{t}\in {\mathbf{T}_{{d_{k}}}}}{\sup }\frac{|Z(\mathbf{t})|}{a({t_{1}})}\right\}\right)^{1/{r_{k}}}}.\end{aligned}\]
We now distinguish two cases: $H\ne 1$ and $H=1$.
(i) Case $H\ne 1$. By Proposition 1(i), we obtain
\[\begin{aligned}{}I(\rho )& \le {\prod \limits_{k=0}^{\infty }}{\left(\frac{{2^{2/\varepsilon +2}}}{{d_{k}}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{r_{k}^{2}}{K_{2}^{2}}{L_{1}^{2}}{d_{k}^{2\beta }}}{2{(1-\vartheta )^{2}}{g_{k}^{2}}}\right\}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\right)^{1/{r_{k}}}}\\ {} & ={2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}}{2{(1-\vartheta )^{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{r_{k}}{d_{k}^{2\beta }}}{{g_{k}^{2}}}\right\}\exp \hspace{-0.1667em}\left\{-{\sum \limits_{k=0}^{\infty }}\frac{\log {d_{k}}}{{r_{k}}}\right\},\end{aligned}\]
where
\[ {A_{2}}(\vartheta ,\varepsilon ):=\frac{{K_{2}}}{{K_{1}}}\left(\frac{{L_{2}}}{{L_{1}}}+\frac{{4^{1-\beta }}}{\beta \vartheta {(1-\varepsilon /\beta )^{\beta /\varepsilon }}}\right).\]
Setting ${r_{k}}=\frac{B{g_{k}}}{{d_{k}^{\beta }}}$, we obtain
\[\begin{aligned}{}I(\rho )& \le {2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}\exp \hspace{-0.1667em}\left\{-\frac{1}{B}{\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}\log {d_{k}}}{{g_{k}}}\right\}\\ {} & ={2^{2/\varepsilon +2}}\exp \left\{{A_{2}}(\vartheta ,\varepsilon )\right\}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}\exp \left\{\frac{{D_{1}}}{B}\right\}\\ {} & =\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}\right\}{A_{1}}(\vartheta ,\varepsilon ).\end{aligned}\]
(ii) Case $H=1$. By Proposition 1(ii), we have
\[\begin{aligned}{}I(\rho )& \le {\prod \limits_{k=0}^{\infty }}{\left(\frac{{2^{2/\varepsilon +2}}}{{d_{k}}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{d_{k}^{2\beta }}}{2{(1-\vartheta )^{2}}}\right\}\exp \left\{{d_{k}^{\beta -1}}{A_{2}}(\vartheta ,\varepsilon )\right\}\right)^{1/{r_{k}}}}\\ {} & ={2^{2/\varepsilon +2}}\exp \hspace{-0.1667em}\left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}}{2{(1-\vartheta )^{2}}}{\sum \limits_{k=0}^{\infty }}\frac{{r_{k}}{d_{k}^{2\beta }}}{{g_{k}^{2}}}\right\}\\ {} & \hspace{1em}\times \exp \hspace{-0.1667em}\left\{-{\sum \limits_{k=0}^{\infty }}\frac{\log {d_{k}}}{{r_{k}}}\right\}\exp \hspace{-0.1667em}\left\{{A_{2}}(\vartheta ,\varepsilon ){\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta -1}}}{{r_{k}}}\right\}.\end{aligned}\]
Finally, by choosing ${r_{k}}=\frac{B{g_{k}}}{{d_{k}^{\beta }}}$, the conclusion follows as in case (i). □
Corollary 2.
Under the assumptions of Proposition 2, for all $\vartheta \in (0,1)$, $\varepsilon \in (0,\beta )$, and $u\gt 0$,
(19)
\[ \mathsf{P}\left\{\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{\left|Z(\mathbf{t})\right|}{a({t_{1}})g({t_{1}}-{t_{2}})}\gt u\right\}\le {A_{1}}(\vartheta ,\varepsilon )\exp \hspace{-0.1667em}\left\{-\frac{{u^{2}}{(1-\vartheta )^{2}}}{2{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}\right\}.\]
Proof.
By Chebyshev’s inequality, we have
\[\begin{aligned}{}\mathsf{P}& \left\{\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{\left|Z(\mathbf{t})\right|}{a({t_{1}})g({t_{1}}-{t_{2}})}\gt u\right\}\\ {} & \hspace{2em}\le \frac{1}{{e^{\rho u}}}\mathsf{E}\exp \left\{\rho \hspace{0.1667em}\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{\left|Z(\mathbf{t})\right|}{a({t_{1}})g({t_{1}}-{t_{2}})}\right\}\\ {} & \hspace{2em}\le \exp \left\{\frac{{\rho ^{2}}{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}{2{(1-\vartheta )^{2}}}-\rho u\right\}{A_{1}}(\vartheta ,\varepsilon ).\end{aligned}\]
Choosing $\rho =\frac{u{(1-\vartheta )^{2}}}{{K_{2}^{2}}{L_{1}^{2}}{B^{2}}}$ yields (19). □
Propositions 1–2 and Corollary 2 allow us to derive the asymptotic bound for the increments of tfBm and tfBmII. The following theorem is the main result of this section.
Theorem 1.
Let $\beta =H\wedge 1$ if $H\ne 1$, and let $\beta \in (0,1)$ be arbitrary if $H=1$. For any $\delta \gt 0$ and $p\gt 2$, there exists a nonnegative random variable $\eta =\eta (\delta ,p)$ such that, for all $0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1$,
(20)
\[ |Z(\mathbf{t})|\le ({t_{1}^{\delta }}\vee 1){({t_{1}}-{t_{2}})^{\beta }}\big(|\log ({t_{1}}-{t_{2}}){|^{p}}\vee 1\big)\hspace{0.1667em}\eta ,\hspace{1em}\textit{a.s.},\]
and there exist constants ${C_{1}}={C_{1}}(\delta ,p)\gt 0$ and ${C_{2}}={C_{2}}(\delta ,p)\gt 0$ such that, for all $u\gt 0$,
(21)
\[ \mathsf{P}\left\{\eta \gt u\right\}\le {C_{1}}{e^{-{C_{2}}{u^{2}}}}.\]
Proof.
The proof consists of three steps, corresponding to the application of Proposition 1, Proposition 2, and Corollary 2, respectively.
Step 1. Let ${b_{0}}=0$, ${b_{k}}={e^{k}}$ for $k\ge 1$, and define $a(t)={t^{\delta }}\vee 1$. Then ${a_{k}}=a({b_{k}})={e^{k\delta }}$.
We verify the conditions in (13) from Proposition 1:
\[\begin{aligned}{}{L_{1}}={\sum \limits_{k=0}^{\infty }}\frac{1}{{a_{k}}}& ={\sum \limits_{k=0}^{\infty }}\frac{1}{{e^{k\delta }}}\lt \infty ,\\ {} {L_{2}}={\sum \limits_{k=0}^{\infty }}\frac{\log ({b_{k+1}}-{b_{k}})}{{a_{k}}}& =1+{\sum \limits_{k=1}^{\infty }}\frac{k}{{e^{k\delta }}}+{\sum \limits_{k=1}^{\infty }}\frac{\log (e-1)}{{e^{k\delta }}}\lt \infty ,\end{aligned}\]
where the convergence of all series is verified by the ratio test.
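The finiteness of ${L_{1}}$ and ${L_{2}}$ can also be confirmed numerically against geometric-series closed forms. The following sketch is purely illustrative ($\delta =0.5$ and the truncation level are arbitrary choices, not values from the paper):

```python
import math

delta = 0.5                      # illustrative choice of delta > 0
x = math.exp(-delta)

# L1 = sum_{k>=0} e^{-k*delta}: a geometric series with ratio x = e^{-delta}.
L1 = sum(x**k for k in range(200))

# L2 = 1 + sum_{k>=1} k e^{-k*delta} + log(e-1) * sum_{k>=1} e^{-k*delta}.
L2 = 1.0 + sum((k + math.log(math.e - 1.0)) * x**k for k in range(1, 200))

# Closed forms: sum_{k>=0} x^k = 1/(1-x), sum_{k>=1} k x^k = x/(1-x)^2.
L1_closed = 1.0 / (1.0 - x)
L2_closed = 1.0 + x / (1.0 - x) ** 2 + math.log(math.e - 1.0) * x / (1.0 - x)
```

Since $x={e^{-\delta }}\lt 1$, the truncated sums agree with the closed forms to machine precision.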
Step 2. Let ${d_{k}}={e^{-k}}$ and define $g(t)={t^{\beta }}|\log t{|^{p}}$, $t\in (0,1]$. Define
\[ {g_{0}}={e^{-\beta }},\hspace{1em}{g_{k}}={d_{k+1}^{\beta }}|\log {d_{k}}{|^{p}}={e^{-(k+1)\beta }}{k^{p}},\hspace{1em}k\ge 1.\]
Note that ${g_{k}}\le {\min _{{d_{k+1}}\le t\le {d_{k}}}}g(t)$. We now verify the remaining assumptions of Proposition 2.
(i) Case $H\ne 1$. In this case, condition (17) holds:
\[ {D_{1}}={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}|\log {d_{k}}|}{{g_{k}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{1}{{k^{p-1}}}\lt \infty ,\]
since $p\gt 2$.
(ii) Case $H=1$. Here we apply Propositions 1–2 and Corollary 2 with ${\beta ^{\prime }}=(1+\beta )/2$ in place of β (recall that in these results $\beta \in (0,1)$ may be chosen arbitrarily when $H=1$). Then the conditions (17)–(18) are satisfied for any $p\gt 1$:
\[\begin{aligned}{}{D_{1}}& ={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{{\beta ^{\prime }}}}|\log {d_{k}}|}{{g_{k}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{{e^{-({\beta ^{\prime }}-\beta )k}}}{{k^{p-1}}}={e^{\beta }}{\sum \limits_{k=1}^{\infty }}\frac{{e^{-(1-\beta )k/2}}}{{k^{p-1}}}\lt \infty ,\\ {} {D_{2}}& ={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{2{\beta ^{\prime }}-1}}}{{g_{k}}}={\sum \limits_{k=0}^{\infty }}\frac{{d_{k}^{\beta }}}{{g_{k}}}={e^{\beta }}\left(1+{\sum \limits_{k=1}^{\infty }}{k^{-p}}\right)\lt \infty .\end{aligned}\]
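These reductions of ${D_{1}}$ and ${D_{2}}$ are easy to sanity-check numerically. The sketch below (illustrative values $\beta =0.6$, $p=3$, and truncation at 400 terms; none of these come from the paper) recomputes both series for the $H=1$ case directly from the definitions of ${d_{k}}$ and ${g_{k}}$ and compares them with the reduced forms above:

```python
import math

beta, p = 0.6, 3.0               # illustrative: beta in (0,1), p > 2
bp = (1.0 + beta) / 2.0          # beta' = (1 + beta)/2 used when H = 1
K = 400                          # truncation level

d = [math.exp(-k) for k in range(K)]                        # d_k = e^{-k}
g = [math.exp(-beta)] + [math.exp(-(k + 1) * beta) * k**p   # g_k from Step 2
     for k in range(1, K)]

# D1 = sum_k d_k^{beta'} |log d_k| / g_k   (the k = 0 term vanishes).
D1 = sum(d[k] ** bp * k / g[k] for k in range(1, K))
# D2 = sum_k d_k^{2 beta' - 1} / g_k, which reduces to e^beta (1 + sum k^{-p}).
D2 = sum(d[k] ** (2 * bp - 1) / g[k] for k in range(K))

# The reduced series from the display above, truncated at the same level.
D1_closed = math.exp(beta) * sum(math.exp(-(1 - beta) * k / 2) / k ** (p - 1)
                                 for k in range(1, K))
D2_closed = math.exp(beta) * (1.0 + sum(k ** -p for k in range(1, K)))
```

The term-by-term agreement confirms the algebraic reductions ${d_{k}^{{\beta ^{\prime }}}}|\log {d_{k}}|/{g_{k}}={e^{\beta }}{e^{-(1-\beta )k/2}}{k^{1-p}}$ and ${d_{k}^{2{\beta ^{\prime }}-1}}/{g_{k}}={e^{\beta }}{k^{-p}}$.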
Step 3. Define
\[ \eta :=\underset{0\le {t_{2}}\lt {t_{1}}\le {t_{2}}+1}{\sup }\frac{|Z(\mathbf{t})|}{({t_{1}^{\delta }}\vee 1){({t_{1}}-{t_{2}})^{\beta }}\left(|\log ({t_{1}}-{t_{2}}){|^{p}}\vee 1\right)}.\]
Then the inequality (20) holds a.s., and by Corollary 2, the random variable η has Gaussian-type tail estimates (21). □
4 Strongly consistent estimation of the drift parameter in Ornstein–Uhlenbeck-type processes
4.1 Statistical model description
We consider the tempered fractional processes (tfBm and tfBmII) as the driving noise in a Langevin-type stochastic differential equation of the form
(22)
\[ {Y_{t}}={y_{0}}+\theta {\int _{0}^{t}}{Y_{s}}\hspace{0.1667em}ds+\sigma {X_{t}},\hspace{1em}t\ge 0,\]
where ${y_{0}}\in \mathbb{R}$, $\theta \gt 0$, and $\sigma \gt 0$ are constants.
In our analysis, we set $X={X_{H,\lambda }}$ to be either ${B_{H,\lambda }^{I}}$ or ${B_{H,\lambda }^{I\hspace{-0.1667em}I}}$.
Similarly to the case of the Langevin equation driven by fBm, equation (22) admits a solution $Y=\{{Y_{t}},t\ge 0\}$, which can be represented as
(23)
\[ {Y_{t}}={e^{\theta t}}\left({y_{0}}+\sigma {\int _{0}^{t}}{e^{-\theta s}}\hspace{0.1667em}d{X_{s}}\right),\hspace{1em}t\ge 0.\]
The stochastic integral with respect to tfBm is defined pathwise in the Riemann–Stieltjes sense, since the integrand ${e^{\theta (t-s)}}$ is Lipschitz continuous, and the sample paths of both tfBm and tfBmII are Hölder continuous of order up to $H\wedge 1$.
By applying the integration-by-parts formula, the solution (23) can be equivalently rewritten as
(24)
\[ {Y_{t}}={y_{0}}{e^{\theta t}}+\sigma \theta {e^{\theta t}}{\int _{0}^{t}}{e^{-\theta s}}{X_{s}}\hspace{0.1667em}ds+\sigma {X_{t}}.\]
The integral ${\textstyle\int _{0}^{\infty }}{e^{-\theta s}}{X_{s}}\hspace{0.1667em}ds$ is well defined and a.s. finite. For a detailed analysis, we refer to Lemma 4 and inequalities (33) and (34) for the case of tfBm, and to Lemma 6 and inequality (41) for the case of tfBmII.
Throughout this section, we assume that the parameters σ, H, and λ are known and fixed, while the drift parameter $\theta \gt 0$ is unknown and subject to estimation. The goal is to estimate θ based on observations of the process Y. We consider two observational settings.
4.2 Continuous-time estimator
In this section, we consider the model (22) and assume that a trajectory $\{{Y_{t}},0\le t\le T\}$ is observed. We introduce the continuous-time estimator
(25)
\[ {\widehat{\theta }_{T}}=\frac{{Y_{T}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}.\]
Remark 2.
The estimator ${\widehat{\theta }_{T}}$ can be interpreted as the least squares estimator that formally minimizes the functional $F(\theta )={\textstyle\int _{0}^{T}}{\left({\dot{Y}_{t}}-\theta {Y_{t}}\right)^{2}}\hspace{0.1667em}dt$. It is straightforward to verify that the minimizer of this functional is given by ${\widehat{\theta }_{T}}=\frac{{\textstyle\int _{0}^{T}}{Y_{t}}d{Y_{t}}}{{\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}$. If the process Y is γ-Hölder continuous on $[0,T]$ for some $\gamma \gt \frac{1}{2}$, then the integral ${\textstyle\int _{0}^{T}}{Y_{t}}\hspace{0.1667em}d{Y_{t}}$ is well defined in the Young sense and satisfies ${\textstyle\int _{0}^{T}}{Y_{t}}\hspace{0.1667em}d{Y_{t}}=\frac{1}{2}\left({Y_{T}^{2}}-{Y_{0}^{2}}\right)$. However, for $H\lt \frac{1}{2}$, the required Hölder continuity of order greater than $\frac{1}{2}$ cannot be guaranteed. Despite this, the estimator defined by (25) remains well defined for all $H\gt 0$ and, as we show below, is strongly consistent.
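As a numerical illustration of the estimator (25), the sketch below simulates a trajectory of (22) and evaluates the ratio on a discretized path. Everything here is an illustrative assumption rather than the paper's setup: the driving noise X is a standard Brownian motion standing in for tfBm, the path is built from the integral representation (24) with trapezoidal quadrature, and the parameter values θ = 2, σ = 1, y₀ = 1, T = 10 are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, sigma, y0, T, n = 2.0, 1.0, 1.0, 10.0, 200_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

# Stand-in driving noise: a standard Brownian motion path with X_0 = 0.
X = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

# Solution path via the integral form (24):
#   Y_t = y0 e^{theta t} + sigma theta e^{theta t} int_0^t e^{-theta s} X_s ds + sigma X_t,
# with the time integral computed by the trapezoidal rule.
f = np.exp(-theta * t) * X
I = np.concatenate([[0.0], np.cumsum(0.5 * (f[:-1] + f[1:]) * dt)])
Y = y0 * np.exp(theta * t) + sigma * theta * np.exp(theta * t) * I + sigma * X

# Continuous-time estimator (25), again with trapezoidal quadrature in time.
theta_hat = (Y[-1] ** 2 - y0 ** 2) / (2.0 * np.sum(0.5 * (Y[:-1] ** 2 + Y[1:] ** 2) * dt))
```

Because both the numerator and the denominator grow like ${e^{2\theta T}}$, the ratio stabilizes near θ already at moderate horizons.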
Theorem 2.
The estimator ${\widehat{\theta }_{T}}$ defined by (25) is strongly consistent, i.e., ${\widehat{\theta }_{T}}\to \theta $ a.s. as $T\to \infty $.
Proof.
According to [27, Lemma 6], the following a.s. convergences hold as $T\to \infty $:
(26)
\[ {e^{-\theta T}}{Y_{T}}\to {\zeta _{\infty }}\hspace{1em}\text{a.s.},\]
(27)
\[ {e^{-2\theta T}}{\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt\to \frac{{\zeta _{\infty }^{2}}}{2\theta }\hspace{1em}\text{a.s.},\]
where
\[ {\zeta _{\infty }}:={y_{0}}+\sigma \theta {\int _{0}^{\infty }}{e^{-\theta s}}{X_{s}}\hspace{0.1667em}ds.\]
The random variable ${\zeta _{\infty }}$ is Gaussian and satisfies $0\lt |{\zeta _{\infty }}|\lt \infty $ almost surely. Multiplying the numerator and denominator of (25) by ${e^{-2\theta T}}$, and applying the limits from (26) and (27), we obtain
\[ {\widehat{\theta }_{T}}=\frac{{Y_{T}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}=\frac{{({e^{-\theta T}}{Y_{T}})^{2}}-{e^{-2\theta T}}{y_{0}^{2}}}{2{e^{-2\theta T}}{\textstyle\textstyle\int _{0}^{T}}{Y_{t}^{2}}\hspace{0.1667em}dt}\to \theta \hspace{1em}\text{a.s., as}\hspace{2.5pt}T\to \infty .\]
□
4.3 Discrete-time estimator
We now introduce a discrete-time counterpart of the estimator (25). To this end, consider a partition of the interval $[0,{n^{m-1}}]$ with mesh size ${n^{-1}}$, where $n\gt 1$ and $m\gt 1$ is fixed. This yields a grid of points defined by ${t_{k,n}}=k{n^{-1}}$ for $0\le k\le {n^{m}}$. From the continuous-time process $\{{Y_{t}}:0\le t\le {n^{m-1}}\}$, we observe only the values at these discrete points and, for simplicity, denote ${Y_{k,n}}:={Y_{{t_{k,n}}}}$.
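This observation scheme is easy to reproduce numerically. The sketch below (an illustration under the same stand-in assumptions as before: standard Brownian motion in place of the tempered noise, arbitrary values θ = 2, σ = 1, y₀ = 1, n = 1000, m = 4/3, and $n^m$ rounded to an integer number of grid steps) builds the grid ${t_{k,n}}=k/n$ and evaluates the Riemann-sum analogue of (25) from the sampled values ${Y_{k,n}}$ only:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, sigma, y0 = 2.0, 1.0, 1.0
n, m = 1000, 4.0 / 3.0
K = round(n ** m)            # number of grid steps; t_{K,n} = n^{m-1}
t = np.arange(K + 1) / n     # grid t_{k,n} = k/n on [0, n^{m-1}]
dt = 1.0 / n

# Stand-in noise and the corresponding solution path, via the form (24).
X = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(K))])
f = np.exp(-theta * t) * X
I = np.concatenate([[0.0], np.cumsum(0.5 * (f[:-1] + f[1:]) * dt)])
Y = y0 * np.exp(theta * t) + sigma * theta * np.exp(theta * t) * I + sigma * X

# Discrete-time estimator: Riemann sums replace the integrals in (25),
# using only the sampled values Y_{k,n}, 0 <= k <= K.
theta_tilde = n * (Y[K] ** 2 - y0 ** 2) / (2.0 * np.sum(Y[:K] ** 2))
```

The left Riemann sum slightly underestimates the growing integral, which produces the small positive bias of order ${n^{-1}}$ visible in the simulation tables of Section 5.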
Replacing the integrals in (25) with appropriate Riemann sums, we define the estimator
(28)
\[ {\widetilde{\theta }_{n}}(m)=\frac{n\left({Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}\right)}{2{\textstyle\sum _{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}}.\]
We analyze the asymptotic behavior of ${\widetilde{\theta }_{n}}(m)$ as $n\to \infty $. The main result of this section is stated below.
Theorem 3.
For any $m\gt 1$, the estimator ${\widetilde{\theta }_{n}}(m)$ is strongly consistent, i.e., ${\widetilde{\theta }_{n}}(m)\to \theta $ a.s. as $n\to \infty $.
The proof of this theorem follows the general scheme proposed in [19]. We begin with the following auxiliary result.
Lemma 3.
For the process Y, the following bounds hold: for any $t\gt 0$,
(29)
\[ \underset{0\le s\le t}{\sup }|{Y_{s}}|\le |{y_{0}}|{e^{\theta t}}+\sigma \theta {e^{\theta t}}{\int _{0}^{t}}{e^{-\theta s}}\underset{0\le u\le s}{\sup }|{X_{u}}|\hspace{0.1667em}ds+\sigma \underset{0\le s\le t}{\sup }|{X_{s}}|;\]
and for any $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$,
(30)
\[\begin{aligned}{}& \underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|\\ {} & \hspace{1em}\le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+\sigma \theta {e^{\theta u}}{\int _{0}^{u}}{e^{-\theta v}}\underset{0\le z\le v}{\sup }|{X_{z}}|dv+\sigma \underset{0\le z\le u}{\sup }|{X_{z}}|\right)du\\ {} & \hspace{2em}+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|.\end{aligned}\]
Proof.
To derive inequality (30), we proceed in two steps. First, using equation (22), we express the difference ${Y_{u}}-{Y_{k,n}}$ as
\[ {Y_{u}}-{Y_{k,n}}=\theta {\int _{\frac{k}{n}}^{u}}{Y_{s}}\hspace{0.1667em}ds+\sigma ({X_{u}}-{X_{k,n}}).\]
Then, we apply the bound from (29):
\[\begin{aligned}{}& \underset{\frac{k}{n}\le u\le s}{\sup }\left|{Y_{u}}-{Y_{k,n}}\right|\le \theta \underset{\frac{k}{n}\le u\le s}{\sup }{\int _{\frac{k}{n}}^{u}}\left|{Y_{v}}\right|dv+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|\\ {} & \hspace{1em}\le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+\sigma \theta {e^{\theta u}}{\int _{0}^{u}}{e^{-\theta v}}\underset{0\le z\le v}{\sup }|{X_{z}}|dv+\sigma \underset{0\le z\le u}{\sup }|{X_{z}}|\right)du\\ {} & \hspace{2em}+\sigma \underset{\frac{k}{n}\le u\le s}{\sup }\left|{X_{u}}-{X_{k,n}}\right|.\end{aligned}\]
□
We now proceed to analyze the two cases separately: tfBm and tfBmII.
4.4 Proof of Theorem 3 in the case of tfBm
Throughout this subsection, we assume that the model (22) is driven by tfBm, i.e., we consider the case $X={B_{H,\lambda }^{I}}$.
To simplify the notation, we denote by Ξ the class of nonnegative random variables ξ satisfying the following tail bound: there exist constants ${C_{1}}\gt 0$ and ${C_{2}}\gt 0$ such that ξ satisfies (7) for all $u\gt 0$.
Remark 3.
The class Ξ is closed under positive scaling, addition of constants, and finite sums: if $\xi ,\eta \in \Xi $ and $C\gt 0$, then $C\xi \in \Xi $, $\xi +C\in \Xi $, $\xi +\eta \in \Xi $. The claim follows from elementary tail inequalities; for instance,
\[\begin{array}{l}\displaystyle \mathsf{P}(\xi +\eta \gt u)\le \mathsf{P}(\xi \gt u/2)+\mathsf{P}(\eta \gt u/2),\\ {} \displaystyle \mathsf{P}(\xi +C\gt u)\le \mathsf{P}(\xi \gt u-C)\le {C_{1}}{e^{-{C_{2}}{(u-C)^{2}}}}\le {C^{\prime }_{1}}{e^{-{C^{\prime }_{2}}{u^{2}}}}\end{array}\]
for suitable constants ${C^{\prime }_{1}},{C^{\prime }_{2}}\gt 0$.
Recall the notation: $\beta =H\wedge 1$ if $H\ne 1$, and $\beta \in (0,1)$ is arbitrary if $H=1$.
Lemma 4.
For any $\delta \gt 0$ and $\alpha \in (0,\beta )$, there exists a random variable $\zeta \in \Xi $ such that the following bounds hold: for all $s\gt 0$,
(31)
\[ \underset{0\le u\le s}{\sup }|{Y_{u}}|\le \left({e^{\theta s}}+{s^{\delta }}\right)\zeta ,\]
and for all $n\ge 1$, $0\le k\le {n^{m}}-1$, and $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$,
(32)
\[ \underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|\le {n^{-\alpha }}\left({e^{\theta s}}+{s^{\delta }}\right)\zeta .\]
Proof.
It follows from (6) that for any $s\gt 0$, the following inequality holds:
(33)
\[ \underset{u\in [0,s]}{\sup }\left|{B_{H,\lambda }^{I}}(u)\right|\le \left({s^{\delta }}+1\right)\xi (\delta )\hspace{1em}\text{a.s.},\]
where the random variable $\xi =\xi (\delta )$ is nonnegative and belongs to Ξ for any $\delta \gt 0$. Hence, we have the bound
(34)
\[ {\int _{0}^{s}}{e^{-\theta u}}\underset{0\le v\le u}{\sup }|{X_{v}}|\hspace{0.1667em}du\le \xi (\delta ){\int _{0}^{\infty }}{e^{-\theta u}}\left({u^{\delta }}+1\right)du=:C\xi (\delta ).\]
Substituting (33) and (34) into (29) yields
(35)
\[ \underset{0\le u\le s}{\sup }\left|{Y_{u}}\right|\le \left|{y_{0}}\right|{e^{\theta s}}+C\sigma \theta {e^{\theta s}}\xi (\delta )+\sigma \left({s^{\delta }}+1\right)\xi (\delta ).\]
Inequality (31) then follows by observing that in the last term one can bound ${s^{\delta }}+1\le {s^{\delta }}+{e^{\theta s}}$, and by setting $\zeta :=|{y_{0}}|+C\sigma \theta \xi (\delta )+\sigma \xi (\delta )\in \Xi $.
Now, let $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$, so that $0\lt s-\frac{k}{n}\le \frac{1}{n}\lt 1$. Set $p=3$, ${t_{1}}=s$, and ${t_{2}}=\frac{k}{n}$ in (20). Then,
(36)
\[ |{X_{s}}-{X_{k,n}}|\le ({s^{\delta }}\vee 1){\left(s-\frac{k}{n}\right)^{\beta }}\left({\left|\log \left(s-\frac{k}{n}\right)\right|^{3}}\vee 1\right)\eta .\]
In order to bound the right-hand side of (36), note that for any $r\gt 0$, the function $f(t)={t^{r}}|\log t{|^{3}}$, $t\in (0,1)$, is bounded and attains its maximum at $t={e^{-3/r}}$. Therefore, for any $r\gt 0$, there exists a constant C such that ${t^{r}}|\log t{|^{3}}\le C$. Multiplying both sides by ${t^{\beta -r}}$ gives ${t^{\beta }}|\log t{|^{3}}\le C{t^{\beta -r}}$. Since ${t^{\beta }}$ as a function of β is strictly decreasing for $t\in (0,1)$, we also have ${t^{\beta }}\lt {t^{\beta -r}}$ for any $r\gt 0$. Hence,
\[ {t^{\beta }}\left(|\log t{|^{3}}\vee 1\right)\le C{t^{\beta -r}},\hspace{1em}t\in (0,1).\]
Now, restrict r to the interval $0\lt r\le \beta $, so that $\alpha :=\beta -r\in [0,\beta )$. Then
\[ {\left(s-\frac{k}{n}\right)^{\beta }}\left({\left|\log \left(s-\frac{k}{n}\right)\right|^{3}}\vee 1\right)\le C{\left(s-\frac{k}{n}\right)^{\alpha }}\le C{n^{-\alpha }}\]
for all $s\in \left(\frac{k}{n},\frac{k+1}{n}\right]$. Substituting this into (36) yields
(37)
\[ \underset{\frac{k}{n}\le u\le s}{\sup }|{X_{u}}-{X_{k,n}}|\le C{n^{-\alpha }}\left({s^{\delta }}+1\right)\eta .\]
Now apply (33), (34), and (37) to (30). This gives
(38)
\[\begin{aligned}{}\underset{\frac{k}{n}\le u\le s}{\sup }|{Y_{u}}-{Y_{k,n}}|& \le \theta {\int _{\frac{k}{n}}^{s}}\left(\left|{y_{0}}\right|{e^{\theta u}}+C\sigma \theta {e^{\theta u}}\xi (\delta )+\sigma ({s^{\delta }}+1)\xi (\delta )\right)du\\ {} & \hspace{1em}+C\sigma {n^{-\alpha }}({s^{\delta }}+1)\eta \\ {} & \le \theta {n^{-1}}\left(\left|{y_{0}}\right|{e^{\theta s}}+C\sigma \theta {e^{\theta s}}\xi (\delta )+\sigma ({s^{\delta }}+1)\xi (\delta )\right)\\ {} & \hspace{1em}+C\sigma {n^{-\alpha }}({s^{\delta }}+1)\eta .\end{aligned}\]
This inequality implies the bound (32), noting that ${n^{-1}}\le {n^{-\alpha }}$ (since $\alpha \lt \beta \le 1$), using the estimate ${s^{\delta }}+1\le {s^{\delta }}+{e^{\theta s}}$, and defining
\[ \zeta =\theta |{y_{0}}|+C\sigma {\theta ^{2}}\xi (\delta )+\sigma \theta \xi (\delta )+C\sigma \eta ,\]
which belongs to Ξ by Remark 3. □
Lemma 5.
For any $\alpha \in (0,\beta )$, there exist $\zeta \in \Xi $ and a positive integer ${n_{0}}$ such that, for all $n\ge {n_{0}}$, the following inequality holds:
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le {n^{-\alpha }}{e^{2\theta {n^{m-1}}}}{\zeta ^{2}}.\]
Proof.
We begin by observing that
(39)
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le {\int _{0}^{{n^{m-1}}}}{\sum \limits_{k=0}^{{n^{m}}-1}}\left|{\varphi _{n,k}}(s)\right|\hspace{0.1667em}ds,\]
where the summands are given by
\[ {\varphi _{n,k}}(s):=\left({Y_{s}^{2}}-{Y_{k,n}^{2}}\right){𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s),\hspace{1em}0\le k\le {n^{m}}-1.\]
Using the bounds established in Lemma 4, we estimate each term as follows:
\[\begin{aligned}{}\left|{\varphi _{n,k}}(s)\right|& \le \left|{Y_{s}}-{Y_{k,n}}\right|\left(\left|{Y_{s}}\right|+\left|{Y_{k,n}}\right|\right){𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\\ {} & \le 2\underset{\frac{k}{n}\le u\le s}{\sup }\left|{Y_{u}}-{Y_{k,n}}\right|\cdot \underset{0\le u\le s}{\sup }\left|{Y_{u}}\right|{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\\ {} & \le 2{n^{-\alpha }}{\left({e^{\theta s}}+{s^{\delta }}\right)^{2}}{\zeta ^{2}}{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s)\le 4{n^{-\alpha }}\left({e^{2\theta s}}+{s^{2\delta }}\right){\zeta ^{2}}{𝟙_{\left(\frac{k}{n},\frac{k+1}{n}\right]}}(s).\end{aligned}\]
Substituting this bound into (39) and integrating, we obtain
\[ \left|{\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds-\frac{1}{n}{\sum \limits_{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}\right|\le 4\hspace{0.1667em}{n^{-\alpha }}\left(\frac{{e^{2\theta {n^{m-1}}}}-1}{2\theta }+\frac{{n^{(2\delta +1)(m-1)}}}{2\delta +1}\right){\zeta ^{2}}.\]
Finally, note that for large n, the term in parentheses is $O({e^{2\theta {n^{m-1}}}})$, and any positive constants can be absorbed into the generic random variable $\zeta \in \Xi $ in accordance with Remark 3. □
Corollary 3.
Let ${R_{n}}:=\frac{1}{n}{\textstyle\sum _{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}-{\textstyle\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds$. Then
(40)
\[ \frac{{R_{n}}}{{e^{2\theta {n^{m-1}}}}}\to 0\hspace{1em}\textit{a.s., as}\hspace{2.5pt}n\to \infty .\]
Proof of Theorem 3.
Combining (40) with estimate (25), we obtain
\[\begin{aligned}{}{\widetilde{\theta }_{n}}(m)& =\frac{n\left({Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}\right)}{2{\textstyle\textstyle\sum _{k=0}^{{n^{m}}-1}}{Y_{k,n}^{2}}}=\frac{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}{2{\textstyle\textstyle\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds+2{R_{n}}}\\ {} & ={\left(\frac{2{\textstyle\textstyle\int _{0}^{{n^{m-1}}}}{Y_{s}^{2}}\hspace{0.1667em}ds}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}+2\cdot \frac{{R_{n}}}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}\right)^{-1}}\\ {} & ={\left(\frac{1}{{\widehat{\theta }_{{n^{m-1}}}}}+\frac{2{R_{n}}}{{e^{2\theta {n^{m-1}}}}}\cdot \frac{{e^{2\theta {n^{m-1}}}}}{{Y_{{n^{m-1}}}^{2}}-{y_{0}^{2}}}\right)^{-1}}\to \theta ,\hspace{1em}\text{as}\hspace{2.5pt}n\to \infty .\end{aligned}\]
Here, the convergence follows from Theorem 2, Corollary 3, and the convergence (26). □
4.5 Sketch of the proof of Theorem 3 for the tfBmII case
The proof of Theorem 3 for the tfBmII process follows the same strategy as in the case of the tfBm process. Lemma 6 below is established in a manner analogous to Lemma 4 for the process ${B_{H,\lambda }^{I}}$. The key difference is that inequality (8) is used in place of (6).
Applying (8) with $p=2$, we obtain the following bound, which replaces (33):
\[ \underset{u\in [0,s]}{\sup }|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(u)|\le \left({s^{\frac{1}{2}}}{({\log ^{+}}s)^{2}}+1\right)\xi ,\hspace{1em}\text{a.s.},\]
where $\xi \in \Xi $.
Furthermore, using the elementary inequality
\[ {({\log ^{+}}s)^{2}}\le {C_{\epsilon }}{s^{2\epsilon }},\hspace{1em}s\gt 0,\hspace{2.5pt}\epsilon \gt 0,\]
and setting $\delta =\frac{1}{2}+2\epsilon $, while incorporating the constant ${C_{\epsilon }}$ into the random variable $\xi \in \Xi $, we arrive at
(41)
\[ \underset{u\in [0,s]}{\sup }|{B_{H,\lambda }^{I\hspace{-0.1667em}I}}(u)|\le \left({s^{\delta }}+1\right)\xi ,\hspace{1em}\text{a.s.}\]
Unlike (33), the bound in (41) holds only for $\delta \gt \frac{1}{2}$. However, this restriction does not affect the subsequent proofs. An analogue of Lemma 4 can therefore be formulated as follows.
Lemma 6.
For any $\delta \gt \frac{1}{2}$ and $\alpha \in (0,\beta )$, there exists $\zeta \in \Xi $ such that the bounds (31) and (32) hold for the solution of equation (22) driven by $X={B_{H,\lambda }^{I\hspace{-0.1667em}I}}$.
The remaining arguments for the tfBmII case proceed in the same way as for the tfBm process.
5 Simulation
To verify and illustrate the theoretical results, we investigate the behavior of the estimator ${\widetilde{\theta }_{n}}(m)$ for the drift parameter in the Ornstein–Uhlenbeck process driven by tfBm and tfBmII. All simulations were conducted using the R programming language.
The generation of tfBm and tfBmII trajectories was based on the Davies–Harte method, which employs circulant matrix embedding and is widely used for exact simulation of fBm. This method was originally introduced in [9] and subsequently refined in [11] and [33]. A user-friendly and comprehensive implementation can be found in [10]. To adapt this algorithm from fBm to tfBm and tfBmII, we applied several modifications, including the use of different covariance matrices (see (4) and (5)) and the incorporation of scaling factors arising from the scaling properties of tempered fractional Brownian motions (see (3)).
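The circulant-embedding idea behind the Davies–Harte method can be sketched in a few lines. The snippet below is a minimal NumPy illustration for plain fractional Gaussian noise, not the authors' R implementation, and it omits the tempering and scaling modifications described above: the Toeplitz covariance of the increments is embedded into a 2N×2N circulant matrix, which the FFT diagonalizes, and a Gaussian sample is read off from its spectrum.

```python
import numpy as np

def circulant_embedding_sample(acov, rng):
    """Draw one stationary Gaussian vector with autocovariance acov[0..N]
    via circulant embedding (the idea behind the Davies-Harte method)."""
    N = len(acov) - 1
    # First row of the 2N x 2N circulant embedding of the Toeplitz covariance.
    c = np.concatenate([acov, acov[-2:0:-1]])
    lam = np.fft.fft(c).real                    # circulant eigenvalues
    if lam.min() < -1e-8:
        raise ValueError("embedding is not nonnegative definite")
    lam = np.clip(lam, 0.0, None)
    z = rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)
    w = np.fft.fft(np.sqrt(lam / (2 * N)) * z)
    return w.real[: N + 1]                      # covariance of this slice ~ acov

def fgn_acov(H, N):
    """Autocovariance of fractional Gaussian noise with Hurst index H."""
    k = np.arange(N + 1, dtype=float)
    return 0.5 * (np.abs(k - 1) ** (2 * H) - 2 * k ** (2 * H) + (k + 1) ** (2 * H))

rng = np.random.default_rng(1)
incs = circulant_embedding_sample(fgn_acov(0.7, 4096), rng)[:-1]
fbm = np.concatenate([[0.0], np.cumsum(incs)])  # fBm sample path on integer times
```

Adapting this sketch to tfBm or tfBmII amounts to swapping in the tempered covariance functions (4)–(5) and applying the scaling relation (3), as described above.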
For equation (22), we fixed the parameters $\theta =2$ and $\sigma =1$. For each type of tempered fractional Brownian motion (tfBm and tfBmII), we considered 18 combinations of parameters, with $\lambda \in \{0.05,0.25,0.5\}$ and $H\in \{0.2,0.4,0.6,0.8,1.2,1.4\}$.
For each parameter set, 1000 sample paths were simulated over the interval $[0,10]$. We considered two choices of the parameter m: $m=4/3$ and $m=5/4$.
In each case, we computed 1000 estimates ${\widetilde{\theta }_{n}}$ using formula (28). For the resulting collection of estimates, we calculated the sample mean and standard deviation. The results are summarized in Tables 1–4.
The simulation outcomes support the theoretical findings. The bias of the estimator decreases with increasing n. Furthermore, for a given value of n, the biases exhibit minor fluctuations around specific values: approximately 0.0385 and 0.0020 for $n=100$ and $n=1000$, respectively, in the case $m=4/3$; and 0.0403, 0.0038, and 0.0002 for $n=100$, $n=1000$, and $n=10000$, respectively, in the case $m=5/4$. The standard deviations are consistently small and close to zero, which is typical for nonergodic models driven by fractional noises. In most scenarios, the standard deviation decreases as n increases.
Table 1.
Bias of ${\widetilde{\theta }_{n}}$ for $m=4/3$
| H | n | tfBm, $\lambda =0.05$ | tfBm, $\lambda =0.25$ | tfBm, $\lambda =0.5$ | tfBmII, $\lambda =0.05$ | tfBmII, $\lambda =0.25$ | tfBmII, $\lambda =0.5$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.2 | 100 | 0.038459904 | 0.038459886 | 0.038459903 | 0.038459898 | 0.038459898 | 0.038459958 |
| | 1000 | 0.002000002 | 0.001999986 | 0.002000002 | 0.001999996 | 0.001999997 | 0.002000053 |
| 0.4 | 100 | 0.038459892 | 0.038459896 | 0.038459903 | 0.038459902 | 0.038459236 | 0.038459906 |
| | 1000 | 0.001999991 | 0.001999995 | 0.002000002 | 0.002000001 | 0.001999385 | 0.002000005 |
| 0.6 | 100 | 0.038459902 | 0.038459900 | 0.038459901 | 0.038459899 | 0.038459900 | 0.038459902 |
| | 1000 | 0.002000001 | 0.001999999 | 0.002000000 | 0.001999999 | 0.001999999 | 0.002000001 |
| 0.8 | 100 | 0.038459902 | 0.038459903 | 0.038459900 | 0.038459904 | 0.038459902 | 0.038459901 |
| | 1000 | 0.002000001 | 0.002000002 | 0.001999999 | 0.002000003 | 0.002000001 | 0.002000000 |
| 1.2 | 100 | 0.038459895 | 0.038459896 | 0.038459901 | 0.038459908 | 0.038459905 | 0.038459899 |
| | 1000 | 0.001999994 | 0.001999995 | 0.002000000 | 0.002000007 | 0.002000003 | 0.001999998 |
| 1.4 | 100 | 0.038459885 | 0.038459900 | 0.038459902 | 0.038459894 | 0.038459923 | 0.038459906 |
| | 1000 | 0.001999985 | 0.001999999 | 0.002000001 | 0.001999993 | 0.002000021 | 0.002000005 |
Table 2.
Standard deviation of ${\widetilde{\theta }_{n}}$ for $m=4/3$
| H | n | tfBm, $\lambda =0.05$ | tfBm, $\lambda =0.25$ | tfBm, $\lambda =0.5$ | tfBmII, $\lambda =0.05$ | tfBmII, $\lambda =0.25$ | tfBmII, $\lambda =0.5$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.2 | 100 | $1.0850\cdot {10^{-7}}$ | $3.6756\cdot {10^{-7}}$ | $1.2962\cdot {10^{-7}}$ | $2.0551\cdot {10^{-7}}$ | $1.1776\cdot {10^{-7}}$ | $1.7373\cdot {10^{-6}}$ |
| | 1000 | $1.0981\cdot {10^{-7}}$ | $3.3239\cdot {10^{-7}}$ | $1.3093\cdot {10^{-7}}$ | $2.0221\cdot {10^{-7}}$ | $1.1270\cdot {10^{-7}}$ | $1.6145\cdot {10^{-6}}$ |
| 0.4 | 100 | $2.5119\cdot {10^{-7}}$ | $1.4829\cdot {10^{-7}}$ | $1.0550\cdot {10^{-7}}$ | $5.4634\cdot {10^{-8}}$ | $2.1149\cdot {10^{-5}}$ | $1.2389\cdot {10^{-7}}$ |
| | 1000 | $2.3847\cdot {10^{-7}}$ | $1.4046\cdot {10^{-7}}$ | $1.0164\cdot {10^{-7}}$ | $5.2696\cdot {10^{-8}}$ | $1.9552\cdot {10^{-5}}$ | $1.1813\cdot {10^{-7}}$ |
| 0.6 | 100 | $4.4196\cdot {10^{-8}}$ | $5.9572\cdot {10^{-8}}$ | $8.9885\cdot {10^{-8}}$ | $4.5798\cdot {10^{-8}}$ | $1.3487\cdot {10^{-7}}$ | $6.9116\cdot {10^{-8}}$ |
| | 1000 | $4.2560\cdot {10^{-8}}$ | $5.7832\cdot {10^{-8}}$ | $8.6578\cdot {10^{-8}}$ | $4.4102\cdot {10^{-8}}$ | $1.2977\cdot {10^{-7}}$ | $6.6270\cdot {10^{-8}}$ |
| 0.8 | 100 | $3.9826\cdot {10^{-8}}$ | $3.3919\cdot {10^{-8}}$ | $3.2389\cdot {10^{-8}}$ | $1.0509\cdot {10^{-7}}$ | $4.6020\cdot {10^{-8}}$ | $2.6810\cdot {10^{-8}}$ |
| | 1000 | $3.8418\cdot {10^{-8}}$ | $3.2792\cdot {10^{-8}}$ | $3.1287\cdot {10^{-8}}$ | $1.0135\cdot {10^{-7}}$ | $4.4359\cdot {10^{-8}}$ | $2.5867\cdot {10^{-8}}$ |
| 1.2 | 100 | $1.6268\cdot {10^{-7}}$ | $1.4903\cdot {10^{-7}}$ | $1.5265\cdot {10^{-8}}$ | $2.4087\cdot {10^{-7}}$ | $1.6554\cdot {10^{-7}}$ | $5.9973\cdot {10^{-8}}$ |
| | 1000 | $1.5693\cdot {10^{-7}}$ | $1.4377\cdot {10^{-7}}$ | $1.4725\cdot {10^{-8}}$ | $2.3233\cdot {10^{-7}}$ | $1.5969\cdot {10^{-7}}$ | $5.7856\cdot {10^{-8}}$ |
| 1.4 | 100 | $4.8088\cdot {10^{-7}}$ | $2.1822\cdot {10^{-8}}$ | $1.8087\cdot {10^{-8}}$ | $3.2764\cdot {10^{-7}}$ | $5.4250\cdot {10^{-7}}$ | $1.6412\cdot {10^{-7}}$ |
| | 1000 | $4.6385\cdot {10^{-7}}$ | $2.1051\cdot {10^{-8}}$ | $1.7446\cdot {10^{-8}}$ | $3.1606\cdot {10^{-7}}$ | $5.2330\cdot {10^{-7}}$ | $1.5831\cdot {10^{-7}}$ |
Table 3.
Bias of ${\widetilde{\theta }_{n}}$ for $m=5/4$
| H | n | tfBm, $\lambda =0.05$ | tfBm, $\lambda =0.25$ | tfBm, $\lambda =0.5$ | tfBmII, $\lambda =0.05$ | tfBmII, $\lambda =0.25$ | tfBmII, $\lambda =0.5$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.2 | 100 | 0.040330582 | 0.040330576 | 0.040330592 | 0.040330554 | 0.040330575 | 0.040330443 |
| | 1000 | 0.003804569 | 0.003804564 | 0.003804579 | 0.003804543 | 0.003804563 | 0.003804407 |
| | 10000 | 0.000200005 | 0.000200001 | 0.000200015 | 0.000199980 | 0.000199999 | 0.000199840 |
| 0.4 | 100 | 0.040330593 | 0.040330573 | 0.040330578 | 0.040330580 | 0.040330582 | 0.040330569 |
| | 1000 | 0.003804580 | 0.003804561 | 0.003804566 | 0.003804568 | 0.003804570 | 0.003804557 |
| | 10000 | 0.000200016 | 0.000199997 | 0.000200002 | 0.000200004 | 0.000200006 | 0.000199993 |
| 0.6 | 100 | 0.040330587 | 0.040330574 | 0.040330574 | 0.040330576 | 0.040330580 | 0.040330584 |
| | 1000 | 0.003804574 | 0.003804562 | 0.003804562 | 0.003804565 | 0.003804568 | 0.003804572 |
| | 10000 | 0.000200011 | 0.000199999 | 0.000199998 | 0.000200001 | 0.000200004 | 0.000200008 |
| 0.8 | 100 | 0.040330574 | 0.040330575 | 0.040330577 | 0.040330577 | 0.040330598 | 0.040330576 |
| | 1000 | 0.003804563 | 0.003804563 | 0.003804565 | 0.003804565 | 0.003804585 | 0.003804564 |
| | 10000 | 0.000199999 | 0.000199999 | 0.000200001 | 0.000200001 | 0.000200021 | 0.000200000 |
| 1.2 | 100 | 0.040330579 | 0.040330578 | 0.040330576 | 0.040330575 | 0.040330577 | 0.040330567 |
| | 1000 | 0.003804567 | 0.003804566 | 0.003804564 | 0.003804564 | 0.003804565 | 0.003804556 |
| | 10000 | 0.000200003 | 0.000200002 | 0.000200000 | 0.000200000 | 0.000200001 | 0.000199992 |
| 1.4 | 100 | 0.040330564 | 0.040330574 | 0.040330576 | 0.040330505 | 0.040330569 | 0.040330574 |
| | 1000 | 0.003804553 | 0.003804562 | 0.003804565 | 0.003804496 | 0.003804557 | 0.003804563 |
| | 10000 | 0.000199989 | 0.000199998 | 0.000200001 | 0.000199932 | 0.000199994 | 0.000199999 |
Table 4.
Standard deviation of ${\widetilde{\theta }_{n}}$ for $m=5/4$
| H | n | tfBm, $\lambda =0.05$ | tfBm, $\lambda =0.25$ | tfBm, $\lambda =0.5$ | tfBmII, $\lambda =0.05$ | tfBmII, $\lambda =0.25$ | tfBmII, $\lambda =0.5$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0.2 | 100 | $1.2076\cdot {10^{-7}}$ | $5.8049\cdot {10^{-8}}$ | $4.8835\cdot {10^{-7}}$ | $6.9047\cdot {10^{-7}}$ | $1.8000\cdot {10^{-7}}$ | $3.4982\cdot {10^{-6}}$ |
| | 1000 | $1.1527\cdot {10^{-7}}$ | $5.5964\cdot {10^{-8}}$ | $4.5472\cdot {10^{-7}}$ | $6.5957\cdot {10^{-7}}$ | $1.8392\cdot {10^{-7}}$ | $4.2790\cdot {10^{-6}}$ |
| | 10000 | $1.1419\cdot {10^{-7}}$ | $5.5662\cdot {10^{-8}}$ | $4.5446\cdot {10^{-7}}$ | $6.4627\cdot {10^{-7}}$ | $1.8377\cdot {10^{-7}}$ | $4.3842\cdot {10^{-6}}$ |
| 0.4 | 100 | $7.1789\cdot {10^{-7}}$ | $8.0580\cdot {10^{-8}}$ | $8.1488\cdot {10^{-8}}$ | $8.5452\cdot {10^{-8}}$ | $2.0327\cdot {10^{-7}}$ | $1.1210\cdot {10^{-7}}$ |
| | 1000 | $6.9960\cdot {10^{-7}}$ | $7.7201\cdot {10^{-8}}$ | $7.9059\cdot {10^{-8}}$ | $8.1132\cdot {10^{-8}}$ | $1.9951\cdot {10^{-7}}$ | $1.0461\cdot {10^{-7}}$ |
| | 10000 | $6.9327\cdot {10^{-7}}$ | $7.6983\cdot {10^{-8}}$ | $7.8546\cdot {10^{-8}}$ | $8.0857\cdot {10^{-8}}$ | $1.9895\cdot {10^{-7}}$ | $1.0428\cdot {10^{-7}}$ |
| 0.6 | 100 | $3.4083\cdot {10^{-7}}$ | $7.5283\cdot {10^{-8}}$ | $4.2984\cdot {10^{-8}}$ | $8.0141\cdot {10^{-8}}$ | $6.2212\cdot {10^{-8}}$ | $2.6325\cdot {10^{-7}}$ |
| | 1000 | $3.2785\cdot {10^{-7}}$ | $7.2589\cdot {10^{-8}}$ | $4.1538\cdot {10^{-8}}$ | $7.7401\cdot {10^{-8}}$ | $5.9969\cdot {10^{-8}}$ | $2.5024\cdot {10^{-7}}$ |
| | 10000 | $3.2664\cdot {10^{-7}}$ | $7.2358\cdot {10^{-8}}$ | $4.1403\cdot {10^{-8}}$ | $7.7070\cdot {10^{-8}}$ | $5.9765\cdot {10^{-8}}$ | $2.4914\cdot {10^{-7}}$ |
| 0.8 | 100 | $3.3002\cdot {10^{-8}}$ | $2.2949\cdot {10^{-8}}$ | $2.2554\cdot {10^{-8}}$ | $4.3186\cdot {10^{-8}}$ | $1.3405\cdot {10^{-6}}$ | $2.2509\cdot {10^{-8}}$ |
| | 1000 | $3.1831\cdot {10^{-8}}$ | $2.2007\cdot {10^{-8}}$ | $2.1749\cdot {10^{-8}}$ | $4.1645\cdot {10^{-8}}$ | $1.2854\cdot {10^{-6}}$ | $2.1748\cdot {10^{-8}}$ |
| | 10000 | $3.1717\cdot {10^{-8}}$ | $2.1920\cdot {10^{-8}}$ | $2.1670\cdot {10^{-8}}$ | $4.1495\cdot {10^{-8}}$ | $1.2803\cdot {10^{-6}}$ | $2.1670\cdot {10^{-8}}$ |
| 1.2 | 100 | $8.0339\cdot {10^{-8}}$ | $5.9244\cdot {10^{-8}}$ | $1.1974\cdot {10^{-8}}$ | $7.7352\cdot {10^{-8}}$ | $4.6121\cdot {10^{-8}}$ | $1.9655\cdot {10^{-7}}$ |
| | 1000 | $7.7497\cdot {10^{-8}}$ | $5.7172\cdot {10^{-8}}$ | $1.1550\cdot {10^{-8}}$ | $7.4610\cdot {10^{-8}}$ | $4.4483\cdot {10^{-8}}$ | $1.8957\cdot {10^{-7}}$ |
| | 10000 | $7.7218\cdot {10^{-8}}$ | $5.6967\cdot {10^{-8}}$ | $1.1509\cdot {10^{-8}}$ | $7.4342\cdot {10^{-8}}$ | $4.4323\cdot {10^{-8}}$ | $1.8889\cdot {10^{-7}}$ |
| 1.4 | 100 | $5.1044\cdot {10^{-7}}$ | $1.5895\cdot {10^{-7}}$ | $2.4863\cdot {10^{-8}}$ | $2.1536\cdot {10^{-6}}$ | $2.1307\cdot {10^{-7}}$ | $1.0808\cdot {10^{-7}}$ |
| 1000 | $4.9236\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $1.5332\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $2.3984\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ | $2.0777\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ | $2.0552\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $1.0425\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | |
| 10000 | $4.9059\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $1.5277\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $2.3898\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-8}}$ | $2.0702\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-6}}$ | $2.0479\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | $1.0388\hspace{-0.1667em}\cdot \hspace{-0.1667em}{10^{-7}}$ | |