In this paper, the fractional Cox–Ingersoll–Ross process on $\mathbb{R}_{+}$ for $H<1/2$ is defined as the square of the pointwise limit, as ε↓0, of the processes Yε satisfying the SDE $dY_\varepsilon(t)=\big(\frac{k}{Y_\varepsilon(t)\mathbf{1}_{\{Y_\varepsilon(t)>0\}}+\varepsilon}-aY_\varepsilon(t)\big)\,dt+\sigma\,dB^{H}(t)$. Properties of this limit process are studied, and SDEs for both the limit process and the fractional Cox–Ingersoll–Ross process are obtained.
The Cox–Ingersoll–Ross (CIR) process, that was first introduced and studied by Cox, Ingersoll and Ross in papers [4–6], can be defined as the process X={Xt,t≥0} that satisfies the stochastic differential equation of the form
$dX_t=a(b-X_t)\,dt+\sigma\sqrt{X_t}\,dW_t,\qquad X_0,a,b,\sigma>0,$
where W={Wt,t≥0} is the Wiener process.
The CIR process was originally proposed as a model of interest rate evolution, and in this framework the parameters a, b and σ are interpreted as follows: b is the “mean” that characterizes the level around which the trajectories of X evolve in the long run; a is the “speed of adjustment” that measures the velocity of regrouping around b; σ is the instantaneous volatility which shows the amplitude of the randomness entering the system. Another common application is stochastic volatility modeling in the Heston model proposed in [10] (an extensive bibliography on the subject can be found in [11] and [12]).
For the sake of simplicity, we shall use another parametrization of the equation (1), namely
$dX_t=(k-aX_t)\,dt+\sigma\sqrt{X_t}\,dW_t,\qquad X_0,k,a,\sigma>0.$
According to [9], if the condition 2k≥σ2, sometimes referred to as the Feller condition, holds, then the CIR process is strictly positive. It is also well-known that this process is ergodic and has a stationary distribution.
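As a quick illustration (not part of the original exposition), the dynamics in parametrization (2) can be simulated with a simple Euler scheme; the function name and parameter values below are ours, and the square root is taken of the positive part of the state ("full truncation") so that the discretized path stays well defined:

```python
import numpy as np

# Hypothetical Euler sketch for the CIR process in parametrization (2):
# dX_t = (k - a X_t) dt + sigma * sqrt(X_t) dW_t.
# sqrt is applied to max(X, 0) so a step that dips below zero does not
# break the scheme.
def simulate_cir(x0, k, a, sigma, T, n, rng):
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + (k - a * x[i]) * dt + sigma * np.sqrt(max(x[i], 0.0)) * dw
    return x

rng = np.random.default_rng(0)
# Here 2k = 2 >= sigma^2 = 0.25, so the Feller condition holds and the
# continuous-time process is strictly positive.
path = simulate_cir(x0=1.0, k=1.0, a=1.0, sigma=0.5, T=5.0, n=5000, rng=rng)
```

The simulated path should fluctuate around the long-run level k/a.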
However, in reality the dynamics of financial markets is characterized by the so-called “memory phenomenon”, which cannot be reflected by the standard CIR model (for more details on financial markets with memory, see [1, 2, 7, 21]). Therefore, it is reasonable to introduce a fractional Cox–Ingersoll–Ross process, i.e. to modify equation (2) by replacing (in some sense) the standard Wiener process with the fractional Brownian motion $B^{H}=\{B^{H}_t,t\ge 0\}$, i.e. with a centered Gaussian process with the covariance function $E[B^{H}_tB^{H}_s]=\frac{1}{2}\big(t^{2H}+s^{2H}-|t-s|^{2H}\big)$.
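Since sampled paths of B^H are useful for experiments with such models, here is a minimal sketch (our own, with illustrative parameters) of exact simulation of fractional Brownian motion on a grid by Cholesky factorization of the covariance function above:

```python
import numpy as np

# Exact simulation of B^H on a grid via Cholesky factorization of
# E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2.
# O(n^3) cost; fine for illustration (circulant embedding is faster).
def fbm_sample(H, T, n, rng):
    t = np.linspace(T / n, T, n)              # grid without t = 0
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # tiny jitter
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])  # B^H(0) = 0

rng = np.random.default_rng(1)
b = fbm_sample(H=0.3, T=1.0, n=200, rng=rng)
```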
There are several approaches to the definition of the fractional Cox–Ingersoll–Ross process. The paper [15] introduces the so-called “rough-path” approach; in [13, 14] it is defined as a time-changed standard CIR process with inverse stable subordinator; another definition is presented in [8] as a part of discussion on rough Heston models.
A simpler pathwise approach is presented in [17] and [18]. There, the fractional Cox–Ingersoll–Ross process was defined as the square of the solution of the SDE
$dY_t=\frac{1}{2}\Big(\frac{k}{Y_t}-aY_t\Big)\,dt+\frac{\sigma}{2}\,dB^{H}_t,\qquad Y_0>0,$
until the first moment of hitting zero, and as zero after this moment (the latter convention was necessary since, in the case k>0, the existence of a solution of (3) could not be guaranteed after the first moment of reaching zero).
The reason for such a definition lies in the fact that the fractional Cox–Ingersoll–Ross process X defined in this way satisfies pathwise (until the first moment of hitting zero) the equation
$dX_t=(k-aX_t)\,dt+\sigma\sqrt{X_t}\circ dB^{H}_t,\qquad X_0=Y_0^2>0,$
where the integral with respect to the fractional Brownian motion is considered as the pathwise Stratonovich integral.
It was shown that if k>0 and H>1/2, such a process is strictly positive and never hits zero; if k>0 and H<1/2, the probability that there is no hitting of zero on a fixed interval [0,T], T>0, tends to 1 as k→∞.
The special case k=0 was considered in [17]. In this situation Y is the well-known fractional Ornstein–Uhlenbeck process (for more details see [3]); if a≥0, the fractional Cox–Ingersoll–Ross process hits zero almost surely, and if a<0, the probability of hitting zero is strictly between 0 and 1.
However, such a definition has a significant disadvantage: according to it, the process remains at zero after reaching that level, and for H<1/2 this case cannot be excluded. In this paper we generalize the approach presented in [17] and [18] in order to resolve this issue.
We define the fractional CIR process on R+ for H<1/2 as the square of Y which is the pointwise limit as ε↓0 of the processes Yε that satisfy the SDE of the following type:
$dY_\varepsilon(t)=\frac{1}{2}\Big(\frac{k}{Y_\varepsilon(t)\mathbf{1}_{\{Y_\varepsilon(t)>0\}}+\varepsilon}-aY_\varepsilon(t)\Big)\,dt+\frac{\sigma}{2}\,dB^{H}(t),\qquad Y_\varepsilon(0)=Y_0>0,$
where a, k, σ>0.
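A minimal Euler sketch of this ε-approximation (the helper name is ours; a standard Brownian path is used below purely as a placeholder for the driving path of B^H, which any fBm generator could supply):

```python
import numpy as np

# Euler scheme for
# dY_eps = (1/2)(k / (Y_eps 1_{Y_eps > 0} + eps) - a Y_eps) dt + (sigma/2) dB^H.
def euler_eps_approximation(y0, k, a, sigma, eps, bh, dt):
    y = np.empty(len(bh))
    y[0] = y0
    for i in range(len(bh) - 1):
        drift = 0.5 * (k / (y[i] * (y[i] > 0.0) + eps) - a * y[i])
        y[i + 1] = y[i] + drift * dt + 0.5 * sigma * (bh[i + 1] - bh[i])
    return y

rng = np.random.default_rng(2)
dt, n = 0.001, 1000
# placeholder driver: a Brownian path (replace with a sampled fBm path)
bh = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
y = euler_eps_approximation(y0=1.0, k=1.0, a=1.0, sigma=1.0, eps=0.01, bh=bh, dt=dt)
```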
We prove that this limit indeed exists, is nonnegative a.s. and is positive a.e. with respect to the Lebesgue measure a.s. Moreover, Y is continuous and satisfies the equation of the type
$Y(t)=Y(\alpha)+\frac{1}{2}\int_{\alpha}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)\,ds+\frac{\sigma}{2}\big(B^{H}(t)-B^{H}(\alpha)\big)$
for all t∈[α,β] where (α,β) is any interval of Y’s positiveness.
The possibility of obtaining an equation of the form (3) is restricted by the obscure structure of the set {t≥0 | Y(t)=0}, which is connected to the structure of the level sets of the fractional Brownian motion. For some results on the latter see, for example, [19].
This paper is organised as follows.
In Section 1 we give preliminary remarks on results concerning the existence and uniqueness of solutions of SDEs driven by an additive fractional Brownian motion, and explain the connection of this paper to [17] and [18].
In Section 2 we construct the square root process as the limit of approximations. In particular, this section contains a variant of the comparison lemma and a uniform bound for all moments of the prelimit processes.
In Section 3 we consider properties of the square root process. We prove that Y is nonnegative, positive a.e., and continuous.
In Section 4 we give some remarks concerning the equation for the limit square root process on R+. We obtain the equation for this process with the noise in the form of a sum of increments of the fractional Brownian motion over the intervals of Y's positiveness.
Section 5 is fully devoted to the pathwise definition and equation of the fractional Cox–Ingersoll–Ross process. We prove that on each interval of positiveness the process satisfies the CIR SDE with the integral with respect to a fractional Brownian motion, considered as the pathwise Stratonovich integral. The equation on an arbitrary finite interval is also obtained, with the integral with respect to a fractional Brownian motion considered as the sum of pathwise Stratonovich integrals on intervals of positiveness.
Fractional Cox–Ingersoll–Ross process until the first moment of zero hitting
Let BH={BH(t),t≥0} be a fractional Brownian motion with an arbitrary Hurst index H∈(0,1). Consider the process Y={Y(t),t≥0}, such that
$dY(t)=\frac{1}{2}\Big(\frac{k}{Y(t)}-aY(t)\Big)\,dt+\frac{\sigma}{2}\,dB^{H}(t),\qquad t\ge 0,\ \ Y_0,a,k,\sigma>0.$
Note that, according to [20], the sufficient conditions that guarantee the existence and the uniqueness of the strong solution of the equation
Z(t)=z+∫0tf(s,Z(s))ds+BH(t),t∈[0,T],
are as follows:
(i) for H<1/2: linear growth condition:
|f(t,z)|≤C(1+|z|),
where C is a positive constant;
(ii) for H>1/2: Hölder continuity:
|f(t1,z1)−f(t2,z2)|≤C(|z1−z2|α+|t1−t2|γ),
where C is a positive constant, z1,z2∈R, t1,t2∈[0,T], 1>α>1−1/(2H), γ>H−1/2.
The function
$f(y)=\frac{k}{y}-ay$
satisfies both (i) and (ii) for all y∈(δ,+∞), where δ∈(0,Y0) is arbitrary. Therefore, for all H∈(0,1), the unique strong solution of (4) exists on [0,T] until the first moment of hitting the level δ∈(0,Y0) and, since δ is arbitrary, it exists until the first moment of hitting zero.
Let $\tau:=\sup\{t>0\ |\ \forall s\in[0,t):Y(s)>0\}$ be the first moment of hitting zero by Y. The fractional Cox–Ingersoll–Ross process was defined in [18] as follows.
The fractional Cox–Ingersoll–Ross (CIR) process is the process X={X(t),t≥0}, such that
X(t)=Y2(t)1{t<τ},
where Y is the solution of the equation (4).
Further, Y will be sometimes referred to as the square root process, as it is indeed a square root of the fractional CIR process.
It is known ([18], Theorem 2) that if H>1/2, the process Y is strictly positive a.s.; therefore the fractional Cox–Ingersoll–Ross process is simply Y²(t), t≥0.
However, in the case H<1/2 the process Y may hit zero. In such a situation, according to (7), the corresponding trajectories of the fractional Cox–Ingersoll–Ross process remain at zero after this moment, which is an undesirable property for financial applications.
Our goal is to modify the definition of the fractional CIR process in order to remove the problem mentioned above.
Construction of the square root process on R+ as a limit of ε-approximations
Consider the process Yε={Yε(t),t∈[0,T]} that satisfies the equation of the form
$Y_\varepsilon(t)=Y_0+\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds-a\int_0^{t}Y_\varepsilon(s)\,ds+\sigma B^{H}(t),\qquad t\ge 0,$
where Y0, k, a, σ>0 and BH={BH(t), t≥0} is a fractional Brownian motion with H∈(0,1/2).
We will sometimes call Yε an ε-approximation of the square-root process Y.
For any T>0, the function fε: R→R such that
$f_\varepsilon(y)=\frac{k}{y\mathbf{1}_{\{y>0\}}+\varepsilon}-ay$
satisfies the conditions (5) and (6), therefore the strong solution of (8) exists and is unique.
The goal of this section is to prove that there is a pointwise limit of Yε as ε→0.
First, let us prove the analogue of the comparison Lemma.
Assume that continuous random processes Y1={Y1(t),t≥0} and Y2={Y2(t),t≥0} satisfy equations of the form
$Y_i(t)=Y_0+\int_0^{t}f_i(Y_i(s))\,ds+\int_0^{t}\alpha_i(s)\,ds+\sigma B^{H}(t),\qquad t\ge 0,\ i=1,2,$
where BH={BH(t),t≥0} is a fractional Brownian motion, Y0, σ>0 are constants, αi={αi(t),t≥0}, i=1,2, are continuous random processes and f1,f2: R→R are continuous functions such that:
(i) for all y∈R: f1(y)<f2(y);
(ii) for all ω∈Ω and all t≥0: α1(t,ω)≤α2(t,ω).
Then, for all ω∈Ω and t≥0: Y1(t,ω)<Y2(t,ω).
Let ω∈Ω be fixed (we will omit ω in what follows) and denote δ(t):=Y2(t)−Y1(t).
The function δ is differentiable, δ(0)=0 and
$\delta'_{+}(0)=\big(f_2(Y_0)-f_1(Y_0)\big)+\big(\alpha_2(0)-\alpha_1(0)\big)>0.$
It is clear that δ(t)=δ′₊(0)t+o(t), t→0+, so there exists a maximal interval (0,t*)⊂(0,∞) such that δ(t)>0 for all t∈(0,t*). It is also clear that
$t^{*}=\sup\big\{t>0\ \big|\ \forall s\in(0,t):\,\delta(s)>0\big\}.$
Assume that t∗<∞. According to the definition of t∗, δ(t∗)=0. Hence Y1(t∗)=Y2(t∗)=Y∗ and
$\delta'(t^{*})=\big(f_2(Y^{*})-f_1(Y^{*})\big)+\big(\alpha_2(t^{*})-\alpha_1(t^{*})\big)>0.$
As δ(t)=δ′(t*)(t−t*)+o(t−t*), t→t*, there exists ε>0 such that δ(t)<0 for all t∈(t*−ε,t*), which contradicts the definition of t*.
Therefore, for all t>0:
Y1(t)<Y2(t).
□
It is obvious that Lemma 2.1 still holds if we replace the index set [0,+∞) by [a,b], a<b, or if we consider the case Y1(0)<Y2(0).
Moreover, the condition (i) can be replaced by f1(y)≤f2(y). In this case it can be obtained that Y1(t,ω)≤Y2(t,ω) for all ω∈Ω and t≥0.
According to Lemma 2.1, for any ε1>ε2>0 and for all t∈(0,∞):
Yε1(t)<Yε2(t).
Now, let us show that there exists the limit
limε→0Yε(t)=Y(t)<∞.
We will need an auxiliary result, presented in [16].
For allr≥1:E[supt∈[0,T]|BH(t)|r]<∞.
Let a, k, σ>0, H∈(0,1).
For all H∈(0,1), T>0 and r≥1 there are non-random constants C1=C1(T,r,Y0,a,k)>0 and C2=C2(T,r,a,σ)>0 such that for all ε>0 and for all t∈[0,T]: $|Y_\varepsilon(t)|^{r}\le C_1+C_2\sup_{s\in[0,T]}|B^{H}(s)|^{r}$.
Let an arbitrary ω∈Ω, r≥1 and T>0 be fixed (we will omit ω in what follows).
Let us prove that for all ε>0 and for all t∈[0,T]:
$|Y_\varepsilon(t)|^{r}\le\Big((4Y_0)^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}+(8\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)+(8a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}\,ds.$
Consider the moment
$\tau_1(\varepsilon):=\sup\Big\{s\in[0,T]\ \Big|\ \forall u\in[0,s]:\,Y_\varepsilon(u)\ge\frac{Y_0}{2}\Big\}.$
Note that Yε is continuous, so 0<τ1(ε)≤T and, moreover, Yε(t)≥Y0/2>0 for all t∈[0,τ1(ε)].
In order to make the further proof more convenient for the reader, we shall divide it into 3 steps. In Steps 1 and 2 we will separately show that the desired bound holds for all t∈[0,τ1(ε)] and for all t∈(τ1(ε),T], and in Step 3 we will obtain the final result.
Step 1. If r>1, by applying the Hölder inequality to the right-hand side of (8), we obtain that
$|Y_\varepsilon(t)|^{r}\le 4^{r-1}\Big(Y_0^{r}+\Big(\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds\Big)^{r}+a^{r}\Big(\int_0^{t}|Y_\varepsilon(s)|\,ds\Big)^{r}+\sigma^{r}|B^{H}(t)|^{r}\Big).$
Note that this bound is also true for r=1, as in this case it follows directly from the triangle inequality applied to the right-hand side of (8).
For all t∈[0,τ1(ε)]:
$\Big(\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds\Big)^{r}\le\Big(\int_0^{t}\frac{2k}{Y_0}\,ds\Big)^{r}\le\Big(\frac{2kT}{Y_0}\Big)^{r}.$
For r≥1, from Jensen’s inequality,
(∫0t|Yε(s)|ds)r≤tr−1∫0t|Yε(s)|rds≤Tr−1∫0t|Yε(s)|rds.
Finally, for all t∈[0,τ1(ε)]:
|BH(t)|r≤sups∈[0,T]|BH(s)|r.
Hence, for all t∈[0,τ1(ε)]:
$|Y_\varepsilon(t)|^{r}\le 4^{r-1}\Big(Y_0^{r}+\Big(\frac{2kT}{Y_0}\Big)^{r}+a^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}ds+\sigma^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)\le\Big((4Y_0)^{r}+\Big(\frac{8kT}{Y_0}\Big)^{r}+(4\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)+(4a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}ds\le\Big((4Y_0)^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}+(8\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)+(8a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}ds.$
Note that if τ1(ε)=T, the desired bound thus holds for all t∈[0,T].
Step 2. Assume that τ1(ε)<T, i.e. the interval (τ1(ε),T] is non-empty. From the definition of τ1(ε) and the continuity of Yε, Yε(τ1(ε))=Y0/2 and, for all t∈(τ1(ε),T]:
$\big\{s\in(\tau_1(\varepsilon),t]\ \big|\ |Y_\varepsilon(s)|<\tfrac{Y_0}{2}\big\}\ne\emptyset.$
Denote
$\tau_2(\varepsilon,t):=\sup\big\{s\in(\tau_1(\varepsilon),t]\ \big|\ |Y_\varepsilon(s)|<\tfrac{Y_0}{2}\big\}.$
Note that for all t∈(τ1(ε),T]: τ1(ε)<τ2(ε,t)≤t and |Yε(τ2(ε,t))|≤Y0/2, so
$|Y_\varepsilon(t)|^{r}=|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))+Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}\le 2^{r-1}\big(|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}+|Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}\big)\le 2^{r-1}\Big(|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}+\big(\tfrac{Y_0}{2}\big)^{r}\Big)\le 2^{r-1}|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}+Y_0^{r}.$
In addition, if τ2(ε,t)=t, then
|Yε(t)−Yε(τ2(ε,t))|r=0,
and if τ2(ε,t)<t, then
$|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}=\Big|\int_{\tau_2(\varepsilon,t)}^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds-a\int_{\tau_2(\varepsilon,t)}^{t}Y_\varepsilon(s)\,ds+\sigma\big(B^{H}(t)-B^{H}(\tau_2(\varepsilon,t))\big)\Big|^{r}\le\Big(\int_{\tau_2(\varepsilon,t)}^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds+a\int_{\tau_2(\varepsilon,t)}^{t}|Y_\varepsilon(s)|\,ds+\sigma|B^{H}(t)|+\sigma|B^{H}(\tau_2(\varepsilon,t))|\Big)^{r}\le 4^{r-1}\Big[\Big(\int_{\tau_2(\varepsilon,t)}^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds\Big)^{r}+a^{r}\Big(\int_{\tau_2(\varepsilon,t)}^{t}|Y_\varepsilon(s)|\,ds\Big)^{r}+\sigma^{r}|B^{H}(t)|^{r}+\sigma^{r}|B^{H}(\tau_2(\varepsilon,t))|^{r}\Big].$
In this case, from the definition of τ2(ε,t), for all s∈[τ2(ε,t),t]: Yε(s)≥Y0/2, so
$\Big(\int_{\tau_2(\varepsilon,t)}^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds\Big)^{r}\le\Big(\frac{2kT}{Y_0}\Big)^{r}.$
Furthermore, from Jensen’s inequality,
(∫τ2(ε,t)t|Yε(s)|ds)r≤(∫0t|Yε(s)|ds)r≤tr−1∫0t|Yε(s)|rds≤Tr−1∫0t|Yε(s)|rds.
Next,
σr|BH(t)|r+σr|BH(τ2(ε,t))|r≤2σrsups∈[0,T]|BH(s)|r.
Hence,
$|Y_\varepsilon(t)-Y_\varepsilon(\tau_2(\varepsilon,t))|^{r}\le\Big(\frac{8kT}{Y_0}\Big)^{r}+(4a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}\,ds+(4\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}.$
Finally, combining the last two bounds, for all t∈(τ1(ε),T]:
$|Y_\varepsilon(t)|^{r}\le\Big(Y_0^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}+(8\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)+(8a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}ds\le\Big((4Y_0)^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}+(8\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)+(8a)^{r}T^{r-1}\int_0^{t}|Y_\varepsilon(s)|^{r}ds.$
Therefore, the desired bound indeed holds for all t∈[0,T].
Step 3. By applying the Grönwall inequality to this bound, we obtain that for all t∈[0,T]:
$|Y_\varepsilon(t)|^{r}\le\Big((4Y_0)^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}+(8\sigma)^{r}\sup_{s\in[0,T]}|B^{H}(s)|^{r}\Big)e^{(8aT)^{r}}=:C_1+C_2\sup_{s\in[0,T]}|B^{H}(s)|^{r},$
where
$C_1=e^{(8aT)^{r}}\Big((4Y_0)^{r}+\Big(\frac{16kT}{Y_0}\Big)^{r}\Big),\qquad C_2=(8\sigma)^{r}e^{(8aT)^{r}},$
which ends the proof. □
For all T>0 and r≥1 there exists C=C(T,r,Y0,k,a,σ,H)<∞ such that for all ε>0 and t∈[0,T]: E|Yε(t)|^r<C.
The proof immediately follows from Lemma 2.2 and Theorem 2.1. □
Let r≥1, let C1, C2 be the constants from Theorem 2.1, and let Y(t,ω)=lim_{ε→0}Yε(t,ω), t∈[0,T], ω∈Ω.
Then |Y(t)|^r<C1+C2 sup_{s∈[0,T]}|BH(s)|^r.
In particular, Y(t)<∞ a.s.
Let ω∈Ω and t∈(0,T] be fixed (if t=0, the result is trivial).
From Lemma 2.1, if ε1>ε2, then Yε1(t)<Yε2(t); therefore the limit in (18) exists.
The upper bound for |Y(t)|r immediately follows from Theorem 2.1 as the right-hand side of (12) does not depend on ε. The a.s. finiteness of Y follows from the a.s. finiteness of sups∈[0,T]|BH(s)|, which is a direct consequence of Lemma 2.2. □
The process Y is Lebesgue integrable on an arbitrary interval [0,t] a.s.
First, note that the trajectories of Y are measurable as they are the pointwise limits of continuous functions.
Let t∈R+ be fixed. Due to Tonelli’s theorem, for any r≥1 there is such C that
E[|∫0tY(s)ds|r]≤Tr−1E[∫0t|Y(s)|rds]=Tr−1∫0tE|Y(s)|rds≤CTr,
therefore
∫0tY(s)ds<∞a.s.
□
For all t>0:
limε→0∫0tYε(s)ds=∫0tY(s)dsa.s.
It follows immediately from monotonicity of Yε with respect to ε. □
Later it will be shown that Y is Riemann integrable as well. Until that, all integrals should be considered as the Lebesgue integrals.
We will sometimes refer to the limit process Y as to the square root process. It will be shown that it coincides with the square root process presented in Section 1 until the first zero hitting by the latter.
Further, we will consider only finite and integrable paths of Y.
Properties of ε-approximations and the square root process
Now let us prove several properties of both square root process and its ε-approximations.
Let T>0 and let λ be the Lebesgue measure on [0,T]. Then λ{t∈[0,T] | Yε(t)≤0}→0, ε→0, a.s.
Let ω∈Ω be fixed (we will omit ω in what follows).
From the definition of Y and Remark 2.3, for any t∈[0,T] the left-hand side of
$Y_\varepsilon(t)-Y_0+a\int_0^{t}Y_\varepsilon(s)\,ds-\sigma B^{H}(t)=\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds$
converges to
Y(t)−Y0+a∫0tY(s)ds−σBH(t)<∞,
as ε→0. Therefore there exists a limit
$\lim_{\varepsilon\to 0}\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds<\infty.$
Assume that there exist a sequence {εn: n≥1}, εn↓0, and δ>0 such that for all n≥1:
$\lambda\{t\in[0,T]\ |\ Y_{\varepsilon_n}(t)\le 0\}\ge\delta>0.$
In such case, as
$\int_0^{T}\frac{k}{Y_{\varepsilon_n}(t)\mathbf{1}_{\{Y_{\varepsilon_n}(t)>0\}}+\varepsilon_n}\,dt=\int_{\{t\in[0,T]\,|\,Y_{\varepsilon_n}(t)\le 0\}}\frac{k}{Y_{\varepsilon_n}(t)\mathbf{1}_{\{Y_{\varepsilon_n}(t)>0\}}+\varepsilon_n}\,dt+\int_{\{t\in[0,T]\,|\,Y_{\varepsilon_n}(t)>0\}}\frac{k}{Y_{\varepsilon_n}(t)\mathbf{1}_{\{Y_{\varepsilon_n}(t)>0\}}+\varepsilon_n}\,dt\ge\int_{\{t\in[0,T]\,|\,Y_{\varepsilon_n}(t)\le 0\}}\frac{k}{Y_{\varepsilon_n}(t)\mathbf{1}_{\{Y_{\varepsilon_n}(t)>0\}}+\varepsilon_n}\,dt\ge\frac{k\delta}{\varepsilon_n},$
it is clear that
$\int_0^{T}\frac{k}{Y_{\varepsilon_n}(t)\mathbf{1}_{\{Y_{\varepsilon_n}(t)>0\}}+\varepsilon_n}\,dt\to\infty,\qquad n\to\infty,$
which contradicts the existence of the finite limit above. □
For any T>0, Y(t)>0 almost everywhere on [0,T] a.s., and hence Y(t)>0 almost everywhere on R+ a.s.
Let T>0 be arbitrary. Then, for all t∈[0,T]: $\int_0^{t}\frac{k}{Y(s)}\,ds<\infty.$
According to the Fatou lemma,
$\int_0^{t}\frac{k}{Y(s)}\,ds\le\liminf_{\varepsilon\to 0}\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds<\infty.$
□
For the next result, we will require the following well-known property of the fractional Brownian motion (see, for example, [16]).
Let {BH(t),t≥0} be a fractional Brownian motion with Hurst index H. Then there is Ω′⊂Ω, P(Ω′)=1, such that for all ω∈Ω′, T>0, γ>0 and 0≤s≤t≤T there is a positive C=C(ω,T,γ) for which: |BH(t)−BH(s)|≤C|t−s|^{H−γ}.
The process Y={Y(t),t≥0} is non-negative a.s., so {t≥0 | Y(t)≤0}={t≥0 | Y(t)=0} a.s.
Let an arbitrary ω∈Ω′ from Lemma 3.2 be fixed and assume that there is τ>0 such that Y(τ)≤0. Then, for all ε>0:
Yε(τ)<Y(τ)≤0.
Let an arbitrary ε>0 be fixed. Denote
$\tau_{-}(\varepsilon):=\sup\big\{t\in(0,\tau)\ \big|\ Y_\varepsilon(t)>0\big\},\qquad \tau_{+}(\varepsilon):=\inf\big\{t\in(\tau,\infty)\ \big|\ Y_\varepsilon(t)>0\big\}.$
Note that, due to continuity of Yε and Lemma 3.1, 0<τ−(ε)<τ<τ+(ε)<∞.
It is clear that Yε(t)≤0 for all t∈(τ−(ε),τ+(ε)); therefore 1{Yε(t)>0}=0 and, due to Lemma 3.2, there is C>0 such that for all t∈(τ−(ε),τ+(ε)):
$0\ge Y_\varepsilon(t)=Y_\varepsilon(t)-Y_\varepsilon(\tau_{-}(\varepsilon))=\int_{\tau_{-}(\varepsilon)}^{t}\frac{k}{\varepsilon}\,ds-a\int_{\tau_{-}(\varepsilon)}^{t}Y_\varepsilon(s)\,ds+\sigma\big(B^{H}(t)-B^{H}(\tau_{-}(\varepsilon))\big)\ge\frac{k}{\varepsilon}\big(t-\tau_{-}(\varepsilon)\big)-C\sigma\big(t-\tau_{-}(\varepsilon)\big)^{H/2}.$
It is sufficient to prove that
$F(\varepsilon):=\min_{t\in[\tau_{-}(\varepsilon),\tau_{+}(\varepsilon)]}\Big(\frac{k}{\varepsilon}\big(t-\tau_{-}(\varepsilon)\big)-C\sigma\big(t-\tau_{-}(\varepsilon)\big)^{H/2}\Big)\to 0,\qquad\varepsilon\to 0.$
Indeed,
$\frac{\partial}{\partial t}\Big(\frac{k}{\varepsilon}\big(t-\tau_{-}(\varepsilon)\big)-C\sigma\big(t-\tau_{-}(\varepsilon)\big)^{H/2}\Big)=\frac{k}{\varepsilon}-\frac{CH\sigma}{2}\big(t-\tau_{-}(\varepsilon)\big)^{\frac{H-2}{2}}.$
Equating the right-hand side of (21) to 0 and solving the equation with respect to t, we obtain
$t^{*}=\tau_{-}(\varepsilon)+C_1\varepsilon^{\frac{2}{2-H}},$
where $C_1=\big(\frac{CH\sigma}{2k}\big)^{\frac{2}{2-H}}$.
It is easy to check that the second derivative of the function under the minimum in F is positive on (τ−(ε),τ+(ε)), so t* is indeed its point of minimum. Therefore
$F(\varepsilon)=\frac{k}{\varepsilon}\big(t^{*}-\tau_{-}(\varepsilon)\big)-C\sigma\big(t^{*}-\tau_{-}(\varepsilon)\big)^{H/2}=\frac{k}{\varepsilon}C_1\varepsilon^{\frac{2}{2-H}}-C\sigma\big(C_1\varepsilon^{\frac{2}{2-H}}\big)^{H/2}=kC_1\varepsilon^{\frac{H}{2-H}}-CC_1^{H/2}\sigma\varepsilon^{\frac{H}{2-H}}\to 0,\qquad\varepsilon\to 0.$
□
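As a numerical sanity check (our own, with illustrative constants, not values from the paper), the closed form of F obtained in the proof indeed vanishes at the rate ε^{H/(2−H)}:

```python
# F(eps) = (k*C1 - C*C1**(H/2)*sigma) * eps**(H/(2-H)), with
# C1 = (C*H*sigma/(2k))**(2/(2-H)) as in the proof above;
# k, C, sigma, H are illustrative values.
k, C, sigma, H = 1.0, 1.0, 1.0, 0.3
C1 = (C * H * sigma / (2.0 * k)) ** (2.0 / (2.0 - H))

def F(eps):
    return (k * C1 - C * C1 ** (H / 2.0) * sigma) * eps ** (H / (2.0 - H))

vals = [abs(F(10.0 ** (-n))) for n in range(1, 7)]  # eps = 1e-1 .. 1e-6
```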
It is clear that for all t∈[0,T]:
$Y(t)\ge Y_0+\int_0^{t}\frac{k}{Y(s)}\,ds-a\int_0^{t}Y(s)\,ds+\sigma B^{H}_t.$
For any ε*>0 and all t>0: lim_{ε→ε*}Yε(t)=Yε*(t).
Indeed, denote
limε↓ε∗Yε(t)=Z+(t)≤Yε∗(t).
It is clear that for all t≥0, Yε(t)↑Z+(t) as ε↓ε*, so for any t>0:
limε↓ε∗∫0tYε(s)ds=∫0tZ+(s)ds.
Moreover, for all ε>ε*:
$\frac{1}{Y_\varepsilon(t)\mathbf{1}_{\{Y_\varepsilon(t)>0\}}+\varepsilon}\le\frac{1}{\varepsilon^{*}},$
therefore
$\lim_{\varepsilon\downarrow\varepsilon^{*}}\int_0^{t}\frac{ds}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}=\int_0^{t}\frac{ds}{Z_{+}(s)\mathbf{1}_{\{Z_{+}(s)>0\}}+\varepsilon^{*}}$
and hence
$Z_{+}(t)=\lim_{\varepsilon\downarrow\varepsilon^{*}}Y_\varepsilon(t)=Y_0+\lim_{\varepsilon\downarrow\varepsilon^{*}}\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds-a\lim_{\varepsilon\downarrow\varepsilon^{*}}\int_0^{t}Y_\varepsilon(s)\,ds+\sigma B^{H}(t)=Y_0+\int_0^{t}\frac{k}{Z_{+}(s)\mathbf{1}_{\{Z_{+}(s)>0\}}+\varepsilon^{*}}\,ds-a\int_0^{t}Z_{+}(s)\,ds+\sigma B^{H}(t).$
It is known that Yε∗ is the unique solution of the equation above; therefore for all t≥0:
limε↓ε∗Yε(t)=Yε∗(t).
Next, denote
limε↑ε∗Yε(t)=Z−(t)≥Yε∗(t).
For all t≥0, Yε(t)↓Z−(t), ε↑ε∗, so
limε↑ε∗∫0tYε(s)ds=∫0tZ−(s)ds
and for all ε∈(ε*/2, ε*):
$\frac{1}{Y_\varepsilon(t)\mathbf{1}_{\{Y_\varepsilon(t)>0\}}+\varepsilon}\le\frac{2}{\varepsilon^{*}},$
so
$\lim_{\varepsilon\uparrow\varepsilon^{*}}\int_0^{t}\frac{ds}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}=\int_0^{t}\frac{ds}{Z_{-}(s)\mathbf{1}_{\{Z_{-}(s)>0\}}+\varepsilon^{*}}.$
Therefore, similarly to the case of Z+, Z− satisfies the same equation as Yε∗, so
limε↑ε∗Yε(t)=Yε∗(t).
□
Let Y={Y(t),t≥0} be the process defined by formula (18). Then
1) the set {t>0 | Y(t)>0} is open in the natural topology on R;
2) Y is continuous on {t≥0 | Y(t)>0}.
We shall divide the proof into 3 steps.
Step 1. Let ω∈Ω be fixed. Consider an arbitrary t*∈{t>0 | Y(t)>0}. As Yε(t*)→Y(t*), ε→0, there exists ε*>0 such that for all ε<ε*: Yε(t*)>0. From the continuity of Yε with respect to t and their monotonicity with respect to ε, it follows that there exists δ*=δ*(t*)>0 such that
∀ε<ε*, ∀t∈(t*−δ*,t*+δ*): Yε(t)>0.
Hence Y(t)>0 for all t∈(t*−δ*,t*+δ*), and therefore the set {t>0 | Y(t)>0} is open.
Step 2. Let us prove that
supt∈(t∗−δ∗,t∗+δ∗)(Y(t)−Yε(t))→0,ε→0,
and therefore Y is continuous on the interval (t∗−δ∗,t∗+δ∗).
It is enough to prove that for any θ>0 there exists ε0=ε0(θ)>0 such that for all ε<ε0:
supt∈(t∗−δ∗,t∗+δ∗)(Y(t)−Yε(t))≤θ.
Indeed, let us fix an arbitrary θ>0. From the definition of Y it follows that Yε(t*−δ*)→Y(t*−δ*) as ε→0, so there is ε**<ε* such that for all ε<ε** the following inequality holds:
Y(t∗−δ∗)−Yε(t∗−δ∗)<θ.
Denote
ε1:=min{θ,sup{ε∗∗∈(0,ε∗)|∀ε∈(0,ε∗∗):Y(t∗−δ∗)−Yε(t∗−δ∗)<θ}}.
As ε1≤θ, there exists such C∈(0,1] that ε1=Cθ.
From the continuity with respect to ε,
Y(t∗−δ∗)−Yε1(t∗−δ∗)≤θ
and, from the monotonicity with respect to ε, for all ε<ε1:
0≤Yε(t∗−δ∗)−Yε1(t∗−δ∗)≤θ.
It is obvious that Yε(t∗−δ∗)−Yε1(t∗−δ∗)↓0 as ε↑ε1, so let us denote
ε0:=sup{ε∈(0,ε1)|Yε(t∗−δ∗)−Yε1(t∗−δ∗)≥Y(t∗−δ∗)−Yε1(t∗−δ∗)2}.
It is obvious that
Yε0(t∗−δ∗)−Yε1(t∗−δ∗)=Y(t∗−δ∗)−Yε1(t∗−δ∗)2
and therefore
Y(t∗−δ∗)−Yε0(t∗−δ∗)=(Y(t∗−δ∗)−Yε1(t∗−δ∗))−(Yε0(t∗−δ∗)−Yε1(t∗−δ∗))=Y(t∗−δ∗)−Yε1(t∗−δ∗)2≤θ2.
Moreover, for all ε<ε0:
Yε(t∗−δ∗)−Yε0(t∗−δ∗)≤θ2.
Now consider an arbitrary ε<ε0 and assume that there is such τ0∈(t∗−δ∗,t∗+δ∗) that
Yε(τ0)−Yε0(τ0)>θ.
Denote
$\tau:=\inf\big\{t\in(t^{*}-\delta^{*},\tau_0)\ \big|\ \forall s\in(t,\tau_0):\,Y_\varepsilon(s)-Y_{\varepsilon_0}(s)>\theta\big\}<\tau_0.$
From the definition of τ0 and τ, for all t∈(τ,τ0):
Yε(t)−Yε0(t)>θ.
However, since 1{Yε(t)>0}=1 for all t∈(t*−δ*,t*+δ*) and all ε<ε*,
$\big(Y_\varepsilon(\tau)-Y_{\varepsilon_0}(\tau)\big)'=\Big(\frac{k}{Y_\varepsilon(\tau)\mathbf{1}_{\{Y_\varepsilon(\tau)>0\}}+\varepsilon}-\frac{k}{Y_{\varepsilon_0}(\tau)\mathbf{1}_{\{Y_{\varepsilon_0}(\tau)>0\}}+\varepsilon_0}\Big)-a\big(Y_\varepsilon(\tau)-Y_{\varepsilon_0}(\tau)\big)=\Big(\frac{k}{Y_\varepsilon(\tau)+\varepsilon}-\frac{k}{Y_{\varepsilon_0}(\tau)+\varepsilon_0}\Big)-a\big(Y_\varepsilon(\tau)-Y_{\varepsilon_0}(\tau)\big)=\frac{k\big(Y_{\varepsilon_0}(\tau)-Y_\varepsilon(\tau)\big)+k(\varepsilon_0-\varepsilon)}{\big(Y_\varepsilon(\tau)+\varepsilon\big)\big(Y_{\varepsilon_0}(\tau)+\varepsilon_0\big)}-a\big(Y_\varepsilon(\tau)-Y_{\varepsilon_0}(\tau)\big).$
From the continuity of Yε0(t)−Yε(t) with respect to t and definition of τ, it is clear that Yε0(τ)−Yε(τ)=−θ and, as 0<ε<ε0<ε1=Cθ, where C∈(0,1],
k(Yε0(τ)−Yε(τ))+k(ε0−ε)<k(−θ+Cθ)=k(C−1)θ≤0.
Hence, as
Yε(t)−Yε0(t)=θ+(Yε(τ)−Yε0(τ))′(t−τ)+o(t−τ),t→τ,
there exists such interval (τ,τ1)⊂(τ,τ0) that for all t∈(τ,τ1):
Yε(t)−Yε0(t)<θ,
which contradicts (23).
So, for all t∈(t∗−δ∗,t∗+δ∗):
Yε(t)−Yε0(t)≤θ,
and hence, as for all ε<ε0 and t∈R it holds that Yε0(t)<Yε(t)<Y(t),
$\sup_{t\in(t^{*}-\delta^{*},t^{*}+\delta^{*})}\big(Y(t)-Y_\varepsilon(t)\big)\le\theta,\qquad\forall\varepsilon<\varepsilon_0.$
Step 3. In order to prove that
limt→0+Y(t)=Y(0)=Y0,
it is enough to notice that for any ε̃>0 there is t̃>0 such that for all ε<ε̃ and t∈[0,t̃]: Yε(t)>Y0/2.
Hence, for all ε<ε̃ and t∈[0,t̃]:
$\frac{k}{Y_\varepsilon(t)\mathbf{1}_{\{Y_\varepsilon(t)>0\}}+\varepsilon}=\frac{k}{Y_\varepsilon(t)+\varepsilon}\le\frac{k}{Y_\varepsilon(t)}\le\frac{2k}{Y_0}$
and so
$\lim_{\varepsilon\to 0}\int_0^{t}\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\,ds=\int_0^{t}\frac{k}{Y(s)}\,ds,$
hence, for all t∈[0,t˜]:
$Y(t)=Y_0+\int_0^{t}\frac{k}{Y(s)}\,ds-a\int_0^{t}Y(s)\,ds+\sigma B^{H}(t).$
This equation has a unique continuous solution, therefore Y is continuous on [0,t˜]. □
From Theorem 3.1 it is easy to see that the limit square root process Y satisfies an equation of the form (4) until the first moment τ of zero hitting. Indeed, on each compact set [0,t̃]⊂[0,τ), Yε converges to Y uniformly as ε→0 due to Dini's theorem. Hence there is ε̃>0 such that for all ε<ε̃ and t∈[0,t̃]: Yε(t) > min_{s∈[0,t̃]}Y(s)/2 > 0, and, similarly to Step 3 of Theorem 3.1, it can be shown that Y satisfies equation (4) on [0,t̃].
The trajectories of the process Y={Y(t),t≥0} are continuous a.e. on R+ a.s., and therefore are Riemann integrable a.s.
The claim follows directly from Theorem 3.1 and Corollary 3.1. □
The set {t>0 | Y(t)>0} is open in the natural topology on R a.s., so it can a.s. be represented as a finite or countable union of non-intersecting intervals, i.e.
$\{t>0\ |\ Y(t)>0\}=\bigcup_{i=0}^{N}(\alpha_i,\beta_i),\qquad N\in\mathbb{N}\cup\{\infty\},$
where (αi,βi)∩(αj,βj)=∅, i≠j.
Moreover, the set {t≥0 | Y(t)>0} can a.s. be represented as
$\{t\ge 0\ |\ Y(t)>0\}=[\alpha_0,\beta_0)\cup\Big(\bigcup_{i=1}^{N}(\alpha_i,\beta_i)\Big),$
where α0=0, β0 is the first moment of zero hitting by the square root process Y, (αi,βi)∩(αj,βj)=∅, i≠j, and (αi,βi)∩[α0,β0)=∅, i≠0.
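In numerical experiments, the intervals of positiveness from this representation can be recovered directly from a sampled trajectory; the helper below is our own sketch (illustrated on a toy signal with a known sign pattern rather than on Y itself):

```python
import numpy as np

# Extract the maximal intervals {t : y(t) > 0} from a sampled path,
# mimicking the decomposition into non-intersecting intervals (alpha_i, beta_i).
def positive_intervals(t, y):
    intervals, start = [], None
    for ti, positive in zip(t, y > 0):
        if positive and start is None:
            start = ti                      # a new interval of positiveness opens
        elif not positive and start is not None:
            intervals.append((start, ti))   # the interval closes
            start = None
    if start is not None:
        intervals.append((start, t[-1]))
    return intervals

t = np.linspace(0.0, 2.0 * np.pi, 1001)
ivals = positive_intervals(t, np.sin(t))    # sin > 0 exactly on (0, pi)
```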
Let (αi,βi), i≥1, be an arbitrary interval from the representation (25). Then
1) $\lim_{t\to\alpha_i+}Y(t)=0,\quad\lim_{t\to\beta_i-}Y(t)=0$ a.s.;
2) for any t∈[αi,βi]: $Y(t)=\int_{\alpha_i}^{t}\frac{k}{Y(s)}\,ds-a\int_{\alpha_i}^{t}Y(s)\,ds+\sigma\big(B^{H}(t)-B^{H}(\alpha_i)\big)$ a.s.
Let Ω′ be from Lemma 3.2 and an arbitrary ω∈Ω′ be fixed.
1) Proofs for both left and right ends of the segment are similar, so we shall give a proof for the left end only.
Y is positive on (αi,βi), so it is sufficient to prove that lim sup_{t→αi+} Y(t)=0.
Assume that lim sup_{t→αi+} Y(t)=x>0. Then for any δ>0 there exists τ∈(αi,αi+δ) such that Y(τ)∈(3x/4, 5x/4).
Let δ and such τ∈(αi,αi+δ) be fixed. Yε(τ)↑Y(τ) as ε→0, so there is ε=ε(τ) such that Yε(τ)∈(x/2, 5x/4). It is clear that Yε(αi)<0, therefore there is a moment τ1∈(αi,τ) such that
$\tau_1=\sup\big\{t\in(\alpha_i,\tau)\ \big|\ Y_\varepsilon(t)=\tfrac{x}{4}\big\}.$
From the continuity of Yε, Yε(τ1)=x/4, so Yε(τ)−Yε(τ1)>x/4. On the other hand, from the definitions of τ and τ1, for all t∈[τ1,τ]: Yε(t)∈(x/4, 5x/4). That, together with Lemma 3.2, gives:
$\frac{x}{4}<Y_\varepsilon(\tau)-Y_\varepsilon(\tau_1)=\int_{\tau_1}^{\tau}\frac{k}{Y_\varepsilon(s)+\varepsilon}\,ds-a\int_{\tau_1}^{\tau}Y_\varepsilon(s)\,ds+\sigma\big(B^{H}(\tau)-B^{H}(\tau_1)\big)\le\Big(\frac{4k}{x}+\frac{5ax}{4}\Big)(\tau-\tau_1)+C\sigma(\tau-\tau_1)^{H/2},$
i.e. for any δ>0:
$0<\frac{x}{4}\le\Big(\frac{4k}{x}+\frac{5ax}{4}\Big)(\tau-\tau_1)+C\sigma(\tau-\tau_1)^{H/2}\le\Big(\frac{4k}{x}+\frac{5ax}{4}\Big)\delta+C\sigma\delta^{H/2},$
which is impossible for δ small enough.
2) From Theorem 3.1, Y is continuous on each segment [αi*,βi*]⊂(αi,βi), so, due to Dini's theorem, Yε converges uniformly to Y on [αi*,βi*] as ε→0. Moreover, there is δ>0 such that Y(t)>δ for all t∈[αi*,βi*]; therefore it is easy to see that $\frac{k}{Y_\varepsilon(\cdot)\mathbf{1}_{\{Y_\varepsilon(\cdot)>0\}}+\varepsilon}$ converges uniformly to $\frac{k}{Y(\cdot)}$ on [αi*,βi*] as ε→0.
The right-hand side of (27) is right continuous with respect to αi∗ due to the previous clause of proof, i.e.
$\lim_{\alpha_i^{*}\to\alpha_i+}Y(\alpha_i^{*})=Y(\alpha_i);\qquad\lim_{\alpha_i^{*}\to\alpha_i+}\int_{\alpha_i^{*}}^{t}\frac{k}{Y(s)}\,ds=\int_{\alpha_i}^{t}\frac{k}{Y(s)}\,ds;\qquad\lim_{\alpha_i^{*}\to\alpha_i+}\int_{\alpha_i^{*}}^{t}Y(s)\,ds=\int_{\alpha_i}^{t}Y(s)\,ds;\qquad\lim_{\alpha_i^{*}\to\alpha_i+}\big(B^{H}(t)-B^{H}(\alpha_i^{*})\big)=B^{H}(t)-B^{H}(\alpha_i).$
Due to Lemma 3.3, Y(αi)=0, therefore (26) holds for an arbitrary t∈[αi,βi).
To get the result for t=βi, it is sufficient to pass to the limit as t→βi. □
Similarly to Theorem 3.2, it is easy to prove that
limt→β0−Y(t)=Y(β0)=0,
and therefore, taking into account Remark 3.2, for all t∈[0,β0]:
$Y(t)=Y_0+\int_0^{t}\frac{k}{Y(s)}\,ds-a\int_0^{t}Y(s)\,ds+\sigma B^{H}(t).$
The choice of ε-approximations may be different. For example, instead of (8), it is possible to consider the equation of the form
$\tilde{Y}_\varepsilon(t)=Y_0+\int_0^{t}\frac{k}{\max\{\tilde{Y}_\varepsilon(s),\varepsilon\}}\,ds-a\int_0^{t}\tilde{Y}_\varepsilon(s)\,ds+\sigma B^{H}(t).$
By following the proofs of the results above, it can be verified that all of them hold for the resulting limit process Ỹ. Furthermore, if k,a>0, it coincides with Y on [0,+∞).
Indeed, let ω∈Ω be fixed. Consider the difference
$\Delta_\varepsilon(t):=\tilde{Y}_\varepsilon(t)-Y_\varepsilon(t)=\int_0^{t}\Big(\frac{k}{\max\{\tilde{Y}_\varepsilon(s),\varepsilon\}}-\frac{k}{Y_\varepsilon(s)\mathbf{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}\Big)\,ds-a\int_0^{t}\big(\tilde{Y}_\varepsilon(s)-Y_\varepsilon(s)\big)\,ds.$
As $\frac{k}{\max\{x,\varepsilon\}}\ge\frac{k}{x\mathbf{1}_{\{x>0\}}+\varepsilon}$ for all x∈R, it is easy to see from Lemma 2.1 and Remark 2.2 that Δε(t)=Ỹε(t)−Yε(t)≥0. Furthermore, Δε is differentiable on (0,+∞) and Δε(0)=0.
Assume that there is τ>0 such that Δε(τ)≥2ε, and denote
$\tau_\varepsilon:=\inf\big\{t\in(0,\tau)\ \big|\ \forall s\in(t,\tau]:\,\Delta_\varepsilon(s)>\varepsilon\big\}.$
Note that, due to the continuity of Ỹε and Yε, Δε(τε)=ε and therefore for all t∈(τε,τ):
Δε(t)−Δε(τε)>0,
so, as Δε is differentiable in τε,
Δε′(τε)=(Δε)+′(τε)=limt→τε+Δε(t)−Δε(τε)t−τε≥0.
However, as
$\max\{\tilde{Y}_\varepsilon(\tau_\varepsilon),\varepsilon\}=\max\{Y_\varepsilon(\tau_\varepsilon)+\varepsilon,\varepsilon\}=\max\{Y_\varepsilon(\tau_\varepsilon),0\}+\varepsilon=Y_\varepsilon(\tau_\varepsilon)\mathbf{1}_{\{Y_\varepsilon(\tau_\varepsilon)>0\}}+\varepsilon,$
it is easy to see that
$\Delta'_\varepsilon(\tau_\varepsilon)=\frac{k}{\max\{\tilde{Y}_\varepsilon(\tau_\varepsilon),\varepsilon\}}-\frac{k}{Y_\varepsilon(\tau_\varepsilon)\mathbf{1}_{\{Y_\varepsilon(\tau_\varepsilon)>0\}}+\varepsilon}-a\big(\tilde{Y}_\varepsilon(\tau_\varepsilon)-Y_\varepsilon(\tau_\varepsilon)\big)=-a\varepsilon<0.$
The obtained contradiction shows that Ỹε(t)−Yε(t)<2ε for all t≥0; therefore, letting ε→0, it is easy to verify that Ỹ(t)=Y(t) for all t≥0.
Simulations illustrate that the processes indeed coincide (see Fig. 1). Euler approximations of Yε and Yε˜ on [0,5] were used with ε=0.01, H=0.3, Y0=a=k=σ=1. The mesh of the partition was Δt=0.0001.
Fig. 1. Comparison of Yε (black) and Ỹε (red), ε=0.01. The two paths are so close to each other that they cannot be distinguished in the figure
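This comparison can be reproduced with a short numerical sketch (Python with NumPy; the helpers `fbm_increments` and `euler` are our own illustrative names, and the grid here is much coarser than the Δt=0.0001 used above, so the Cholesky sampling of the fractional Brownian motion stays feasible):

```python
import numpy as np

def fbm_increments(n, dt, H, rng):
    """Sample increments of B^H on a uniform grid via the Cholesky factor
    of the covariance matrix (exact law, but O(n^3), hence the coarse grid)."""
    t = dt * np.arange(1, n + 1)
    # E[B^H_t B^H_s] = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    B = L @ rng.standard_normal(n)
    return np.diff(np.concatenate([[0.0], B]))

def euler(drift, Y0, dB, dt):
    """Explicit Euler scheme Y_{i+1} = Y_i + drift(Y_i) dt + dB_i,
    where the noise increments dB are supplied already scaled by sigma."""
    Y = np.empty(len(dB) + 1)
    Y[0] = Y0
    for i, db in enumerate(dB):
        Y[i + 1] = Y[i] + drift(Y[i]) * dt + db
    return Y

# Illustrative parameters (coarser grid and larger eps than in the paper)
eps, H, Y0, a, k, sigma = 0.05, 0.3, 1.0, 1.0, 1.0, 1.0
T, n = 1.0, 1000
dt = T / n
rng = np.random.default_rng(42)
dB = sigma * fbm_increments(n, dt, H, rng)

# drift of Y_eps from (8):          k / (y 1_{y>0} + eps) - a y
Y_eps = euler(lambda y: k / (y * (y > 0) + eps) - a * y, Y0, dB, dt)
# drift of the alternative tilde Y: k / max(y, eps) - a y
Yt_eps = euler(lambda y: k / max(y, eps) - a * y, Y0, dB, dt)

print("max |difference|:", np.max(np.abs(Yt_eps - Y_eps)))
```

In our runs the maximal difference stays of order ε, in line with the bound Ỹε−Yε<2ε obtained above (up to discretization error of the Euler scheme).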
Equation for the square root process Y for t≥0
The equation for Y(t) in case of t∈[0,β0] has already been obtained in Remark 3.3. In order to get the equation for an arbitrary t∈R+, consider the following procedure.
Let I be the set of all non-intersecting open intervals from representation (25), i.e.
I={(αi,βi),i=1,…,N},N∈N∪{∞}.
Consider an arbitrary interval (α,β)∈I and assume that t∈[α,β]. Then, from Theorem 3.2,
\[
Y(t)=\int_{\alpha}^{t}\frac{k}{Y(s)}\,ds-a\int_{\alpha}^{t}Y(s)\,ds+\sigma\big(B^{H}(t)-B^{H}(\alpha)\big).
\]
Consider all intervals (α˜j,β˜j)∈I, j=1,…,M, M∈N∪{∞}, such that α˜j<α.
For each j=1,…,M,
\[
0=Y(\tilde{\beta}_j)=\int_{\tilde{\alpha}_j}^{\tilde{\beta}_j}\frac{k}{Y(s)}\,ds-a\int_{\tilde{\alpha}_j}^{\tilde{\beta}_j}Y(s)\,ds+\sigma\big(B^{H}(\tilde{\beta}_j)-B^{H}(\tilde{\alpha}_j)\big).
\]
Moreover,
\[
0=Y(\beta_0)=Y_0+\int_{0}^{\beta_0}\frac{k}{Y(s)}\,ds-a\int_{0}^{\beta_0}Y(s)\,ds+\sigma B^{H}(\beta_0).
\]
Therefore
\[
\begin{aligned}
Y(t)&=Y(\beta_0)+\sum_{j=1}^{M}Y(\tilde{\beta}_j)+Y(t)\\
&=Y_0+\Bigg(\int_{0}^{\beta_0}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\sum_{j=1}^{M}\int_{\tilde{\alpha}_j}^{\tilde{\beta}_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)\\
&\quad+\sigma\Bigg(B^{H}(\beta_0)+\sum_{j=1}^{M}\big(B^{H}(\tilde{\beta}_j)-B^{H}(\tilde{\alpha}_j)\big)+\big(B^{H}(t)-B^{H}(\alpha)\big)\Bigg)\\
&=Y_0+\int_{[0,\beta_0)\cup\big(\bigcup_{j=1}^{M}(\tilde{\alpha}_j,\tilde{\beta}_j)\big)\cup[\alpha,t)}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\\
&\quad+\sigma\Bigg(B^{H}(\beta_0)+\sum_{j=1}^{M}\big(B^{H}(\tilde{\beta}_j)-B^{H}(\tilde{\alpha}_j)\big)+\big(B^{H}(t)-B^{H}(\alpha)\big)\Bigg)\\
&=Y_0+\int_{0}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\sigma\Bigg(B^{H}(\beta_0)+\sum_{j=1}^{M}\big(B^{H}(\tilde{\beta}_j)-B^{H}(\tilde{\alpha}_j)\big)+\big(B^{H}(t)-B^{H}(\alpha)\big)\Bigg).
\end{aligned}
\]
The question whether Y satisfies an equation of the form (28) on R+ remains open due to the obscure structure of its zero set; the simulation studies below, however, do not contradict this conjecture.
Indeed, consider the process Z={Z(t),t≥0} that satisfies the SDE
\[
Z(t)=Y_0-a\int_{0}^{t}Z(s)\,ds+\int_{0}^{t}\frac{k}{Y(s)}\,ds+\sigma B^{H}(t),
\]
where Y is, as usual, the pointwise limit of Yε, and Y0, k, a, σ are positive constants. By following the proof of Proposition A.1 in [3] step by step, it can be shown that
\[
Z(t)=e^{-at}\bigg(Y_0+\int_{0}^{t}\frac{k}{Y(s)}\,e^{as}\,ds+\sigma\int_{0}^{t}e^{as}\,dB^{H}(s)\bigg).
\]
Now assume that Y satisfies an equation of the form (28) for all t≥0. Then Y admits a representation of the form (29), i.e.
\[
Y(t)=e^{-at}\bigg(Y_0+\int_{0}^{t}\frac{k}{Y(s)}\,e^{as}\,ds+\sigma\int_{0}^{t}e^{as}\,dB^{H}(s)\bigg),
\]
so the paths of Y and paths obtained by direct simulation of the right-hand side of (30) must coincide.
In other words, if we simulated the trajectory of Y, applied the transform given by the right-hand side of (30) to it, and obtained a trajectory that differs significantly from the initial one, this would be evidence that Y does not satisfy an equation of the form (28) for all t≥0.
To simulate the left-hand side of (30), the Euler approximations of the process Yε with ε=0.01 are used. They are compared to the right-hand side of (30) simulated explicitly using the same Yε. The mesh of the partition is Δt=0.0001, T=5, H=0.4, Y0=k=a=σ=1 (see Fig. 2).
Fig. 2. Comparison of the right-hand and left-hand sides of (30)
As we see, the Euler approximation of Y (i.e. of Yε with small ε, black) indeed almost coincides with its transform given by the right-hand side of (30) (red), so no contradiction is observed.
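The check can be sketched as follows (Python with NumPy; the helper `fbm_path` and all grid parameters are illustrative choices, coarser than in the paper). Since Yε solves an equation that is linear in Yε with inhomogeneity k/(Yε1{Yε>0}+ε), the variation-of-constants identity behind (29) should hold for the simulated path exactly up to discretization error:

```python
import numpy as np

def fbm_path(n, dt, H, rng):
    """B^H sampled at 0, dt, ..., n*dt via the Cholesky factor of its
    covariance matrix (exact law, feasible only on coarse grids)."""
    t = dt * np.arange(1, n + 1)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

eps, H, Y0, a, k, sigma = 0.05, 0.4, 1.0, 1.0, 1.0, 1.0
T, n = 1.0, 1000
dt = T / n
t = dt * np.arange(n + 1)
rng = np.random.default_rng(7)
B = fbm_path(n, dt, H, rng)

# Euler scheme for Y_eps: the left-hand side of (30), with Y replaced
# by its eps-approximation.
Y = np.empty(n + 1)
Y[0] = Y0
for i in range(n):
    drift = k / (Y[i] * (Y[i] > 0) + eps) - a * Y[i]
    Y[i + 1] = Y[i] + drift * dt + sigma * (B[i + 1] - B[i])

# Right-hand side of (30): e^{-at}(Y0 + int k e^{as}/Y ds + sigma int e^{as} dB^H),
# with 1/Y regularized as 1/(Y 1_{Y>0} + eps) and both integrals
# discretized as left-point Riemann(-Stieltjes) sums.
eas = np.exp(a * t)
drift_int = np.concatenate(
    [[0.0], np.cumsum(eas[:-1] * k / (Y[:-1] * (Y[:-1] > 0) + eps) * dt)])
stoch_int = np.concatenate([[0.0], np.cumsum(eas[:-1] * np.diff(B))])
Z = np.exp(-a * t) * (Y0 + drift_int + sigma * stoch_int)

print("max |Y - Z|:", np.max(np.abs(Y - Z)))
```

The two discretizations differ only in replacing 1−a·Δt by e^{-a·Δt} per step, so the reported discrepancy is small and shrinks with the mesh, mirroring the agreement seen in Fig. 2.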
Fractional Cox–Ingersoll–Ross process on R+
Consider a family of random processes {Yε, ε>0} which satisfy the equations of the form
\[
Y_\varepsilon(t)=Y_0+\frac{1}{2}\int_{0}^{t}\bigg(\frac{k}{Y_\varepsilon(s)\mathbb{1}_{\{Y_\varepsilon(s)>0\}}+\varepsilon}-aY_\varepsilon(s)\bigg)ds+\frac{\sigma}{2}B^{H}(t),
\]
driven by the same fractional Brownian motion, with the same parameters k, a, σ>0 and the same starting point Y0>0.
Let the process Y={Y(t),t≥0} be such that for all t≥0:
Y(t)=limε→0Yε(t).
The fractional Cox–Ingersoll–Ross process is the process X={X(t),t≥0}, such that
X(t)=Y2(t),∀t≥0.
Let us show that this definition is indeed a generalization of the original Definition 1.1.
First, we will require the following definition.
Let {Ut,t≥0}, {Vt,t≥0} be random processes. The pathwise Stratonovich integral ∫0T Us∘dVs is the pathwise limit of the sums
\[
\sum_{k=1}^{n}\frac{U_{t_k}+U_{t_{k-1}}}{2}\big(V_{t_k}-V_{t_{k-1}}\big),
\]
as the mesh of the partition 0=t0<t1<t2<⋯<tn−1<tn=T tends to zero, provided that this limit exists.
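As a sanity check of this definition, note that for U=V the sums telescope exactly to (V(T)²−V(0)²)/2 for any path, which is the Stratonovich chain rule. A minimal numerical sketch (Python with NumPy; `stratonovich_sum` is our own illustrative helper):

```python
import numpy as np

def stratonovich_sum(U, V):
    """Trapezoid-type sum  sum_k (U_{t_k} + U_{t_{k-1}})/2 * (V_{t_k} - V_{t_{k-1}})
    over sample paths given at the partition points; the pathwise Stratonovich
    integral is its limit as the mesh tends to zero (when the limit exists)."""
    U = np.asarray(U, dtype=float)
    V = np.asarray(V, dtype=float)
    return np.sum(0.5 * (U[1:] + U[:-1]) * np.diff(V))

# With U = V the sum telescopes: sum (V_k^2 - V_{k-1}^2)/2 = (V_n^2 - V_0^2)/2,
# exactly, for any sample path, however rough.
rng = np.random.default_rng(0)
V = np.cumsum(rng.standard_normal(10_000)) * 0.01  # a rough sample path
lhs = stratonovich_sum(V, V)
rhs = 0.5 * (V[-1] ** 2 - V[0] ** 2)
print(abs(lhs - rhs))  # zero up to floating-point rounding
```

This exactness of the telescoping case is what makes the Stratonovich (trapezoid) sums the natural choice here, in contrast to forward (Itô-type) sums, which do not obey the ordinary chain rule.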
Taking into account the results of the previous sections together with Theorem 1 in [18], for all t∈[0,β0]:
\[
X(t)=X_0+\int_{0}^{t}\big(k-aX(s)\big)ds+\sigma\int_{0}^{t}\sqrt{X(s)}\circ dB^{H}(s)\quad \text{a.s.}
\]
A similar result holds for all t∈[αi,βi], i≥1.
Let (αi,βi), i≥1, be an arbitrary interval from the representation (25). Consider the fractional Cox–Ingersoll–Ross process X={X(t), t∈[αi,βi]} from Definition 5.1.
Then, for αi≤t≤βi, the process X a.s. satisfies the SDE
\[
dX(t)=\big(k-aX(t)\big)\,dt+\sigma\sqrt{X(t)}\circ dB^{H}(t),\qquad X(\alpha_i)=0,
\]
where the integral with respect to the fractional Brownian motion is defined as the pathwise Stratonovich integral.
We shall follow the proof of Theorem 1 from [18].
Let us fix an ω∈Ω and consider an arbitrary t∈[αi,βi].
It is clear that
\[
X(t)=Y^{2}(t)=\Bigg(\frac{1}{2}\int_{\alpha_i}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\frac{\sigma}{2}B^{H}(t)-\frac{\sigma}{2}B^{H}(\alpha_i)\Bigg)^{2}.
\]
Consider an arbitrary partition of the interval [αi,t]:
αi=t0<t1<t2<⋯<tn−1<tn=t.
Taking into account that X(αi)=0 and using (5), we obtain
\[
\begin{aligned}
X(t)&=\sum_{j=1}^{n}\big(X(t_j)-X(t_{j-1})\big)\\
&=\sum_{j=1}^{n}\Bigg(\bigg[\frac{1}{2}\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\frac{\sigma}{2}B^{H}(t_j)-\frac{\sigma}{2}B^{H}(\alpha_i)\bigg]^{2}\\
&\qquad-\bigg[\frac{1}{2}\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\frac{\sigma}{2}B^{H}(t_{j-1})-\frac{\sigma}{2}B^{H}(\alpha_i)\bigg]^{2}\Bigg).
\end{aligned}
\]
Factoring each summand as the difference of squares, we get:
\[
\begin{aligned}
X(t)=\sum_{j=1}^{n}&\Bigg[\frac{1}{2}\Bigg(\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)+\frac{\sigma}{2}\big(B^{H}(t_j)+B^{H}(t_{j-1})\big)-\sigma B^{H}(\alpha_i)\Bigg]\\
&\times\Bigg[\frac{1}{2}\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\frac{\sigma}{2}\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\Bigg].
\end{aligned}
\]
Expanding the brackets in the last expression, we obtain:
\[
\begin{aligned}
X(t)&=\frac{1}{4}\sum_{j=1}^{n}\Bigg(\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\\
&\quad+\frac{\sigma}{4}\sum_{j=1}^{n}\big(B^{H}(t_j)+B^{H}(t_{j-1})\big)\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds-\frac{\sigma}{2}B^{H}(\alpha_i)\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\\
&\quad+\frac{\sigma}{4}\sum_{j=1}^{n}\Bigg(\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\\
&\quad+\frac{\sigma^{2}}{4}\sum_{j=1}^{n}\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\big(B^{H}(t_j)+B^{H}(t_{j-1})\big)-\frac{\sigma^{2}}{2}B^{H}(\alpha_i)\sum_{j=1}^{n}\big(B^{H}(t_j)-B^{H}(t_{j-1})\big).
\end{aligned}
\]
Let the mesh Δt of the partition tend to zero. The first three summands
\[
\begin{aligned}
&\frac{1}{4}\sum_{j=1}^{n}\Bigg(\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\\
&\qquad+\frac{\sigma}{4}\sum_{j=1}^{n}\big(B^{H}(t_j)+B^{H}(t_{j-1})\big)\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds-\frac{\sigma}{2}B^{H}(\alpha_i)\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\\
&\quad\longrightarrow\int_{\alpha_i}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)\Bigg(\frac{1}{2}\int_{\alpha_i}^{s}\Big(\frac{k}{Y(u)}-aY(u)\Big)du+\frac{\sigma}{2}B^{H}(s)-\frac{\sigma}{2}B^{H}(\alpha_i)\Bigg)ds\\
&\quad=\int_{\alpha_i}^{t}\Big(\frac{k}{Y(s)}-aY(s)\Big)Y(s)\,ds=\int_{\alpha_i}^{t}\big(k-aX(s)\big)ds,\qquad \Delta t\to0,
\end{aligned}
\]
and the last three summands
\[
\begin{aligned}
&\frac{\sigma}{4}\sum_{j=1}^{n}\Bigg(\int_{\alpha_i}^{t_j}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds+\int_{\alpha_i}^{t_{j-1}}\Big(\frac{k}{Y(s)}-aY(s)\Big)ds\Bigg)\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\\
&\qquad+\frac{\sigma^{2}}{4}\sum_{j=1}^{n}\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\big(B^{H}(t_j)+B^{H}(t_{j-1})\big)-\frac{\sigma^{2}}{2}B^{H}(\alpha_i)\sum_{j=1}^{n}\big(B^{H}(t_j)-B^{H}(t_{j-1})\big)\\
&\quad\longrightarrow\sigma\int_{\alpha_i}^{t}\Bigg(\frac{1}{2}\int_{\alpha_i}^{s}\Big(\frac{k}{Y(u)}-aY(u)\Big)du+\frac{\sigma}{2}B^{H}(s)-\frac{\sigma}{2}B^{H}(\alpha_i)\Bigg)\circ dB^{H}(s)\\
&\quad=\sigma\int_{\alpha_i}^{t}Y(s)\circ dB^{H}(s)=\sigma\int_{\alpha_i}^{t}\sqrt{X(s)}\circ dB^{H}(s),\qquad \Delta t\to0.
\end{aligned}
\]
Note that the left-hand side of (5) does not depend on the partition, and the limit in (5) exists as a pathwise Riemann integral; therefore the corresponding pathwise Stratonovich integral exists and the passage to the limit in (5) is justified.
Thus, the process X satisfies the SDE of the form
\[
X(t)=\int_{\alpha_i}^{t}\big(k-aX(s)\big)ds+\sigma\int_{\alpha_i}^{t}\sqrt{X(s)}\circ dB^{H}(s),\qquad t\in[\alpha_i,\beta_i],
\]
where ∫αit√X(s)∘dBH(s) is a pathwise Stratonovich integral. □
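The identity of Theorem 5.1 can be checked numerically along a single path (a sketch in Python with NumPy; the helper `fbm_path` and all parameters are illustrative choices, and k is taken fairly large so that the simulated Y is expected to stay positive, in which case √X(s)=Y(s) and the comparison is meaningful on the whole interval):

```python
import numpy as np

def fbm_path(n, dt, H, rng):
    """B^H sampled at 0, dt, ..., n*dt via the Cholesky factor of its
    covariance matrix (exact law, feasible only on coarse grids)."""
    t = dt * np.arange(1, n + 1)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

# Illustrative parameters; larger k keeps the path away from zero.
eps, H, Y0, a, k, sigma = 0.01, 0.4, 1.0, 1.0, 2.0, 1.0
T, n = 1.0, 1000
dt = T / n
rng = np.random.default_rng(1)
B = fbm_path(n, dt, H, rng)

# Euler scheme for the eps-approximation of Definition 5.1 (note the
# factors 1/2 and sigma/2), then X = Y^2.
Y = np.empty(n + 1)
Y[0] = Y0
for i in range(n):
    drift = 0.5 * (k / (Y[i] * (Y[i] > 0) + eps) - a * Y[i])
    Y[i + 1] = Y[i] + drift * dt + 0.5 * sigma * (B[i + 1] - B[i])
X = Y ** 2

# Right-hand side of the SDE for X: the Riemann integral of (k - aX) as a
# left-point sum, plus sigma times the Stratonovich integral of sqrt(X) = Y
# discretized as the trapezoid sum from Definition 5.2.
riemann = np.cumsum((k - a * X[:-1]) * dt)
strat = np.cumsum(0.5 * (Y[1:] + Y[:-1]) * np.diff(B))
rhs = X[0] + np.concatenate([[0.0], riemann + sigma * strat])

print("max |X - rhs|:", np.max(np.abs(X - rhs)))
```

The stochastic parts of the two sides match exactly by the difference-of-squares factorization used in the proof; the residual discrepancy comes from the left-point drift sum and the ε-regularization, and shrinks as ε and the mesh decrease.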
Finally, similarly to Section 4, the next result follows from Theorem 5.1.
Consider an arbitrary (α,β)∈I, where I is the set of all open intervals from representation (25) (see Section 4). Let β0 be the first moment of hitting zero by Y, and let (α˜j,β˜j)∈I, j=1,…,M, M∈N∪{∞}, be all intervals such that α˜j<α.
We are deeply grateful to anonymous referees whose valuable comments helped us to improve the manuscript significantly.
References
[1] Anh, V., Inoue, A.: Financial markets with memory I: Dynamic models. 23(2), 275–300 (2005). MR2130350. https://doi.org/10.1081/SAP-200050096
[2] Bollerslev, T., Mikkelsen, H.O.: Modelling and pricing long memory in stock market volatility. 73(1), 151–184 (2005)
[3] Cheridito, P., Kawaguchi, H., Maejima, M.: Fractional Ornstein–Uhlenbeck processes. 8(1), 1–14 (2003). MR1961165. https://doi.org/10.1214/EJP.v8-125
[4] Cox, J.C., Ingersoll, J.E., Ross, S.A.: A re-examination of traditional hypotheses about the term structure of interest rates. 36, 769–799 (1981)
[5] Cox, J.C., Ingersoll, J.E., Ross, S.A.: An intertemporal general equilibrium model of asset prices. 53(1), 363–384 (1985). MR0785474. https://doi.org/10.2307/1911241
[6] Cox, J.C., Ingersoll, J.E., Ross, S.A.: A theory of the term structure of interest rates. 53(2), 385–408 (1985). MR0785475. https://doi.org/10.2307/1911242
[7] Ding, Z., Granger, C.W., Engle, R.F.: A long memory property of stock market returns and a new model. 1(1), 83–106 (1993)
[8] Euch, O., Rosenbaum, M.: The characteristic function of rough Heston models. arXiv:1609.02108. https://arxiv.org/pdf/1609.02108.pdf. Accessed 18 Aug 2018
[9] Feller, W.: Two singular diffusion problems. 54, 173–182 (1951). MR0054814. https://doi.org/10.2307/1969318
[10] Heston, S.L.: A closed-form solution for options with stochastic volatility with applications to bond and currency options. 6(2), 327–343 (1993)
[11] Kuchuk-Iatsenko, S., Mishura, Y.: Pricing the European call option in the model with stochastic volatility driven by Ornstein–Uhlenbeck process. Exact formulas. 2(3), 233–249 (2015). MR3407504. https://doi.org/10.15559/15-VMSTA36CNF
[12] Kuchuk-Iatsenko, S., Mishura, Y., Munchak, Y.: Application of Malliavin calculus to exact and approximate option pricing under stochastic volatility. 94, 93–115 (2016). MR3553457. https://doi.org/10.1090/tpms/1012
[13] Leonenko, N., Meerschaert, M., Sikorskii, A.: Correlation structure of fractional Pearson diffusion. 66(5), 737–745 (2013). MR3089382. https://doi.org/10.1016/j.camwa.2013.01.009
[14] Leonenko, N., Meerschaert, M., Sikorskii, A.: Fractional Pearson diffusion. 403(2), 532–546 (2013). MR3037487. https://doi.org/10.1016/j.jmaa.2013.02.046
[15] Marie, N.: A generalized mean-reverting equation and applications. 18, 799–828 (2014). MR3334015. https://doi.org/10.1051/ps/2014002
[16] Mishura, Y.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Springer, Berlin (2008). MR2378138. https://doi.org/10.1007/978-3-540-75873-0
[17] Mishura, Y., Piterbarg, V., Ralchenko, K., Yurchenko-Tytarenko, A.: Stochastic representation and pathwise properties of fractional Cox–Ingersoll–Ross process (in Ukrainian). 97, 157–170 (2017). Available in English at: https://arxiv.org/pdf/1708.02712.pdf. MR3746006
[18] Mishura, Y., Yurchenko-Tytarenko, A.: Fractional Cox–Ingersoll–Ross process with non-zero "mean". 5(1), 99–111 (2018). MR3784040. https://doi.org/10.15559/18-vmsta97
[19] Mukeru, S.: The zero set of fractional Brownian motion is a Salem set. 24(4), 957–999 (2018). MR3843846. https://doi.org/10.1007/s00041-017-9551-9
[20] Nualart, D., Ouknine, Y.: Regularization of differential equations by fractional noise. 102, 103–116 (2002). MR1934157. https://doi.org/10.1016/S0304-4149(02)00155-2
[21] Yamasaki, K., Muchnik, L., Havlin, S., Bunde, A., Stanley, H.E.: Scaling and memory in volatility return intervals in financial markets. 102(26), 9424–9428 (2005)