Approximation of solutions of SDEs driven by a fractional Brownian motion, under pathwise uniqueness

Our aim in this paper is to establish some strong stability properties of solutions of stochastic differential equations driven by a fractional Brownian motion for which pathwise uniqueness holds. The results are obtained using Skorokhod's selection theorem.


Introduction
Consider a fractional Brownian motion (fBm), a self-similar Gaussian process with stationary increments. It was introduced by Kolmogorov [5] and studied by Mandelbrot and Van Ness [6]. The fBm with Hurst parameter H ∈ (0, 1) is a centered Gaussian process B^H = {B^H_t, t ≥ 0} with covariance function

R_H(t, s) = E[B^H_t B^H_s] = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).

If H = 1/2, then the process B^{1/2} is a standard Brownian motion. When H ≠ 1/2, B^H is neither a semimartingale nor a Markov process, so that many of the techniques employed in stochastic analysis are not available for an fBm. The self-similarity and the stationarity of the increments make the fBm an appropriate model for many applications in diverse fields, from biology to finance. We refer to [7] for details on these notions.
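As an illustrative aside (not part of the original development; the function names are ours), the covariance above determines the law of the fBm on any finite grid, so a path can be sampled exactly via a Cholesky factorization of the covariance matrix:

```python
import numpy as np

def fbm_covariance(times, H):
    """Covariance matrix R_H(t, s) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2."""
    t = np.asarray(times, dtype=float)
    return 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                  - np.abs(t[:, None] - t[None, :]) ** (2 * H))

def sample_fbm(n, T, H, rng):
    """Exact sample of (B^H_{t_1}, ..., B^H_{t_n}) on a uniform grid via
    the Cholesky factor of the covariance matrix (t = 0 is excluded,
    since B^H_0 = 0 would make the matrix singular)."""
    times = np.linspace(T / n, T, n)
    L = np.linalg.cholesky(fbm_covariance(times, H))
    return times, L @ rng.standard_normal(n)
```

For H = 1/2 the covariance reduces to min(t, s), recovering standard Brownian motion.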
Consider the following stochastic differential equation (SDE):

X_t = x + ∫_0^t b(s, X_s) ds + B^H_t,  t ∈ [0, T],    (1)

where b : [0, T] × R^d → R^d is a measurable function, and B^H is a d-dimensional fBm with Hurst parameter H < 1/2 whose components are independent one-dimensional fBms defined on a probability space (Ω, F, {F_t}_{t∈[0,T]}, P), where the filtration {F_t}_{t∈[0,T]} is generated by B^H_t, t ∈ [0, T], augmented by the P-null sets. It has been proved in [2] that if b satisfies a suitable assumption and H < 1/(2(3d − 1)), then Eq. (1) has a unique strong solution; this will be assumed throughout this paper.
Notice that if the drift coefficient is Lipschitz continuous, then Eq. (1) has a unique strong solution, which is continuous with respect to the initial condition. Moreover, the solution can be constructed using various numerical schemes.
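For equations of the additive-noise form (1), the simplest such scheme is drift-Euler: only the drift is discretized, while the noise enters through its increments. A minimal sketch (ours; the function names and grid handling are illustrative assumptions, not the paper's construction):

```python
def euler_additive(x0, b, t_grid, noise):
    """Euler scheme for X_t = x0 + int_0^t b(s, X_s) ds + B^H_t:
    X_{k+1} = X_k + b(t_k, X_k) * dt + (B_{k+1} - B_k).

    t_grid[0] must be 0 and noise[0] must be 0 (since B^H_0 = 0);
    the noise is additive, so only its increments are used.
    """
    x = [float(x0)]
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        dB = noise[k + 1] - noise[k]
        x.append(x[-1] + b(t_grid[k], x[-1]) * dt + dB)
    return x
```

With zero drift the scheme returns x0 plus the noise path exactly, and with constant drift and zero noise it returns the straight line x0 + t, which makes it easy to sanity-check.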
Our purpose in this paper is to establish some stability results under the pathwise uniqueness of solutions and under weak regularity conditions on the drift coefficient b. We mention that an important result in this direction was established in [1], where the fBm is replaced by a standard Brownian motion.
The paper is organized as follows. In Section 2, we introduce some properties, notation, definitions, and preliminary results. Section 3 is devoted to the study of the variation of solution with respect to the initial data. In the last section, we drop the continuity assumption on the drift and try to obtain the same result as in Section 3.

Preliminaries
In this section, we give some properties of an fBm, definitions, and some tools used in the proofs.
For any H < 1/2, let us define the square-integrable kernel

K_H(t, s) = c_H [ (t/s)^{H−1/2} (t − s)^{H−1/2} − (H − 1/2) s^{1/2−H} ∫_s^t u^{H−3/2} (u − s)^{H−1/2} du ],  s < t,

where c_H is a normalizing constant depending only on H. We denote by ζ the set of step functions on [0, T]. Let H be the Hilbert space defined as the closure of ζ with respect to the scalar product

⟨1_{[0,t]}, 1_{[0,s]}⟩_H = R_H(t, s).

The mapping 1_{[0,t]} → B^H_t can be extended to an isometry between H and the Gaussian subspace of L²(Ω) associated with B^H, and such an isometry is denoted by ϕ → B^H(ϕ). Now we introduce the linear operator K*_H from ζ to L²([0, T]) defined by

(K*_H ϕ)(s) = K_H(T, s) ϕ(s) + ∫_s^T (ϕ(t) − ϕ(s)) (∂K_H/∂t)(t, s) dt.

The operator K*_H is an isometry between ζ and L²([0, T]), which can be extended to the Hilbert space H.
Consider the process W = {W_t, t ∈ [0, T]} defined by W_t = B^H((K*_H)^{−1}(1_{[0,t]})). Then W is a Brownian motion; moreover, B^H has the integral representation

B^H_t = ∫_0^t K_H(t, s) dW_s.

We also need to define an isomorphism K_H from L²([0, T]) onto I^{H+1/2}_{0+}(L²) associated with the kernel K_H(t, s), written in terms of fractional integrals as follows:

(K_H h)(s) = I^{2H}_{0+} ( s^{1/2−H} I^{1/2−H}_{0+} ( s^{H−1/2} h ) ),  h ∈ L²([0, T]).

Here, for ϕ ∈ L²([0, T]), I^α_{0+} is the left fractional Riemann–Liouville integral operator of order α defined by

(I^α_{0+} ϕ)(t) = (1/Γ(α)) ∫_0^t (t − s)^{α−1} ϕ(s) ds,

where Γ is the gamma function (see [3] for details).
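To make the operator I^α_{0+} concrete, the following numerical sketch (ours, purely illustrative) evaluates the left fractional integral by a midpoint rule, after a substitution that removes the endpoint singularity of the kernel (t − s)^{α−1}; it can be checked against the ordinary integral for α = 1:

```python
import math

def rl_integral(phi, t, alpha, n=20000):
    """Left Riemann-Liouville integral
    (I^alpha_{0+} phi)(t) = (1/Gamma(alpha)) * int_0^t (t-s)^{alpha-1} phi(s) ds.
    The substitutions u = t - s and u = v**(1/alpha) turn the integral into
    (1/alpha) * int_0^{t**alpha} phi(t - v**(1/alpha)) dv, which has no
    endpoint singularity and is handled by the midpoint rule."""
    a = t ** alpha
    h = a / n
    total = 0.0
    for k in range(n):
        v = (k + 0.5) * h
        total += phi(t - v ** (1.0 / alpha))
    return total * h / (alpha * math.gamma(alpha))
```

For example, (I^{1/2}_{0+} 1)(t) = t^{1/2}/Γ(3/2), while α = 1 recovers the ordinary integral.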
The inverse of K_H is given by

(K_H^{−1} h)(s) = s^{1/2−H} D^{1/2−H}_{0+} ( s^{H−1/2} D^{2H}_{0+} h )(s),

where D^α_{0+} denotes the left fractional Riemann–Liouville derivative of order α. If h is absolutely continuous (see [8]), then

(K_H^{−1} h)(s) = s^{H−1/2} I^{1/2−H}_{0+} ( s^{1/2−H} h′(s) ).

Definition 2.1. On a given probability space (Ω, F, P), a process X is called a strong solution to (1) if:
(1) X is adapted to the filtration {F_t}_{t∈[0,T]} generated by B^H and augmented by the P-null sets;
(2) this filtration satisfies the usual conditions;
(3) X and B^H satisfy (1).

The main tool used in the proofs is Skorokhod's selection theorem, given by the following lemma.

Lemma 2.4 ([4], p. 9). Let (S, ρ) be a complete separable metric space, and let P, P_n, n = 1, 2, …, be probability measures on (S, B(S)) such that P_n converges weakly to P as n → ∞. Then, on a probability space (Ω̃, F̃, P̃), we can construct S-valued random variables X, X_n, n = 1, 2, …, such that:
(i) P_n = P_{X_n}, n = 1, 2, …, and P = P_X, where P_{X_n} and P_X are respectively the laws of X_n and X;
(ii) X_n converges to X P̃-a.s.
We will also make use of the following result, which gives a criterion for the tightness of sequences of laws associated with continuous processes.

Lemma 2.5. Let X^n, n = 1, 2, …, be a sequence of d-dimensional continuous processes satisfying the following two conditions:
(i) there exist positive constants γ and M such that E[|X^n_0|^γ] ≤ M for every n ≥ 1;
(ii) there exist positive constants α, β, M_k, k = 1, 2, …, such that, for every n ≥ 1 and all t, s ∈ [0, k], k = 1, 2, …,

E[|X^n_t − X^n_s|^α] ≤ M_k |t − s|^{1+β}.

Then, there exist a subsequence (n_k), a probability space (Ω̃, F̃, P̃), and d-dimensional continuous processes X̃, X̃^{n_k}, k = 1, 2, …, defined on Ω̃, such that:
(1) the laws of X̃^{n_k} and X^{n_k} coincide;
(2) X̃^{n_k}_t converges to X̃_t uniformly on every finite time interval, P̃-a.s.

Variation of solutions with respect to initial conditions
The purpose of this section is to establish the continuous dependence of the solution on the initial condition when the drift b is continuous and bounded. Note that, in the case of an ordinary differential equation, the continuity of the coefficient is sufficient to ensure this dependence. Next, we give the theorem that is essential in establishing the desired result.

Theorem 3.1. Assume that b is continuous and bounded, and that pathwise uniqueness holds for Eq. (1). Let X^n (respectively, X) be the solution of (1) corresponding to the initial condition x_n (respectively, x_0). If x_n converges to x_0, then

lim_{n→∞} E[ sup_{t≤T} |X^n_t − X_t|² ] = 0.

Before we proceed to the proof of Theorem 3.1, we state the following technical lemma.

Lemma 3.2. Let X^n be the solution of (1) corresponding to the initial condition x_n. Then, for every p > 1/(2H), there exists a positive constant C_p such that, for all n ≥ 1 and all s, t ∈ [0, T],

E[|X^n_t − X^n_s|^{2p}] ≤ C_p |t − s|^{2pH}.

Proof. Due to the stationarity of the increments and the scaling property of the fBm, together with the boundedness of b, we get that

E[|X^n_t − X^n_s|^{2p}] ≤ 2^{2p−1} ( ‖b‖_∞^{2p} |t − s|^{2p} + E[|B^H_t − B^H_s|^{2p}] ) ≤ C_p |t − s|^{2pH},

which finishes the proof.
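In display form, the two elementary bounds behind this estimate read as follows (our expansion of the one-line argument; the constant c_p below is the 2p-th absolute moment of a standard Gaussian):

```latex
\begin{aligned}
|X^n_t - X^n_s|
  &\le \int_s^t |b(r, X^n_r)|\,dr + |B^H_t - B^H_s|
   \le \|b\|_{\infty}\,|t-s| + |B^H_t - B^H_s|,\\
\mathbb{E}\big[|B^H_t - B^H_s|^{2p}\big]
  &= \mathbb{E}\big[|B^H_{t-s}|^{2p}\big]
   = c_p\,|t-s|^{2pH},
\end{aligned}
```

the first line using the boundedness of b, the second the stationarity of increments and the H-self-similarity. Since |t − s| ≤ T, the drift contribution |t − s|^{2p} = |t − s|^{2p(1−H)} |t − s|^{2pH} ≤ T^{2p(1−H)} |t − s|^{2pH} is absorbed into the same bound.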
Let us now turn to the proof of Theorem 3.1.

Proof.
Suppose that the result of the theorem is false. Then there exist a constant δ > 0 and a sequence x_n converging to x_0 such that

E[ sup_{t≤T} |X^n_t − X_t|² ] ≥ δ for all n ≥ 1.

Let X^n (respectively, X) be the solution of (1) corresponding to the initial condition x_n (respectively, x_0). According to Lemma 3.2, the sequence (X^n, X, B^H) satisfies conditions (i) and (ii) of Lemma 2.5. Then, by Skorokhod's selection theorem, there exist a subsequence {n_k, k ≥ 1}, a probability space (Ω̃, F̃, P̃), and stochastic processes (X̃, Ỹ, B̃^H), (X̃^k, Ỹ^k, B̃^{H,k}), k ≥ 1, defined on (Ω̃, F̃, P̃), such that:
(α) for each k ≥ 1, the laws of (X̃^k, Ỹ^k, B̃^{H,k}) and (X^{n_k}, X, B^H) coincide;
(β) (X̃^k, Ỹ^k, B̃^{H,k}) converges to (X̃, Ỹ, B̃^H) uniformly on every finite time interval, P̃-a.s.
Thanks to property (α), we have, for k ≥ 1 and t > 0,

X̃^k_t = x_{n_k} + ∫_0^t b(s, X̃^k_s) ds + B̃^{H,k}_t, P̃-a.s.;

in other words, X̃^k satisfies the same SDE (1) as X^{n_k}. Similarly,

Ỹ^k_t = x_0 + ∫_0^t b(s, Ỹ^k_s) ds + B̃^{H,k}_t, P̃-a.s.

Using (β) and the continuity of b, we can let k → ∞ in both identities and deduce that

X̃_t = x_0 + ∫_0^t b(s, X̃_s) ds + B̃^H_t and Ỹ_t = x_0 + ∫_0^t b(s, Ỹ_s) ds + B̃^H_t.

Thus, the processes X̃ and Ỹ satisfy the same SDE on (Ω̃, F̃, P̃), with the same driving noise B̃^H and the same initial condition x_0. Then, by pathwise uniqueness, we conclude that X̃_t = Ỹ_t for all t ∈ [0, T], P̃-a.s.
On the other hand, by uniform integrability we have that

lim_{k→∞} E[ sup_{t≤T} |X̃^k_t − Ỹ^k_t|² ] = E[ sup_{t≤T} |X̃_t − Ỹ_t|² ] = 0,

which is a contradiction, since the laws of (X̃^k, Ỹ^k) and (X^{n_k}, X) coincide. Then the desired result follows.

The case of discontinuous drift coefficient
In this section, we drop the continuity assumption on the drift coefficient and only assume that b is bounded. The goal of this section is to obtain the same result as in Theorem 3.1 without the continuity assumption. In order to use the fractional Girsanov theorem given in [8, Thm. 2], we should first check that the conditions imposed there are satisfied in our context. This is done in the following lemma.
As a result, we get that ∫_0^T |v_s|² ds < ∞, P-a.s.
(2) The second item is obtained easily from a direct estimate involving the constant Γ(2 − 2H)/2, which finishes the proof.

Next, we establish the following Krylov-type inequality, which will play an essential role in the sequel.

Lemma 4.2. Suppose that X is a solution of SDE (1). Then, there exists β > 1 + dH such that, for any measurable nonnegative function g : [0, T] × R^d → R_+,

E[ ∫_0^T g(t, X_t) dt ] ≤ M ‖g‖_{L^β([0,T]×R^d)},    (4)

where M is a constant depending only on T, d, β, and H.
Proof. Let W be a d-dimensional Brownian motion such that B^H_t = ∫_0^t K_H(t, s) dW_s. For the process v introduced in Lemma 4.1, let us define P̃ by

dP̃/dP = exp( − ∫_0^T v_s dW_s − (1/2) ∫_0^T |v_s|² ds ).

Then, in light of Lemma 4.1 together with the fractional Girsanov theorem [8, Thm. 2], we can conclude that P̃ is a probability measure under which the process X − x is an fBm. Now, applying Hölder's inequality, we obtain an estimate in which 1/α + 1/ρ = 1, and C is a positive constant depending only on T, α, and ρ.
From [2, Lemma 4.3] we can see that E[Z_T^α] satisfies a bound in which the constant C_{H,d,T} is a continuous increasing function depending only on H, d, and T. On the other hand, applying Hölder's inequality again, with 1/γ + 1/γ′ = 1 and γ > dH + 1, we obtain a further estimate. A direct calculation then bounds the resulting Gaussian-density norm; plugging this into (7), and finally combining with (5) and (6), we get estimate (4) with β = ργ. The proof is now complete.
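The direct calculation alluded to above is, in substance, a Gaussian-density estimate. As a hedged sketch (our reconstruction, not the paper's own display): under P̃ the process X − x is an fBm, so X_t − x has a Gaussian density with variance t^{2H} in each coordinate, and Hölder's inequality with exponents γ, γ′ gives

```latex
\widetilde{\mathbb{E}}\big[g(t,X_t)^{\rho}\big]
  = \int_{\mathbb{R}^d} g(t,x+y)^{\rho}\,
      \frac{e^{-|y|^{2}/(2t^{2H})}}{(2\pi t^{2H})^{d/2}}\,dy
  \le \big\|g(t,\cdot)\big\|_{L^{\rho\gamma}(\mathbb{R}^d)}^{\rho}\,
      \big\|p_{t}\big\|_{L^{\gamma'}(\mathbb{R}^d)},
\qquad
\big\|p_{t}\big\|_{L^{\gamma'}(\mathbb{R}^d)} = C_{d,\gamma}\,t^{-dH/\gamma},
```

where p_t denotes the Gaussian density above. The resulting power t^{−dH/γ} is integrable near 0 for γ large enough, in line with the condition γ > dH + 1, and the exponent β = ργ matches the norm ‖g‖_{L^β}.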
Now we are able to state the main result of this section: the conclusion of Theorem 3.1 remains valid when the drift b is only assumed to be bounded and measurable.

Proof. Proceeding as in the proof of Theorem 3.1, the only step that used the continuity of b is the passage to the limit in the drift term; it therefore suffices to show that ∫_0^t b(s, X̃^k_s) ds converges to ∫_0^t b(s, X̃_s) ds in probability. In other words, for ε > 0, we will show that

lim sup_{k→∞} P̃( | ∫_0^t b(s, X̃^k_s) ds − ∫_0^t b(s, X̃_s) ds | > ε ) = 0.

To this end, set b_δ = b ∗ φ_δ with φ_δ(x) = δ^{−d} φ(x/δ), where ∗ denotes the convolution on R^d, and φ is an infinitely differentiable function with support in the unit ball such that ∫ φ(x) dx = 1. Applying Chebyshev's inequality, we obtain

P̃( | ∫_0^t b(s, X̃^k_s) ds − ∫_0^t b(s, X̃_s) ds | > ε ) ≤ (1/ε) Ẽ ∫_0^t |b − b_δ|(s, X̃^k_s) ds + (1/ε) Ẽ ∫_0^t |b_δ(s, X̃^k_s) − b_δ(s, X̃_s)| ds + (1/ε) Ẽ ∫_0^t |b_δ − b|(s, X̃_s) ds =: J_1 + J_2 + J_3.

From the continuity of b_δ in x and from the convergence of X̃^k_s to X̃_s uniformly on every finite time interval P̃-a.s., it follows that J_2 converges to 0 as k → ∞ for every δ > 0.
On the other hand, let θ : R^d → R_+ be a smooth truncation function such that θ(z) = 1 in the unit ball and θ(z) = 0 for |z| > 2.
By applying Lemma 4.2, we obtain an estimate for J_1 in which the constant N does not depend on δ and k, and ‖·‖_{β,R} denotes the norm in L^β([0, T] × B(0, R)).
The last expression on the right-hand side of the last inequality satisfies a further estimate. Indeed, we know that sup_{k≥1} Ẽ[sup_{s≤t} |X̃^k_s|^p] < ∞ for all p > 1. Substituting estimate (10) into (9), letting δ → 0, and using (11), we deduce the convergence of the term J_1. Finally, since estimate (10) also holds for X̃, it suffices to use the same arguments as before to obtain the convergence of the term J_3, which completes the proof.
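The mollification b_δ = b ∗ φ_δ used in this proof can be illustrated numerically. The sketch below (ours, one-dimensional, with the standard bump function) smooths a discontinuous bounded drift and reproduces it pointwise away from the discontinuity:

```python
import math

def bump(x):
    """Smooth bump supported in (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# Normalizing constant so that the discretized integral of bump equals 1.
_NPTS = 4000
_DZ = 2.0 / _NPTS
_Z = sum(bump(-1.0 + (k + 0.5) * _DZ) for k in range(_NPTS)) * _DZ

def mollify(b, x, delta):
    """b_delta(x) = (b * phi_delta)(x) = int b(x - delta*z) phi(z) dz,
    with phi = bump / _Z, computed by the midpoint rule on the same
    grid used to compute _Z."""
    total = 0.0
    for k in range(_NPTS):
        z = -1.0 + (k + 0.5) * _DZ
        total += b(x - delta * z) * bump(z) / _Z
    return total * _DZ
```

For b(x) = sign(x), the mollification equals b exactly at distance greater than δ from the discontinuity, and equals 0 at the discontinuity by symmetry of the bump.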