Moderate deviations for a stochastic Burgers equation

A moderate deviations principle for the law of a stochastic Burgers equation is proved via the weak convergence approach. In addition, some useful estimates toward a central limit theorem are established.


Introduction
We consider the following stochastic Burgers equation with multiplicative space-time white noise, indexed by ε > 0, given by (1.1), with Dirichlet boundary conditions u^ε(t, 0) = u^ε(t, 1) = 0 for t ∈ [0, T], and the initial condition u^ε(0, x) = u_0(x) for x ∈ [0, 1]. We assume that u_0 is continuous on [0, 1] and that σ is bounded and globally Lipschitz on R. The driving noise W is a space-time Brownian sheet defined on some filtered probability space (Ω, F, (F_t)_{t∈[0,T]}, P).
The deterministic Burgers equation was introduced in [Bur74] as a simplified mathematical model describing turbulence phenomena in fluids. Its stochastic version has been the subject of several works; see for instance [BCJL94], [Mor99], [Gyö98] and the references therein. In particular, a large deviation principle is established in [Set14] for an additive version of (1.1), and in [CW99] and [FS17] for a class of Burgers-type stochastic partial differential equations (SPDEs for short) including (1.1). Our aim in this paper is to investigate the moderate deviations of u^ε from the deterministic solution u^0 of equation (1.4) below. More precisely, we deal with the deviations of the trajectory ū^ε := (u^ε − u^0)/a(ε) defined in (1.2), where the deviation scale a : R_+ → R_+ satisfies (1.3), and u^0 stands for the solution of the following deterministic partial differential equation, with Dirichlet boundary conditions u^0(t, 0) = u^0(t, 1) = 0 for t ∈ [0, T] and the initial condition u^0(0, x) = u_0(x).
Roughly speaking, the deviation scale a(ε) strongly influences the asymptotic behavior of ū^ε. In fact, for a given norm ‖·‖, bounds on the probabilities P((u^ε − u^0)/√ε ∈ ·) are addressed by the central limit theorem, while the probabilities P(u^ε − u^0 ∈ ·) are estimated by large deviations results. Furthermore, when we are interested in probabilities of the form P((u^ε − u^0)/a(ε) ∈ ·) under the condition (1.3) (e.g. a(ε) = ε^{1/4}), we are in the framework of the so-called moderate deviations.
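The condition (1.3) on the deviation scale is, in the moderate deviations literature, customarily the following (we record this standard form for orientation):

```latex
a(\varepsilon)\longrightarrow 0
\qquad\text{and}\qquad
\frac{a(\varepsilon)}{\sqrt{\varepsilon}}\longrightarrow\infty
\qquad\text{as }\varepsilon\to 0.
```

The two borderline choices a(ε) = √ε and a(ε) ≡ 1 correspond to the central limit and large deviations regimes, respectively, and a(ε) = ε^{1/4} lies strictly between them.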
The moderate deviations principle has practical interest and arises in various fields such as statistical inference and finance. It provides the rate of convergence and a useful method for constructing asymptotic confidence intervals (see, e.g., [GZ11], [CGW15]). Furthermore, moderate deviations principles for various kinds of stochastic processes have been established recently; see for instance [DA92], [Lim95], [Gao96].
Let us stress that the main difficulty in studying any aspect of Burgers-type equations lies in their quadratic term. Indeed, most of the techniques usually used to deal with stochastic differential equations with Lipschitz drift coefficients no longer work, and one resorts to localization or tightness arguments to circumvent this difficulty.
Generally speaking, large deviations theory deals with determining how fast the probabilities P(A_ε) of a family of rare events (A_ε) decay to 0 as ε tends to 0, and how to compute the precise rate of decay as a function of the rare events. There are basically two approaches to analyzing large deviations. The first, originally used by Freidlin and Wentzell [FW] for diffusion processes, relies on discretization and localization arguments that allow one to deduce the large deviations principle for the solutions of the equations under study, via a general contraction principle, from Schilder-type theorems for the driving noises. The second, which we use in the present paper, is the so-called weak convergence approach. It was introduced in [DE11] and developed in [BD98], [BD00] and [BDM08], and its starting point is the equivalence between the large deviations principle and the Laplace principle in the setting of Polish spaces. It consists in using certain variational formulas that can be viewed as the minimal cost functions of associated stochastic optimal control problems. These minimal cost functions have a form to which the theory of weak convergence of probability measures can be applied. We refer to [DE11] for a more complete exposition of this approach. The main advantage of the weak convergence approach is that it allows one to avoid establishing the technical exponential-type probability estimates usually needed in classical studies of the large deviations principle, and reduces the proofs to demonstrating qualitative properties such as existence, uniqueness and tightness of certain analogues of the original processes. As pointed out above, we will prove a moderate deviations principle for the stochastic Burgers equation (1.1), as well as two first-step results toward a central limit theorem.
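For orientation, the equivalence invoked here can be stated as follows (the standard formulation, with r(ε) denoting the speed): a family {X^ε} of E-valued random elements, E Polish, satisfies the Laplace principle with rate function I if for every bounded continuous F : E → R,

```latex
\lim_{\varepsilon\to 0}\;
\frac{1}{r(\varepsilon)}
\log E\!\left[\exp\!\big(-r(\varepsilon)\,F(X^{\varepsilon})\big)\right]
\;=\;
-\inf_{x\in E}\big\{F(x)+I(x)\big\}.
```

By Varadhan's lemma and Bryc's converse, this is equivalent to the large deviations principle with the same speed and rate function, which is the starting point of the weak convergence approach.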
It is worth bearing in mind that the main difficulty we encountered in establishing a central limit theorem is due to the quadratic term appearing in the Burgers equation, for which the classical conditions (namely, the Lipschitz condition on the drift coefficient, its differentiability, and the boundedness of its derivative) are no longer satisfied. The paper is organized as follows. Section 2 is devoted to some preliminaries. The framework of our moderate deviations result and its proof are given in Section 3. In Section 4, toward a central limit theorem for the stochastic Burgers equation, we prove the uniform boundedness and the convergence of u^ε to u^0 in L^q(Ω; C([0, T]; L^2([0, 1]))) for q ≥ 2. Some technical results needed in our proofs are included in the Appendix.
In this paper all positive constants are denoted by c, and their values may change from line to line. Also, for ρ ≥ 1 and t ∈ [0, T], the usual norms on L^ρ([0, 1]) and H_t := L^2([0, t] × [0, 1]) are denoted by ‖·‖_ρ and ‖·‖_{H_t}, respectively.

Preliminaries
Let {W(t, x), t ∈ [0, T], x ∈ [0, 1]} be a space-time Brownian sheet on a filtered probability space (Ω, F, (F_t), P), that is, a zero-mean Gaussian field with covariance function E[W(t, x)W(s, y)] = (t ∧ s)(x ∧ y). For each t ∈ [0, T], F_t is the completion of the σ-field generated by the random variables {W(s, x), 0 ≤ s ≤ t, x ∈ [0, 1]}. A rigorous meaning of the solution of (1.1) is given by a jointly measurable and F_t-adapted process u^ε := {u^ε(t, x); (t, x) ∈ [0, T] × [0, 1]} satisfying, for almost all ω ∈ Ω and all t ∈ [0, T], the evolution equation (2.1), where G_t(·, ·) denotes the Green kernel corresponding to the operator ∂/∂t − ∆ with Dirichlet boundary conditions. The stochastic integral in (2.1) is understood in the Walsh sense; see [Wal86].
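As a side illustration (not part of the paper's argument), the covariance structure E[W(t, x)W(s, y)] = (t ∧ s)(x ∧ y) of the Brownian sheet can be checked numerically on a discretized sheet; the grid sizes, sample count and seed below are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check of the Brownian-sheet covariance
# E[W(t, x) W(s, y)] = min(t, s) * min(x, y) on [0, 1]^2.
rng = np.random.default_rng(0)
nt, nx, nsim = 20, 20, 10_000
dt, dx = 1.0 / nt, 1.0 / nx

# Independent Gaussian increments on each grid cell, variance dt*dx.
dW = rng.normal(0.0, np.sqrt(dt * dx), size=(nsim, nt, nx))
# W(t_i, x_j) is the sum of the increments over [0, t_i] x [0, x_j].
W = dW.cumsum(axis=1).cumsum(axis=2)

# Compare E[W(0.5, 0.5) W(1.0, 0.75)] with min(0.5, 1.0)*min(0.5, 0.75) = 0.25.
i1, j1 = nt // 2 - 1, nx // 2 - 1       # (t, x) = (0.5, 0.5)
i2, j2 = nt - 1, (3 * nx) // 4 - 1      # (s, y) = (1.0, 0.75)
cov = np.mean(W[:, i1, j1] * W[:, i2, j2])
print(cov)  # close to 0.25, up to Monte Carlo error
```

The same simulation also recovers Var W(1, 1) = 1, the variance of the sheet at the corner of the unit square.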
We now recall some estimates on the Green kernel G, as stated in [Gyö98] and [Mor99], that will be used in the sequel.
Lemma 2.1. There exists a constant c, depending only on T, such that for all y, z ∈ [0, 1] and all t, t′ ∈ [0, T] with 0 ≤ t ≤ t′: i) where E is the expectation with respect to P.
In the context of the weak convergence approach, proving the Laplace principle for functionals of the Brownian sheet is essentially based on the following variational representation formula, which was originally proved in [BD00].
into R, and let P_2 be the class of all predictable processes φ such that ‖φ‖_{H_T} < ∞ a.s. Then

3.1.1. Sufficient conditions for a general Laplace principle. Here, we briefly describe the result needed in our context for proving the Laplace principle, and we state our main result. Let us first introduce some notation. For ε > 0, denote by G^ε : E_0 × C([0, T] × [0, 1]; R) → E a measurable map, where E_0 stands for a compact subspace of E in which the initial condition u_0 takes its values, and let X^{ε,u_0} := G^ε(u_0, h(ε)W). (3.2) Later, we will state sufficient conditions under which the Laplace principle for X^{ε,u_0} holds uniformly in u_0 over compact subsets of E_0. For any positive integer N, we introduce the sets S^N and P_2^N. It is worth noticing that S^N is a compact metric space when equipped with the weak topology of L^2([0, T] × [0, 1]), and that P_2^N is the space of controls, which plays a central role in the weak convergence approach. For u ∈ H_T, we define the element

We are now in a position to state the following result, due to Budhiraja et al. [BDM08], which gives sufficient conditions for the Laplace principle to hold.

Proposition 3.1. (Theorem 7 in [BDM08]) Assume that there exists a measurable map such that conditions (A1) and (A2) hold. Then the family {X^{ε,u_0}; ε > 0} defined by (3.2) satisfies the Laplace principle on E with speed λ^2(ε) and rate function I^{u_0} given, for any h ∈ E and u_0 ∈ E_0, by the expression below, where the infimum over an empty set is taken to be ∞.
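For the reader's convenience, the variational representation from [BD00] invoked here takes, for the Brownian sheet, the following standard form (stated as a guide, for a bounded Borel functional F of W):

```latex
-\log E\!\left[e^{-F(W)}\right]
=\inf_{\varphi\in\mathcal{P}_2}
E\!\left[\frac12\,\|\varphi\|_{H_T}^{2}
+F\!\Big(W+\int_0^{\cdot}\!\!\int_0^{\cdot}\varphi(s,y)\,dy\,ds\Big)\right].
```

The infimum on the right-hand side is the minimal cost function of the associated stochastic control problem mentioned in the Introduction.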

3.1.2. Controlled processes for the SPDE (1.1). In this subsection, we adapt the general scheme described above to the study of moderate deviations for equation (1.1).
We set E = E_0 := C([0, T]; L^2([0, 1])), the space of solutions of (1.1). As we are interested in proving the Laplace principle for ū^ε(t, x) defined by (1.2), we interpret ū^ε as a functional of the Brownian sheet W. Indeed, using (2.1) and (2.2), we deduce that ū^ε(t, x) satisfies, for all ω ∈ Ω and all t ∈ [0, T], the following equation This implies (see Theorem IV.9.1 of [IW14]) the existence of a measurable mapping

As a first step toward conditions (A1) and (A2) of Proposition 3.1, we define, for v^ε ∈ P_2^N,

In Proposition 3.2 below we will establish that the map ū^{ε,v^ε} is the unique solution of the following stochastic controlled analogue of equation (3.4), referred to as the controlled process. Moreover, for any v ∈ S^N, we associate with (3.6) the following skeleton zero-noise equation: Existence and uniqueness of the solution ū^v of (3.7) are obtained in Proposition 3.3 below, and thereby we define the map

With these notations in mind, the main result of this section is stated in the following theorem.

Theorem 3.3. Assume that u_0 is continuous, that σ is bounded and globally Lipschitz, and that (1.3) holds.
Then the family of processes {ū^ε; ε > 0} satisfies an LDP on the space C([0, T]; L^2([0, 1])) with speed λ^2(ε) and rate function I.

3.2. Proof of the main result. We basically follow the same ideas as in [BDM08] and [Set14]. According to Proposition 3.1, it suffices to check that conditions (A1) and (A2) are fulfilled. For (A1), we will establish well-posedness, tightness and convergence of the controlled processes. Condition (A2), which gives that I is a rate function, will follow from the continuity of the map G^0 with respect to the weak topology.
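In this framework the rate function is customarily expressed through the skeleton map ū^v of (3.7); its usual form, consistent with Proposition 3.1, is:

```latex
I(h)=\inf\Big\{\tfrac12\int_0^T\!\!\int_0^1 v^2(s,y)\,dy\,ds
\;:\; v\in L^2([0,T]\times[0,1]),\ \bar u^{\,v}=h\Big\},
\qquad h\in C\big([0,T];L^2([0,1])\big),
```

with the infimum over an empty set equal to ∞.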
The proof of (A1) will be done in several steps.
Step 1: Existence and uniqueness of controlled and limiting processes.
Proposition 3.2. Assume that σ is bounded and globally Lipschitz, and that (1.3) holds. Then the controlled equation (3.6) admits a unique solution ū^{ε,v^ε}.

Proof. Since Q^{ε,v^ε} is defined through an exponential martingale, it is a probability measure on Ω. Thus, by the Girsanov theorem, the process W̃ defined by the corresponding shift is a Brownian sheet under Q^{ε,v^ε}. Now, if u denotes the unique solution of (3.4) driven by W̃(dt, dx) on the space (Ω, F, Q^{ε,v^ε}), then u satisfies (3.6), Q^{ε,v^ε}-a.s., and hence, by equivalence of the probability measures, u satisfies (3.6), P-a.s. For the uniqueness, if u_1 and u_2 are two solutions of (3.6) on (Ω, F, P), then u_1 and u_2 are solutions of (3.4) driven by W̃(dt, dx) on (Ω, F, Q^{ε,v^ε}). By the uniqueness of the solution of (3.4), we obtain u_1 = u_2, Q^{ε,v^ε}-a.s., and thus u_1 = u_2, P-a.s., by equivalence of the probability measures.
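The exponential martingale defining Q^{ε,v^ε} and the shifted sheet have the standard Girsanov form; we record it here as a sketch, with φ denoting the control v^ε up to an ε-dependent normalization that we leave unspecified:

```latex
\frac{dQ^{\varepsilon,v^{\varepsilon}}}{dP}
=\exp\Big\{-\int_0^T\!\!\int_0^1\varphi(s,y)\,W(ds,dy)
-\frac12\int_0^T\!\!\int_0^1\varphi^2(s,y)\,dy\,ds\Big\},
\qquad
\widetilde W(t,x)=W(t,x)+\int_0^t\!\!\int_0^x\varphi(s,y)\,dy\,ds,
```

so that, by the Girsanov theorem, W̃ is a Brownian sheet under Q^{ε,v^ε}.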
Proposition 3.3. Assume that σ is bounded and globally Lipschitz. Then, for any v ∈ S^N with N ∈ N, the equation (3.7) admits a unique solution ū^v belonging to C([0, T]; L^2([0, 1])). Moreover, the estimate (3.10) holds for any q ≥ 2.

Proof. The proof follows from a standard fixed point argument; for the convenience of the reader, we include it in the Appendix.
Step 2: Tightness of the family (ū^{ε,v^ε})_ε. We have the following proposition.
Proof. Recall that ū^{ε,v^ε} can be written as a sum of four terms, where I_i^{ε,v^ε}(t, x), i = 1, 2, 3, 4, stands for the i-th summand of the right-hand side of (3.11). In view of (3.11), in order to prove the claim of Proposition 3.4, we state and prove the next two lemmas, which give the tightness of each summand I_i^{ε,v^ε}, i = 1, 2, 3, 4.
We first consider the cases i = 1 and i = 4. Using Theorem 4.10 in [KS12], the following lemma states sufficient conditions for tightness.
For the tightness of (I_2^{ε,v^ε})_ε, we follow an idea introduced in [Gyö98], which is essentially based on Lemma 4.3 in the Appendix. More precisely, we state the following.

Proof. The proof of the tightness of (I_3^{ε,v^ε})_ε is omitted, since it can be carried out similarly to that of (I_2^{ε,v^ε})_ε. To show the tightness of (I_2^{ε,v^ε})_ε, we apply Lemma 4.3 with q = 1, ρ = 2 and ζ^ε(t, ·) := √ε h(ε)(ū^{ε,v^ε})^2(t, ·). Set

According to Lemma 4.3, it suffices to show that (θ^ε)_ε is bounded in probability, i.e.
According to Theorem 2.1 in [Gyö98], the continuity of the initial condition u_0 implies the continuity of the solution u^0 of equation (1.4) on the compact set [0, T] × [0, 1]. Consequently, u^0 is bounded. This fact, combined with condition (1.3), allows us to write the function g^ε as a sum of two functions g_1^ε and g_2^ε satisfying quadratic and linear growth conditions, respectively, uniformly in ε ≤ ε_0 for some ε_0. Using again condition (1.3) and the hypotheses on σ, we see that σ^ε is bounded and globally Lipschitz, uniformly in ε ≤ ε_0 for some ε_0. Thus, equation (3.20) is covered by the class of semilinear SPDEs studied in [Gyö98], for which the existence and uniqueness of the solution ū^{ε,v^ε} is shown by an approximation procedure. This procedure consists in defining a sequence of truncated equations and establishing existence and convergence results for the corresponding sequence of solutions (ū_n^{ε,v^ε})_n; see [Gyö98], [FS17], [Set14]. In fact, in the course of the proof of Theorem 2.1 in [Gyö98] it was shown that the estimate (3.21) holds, and that (ū_n^{ε,v^ε})_n converges in probability in C([0, T]; L^2([0, 1])) to the solution ū^{ε,v^ε} of (3.11). Then, as c tends to infinity, the estimate (3.21) yields the desired bound for the truncated solutions, and by letting n tend to infinity and using the convergence in probability of ū_n^{ε,v^ε} to ū^{ε,v^ε}, we obtain the claimed boundedness. Hence, by applying Lemma 4.3, we obtain the tightness property for (I_2^{ε,v^ε})_ε.
Step 3: Convergence to the limit equation. Having shown the tightness of each I_i^{ε,v^ε} for i = 1, 2, 3, 4, by Prohorov's theorem we can extract a subsequence, still denoted by ε, along which each of these processes and ū^{ε,v^ε} converge in distribution (the controls v^ε being viewed as S^N-valued random elements) in C([0, T]; L^2([0, 1])) to limits denoted respectively by I_i^{0,v}, i = 1, 2, 3, 4, and ū^{0,v}. We will show that

To handle the convergence of each of the other terms, we invoke the Skorohod representation theorem and assume almost sure convergence on a larger common probability space. For i = 2, applying Lemma 4.1 with ρ = 2 and λ = 1, we deduce that there exists a constant c > 0 such that

Since (ū^{ε,v^ε})_ε converges a.s. in C([0, T]; L^2([0, 1])) to ū^{0,v}, there exists ε_0 > 0 small enough such that, a.s.,

Toward a central limit theorem
Many central limit theorem results have recently been established for various kinds of parabolic SPDEs under strong assumptions on the drift coefficient. More specifically, under a linear growth condition, differentiability, and a global Lipschitz condition on both the drift coefficient and its derivative, central limit theorems have been established in [WZ15], [YJ16]. Since these conditions are not all fulfilled for the stochastic Burgers equation, it is not surprising that classical tools do not apply to establish a central limit theorem. Nevertheless, in this section we prove two first-step results toward a central limit theorem: the uniform boundedness and the convergence of u^ε to u^0 in L^q(Ω; C([0, T]; L^2([0, 1]))) for q ≥ 2. We hope that these estimates will be helpful for future work in this direction.
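For context, if the drift of (1.1) is written as ∂_x g(u) with g quadratic, as in the class treated in [Gyö98], then the natural candidate for a central limit theorem is that (u^ε − u^0)/√ε converges to the solution V of the SPDE obtained by formally linearizing (1.1) around u^0 (a heuristic guide only, not a result proved here):

```latex
\frac{\partial V}{\partial t}
=\frac{\partial^{2} V}{\partial x^{2}}
+\frac{\partial}{\partial x}\big(g'(u^{0})\,V\big)
+\sigma(u^{0})\,\dot W,
\qquad V(t,0)=V(t,1)=0,\quad V(0,x)=0.
```

The estimates of this section (uniform boundedness and convergence of u^ε to u^0) are the first steps toward justifying such a limit.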
We begin with the following result.
Proposition 4.1. Assume that σ is bounded and globally Lipschitz. Then for all q ≥ 2, we have (4.1).

Proof. We use arguments similar to those of Cardon-Weber and Millet [CWM01] and Gyöngy [Gyö98]. For 0 < ε ≤ 1, set

Then ϑ^ε is a solution of the following equation, with Dirichlet boundary conditions and initial condition ϑ^ε(0, x) = u_0(x). By the Burkholder-Davis-Gundy inequality (see Theorem 2.5.2 in [Kho02]) there exists a universal constant C(p), depending only on p, such that

thus, using the boundedness of σ and Lemma 2.1, we get

where c is a constant independent of ε. In particular, the random variable η̄^ε := sup_{0≤t≤T} sup_{0≤x≤1} |η^ε(t, x)| is well defined a.s. Moreover, using the SPDE (4.2) satisfied by ϑ^ε and following the same arguments as in the proof of Theorem 2.1 in [Gyö98], we deduce the existence of a constant c independent of ε and ω (see [Gyö98], pages 286-289) such that

Consequently, for any q ≥ 2,

Hence, to prove (4.1) it suffices to show that

For this purpose, note first that

Thus, by Lemma 4.2, there exist two positive constants C_1 and C_2, independent of ε, such that, for any A, the estimate (4.5) holds. Setting φ(x) := (1 + x^{2q}) e^{cT(1+x^2)}, which is a positive, continuous and increasing function on [0, +∞), we get, for any A ≥ C_1‖σ‖_∞,

where the last integral is finite provided that cT C_2(1 + T^{1/8}) < 1. This implies that there exists T_0 > 0, independent of u_0 and ε, such that (4.4) holds for 0 < T ≤ T_0. Using (4.3), and iterating the procedure finitely many times, we conclude the proof.

We can now state the following proposition.
(4.11)

Therefore, for any fixed M > 0 we have

To deal with the second term of the last inequality, on the one hand, the estimates (2.3) and (4.1) imply that there exists c > 0 such that (4.18) holds. On the other hand, by the Markov inequality and using again the estimates (2.3) and (4.1), we obtain (4.19). Then

E[sup_{0≤t≤T} ‖u^ε(t, ·) − u^0(t, ·)‖_2^q] ≤ c ε^{q/2} e^{2cM^q} + c M^{−q/2}. (4.20)

Letting ε tend to zero, and taking into account that M does not depend on ε, we obtain

lim sup_{ε→0} E[sup_{0≤t≤T} ‖u^ε(t, ·) − u^0(t, ·)‖_2^q] ≤ c M^{−q/2}.

Finally, since M is arbitrary, we conclude that (4.6) holds.

which is clearly finite.
The continuity of the solution ū^v follows from the continuity of the integrals. For the estimate (3.10), one can use for ū^v the same computations as in (4.24) together with Gronwall's lemma.

Then the family (J(ζ^ε))_{ε>0} is uniformly tight in C([0, T]; L^ρ([0, 1])).
Therefore, by the weak convergence of (v_n) to v in H_T, we obtain point i) of Lemma 4.4. Now, let us show (4.25) and (4.26). Using the Cauchy-Schwarz inequality, the boundedness of σ, the fact that v_n, v ∈ S^N, and Lemma 2.1, we have, for any 0 ≤ t ≤ T,