Estimates for distribution of suprema of solutions to higher-order partial differential equations with random initial conditions

In this paper we consider higher-order partial differential equations from the class of linear dispersive equations. We investigate solutions to these equations subject to random initial conditions given by harmonizable $\varphi$-sub-Gaussian processes. The main results are bounds for the distributions of the suprema of the solutions. We present examples of processes for which the assumptions of the general result can be verified and the bounds can be written in explicit form. The main result is also specialized to the case of a Gaussian initial condition.


Introduction
Numerous recent studies are concerned with evolution equations of the form
$$\frac{\partial u}{\partial t} + \sum_{j=1}^{l} \frac{\partial^{2j+1} u}{\partial x^{2j+1}} + u^{k}\frac{\partial u}{\partial x} = 0, \quad l, k \in \mathbb{N},$$
which are dispersive equations of order $2l+1$ with a convective term $u^{k}\frac{\partial u}{\partial x}$; equations with coefficients of more general form and with different kinds of nonlinearity are also the subject of active research. Equations of this type are used to model various dispersive phenomena such as plasma waves, capillarity-gravity water waves, etc., in situations when the cubic dispersive term is weak or not sufficient. In the most recent research, generalizations of Equations (1.1), (1.2) have been suggested and treated.
In the physical and mathematical literature the existence, uniqueness and analytic properties of solutions to the initial value problem have been intensively investigated for various linear and nonlinear dispersive equations. Boundary value problems for such equations were also considered. We refer, for example, to the comprehensive study undertaken in the book by Tao [16], among many other books and papers on the topic.
One should note the importance of the study of constant-coefficient linear dispersive equations both for its own sake and because it provides prerequisites for the theory of nonlinear dispersive equations, since the latter are often obtained by perturbation of the linear theory (see [16]). Developing the theory of linear equations is also essential for describing those evolution phenomena where linear effects compensate or dominate nonlinear ones. In such situations, one can expect the nonlinear solutions to display almost the same behavior as the linear ones.
In the probabilistic literature significant attention has been paid to equations of the form
$$\frac{\partial u}{\partial t} = k_m \frac{\partial^m u}{\partial x^m}, \quad x \in \mathbb{R},\ t > 0,\ m \ge 2. \tag{1.3}$$
The investigation of fundamental solutions to equation (1.3) can be traced back to works by Bernštein and Lévy. Such solutions are sign-varying and, based on them, the so-called pseudo-processes have been introduced and extensively investigated in the literature. Note that Equations (1.3) of even and odd order possess solutions of different structure and behaviour (see, for example, [13]). We refer to [7], where a review of recent results on this topic and additional literature are presented.
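The sign-varying nature of such fundamental solutions can be seen already in the simplest odd-order case. For the Airy-type equation $u_t + u_{xxx} = 0$, one common normalization of the fundamental solution is $K_t(x) = (3t)^{-1/3}\,\mathrm{Ai}\big(x/(3t)^{1/3}\big)$; the following sketch (an illustration, not taken from the cited works) checks numerically that this kernel integrates to approximately one and yet changes sign, in contrast to the Gaussian heat kernel.

```python
import numpy as np
from scipy.special import airy

# Fundamental solution of u_t + u_xxx = 0: K_t(x) = (3t)^(-1/3) * Ai(x / (3t)^(1/3)).
# (Sign conventions vary across the literature; this is one common normalization.)
def airy_kernel(x, t):
    s = (3.0 * t) ** (1.0 / 3.0)
    ai, _, _, _ = airy(x / s)   # scipy.special.airy returns (Ai, Ai', Bi, Bi')
    return ai / s

dx = 0.01
x = np.arange(-60.0, 10.0, dx)
K = airy_kernel(x, t=1.0)

mass = float(np.sum(K) * dx)     # Riemann-sum approximation of the total integral
print(bool(abs(mass - 1.0) < 0.1))  # True: total mass is close to 1 (oscillatory tail truncated)
print(bool(K.min() < 0))            # True: the kernel is sign-varying, unlike a heat kernel
```

The oscillatory left tail converges only conditionally, so the truncated numerical integral matches 1 only up to a small error; the negative minimum confirms the sign-varying behaviour mentioned above.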
We note that in the probabilistic literature equations of the form (1.3) and their generalizations are often called higher-order heat-type equations.
Equations of the form (1.3) subject to random initial conditions were studied in [1], namely, the asymptotic behavior was analysed for the rescaled solution to the Airy equation with random initial conditions given by weakly dependent stationary processes.
More general odd-order equations of the form (1.4) subject to random initial conditions represented by strictly ϕ-sub-Gaussian harmonizable processes were considered in [2,7]. Rigorous conditions for the existence of solutions were stated therein, and some distributional properties of the solutions were investigated. The present paper continues the line of research initiated in the papers [2,7]. Note that in the mathematical literature initial value problems for partial differential equations have been studied within the framework of various functional spaces, including rather abstract ones. Here we consider Equation (1.4) in the framework of special Banach spaces of random variables which constitute a subclass of Orlicz spaces of exponential type; more precisely, we deal with spaces of strictly ϕ-sub-Gaussian random processes. These spaces play an important role in extending properties of Gaussian and sub-Gaussian processes. Basic results on ϕ-sub-Gaussian processes and fields can be found, for example, in [3,4,8,17].
The general methods and techniques developed for ϕ-sub-Gaussian processes, applied to the problems under consideration here, permit us to obtain bounds for the distributions of suprema of solutions to the initial value problem for Equation (1.4). The bounds are presented in a form different from those obtained in the paper [7] and can be more useful in particular situations. In this way, the results of the present paper complement and extend the results of [7].
To make the paper self-contained, in Section 2 we present all important definitions and facts on harmonizable ϕ-sub-Gaussian processes which will be used in the derivation of the main results. We also formulate the result on the conditions for the existence of solutions to (1.4) with a ϕ-sub-Gaussian initial condition (see [2,7] for its derivation). The main result on the bounds for the distributions of suprema of solutions is stated in Section 3. We present examples of processes for which the assumptions of the general result are verified and the bounds are written explicitly. The main result is also specialized to the case of a Gaussian initial condition.

Preliminaries
Since in the paper we consider a partial differential equation with random initial condition given by a real-valued harmonizable ϕ-sub-Gaussian process, in this section we present the necessary definitions and facts concerning such processes.

Harmonizable processes
Harmonizable processes are a natural extension of stationary processes to second-order nonstationary ones. This class of processes allows us to retain the advantages of Fourier analysis. Harmonizable processes were introduced by Loève [12]. Recent developments in this theory are due to Rao [14] and Swift [15], among others.
Definition 2.1 ([12]). A second-order random function X = {X(t), t ∈ R}, EX(t) = 0, is called harmonizable if there exists a second-order random function y = {y(u), u ∈ R}, Ey(u) = 0, such that the covariance Γ_y(t, s) = Ey(t)y(s) has finite variation and
$$X(t) = \int_{\mathbb{R}} e^{itu}\, dy(u), \tag{2.1}$$
where the integral is defined in the mean-square sense.

Remark 2.1. In the definition above, the covariance function Γ_y is of bounded Vitali variation (see [14,15]). This fact guarantees that the integral in (2.1) exists in the Lebesgue sense. The function Γ_y is also called the spectral function or bi-measure of the process X.

Remark 2.2. In what follows, an integral of the type $\int_A f(t,s)\,dg(t,s)$ is understood as a common Lebesgue–Stieltjes integral, that is, the limit of the sums $\sum f(t_i, s_j)\,\Delta_i\Delta_j g(t,s)$, and an integral of the type $\int_A f(t,s)\,|dg(t,s)|$ is understood as the limit of the sums $\sum f(t_i, s_j)\,|\Delta_i\Delta_j g(t,s)|$.
Below we shall focus on real-valued harmonizable processes.
Definition 2.2. A real-valued second-order random function X = {X(t), t ∈ R} is called harmonizable if there exists a real-valued second-order random function y = {y(u), u ∈ R}, Ey(u) = 0, such that
$$X(t) = \int_{-\infty}^{\infty} \sin(tu)\, dy(u) \quad \text{or} \quad X(t) = \int_{-\infty}^{\infty} \cos(tu)\, dy(u),$$
and the covariance function Γ_y(t, s) = Ey(t)y(s) has finite variation. The integral is defined in the mean-square sense.
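A real-valued harmonizable process can be simulated by discretizing the spectral integral. The sketch below is an illustration under the additional assumption (not required by Definition 2.2) that y has independent Gaussian increments whose variances are given by a standard normal spectral measure F; it approximates $X(t) = \int \cos(tu)\, dy(u)$ by a finite sum and compares the empirical covariance with $\int \cos(tu)\cos(su)\, dF(u)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Spectral grid and a hypothetical spectral measure F: standard normal weights.
du = 0.1
u = np.arange(-4.0, 4.0, du)
dF = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi) * du

# y has orthogonal (here: independent Gaussian) increments with variances dF_j,
# so X(t) = sum_j cos(t * u_j) * dy_j approximates the mean-square integral.
n_sim = 20000
dy = rng.standard_normal((n_sim, u.size)) * np.sqrt(dF)

def X(t):
    return dy @ np.cos(t * u)

t, s = 0.7, 1.9
emp = float(np.mean(X(t) * X(s)))                       # empirical covariance
theo = float(np.sum(np.cos(t * u) * np.cos(s * u) * dF))  # ∫ cos(tu) cos(su) dF(u)
print(bool(abs(emp - theo) < 0.05))  # True up to Monte Carlo error
```

Orthogonality of the increments concentrates the bi-measure on the diagonal, which is why the double spectral integral collapses to a single one in this special case.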
Theorem 2.2. A real-valued second-order random function X = {X(t), t ∈ R}, EX(t) = 0, is harmonizable if and only if there exists a covariance function Γ_y(u, v) with finite variation such that X admits a representation as in Definition 2.2. The theorem follows from Theorem 2.1.

ϕ-sub-Gaussian random variables and processes
Here we present some basic facts from the theory of ϕ-sub-Gaussian random variables and processes, as well as some necessary results.
Recall that a continuous even convex function ϕ with ϕ(0) = 0, increasing on [0, ∞), is called an N-function if ϕ(x)/x → 0 as x → 0 and ϕ(x)/x → ∞ as x → ∞. We say that ϕ satisfies Condition Q if ϕ is an N-function such that lim inf_{x→0} ϕ(x)/x² = c > 0, where the case c = ∞ is allowed [4,8].
In what follows we will always deal with N-functions for which Condition Q holds.
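For instance, both ϕ(x) = x²/2 and ϕ(x) = exp{|x|} − |x| − 1, the two functions used in Section 3, satisfy Condition Q, since ϕ(x)/x² → 1/2 as x → 0 in both cases. A quick symbolic check (illustrative only):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Condition Q requires lim inf_{x -> 0} phi(x) / x^2 = c > 0 for an N-function phi.
phi1 = x**2 / 2
phi2 = sp.exp(x) - x - 1   # equals exp(|x|) - |x| - 1 for x > 0

print(sp.limit(phi1 / x**2, x, 0))  # 1/2
print(sp.limit(phi2 / x**2, x, 0))  # 1/2
```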
Examples of N-functions for which Condition Q is satisfied are
$$\varphi(x) = \frac{x^2}{2} \quad \text{and} \quad \varphi(x) = \exp\{|x|\} - |x| - 1, \quad x \in \mathbb{R}. \tag{2.2}$$

Definition 2.3 ([4,8]). Let {Ω, L, P} be a standard probability space. A random variable ζ is said to be ϕ-sub-Gaussian (or to belong to the space Sub_ϕ(Ω)) if Eζ = 0, E exp{λζ} exists for all λ ∈ R, and there exists a constant a > 0 such that the following inequality holds for all λ ∈ R:
$$E \exp\{\lambda\zeta\} \le \exp\{\varphi(\lambda a)\}.$$
The space Sub_ϕ(Ω) is a Banach space with respect to the norm [4,8]
$$\tau_{\varphi}(\zeta) = \inf\{a > 0 : E\exp\{\lambda\zeta\} \le \exp\{\varphi(\lambda a)\} \ \text{for all } \lambda \in \mathbb{R}\},$$
which is called the ϕ-sub-Gaussian standard of the random variable ζ.
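As a simple illustration of Definition 2.3 (not an example from the paper), a Rademacher variable ζ with P(ζ = ±1) = 1/2 is ϕ-sub-Gaussian for ϕ(x) = x²/2 with a = 1, since E exp{λζ} = cosh λ ≤ exp{λ²/2} for all λ. A numerical check over a grid of λ values:

```python
import numpy as np

# E exp(lam * zeta) = cosh(lam) for a Rademacher variable zeta, and
# cosh(lam) <= exp(lam**2 / 2) for all lam (compare the Taylor series termwise),
# so tau_phi(zeta) <= 1 when phi(x) = x**2 / 2 (the classical sub-Gaussian case).
lam = np.linspace(-10, 10, 2001)
mgf = np.cosh(lam)
bound = np.exp(lam**2 / 2)
print(bool(np.all(mgf <= bound)))  # True
```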
Definition 2.4. A family Δ of random variables ζ ∈ Sub_ϕ(Ω) is called strictly ϕ-sub-Gaussian (see [5]) if there exists a constant C_Δ such that for all countable sets I of random variables ζ_i ∈ Δ, i ∈ I, and all λ_i ∈ R, the following inequality holds:
$$\tau_{\varphi}\Big(\sum_{i \in I} \lambda_i \zeta_i\Big) \le C_{\Delta}\Big(E\Big(\sum_{i \in I} \lambda_i \zeta_i\Big)^2\Big)^{1/2}.$$
The constant C_Δ is called the determining constant of the family Δ.
The linear closure of a strictly ϕ-sub-Gaussian family Δ in the space L₂(Ω) is strictly ϕ-sub-Gaussian with the same determining constant [5].
The following example of a strictly ϕ-sub-Gaussian random process is important for our study. The solutions of the partial differential equations considered in the next sections are of the same form as the process in this example.
Example 2.1 ([5]). Let K(t, s) be a deterministic kernel and suppose that the process X = {X(t), t ∈ T} can be represented in the form
$$X(t) = \int_{T} K(t, s)\, d\xi(s),$$
where ξ(t), t ∈ T, is a strictly ϕ-sub-Gaussian random process and the integral is defined in the mean-square sense. Then the process X(t), t ∈ T, is a strictly ϕ-sub-Gaussian random process with the same determining constant.
The notion of an admissible function for the space Sub_ϕ(Ω) (see [2,7]) will be used to state the conditions for the existence of solutions of the partial differential equations considered in the paper and to write down the bounds for the suprema of these solutions.
A characteristic feature of ϕ-sub-Gaussian random variables is the exponential bound for their tail probabilities. For ϕ-sub-Gaussian processes, estimates for their suprema are available in various forms; see, for example, the book [3].
To derive our main results, we shall use the following theorem on the distribution of the supremum of a ϕ-sub-Gaussian random process, proved in the paper [6] (see also [3]).

Theorem 2.3. Let X = {X(t), t ∈ T} be a ϕ-sub-Gaussian process and let ρ_X be the pseudometric generated by X, that is, ρ_X(t, s) = τ_ϕ(X(t) − X(s)), t, s ∈ T. Assume that the pseudometric space (T, ρ_X) is separable, the process X is separable on (T, ρ_X) and ε₀ := sup_{t∈T} τ_ϕ(X(t)) < ∞.
Let r(x), x ≥ 1, be a non-negative, monotone increasing function such that the function r(e^x), x ≥ 0, is convex, and for 0 < v ≤ ε₀ let N(v) denote the smallest number of elements in a v-covering of T, where the covering is formed by closed balls of radius at most v. Then for all λ > 0, 0 < θ < 1 and u > 0 an explicit upper bound for P{sup_{t∈T} X(t) > u} holds, stated in terms of r, ε₀ and the entropy integral introduced below.

Remark 2.3. Integrals of the form
$$\int_0^{\delta} g(N(v))\, dv, \quad 0 < \delta \le \varepsilon_0, \tag{2.8}$$
with g(v), v ≥ 1, a nonnegative nondecreasing function, are called entropy integrals. Entropy characteristics of the parametric set T with respect to the pseudometric ρ_X(t, s) = τ_ϕ(X(t) − X(s)), t, s ∈ T, generated by the process X = {X(t), t ∈ T}, and the rate of growth of the metric massiveness N(v), or the metric entropy H(v) := ln(N(v)), are closely related to the properties of the process X (see [3] for details). The integrals (2.8) play an important role in the study of such properties as boundedness and continuity of sample paths of a process; these integrals appear in estimates for moduli of continuity and for the distribution of the supremum.
General results of this kind for ϕ-sub-Gaussian processes are related to the convergence of the integrals (2.8), where for g(v) one takes Ψ(ln(v)) with Ψ(v) = v/ϕ^{(−1)}(v), v > 0. Theorem 2.3 is more suitable for the case of "moderate" growth of the metric entropy and can lead to improved upper bounds for the distribution of the supremum of the process, in comparison with the more general inequalities involving integrals based on the function Ψ (see [3]).
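To see why "moderate" entropy growth matters, consider an illustrative setting (with hypothetical constants, not from the paper): T = [0,1]² with the max-metric and a power-type modulus σ(h) = C h^β. Then the covering numbers grow only polynomially as v → 0, so the entropy integral (2.8) with g(v) = ln v is finite:

```python
import numpy as np

# Hypothetical setting: T = [0,1] x [0,1] with the max-metric and
# sigma(h) = C * h**beta, so sigma^{-1}(v) = (v / C)**(1/beta) and the
# covering number satisfies N(v) <= (1 / sigma^{-1}(v) + 1)**2.
C, beta = 1.0, 0.5

def N(v):
    h = (v / C) ** (1.0 / beta)      # sigma^{-1}(v)
    return (1.0 / h + 1.0) ** 2

def entropy_integral(delta, g=np.log, n=100000):
    # integral_0^delta g(N(v)) dv, approximated by a midpoint rule
    v = (np.arange(n) + 0.5) * delta / n
    return float(np.sum(g(N(v))) * delta / n)

I1 = entropy_integral(0.5)
print(bool(np.isfinite(I1) and I1 > 0))  # True: ln N(v) ~ -(2/beta) ln v is integrable at 0
```

The logarithmic singularity of ln N(v) at v = 0 is integrable, which is exactly the "moderate growth" regime in which Theorem 2.3 applies.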
Entropy methods are also used in modern approximation theory. Theorem 2.3 was applied, for example, in [6] for developing uniform approximation schemes for ϕ-sub-Gaussian processes.

Solutions of linear odd-order heat-type equations with random initial conditions
Let us consider the linear equation
$$\frac{\partial U(t,x)}{\partial t} = \sum_{k=1}^{N} a_k \frac{\partial^{2k+1} U(t,x)}{\partial x^{2k+1}}, \quad t > 0,\ x \in \mathbb{R}, \tag{2.9}$$
subject to the random initial condition
$$U(0, x) = \eta(x), \quad x \in \mathbb{R}, \tag{2.10}$$
where η = {η(x), x ∈ R} is a random process and {a_k}_{k=1}^{N} are some constants. The next theorem gives conditions for the existence of solutions to the equation above with a ϕ-sub-Gaussian initial condition η(x) (see [2,7]).
Here the corresponding integrals converge uniformly in probability for |x| ≤ A and 0 ≤ t ≤ T for all A > 0, T > 0. This guarantees that the derivatives of orders s = 1, 2, …, 2N + 1 of the solution U(t, x) given by (2.12) exist with probability one. In this sense we can treat U(t, x) as a classical solution; we refer to [2] for more details. Here α is a constant such that α > 1 − 1/p (see [2]).
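The spectral kernel underlying representations of the type (2.12) can be checked directly in the simplest case. For N = 1 and a₁ = 1 the equation becomes the Airy-type equation U_t = U_xxx, and the plane-wave kernel cos(λx − tλ³) is a classical solution for every fixed λ; solutions of the form (2.12) are mean-square superpositions of such kernels against dy(λ). A symbolic verification (illustrative; the kernel form here is an assumption consistent with this special case):

```python
import sympy as sp

t, x, lam = sp.symbols('t x lam', real=True)

# For the odd-order equation U_t = U_xxx (the case N = 1, a_1 = 1),
# each spectral component cos(lam*x - t*lam**3) is a classical solution.
I = sp.cos(lam * x - t * lam**3)

residual = sp.diff(I, t) - sp.diff(I, x, 3)
print(sp.simplify(residual))  # 0
```

Differentiating under the integral sign, justified by the uniform convergence stated above, then shows that the superposition solves the equation as well.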

Some auxiliary estimates
Let us consider a separable metric space (T, d), where T = {(t₁, t₂) : a_i ≤ t_i ≤ b_i, i = 1, 2} and d(t, s) = max_{i=1,2} |t_i − s_i|.

Theorem 3.1. Let X = {X(t), t ∈ T} be a separable ϕ-sub-Gaussian random process such that ε₀ = sup_{t∈T} τ_ϕ(X(t)) < ∞ and
$$\sup_{d(t,s)\le h,\ t,s\in T} \tau_{\varphi}(X(t) - X(s)) \le \sigma(h), \tag{3.1}$$
where {σ(h), 0 < h ≤ max_{i=1,2} |b_i − a_i|} is a monotonically increasing continuous function. Assume also that the entropy-type condition (3.2) on σ and r holds, where γ₀ = σ(max_{i=1,2} |b_i − a_i|) and r(x), x ≥ 1, is as defined in Theorem 2.3. Then for all 0 < θ < 1 and u > 0 the upper bound (3.3) for the distribution of the supremum of X holds.

Proof. To prove this theorem, we apply Theorem 2.3 in the case of the separable metric space (T, d). In particular, condition (3.1) means that ρ_X(t, s) = τ_ϕ(X(t) − X(s)) ≤ σ(d(t, s)) for all t, s ∈ T. From the fact that the process X is separable on (T, ρ_X) and the function {σ(h), 0 < h ≤ max_{i=1,2} |b_i − a_i|} is monotonically increasing and continuous, we get that for ε ≤ γ₀ the smallest number of elements in an ε-covering of the pseudometric space (T, ρ_X) can be estimated by the smallest number of elements in a σ^{(−1)}(ε)-covering of the metric space (T, d):
$$N_{\rho_X}(\varepsilon) \le N_d\big(\sigma^{(-1)}(\varepsilon)\big).$$
Hence, from condition (3.2) we get that the corresponding entropy integral is finite, that is, the conditions of Theorem 2.3 are satisfied for the process X. Finally, taking into account the estimates above and the properties of the function r, we derive the estimate (3.3) for the distribution of the supremum of the process X for all 0 < θ < 1 and u > 0.

As can be seen from Theorem 3.1, it is crucial to guarantee some kind of continuity of X on T in the form (3.1), that is, with respect to the norm τ_ϕ induced by the process X itself. Fulfilment of condition (3.1) enables us to write down the upper bound (3.3) for the distribution of the supremum of a ϕ-sub-Gaussian process X = {X(t), t ∈ T} defined on a separable metric space (T, d).
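The covering-number reduction used in the proof is easy to make concrete. On a rectangle with the max-metric, closed balls of radius h are squares of side 2h, so N_d(h) ≤ ∏_i (⌊|b_i − a_i|/(2h)⌋ + 1), and composing with σ^{(−1)} bounds the pseudometric covering number. A small numeric sketch (with a hypothetical rectangle and a hypothetical modulus σ(h) = h^{1/2}):

```python
import math

# Covering numbers of T = [a1,b1] x [a2,b2] under the max-metric d:
# closed balls of radius h are squares of side 2h, hence
# N_d(h) <= prod_i (floor(|b_i - a_i| / (2 * h)) + 1).
def N_d(h, sides=(1.0, 2.0)):
    return math.prod(math.floor(s / (2 * h)) + 1 for s in sides)

# Hypothetical modulus sigma(h) = h**0.5 (increasing, continuous), so
# sigma^{-1}(eps) = eps**2 and the pseudometric covering number obeys
# N_rho(eps) <= N_d(sigma^{-1}(eps)).
def N_rho_bound(eps):
    return N_d(eps ** 2)

print(N_d(0.25))         # 3 * 5 = 15 balls suffice at radius 0.25
print(N_rho_bound(0.5))  # 15: same count, since sigma^{-1}(0.5) = 0.25
```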
Below we present, as a separate theorem, a very useful result giving conditions under which estimate (3.1) holds for the field {U(t, x), a ≤ t ≤ b, c ≤ x ≤ d} representing the solution to (2.9)-(2.10).
Theorem 3.2. Let y = {y(u), u ∈ R} be a strictly ϕ-sub-Gaussian random process with determining constant C_y, and suppose that U(t, x) exists and is continuous with probability one (this condition holds if the conditions of Theorem 2.4 hold). Let Ey(t)y(s) = Γ_y(t, s). Assume that {Z(u), u ≥ 0} is an admissible function for the space Sub_ϕ(Ω). If the integral (3.4) converges, then there exists a function σ(h) of the form (3.5) for which condition (3.1) holds.

This result was obtained in the paper [7] as an intermediate statement in the course of the proof of Theorem 5.1. To make the present paper self-contained, we present the main steps of the proof in the Appendix.

On the distribution of the supremum of the solution to the problem (2.9)-(2.10)
Now we have all the necessary tools to derive the estimate for the distribution of the supremum of the field U(t, x) representing the solution to (2.9)-(2.10).

Theorem 3.3. Let y = {y(u), u ∈ R} be a strictly ϕ-sub-Gaussian random process with determining constant C_y, and let U(t, x) = ∫_{−∞}^{∞} I(t, x, λ) dy(λ), where I(t, x, λ) is given in Theorem 2.4, a ≤ t ≤ b, c ≤ x ≤ d. Assume that for U(t, x) the conditions of Theorem 3.2 hold. Let r(x), x ≥ 1, be a non-negative, monotone increasing function such that the function r(e^x), x ≥ 0, is convex, and let condition (3.2) be satisfied for σ(h) given by (3.5).
Then for all 0 < θ < 1 and u > 0 an explicit upper bound holds for P{sup_{a≤t≤b, c≤x≤d} U(t, x) > u}.

Proof. The assertion of this theorem follows from Theorems 3.1 and 3.2. Since the conditions of Theorem 3.2 are satisfied, there exists a function σ(h) of the form (3.5) for which condition (3.1) holds, and for ε₀ the upper bound (A.1) holds (see Appendix A). Since conditions (3.1) and (3.2) of Theorem 3.1 also hold true, the final estimate follows directly.
Remark 3.1. The derivation of our main result is based on Theorem 2.3, and for this reason we present the bounds for the distribution of the supremum of the process U(t, x) in a form different from that obtained in the paper [7]. This form of the bounds can be more useful in particular situations, making it possible to calculate explicit expressions for the bounds.

Now we specialize the statement of Theorem 3.3 to particular choices of the admissible function Z and the function ϕ.
Now it is necessary to make the following steps: 1. to check the fulfilment of (3.4) with a particular function Z, admissible for a given ϕ; 2. to compute σ(h) by formula (3.5) and to verify condition (3.2) for a chosen function r.

Example 3.1. Consider ϕ(x) = x²/2, x ∈ R, and take the admissible function Z(u) = |u|^{1/ρ}, u > 0. In this case the integral (3.4) converges if the integral (3.12) converges. Note that for the existence of the solution U(t, x) we have to impose condition (2.11), which for the admissible function Z(u) = |u|^{1/ρ} takes the form (3.13) and implies the fulfilment of (3.12). Therefore, C_Z² is well defined. If (3.13) holds true, then we can define σ(h) by means of formula (3.5), and, for our choice of Z, we obtain a power-type modulus σ(h) of the form (3.14) with the constant C = 2C_y C_Z. Therefore, in view of Theorem 3.2, condition (3.6) holds with σ(h) of the form (3.14); that is, we have Hölder continuity of the sample paths of the solution U(t, x).
The choice of the function ϕ in Example 3.1 is motivated by the fact that the corresponding class of random processes is the natural generalization of Gaussian processes. This example is rather simple and, at the same time, very illustrative and instructive. Therefore, the derivations above are worth summarizing as a separate statement.
Example 3.2. Consider ϕ(x) = exp{|x|} − |x| − 1, x ∈ R. Then ϕ*(x) = (|x| + 1) ln(|x| + 1) − |x|, x ∈ R. Let us take the function Z(u) = ln^α(u + 1), u ≥ 0, α > 1, as an admissible function for the space Sub_ϕ(Ω). In this case, the integral (3.4) converges provided that the corresponding integral involving the spectral function converges. Assume that ((d − c)/2) e^α > 1 and ((b − a)/2) e^α > 1; that is, we choose α sufficiently large. In our case, the function r(v) = ln v, v ≥ 1, satisfies the conditions stated in Theorem 2.3 and is convenient for the estimation of Î_r(δ). Since r^{(−1)}(v) = e^v, v ≥ 0, for θ ∈ (0, 1) such that θε₀ < γ₀ we finally obtain, for all u > 0, an explicit upper bound for P{sup_{a≤t≤b, c≤x≤d} U(t, x) > u}.

Appendix A

The derivation of the bound (3.6) for the process U(t, x) is based on the particular structure of this process and on the use of property (2.4) of the admissible function Z. First note that the process U(t, x) is separable, since U(t, x) is continuous with probability one. The process U(t, x) is strictly ϕ-sub-Gaussian with determining constant C_y, and therefore we can write
$$\tau_{\varphi}\big(U(t, x) - U(t_1, x_1)\big) \le C_y \Big( E\big(U(t, x) - U(t_1, x_1)\big)^2 \Big)^{1/2}.$$
In the same way we can estimate ε₀ = sup_{(t,x)∈D} τ_ϕ(U(t, x)), where D = {a ≤ t ≤ b, c ≤ x ≤ d}. We can write |I(t, x, λ) − I(t₁, x₁, λ)| = |cos A − cos B|, where
$$A = \lambda x + t \sum_{k=1}^{N} (-1)^k a_k \lambda^{2k+1}, \qquad B = \lambda x_1 + t_1 \sum_{k=1}^{N} (-1)^k a_k \lambda^{2k+1}.$$
Thus
$$|I(t, x, \lambda) - I(t_1, x_1, \lambda)| = 2\,\Big|\sin\frac{A + B}{2}\Big|\,\Big|\sin\frac{A - B}{2}\Big| \le 2\,\Big|\sin\frac{(x - x_1)\lambda + (t - t_1)\sum_{k=1}^{N}(-1)^k a_k \lambda^{2k+1}}{2}\Big|.$$