A limit theorem for a class of stationary increments Lévy moving average processes with multiple singularities

In this paper we present some new limit theorems for power variations of stationary increments Lévy-driven moving average processes. Recently, such asymptotic results have been investigated in [Ann. Probab. 45(6B) (2017), 4477--4528; Festschrift for Bernt {\O}ksendal, Stochastics 81(1) (2017), 360--383] under the assumption that the kernel function potentially exhibits singular behaviour at $0$. The aim of this work is to demonstrate how some of these results change when the kernel function has multiple singularity points. Our paper is also related to the article [Stoch. Process. Appl. 125(2) (2014), 653--677], which studied the same mathematical question for the class of Brownian semi-stationary models.


Introduction
In recent years limit theorems and statistical inference for high frequency observations of stochastic processes have received a great deal of attention. The most prominent class of high frequency statistics are power variations, which have proved to be of immense importance for the analysis of the fine structure of an underlying stochastic process. The asymptotic theory for power variations and related statistics has been intensively studied in the setting of Itô semimartingales, fractional Brownian motion and Brownian semi-stationary processes, to name just a few; see for example [2-4, 7, 9] among many others.
In the recent works [6,5] power variations of stationary increments Lévy moving average processes have been investigated in detail. These are continuous-time stochastic processes $(X_t)_{t\geq 0}$, defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, that are given by
$$
X_t = \int_{-\infty}^{t} \big\{ g(t-s) - g_0(-s) \big\}\, dL_s, \qquad (1.1)
$$
where $L=(L_t)_{t\in\mathbb{R}}$ is a symmetric Lévy process on $\mathbb{R}$ with $L_0=0$ and without Gaussian component. Moreover, $g, g_0: \mathbb{R} \to \mathbb{R}$ are deterministic functions vanishing on $(-\infty,0)$. The most prominent subclasses include Lévy moving average processes, which correspond to the setting $g_0=0$, and the linear fractional stable motion, which is obtained by taking $g(s)=g_0(s)=s_+^{\alpha}$ and letting $L$ be a symmetric $\beta$-stable Lévy process with $\beta\in(0,2)$. The latter is a self-similar process with index $H=\alpha+1/\beta$; see [12].
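As a sanity check on the index $H=\alpha+1/\beta$, the self-similarity of the linear fractional stable motion can be verified by a standard substitution; the following is a sketch, using the $1/\beta$-self-similarity of the $\beta$-stable driver:

```latex
X_{ct} = \int_{\mathbb{R}} \big\{ (ct-s)_+^{\alpha} - (-s)_+^{\alpha} \big\}\, dL_s
       \overset{s = cu}{=} \int_{\mathbb{R}} c^{\alpha}\big\{ (t-u)_+^{\alpha} - (-u)_+^{\alpha} \big\}\, dL_{cu}
       \overset{d}{=} c^{\alpha+1/\beta} X_t ,
```

since $(L_{cu})_{u\in\mathbb{R}} \overset{d}{=} (c^{1/\beta} L_u)_{u\in\mathbb{R}}$ for a symmetric $\beta$-stable Lévy process.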
We introduce the $k$th order increments $\Delta_{i,k}^n X$ of $X$, $k\in\mathbb{N}$, that are defined by
$$
\Delta_{i,k}^n X := \sum_{j=0}^{k} (-1)^j \binom{k}{j} X_{(i-j)/n}.
$$
For example, we have that $\Delta_{i,1}^n X = X_{i/n} - X_{(i-1)/n}$ and $\Delta_{i,2}^n X = X_{i/n} - 2X_{(i-1)/n} + X_{(i-2)/n}$. The main statistic of interest is the power variation computed on the basis of $k$th order increments:
$$
V(X,p;k)_n := \sum_{i=k}^{n} |\Delta_{i,k}^n X|^p, \qquad p>0. \qquad (1.3)
$$
A variety of asymptotic results has been shown for the statistic $V(X,p;k)_n$ in [6,5]. The mode of convergence and the possible limits heavily depend on the interplay between the power $p$, the form of the kernel function $g$ and the Blumenthal--Getoor index of $L$.
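To fix ideas, the statistic (1.3) is straightforward to compute from a discretely sampled path. The following sketch (the function names are ours, not from [6,5]) uses the fact that the $k$th order increment equals an iterated first-order difference:

```python
import numpy as np

def kth_increments(x, k):
    """k-th order increments of a sampled path: iterated first differences,
    equivalently Delta_{i,k} x = sum_{j=0}^k (-1)^j binom(k,j) x_{i-j}."""
    d = np.asarray(x, dtype=float)
    for _ in range(k):
        d = d[1:] - d[:-1]
    return d

def power_variation(x, p, k):
    """V(X, p; k)_n = sum_{i=k}^n |Delta^n_{i,k} X|^p for a path sampled at i/n, i = 0..n."""
    return float(np.sum(np.abs(kth_increments(x, k)) ** p))

# sanity check: second-order increments annihilate linear trends
path = np.linspace(0.0, 1.0, 101)     # X_{i/n} = i/n with n = 100
print(power_variation(path, 2.0, 1))  # n * (1/n)^2 = 1/n = 0.01 up to rounding
print(power_variation(path, 2.0, 2))  # essentially 0: second differences vanish
```

The iterated-difference form avoids computing binomial coefficients explicitly and matches the definition above term by term.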
Assumption (A-log): In addition to (A) suppose that
Intuitively speaking, Assumption (A) says that $g^{(k)}$ may have a singularity at $0$ when $\alpha$ is small, but that it is smooth outside of $0$. The theorem below has been proved in [6,5]. We recall that a sequence of $\mathbb{R}^d$-valued random variables $(Y_n)_{n\geq 1}$ is said to converge stably in law to a random variable $Y$, defined on an extension of the original probability space $(\Omega, \mathcal{F}, \mathbb{P})$, whenever the joint convergence in distribution $(Y_n, Z) \stackrel{d}{\longrightarrow} (Y, Z)$ holds for any $\mathcal{F}$-measurable random variable $Z$; in this case we use the notation $Y_n \stackrel{L-s}{\longrightarrow} Y$. We refer to [1,11] for a detailed exposition of stable convergence.
Theorem 1.1 ([6,5]). Suppose that Assumption (A) holds, the Blumenthal--Getoor index satisfies $\beta < 2$ and $p > \beta$. If $w=1$, assume in addition that (A-log) holds. Then we obtain the following cases:
(i) When $\alpha < k - 1/p$ we have the stable convergence
where $(T_m)_{m\geq 1}$ denote the jump times of $L$, $(U_m)_{m\geq 1}$ is a sequence of i.i.d. $\mathcal{U}(0,1)$-distributed random variables independent of $L$, and the function $h_k$ is defined by
(ii) When $\alpha = k - 1/p$ and additionally $1/p + 1/w > 1$, we have
(1.8)
We remark that the first order asymptotic theory of [6, Theorem 1.1] includes two more regimes: an ergodic type limit theorem in the setting $p < \beta$, $\alpha < k - 1/\beta$, and convergence in probability to a random integral in the setting $p \geq 1$, $\alpha > k - 1/\max\{p,\beta\}$. However, in this paper we concentrate on the results of Theorem 1.1, which are quite non-standard in the literature. More specifically, our aim is to extend the theory of Theorem 1.1 to kernels $g$ that exhibit multiple singularities. We call a point $x \in \mathbb{R}_+$ a singularity point when the $k$th derivative $g^{(k)}$ of $g$ explodes at $x$. Note that under Assumption (A) and the condition $\alpha \leq k - 1/p$ the function $g$ has only one singularity point, at $x=0$. In practical applications a singularity point $x \in \mathbb{R}_+$ leads to a strong feedback effect stemming from the past jumps around time $t-x$.
Such effects have been discussed in the context of turbulence modelling in [8].
We will show that the limits in Theorem 1.1(i) and (ii) are affected by the presence of multiple singularity points. More precisely, we will see that the increments $\Delta_{i,k}^n X$ can be heavily influenced by jumps of $L$ that happened in the past, with the time delay determined by the singularity points of $g$. The obtained result is similar in spirit to the work [8], which studied the quadratic variation of Brownian semi-stationary processes under multiple singularities of the kernel $g$. Furthermore, we will prove that in general the stable convergence in Theorem 1.1(i) only holds along a subsequence.
The paper is structured as follows. Section 2 presents the main results of the article. Proofs are collected in Section 3.

Main results
We consider stationary increments Lévy moving average processes as defined at (1.1) and recall that the driving motion $L$ is a pure jump Lévy process with Lévy measure $\nu$. Now, we introduce the condition on the kernel function $g$:
Assumption (B): For some $w \in (0,2]$, $\limsup_{t\to\infty} \nu(\{x: |x| \geq t\})\, t^w < \infty$ and $g - g_0$ is a bounded function in $L^w(\mathbb{R}_+)$. Furthermore, there exist points $0 = \theta_0 < \theta_1 < \cdots < \theta_l$ such that the following properties hold:
(i) $g(t) \sim c_0 t^{\alpha_0}$ as $t \downarrow 0$ for some $\alpha_0 > 0$ and $c_0 \neq 0$.
Let us give some remarks on Assumption (B). First of all, conditions (B)(i) and (B)(ii), which are direct extensions of (1.5), mean that for small powers $\alpha_z > 0$ the points $\theta_z$ are singularities of $g$ in the sense that $g^{(k)}(\theta_z)$ does not exist. On the other hand, condition (B)(iii) states that there exist no further singularities. The parameter $w$ is by no means unique. It simultaneously describes the tail behaviour of the Lévy measure $\nu$ and the integrability of the function $|g^{(k)}|$, which exhibit a trade-off. When $L$ is $\beta$-stable we always take $w = \beta$. Furthermore, Assumption (B) guarantees the existence of $X_t$ for all $t \geq 0$. Indeed, it follows from [10, Theorem 7] that the process $X$ is well-defined if and only if for all $t \geq 0,
where $f_t(s) = g(t+s) - g_0(s)$. By adding and subtracting $g$ to $f_t$, it follows from Assumption (B) and the mean value theorem that $f_t$ is a bounded function in $L^w(\mathbb{R}_+)$. For all $\epsilon > 0$, Assumption (B) implies that
Remark 2.1 (Toy example). Recall the following well-known result about the power variation of a pure jump Lévy process $L$: for any $k \geq 1$ and any $p > \beta$,
Let us now consider a simple stationary increments Lévy moving average process $X$ with $g_0 = 0$ and $g(x) = \mathbf{1}_{[0,1]}(x)$. In this case we may call the points $\theta_0 = 0$ and $\theta_1 = 1$ the singularities of $g$, although they do not precisely correspond to conditions (B)(i) and (B)(ii), and we observe that $X_t = L_t - L_{t-1}$. Hence, we obtain the convergence in probability
for any $k \geq 1$ and any $p > \beta$. This result demonstrates that even in the simplest setting multiple singularities lead to a different limit.
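Under this reading of Remark 2.1, the limit collects the jumps from both the interval $(0,1]$ and the shifted interval $(-1,0]$: a jump of $L$ at time $s$ affects the increments of $X_t = L_t - L_{t-1}$ once when $t$ crosses $s$ and once when $t$ crosses $s+1$. A small numerical sketch (with hand-picked, illustrative jump times and sizes) makes this visible for $k=1$:

```python
import numpy as np

# illustrative compound Poisson jumps on (-1, 1]; times avoid the grid and each other
times = np.array([-0.7151, -0.3033, 0.2017, 0.5503, 0.9201])
sizes = np.array([1.0, -2.0, 0.5, 1.5, -1.0])

def X_path(n):
    """X_t = L_t - L_{t-1} = sum of the jumps in (t - 1, t], sampled on t = i/n, i = 0..n."""
    t = np.arange(n + 1) / n
    return np.array([sizes[(times > ti - 1.0) & (times <= ti)].sum() for ti in t])

p, n = 2.0, 1000
V = float(np.sum(np.abs(np.diff(X_path(n))) ** p))
print(V)                                  # every jump in (-1, 1] contributes once
print(float(np.sum(np.abs(sizes) ** p)))  # the two-sided jump sum, here 8.5
```

For large $n$ each jump falls into its own increment, so the power variation approaches the sum of $|\Delta L_s|^p$ over the whole window $(-1,1]$, not just over $(0,1]$.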
It turns out that only the minimal powers among $\{\alpha_0, \ldots, \alpha_l\}$ determine the asymptotic behaviour of the statistic $V(X,p;k)_n$. Thus, we define $\alpha := \min\{\alpha_0, \ldots, \alpha_l\}$ and
Furthermore, we introduce the notation $h_{k,0} := h_k$ and
In the main result below we consider a subsequence $(n_j)_{j\in\mathbb{N}}$ such that the following condition holds:
where $\{x\}$ denotes the fractional part of $x \in \mathbb{R}$. Obviously, such a subsequence always exists, since $(\{n\theta_z\})_{n\geq 1}$ is a bounded sequence. Sometimes we will require a stronger condition, which is analogous to Assumption (A-log):
Assumption (B-log): Condition (B) holds and we have that
The main result of the paper is the following theorem.
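The subsequence condition asks for the fractional parts $\{n_j \theta_z\}$ to converge; by boundedness such subsequences always exist, although the full sequence need not converge when $\theta_z$ is irrational. A quick numerical illustration (the particular $\theta$ and target limit are arbitrary choices of ours):

```python
import numpy as np

theta = np.sqrt(2.0)   # an irrational singularity location (illustrative)
N = 20000
frac = (np.arange(1, N + 1) * theta) % 1.0   # fractional parts {n * theta}, n = 1..N

# the full sequence {n theta} equidistributes in [0,1) and hence does not converge ...
spread = frac[:100].max() - frac[:100].min()

# ... but we can extract n_j with {n_j theta} -> eta for, say, eta = 1/2,
# by recording successively closer visits to the target
eta, subseq, best = 0.5, [], np.inf
for n in range(1, N + 1):
    gap = abs(frac[n - 1] - eta)
    if gap < best:
        best = gap
        subseq.append(n)

print(spread)             # close to 1: no convergence along the full sequence
print(subseq[-3:], best)  # tail of a subsequence along which {n_j theta} -> 1/2
```

The same greedy extraction works for any target $\eta \in [0,1)$, which is the freedom reflected in the limit of Theorem 2.2(i).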
We remark that the stable convergence in Theorem 2.2(i) only holds along the subsequence $(n_j)_{j\geq 1}$, as is seen from the form of the limit in (2.5), which depends on $(\eta_z)$. The original statistic $n^{\alpha p} V(X,p;k)_n$ is tight, but it does not converge unless $\theta_z \in \mathbb{N}$ for all $z \in A$. On the other hand, in Theorem 2.2(ii) we do not need to pass to a subsequence.
Notice that the interval $[-\theta_z, 1-\theta_z]$, which appears in Theorem 2.2, is the set $[0,1]$ shifted by $\theta_z$ to the left. Given the discussion of Remark 2.1, such a shift in the limit is not really surprising. We recall that a similar phenomenon has been discovered in [8] in the context of Brownian semi-stationary processes. These are stochastic processes of the form
$$
Y_t = \int_{-\infty}^{t} g(t-s)\, \sigma_s\, dW_s,
$$
where $W$ is a two-sided Brownian motion and $(\sigma_t)_{t\in\mathbb{R}}$ is a càdlàg process. When the kernel function $g$ satisfies conditions (B)(i) and (B)(ii) along with some further assumptions, which in particular ensure the existence of $Y_t$, the authors have shown the following convergence in probability (see [8, Theorem 3.2]):
and the probability weights $(\pi_z)_{z\in A}$ are given by
Hence, we observe the same shift phenomenon in the integration region as in Theorem 2.2.

Proofs
Throughout this section all positive constants are denoted by $C$, although they may change from line to line. We will divide the proof of Theorem 2.2 into several steps. First, we will show the statements (2.5) and (2.6) for a compound Poisson process. In the second step we will decompose the jump measure of $L$ into jumps that are larger than $\varepsilon$ and jumps that are smaller than $\varepsilon$. The big jumps form a compound Poisson process, and hence the claim follows from the first step. Finally, we prove the negligibility of the small jumps as $\varepsilon \to 0$.
We start with an important proposition.

(3.1)
where $\{x\}$ denotes the vector of fractional parts of $x \in \mathbb{R}^d$ and $x + a$, $a \in \mathbb{R}$, denotes componentwise addition. Here $U = (U_1, \ldots, U_d)$ consists of i.i.d. $\mathcal{U}(0,1)$-distributed random variables defined on an extension of the space $(\Omega, \mathcal{F}, \mathbb{P})$ and independent of $\mathcal{F}$.
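The proposition rests on an equidistribution effect for fractional parts: for an irrational time $x$, the sequence $\{nx\}$ behaves in the limit like a uniform draw on $[0,1)$. A numerical illustration in that spirit (this is Weyl's equidistribution theorem; the particular $x$ is an arbitrary choice):

```python
import numpy as np

x = (1.0 + np.sqrt(5.0)) / 2.0           # golden ratio: a badly approximable irrational
N = 50000
frac = (np.arange(1, N + 1) * x) % 1.0   # fractional parts {n x}, n = 1..N

# Weyl equidistribution: the empirical distribution of {n x} approaches U(0,1)
mean = frac.mean()
ecdf_at = np.linspace(0.1, 0.9, 9)
ecdf = np.array([(frac <= t).mean() for t in ecdf_at])

print(mean)                            # near 1/2
print(np.max(np.abs(ecdf - ecdf_at)))  # near 0: ECDF matches the uniform CDF
```

In the proposition the times are the random jump times $T_m$, and the stable limit $U$ captures exactly this asymptotic uniformity, jointly over coordinates.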
Proof. We first show the stable convergence
This statement has already been shown in [6, Lemma 4.1], but we reproduce its proof for completeness. Let $f: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ be a $C^1$-function which vanishes outside some closed ball in $A \times \mathbb{R}^d$. We claim that there exists a finite constant $K > 0$ such that for all $\rho > 0$
Hence, we conclude that
Using that $A$ is convex and open, we deduce by the mean value theorem that there exist a positive constant $K$ and a compact set
Thus, $D_\rho \leq K\rho \int_{(0,1]^d} \int_{\mathbb{R}^d} \mathbf{1}_B(x,u)\, dx\, du$, which shows (3.2). Now, we are ready to prove the statement (3.1). By (3.2) and condition (2.4) we conclude that
Next, consider the map $f$ defined by
This map is discontinuous exactly at those points $x, y_1, \ldots, y_{l'}$ for which $x_j + y_i \in \mathbb{Z}$ for some $i \in \{1, \ldots, l'\}$ and some $j \in \{1, \ldots, d\}$. Note that the probability that the limiting variable $(U, \eta_z)_{z\in A}$ lies in the latter set is $0$. Hence, it follows from the continuous mapping theorem for stable convergence that
Now, we introduce the notation

(3.5)
and observe the identity
The next lemma presents some estimates for the function $g_{i,n}$. Its proof is a straightforward consequence of Assumption (B) and a Taylor expansion.
(v) For each $\varepsilon > 0$ it holds that
Furthermore, similar estimates hold for $z = 0$ with the obvious adjustments accounting for the fact that $g$ and $h_{k,0}$ both vanish on $(-\infty, 0)$.

Proof of Theorem 2.2 in the compound Poisson case
In this subsection we assume that $L$ is a compound Poisson process. Recall that $(T_m)_{m\geq 1}$ denotes the jump times of $L$. Let $\varepsilon > 0$ and consider $n_j \in \mathbb{N}$ such that $\varepsilon n_j > 4k$. Define the set
Roughly speaking, on the set $\Omega_\varepsilon$ the jump times in $[-\theta_l, 1]$ are well separated, their increments lie outside a small neighbourhood of $\theta_z - \theta_{z'}$, and there are no jumps around the fixed points $-\theta_z$ and $1-\theta_z$. In particular, it obviously holds that $\mathbb{P}(\Omega_\varepsilon) \to 1$ as $\varepsilon \to 0$.
Throughout the proof we assume without loss of generality that $0 \in A$. Now, we introduce a decomposition which is central for the proof. Recalling the definition of $g_{i,n}$ at (3.5), we observe the identity
It turns out that the first term $\sum_{z\in A} M_{i,n,\varepsilon,z}$ is dominating, while the other two are negligible.

Main terms in Theorem 2.2(i)
In this subsection we consider the dominating term in the decomposition (3.6). We want to prove that, on $\Omega_\varepsilon$, as $j \to \infty$,
where the limit has been introduced in (2.5). Let us fix an index $z \in A$. Then, on $\Omega_\varepsilon$, for each jump time $T_m \in (-\theta_z, 1-\theta_z]$ there exists a unique random variable $i_{m,z} \in \mathbb{N}$ such that
We also observe the following implication, which follows directly from the definition of the set $\Omega_\varepsilon$:
Indeed, this is a consequence of the definition of the term $M_{i,n,\varepsilon,z}$ and the statement
which holds on $\Omega_\varepsilon$. Hence, we conclude that
where $v_m^z$ are random variables taking values in $\{-2, -1, 0\}$ that are measurable with respect to $T_m$. If $z = 0$ then the sum above is one-sided, i.e. it runs from $u = 0$ to $\lfloor n\varepsilon \rfloor$, cf. [6, Eq. (4.2)]. Next, we observe the identity
Due to Assumption (B), we can write $g(x) = c_z |x - \theta_z|^{\alpha} f(x)$ with $f(x) \to 1$ as $x \to \theta_z$, for any $z \in A$ (for $\theta_0 = 0$ we need to replace $|x|^{\alpha}$ by $x_+^{\alpha}$). This allows us to decompose, for any $m \in \mathbb{N}$, $0 \leq r \leq k$ and $z \in A$,
Since $f(x) \to 1$ as $x \to \theta_z$, we find that for any $d \in \mathbb{N}$
which holds due to condition (2.4), the decomposition (3.9) and Proposition 3.1 (for $\theta_0 = 0$ we again need to replace $|x|^{\alpha}$ by $x_+^{\alpha}$). Hence, by the continuous mapping theorem for stable convergence we deduce that
From (3.10) and the properties of stable convergence we conclude that
Applying a monotone convergence argument, we deduce the almost sure convergence
where the second sum on the right-hand side is finite, since $|h_{k,z}(x)| \leq C|x|^{\alpha - k}$ for large enough $|x|$ and all $z \in A$, and $\alpha < k - 1/p$. In view of (3.11) and (3.12), we are left to prove the convergence
on $\Omega_\varepsilon$. Set $K_d = \sum_{m > d:\, T_m \in (-\theta_z, 1-\theta_z]} |\Delta L_{T_m}|^p$ and observe that $K_d \to 0$ as $d \to \infty$, since $L$ is a compound Poisson process. Due to Lemma 3.2 we conclude that $|n^{\alpha} g_{i,n}(x)| \leq C \min\{1, |i/n - x|^{\alpha - k}\}$ and thus
for all $z \in A$, and the latter converges to $0$ almost surely as $d \to \infty$, because $\alpha < k - 1/p$. Consequently, we have shown (3.7).

Main terms in Theorem 2.2(ii)
We start with a simple lemma.
Proof. Due to the assumption of the lemma, $(a_i)_{i\in\mathbb{N}}$ is a bounded sequence and for each $\epsilon > 0$ there exists an $N = N(\epsilon)$ with
It obviously holds that $\lim_{n\to\infty} \sum_{i=1}^{cn} i^{-1} / \log(n) = 1$. On the other hand, we obtain that
Since $\epsilon > 0$ is arbitrary, we conclude the statement of Lemma 3.3.
Now, we will again use the decomposition (3.8), which holds on $\Omega_\varepsilon$, and treat each term $V_{n,\varepsilon,z}$ separately. We consider $z \geq 1$ and will show that, as $n \to \infty$,
for any $m \in \mathbb{N}$. Let us first consider the case $|u| \geq k$. Recall that we have assumed that $f_z(x) = g(x)/|x - \theta_z|^{\alpha}$ lies in $C^k((\theta_z - \delta, \theta_z + \delta))$ for any $\delta < \max_{1\leq j\leq l}(\theta_j - \theta_{j-1})$. Now, due to the identity (3.9) and a Taylor expansion of order $k$, we obtain the bound (cf.
for any $\varepsilon < \max_{1\leq j\leq l}(\theta_j - \theta_{j-1})$. Since $|n^{\alpha} g_{i_{m,z}+u,n}(T_m)|$ is bounded for any $|u| < k$ due to Lemma 3.2, we deduce the convergence in (3.13).
Next, for large enough $|u|$ we observe the bounds
Hence, by Lemma 3.3, we conclude the convergence
(3.14)
The same statement holds for $z = 0$, but the limit becomes $|q_{k,\alpha}|^p$, since in this setting the sum is one-sided. We set $\|x\|_p^p = \sum_{i=1}^m |x_i|^p$ for any $x \in \mathbb{R}^m$ and $p > 0$, and recall that $\|x\|_p$ is a norm for $p \geq 1$. It holds that
as $n \to \infty$, on $\Omega_\varepsilon$.
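Under our reading, Lemma 3.3 is a Toeplitz-type statement about logarithmic means: if $(a_i)$ is bounded with $a_i \to a$, then $(\log n)^{-1} \sum_{i=1}^{n} a_i / i \to a$, because the weights $1/i$ sum to $\log n + O(1)$. A numerical sketch of this reading:

```python
import math

def log_mean(a, n):
    """(1/log n) * sum_{i=1}^n a(i)/i -- the logarithmic mean of the sequence (a_i)."""
    return sum(a(i) / i for i in range(1, n + 1)) / math.log(n)

a = lambda i: 2.0 + 1.0 / math.sqrt(i)   # bounded sequence with a_i -> 2
vals = [log_mean(a, n) for n in (10**3, 10**4, 10**5)]
print(vals)   # decreases slowly towards the limit 2 (logarithmic speed)
```

The slow, logarithmic convergence visible here is the same mechanism that produces the $\log(n)$ normalisation in Theorem 2.2(ii).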

Negligible terms
Due to the inequalities at (3.15), it suffices to show that, on $\Omega_\varepsilon$,
$$
a_n \sum_{i=k}^{n} |R_{i,n,\varepsilon}|^p \stackrel{P}{\longrightarrow} 0 \quad\text{and}\quad a_n \sum_{i=k}^{n} |M_{i,n,\varepsilon,z}|^p \stackrel{P}{\longrightarrow} 0 \ \text{ for } z \in A^c, \qquad (3.16)
$$
as $n \to \infty$, where $a_n = n^{\alpha p}$ in Theorem 2.2(i) and $a_n = n^{\alpha p}/\log(n)$ in Theorem 2.2(ii); this will prove that these terms do not affect the limits in Theorem 2.2. At this stage we notice that outside the singularity points the kernel function $g$ satisfies the same properties under Assumption (B) (resp. Assumption (B-log)) as under Assumption (A) (resp. Assumption (A-log)). Consequently, we can apply the estimates for the term $R_{i,n,\varepsilon}$ derived in [6, Eqs.
where $q$ is determined via $1/q + 1/w = 1$, since $R_{i,n,\varepsilon}$ is only affected by the function $g$ outside the singularity points $\theta_z$. We readily conclude the first convergence at (3.16) in the setting of Theorem 2.2(i), because $\alpha < k - 1/p$. It also holds in the setting of Theorem 2.2(ii), where for $w \in (1,2]$ we use the assumption that $1/p + 1/w > 1$. Now, we show the second statement of (3.16), which is only relevant in the setting of Theorem 2.2(i). Since $\alpha_z < k - 1/p$ for all $z$, we can apply to $\sum_{i=k}^{n} |M_{i,n,\varepsilon,z}|^p$, $z \in A^c$, the same techniques as for $\sum_{i=k}^{n} |M_{i,n,\varepsilon,z}|^p$, $z \in A$. Hence, using the same methods as in Section 3.1.1, we conclude that on $\Omega_\varepsilon$
$$
n^{\alpha p} \sum_{i=k}^{n} |M_{i,n,\varepsilon,z}|^p = O_P\big( n^{p(\alpha - \alpha_z)} \big) \quad\text{for all } z \in A^c,
$$
where the notation $Y_n = O_P(a_n)$ means that the sequence $a_n^{-1} Y_n$ is tight. Since $\alpha_z > \alpha$ for all $z \in A^c$, we obtain the second statement of (3.16). The results of Sections 3.1.1--3.1.3 and the fact that $\Omega_\varepsilon \uparrow \Omega$ as $\varepsilon \to 0$ imply the assertion of Theorem 2.2 in the compound Poisson case.

Proof of Theorem 2.2 in the general case
Let now $(L_t)_{t\in\mathbb{R}}$ be a general symmetric pure jump Lévy process with Blumenthal--Getoor index $\beta$. We denote by $N$ the corresponding Poisson random measure, defined by $N(A) := \#\{t \in \mathbb{R}: (t, \Delta L_t) \in A\}$ for all measurable $A \subseteq \mathbb{R} \times (\mathbb{R} \setminus \{0\})$. Next, we introduce the process
which only involves the small jumps of $L$. We will prove that
$$
\lim_{m\to\infty} \limsup_{n\to\infty} \mathbb{P}\big( a_n V(X(m), p; k)_n > \epsilon \big) = 0 \quad\text{for any } \epsilon > 0, \qquad (3.17)
$$
where $a_n = n^{\alpha p}$ in Theorem 2.2(i) and $a_n = n^{\alpha p}/\log(n)$ in Theorem 2.2(ii). First, due to Markov's inequality and the stationary increments of $X_t(m)$, it follows that
$$
\mathbb{P}\big( a_n V(X(m), p; k)_n > \epsilon \big) \leq \epsilon^{-1} a_n \sum_{i=k}^{n} \mathbb{E}\big[ |\Delta_{i,k}^n X(m)|^p \big] \leq \epsilon^{-1} b_n\, \mathbb{E}\big[ |\Delta_{k,k}^n X(m)|^p \big],
$$
where $b_n = n a_n$. Hence it is enough to prove that
using the assumption that $p > \beta$. We consider only (3.19) in the case of Theorem 2.2(i), as (ii) is very similar; see [5]. In the case of (i) we have $b_n^{1/p} = n^{\alpha + 1/p}$. For short notation, define $\Phi_p: \mathbb{R} \to \mathbb{R}_+$ as the function
Note that $\Phi_p$ is of modular growth, i.e. there exists a constant $K_p > 0$, depending only on $p$, such that $\Phi_p(x+y) \leq K_p(\Phi_p(x) + \Phi_p(y))$ for any $x, y \in \mathbb{R}$. We consider the following decomposition
where we used that $\alpha \leq \alpha_z < k - 1/p$. Moreover,
$$
\int_{1/n}^{\delta} \big| x n^{\alpha + 1/p - k} s^{\alpha_z - k} \big|^p\, \mathbf{1}_{\{|x n^{\alpha + 1/p - k} s^{\alpha_z - k}| > 1\}}\, ds \leq K|x|^p.
$$
The term $I_{2,z}^r$ is handled similarly. For the last term $I_{2,z}^b$ we note that, since we are bounded away from both $\theta_{z-1}$ and $\theta_z$, there exists a constant $K > 0$ such that $|g_{k,n}(s)| \leq K n^{-k}$ for all $s \in \big( \tfrac{k}{n} - \theta_z + \delta,\ \tfrac{k}{n} - \theta_{z-1} - \delta \big)$.
Estimation of $I_3$. Arguments as in Lemma 3.2 imply that $|g_{k,n}(s)| \leq K n^{-k} \big| \tfrac{k}{n} - s - \theta_l \big|^{\alpha_l - k}$ for all $s \in \big( \tfrac{k}{n} - \theta_l - \delta,\ \tfrac{k}{n} - \theta_l - \tfrac{1}{n} \big)$.
One may then proceed as for the term $I_{2,z}^l$ above to conclude that $I_3(x) \leq K(x^2 + |x|^p)$.

Negligibility of small jumps
Now, we note that $X_t - X_t(m)$ is the integral (1.1) with the integrator replaced by the compound Poisson process that corresponds to the big jumps of $L$. Hence, we obtain the results of Theorem 2.2 for the process $X - X(m)$ as in Section 3.1. More specifically, under the assumptions of Theorem 2.2(i) it holds that
$\sum_r |\Delta L_{T_r}|^p < \infty$ for any $p > \beta$.
Finally, using the decomposition $X = (X - X(m)) + X(m)$ and letting first $n_j \to \infty$ and then $m \to \infty$, we deduce the statement of Theorem 2.2 from (3.17) and the inequalities (3.15). This completes the proof.