VMSTA — Modern Stochastics: Theory and Applications. ISSN 2351-6054 (print), 2351-6046 (online). Published by VTeX, Mokslininkų g. 2A, 08412 Vilnius, Lithuania. VMSTA01, DOI: 10.15559/vmsta-2014.1.1.1. Research Article.

Strong limit theorems for anisotropic self-similar fields

V. Makogin (makoginv@ukr.net), Yu. Mishura (myus@univ.kiev.ua). Taras Shevchenko National University of Kyiv, Kyiv, Ukraine. Corresponding author.

2014, Vol. 1, Issue 1, pp. 73–93. Received 13 August 2013; revised 6 February 2014; accepted 5 June 2014; published 27 June 2014. © 2014 The Author(s). Published by VTeX. Open access article under the CC BY license.

Our paper starts with the presentation and comparison of three definitions of a self-similar field, and the interconnection between these definitions is established. We then consider the Lamperti scaling transformation for a self-similar field and investigate the connection between the scaling transformation for such a field and the shift transformation for the corresponding stationary field. It is also shown that the fractional Brownian sheet has an ergodic scaling transformation. Strong limit theorems for the anisotropic growth of the sample paths of a self-similar field at 0 and at ∞ are proved for the upper and lower functions. An upper bound for the growth of a field with ergodic scaling transformation is obtained for slowly varying functions. We present some examples of iterated log-type limits for Gaussian self-similar random fields.

Keywords: self-similar random field, fractional Brownian sheet, strong limit theorem, iterated log-type law. MSC: 60G60, 60G18, 60G22, 60F15, 60G17.
Introduction

A self-similar process is a process whose distribution is invariant under a specific time and/or space scaling. Namely, a stochastic process {X(t), t ∈ R} is self-similar with index H ≥ 0 if for any a > 0, {X(at), t ∈ R} =d {a^H X(t), t ∈ R}, where =d denotes the equality of finite-dimensional distributions. The books by Embrechts & Maejima and Samorodnitsky & Taqqu are devoted to the theory of self-similar processes. A classical example of a self-similar process with index H ∈ (0,1) is a fractional Brownian motion {B_H(t), t ∈ R_+} (R_+ = [0,+∞)) with the corresponding Hurst index. This process is centered, has stationary increments and the covariance function

E(B_H(t)B_H(u)) = (1/2)(t^{2H} + u^{2H} − |t−u|^{2H}), t, u ∈ R_+.

The investigation of self-similar random fields (multiparameter processes) was motivated by the evidence of the self-similarity of phenomena in climatology, environmental sciences, etc. (see ). In particular, so-called anisotropic random fields are used for modeling phenomena in spatial statistics, statistical hydrology and image processing (see ). The attempts to extend the self-similarity concept from processes to fields resulted in several approaches. In our paper we present three different definitions of self-similar fields and establish the interconnection between them. We show that the covariance function of a centered Gaussian field determines the self-similarity property and its type. The definitions of fractional Brownian fields and sheets are also presented in the paper; it is proved that they are self-similar fields, but according to different definitions. We consider fields that are self-similar with respect to every coordinate, each coordinate with its individual index. Such fields are usually called anisotropic, and in the Brownian case they are usually called Brownian sheets. The paper's aim is to investigate the asymptotic growth of the sample paths of these fields.
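The self-similarity of fBm is visible directly in its covariance: C(at, au) = a^{2H} C(t, u) for every a > 0, which (for a centered Gaussian process) is exactly the scaling of Definition 2.1 below. A minimal numerical sketch in Python; the helper name `fbm_cov` is ours, not from the paper:

```python
# Covariance of fractional Brownian motion with Hurst index H (helper name ours).
def fbm_cov(t, u, H):
    return 0.5 * (t ** (2 * H) + u ** (2 * H) - abs(t - u) ** (2 * H))

H, a = 0.7, 3.0
t, u = 1.5, 2.25
lhs = fbm_cov(a * t, a * u, H)           # Cov(B_H(at), B_H(au))
rhs = a ** (2 * H) * fbm_cov(t, u, H)    # a^{2H} Cov(B_H(t), B_H(u))
assert abs(lhs - rhs) < 1e-12
```

The identity is exact (each term of the covariance is homogeneous of degree 2H), so the check passes up to floating-point rounding.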
We introduce the notions of upper and lower functions for the sample paths of a random field, similar to those in the paper , and prove the zero–one law for such functions in the case of growth at 0 and at ∞. We also assume the ergodicity of the scaling transformation. The ergodicity property should be verified independently in every particular case, and this can easily be done for stationary fields and processes. Let us also mention that a non-singular self-similar process is not stationary, but there is a one-to-one correspondence between self-similar and stationary processes: for every self-similar process X with index H > 0, its Lamperti transformation Z = {Z(t) = e^{−tH} X(e^t)} is a stationary process. The Lamperti transformation for anisotropic random fields was introduced in the paper , where the correspondence between self-similar and stationary random fields was established as well. In the present paper it is proved that the ergodicity of the shift transformation for the corresponding stationary field is a sufficient condition for the ergodicity of the scaling transformation. For Gaussian fields the latter statement can be ensured by suitable conditions on the covariance function. In particular, we prove that the fractional Brownian sheet has an ergodic scaling transformation.

In this paper, strong limit theorems for the anisotropic growth of the sample paths of self-similar fields are proved for the upper and lower functions arising in the zero–one law. Similar theorems for self-similar stochastic processes were proved in the paper . Application of these theorems to Gaussian fields allows us to obtain iterated log-type laws. Comparing the results for the fractional Brownian fields and sheets with the results from the paper , we conclude that our theorems yield more precise estimates.

The paper is organized as follows. Section 2 contains the different definitions of a self-similar field, and the interconnection between them is proved. We focus on the Gaussian case and present the definitions of the fractional Brownian field and sheet. In Section 3, the Lamperti transformation for a self-similar field is considered, and the connection between the scaling transformation for such a field and the shift transformation for the corresponding stationary field is established. It is shown that the fractional Brownian sheet has an ergodic scaling transformation. In Section 4, we introduce the definitions of the upper and lower functions for the asymptotic growth of the sample paths of a self-similar field. The zero–one law is proved for fields with ergodic scaling transformation. Strong limit theorems for the asymptotic growth of sample paths at 0 and at ∞ are obtained in Section 5. We also establish there the upper bounds for the growth of a field with ergodic scaling transformation in the case of slowly varying functions. In Section 6, we apply these theorems to prove iterated log-type laws for Gaussian self-similar fields.

Definition of self-similarity for random fields

Let us start by giving three different definitions of self-similar random fields and then show their interrelation. We assume that {Ω, F, P} is a standard probability space on which all the random objects considered further on are defined.

([14]).

A random field {X(t), t = (t_1,…,t_n) ∈ R^n} is self-similar with index H > 0 if for every a > 0, {X(at), t ∈ R^n} =d {a^H X(t), t ∈ R^n}.

Hereinafter we shall use the designation x·y to denote the vector consisting of the coordinatewise products of two vectors x, y ∈ R^n: x·y = (x_1 y_1,…,x_n y_n), where x = (x_1,…,x_n), y = (y_1,…,y_n).

([7]).

A random field {X(t), t ∈ R^n} is self-similar with index H = (H_1,…,H_n) ∈ R_+^n if for any a = (a_1,…,a_n) ∈ (0,+∞)^n, {X(a·t), t ∈ R^n} =d {a_1^{H_1} ⋯ a_n^{H_n} X(t), t ∈ R^n}.

In addition, it is possible to give a third definition of a self-similar field, namely as a field that is self-similar with respect to every time coordinate.

A random field {X(t), t ∈ R^n} is coordinatewise self-similar with index H = (H_1,…,H_n) ∈ R_+^n if for any a > 0 and 1 ≤ k ≤ n

{X(t_1,…,t_{k−1}, a t_k, t_{k+1},…,t_n), t ∈ R^n} =d {a^{H_k} X(t), t ∈ R^n}.

Now let us explain how these definitions relate to each other. Definitions 2.2 and 2.3 are equivalent. (2.2 ⇒ 2.3) Assume that a random field {X(t), t ∈ R^n} is self-similar by Definition 2.2 with index H = (H_1,…,H_n) ∈ R_+^n. For arbitrary a > 0 and 1 ≤ k ≤ n we put a_1 = 1,…,a_{k−1} = 1, a_k = a, a_{k+1} = 1,…,a_n = 1, a = (a_1,…,a_n). Then X is self-similar with respect to the k-th coordinate.

(2.3 ⇒ 2.2) Let a random field {X(t), t ∈ R^n} be self-similar by Definition 2.3 with index H = (H_1,…,H_n) ∈ R_+^n. Then, for any a ∈ (0,+∞)^n, m ≥ 1, t^1,…,t^m ∈ R^n, t^i = (t^i_1,…,t^i_n), x_1,…,x_m ∈ R, we get

P(∩_{i=1}^m {X(a·t^i) < x_i}) = (by Def. 2.3) P(∩_{i=1}^m {a_1^{H_1} X(t^i_1, a_2 t^i_2,…,a_n t^i_n) < x_i}) = P(∩_{i=1}^m {X(t^i_1, a_2 t^i_2,…,a_n t^i_n) < x_i a_1^{−H_1}}) = ⋯ = P(∩_{i=1}^m {X(t^i) < x_i a_1^{−H_1} ⋯ a_n^{−H_n}}) = P(∩_{i=1}^m {a_1^{H_1} ⋯ a_n^{H_n} X(t^i) < x_i}).

The lemma is proved. □

Let a random field {X(t), t = (t_1,…,t_n) ∈ R^n} be self-similar by Definition 2.2 with index H = (H_1,…,H_n) ∈ R_+^n. Then X is self-similar by Definition 2.1 with index H = H_1 + ⋯ + H_n.

Let us put a = (a,…,a), where a > 0 is an arbitrary number. Then for any m ≥ 1, t^1,…,t^m ∈ R^n, t^i = (t^i_1,…,t^i_n), x_1,…,x_m ∈ R, we obtain

P(∩_{i=1}^m {X(a t^i) < x_i}) = P(∩_{i=1}^m {X(a·t^i) < x_i}) = P(∩_{i=1}^m {a^{H_1+⋯+H_n} X(t^i) < x_i}) = P(∩_{i=1}^m {a^H X(t^i) < x_i}).

The lemma is proved. □ There is a strong correspondence between the type of the covariance function and the type of the self-similarity property for centered Gaussian random fields. Let the covariance functions C_1, C_2 : R^n × R^n → R of the centered Gaussian fields {X_1(t), t ∈ R^n} and {X_2(t), t ∈ R^n}, respectively, satisfy the following properties. For any t, s ∈ R^n, a ∈ (0,+∞)^n:

C_1(a·t, a·s) = a_1^{2H_1} ⋯ a_n^{2H_n} C_1(t, s), where 0 < H_1 < 1,…,0 < H_n < 1.

For any t, s ∈ R^n, a > 0:

C_2(at, as) = a^{2H} C_2(t, s), where 0 < H < 1.

Then the field X1 is self-similar with index H=(H1,,Hn) by Definition 2.2, and the field X2 is self-similar with index H by Definition 2.1.

The lemma follows from the fact that the finite-dimensional distributions of a centered Gaussian field are uniquely determined by its covariance function. Under the lemma's conditions, the covariance matrices Σ_1 and Σ_2 of the corresponding random fields X_1 and X_2 have the following properties:

Σ_1(a·t, a·s) = a_1^{2H_1} ⋯ a_n^{2H_n} Σ_1(t, s), t, s ∈ R^n,
Σ_2(at, as) = a^{2H} Σ_2(t, s), t, s ∈ R^n.

The lemma is proved. □

Let us give a few examples of Gaussian self-similar random fields.

A standard Lévy fractional Brownian field with index H > 0 is a centered Gaussian random field B_H = {B_H(t), t ∈ R_+^n} with the covariance function

E(B_H(t)B_H(s)) = (1/2)(‖t‖^{2H} + ‖s‖^{2H} − ‖t−s‖^{2H}), t, s ∈ R_+^n,

where ‖·‖ stands for the Euclidean norm in R^n. This field is self-similar by Definition 2.1 (see , Example 8.1.3). The Lévy fractional Brownian field is isotropic; it is the unique (in law) Gaussian self-similar field in the sense of Definition 2.1 with stationary isotropic increments. A standard fractional Brownian sheet with index H = (H_1,…,H_n), 0 < H_i < 1, i = 1,…,n, is a centered Gaussian random field B_H = {B_H(t), t ∈ R_+^n} with the covariance function

E(B_H(t)B_H(s)) = 2^{−n} ∏_{i=1}^n (|t_i|^{2H_i} + |s_i|^{2H_i} − |t_i − s_i|^{2H_i}), t, s ∈ R_+^n.

This field is self-similar by Definition 2.2 and has stationary rectangular increments. The proof of this property for the R² case can be found in the paper ; a similar property for the case n > 2 can easily be proved as well.
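The anisotropic scaling of Definition 2.2 can be verified directly on the fractional Brownian sheet covariance: C(a·t, a·s) = a_1^{2H_1} ⋯ a_n^{2H_n} C(t, s), by coordinatewise homogeneity of each factor. A small Python sketch; the helper name `fbs_cov` is ours:

```python
import numpy as np

def fbs_cov(t, s, H):
    """Covariance of the standard fractional Brownian sheet (helper name ours)."""
    t, s, H = map(np.asarray, (t, s, H))
    return 2.0 ** (-len(H)) * np.prod(
        np.abs(t) ** (2 * H) + np.abs(s) ** (2 * H) - np.abs(t - s) ** (2 * H))

H = np.array([0.3, 0.8])
a = np.array([2.0, 5.0])          # different scale in each coordinate
t = np.array([1.0, 3.0])
s = np.array([2.0, 1.5])
lhs = fbs_cov(a * t, a * s, H)
rhs = np.prod(a ** (2 * H)) * fbs_cov(t, s, H)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```

Since each factor |t_i|^{2H_i} + |s_i|^{2H_i} − |t_i − s_i|^{2H_i} picks up exactly a_i^{2H_i} under t → a·t, s → a·s, the identity is exact; the tolerance only absorbs floating-point rounding.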

A random field satisfying Definition 2.1 is not necessarily self-similar in the sense of Definition 2.2. Indeed, let us consider the Lévy fractional Brownian field {B_H(t), t ∈ R_+^n}. It is self-similar by Definition 2.1 (, Example 8.1.3). We intend to show that this field is not self-similar by Definition 2.2. Let a_1 = a > 0, a_2 = 1,…,a_n = 1; then

E X²(a,1,…,1) = ‖(a,1,…,1)‖^{2H} = (a² + n − 1)^H.

But if the field satisfied Definition 2.2, we would have

E X²(a,1,…,1) = a^{2H_1} E X²(1,…,1),

which is impossible for all a > 0.

Self-similar fields with ergodic scaling transformation

Further in the paper we assume that the fields satisfy Definition 2.2 and are real-valued and continuous in probability. Under such assumptions we may work with separable versions without loss of generality. Moreover, we consider only the case n = 2, since passing to a parameter of higher dimension is purely technical. A scaling transformation S_a^H for a random field X = {X(t), t ∈ R_+²}, H = (H_1,H_2) ∈ (0,+∞)², a = (a_1,a_2) ∈ (0,+∞)², is defined as

(S_a^H X)(t) = a_1^{−H_1} a_2^{−H_2} X(a·t), t ∈ R_+².

Using the notion of the scaling transformation we can reformulate Definition 2.2 for the case R_+² as follows. A random field X = {X(t), t ∈ R_+²} is said to be self-similar with index H = (H_1,H_2) ∈ (0,+∞)² if for any a_1 > 0, a_2 > 0 the field {(S_a^H X)(t), t ∈ R_+²} has the same finite-dimensional distributions as the field X. Hereinafter we use the notation S_a = S_a^H. Let us consider a self-similar field X with index H = (H_1,H_2) ∈ (0,+∞)²; then X(0,s) = X(s,0) = 0, s ≥ 0 a.s. (, Proposition 2.4.1). For such a field the Lamperti transformation τ_H was introduced in the paper :

τ_H X(t) = Z(t) = e^{−H_1 t_1} e^{−H_2 t_2} X(e^{t_1}, e^{t_2}), t ∈ R².

The field Z is stationary. The converse is also true: for any stationary field Z, the field X = {X(s) = s_1^{H_1} s_2^{H_2} Z(ln s_1, ln s_2), s ∈ (0,+∞)²}, extended so that X(0,s) = X(s,0) = 0 a.s., s ≥ 0, is self-similar with index H = (H_1,H_2).
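Both the scaling transformation S_a^H and the Lamperti transformation τ_H act pathwise, so their interplay can be probed numerically on an arbitrary deterministic field: applying S_a before τ_H amounts to shifting the Lamperti image by (ln a_1, ln a_2). The test field below is an arbitrary choice of ours, used only to check the algebra:

```python
import math

def tau(X, H):
    """Lamperti transform: (tau_H X)(t) = e^{-H1 t1 - H2 t2} X(e^{t1}, e^{t2})."""
    H1, H2 = H
    return lambda t1, t2: math.exp(-H1 * t1 - H2 * t2) * X(math.exp(t1), math.exp(t2))

def scale(X, H, a):
    """Scaling transform: (S_a^H X)(t) = a1^{-H1} a2^{-H2} X(a1 t1, a2 t2)."""
    (H1, H2), (a1, a2) = H, a
    return lambda t1, t2: a1 ** (-H1) * a2 ** (-H2) * X(a1 * t1, a2 * t2)

def shift(Z, u):
    """Shift transform: (theta_u Z)(t) = Z(t1 + u1, t2 + u2)."""
    u1, u2 = u
    return lambda t1, t2: Z(t1 + u1, t2 + u2)

# Arbitrary deterministic test field (ours); the identity below holds pathwise.
X = lambda t1, t2: t1 ** 0.4 * t2 ** 1.1 + math.sin(t1 * t2)
H, a = (0.3, 0.7), (2.0, 0.5)
u = (math.log(a[0]), math.log(a[1]))

lhs = tau(scale(X, H, a), H)   # tau_H S_a X
rhs = shift(tau(X, H), u)      # theta_u tau_H X, u = (ln a1, ln a2)
for t in [(0.0, 0.0), (1.2, -0.7), (-2.0, 3.0)]:
    assert abs(lhs(*t) - rhs(*t)) < 1e-10 * (1.0 + abs(rhs(*t)))
```

The agreement is exact up to rounding because the two sides coincide algebraically for every field X, not only in distribution.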
The scaling transformation S_a of the field X corresponds to the shift transformation θ_u of the field Z, where u = (u_1,u_2) = (ln a_1, ln a_2) and the shift transformation is defined as (θ_u Z)(s) = Z(s_1+u_1, s_2+u_2), s ∈ R². Indeed,

(τ_H S_a X)(t) = τ_H(a_1^{−H_1} a_2^{−H_2} X(a·t)) = a_1^{−H_1} a_2^{−H_2} e^{−t_1 H_1} e^{−t_2 H_2} X(a_1 e^{t_1}, a_2 e^{t_2}) = e^{−H_1(t_1+u_1)} e^{−H_2(t_2+u_2)} X(e^{t_1+u_1}, e^{t_2+u_2}) = θ_u (e^{−H_1 t_1} e^{−H_2 t_2} X(e^{t_1}, e^{t_2})) = (θ_u τ_H X)(t) = (θ_u Z)(t), t ∈ R².

So τ_H S_a = θ_u τ_H. Zero–one laws naturally occur for processes and fields with the ergodicity property. It follows from Definition 3.1 that the scaling transformation S_a of the field X preserves its distribution, so the notion of ergodicity of S_a can be defined in the usual way (see ). Let T : Ω → Ω be a transformation defined on the probability space (Ω, F, P). A measure-preserving transformation T is ergodic if for every set E ∈ F such that P(T^{−1}(E) Δ E) = 0, either P(E) = 0 or P(E) = 1. We call the field X self-similar with ergodic scaling transformation if S_a is ergodic. Further in this section we assume that every scaling transformation S_a, a ∈ (0,+∞)², a ≠ (1,1), of the self-similar field is ergodic. It follows from the interconnection between the transformations S_a and θ_u that the ergodicity of the shift transformation of the field Z = τ_H X implies the ergodicity of the scaling transformation of the field X. In particular, the ergodicity of the shift transformation for Gaussian stationary processes follows from properties of the covariance function, and the class of ergodic stationary fields is quite wide. ([16], Proposition 4.1). Let Z = {Z(t), t ∈ R²} be a stationary Gaussian field with mean M and a continuous covariance function R(y) = E Z(x)Z(x+y) − M², y ∈ R². If lim_{‖y‖→∞} R(y) = 0, then Z has an ergodic shift transformation. Let us show that the fractional Brownian sheet is ergodic. Let B_H = {B_H(t), t ∈ R_+²} be a fractional Brownian sheet with index H = (H_1,H_2) ∈ (0,1)² (Definition 2.5).
Then B_H is a self-similar field with ergodic scaling transformation, and the stationary field Z = τ_H B_H is centered with the covariance function

R(y) = (1/4) ∏_{i=1,2} (e^{H_i y_i} + e^{−H_i y_i} − |e^{y_i/2} − e^{−y_i/2}|^{2H_i}), y ∈ R². (2)

The proof of equality (2) can be found in the paper . It remains to prove that the field Z is ergodic. Let us check the conditions of Theorem 3.1. The covariance function R is continuous, and since |e^{y/2} − e^{−y/2}|^{2H} = e^{H|y|}(1 − e^{−|y|})^{2H}, it can be represented as

R(y) = (1/4) ∏_{i=1,2} (e^{−H_i|y_i|} + e^{H_i|y_i|}(1 − (1 − e^{−|y_i|})^{2H_i})).

Since 1 − (1−x)^{2H_i} ≤ max(1, 2H_i) x^{min(1, 2H_i)} for x ∈ [0,1], each factor does not exceed e^{−H_i|y_i|} + max(1, 2H_i) e^{−min(H_i, 1−H_i)|y_i|}; hence every factor is bounded and tends to 0 as |y_i| → ∞, so that R(y) → 0 as ‖y‖ → ∞. Thus, it follows from Theorem 3.1 that the field Z is stationary with ergodic shift transformation. This implies that the corresponding anisotropic Brownian sheet B_H is self-similar with ergodic scaling transformation. The corollary is proved. □

Upper and lower functions for ergodic fields

In this section we continue the consideration of self-similar fields with ergodic scaling transformation and prove the zero–one laws for the asymptotic growth of the field's sample paths. Let us introduce the following definitions. For a positive function g : R_+² → (0,+∞) we consider the random events

E_g^0 = {ω ∈ Ω : ∃ δ = δ(ω) > 0, ∀ t, 0 < t_1 ∨ t_2 < δ : |X(ω,t)| ≤ g(t)},
E_g^∞ = {ω ∈ Ω : ∃ N = N(ω) > 0, ∀ t, t_1 ∧ t_2 > N : |X(ω,t)| ≤ g(t)},

where t_1 ∨ t_2 = max{t_1,t_2}, t_1 ∧ t_2 = min{t_1,t_2}. The positive function g : R_+² → (0,+∞) is said to be an upper (lower) function with respect to the growth at 0 if P(E_g^0) = 1 (= 0), and an upper (lower) function with respect to the growth at ∞ if P(E_g^∞) = 1 (= 0). In addition we define the functionals L^0_{Λ,φ}, L^∞_{Λ,φ} for Λ = (λ_1,λ_2), λ_1 > 0, λ_2 > 0, and a positive function φ : R_+² → (0,+∞) in the following way:

L^0_{Λ,φ} = limsup_{t_1∨t_2→0} |X(t)| / (t_1^{λ_1} t_2^{λ_2} φ(t)),   L^∞_{Λ,φ} = limsup_{t_1∧t_2→∞} |X(t)| / (t_1^{λ_1} t_2^{λ_2} φ(t)).

Let the function g(t) = t_1^{λ_1} t_2^{λ_2} φ(t), t ∈ R_+².
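The covariance (2) and its decay admit a direct numerical cross-check: computing E Z(x)Z(x+y) from the fractional Brownian sheet covariance through the Lamperti transform should reproduce R(y) independently of the base point x, and R should vanish for large ‖y‖. A Python sketch; all function names are ours:

```python
import math

def R(y, H):
    """Covariance (2) of the stationary field Z = tau_H B_H."""
    val = 0.25
    for yi, Hi in zip(y, H):
        val *= (math.exp(Hi * yi) + math.exp(-Hi * yi)
                - abs(math.exp(yi / 2) - math.exp(-yi / 2)) ** (2 * Hi))
    return val

def cov_via_lamperti(x, y, H):
    """E Z(x) Z(x+y) computed directly from the fBs covariance,
    with Z(t) = e^{-H1 t1 - H2 t2} B_H(e^{t1}, e^{t2})."""
    c = math.exp(-sum(Hi * (2 * xi + yi) for Hi, xi, yi in zip(H, x, y)))
    for xi, yi, Hi in zip(x, y, H):
        si, ti = math.exp(xi), math.exp(xi + yi)
        c *= 0.5 * (si ** (2 * Hi) + ti ** (2 * Hi) - abs(ti - si) ** (2 * Hi))
    return c

H, y = (0.4, 0.6), (1.7, 0.9)
# Stationarity: the direct computation does not depend on the base point x ...
assert abs(cov_via_lamperti((0.3, -1.0), y, H)
           - cov_via_lamperti((-2.0, 0.5), y, H)) < 1e-10
# ... and agrees with the closed form (2).
assert abs(R(y, H) - cov_via_lamperti((0.3, -1.0), y, H)) < 1e-10
# Decay required by Theorem 3.1: R(y) -> 0 as |y1|, |y2| grow.
assert R((8.0, 9.0), H) < 1e-2 and R((20.0, 20.0), H) < 1e-6
```

The decay assertions only sample a few points, of course; the proof above gives the actual bound for all y.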
If L^0_{Λ,φ} = 0 a.s., then the function g is an upper function with respect to the growth at 0. If L^0_{Λ,φ} = ∞ a.s., then g is a lower function with respect to the growth at 0. If L^∞_{Λ,φ} = 0 a.s., then g is an upper function with respect to the growth at ∞. If L^∞_{Λ,φ} = ∞ a.s., then g is a lower function with respect to the growth at ∞. Let the function φ : R_+² → (0,+∞) be either non-decreasing or non-increasing in every coordinate. Then

P(E^0_{H,φ}) = 0 or 1,   P(E^∞_{H,φ}) = 0 or 1,

where the notation E^0_{H,φ} (or E^∞_{H,φ}) is used for E_g^0 (or E_g^∞) with the function g(t) = t_1^{H_1} t_2^{H_2} φ(t), t ∈ R_+², and H = (H_1,H_2) ∈ (0,+∞)². Let us consider the case when the function φ is non-decreasing in the first coordinate and non-increasing in the second one. First we prove the theorem for E^0_{H,φ}. Let 0 < a_1 < 1, a_2 > 1 and ω ∈ E^0_{H,φ}. In this case

∃ δ = δ(ω) > 0 : |X(t)| ≤ t_1^{H_1} t_2^{H_2} φ(t), 0 < t_1 ∨ t_2 < δ.

If a_1 t_1 ∨ a_2 t_2 < δ, then the definition of the scaling transformation and the last inequality imply

|(S_a X)(t)| = a_1^{−H_1} a_2^{−H_2} |X(a·t)| ≤ t_1^{H_1} t_2^{H_2} φ(a·t).

Here a_1 t_1 < t_1 and a_2 t_2 > t_2, so it follows from the monotonicity of the function φ that φ(a·t) ≤ φ(t), t_1 > 0, t_2 > 0. Thus, on E^0_{H,φ} the following inequality holds:

|(S_a X)(t)| ≤ t_1^{H_1} t_2^{H_2} φ(t), 0 < t_1 ∨ t_2 < δ/a_2.

So we have proved that E^0_{H,φ} ⊂ S_a^{−1} E^0_{H,φ}, where S_a^{−1} E^0_{H,φ} denotes the set E^0_{H,φ} for the field S_a X. This implies that P(E^0_{H,φ} Δ S_a^{−1} E^0_{H,φ}) = 0. Since S_a is ergodic for any a ∈ (0,+∞)², a ≠ (1,1), we get P(E^0_{H,φ}) = 0 or 1. Now let ω ∈ E^∞_{H,φ}. In this case

∃ N = N(ω) > 0 : |X(t)| ≤ t_1^{H_1} t_2^{H_2} φ(t), t_1 ∧ t_2 > N.

If a_1 t_1 ∧ a_2 t_2 > N, then the definition of the scaling transformation and the last inequality imply

|(S_a X)(t)| = a_1^{−H_1} a_2^{−H_2} |X(a·t)| ≤ t_1^{H_1} t_2^{H_2} φ(a·t).

Here a_1 t_1 < t_1 and a_2 t_2 > t_2, and it follows from the monotonicity of the function φ that φ(a·t) ≤ φ(t), t_1 > 0, t_2 > 0.
Thus, on E^∞_{H,φ} the following inequality holds:

|(S_a X)(t)| ≤ t_1^{H_1} t_2^{H_2} φ(t), t_1 ∧ t_2 > N/a_1.

So we have proved that E^∞_{H,φ} ⊂ S_a^{−1} E^∞_{H,φ}. This implies that P(E^∞_{H,φ} Δ S_a^{−1} E^∞_{H,φ}) = 0. Since S_a is ergodic for any a ∈ (0,+∞)², a ≠ (1,1), we get P(E^∞_{H,φ}) = 0 or 1. For the other monotonicity types of the function φ the proofs are similar. The theorem is proved. □ Under the conditions of Theorem 4.1 there exist constants 0 ≤ c^0_{H,φ} ≤ +∞, 0 ≤ c^∞_{H,φ} ≤ +∞ such that L^0_{H,φ} = c^0_{H,φ}, L^∞_{H,φ} = c^∞_{H,φ} a.s. Let us put c^0_{H,φ} = sup{c ≥ 0 : P(E^0_{H,cφ}) = 0}. It follows from Theorem 4.1 that P(E^0_{H,cφ}) = 0 or 1. Thus

∀ ε > 0 : P(E^0_{H,(c^0_{H,φ}−ε)φ}) = 0 and P(E^0_{H,(c^0_{H,φ}+ε)φ}) = 1.

So the following events occur with probability one:

∃ δ > 0 ∀ t, t_1 ∨ t_2 < δ : |X(t)| / (t_1^{H_1} t_2^{H_2} φ(t)) ≤ c^0_{H,φ} + ε

and

∀ δ > 0 ∃ t : t_1 ∨ t_2 < δ, |X(t)| / (t_1^{H_1} t_2^{H_2} φ(t)) > c^0_{H,φ} − ε.

This means that

c^0_{H,φ} − ε ≤ L^0_{H,φ} ≤ c^0_{H,φ} + ε a.s.

Since ε > 0 is arbitrary, this concludes the proof of the corollary statement for the growth at 0. The proof for the case of the growth at ∞ can be done in a similar way. □

If we consider slowly varying functions, we can obtain a more specific result for the functionals L^0_{Λ,φ} and L^∞_{Λ,φ}.

A function φ : R_+² → (0,+∞) is said to be slowly varying

with respect to the growth at 0, if for all a_1 > 0, a_2 > 0: lim_{t_1∨t_2→0} φ(a·t)/φ(t) = 1; (3)

with respect to the growth at ∞, if for all a_1 > 0, a_2 > 0: lim_{t_1∧t_2→∞} φ(a·t)/φ(t) = 1. (4)
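For instance, φ(t) = ln(e + t_1 + t_2) is slowly varying with respect to the growth at ∞ in the sense of (4); the ratio converges to 1 only logarithmically slowly, as a quick numerical probe shows (the example function is our illustrative choice, not from the paper):

```python
import math

# Illustrative choice (ours): phi(t1, t2) = ln(e + t1 + t2).
phi = lambda t1, t2: math.log(math.e + t1 + t2)

a1, a2 = 3.0, 0.2
ratios = [phi(a1 * t, a2 * t) / phi(t, t) for t in (1e2, 1e4, 1e8)]

# phi(a.t)/phi(t) approaches 1 as t1 ^ t2 -> infinity, monotonically here.
assert ratios[0] > ratios[1] > ratios[2] > 1.0
assert abs(ratios[2] - 1.0) < 5e-2
```

By contrast, any power t_1^{α} t_2^{β} with (α,β) ≠ (0,0) fails (4), since the ratio then equals the constant a_1^{α} a_2^{β} ≠ 1.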

Let Λ = (λ_1,λ_2) ≠ (H_1,H_2) = H. If a function φ : R_+² → (0,+∞) is slowly varying

with respect to the growth at 0, then L^0_{Λ,φ} = 0 or ∞ a.s.;

with respect to the growth at ∞, then L^∞_{Λ,φ} = 0 or ∞ a.s.

Let us introduce an auxiliary function ψ(t) = t_1^{λ_1−H_1} t_2^{λ_2−H_2} φ(t), t ∈ R_+². Then t_1^{λ_1} t_2^{λ_2} φ(t) = t_1^{H_1} t_2^{H_2} ψ(t), t ∈ R_+². The function ψ is not necessarily monotone in every coordinate, but we can show that ψ is monotone on some neighborhood of 0 if φ is slowly varying at 0, and on some neighborhood of ∞ if φ is slowly varying at ∞.

Let us consider the case λ_1 > H_1, λ_2 < H_2 and investigate the growth at 0. We intend to prove that the function ψ increases in the first coordinate and decreases in the second one on some neighborhood of 0. It follows from equality (3) that for any a_1 < 1, a_2 > 1 the following holds:

∀ ε > 0 ∃ δ > 0 : t_1 ∨ t_2 < δ ⇒ φ(a·t) < (1+ε)φ(t).

Then, for 0 < t_1 ∨ t_2 < δ we get

ψ(a·t) = a_1^{λ_1−H_1} a_2^{λ_2−H_2} t_1^{λ_1−H_1} t_2^{λ_2−H_2} φ(a·t) < a_1^{λ_1−H_1} a_2^{λ_2−H_2} (1+ε) ψ(t).

Therefore, for ε < (1/a_1)^{λ_1−H_1} a_2^{H_2−λ_2} − 1 there exists δ > 0 such that

ψ(a·t) ≤ ψ(t), t_1 ∨ t_2 < δ.

In a similar way we can prove the monotonicity of the function ψ for the other choices of λ_1, λ_2 and with respect to the growth at ∞. Thus, it follows from Theorem 4.1 that P(E^0_{H,ψ}) = 0 or 1 (P(E^∞_{H,ψ}) = 0 or 1). According to Corollary 4.1 there exist constants c^0_{H,ψ} and c^∞_{H,ψ} such that c^0_{H,ψ} = L^0_{H,ψ} and c^∞_{H,ψ} = L^∞_{H,ψ} a.s. Let us consider c^0_{H,ψ}. It is evident that L^0_{Λ,φ} = L^0_{H,ψ} = c^0_{H,ψ} = c^0_{Λ,φ} a.s. Therefore the event {L^0_{H,ψ} = L^0_{Λ,φ} = c^0_{Λ,φ}} occurs with probability one under an arbitrary ergodic scaling transformation S_a. Let L^0_{Λ,φ} ∘ S_a be the functional L^0_{Λ,φ} applied to the field S_a X. Since the function φ is slowly varying, we have

L^0_{Λ,φ} ∘ S_a = limsup_{t_1∨t_2→0} a_1^{−H_1} a_2^{−H_2} |X(a·t)| / (t_1^{λ_1} t_2^{λ_2} φ(t)) = limsup_{t_1∨t_2→0} a_1^{λ_1−H_1} a_2^{λ_2−H_2} (|X(a·t)| / ((a_1 t_1)^{λ_1} (a_2 t_2)^{λ_2} φ(a·t))) (φ(a·t)/φ(t)) = a_1^{λ_1−H_1} a_2^{λ_2−H_2} L^0_{Λ,φ}.

It follows from the condition Λ = (λ_1,λ_2) ≠ (H_1,H_2) = H that the last equality can hold for all a_1 > 0, a_2 > 0 only if c^0_{Λ,φ} = L^0_{Λ,φ} = 0 or +∞ a.s. The proof for the case of the growth at +∞ can be done in a similar way. □

Strong limit theorems

This section is devoted to strong limit theorems for real-valued self-similar fields in the sense of Definition 2.2. Let us prove these theorems for the function t_1^{H_1} t_2^{H_2} φ(t), t ∈ R_+², arising in Theorem 4.1, for fields with ergodic scaling transformation.
It is worth mentioning that it is possible to prove the theorems in this section without imposing the additional condition of ergodicity of the scaling transformation. We use the following notation, defined for the self-similar field X = {X(t), t ∈ R_+²} with index H ∈ (0,+∞)²:

X*(ω) = sup_{0 ≤ t_1 ≤ 1, 0 ≤ t_2 ≤ 1} |X(t,ω)|.

Since the distributions of a self-similar field are invariant under the scaling transformation S_a, all distributional properties can be concentrated on any finite rectangle. That is why all theorems in this section deal with the random variable X* defined by the values of the random field on the unit square. The following theorems establish sufficient conditions for a function to be an upper or a lower function for a self-similar field. Let us start by proving one auxiliary result. Let a function f : R_+ → (0,+∞) be non-decreasing and continuous. If E[f(X*)] = K < +∞, then for x > 0

P(sup_{0 ≤ t_1 ≤ λ_1, 0 ≤ t_2 ≤ λ_2} |X(t,ω)| ≥ x) ≤ K / f(λ_1^{−H_1} λ_2^{−H_2} x).

It follows from the self-similarity of the field that

sup_{0 ≤ t_1 ≤ λ_1, 0 ≤ t_2 ≤ λ_2} |X(t)| =d λ_1^{H_1} λ_2^{H_2} sup_{0 ≤ t_1 ≤ 1, 0 ≤ t_2 ≤ 1} |X(t)| = λ_1^{H_1} λ_2^{H_2} X*,

and therefore

P(sup_{0 ≤ t_1 ≤ λ_1, 0 ≤ t_2 ≤ λ_2} |X(t,ω)| ≥ x) = P(λ_1^{H_1} λ_2^{H_2} X*(ω) ≥ x) = P(X*(ω) ≥ λ_1^{−H_1} λ_2^{−H_2} x).

Since f is a positive and non-decreasing function, Chebyshev's inequality implies that

P(X*(ω) ≥ λ_1^{−H_1} λ_2^{−H_2} x) ≤ E[f(X*)] / f(λ_1^{−H_1} λ_2^{−H_2} x).

The lemma is proved. □
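Lemma 5.1 can be illustrated by Monte Carlo in the simplest one-parameter case: a standard Brownian motion (so H = 1/2, with the unit square replaced by the unit interval) and f(x) = x². Everything below — the grid discretization, the sample sizes and the empirical constant K — is our rough illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20000, 500
lam, x, H = 4.0, 5.0, 0.5            # interval length, threshold, Hurst index

# Empirical K = E[f(X*)] with f(x) = x^2, from paths simulated on [0, 1].
W1 = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps)), axis=1)
K = np.mean(np.abs(W1).max(axis=1) ** 2)

# Left-hand side of the lemma: P(sup_{[0, lam]} |W| >= x), simulated directly.
W = np.cumsum(rng.normal(0.0, np.sqrt(lam / n_steps), (n_paths, n_steps)), axis=1)
emp = np.mean(np.abs(W).max(axis=1) >= x)

bound = K / (lam ** (-H) * x) ** 2   # K / f(lam^{-H} x)
assert emp <= bound < 1.0
```

The bound is far from tight for Gaussian tails (Chebyshev with a polynomial f), but it is exactly the kind of estimate the series arguments of Theorems 5.1 and 5.2 need.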

Further in the text we shall use the notation 1 = (1,1).

Let f : R_+ → (0,+∞) be a non-decreasing continuous function such that E[f(X*)] is finite. We assume that a continuous function φ : R_+² → (0,+∞) satisfies the following conditions:

(i) φ is non-decreasing in every coordinate;
(ii) lim_{x→1} sup_{n,m=1,2,…} φ(x^n, x^m)/φ(x^{n−1}, x^{m−1}) = c < +∞;
(iii) ∫_1^{+∞} dx / (x f(φ(x·1))) < +∞.

Then

limsup_{s_1∧s_2→+∞} |X(s)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ c a.s.

Let ξ > 1 and n, m ∈ N. We put x_{nm} = ξ^{nH_1} ξ^{(n+m)H_2} φ(ξ^n, ξ^{n+m}) and

A_{nm} = {ω ∈ Ω : sup_{0 ≤ t_1 ≤ ξ^n, 0 ≤ t_2 ≤ ξ^{n+m}} |X(t,ω)| ≥ x_{nm}}.

The following inequality follows from Lemma 5.1:

P(A_{nm}) ≤ K / f(φ(ξ^n, ξ^{n+m})). (7)

Now let us prove the convergence of the series Σ_{n=1}^∞ P(A_{nm}). The functions f and φ are non-decreasing, so their superposition {f(φ(x)), x ∈ R_+²} is also non-decreasing in every coordinate. Thus, f(φ(x^n, x^n)) ≤ f(φ(x^n, x^{n+m})) for x > 1.

Taking into account inequality (7), we obtain

Σ_{n=1}^∞ P(A_{nm}) ≤ Σ_{n=1}^∞ K / f(φ(ξ^n, ξ^{n+m})) ≤ Σ_{n=1}^∞ K / f(φ(ξ^n 1)).

According to the integral criterion of series convergence for the positive non-decreasing function f(φ(ξ^x 1)), x > 0, the series converges if

I(f,φ) := ∫_1^{+∞} dx / f(φ(ξ^x 1)) < +∞.

Let us make the substitution y = ξ^x in the integral I(f,φ). Then dx = dy/(y ln ξ) and

I(f,φ) = ∫_ξ^{+∞} dy / (y f(φ(y1)) ln ξ) < ∫_1^{+∞} dy / (y f(φ(y1)) ln ξ).

Thus, the integral I(f,φ) is finite by condition (iii) of the theorem. So it follows from the Borel–Cantelli lemma that with probability one there exists a number n_0^m(ω) such that ω ∉ A_{nm} for all n ≥ n_0^m(ω). It means that for all m ≥ 1 and n ≥ n_0^m(ω)

sup_{0 ≤ t_1 ≤ ξ^n, 0 ≤ t_2 ≤ ξ^{n+m}} |X(t,ω)| ≤ ξ^{nH_1} ξ^{(n+m)H_2} φ(ξ^n, ξ^{n+m}) a.s.

Moreover, for every ξ > 1, m ≥ 1 and n > n_0^m(ω) we choose a point s = (s_1,s_2) such that ξ^{n−1} ≤ s_1 ≤ ξ^n and ξ^{n+m−1} ≤ s_2 ≤ ξ^{n+m}. Then we obtain with probability one that

|X(s,ω)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ ξ^{nH_1} ξ^{(n+m)H_2} φ(ξ^n, ξ^{n+m}) / (ξ^{(n−1)H_1} ξ^{(n+m−1)H_2} φ(ξ^{n−1}, ξ^{n+m−1})) ≤ ξ^{H_1+H_2} sup_{n,m≥1} φ(ξ^n, ξ^m)/φ(ξ^{n−1}, ξ^{m−1}).

For the case s_1 > s_2 we get the same inequality by similar reasoning. So

limsup_{s_1∧s_2→+∞} |X(s)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ ξ^{H_1+H_2} sup_{n,m≥1} φ(ξ^n, ξ^m)/φ(ξ^{n−1}, ξ^{m−1}) a.s.,

and, letting ξ → 1,

limsup_{s_1∧s_2→+∞} |X(s)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ lim_{ξ→1} sup_{n,m≥1} φ(ξ^n, ξ^m)/φ(ξ^{n−1}, ξ^{m−1}) = c a.s.

The theorem is proved. □ Let us consider the asymptotic growth at 0. Let f : R_+ → (0,+∞) be a non-decreasing continuous function such that E[f(X*)] is finite. We assume that a continuous function φ : R_+² → (0,+∞) satisfies the following conditions:

(i) φ is non-increasing in every coordinate;
(ii) ∫_0^1 dx / (x f(φ(x1))) < +∞.

Then

limsup_{s_1∨s_2→0} |X(s)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ 1 a.s.

Let ξ > 1. We put x_{nm} := ξ^{−nH_1} ξ^{−(n+m)H_2} φ(ξ^{−n}, ξ^{−n−m}), n, m ∈ N, and

A_{nm} = {ω ∈ Ω : sup_{0 ≤ t_1 ≤ ξ^{−n}, 0 ≤ t_2 ≤ ξ^{−n−m}} |X(t,ω)| ≥ x_{nm}}.

Lemma 5.1 implies the inequality

P(A_{nm}) ≤ K / f(φ(ξ^{−n}, ξ^{−n−m})). (9)

It follows immediately from inequality (9) that in order to prove the convergence of the series Σ_{n=1}^∞ P(A_{nm}) it suffices to show that

Σ_{n=1}^∞ K / f(φ(ξ^{−n}, ξ^{−n−m})) < +∞.

The function f is non-decreasing and the function φ is non-increasing, so their superposition f(φ(·)) is non-increasing in every coordinate.

Thus, f(φ(ξ^{−n} 1)) ≤ f(φ(ξ^{−n}, ξ^{−n−m})) for ξ > 1. Therefore,

Σ_{n=1}^∞ K / f(φ(ξ^{−n}, ξ^{−n−m})) ≤ Σ_{n=1}^∞ K / f(φ(ξ^{−n} 1)).

The last series converges if

I(f,φ) := ∫_1^{+∞} dx / f(φ(ξ^{−x} 1)) < +∞.

Let us make the substitution y = ξ^{−x} in the integral I(f,φ). Then dx = −dy/(y ln ξ) and

I(f,φ) = ∫_0^{ξ^{−1}} dy / (y f(φ(y1)) ln ξ) < ∫_0^1 dy / (y f(φ(y1)) ln ξ).

The integral I(f,φ) is finite by condition (ii) of the theorem. Thus, it follows from the Borel–Cantelli lemma that with probability one there exists a number n_0^m(ω) such that ω ∉ A_{nm} for all n ≥ n_0^m(ω). It means that for all m ≥ 1 and n ≥ n_0^m(ω)

sup_{0 ≤ t_1 ≤ ξ^{−n}, 0 ≤ t_2 ≤ ξ^{−n−m}} |X(t,ω)| ≤ ξ^{−nH_1} ξ^{−(n+m)H_2} φ(ξ^{−n}, ξ^{−n−m}) a.s.

Furthermore, for every ξ > 1, m ≥ 1 and n > n_0^m(ω) we choose a point s = (s_1,s_2) such that ξ^{−n−1} ≤ s_1 ≤ ξ^{−n} and ξ^{−n−m−1} ≤ s_2 ≤ ξ^{−n−m}. Then we obtain with probability one

|X(s,ω)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ ξ^{−nH_1} ξ^{−(n+m)H_2} φ(ξ^{−n}, ξ^{−n−m}) / (ξ^{−(n+1)H_1} ξ^{−(n+m+1)H_2} φ(ξ^{−n}, ξ^{−n−m})) = ξ^{H_1+H_2},

and

limsup_{s_1∨s_2→0} |X(s)| / (s_1^{H_1} s_2^{H_2} φ(s)) ≤ lim_{ξ→1} ξ^{H_1+H_2} = 1 a.s.

The theorem is proved. □ Now we can apply these theorems to self-similar fields with ergodic scaling transformation. The following corollary gives sufficient conditions for a function to be an upper function for such fields. Suppose that for the self-similar field X = {X(t), t ∈ R_+²} with ergodic scaling transformation there exists a constant γ > 0 such that

E(X*)^γ < +∞.

Then, for any ε > 0 and an arbitrary function φ : R_+² → (0,+∞) slowly varying with respect to the growth at 0 (at ∞),

L^0_{H−ε,φ} = 0 a.s. (L^∞_{H+ε,φ} = 0 a.s.)

Let ε > 0 be fixed. We put f(x) = x^γ, x > 0. Then E[f(X*)] < +∞. Let us check whether the functions ψ(t) = t_1^ε t_2^ε and ψ_0(t) = t_1^{−ε} t_2^{−ε}, t ∈ (0,+∞)², satisfy the conditions of Theorems 5.1 and 5.2, respectively. It is evident that the conditions (i) of both theorems are fulfilled. Now we consider the behavior at ∞. Let us check condition (ii) of Theorem 5.1 for the function ψ:

lim_{x→1} sup_{n,m≥1} ψ(x^n, x^m)/ψ(x^{n−1}, x^{m−1}) = lim_{x→1} x^{2ε} = 1.

Condition (iii) is also fulfilled since

∫_1^{+∞} dx / (x f(x^{2ε})) = ∫_1^{+∞} dx / x^{1+2γε} < +∞.

Thus, Theorem 5.1 implies that L^∞_{H+ε,1} ≤ 1 a.s.
Since the constant 1 can be regarded as a slowly varying function, it follows from Lemma 4.1 that L^∞_{H+ε,1} = 0 or ∞ a.s. Therefore, L^∞_{H+ε,1} = 0 a.s. for any ε > 0.

Let us now consider the behavior at 0. We check condition (ii) of Theorem 5.2 for the function ψ_0:

∫_0^1 dx / (x f(x^{−2ε})) = ∫_0^1 dx / x^{1−2γε} < +∞.

Thus, Theorem 5.2 implies that L^0_{H−ε,1} ≤ 1 a.s. It follows from Lemma 4.1 that L^0_{H−ε,1} = 0 or ∞ a.s.; therefore L^0_{H−ε,1} = 0 a.s. for any ε > 0. Further, we intend to prove that L^0_{H−ε,φ} = 0 a.s. (L^∞_{H+ε,φ} = 0 a.s.). Let us investigate the behavior at ∞. We fix a_0 > 1 and a = (a_1,a_2). The convergence in (4) is uniform in a on any finite rectangle, for instance on [1, a_0²]². Let 0 < α < ε, and choose δ > 0 such that 0 < δ < 1 − a_0^{α−ε}. Since the limit in (4) is uniform, there exists t_0 > 0 such that φ(a·t) > φ(t)(1−δ) for all (t_1,t_2) ∈ R_+² with t_1 ∧ t_2 > t_0 and all a ∈ [1, a_0²]². Now, for arbitrary t with t_1 > t_0, t_2 > t_0, we can choose numbers n, m ∈ N and a ∈ [a_0, a_0²]² such that a_0^n ≤ t_1/t_0 ≤ a_0^{n+1}, a_0^m ≤ t_2/t_0 ≤ a_0^{m+1} and t_1 = a_1^n t_0, t_2 = a_2^m t_0. Then

t_1^ε t_2^ε φ(t) = t_0^{2ε} a_1^{nε} a_2^{mε} φ(a_1^n t_0, a_2^m t_0) > t_0^{2ε} a_1^{nε} a_2^{mε} φ(a_1^n t_0, a_2^{m−1} t_0)(1−δ) > ⋯ > t_0^{2ε} a_1^{nε} a_2^{mε} φ(a_1^n t_0, t_0)(1−δ)^m > ⋯ > t_0^{2ε} a_1^{nε} a_2^{mε} φ(t_0 1)(1−δ)^{n+m} > t_0^{2ε} a_1^{nε} a_2^{mε} φ(t_0 1) a_0^{(α−ε)(n+m)} ≥ t_0^{2ε} a_1^{nε} a_2^{mε} φ(t_0 1) a_1^{(α−ε)n} a_2^{(α−ε)m} = (t_0 a_1^n)^α (t_0 a_2^m)^α t_0^{2(ε−α)} φ(t_0 1) = t_1^α t_2^α φ(t_0 1) t_0^{2(ε−α)}.
Thus t_1^ε t_2^ε φ(t) ≥ t_1^α t_2^α φ(t_0 1) t_0^{2(ε−α)} → +∞ as t_1 ∧ t_2 → +∞, and since L^∞_{H+α,1} = 0 a.s., we get L^∞_{H+ε,φ} = 0 a.s. The proof of the equality L^0_{H−ε,φ} = 0 a.s. can be done in a similar way. □ The following theorem gives sufficient conditions for a function to be a lower one; but first we prove an auxiliary lemma. Let g : R_+ → (0,+∞) be a continuous non-increasing function such that E[g(X*)] = K < +∞. Then for any x > 0

P(sup_{0 ≤ t_1 ≤ λ_1, 0 ≤ t_2 ≤ λ_2} |X(t,ω)| ≤ x) ≤ K / g(λ_1^{−H_1} λ_2^{−H_2} x).

Using argumentation similar to that of Lemma 5.1, we obtain

P(sup_{0 ≤ t_1 ≤ λ_1, 0 ≤ t_2 ≤ λ_2} |X(t,ω)| ≤ x) = P(X*(ω) ≤ λ_1^{−H_1} λ_2^{−H_2} x).

Since the function g is positive, continuous and non-increasing, on the event {X* ≤ λ_1^{−H_1} λ_2^{−H_2} x} we have g(X*) ≥ g(λ_1^{−H_1} λ_2^{−H_2} x), whence

E g(X*) ≥ E[g(λ_1^{−H_1} λ_2^{−H_2} x) χ{X* ≤ λ_1^{−H_1} λ_2^{−H_2} x}] = P(X* ≤ λ_1^{−H_1} λ_2^{−H_2} x) g(λ_1^{−H_1} λ_2^{−H_2} x).

The lemma is proved. □

Let g : R_+ → (0,+∞) be a continuous non-increasing function such that E[g(X*)] is finite. We assume that a function ψ : R_+² → (0,+∞) is continuous and satisfies the conditions:

(i) ψ is non-increasing in every coordinate;
(ii) ∫_1^{+∞} dx / (x g(ψ(x1))) < +∞.

Then

liminf_{s_1∧s_2→∞} sup_{t ∈ [0,s_1]×[0,s_2]} |X(t)| / (s_1^{H_1} s_2^{H_2} ψ(s)) ≥ 1 a.s.

Proof. Let $\xi>1$ and $n,m\in\mathbb{N}$. Put $y_{nm}=\xi^{nH_{1}}\xi^{(n+m)H_{2}}\psi(\xi^{n},\xi^{n+m})$ and define the sequence of random events

\[
B_{nm}=\Big\{\omega\in\Omega\ \Big|\ \sup_{0\le t_{1}\le\xi^{n},\,0\le t_{2}\le\xi^{n+m}}|X(t,\omega)|\le y_{nm}\Big\}.
\]

Lemma 5.2 yields the following bound for the probabilities of these events:

\[
\mathbf{P}(B_{nm})\le\frac{K}{g(\psi(\xi^{n},\xi^{n+m}))}.
\]

To prove the theorem we show that, from some index on, the events $B_{nm}$ do not occur; for this we establish the convergence of the series $\sum_{n=1}^{\infty}\mathbf{P}(B_{nm})$. Recall that under the conditions of the theorem the functions $g$ and $\psi$ are non-increasing, so their composition $g(\psi(\cdot,\cdot))$ is non-decreasing in every coordinate. Hence, by the same reasoning as in Theorem 5.1,

\[
\sum_{n=1}^{\infty}\mathbf{P}(B_{nm})\le\sum_{n=1}^{\infty}\frac{K}{g(\psi(\xi^{n},\xi^{n+m}))}\le\sum_{n=1}^{\infty}\frac{K}{g(\psi(\xi^{n}\mathbf{1}))}.
\]

Since the function $g(\psi(\xi^{x},\xi^{x}))$, $x>0$, is non-decreasing, the integral criterion for series convergence shows that it suffices to prove the finiteness of the integral

\[
I(g,\psi)=\int_{1}^{+\infty}\frac{dx}{g(\psi(\xi^{x}\mathbf{1}))}.
\]

The substitution $y=\xi^{x}$, $dy=y\ln\xi\,dx$, gives

\[
I(g,\psi)=\int_{\xi}^{+\infty}\frac{dy}{y\,g(\psi(y\mathbf{1}))\ln\xi}\le\frac{1}{\ln\xi}\int_{1}^{+\infty}\frac{dy}{y\,g(\psi(y\mathbf{1}))}.
\]

So the integral $I(g,\psi)$ is finite by condition (ii) of the theorem.
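In the examples below, condition (ii) is arranged by choosing $g(\psi(x\mathbf{1}))=(\ln(x+e))^{1+\varepsilon}$, so that the integrand becomes $1/(x(\ln(x+e))^{1+\varepsilon})$. A small numerical sketch (an illustration, with arbitrary cut-offs and step counts) contrasts the convergent case $\varepsilon>0$ with the borderline divergent case $\varepsilon=0$:

```python
import math


def tail_integral(eps, lo, hi, steps=200000):
    """Midpoint approximation of  int_lo^hi dx / (x * (ln(x+e))**(1+eps)).

    Substituting x = exp(u) turns dx/x into du, so we integrate
    du / (ln(exp(u) + e))**(1+eps) over [ln(lo), ln(hi)].
    """
    a, b = math.log(lo), math.log(hi)
    h = (b - a) / steps
    s = 0.0
    for k in range(steps):
        u = a + (k + 0.5) * h
        s += h / math.log(math.exp(u) + math.e) ** (1 + eps)
    return s


# eps = 1: the tail beyond x = 1e3 is already small (the integral converges);
tail_conv = tail_integral(1.0, 1e3, 1e6)
# eps = 0: the same tail grows like lnln(1e6) - lnln(1e3) (divergence).
tail_div = tail_integral(0.0, 1e3, 1e6)
print(tail_conv, tail_div)
```

For $\varepsilon=1$ the tail integral behaves like $1/\ln(10^{3})-1/\ln(10^{6})\approx0.07$, while for $\varepsilon=0$ it behaves like $\ln\ln(10^{6})-\ln\ln(10^{3})\approx\ln2$, illustrating why a strictly positive $\varepsilon$ is needed.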

Therefore, the series $\sum_{n=1}^{\infty}\mathbf{P}(B_{nm})$ converges. By the Borel–Cantelli lemma, with probability one there exists a number $N_{0}^{m}(\omega)$ such that for all $n\ge N_{0}^{m}(\omega)$ the event $B_{nm}$ does not occur, i.e.

\[
\sup_{0\le t_{1}\le\xi^{n},\,0\le t_{2}\le\xi^{n+m}}|X(t,\omega)|>\xi^{nH_{1}}\xi^{(n+m)H_{2}}\psi(\xi^{n},\xi^{n+m}).
\]

Now, for arbitrary $\xi>1$, $m>0$ and $n>N_{0}^{m}(\omega)$, choose a point $s=(s_{1},s_{2})$ with $\xi^{n}\le s_{1}\le\xi^{n+1}$, $\xi^{n+m}\le s_{2}\le\xi^{n+m+1}$. Then, with probability one,

\[
\frac{\sup_{0\le t_{1}\le s_{1},\,0\le t_{2}\le s_{2}}|X(t,\omega)|}{s_{1}^{H_{1}}s_{2}^{H_{2}}\psi(s)}\ge\frac{\sup_{0\le t_{1}\le\xi^{n},\,0\le t_{2}\le\xi^{n+m}}|X(t,\omega)|}{\xi^{(n+1)H_{1}}\xi^{(n+m+1)H_{2}}\psi(\xi^{n},\xi^{n+m})}\ge\frac{\xi^{nH_{1}}\xi^{(n+m)H_{2}}\psi(\xi^{n},\xi^{n+m})}{\xi^{(n+1)H_{1}}\xi^{(n+m+1)H_{2}}\psi(\xi^{n},\xi^{n+m})}=\xi^{-H_{1}-H_{2}}.
\]

Since $\xi>1$ is arbitrary, letting $\xi\downarrow1$ yields

\[
\liminf_{s_{1}s_{2}\to\infty}\ \frac{\sup_{t\in[0,s_{1}]\times[0,s_{2}]}|X(t)|}{s_{1}^{H_{1}}s_{2}^{H_{2}}\psi(s)}\ge1\quad\text{a.s.}
\]

The theorem is proved. □

Strong limit theorems for Gaussian fields

Let us consider a few examples of how Theorems 5.1 and 5.3 can be applied to centered Gaussian fields. In this section we assume that the real-valued Gaussian fields have continuous sample paths. The first condition of Theorems 5.1 and 5.3 is the existence of a non-decreasing function $f$ and a non-increasing function $g$ such that $\mathbf{E}[f(X)]<+\infty$ and $\mathbf{E}[g(X)]<+\infty$. It is not easy to check these conditions directly, but many results are available for Gaussian fields concerning the behavior of tail probabilities and small-deviation probabilities. The following lemma shows how this information can be used to verify the first condition of Theorems 5.1 and 5.3.

Lemma 6.1. Let $f,g:\mathbb{R}_{+}\to(0,+\infty)$, with $f$ non-decreasing, $g$ non-increasing, and $f,g\in C^{1}(\mathbb{R}_{+})$. Assume that $Z$ is a positive random variable and the functions $a,b:\mathbb{R}_{+}\to\mathbb{R}_{+}$ are such that $\mathbf{P}(Z>x)\le a(x)$ and $\mathbf{P}(Z\le x)\le b(x)$. If

\[
\int_{\cdot}^{+\infty}{f^{\prime}}(x)a(x)\,dx<+\infty\quad\Big(\text{or}\quad\int_{0}^{\cdot}\big|{g^{\prime}}(x)\big|\,b(x)\,dx<+\infty\Big),
\]

then $\mathbf{E}[f(Z)]<+\infty$ (or $\mathbf{E}[g(Z)]<+\infty$, respectively).

Proof. The following relations are valid for the function $f$:

\[
\begin{aligned}
\mathbf{E}[f(Z)]&=\int_{0}^{+\infty}f(x)\,d\mathbf{P}(Z\le x)=-\int_{0}^{+\infty}f(x)\,d\mathbf{P}(Z>x)\\
&=-f(x)\mathbf{P}(Z>x)\Big|_{0}^{\infty}+\int_{0}^{+\infty}{f^{\prime}}(x)\mathbf{P}(Z>x)\,dx\\
&=-\lim_{x\to\infty}f(x)\mathbf{P}(Z>x)+f(0)+\int_{0}^{+\infty}{f^{\prime}}(x)\mathbf{P}(Z>x)\,dx\\
&\le f(0)+\int_{0}^{+\infty}{f^{\prime}}(x)a(x)\,dx<+\infty.
\end{aligned}
\]

And for the function $g$ the following is true:

\[
\begin{aligned}
\mathbf{E}[g(Z)]&=\int_{0}^{+\infty}g(x)\,d\mathbf{P}(Z\le x)=g(x)\mathbf{P}(Z\le x)\Big|_{0}^{\infty}-\int_{0}^{+\infty}{g^{\prime}}(x)\mathbf{P}(Z\le x)\,dx\\
&=g(+\infty)-\lim_{x\to0}g(x)\mathbf{P}(Z\le x)-\int_{0}^{+\infty}{g^{\prime}}(x)\mathbf{P}(Z\le x)\,dx\\
&\le g(+\infty)-\int_{0}^{+\infty}{g^{\prime}}(x)b(x)\,dx<+\infty.
\end{aligned}
\]

The lemma is proved. □
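As a concrete illustration (not from the paper), take $Z=|N(0,1)|$ with the standard tail bound $\mathbf{P}(Z>x)\le e^{-x^{2}/2}$ and $f(x)=e^{x^{2}/4}$. Lemma 6.1 then predicts $\mathbf{E}[f(Z)]\le f(0)+\int_{0}^{\infty}f'(x)e^{-x^{2}/2}\,dx=2$, while the exact value is $\mathbf{E}\,e^{Z^{2}/4}=\sqrt{2}$. Both can be checked by simple quadrature:

```python
import math


def integrate(fun, a, b, steps=200000):
    """Plain midpoint rule on [a, b]."""
    h = (b - a) / steps
    return sum(fun(a + (k + 0.5) * h) for k in range(steps)) * h


# f(x) = exp(x^2/4), so f(0) = 1 and f'(x) = (x/2) exp(x^2/4).
# With the tail bound a(x) = exp(-x^2/2):
#   f'(x) * a(x) = (x/2) exp(-x^2/4), whose integral over [0, inf) equals 1.
f0 = 1.0
bound = f0 + integrate(lambda x: 0.5 * x * math.exp(-x * x / 4.0), 0.0, 40.0)

# Exact expectation E[exp(Z^2/4)] for Z = |N(0,1)|:
#   int_0^inf exp(x^2/4) * sqrt(2/pi) * exp(-x^2/2) dx = sqrt(2).
exact = integrate(lambda x: math.sqrt(2.0 / math.pi) * math.exp(-x * x / 4.0),
                  0.0, 40.0)
print(bound, exact)
```

The bound $2$ indeed dominates the exact value $\sqrt{2}\approx1.414$; the gap comes from the crudeness of the tail bound near $x=0$.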

So, if an inequality $\mathbf{P}(X>x)\le a(x)$ for the tail probability is available, it suffices to find a positive non-decreasing function $f$ such that $\int_{\cdot}^{+\infty}{f^{\prime}}(x)a(x)\,dx<+\infty$; Lemma 6.1 then implies that the expectation $\mathbf{E}[f(X)]$ is finite. Similarly, Lemma 6.1 can be applied to a small-deviation estimate $\mathbf{P}(X\le x)\le b(x)$.

Example 1. Let us apply Theorem 5.1 to a centered Gaussian self-similar field $X=\{X(t),t\in\mathbb{R}_{+}^{2}\}$ with index $H=(H_{1},H_{2})\in(0,1)^{2}$. It follows from the well-known tail estimates for Gaussian fields that there exists a constant $c_{1}>0$ such that $\mathbf{E}[f(X)]<+\infty$ for the function

\[
f(y)=\exp\Big\{(c_{1}-\varepsilon)\frac{y^{2}}{2}\Big\},\quad 0<\varepsilon<c_{1}.
\]

Now we need to define a non-decreasing function $\varphi:\mathbb{R}_{+}^{2}\to(0,+\infty)$ so that condition (iii) of Theorem 5.1 is fulfilled, namely $\int_{1}^{+\infty}\frac{dx}{x\,f(\varphi(x\mathbf{1}))}<+\infty$. Let us choose $\varphi$ satisfying $f(\varphi(x\mathbf{1}))=(\ln(x+e))^{1+\eta}$; with this choice, condition (iii) of Theorem 5.1 holds for every $\eta>0$. Thus, for $\eta=\varepsilon$ we obtain

\[
\exp\Big\{(c_{1}-\varepsilon)\frac{\varphi^{2}(x\mathbf{1})}{2}\Big\}=(\ln(x+e))^{1+\varepsilon},\quad x\in\mathbb{R}_{+}.
\]

Furthermore, the function $\varphi$ is thus determined on the diagonal $\{(x\mathbf{1}),\,x\ge0\}$:

\[
\varphi(x\mathbf{1})=\sqrt{\frac{2(1+\varepsilon)}{c_{1}-\varepsilon}\ln\ln(x+e)},\quad x\in\mathbb{R}_{+}.
\]
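The algebra behind this choice can be checked directly: substituting $\varphi(x\mathbf{1})=\sqrt{\tfrac{2(1+\varepsilon)}{c_{1}-\varepsilon}\ln\ln(x+e)}$ into $f(y)=\exp\{(c_{1}-\varepsilon)y^{2}/2\}$ must recover $(\ln(x+e))^{1+\varepsilon}$. A small numerical verification, with $c_{1}$ and $\varepsilon$ chosen arbitrarily for illustration:

```python
import math

c1, eps = 0.8, 0.3  # illustrative values satisfying 0 < eps < c1


def f(y):
    return math.exp((c1 - eps) * y * y / 2.0)


def phi_diag(x):
    # phi(x * 1) = sqrt( 2(1+eps)/(c1-eps) * lnln(x+e) )
    return math.sqrt(2.0 * (1.0 + eps) / (c1 - eps)
                     * math.log(math.log(x + math.e)))


# f(phi(x*1)) should equal (ln(x+e))^(1+eps) for every x >= 0.
for x in [1.0, 10.0, 1e3, 1e6]:
    lhs = f(phi_diag(x))
    rhs = math.log(x + math.e) ** (1.0 + eps)
    assert abs(lhs - rhs) < 1e-9 * rhs
```

The exponents cancel exactly: $(c_{1}-\varepsilon)/2$ times $2(1+\varepsilon)/(c_{1}-\varepsilon)$ leaves $(1+\varepsilon)\ln\ln(x+e)$ in the exponent, as required.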

Moreover, the function $\varphi$ can be extended to the plane in any way that preserves conditions (i) and (ii) of Theorem 5.1. For example, the following three functions satisfy the mentioned conditions:

$\varphi_{1}(x)=\sqrt{(2+\delta)\ln\ln\big(x_{1}+x_{2}^{2}+e\big)}$,

$\varphi_{2}(x)=\sqrt{(2+\delta)\ln\big(\ln(x_{1}+e)+\ln(x_{2}+e)\big)}$,

$\varphi_{3}(x)=\sqrt{2+\delta}\,\sqrt[4]{\ln\ln(x_{1}+e)}\,\sqrt[4]{\ln\ln(x_{2}+e)}$,

where $x=(x_{1},x_{2})\in\mathbb{R}_{+}^{2}$ and $\delta=2\frac{3\varepsilon+1-c_{1}}{c_{1}-\varepsilon}>2\frac{1-c_{1}}{c_{1}}$. Indeed, these functions are convex and non-decreasing in every coordinate, so the supremum in condition (ii) of Theorem 5.1 is attained at $n,m=1$. Therefore, $c=1$ and

\[
\forall i\in\{1,2,3\}:\quad\limsup_{s_{1}s_{2}\to\infty}\frac{|X(s)|}{s_{1}^{H_{1}}s_{2}^{H_{2}}\varphi_{i}(s)}\le1\quad\text{a.s.}
\]

Example 2. Now let us consider how Theorem 5.3 can be applied. As mentioned before, estimates for small-deviation probabilities can be used to check the first condition of the theorem. Such estimates are quite crude for general Gaussian random fields, but for the narrower class of fractional Brownian sheets more precise results exist. Let $\{B^{H}(t),t\in\mathbb{R}_{+}^{2}\}$ be a fractional Brownian sheet with index $H=(H_{1},H_{2})\in(0,1)^{2}$ (Definition 2.5). It was proved by Mason and Shi (see the References) that the following limit holds for $H_{1}\ne H_{2}$:

\[
\lim_{x\to0}x^{2/H}\ln\mathbf{P}\{B^{H}\le x\}=-\tau_{H},
\]

where $H$ is the minimum of $H_{1}$ and $H_{2}$, and $\tau_{H}$ is a constant depending on $H$. In the case $H_{1}=H_{2}=H$ there is a two-sided estimate for the small-deviation probability:

\[
-K_{1}|\ln x|^{2/H}x^{-2/H}\le\ln\mathbf{P}\{B^{H}\le x\}\le-K_{1}^{-1}x^{-2/H},\quad 0<x<1,
\]

where $K_{1}$ is a constant. Summarizing the results for both cases, we conclude that there exist constants $c_{2}>0$ and $b>0$ such that for all $y\in(0,b)$ the following inequality holds:

\[
\mathbf{P}\{B^{H}\le y\}\le\exp\{-c_{2}y^{-2/H}\}.
\]

For arbitrary $0<\delta<c_{2}$ define a function $g:\mathbb{R}_{+}\to(0,+\infty)$ by

\[
g(y)=\exp\{(c_{2}-\delta)y^{-2/H}\}.
\]

Then Lemma 6.1 implies that $\mathbf{E}[g(B^{H})]<+\infty$. Let us choose the function $\psi:\mathbb{R}_{+}^{2}\to(0,+\infty)$ so that condition (ii) of Theorem 5.3 is fulfilled, i.e. $\int_{1}^{+\infty}\frac{dx}{x\,g(\psi(x\mathbf{1}))}<+\infty$. This condition holds if $g(\psi(x\mathbf{1}))=(\ln(x+e))^{1+\varepsilon}$, $\varepsilon>0$. Then

\[
\psi(x\mathbf{1})=\Big(\frac{c_{2}-\delta}{1+\varepsilon}\Big)^{H/2}(\ln\ln(x+e))^{-H/2},\quad x>0.
\]

Further, we need to extend $\psi$ to $\mathbb{R}_{+}^{2}$ so that it remains non-increasing in every coordinate. Let $\varepsilon=\frac{\delta}{c_{2}-2\delta}$, $2\delta<c_{2}$; then the following functions satisfy conditions (i)–(ii) of Theorem 5.3:

\[
\psi_{1}(x)=(c_{2}-2\delta)^{H/2}\big(\ln\ln\big(x_{1}+x_{2}^{2}+e\big)\big)^{-H/2},
\]
\[
\psi_{2}(x)=(c_{2}-2\delta)^{H/2}(\ln\ln(x_{1}+e))^{-H/4}(\ln\ln(x_{2}+e))^{-H/4},
\]

where $x=(x_{1},x_{2})\in\mathbb{R}_{+}^{2}$.

Thus, we obtain the following inequalities:

\[
\liminf_{s_{1}s_{2}\to\infty}\frac{\big[\ln\ln\big(s_{1}+s_{2}^{2}+e\big)\big]^{H/2}}{s_{1}^{H_{1}}s_{2}^{H_{2}}}\sup_{0\le t_{1}\le s_{1},\,0\le t_{2}\le s_{2}}\big|B^{H}(t)\big|\ge c_{2}^{H/2}\quad\text{a.s.},
\]
\[
\liminf_{s_{1}s_{2}\to\infty}\frac{[\ln\ln(s_{1}+e)]^{H/4}[\ln\ln(s_{2}+e)]^{H/4}}{s_{1}^{H_{1}}s_{2}^{H_{2}}}\sup_{0\le t_{1}\le s_{1},\,0\le t_{2}\le s_{2}}\big|B^{H}(t)\big|\ge c_{2}^{H/2}\quad\text{a.s.}
\]
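These iterated-logarithm normalizations can be probed by simulation. The sketch below (an illustration, not from the paper) simulates a standard Brownian sheet, i.e. the case $H_{1}=H_{2}=1/2$, on a grid over $[0,1]^{2}$ via double cumulative sums of independent Gaussian increments, and then uses self-similarity to evaluate the normalized supremum at a large scale $s$; the grid size, scale, and $\delta$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256            # grid resolution on [0, 1]^2
d = 1.0 / n

# Brownian sheet increments: iid N(0, d^2) per cell, then double cumsum,
# so that Var W(1,1) = n^2 * d^2 = 1.
Z = rng.normal(0.0, d, size=(n, n))
W = np.cumsum(np.cumsum(Z, axis=0), axis=1)

sup_unit = np.abs(W).max()   # grid approximation of sup over [0,1]^2

# By self-similarity, sup over [0,s1]x[0,s2] =d s1^(1/2) s2^(1/2) * sup_unit,
# so the normalized quantity sup / (s1^(1/2) s2^(1/2) phi1(s)) reduces to
# sup_unit / phi1(s), with the iterated-log normalization of Example 1.
s1 = s2 = 1e6
delta = 0.1
phi1 = np.sqrt((2.0 + delta) * np.log(np.log(s1 + s2 ** 2 + np.e)))
ratio = sup_unit / phi1
print(sup_unit, ratio)
```

A single sample only illustrates the order of magnitude: for these parameters $\varphi_{1}(s)\approx2.6$, and the normalized ratio typically falls below $1$, consistent with the $\limsup\le1$ bound (the almost-sure statements, of course, concern the limit along the whole path, not one sample).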

So, we have presented examples of upper and lower limiting functions for the fractional Brownian sheet $B^{H}$.

References

Ayache, A., Leger, S., Pontier, M.: Drap brownien fractionnaire. Potential Anal. 17(1), 31–43 (2002). MR1906407

Benson, D., Meerschaert, M.M., Baeumer, B., Scheffler, H.-P.: Aquifer operator scaling and the effect on solute mixing and dispersion. Water Resour. Res. 42, 1–18 (2006)

Bonami, A., Estrade, A.: Anisotropic analysis of some Gaussian models. J. Fourier Anal. Appl. 9, 215–236 (2003). MR1988750

Cornfeld, I.P., Fomin, S.V., Sinai, Ya.G.: Ergodic Theory. Springer, New York (1982). MR0832433

Davies, S., Hall, P.: Fractal analysis of surface roughness by using spatial data (with discussion). J. R. Stat. Soc. B 61, 3–37 (1999). MR1664088

Embrechts, P., Maejima, M.: Selfsimilar Processes. Princeton University Press (2001). MR1920153

Genton, M.G., Perrin, O., Taqqu, M.S.: Self-similarity and Lamperti transformation for random fields. Stochastic Models 23, 397–411 (2007). MR2341075

Kôno, N.: Iterated log type strong limit theorems for self-similar processes. Proc. Jpn. Acad., Ser. A 59, 85–87 (1983). MR0711302

Ledoux, M.: Isoperimetry and Gaussian analysis. In: Lectures on Probability Theory and Statistics. Lecture Notes in Mathematics, vol. 1648, pp. 165–294. Springer, Berlin, Heidelberg (1996). MR1600888

Lu, W.Z., Wang, X.K.: Evolving trend and self-similarity of ozone pollution in central Hong Kong ambient during 1984–2002. Sci. Total Environ. 357, 160–168 (2006)

Makogin, V.I., Mishura, Yu.S., Shevchenko, G.M., Zolota, A.V.: Asymptotic behaviour of the trajectories of the fractional Brownian motion, anisotropic fractional Brownian field and their fractional derivatives. Appl. Stat. Actuar. Financ. Math. 1–2, 110–115 (2013) (in Ukrainian)

Mason, D.M., Shi, Z.: Small deviations for some multi-parameter Gaussian processes. J. Theor. Probab. 14, 213–239 (2001). MR1822902

Morata, A., Martin, M.L., Luna, M.Y., Valero, F.: Self-similarity patterns of precipitation in the Iberian Peninsula. Theor. Appl. Climatol. 85, 41–59 (2006)

Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman & Hall (1994). MR1280932

Takashima, K.: Sample path properties of ergodic self-similar processes. Osaka J. Math. 26, 159–189 (1989). MR0991287

Taylor, M.: Random fields: stationarity, ergodicity, and spectral behavior. http://www.unc.edu/math/Faculty/met/rndfcn.pdf