Sufficient conditions are presented on the offspring and immigration distributions of a second-order Galton–Watson process (Xn)n⩾−1 with immigration, under which the distribution of the initial values (X0,X−1) can be uniquely chosen such that the process becomes strongly stationary and the common distribution of Xn, n⩾−1, is regularly varying.
Keywords: second-order Galton–Watson process with immigration, regularly varying distribution, tail behavior. MSC: 60J80, 60G70. Supported by the Hungarian Croatian Intergovernmental S&T Cooperation Programme for 2017–2018 under Grant No. 16-1-2016-0027. Mátyás Barczy is supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

Introduction
Higher-order Galton–Watson processes with immigration having finite second moment (also called Generalized Integer-valued AutoRegressive (GINAR) processes) were introduced by Latour [14, equation (1.1)]. Pénisson and Jacob [16] used higher-order Galton–Watson processes (without immigration) for studying the decay phase of an epidemic, and, as an application, they investigated the Bovine Spongiform Encephalopathy epidemic in Great Britain after the 1988 feed ban law. As a continuation, Pénisson [15] introduced estimators of the so-called infection parameter in the growth and decay phases of an epidemic. Recently, Kashikar and Deshmukh [12, 13] and Kashikar [11] used second-order Galton–Watson processes (without immigration) for modeling the swine flu data for Pune, India and La Gloria, Mexico. Kashikar and Deshmukh [12] also studied their basic probabilistic properties, such as a formula for their probability generating function, the probability of extinction, long-run behavior and conditional least squares estimation of the offspring means.
Let Z+, N, R and R+ denote the set of non-negative integers, positive integers, real numbers and non-negative real numbers, respectively. The natural basis of Rd will be denoted by {e1,…,ed}. For x∈R, the integer part of x is denoted by ⌊x⌋. Every random variable will be defined on a probability space (Ω,A,P). Convergence in distribution and equality in distributions of random variables or stochastic processes is denoted by ⟶D and =D, respectively.
First, we recall the Galton–Watson process with immigration, which assumes that an individual can reproduce only once during its lifetime at age 1, and then it dies immediately. The initial population size at time 0 will be denoted by X0. For each n∈N, the population consists of the offsprings born at time n and the immigrants arriving at time n. For each n,i∈N, the number of offsprings produced at time n by the ith individual of the (n−1)th generation will be denoted by ξn,i. The number of immigrants in the nth generation will be denoted by εn. Then, for the population size Xn of the nth generation, we have
\[X_n=\sum_{i=1}^{X_{n-1}}\xi_{n,i}+\varepsilon_n,\qquad n\in\mathbb{N},\]
where ∑i=10:=0. Here {X0,ξn,i,εn:n,i∈N} are supposed to be independent non-negative integer-valued random variables, and {ξn,i:n,i∈N} and {εn:n∈N} are supposed to consist of identically distributed random variables, respectively. If εn=0, n∈N, then we say that (Xn)n∈Z+ is a Galton–Watson process (without immigration).
Next, we introduce the second-order Galton–Watson branching model with immigration. In this model we suppose that an individual reproduces at age 1 and also at age 2, and then it dies immediately. For each n∈N, the population consists again of the offsprings born at time n and the immigrants arriving at time n. For each n,i,j∈N, the number of offsprings produced at time n by the ith individual of the (n−1)th generation and by the jth individual of the (n−2)th generation will be denoted by ξn,i and ηn,j, respectively, and εn denotes the number of immigrants in the nth generation. Then, for the population size Xn of the nth generation, we have
\[X_n=\sum_{i=1}^{X_{n-1}}\xi_{n,i}+\sum_{j=1}^{X_{n-2}}\eta_{n,j}+\varepsilon_n,\qquad n\in\mathbb{N},\]
where X−1 and X0 are non-negative integer-valued random variables (the initial population sizes). Here {X−1,X0,ξn,i,ηn,j,εn:n,i,j∈N} are supposed to be non-negative integer-valued random variables such that {(X−1,X0),ξn,i,ηn,j,εn:n,i,j∈N} are independent, and {ξn,i:n,i∈N}, {ηn,j:n,j∈N} and {εn:n∈N} are supposed to consist of identically distributed random variables, respectively. Note that the number of individuals alive at time n∈Z+ is Xn+Xn−1, which can be larger than the population size Xn of the nth generation, since the individuals of the population at time n−1 are still alive at time n, because they can reproduce also at age 2. The stochastic process (Xn)n⩾−1 given by (2) is called a second-order Galton–Watson process with immigration or a Generalized Integer-valued AutoRegressive process of order 2 (GINAR(2) process), see, e.g., Latour [14]. Especially, if ξ1,1 and η1,1 are Bernoulli distributed random variables, then (Xn)n⩾−1 is also called an Integer-valued AutoRegressive process of order 2 (INAR(2) process), see, e.g., Du and Li [8]. If ε1=0, then we say that (Xn)n⩾−1 is a second-order Galton–Watson process without immigration, introduced and studied by Kashikar and Deshmukh [12] as well.
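For concreteness, the recursion (2) is straightforward to simulate; the following minimal sketch (the function name, seeding and distribution arguments are ours, not from the paper) draws every offspring and immigration variable independently, as required of the arrays {ξn,i}, {ηn,j} and {εn}:

```python
import random

def simulate_gw2(n_steps, offspring_xi, offspring_eta, immigration,
                 x0=0, x_minus1=0, seed=42):
    """Simulate X_n = sum_{i=1}^{X_{n-1}} xi_{n,i} + sum_{j=1}^{X_{n-2}} eta_{n,j} + eps_n.

    offspring_xi, offspring_eta and immigration are callables taking a
    random.Random instance and returning one non-negative integer draw.
    Returns the path [X_{-1}, X_0, X_1, ..., X_{n_steps}].
    """
    rng = random.Random(seed)
    x_prev2, x_prev1 = x_minus1, x0          # X_{n-2}, X_{n-1}
    path = [x_minus1, x0]
    for _ in range(n_steps):
        x_new = (sum(offspring_xi(rng) for _ in range(x_prev1))
                 + sum(offspring_eta(rng) for _ in range(x_prev2))
                 + immigration(rng))
        path.append(x_new)
        x_prev2, x_prev1 = x_prev1, x_new
    return path
```

For instance, Bernoulli offspring laws yield an INAR(2) process in the sense of Du and Li [8].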
The process given in (2) with the special choice η1,1=0 gives back the process given in (1), which will be called a first-order Galton–Watson process with immigration to make a distinction.
For notational convenience, let ξ, η and ε be random variables such that ξ=Dξ1,1, η=Dη1,1 and ε=Dε1, and put mξ:=E(ξ)∈[0,∞], mη:=E(η)∈[0,∞] and mε:=E(ε)∈[0,∞].
If (Xn)n∈Z+ is a (first-order) Galton–Watson process with immigration such that mξ∈(0,1), P(ε=0)<1 and ∑j=1∞P(ε=j)log(j)<∞, then the Markov process (Xn)n∈Z+ admits a unique stationary distribution (see, e.g., Quine [17]), i.e., the distribution of the initial value X0 can be uniquely chosen so that the process becomes strongly stationary. If ε is regularly varying with index α∈(0,∞), i.e., P(ε>x)∈(0,∞) for all x∈(0,∞), and
\[\lim_{x\to\infty}\frac{\mathbb{P}(\varepsilon>qx)}{\mathbb{P}(\varepsilon>x)}=q^{-\alpha}\qquad\text{for all }q\in(0,\infty),\]
then, by Lemma A.3, ∑j=1∞P(ε=j)log(j)<∞. The content of Theorem 2.1.1 in Basrak et al. [4] is the following statement.
Let (Xn)n∈Z+ be a (first-order) Galton–Watson process with immigration such that mξ∈(0,1) and ε is regularly varying with index α∈(0,2). In case of α∈[1,2), assume additionally that E(ξ2)<∞. Let the distribution of the initial value X0 be such that the process is strongly stationary. Then we have
\[\mathbb{P}(X_0>x)\sim\sum_{i=0}^{\infty}m_{\xi}^{i\alpha}\,\mathbb{P}(\varepsilon>x)=\frac{1}{1-m_{\xi}^{\alpha}}\,\mathbb{P}(\varepsilon>x)\qquad\text{as }x\to\infty,\]
and hence X0 is also regularly varying with index α.
Note that in case of α=1 and mε=∞, Basrak et al. [4, Theorem 2.1.1] additionally assume that ε is consistently varying (or, in other words, intermediate varying), but, eventually, this follows from the fact that ε is regularly varying. Basrak et al. [4, Remark 2.2.2] derived the result of Theorem 1 also for α∈[2,3) under the additional moment assumption E(ξ3)<∞ (not mentioned explicitly in the paper), and they remarked that the same applies to all α∈[3,∞) (possibly under an additional moment assumption E(ξ⌊α⌋+1)<∞).
In Barczy et al. [3] we study regularly varying non-stationary (first-order) Galton–Watson processes with immigration.
If (Xn)n⩾−1 is a second-order Galton–Watson process with immigration such that mξ,mη∈(0,1) with mξ+mη<1, P(ε=0)<1 and ∑j=1∞P(ε=j)log(j)<∞, then the distribution of the initial values (X0,X−1) can be uniquely chosen so that the process becomes strongly stationary, see Lemma 3.
The main result of the paper is the following analogue of Theorem 1.
Let (Xn)n⩾−1 be a second-order Galton–Watson process with immigration such that mξ,mη∈(0,1) with mξ+mη<1, and ε is regularly varying with index α∈(0,2). In case of α∈[1,2), assume additionally that E(ξ2)<∞ and E(η2)<∞. Let the distribution of the initial values (X0,X−1) be such that the process is strongly stationary. Then we have
\[\mathbb{P}(X_0>x)\sim\sum_{i=0}^{\infty}m_i^{\alpha}\,\mathbb{P}(\varepsilon>x)\qquad\text{as }x\to\infty,\]
where m0:=1,
\[m_k:=\frac{\lambda_+^{k+1}-\lambda_-^{k+1}}{\lambda_+-\lambda_-},\qquad k\in\mathbb{N},\]
and
\[\lambda_+:=\frac{m_\xi+\sqrt{m_\xi^2+4m_\eta}}{2},\qquad \lambda_-:=\frac{m_\xi-\sqrt{m_\xi^2+4m_\eta}}{2}.\]
Consequently, X0 is also regularly varying with index α.
Note that λ+ and λ− are the eigenvalues of the offspring mean matrix given in (8) of a corresponding 2-type Galton–Watson process with immigration. Note that for all k∈Z+, we have mk=E(Vk,0), where (Vn,0)n⩾−1 is a second-order Galton–Watson process (without immigration) with the initial values V0,0=1 and V−1,0=0, and with the same offspring distributions as (Xn)n⩾−1, see (9). Consequently, the series ∑i=0∞miα appearing in Theorem 2 is convergent, since for each i∈N, we have mi=E(Vi,0)⩽λ+i<1 by (10) and the assumption mξ+mη<1.
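As a numerical cross-check (the parameter values below are ours), the closed form for mk in Theorem 2 can be compared with the linear recursion mk = mξ m_{k−1} + mη m_{k−2}, m_{−1} = 0, m_0 = 1, which the means E(Vk,0) satisfy; the bound mk ⩽ λ+^k from (10) can be checked at the same time:

```python
import math

def lam(m_xi, m_eta):
    """Eigenvalues lambda_+ >= lambda_- of M = [[m_xi, m_eta], [1, 0]]."""
    d = math.sqrt(m_xi ** 2 + 4 * m_eta)
    return (m_xi + d) / 2, (m_xi - d) / 2

def m_closed(k, m_xi, m_eta):
    """Closed form m_k = (lambda_+^{k+1} - lambda_-^{k+1}) / (lambda_+ - lambda_-)."""
    lp, lm = lam(m_xi, m_eta)
    return (lp ** (k + 1) - lm ** (k + 1)) / (lp - lm)

def m_recursive(k, m_xi, m_eta):
    """m_k via m_k = m_xi * m_{k-1} + m_eta * m_{k-2}, with m_{-1} = 0, m_0 = 1."""
    prev2, prev1 = 0.0, 1.0
    for _ in range(k):
        prev2, prev1 = prev1, m_xi * prev1 + m_eta * prev2
    return prev1
```

Both functions agree because λ± are exactly the roots of the characteristic equation λ² = mξλ + mη of the recursion.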
Our technique and result might be extended to p-th order Galton–Watson branching processes with immigration. More generally, one can pose an open problem, namely, under what conditions on the offspring and immigration distributions of a general p-type Galton–Watson branching process with immigration, its unique (p-dimensional) stationary distribution is jointly regularly varying. We also note that there is a vast literature on tail behavior of regularly varying time series (see, e.g., Hult and Samorodnitsky [10]), however, the available results do not seem to be applicable for describing the tail behavior of the stationary distribution for regularly varying branching processes with immigration. The link between GINAR and autoregressive processes is that their autocovariance functions are identical under finite second moment assumptions, but we cannot see that it would imply anything for the tail behavior of a GINAR process knowing the tail behaviour of a corresponding autoregressive process. Further, in our situation the second moment is infinite, so the autocovariance function is not defined.
Very recently, Bősze and Pap [5] have studied regularly varying non-stationary second-order Galton–Watson processes with immigration. They have found some sufficient conditions on the initial, the offspring and the immigration distributions of a non-stationary second-order Galton–Watson process with immigration under which the distribution of the process in question is regularly varying at any fixed time. The results in Bősze and Pap [5] can be considered as extensions of the results in Barczy et al. [3] on not necessarily stationary (first-order) Galton–Watson processes with immigration. Concerning the results in Bősze and Pap [5] and in the present paper, there is no overlap, for more details see Remark 1.
The paper is organized as follows. In Section 2 we present preliminaries. First we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. Then, we derive an explicit formula for the expectation of a second-order Galton–Watson process with immigration at time n and describe its asymptotic behavior as n→∞, and, assuming finiteness of the second moments of the offspring distributions, we give an estimate of the second moment of a second-order Galton–Watson process (without immigration). Next, we recall sufficient conditions for the existence of a unique stationary distribution for a 2-type Galton–Watson process with immigration, and a representation of this stationary distribution. Applying these results to the special 2-type Galton–Watson process with immigration belonging to the class of second-order Galton–Watson processes with immigration, we obtain sufficient conditions for the existence of a unique distribution of the initial values (X0,X−1) such that the process becomes strongly stationary, see Lemma 3. Section 3 is devoted to the proof of Theorem 2. In the course of the proof, sufficient conditions are given under which the distribution of a second-order Galton–Watson process (without immigration) (Xn)n⩾−1 at a fixed time is regularly varying, provided that X0 is regularly varying and X−1=0, see Proposition 1. In the Appendix we collect some results on regularly varying functions and distributions, to name a few of them: the convolution property, Karamata’s theorem and Potter’s bounds. Note that the ArXiv version [2] of this paper contains more details, proofs and appendices.
Preliminaries on second-order Galton–Watson processes with immigration
First, we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. Let (Xn)n⩾−1 be a second-order Galton–Watson process with immigration given in (2), and let us introduce the random vectors
\[Y_n:=\begin{bmatrix}Y_{n,1}\\ Y_{n,2}\end{bmatrix}:=\begin{bmatrix}X_n\\ X_{n-1}\end{bmatrix},\qquad n\in\mathbb{Z}_+.\]
Then we have
\[Y_n=\sum_{i=1}^{Y_{n-1,1}}\begin{bmatrix}\xi_{n,i}\\ 1\end{bmatrix}+\sum_{j=1}^{Y_{n-1,2}}\begin{bmatrix}\eta_{n,j}\\ 0\end{bmatrix}+\begin{bmatrix}\varepsilon_n\\ 0\end{bmatrix},\qquad n\in\mathbb{N},\]
hence (Yn)n∈Z+ is a (special) 2-type Galton–Watson process with immigration and with initial vector
\[Y_0=\begin{bmatrix}X_0\\ X_{-1}\end{bmatrix}.\]
In fact, the type 1 and 2 individuals are identified with individuals of age 0 and 1, respectively, and for each n,i,j∈N, at time n, the ith individual of type 1 of the (n−1)th generation produces ξn,i individuals of type 1 and exactly one individual of type 2, and the jth individual of type 2 of the (n−1)th generation produces ηn,j individuals of type 1 and no individual of type 2.
The representation (5) works backwards as well, namely, let (Yk)k∈Z+ be a special 2-type Galton–Watson process with immigration given by
\[Y_k=\sum_{j=1}^{Y_{k-1,1}}\begin{bmatrix}\xi_{k,j,1,1}\\ 1\end{bmatrix}+\sum_{j=1}^{Y_{k-1,2}}\begin{bmatrix}\xi_{k,j,2,1}\\ 0\end{bmatrix}+\begin{bmatrix}\varepsilon_{k,1}\\ 0\end{bmatrix},\qquad k\in\mathbb{N},\]
where Y0 is a 2-dimensional integer-valued random vector. Here, for each k,j∈N and i∈{1,2}, ξk,j,i,1 denotes the number of type 1 offsprings in the kth generation produced by the jth offspring of the (k−1)th generation of type i, and εk denotes the number of type 1 immigrants in the kth generation. For the second coordinate process of (Yk)k∈Z+, we get Yk,2=Yk−1,1, k∈N, and substituting this into (6), the first coordinate process of (Yk)k∈Z+ satisfies
\[Y_{k,1}=\sum_{j=1}^{Y_{k-1,1}}\xi_{k,j,1,1}+\sum_{j=1}^{Y_{k-2,1}}\xi_{k,j,2,1}+\varepsilon_{k,1},\qquad k\geqslant 2.\]
Thus, the first coordinate process of (Yk)k∈Z+ given by (6) satisfies equation (2) with Xn:=Yn,1, ξn,i:=ξn,i,1,1, ηn,j:=ξn,j,2,1, εn:=εn,1, n,i,j∈N, and with the initial values X0:=Y0,1 and X−1:=Y0,2, i.e., it is a second-order Galton–Watson process with immigration.
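The correspondence between the two models is thus a pure re-indexing of paths; a toy illustration (the function name is ours):

```python
def to_two_type(path):
    """Map a second-order GW path (X_{-1}, X_0, X_1, ..., X_N) to the
    2-type path Y_n = (Y_{n,1}, Y_{n,2}) = (X_n, X_{n-1}), n = 0, ..., N."""
    return [(path[n + 1], path[n]) for n in range(len(path) - 1)]
```

The second coordinate of each Yk simply repeats the first coordinate of Yk−1, reflecting that every individual of age 0 becomes an individual of age 1 in the next generation.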
Note that, for a second-order Galton–Watson process (Xn)n⩾−1 (without immigration), the additive (or branching) property of a 2-type Galton–Watson process (without immigration) (see, e.g., Athreya and Ney [1, Chapter V, Section 1]), together with the law of total probability, implies, for each n∈N,
\[X_n\stackrel{\mathcal{D}}{=}\sum_{i=1}^{X_0}\zeta_{i,0}^{(n)}+\sum_{j=1}^{X_{-1}}\zeta_{j,-1}^{(n)},\]
where {(X0,X−1),ζi,0(n),ζj,−1(n):i,j∈N} are independent random variables such that {ζi,0(n):i∈N} are independent copies of Vn,0 and {ζj,−1(n):j∈N} are independent copies of Vn,−1, where (Vk,0)k⩾−1 and (Vk,−1)k⩾−1 are second-order Galton–Watson processes (without immigration) with initial values V0,0=1, V−1,0=0, V0,−1=0 and V−1,−1=1, and with the same offspring distributions as (Xk)k⩾−1.
Moreover, if (Xn)n⩾−1 is a second-order Galton–Watson process with immigration, then for each n∈N, we have
\[X_n=V_0^{(n)}(X_0,X_{-1})+\sum_{i=1}^{n}V_i^{(n-i)}(\varepsilon_i,0),\]
where {V0(n)(X0,X−1),Vi(n−i)(εi,0):i∈{1,…,n}} are independent random variables such that V0(n)(X0,X−1) represents the number of newborns at time n, resulting from the initial individuals X0 at time 0 and X−1 at time −1, and for each i∈{1,…,n}, Vi(n−i)(εi,0) represents the number of newborns at time n, resulting from the immigration εi at time i, see the ArXiv version [2] of this paper.
Our next aim is to derive an explicit formula for the expectation of a subcritical second-order Galton–Watson process with immigration at time n and to describe its asymptotic behavior as n→∞.
Recall that ξ, η and ε are random variables such that ξ=Dξ1,1, η=Dη1,1 and ε=Dε1, and we put mξ=E(ξ)∈[0,∞], mη=E(η)∈[0,∞] and mε=E(ε)∈[0,∞]. If mξ∈R+, mη∈R+, mε∈R+, E(X0)∈R+ and E(X−1)∈R+, then (2) implies
\[\mathbb{E}(X_n\mid\mathcal{F}_{n-1}^{X})=X_{n-1}m_\xi+X_{n-2}m_\eta+m_\varepsilon,\qquad n\in\mathbb{N},\]
where FnX:=σ(X−1,X0,…,Xn), n∈Z+. Consequently,
\[\mathbb{E}(X_n)=m_\xi\,\mathbb{E}(X_{n-1})+m_\eta\,\mathbb{E}(X_{n-2})+m_\varepsilon,\qquad n\in\mathbb{N},\]
which can be written in the matrix form
\[\begin{bmatrix}\mathbb{E}(X_n)\\ \mathbb{E}(X_{n-1})\end{bmatrix}=M_{\xi,\eta}\begin{bmatrix}\mathbb{E}(X_{n-1})\\ \mathbb{E}(X_{n-2})\end{bmatrix}+\begin{bmatrix}m_\varepsilon\\ 0\end{bmatrix},\qquad n\in\mathbb{N},\]
with
\[M_{\xi,\eta}:=\begin{bmatrix}m_\xi & m_\eta\\ 1 & 0\end{bmatrix}.\]
Note that Mξ,η is the mean matrix of the 2-type Galton–Watson process (Yn)n∈Z+ given in (4). Thus, we conclude
\[\begin{bmatrix}\mathbb{E}(X_n)\\ \mathbb{E}(X_{n-1})\end{bmatrix}=M_{\xi,\eta}^{\,n}\begin{bmatrix}\mathbb{E}(X_0)\\ \mathbb{E}(X_{-1})\end{bmatrix}+\sum_{k=1}^{n}M_{\xi,\eta}^{\,n-k}\begin{bmatrix}m_\varepsilon\\ 0\end{bmatrix},\qquad n\in\mathbb{N}.\]
Hence, the asymptotic behavior of the sequence (E(Xn))n∈N depends on the asymptotic behavior of the powers (Mξ,ηn)n∈N, which is related to the spectral radius ϱ of Mξ,η. The matrix Mξ,η has eigenvalues λ+ and λ− given in (3), satisfying λ+∈R+ and λ−∈[−λ+,0], hence the spectral radius of Mξ,η is ϱ=λ+. If (Xn)n⩾−1 is a second-order Galton–Watson process with immigration such that mξ∈R+ and mη∈R+, then (Xn)n⩾−1 is called subcritical, critical or supercritical if ϱ<1, ϱ=1 or ϱ>1, respectively. It is easy to check that a second-order Galton–Watson process with immigration is subcritical, critical or supercritical if and only if mξ+mη<1, mξ+mη=1 or mξ+mη>1, respectively.
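As a quick numerical sanity check of the last equivalence (the helper and the parameter values below are ours), the spectral radius ϱ = λ+ of Mξ,η sits below, at, or above 1 exactly according to the sign of mξ + mη − 1:

```python
import math

def spectral_radius(m_xi, m_eta):
    """Spectral radius of M = [[m_xi, m_eta], [1, 0]], i.e. lambda_+."""
    return (m_xi + math.sqrt(m_xi ** 2 + 4 * m_eta)) / 2
```

For example, mξ + mη = 0.7 gives a subcritical process, mξ + mη = 1 a critical one, and mξ + mη = 1.3 a supercritical one.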
Let (Xn)n⩾−1 be a second-order Galton–Watson process with immigration such that mξ,mη∈(0,1) with mξ+mη<1, mε∈R+, E(X0)∈R+ and E(X−1)∈R+. Then, for all n∈N, we have
\[\mathbb{E}(X_n)=\frac{\lambda_+^{n+1}-\lambda_-^{n+1}}{\lambda_+-\lambda_-}\,\mathbb{E}(X_0)+\frac{\lambda_+^{n}-\lambda_-^{n}}{\lambda_+-\lambda_-}\,m_\eta\,\mathbb{E}(X_{-1})+\frac{1}{\lambda_+-\lambda_-}\bigg(\lambda_+\frac{1-\lambda_+^{n}}{1-\lambda_+}-\lambda_-\frac{1-\lambda_-^{n}}{1-\lambda_-}\bigg)m_\varepsilon,\]
and hence
\[\mathbb{E}(X_n)=\frac{m_\varepsilon}{(1-\lambda_+)(1-\lambda_-)}+\mathrm{O}(\lambda_+^{n})\qquad\text{as }n\to\infty.\]
Further, in case of mε=0, i.e., when there is no immigration, we have the following more precise statements:
\[\mathbb{E}(X_n)=\frac{\lambda_+\mathbb{E}(X_0)+m_\eta\mathbb{E}(X_{-1})}{\lambda_+-\lambda_-}\,\lambda_+^{n}+\mathrm{O}(|\lambda_-|^{n})\qquad\text{as }n\to\infty,\]
and
\[\mathbb{E}(X_n)\leqslant\varrho^{n}\,\mathbb{E}(X_0)+\varrho^{n-1}m_\eta\,\mathbb{E}(X_{-1}),\qquad n\in\mathbb{N}.\]
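A short numerical verification of Lemma 1 (parameter values ours): iterating the recursion E(Xn) = mξE(Xn−1) + mηE(Xn−2) + mε must reproduce the closed form and, in the subcritical case, converge to mε/((1−λ+)(1−λ−)) = mε/(1−mξ−mη):

```python
import math

def mean_closed(n, m_xi, m_eta, m_eps, e0, e_minus1):
    """Closed-form E(X_n) from Lemma 1."""
    d = math.sqrt(m_xi ** 2 + 4 * m_eta)
    lp, lm = (m_xi + d) / 2, (m_xi - d) / 2
    return ((lp ** (n + 1) - lm ** (n + 1)) / (lp - lm) * e0
            + (lp ** n - lm ** n) / (lp - lm) * m_eta * e_minus1
            + (lp * (1 - lp ** n) / (1 - lp)
               - lm * (1 - lm ** n) / (1 - lm)) * m_eps / (lp - lm))

def mean_recursive(n, m_xi, m_eta, m_eps, e0, e_minus1):
    """E(X_n) iterated from E(X_n) = m_xi E(X_{n-1}) + m_eta E(X_{n-2}) + m_eps."""
    prev2, prev1 = e_minus1, e0
    for _ in range(n):
        prev2, prev1 = prev1, m_xi * prev1 + m_eta * prev2 + m_eps
    return prev1
```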
The first moment of a subcritical second-order Galton–Watson process (Xn)n⩾−1 (without immigration) can be estimated by (10). Next, we present an auxiliary lemma with an estimate of the second moment of a subcritical second-order Galton–Watson process (without immigration).
Let (Xn)n⩾−1 be a second-order Galton–Watson process (without immigration) such that mξ,mη∈(0,1) with mξ+mη<1, X0=1, X−1=0, E(ξ2)<∞ and E(η2)<∞. Then for all n∈N,
\[\mathbb{E}(X_n^2)\leqslant\bigg(1+\frac{\operatorname{Var}(\xi)}{\varrho(1-\varrho)}+\frac{\operatorname{Var}(\eta)}{\varrho^{2}(1-\varrho)}\bigg)\varrho^{n}.\]
The proofs of Lemmata 1 and 2 together with statements in the critical and supercritical cases can be found in the ArXiv version [2] of this paper.
Next, we recall 2-type Galton–Watson processes with immigration. For each k,j∈Z+ and i,ℓ∈{1,2}, the number of individuals of type i born or arrived as immigrants in the kth generation will be denoted by Xk,i, the number of type ℓ offsprings produced by the jth individual who is of type i belonging to the (k−1)th generation will be denoted by ξk,j,i,ℓ, and the number of type i immigrants in the kth generation will be denoted by εk,i. Then we have
\[\begin{bmatrix}X_{k,1}\\ X_{k,2}\end{bmatrix}=\sum_{j=1}^{X_{k-1,1}}\begin{bmatrix}\xi_{k,j,1,1}\\ \xi_{k,j,1,2}\end{bmatrix}+\sum_{j=1}^{X_{k-1,2}}\begin{bmatrix}\xi_{k,j,2,1}\\ \xi_{k,j,2,2}\end{bmatrix}+\begin{bmatrix}\varepsilon_{k,1}\\ \varepsilon_{k,2}\end{bmatrix},\qquad k\in\mathbb{N}.\]
Here {X0,ξk,j,i,εk:k,j∈N,i∈{1,2}} are supposed to be independent, and {ξk,j,1:k,j∈N}, {ξk,j,2:k,j∈N} and {εk:k∈N} are supposed to consist of identically distributed random vectors, where
\[X_0:=\begin{bmatrix}X_{0,1}\\ X_{0,2}\end{bmatrix},\qquad \xi_{k,j,i}:=\begin{bmatrix}\xi_{k,j,i,1}\\ \xi_{k,j,i,2}\end{bmatrix},\qquad \varepsilon_k:=\begin{bmatrix}\varepsilon_{k,1}\\ \varepsilon_{k,2}\end{bmatrix}.\]
For notational convenience, let ξ1, ξ2 and ε be random vectors such that ξ1=Dξ1,1,1, ξ2=Dξ1,1,2 and ε=Dε1, and put mξ1:=E(ξ1)∈[0,∞]2, mξ2:=E(ξ2)∈[0,∞]2, mε:=E(ε)∈[0,∞]2, and
\[M_\xi:=\begin{bmatrix}m_{\xi_1} & m_{\xi_2}\end{bmatrix}\in[0,\infty]^{2\times 2}.\]
We call Mξ the offspring mean matrix, and note that many authors define the offspring mean matrix as Mξ⊤. If mξ1∈R+2, mξ2∈R+2, the spectral radius of Mξ is less than 1, Mξ is primitive, i.e., there exists m∈N such that Mξm∈R++2×2, P(ε=0)<1 and E(1{ε≠0}log((e1+e2)⊤ε))<∞, then, by the Theorem in Quine [17], there exists a unique stationary distribution π for (Xn)n∈Z+. As a consequence of formula (16) for the probability generating function of π in Quine [17], we have
\[\sum_{i=0}^{n}V_i^{(i)}(\varepsilon_i)\stackrel{\mathcal{D}}{\longrightarrow}\pi\qquad\text{as }n\to\infty,\]
where (Vk(i)(εi))k∈Z+, i∈Z+, are independent copies of a 2-type Galton–Watson process (Vk(ε))k∈Z+ (without immigration) with an initial vector V0(ε)=ε and with the same offspring distributions as (Xk)k∈Z+. Consequently, we have
\[\sum_{i=0}^{\infty}V_i^{(i)}(\varepsilon_i)\stackrel{\mathcal{D}}{=}\pi,\]
where the series ∑i=0∞Vi(i)(εi) converges with probability 1, see, e.g., Heyer [9, Theorem 3.1.6]. The above representation of the stationary distribution π for (Xn)n∈Z+ can be interpreted in a way that we consider independent 2-type Galton–Watson processes without immigration such that the ith one admits initial vector εi, i∈Z+, evaluate the ith 2-type Galton–Watson process at time point i, and then sum up all these random variables.
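This series representation suggests an approximate sampler for the stationary distribution: truncate the series at a finite index N. The sketch below (all names, distributions and tolerances are ours) carries this out for the second-order analogue, where each summand is a second-order Galton–Watson process without immigration started from an immigration-sized generation; since the summands' means decay geometrically, the empirical mean should approach the stationary mean mε/(1−mξ−mη):

```python
import random

def gw2_no_imm(steps, x0, xi, eta, rng):
    """Run a second-order GW process (no immigration) with V_0 = x0, V_{-1} = 0
    for `steps` generations and return the final population size."""
    prev2, prev1 = 0, x0
    for _ in range(steps):
        prev2, prev1 = prev1, (sum(xi(rng) for _ in range(prev1))
                               + sum(eta(rng) for _ in range(prev2)))
    return prev1

def truncated_stationary_sample(N, xi, eta, eps, rng):
    """Approximate a draw of sum_{i>=0} V_i^{(i)}(eps_i) by truncating at i = N."""
    return sum(gw2_no_imm(i, eps(rng), xi, eta, rng) for i in range(N + 1))
```

The truncation error is controlled by the geometric bound on the means of the neglected summands, so a moderate N already gives a close approximation in the subcritical case.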
Next, we give sufficient conditions for the strong stationarity of a subcritical second-order Galton–Watson process with immigration.
If (Xn)n⩾−1 is a second-order Galton–Watson process with immigration such that mξ,mη∈(0,1) with mξ+mη<1, P(ε=0)<1 and ∑j=1∞P(ε=j)log(j)<∞, then the distribution of the initial values (X0,X−1) can be uniquely chosen so that the process becomes strongly stationary, and we have a representation
\[X_0\stackrel{\mathcal{D}}{=}\sum_{i=0}^{\infty}V_i^{(i)}(\varepsilon_i),\]
where the series converges with probability 1 and (Vk(i)(εi))k⩾−1, i∈Z+, are independent copies of (Vk(ε))k⩾−1, which is a second-order Galton–Watson process (without immigration) with the initial values V0(ε)=ε and V−1(ε)=0, and with the same offspring distributions as (Xk)k⩾−1. In fact, the distribution of (X0,X−1) is the unique stationary distribution of the corresponding special 2-type Galton–Watson process (Yn)n∈Z+ with immigration given in (5).
First we show that the process (Xn)n⩾−1 is strongly stationary if and only if the distribution of the initial population sizes (X0,X−1)⊤ coincides with the stationary distribution π of the Markov chain (Yk)k∈Z+. If (X0,X−1)⊤=Dπ, then Y0=Dπ, thus (Yk)k∈Z+ is strongly stationary, and hence for each n,m∈Z+, (Y0,…,Yn)=D(Ym,…,Yn+m), yielding
\[(X_0,X_{-1},X_1,X_0,\dots,X_n,X_{n-1})\stackrel{\mathcal{D}}{=}(X_m,X_{m-1},X_{m+1},X_m,\dots,X_{n+m},X_{n+m-1}).\]
Especially, (X−1,X0,X1,…,Xn)=D(Xm−1,Xm,Xm+1,…,Xn+m), hence (Xn)n⩾−1 is strongly stationary. Since
\[(X_m,X_{m-1},X_{m+1},X_m,\dots,X_{n+m},X_{n+m-1})\]
is a continuous function of (Xm−1,Xm,Xm+1,…,Xn+m), these considerations work backwards as well. Consequently, π is the unique stationary distribution of the second-order Markov chain (Xn)n⩾−1.
The offspring mean matrix of (Yn)n∈Z+ has the form
\[\begin{bmatrix}m_\xi & m_\eta\\ 1 & 0\end{bmatrix}=M_{\xi,\eta},\]
the spectral radius of Mξ,η is ϱ which is less than 1, and Mξ,η is primitive, since
\[M_{\xi,\eta}^{2}=\begin{bmatrix}m_\xi & m_\eta\\ 1 & 0\end{bmatrix}^{2}=\begin{bmatrix}m_\xi^{2}+m_\eta & m_\xi m_\eta\\ m_\xi & m_\eta\end{bmatrix}\in(0,\infty)^{2\times 2}.\]
Hence, as recalled earlier, there exists a unique stationary distribution π for (Yn)n∈Z+. Moreover, the stationary distribution π of (Yn)n∈Z+ has the representation given in (11). Using the considerations for the backward representation, we have (e1⊤Vk(ε))k∈Z+=(Vk(ε))k∈Z+ and (e2⊤Vk(ε))k∈Z+=(Vk−1(ε))k∈Z+, where (Vk(ε))k⩾−1 is a second-order Galton–Watson process (without immigration) with initial values V0(ε)=ε and V−1(ε)=0, and with the same offspring distributions as (Xk)k⩾−1. Consequently, the two marginals of the stationary distribution π coincide. So, under the given conditions, (Xn)n⩾−1 is strongly stationary if and only if the distribution of (X0,X−1) coincides with π. In this case the distribution of X0 is the common marginal of π, and it admits the representation (12). □
Note also that (Xn)n⩾−1 is only a second-order Markov chain, but not a Markov chain.
Note that there is no overlap between the results in the recent paper of Bősze and Pap [5] on non-stationary second-order Galton–Watson processes with immigration and in the present paper. In [5] the authors always suppose that the initial values X0 and X−1 of a second-order Galton–Watson process with immigration (Xn)n⩾−1 are independent, so in the results of [5] the distribution of (X0,X−1) cannot be chosen as the unique stationary distribution π of the special 2-type Galton–Watson process (Yn)n∈Z+ with immigration given in (5), since the marginals of π are not independent in general. □
Proof of Theorem 2
For the proof of Theorem 2, we need an auxiliary result on the tail behaviour of second-order Galton–Watson processes (without immigration) (Xn)n⩾−1 such that X0 is regularly varying and X−1=0.
Let (Xn)n⩾−1 be a second-order Galton–Watson process (without immigration) such that X0 is regularly varying with index β0∈R+, X−1=0, mξ∈(0,∞) and mη∈R+. In case of β0∈[1,∞), assume additionally that there exists r∈(β0,∞) with E(ξr)<∞ and E(ηr)<∞. Then for all n∈N,
\[\mathbb{P}(X_n>x)\sim m_n^{\beta_0}\,\mathbb{P}(X_0>x)\qquad\text{as }x\to\infty,\]
where mi, i∈Z+, are given in Theorem 2, and hence Xn is also regularly varying with index β0 for each n∈N.
Let us fix n∈N. In view of the additive property (7), it is sufficient to prove
\[\mathbb{P}\Bigg(\sum_{i=1}^{X_0}\zeta_{i,0}^{(n)}>x\Bigg)\sim m_n^{\beta_0}\,\mathbb{P}(X_0>x)\qquad\text{as }x\to\infty.\]
This follows from Proposition A.1, since E(ζ1,0(n))=mn∈(0,∞), n∈N, by (9). □
We will use the ideas of the proof of Theorem 2.1.1 in Basrak et al. [4] and the representation (12) of the distribution of X0. Recall that (Vk(i)(εi))k⩾−1, i∈Z+, are independent copies of (Vk(ε))k⩾−1, which is a second-order Galton–Watson process (without immigration) with the initial values V0(ε)=ε and V−1(ε)=0, and with the same offspring distributions as (Xk)k⩾−1. Due to the representation (7), for each i∈Z+, we have
\[V_i^{(i)}(\varepsilon_i)\stackrel{\mathcal{D}}{=}\sum_{j=1}^{\varepsilon_i}\zeta_{j,0}^{(i)},\]
where {εi,ζj,0(i):j∈N} are independent random variables such that {ζj,0(i):j∈N} are independent copies of Vi,0, where (Vk,0)k⩾−1 is a second-order Galton–Watson process (without immigration) with the initial values V0,0=1 and V−1,0=0, and with the same offspring distributions as (Xk)k⩾−1. For each i∈Z+, by Proposition 1, we obtain P(Vi(i)(εi)>x)∼miαP(ε>x) as x→∞, yielding that the random variables Vi(i)(εi), i∈Z+, are also regularly varying with index α. Since Vi(i)(εi), i∈Z+, are independent, for each n∈Z+, by Lemma A.5, we have
\[\mathbb{P}\Bigg(\sum_{i=0}^{n}V_i^{(i)}(\varepsilon_i)>x\Bigg)\sim\sum_{i=0}^{n}m_i^{\alpha}\,\mathbb{P}(\varepsilon>x)\qquad\text{as }x\to\infty,\]
and hence the random variables ∑i=0nVi(i)(εi), n∈Z+, are also regularly varying with index α. For each n∈N, using that Vi(i)(εi), i∈Z+, are non-negative, we have
\[\begin{aligned}\liminf_{x\to\infty}\frac{\mathbb{P}(X_0>x)}{\mathbb{P}(\varepsilon>x)}&=\liminf_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{\infty}V_i^{(i)}(\varepsilon_i)>x\big)}{\mathbb{P}(\varepsilon>x)}\\ &\geqslant\liminf_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{n}V_i^{(i)}(\varepsilon_i)>x\big)}{\mathbb{P}(\varepsilon>x)}=\sum_{i=0}^{n}m_i^{\alpha},\end{aligned}\]
hence, letting n→∞, we obtain
\[\liminf_{x\to\infty}\frac{\mathbb{P}(X_0>x)}{\mathbb{P}(\varepsilon>x)}\geqslant\sum_{i=0}^{\infty}m_i^{\alpha}.\]
Moreover, for each n∈N and q∈(0,1), we have
\[\begin{aligned}\limsup_{x\to\infty}\frac{\mathbb{P}(X_0>x)}{\mathbb{P}(\varepsilon>x)}&=\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{n-1}V_i^{(i)}(\varepsilon_i)+\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>x\big)}{\mathbb{P}(\varepsilon>x)}\\ &\leqslant\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{n-1}V_i^{(i)}(\varepsilon_i)>(1-q)x\big)+\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>x)}\\ &\leqslant L_{1,n}(q)+L_{2,n}(q)\end{aligned}\]
with
\[L_{1,n}(q):=\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{n-1}V_i^{(i)}(\varepsilon_i)>(1-q)x\big)}{\mathbb{P}(\varepsilon>x)},\qquad L_{2,n}(q):=\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>x)}.\]
Since ε is regularly varying with index α, by (13), we obtain
\[L_{1,n}(q)=\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=0}^{n-1}V_i^{(i)}(\varepsilon_i)>(1-q)x\big)}{\mathbb{P}(\varepsilon>(1-q)x)}\cdot\frac{\mathbb{P}(\varepsilon>(1-q)x)}{\mathbb{P}(\varepsilon>x)}=(1-q)^{-\alpha}\sum_{i=0}^{n-1}m_i^{\alpha}\]
and
\[L_{2,n}(q)=\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>qx)}\cdot\frac{\mathbb{P}(\varepsilon>qx)}{\mathbb{P}(\varepsilon>x)}=q^{-\alpha}\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>qx)},\]
and hence
\[\lim_{n\to\infty}L_{1,n}(q)=(1-q)^{-\alpha}\sum_{i=0}^{\infty}m_i^{\alpha},\qquad \lim_{n\to\infty}L_{2,n}(q)=q^{-\alpha}\lim_{n\to\infty}\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>qx)}.\]
The aim of the following discussion is to show
\[\lim_{n\to\infty}\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>qx\big)}{\mathbb{P}(\varepsilon>qx)}=0,\qquad q\in(0,1).\]
First, we consider the case α∈(0,1). For each x∈(0,∞), n∈N and δ∈(0,1), we have
\[\begin{aligned}\mathbb{P}\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>x\Bigg)&=\mathbb{P}\Bigg(\sum_{i\geqslant n}V_i^{(i)}(\varepsilon_i)>x,\ \sup_{i\geqslant n}\varrho^{i}\varepsilon_i>(1-\delta)x\Bigg)\\ &\quad+\mathbb{P}\Bigg(\sum_{i\geqslant n}V_i^{(i)}(\varepsilon_i)>x,\ \sup_{i\geqslant n}\varrho^{i}\varepsilon_i\leqslant(1-\delta)x\Bigg)\\ &=\mathbb{P}\Bigg(\sum_{i\geqslant n}V_i^{(i)}(\varepsilon_i)>x,\ \sup_{i\geqslant n}\varrho^{i}\varepsilon_i>(1-\delta)x\Bigg)\\ &\quad+\mathbb{P}\Bigg(\sum_{i\geqslant n}V_i^{(i)}(\varepsilon_i)1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}>x,\ \sup_{i\geqslant n}\varrho^{i}\varepsilon_i\leqslant(1-\delta)x\Bigg)\\ &\leqslant\mathbb{P}\bigg(\sup_{i\geqslant n}\varrho^{i}\varepsilon_i>(1-\delta)x\bigg)+\mathbb{P}\Bigg(\sum_{i\geqslant n}V_i^{(i)}(\varepsilon_i)1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}>x\Bigg)\\ &=:P_{1,n}(x,\delta)+P_{2,n}(x,\delta),\end{aligned}\]
where ϱ=λ+. By subadditivity of probability,
\[P_{1,n}(x,\delta)\leqslant\sum_{i\geqslant n}\mathbb{P}(\varrho^{i}\varepsilon_i>(1-\delta)x)=\sum_{i\geqslant n}\mathbb{P}(\varepsilon>(1-\delta)\varrho^{-i}x).\]
Using Potter’s upper bound (see Lemma A.6), for δ∈(0,α/2), there exists x0∈(0,∞) such that
\[\frac{\mathbb{P}(\varepsilon>(1-\delta)\varrho^{-i}x)}{\mathbb{P}(\varepsilon>x)}<(1+\delta)\big[(1-\delta)\varrho^{-i}\big]^{-\alpha+\delta}<(1+\delta)\big[(1-\delta)\varrho^{-i}\big]^{-\frac{\alpha}{2}}\]
if x∈[x0,∞) and (1−δ)ϱ−i∈[1,∞), which holds for sufficiently large i∈N due to ϱ∈(0,1). Consequently, if δ∈(0,α/2), then
\[\lim_{n\to\infty}\limsup_{x\to\infty}\frac{P_{1,n}(x,\delta)}{\mathbb{P}(\varepsilon>x)}\leqslant\lim_{n\to\infty}\sum_{i\geqslant n}(1+\delta)\big[(1-\delta)\varrho^{-i}\big]^{-\frac{\alpha}{2}}=0,\]
since ϱα/2<1 (due to ϱ∈(0,1)) yields ∑i=0∞ϱiα/2<∞. Now we turn to proving that limn→∞lim supx→∞P2,n(x,δ)/P(ε>x)=0. By Markov’s inequality,
\[P_{2,n}(x,\delta)\leqslant\frac{1}{x}\sum_{i\geqslant n}\mathbb{E}\big(V_i^{(i)}(\varepsilon_i)1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\big).\]
By the representation Vi(i)(εi)=D∑j=1εiζj,0(i), we have
\[\begin{aligned}\mathbb{E}\big(V_i^{(i)}(\varepsilon_i)1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\big)&=\mathbb{E}\Bigg(\sum_{j=1}^{\varepsilon_i}\zeta_{j,0}^{(i)}1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\Bigg)=\mathbb{E}\Bigg[\mathbb{E}\Bigg(\sum_{j=1}^{\varepsilon_i}\zeta_{j,0}^{(i)}1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\,\bigg|\,\varepsilon_i\Bigg)\Bigg]\\ &=\mathbb{E}\Bigg(\sum_{j=1}^{\varepsilon_i}\mathbb{E}(\zeta_{1,0}^{(i)})1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\Bigg)=\mathbb{E}(\zeta_{1,0}^{(i)})\,\mathbb{E}\big(\varepsilon_i1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\big),\end{aligned}\]
since {ζj,0(i):j∈N} and εi are independent. Moreover,
\[\begin{aligned}\mathbb{E}\big(\varepsilon_i1_{\{\varepsilon_i\leqslant(1-\delta)\varrho^{-i}x\}}\big)&=\mathbb{E}\big(\varepsilon 1_{\{\varepsilon\leqslant(1-\delta)\varrho^{-i}x\}}\big)=\int_0^{\infty}\mathbb{P}\big(\varepsilon 1_{\{\varepsilon\leqslant(1-\delta)\varrho^{-i}x\}}>t\big)\,\mathrm{d}t\\ &=\int_0^{(1-\delta)\varrho^{-i}x}\mathbb{P}\big(t<\varepsilon\leqslant(1-\delta)\varrho^{-i}x\big)\,\mathrm{d}t\leqslant\int_0^{(1-\delta)\varrho^{-i}x}\mathbb{P}(\varepsilon>t)\,\mathrm{d}t.\end{aligned}\]
By Karamata’s theorem (see Theorem A.1), we have
\[\lim_{y\to\infty}\frac{\int_{0}^{y}\mathbb{P}(\varepsilon>t)\,\mathrm{d}t}{y\,\mathbb{P}(\varepsilon>y)}=\frac{1}{1-\alpha},\]
thus there exists y0∈(0,∞) such that
\[\int_{0}^{y}\mathbb{P}(\varepsilon>t)\,\mathrm{d}t\leqslant\frac{2y\,\mathbb{P}(\varepsilon>y)}{1-\alpha},\qquad y\in[y_0,\infty),\]
hence
\[\int_{0}^{(1-\delta)\varrho^{-i}x}\mathbb{P}(\varepsilon>t)\,\mathrm{d}t\leqslant\frac{2(1-\delta)\varrho^{-i}x\,\mathbb{P}(\varepsilon>(1-\delta)\varrho^{-i}x)}{1-\alpha}\]
whenever (1−δ)ϱ^{−i}x∈[y0,∞), which holds for i⩾n with sufficiently large n∈N and x∈[(1−δ)^{−1}ϱ^{n}y0,∞) due to ϱ∈(0,1). Thus, for sufficiently large n∈N and x∈[(1−δ)^{−1}ϱ^{n}y0,∞), we obtain
\[
\begin{aligned}
\frac{P_{2,n}(x,\delta)}{\mathbb{P}(\varepsilon>x)}
&\leqslant\frac{1}{x\,\mathbb{P}(\varepsilon>x)}\sum_{i\geqslant n}\mathbb{E}(\zeta_{1,0}^{(i)})\int_{0}^{(1-\delta)\varrho^{-i}x}\mathbb{P}(\varepsilon>t)\,\mathrm{d}t\\
&\leqslant\frac{2(1-\delta)}{1-\alpha}\sum_{i\geqslant n}\frac{\mathbb{P}(\varepsilon>(1-\delta)\varrho^{-i}x)}{\mathbb{P}(\varepsilon>x)},
\end{aligned}
\]
since E(ζ1,0(i))⩽ϱ^{i}, i∈Z+, by (10) and ζ1,0(0)=1. Using (16), we get
\[\frac{P_{2,n}(x,\delta)}{\mathbb{P}(\varepsilon>x)}\leqslant\frac{2(1-\delta)}{1-\alpha}\sum_{i\geqslant n}(1+\delta)\big[(1-\delta)\varrho^{-i}\big]^{-\alpha/2}\]
for δ∈(0,α/2), for sufficiently large n∈N and for all x∈[max(x0,(1−δ)^{−1}ϱ^{n}y0),∞). Hence for δ∈(0,α/2) we have
\[\lim_{n\to\infty}\limsup_{x\to\infty}\frac{P_{2,n}(x,\delta)}{\mathbb{P}(\varepsilon>x)}\leqslant\lim_{n\to\infty}\frac{2(1-\delta^{2})}{1-\alpha}\sum_{i\geqslant n}\big[(1-\delta)\varrho^{-i}\big]^{-\alpha/2}=0,\]
where the last step follows from the convergence of the series ∑_{i=0}^{∞}(ϱ^{i})^{α/2}, since ϱ∈(0,1).
Consequently, due to the fact that P(∑_{i=n}^{∞}Vi(i)(εi)>x)⩽P1,n(x,δ)+P2,n(x,δ), x∈(0,∞), n∈N, δ∈(0,1), we obtain (15), and we conclude limn→∞L2,n(q)=0 for all q∈(0,1). Thus we obtain
\[\limsup_{x\to\infty}\frac{\mathbb{P}(X_0>x)}{\mathbb{P}(\varepsilon>x)}\leqslant\lim_{n\to\infty}L_{1,n}(q)+\lim_{n\to\infty}L_{2,n}(q)=(1-q)^{-\alpha}\sum_{i=0}^{\infty}m_i^{\alpha}\]
for all q∈(0,1). Letting q↓0, this yields
\[\limsup_{x\to\infty}\frac{\mathbb{P}(X_0>x)}{\mathbb{P}(\varepsilon>x)}\leqslant\sum_{i=0}^{\infty}m_i^{\alpha}.\]
Taking into account (14), the proof of (15) is complete in case of α∈(0,1).
Next, we consider the case α∈[1,2). Note that (15) is equivalent to
\[\lim_{n\to\infty}\limsup_{x\to\infty}\frac{\mathbb{P}\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)>\sqrt{x}\big)}{\mathbb{P}(\varepsilon>\sqrt{x})}
=\lim_{n\to\infty}\limsup_{x\to\infty}\frac{\mathbb{P}\big(\big(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\big)^{2}>x\big)}{\mathbb{P}(\varepsilon^{2}>x)}=0.\]
Repeating a similar argument as for α∈(0,1), we obtain
\[
\begin{aligned}
\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\Bigg)^{2}>x\Bigg)
&=\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\Bigg)^{2}>x,\ \sup_{i\geqslant n}\varrho^{2i}\varepsilon_i^{2}>(1-\delta)x\Bigg)\\
&\quad+\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\Bigg)^{2}>x,\ \sup_{i\geqslant n}\varrho^{2i}\varepsilon_i^{2}\leqslant(1-\delta)x\Bigg)\\
&=\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\Bigg)^{2}>x,\ \sup_{i\geqslant n}\varrho^{2i}\varepsilon_i^{2}>(1-\delta)x\Bigg)\\
&\quad+\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\Bigg)^{2}>x,\ \sup_{i\geqslant n}\varrho^{2i}\varepsilon_i^{2}\leqslant(1-\delta)x\Bigg)\\
&\leqslant\mathbb{P}\Big(\sup_{i\geqslant n}\varrho^{2i}\varepsilon_i^{2}>(1-\delta)x\Big)
+\mathbb{P}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\Bigg)^{2}>x\Bigg)\\
&=:P_{1,n}(x,\delta)+P_{2,n}(x,\delta)
\end{aligned}
\]
for each x∈(0,∞), n∈N and δ∈(0,1). By the subadditivity of probability,
\[P_{1,n}(x,\delta)\leqslant\sum_{i=n}^{\infty}\mathbb{P}(\varrho^{2i}\varepsilon_i^{2}>(1-\delta)x)=\sum_{i=n}^{\infty}\mathbb{P}(\varepsilon^{2}>(1-\delta)\varrho^{-2i}x)\]
for each x∈(0,∞), n∈N and δ∈(0,1). Since ε2 is regularly varying with index α2 (see Lemma A.1), using Potter’s upper bound (see Lemma A.6) for δ∈(0,α4), there exists x0∈(0,∞) such that
\[\frac{\mathbb{P}(\varepsilon^{2}>(1-\delta)\varrho^{-2i}x)}{\mathbb{P}(\varepsilon^{2}>x)}<(1+\delta)\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/2+\delta}<(1+\delta)\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/4}\]
if x∈[x0,∞) and (1−δ)ϱ^{−2i}∈[1,∞), which holds for sufficiently large i∈N (due to ϱ∈(0,1)). Consequently, if δ∈(0,α/4), then
\[\lim_{n\to\infty}\limsup_{x\to\infty}\frac{P_{1,n}(x,\delta)}{\mathbb{P}(\varepsilon^{2}>x)}\leqslant\lim_{n\to\infty}\sum_{i=n}^{\infty}(1+\delta)\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/4}=0,\]
since ϱ^{α/2}<1 (due to ϱ∈(0,1)). By Markov’s inequality, for x∈(0,∞), n∈N and δ∈(0,1), we have
\[
\begin{aligned}
\frac{P_{2,n}(x,\delta)}{\mathbb{P}(\varepsilon^{2}>x)}
&\leqslant\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}\mathbb{E}\Bigg(\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\Bigg)^{2}\Bigg)\\
&=\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}\mathbb{E}\Bigg(\sum_{i=n}^{\infty}V_i^{(i)}(\varepsilon_i)^{2}\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\Bigg)\\
&\quad+\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}\mathbb{E}\Bigg(\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}V_i^{(i)}(\varepsilon_i)V_j^{(j)}(\varepsilon_j)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}1_{\{\varepsilon_j^{2}\leqslant(1-\delta)\varrho^{-2j}x\}}\Bigg)\\
&=:J_{2,1,n}(x,\delta)+J_{2,2,n}(x,\delta)
\end{aligned}
\]
for each x∈(0,∞), n∈N and δ∈(0,1). By Lemma 2, (9) and (10) with X0=1 and X−1=0, we have
\[
\begin{aligned}
\mathbb{E}\big(V_i^{(i)}(n)^{2}\big)
&=\mathbb{E}\Bigg(\Bigg(\sum_{j=1}^{n}\zeta_{j,0}^{(i)}\Bigg)^{2}\Bigg)
=\sum_{j=1}^{n}\mathbb{E}\big((\zeta_{j,0}^{(i)})^{2}\big)+\sum_{\substack{j,\ell=1\\ j\ne\ell}}^{n}\mathbb{E}(\zeta_{j,0}^{(i)})\,\mathbb{E}(\zeta_{\ell,0}^{(i)})\\
&\leqslant c_{\mathrm{sub}}\sum_{j=1}^{n}\varrho^{i}+\sum_{\substack{j,\ell=1\\ j\ne\ell}}^{n}\varrho^{i}\varrho^{i}
\leqslant c_{\mathrm{sub}}\,n\varrho^{i}+(n^{2}-n)\varrho^{2i}
\leqslant c_{\mathrm{sub}}\,\varrho^{i}n+\varrho^{2i}n^{2}
\end{aligned}
\]
for i,n∈N. Hence, using that (εi,Vi(i)(εi))=D(εi,∑_{j=1}^{εi}ζj,0(i)) and that εi and {ζj,0(i):j∈N} are independent, we have
\[
\begin{aligned}
J_{2,1,n}(x,\delta)
&=\sum_{i=n}^{\infty}\frac{\mathbb{E}\big(V_i^{(i)}(\varepsilon_i)^{2}\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)}{x\,\mathbb{P}(\varepsilon^{2}>x)}
=\sum_{i=n}^{\infty}\frac{\mathbb{E}\big(\big(\sum_{j=1}^{\varepsilon_i}\zeta_{j,0}^{(i)}\big)^{2}\,1_{\{\varepsilon_i\leqslant(1-\delta)^{1/2}\varrho^{-i}x^{1/2}\}}\big)}{x\,\mathbb{P}(\varepsilon^{2}>x)}\\
&=\sum_{i=n}^{\infty}\frac{\sum_{0\leqslant\ell\leqslant(1-\delta)^{1/2}\varrho^{-i}x^{1/2}}\mathbb{E}\big(\big(\sum_{j=1}^{\ell}\zeta_{j,0}^{(i)}\big)^{2}\big)\,\mathbb{P}(\varepsilon_i=\ell)}{x\,\mathbb{P}(\varepsilon^{2}>x)}\\
&\leqslant\sum_{i=n}^{\infty}\frac{\sum_{0\leqslant\ell\leqslant(1-\delta)^{1/2}\varrho^{-i}x^{1/2}}\big(c_{\mathrm{sub}}\varrho^{i}\ell+\varrho^{2i}\ell^{2}\big)\,\mathbb{P}(\varepsilon=\ell)}{x\,\mathbb{P}(\varepsilon^{2}>x)}\\
&=\sum_{i=n}^{\infty}c_{\mathrm{sub}}\varrho^{i}\,\frac{\mathbb{E}\big(\varepsilon\,1_{\{\varepsilon^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)}{x\,\mathbb{P}(\varepsilon^{2}>x)}
+\sum_{i=n}^{\infty}\varrho^{2i}\,\frac{\mathbb{E}\big(\varepsilon^{2}\,1_{\{\varepsilon^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)}{x\,\mathbb{P}(\varepsilon^{2}>x)}\\
&=:J_{2,1,1,n}(x,\delta)+J_{2,1,2,n}(x,\delta).
\end{aligned}
\]
Since ε^2 is regularly varying with index α/2∈[1/2,1) (see Lemma A.1), by Karamata’s theorem (see Theorem A.1), we have
\[\lim_{y\to\infty}\frac{\int_{0}^{y}\mathbb{P}(\varepsilon^{2}>t)\,\mathrm{d}t}{y\,\mathbb{P}(\varepsilon^{2}>y)}=\frac{1}{1-\alpha/2},\]
thus there exists y0∈(0,∞) such that
\[\int_{0}^{y}\mathbb{P}(\varepsilon^{2}>t)\,\mathrm{d}t\leqslant\frac{2y\,\mathbb{P}(\varepsilon^{2}>y)}{1-\alpha/2},\qquad y\in[y_0,\infty),\]
hence
\[
\begin{aligned}
\mathbb{E}\big(\varepsilon^{2}\,1_{\{\varepsilon^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)
&=\int_{0}^{\infty}\mathbb{P}\big(\varepsilon^{2}\,1_{\{\varepsilon^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}>y\big)\,\mathrm{d}y\\
&=\int_{0}^{(1-\delta)\varrho^{-2i}x}\mathbb{P}(y<\varepsilon^{2}\leqslant(1-\delta)\varrho^{-2i}x)\,\mathrm{d}y\\
&\leqslant\int_{0}^{(1-\delta)\varrho^{-2i}x}\mathbb{P}(\varepsilon^{2}>t)\,\mathrm{d}t
\leqslant\frac{2(1-\delta)\varrho^{-2i}x\,\mathbb{P}(\varepsilon^{2}>(1-\delta)\varrho^{-2i}x)}{1-\alpha/2}
\end{aligned}
\]
whenever (1−δ)ϱ^{−2i}x∈[y0,∞), which holds for i⩾n with sufficiently large n∈N, and x∈[(1−δ)^{−1}ϱ^{2n}y0,∞) due to ϱ∈(0,1). Thus for δ∈(0,α/4), for sufficiently large n∈N (satisfying (1−δ)ϱ^{−2n}∈(1,∞) as well) and for all x∈[max(x0,(1−δ)^{−1}ϱ^{2n}y0),∞), using (17), we obtain
\[
\begin{aligned}
J_{2,1,2,n}(x,\delta)
&\leqslant\frac{2(1-\delta)}{1-\alpha/2}\sum_{i=n}^{\infty}\frac{\mathbb{P}(\varepsilon^{2}>(1-\delta)\varrho^{-2i}x)}{\mathbb{P}(\varepsilon^{2}>x)}\\
&\leqslant\frac{2(1-\delta)}{1-\alpha/2}\sum_{i=n}^{\infty}(1+\delta)\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/4}
=\frac{2(1-\delta^{2})}{1-\alpha/2}\sum_{i=n}^{\infty}\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/4}.
\end{aligned}
\]
Hence for δ∈(0,α/4), we have
\[\lim_{n\to\infty}\limsup_{x\to\infty}J_{2,1,2,n}(x,\delta)\leqslant\frac{2(1-\delta^{2})}{1-\alpha/2}\lim_{n\to\infty}\sum_{i=n}^{\infty}\big[(1-\delta)\varrho^{-2i}\big]^{-\alpha/4}=0,\]
yielding limn→∞lim supx→∞J2,1,2,n(x,δ)=0 for δ∈(0,α/4). Further, if α∈(1,2), or α=1 and mε<∞, we have
\[J_{2,1,1,n}(x,\delta)\leqslant\frac{c_{\mathrm{sub}}\,m_{\varepsilon}\sum_{i=n}^{\infty}\varrho^{i}}{x\,\mathbb{P}(\varepsilon^{2}>x)},\]
and hence, using that limx→∞xP(ε^2>x)=∞ (see Lemma A.2),
\[\lim_{n\to\infty}\limsup_{x\to\infty}J_{2,1,1,n}(x,\delta)\leqslant c_{\mathrm{sub}}\,m_{\varepsilon}\lim_{n\to\infty}\Bigg(\sum_{i=n}^{\infty}\varrho^{i}\Bigg)\limsup_{x\to\infty}\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}=0,\]
yielding limn→∞lim supx→∞J2,1,1,n(x,δ)=0 for δ∈(0,1).
If α=1 and mε=∞, then we have
\[J_{2,1,1,n}(x,\delta)=\sum_{i=n}^{\infty}c_{\mathrm{sub}}\,\varrho^{i}\,\frac{\mathbb{E}\big(\varepsilon\,1_{\{\varepsilon\leqslant(1-\delta)^{1/2}\varrho^{-i}x^{1/2}\}}\big)}{x\,\mathbb{P}(\varepsilon^{2}>x)}\]
for x∈(0,∞), n∈N and δ∈(0,1). Note that
\[\mathbb{E}(\varepsilon\,1_{\{\varepsilon\leqslant y\}})\leqslant\int_{0}^{\infty}\mathbb{P}(\varepsilon\,1_{\{\varepsilon\leqslant y\}}>t)\,\mathrm{d}t=\int_{0}^{y}\mathbb{P}(t<\varepsilon\leqslant y)\,\mathrm{d}t\leqslant\int_{0}^{y}\mathbb{P}(t<\varepsilon)\,\mathrm{d}t=:\widetilde{L}(y)\]
for y∈R+. Because of α=1, Proposition 1.5.9a in Bingham et al. [6] yields that L̃ is a slowly varying function (at infinity). By Potter’s bounds (see Lemma A.6), for every δ∈(0,∞), there exists z0∈(0,∞) such that
\[\frac{\widetilde{L}(y)}{\widetilde{L}(z)}<(1+\delta)\Big(\frac{y}{z}\Big)^{\delta}\]
for z⩾z0 and y⩾z. Hence, for x⩾z0^2, we have
\[\mathbb{E}\big(\varepsilon\,1_{\{\varepsilon\leqslant(1-\delta)^{1/2}\varrho^{-i}x^{1/2}\}}\big)\leqslant\widetilde{L}\big((1-\delta)^{1/2}\varrho^{-i}x^{1/2}\big)\leqslant\widetilde{L}\big(\varrho^{-i}x^{1/2}\big)\leqslant(1+\delta)\varrho^{-i\delta}\widetilde{L}\big(x^{1/2}\big)\]
for i⩾n, where we also used that L̃ is monotone increasing. Using this, we conclude that for every δ∈(0,∞), there exists z0∈(0,∞) such that for x⩾z0^2, we have
\[J_{2,1,1,n}(x,\delta)\leqslant(1+\delta)\,c_{\mathrm{sub}}\,\frac{\widetilde{L}(x^{1/2})}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{i=n}^{\infty}\varrho^{(1-\delta)i}.\]
Here, since ϱ∈(0,1) and δ∈(0,1), we have limn→∞∑_{i=n}^{∞}ϱ^{(1−δ)i}=0, and
\[\frac{\widetilde{L}(\sqrt{x})}{x\,\mathbb{P}(\varepsilon^{2}>x)}=\frac{\widetilde{L}(\sqrt{x})}{x^{1/4}}\cdot\frac{1}{x^{3/4}\,\mathbb{P}(\varepsilon>\sqrt{x})}\to0\qquad\text{as}\ x\to\infty,\]
by Lemma A.2, due to the fact that L̃ is slowly varying and the function (0,∞)∋x↦P(ε>√x) is regularly varying with index −1/2. Hence limn→∞lim supx→∞J2,1,1,n(x,δ)=0 for δ∈(0,1) in case of α=1 and mε=∞.
Consequently, we have limn→∞lim supx→∞J2,1,n(x,δ)=0 for δ∈(0,α/4).
Now we turn to prove limn→∞lim supx→∞J2,2,n(x,δ)=0 for δ∈(0,1). Using that {(εi,Vi(i)(εi)):i∈N} are independent, we have
\[
\begin{aligned}
J_{2,2,n}(x,\delta)\leqslant\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}
&\mathbb{E}\big(V_i^{(i)}(\varepsilon_i)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)\\
&\times\mathbb{E}\big(V_j^{(j)}(\varepsilon_j)\,1_{\{\varepsilon_j^{2}\leqslant(1-\delta)\varrho^{-2j}x\}}\big).
\end{aligned}
\]
Here, using that (εi,Vi(i)(εi))=D(εi,∑_{j=1}^{εi}ζj,0(i)), where εi and {ζj,0(i):j∈N} are independent, and (10) with X0=1 and X−1=0, we have
\[
\begin{aligned}
\mathbb{E}\big(V_i^{(i)}(\varepsilon_i)\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)
&=\mathbb{E}\Bigg(\sum_{j=1}^{\varepsilon_i}\zeta_{j,0}^{(i)}\,1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\Bigg)
=\sum_{\ell=0}^{\lfloor(1-\delta)^{1/2}\varrho^{-i}x^{1/2}\rfloor}\mathbb{E}\Bigg(\sum_{j=1}^{\ell}\zeta_{j,0}^{(i)}\Bigg)\mathbb{P}(\varepsilon_i=\ell)\\
&\leqslant\sum_{\ell=0}^{\lfloor(1-\delta)^{1/2}\varrho^{-i}x^{1/2}\rfloor}\ell\,\varrho^{i}\,\mathbb{P}(\varepsilon_i=\ell)
=\varrho^{i}\,\mathbb{E}\big(\varepsilon_i1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)
\end{aligned}
\]
for x∈(0,∞) and δ∈(0,1). If α∈(1,2), or α=1 and mε<∞, then
\[
\begin{aligned}
J_{2,2,n}(x,\delta)
&\leqslant\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}\varrho^{i+j}\,\mathbb{E}\big(\varepsilon_i1_{\{\varepsilon_i^{2}\leqslant(1-\delta)\varrho^{-2i}x\}}\big)\,\mathbb{E}\big(\varepsilon_j1_{\{\varepsilon_j^{2}\leqslant(1-\delta)\varrho^{-2j}x\}}\big)\\
&\leqslant\frac{m_{\varepsilon}^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}\varrho^{i+j}
\leqslant\frac{m_{\varepsilon}^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}\Bigg(\sum_{i=n}^{\infty}\varrho^{i}\Bigg)^{2}
\end{aligned}
\]
for x∈(0,∞) and δ∈(0,1), and then, by Lemma A.2,
\[\lim_{n\to\infty}\limsup_{x\to\infty}J_{2,2,n}(x,\delta)\leqslant m_{\varepsilon}^{2}\lim_{n\to\infty}\Bigg(\sum_{i=n}^{\infty}\varrho^{i}\Bigg)^{2}\limsup_{x\to\infty}\frac{1}{x\,\mathbb{P}(\varepsilon^{2}>x)}
=m_{\varepsilon}^{2}\Bigg(\lim_{n\to\infty}\frac{\varrho^{2n}}{(1-\varrho)^{2}}\Bigg)\cdot0=0,\]
yielding that limn→∞lim supx→∞J2,2,n(x,δ)=0.
If α=1 and mε=∞, then we can apply the same argument as for J2,1,1,n(x,δ). Namely,
\[
\begin{aligned}
J_{2,2,n}(x,\delta)
&\leqslant\frac{(1+\delta)^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}\varrho^{(1-\delta)(i+j)}\big(\widetilde{L}(x^{1/2})\big)^{2}\\
&=(1+\delta)^{2}\,\frac{\big(\widetilde{L}(x^{1/2})\big)^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}\sum_{\substack{i,j=n\\ i\ne j}}^{\infty}\varrho^{(1-\delta)(i+j)}
\leqslant(1+\delta)^{2}\,\frac{\big(\widetilde{L}(x^{1/2})\big)^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}\Bigg(\sum_{i=n}^{\infty}\varrho^{(1-\delta)i}\Bigg)^{2}
\end{aligned}
\]
for x∈(0,∞) and δ∈(0,1), where
\[\frac{\big(\widetilde{L}(x^{1/2})\big)^{2}}{x\,\mathbb{P}(\varepsilon^{2}>x)}=\Bigg(\frac{\widetilde{L}(x^{1/2})}{x^{1/8}}\Bigg)^{2}\,\frac{1}{x^{3/4}\,\mathbb{P}(\varepsilon>\sqrt{x})}\to0\qquad\text{as}\ x\to\infty,\]
yielding that limn→∞lim supx→∞J2,2,n(x,δ)=0 for δ∈(0,1) in case of α=1 and mε=∞ as well.
Consequently, limn→∞lim supx→∞P2,n(x,δ)/P(ε^2>x)=0 for δ∈(0,α/4), yielding (15) in case of α∈[1,2) as well, and we conclude limn→∞L2,n(q)=0 for all q∈(0,1). The proof can be finished as in the case of α∈(0,1). □
The statement of Theorem 2 remains true in the case when mξ∈(0,1) and mη=0. In this case we get the statement for classical Galton–Watson processes, see Theorem 2.1.1 in Basrak et al. [4] or Theorem 1. However, note that this is not a special case of Theorem 2, since in this case the mean matrix Mξ,η is not primitive. □
Regularly varying distributions
First, we recall the notions of slowly varying and regularly varying functions, respectively.
A measurable function U:(0,∞)→(0,∞) is called regularly varying at infinity with index ρ∈R if for all q∈(0,∞),
\[\lim_{x\to\infty}\frac{U(qx)}{U(x)}=q^{\rho}.\]
In case of ρ=0, U is called slowly varying at infinity.
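As a quick numerical sanity check (an illustration added here, not part of the original text), one can watch the defining ratio converge for a concrete function, say U(x) = x^{−2} log x, which is regularly varying with index ρ = −2 because the logarithmic factor is slowly varying:

```python
import math

# Hypothetical test function: U(x) = x^(-2) * log(x) is regularly varying
# at infinity with index rho = -2 (the logarithmic factor is slowly varying).
def U(x):
    return x ** (-2.0) * math.log(x)

q = 5.0
for x in (1e2, 1e4, 1e6, 1e8):
    ratio = U(q * x) / U(x)
    print(x, ratio)  # tends to q**rho = 5**(-2) = 0.04 as x grows
```

The error term is of order log(q)/log(x), so the convergence is slow but visible already over a few decades of x.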
Next, we recall the notion of regularly varying random variables.
A non-negative random variable X is called regularly varying with index α∈R+ if U(x):=P(X>x)∈(0,∞)x)\in (0,\infty )$]]> for all x∈(0,∞), and U is regularly varying at infinity with index −α.
Lemma A.1. If ζ is a non-negative regularly varying random variable with index α∈R+, then for each c∈(0,∞), ζ^c is regularly varying with index α/c.
Lemma A.2. If L:(0,∞)→(0,∞) is a slowly varying function (at infinity), then
\[\lim_{x\to\infty}x^{\delta}L(x)=\infty,\qquad\lim_{x\to\infty}x^{-\delta}L(x)=0,\qquad\delta\in(0,\infty).\]
For Lemma A.2, see Bingham et al. [6, Proposition 1.3.6. (v)].
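Lemma A.1 can be checked exactly on a Pareto-type tail, a hypothetical choice used here purely for illustration: if P(ζ>x)=x^{−α} for x⩾1, then P(ζ^c>x)=P(ζ>x^{1/c})=x^{−α/c}.

```python
# Exact check of Lemma A.1 on a (hypothetical) Pareto tail P(zeta > x) = x**(-alpha)
# for x >= 1: the power zeta**c then has tail P(zeta**c > x) = x**(-alpha/c),
# i.e. regular variation with index alpha/c.
alpha, c = 1.5, 3.0

def tail_zeta(x):       # P(zeta > x)
    return x ** (-alpha)

def tail_zeta_pow(x):   # P(zeta**c > x) = P(zeta > x**(1/c))
    return tail_zeta(x ** (1.0 / c))

for x in (10.0, 100.0, 1000.0):
    print(tail_zeta_pow(x), x ** (-alpha / c))  # the two columns agree
```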
Lemma A.3. If ε is a non-negative regularly varying random variable with index α∈(0,∞), then ∑_{j=1}^{∞}P(ε=j)log(j)<∞.
Since ∑_{j=1}^{∞}P(ε=j)log(j)⩽E(log(ε+1)), it is enough to prove that E(log(ε+1))<∞. Since log(ε+1)⩾0, we have
\[
\begin{aligned}
\mathbb{E}(\log(\varepsilon+1))
&=\int_{0}^{\infty}\mathbb{P}(\log(\varepsilon+1)\geqslant x)\,\mathrm{d}x
=\int_{0}^{\infty}\mathbb{P}(\varepsilon\geqslant\mathrm{e}^{x}-1)\,\mathrm{d}x\\
&=\int_{0}^{1}\mathbb{P}(\varepsilon\geqslant\mathrm{e}^{x}-1)\,\mathrm{d}x+\int_{1}^{\infty}\mathbb{P}(\varepsilon\geqslant\mathrm{e}^{x}-1)\,\mathrm{d}x=:I_{1}+I_{2}.
\end{aligned}
\]
Here I1⩽1, and, by the substitution y=e^x−1,
\[I_{2}=\int_{\mathrm{e}-1}^{\infty}y^{-\alpha}L(y)\,\frac{1}{1+y}\,\mathrm{d}y,\]
where L(y):=y^{α}P(ε>y), y∈(0,∞), is a slowly varying function. By Lemma A.2, there exists y0∈(e−1,∞) such that y^{−α/2}L(y)⩽1 for all y∈[y0,∞). Hence
\[
\begin{aligned}
I_{2}&=\int_{\mathrm{e}-1}^{y_{0}}y^{-\alpha}L(y)\,\frac{1}{1+y}\,\mathrm{d}y+\int_{y_{0}}^{\infty}y^{-\alpha}L(y)\,\frac{1}{1+y}\,\mathrm{d}y\\
&\leqslant\int_{\mathrm{e}-1}^{y_{0}}y^{-\alpha}L(y)\,\frac{1}{1+y}\,\mathrm{d}y+\int_{y_{0}}^{\infty}y^{-\alpha/2}\,\frac{1}{1+y}\,\mathrm{d}y\\
&\leqslant\int_{\mathrm{e}-1}^{y_{0}}y^{-\alpha}L(y)\,\frac{1}{1+y}\,\mathrm{d}y+\int_{y_{0}}^{\infty}y^{-\alpha/2-1}\,\mathrm{d}y\\
&\leqslant\int_{\mathrm{e}-1}^{y_{0}}\frac{1}{1+y}\,\mathrm{d}y+\int_{y_{0}}^{\infty}y^{-\alpha/2-1}\,\mathrm{d}y<\infty,
\end{aligned}
\]
since y^{−α}L(y)=P(ε>y)⩽1 for all y∈(0,∞). □
Lemma A.4. If X1 and X2 are non-negative regularly varying random variables with index α1∈R+ and α2∈R+, respectively, such that α1<α2, then P(X2>x)=o(P(X1>x)) as x→∞.
For a proof of Lemma A.4, see, e.g., Barczy et al. [3, Lemma C.7].
Lemma A.5 (Convolution property).
(i) If X1 and X2 are non-negative random variables such that X1 is regularly varying with index α1∈R+ and P(X2>x)=o(P(X1>x)) as x→∞, then P(X1+X2>x)∼P(X1>x) as x→∞, and hence X1+X2 is regularly varying with index α1.
(ii) If X1 and X2 are independent non-negative regularly varying random variables with index α1∈R+ and α2∈R+, respectively, then
\[\mathbb{P}(X_1+X_2>x)\sim\begin{cases}\mathbb{P}(X_1>x)&\text{if}\ \alpha_1<\alpha_2,\\ \mathbb{P}(X_1>x)+\mathbb{P}(X_2>x)&\text{if}\ \alpha_1=\alpha_2,\\ \mathbb{P}(X_2>x)&\text{if}\ \alpha_1>\alpha_2,\end{cases}\]
as x→∞, and hence X1+X2 is regularly varying with index min{α1,α2}.
The statements of Lemma A.5 follow, e.g., from parts 1 and 3 of Lemma B.6.1 of Buraczewski et al. [7] and Lemma A.4 together with the fact that the sum of two slowly varying functions is slowly varying.
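Part (ii) of the convolution property can be sketched numerically in the boundary case α1=α2=1, using two hypothetical independent standard Pareto variables on [1,∞), i.e. P(X>s)=1/s with density f(t)=t^{−2}. For x⩾2 one has the exact decomposition P(X1+X2>x)=P(X1>x−1)+∫₁^{x−1}f(t)P(X2>x−t)dt, which a midpoint rule evaluates directly:

```python
# Numerical sketch of Lemma A.5(ii) for two independent standard Pareto
# variables on [1, inf): P(X > s) = 1/s, density f(t) = t**(-2).
# For x >= 2:  P(X1 + X2 > x) = P(X1 > x-1) + int_1^{x-1} f(t) P(X2 > x-t) dt,
# and the lemma predicts P(X1 + X2 > x) ~ P(X1 > x) + P(X2 > x) = 2/x.
def tail_sum(x, n=200_000):
    a, b = 1.0, x - 1.0
    h = (b - a) / n
    integral = sum(
        (a + (k + 0.5) * h) ** (-2.0) / (x - (a + (k + 0.5) * h))
        for k in range(n)
    ) * h
    return 1.0 / (x - 1.0) + integral

x = 1000.0
print(tail_sum(x) / (2.0 / x))  # close to 1, as the convolution property predicts
```

The lower-order correction is of size log(x)/x, so at x = 1000 the ratio is already within about one percent of 1.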
Theorem A.1 (Karamata’s theorem). Let U:(0,∞)→(0,∞) be a locally integrable function that is integrable in a neighbourhood of 0 as well.
(i) If U is regularly varying (at infinity) with index −α∈[−1,∞), then (0,∞)∋x↦∫_0^x U(t)dt is regularly varying (at infinity) with index 1−α, and
\[\lim_{x\to\infty}\frac{xU(x)}{\int_{0}^{x}U(t)\,\mathrm{d}t}=1-\alpha.\]
(ii) If U is regularly varying (at infinity) with index −α∈(−∞,−1), then (0,∞)∋x↦∫_x^∞ U(t)dt is regularly varying (at infinity) with index 1−α, and
\[\lim_{x\to\infty}\frac{xU(x)}{\int_{x}^{\infty}U(t)\,\mathrm{d}t}=\alpha-1.\]
For Theorem A.1, see, e.g., Resnick [18, Theorem 2.1].
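Part (i) of Karamata's theorem can be checked numerically on the hypothetical choice U(t)=t^{−1/2} (so α=1/2): here ∫₀ˣU(t)dt = 2√x exactly, and the ratio xU(x)/∫₀ˣU(t)dt equals 1/2 = 1−α for every x.

```python
# Karamata's theorem, part (i), on U(t) = t**(-1/2), i.e. alpha = 1/2:
# int_0^x U(t) dt = 2*sqrt(x), so x*U(x) / int_0^x U(t) dt = 1/2 = 1 - alpha.
alpha = 0.5

def U(t):
    return t ** (-alpha)

def integral_0_to_x(x, n=100_000):
    # midpoint rule; the singularity of U at 0 is integrable
    h = x / n
    return sum(U((k + 0.5) * h) for k in range(n)) * h

x = 50.0
ratio = x * U(x) / integral_0_to_x(x)
print(ratio)  # approximately 0.5 = 1 - alpha
```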
Lemma A.6 (Potter’s bounds). If U:(0,∞)→(0,∞) is a regularly varying function (at infinity) with index −α∈R, then for every δ∈(0,∞), there exists x0∈R+ such that
\[(1-\delta)\,q^{-\alpha-\delta}<\frac{U(qx)}{U(x)}<(1+\delta)\,q^{-\alpha+\delta},\qquad x\in[x_0,\infty),\ q\in[1,\infty).\]
For Lemma A.6, see, e.g., Resnick [18, Proposition 2.6].
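Potter's bounds can be illustrated on the hypothetical function U(x)=log(x)/x, which is regularly varying with index −1 (α=1): with δ=0.1 and x large enough (playing the role of x0), the two-sided bound holds for a range of q⩾1.

```python
# Potter's bounds (Lemma A.6) illustrated for U(x) = log(x)/x, regularly
# varying with index -1 (alpha = 1). With delta = 0.1 the bounds
# (1-delta) q**(-alpha-delta) < U(qx)/U(x) < (1+delta) q**(-alpha+delta)
# hold for all q >= 1 once x is large enough.
import math

alpha, delta = 1.0, 0.1

def U(x):
    return math.log(x) / x

x = 1e6  # plays the role of x0 in the lemma: "sufficiently large"
for q in (1.0, 2.0, 10.0, 100.0):
    ratio = U(q * x) / U(x)
    lower = (1 - delta) * q ** (-alpha - delta)
    upper = (1 + delta) * q ** (-alpha + delta)
    assert lower < ratio < upper
    print(q, lower, ratio, upper)
```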
Finally, we recall a result on the tail behaviour of regularly varying random sums.
Proposition A.1. Let τ be a non-negative integer-valued random variable and let {ζ,ζi:i∈N} be independent and identically distributed non-negative random variables, independent of τ, such that τ is regularly varying with index β∈R+ and E(ζ)∈(0,∞). In case of β∈[1,∞), assume additionally that there exists r∈(β,∞) with E(ζ^r)<∞. Then
\[\mathbb{P}\Bigg(\sum_{i=1}^{\tau}\zeta_i>x\Bigg)\sim\mathbb{P}\bigg(\tau>\frac{x}{\mathbb{E}(\zeta)}\bigg)\sim(\mathbb{E}(\zeta))^{\beta}\,\mathbb{P}(\tau>x)\qquad\text{as}\ x\to\infty,\]
and hence ∑_{i=1}^{τ}ζi is also regularly varying with index β.
For a proof of Proposition A.1, see, e.g., Barczy et al. [3, Proposition F.3].
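Proposition A.1 lends itself to a small Monte Carlo experiment (an illustrative setup assumed here, not taken from the paper): take τ with a Pareto-type tail of index β=1.5 and ζ uniform on {1,2,3}, so E(ζ)=2 and all moments of ζ are finite; the tail of the random sum should then match the tail of τ at the rescaled level x/E(ζ).

```python
import random

# Monte Carlo sketch of Proposition A.1 (illustrative parameters, assumed here):
# tau integer-valued with P(tau > k) ~ k**(-beta), beta = 1.5, and zeta i.i.d.
# uniform on {1, 2, 3} with E(zeta) = 2, independent of tau. The proposition
# predicts P(sum_{i=1}^{tau} zeta_i > x) ~ P(tau > x / 2) for large x.
random.seed(0)
beta, n, x = 1.5, 200_000, 100.0

def sample_tau():
    u = 1.0 - random.random()        # uniform on (0, 1]
    return int(u ** (-1.0 / beta))   # P(tau > k) = (k + 1)**(-beta)

hits_sum = hits_tau = 0
for _ in range(n):
    tau = sample_tau()
    s = sum(random.choice((1, 2, 3)) for _ in range(tau))
    hits_sum += s > x
    hits_tau += tau > x / 2.0

print(hits_sum / n, hits_tau / n)  # the two tail estimates are of the same order
```

Because heavy-tail estimates converge slowly, the two frequencies agree only up to sampling noise and a finite-x bias, but they are clearly of the same order of magnitude.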
Acknowledgments
We would like to thank the referee and Prof. Yuliya Mishura, Co-editor-in-chief, for their comments that helped us to improve the paper.
References
[1] Athreya, K.B., Ney, P.E.: Branching Processes. Dover Publications, Inc., Mineola, NY (2004). Reprint of the 1972 original [Springer, New York; MR0373040]. MR2047480
[2] Barczy, M., Bősze, Zs., Pap, G.: On tail behaviour of stationary second-order Galton–Watson processes with immigration (2018). arXiv:1801.07931
[3] Barczy, M., Bősze, Zs., Pap, G.: Regularly varying non-stationary Galton–Watson processes with immigration. Statist. Probab. Lett. 140, 106–114 (2018). MR3812257. https://doi.org/10.1016/j.spl.2018.05.010
[4] Basrak, B., Kulik, R., Palmowski, Z.: Heavy-tailed branching process with immigration. Stoch. Models 29(4), 413–434 (2013). MR3175851. https://doi.org/10.1080/15326349.2013.838508
[5] Bősze, Zs., Pap, G.: Regularly varying nonstationary second-order Galton–Watson processes with immigration. Stoch. Models 35(2), 132–147 (2019). MR3969511. https://doi.org/10.1080/15326349.2019.1572520
[6] Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Encyclopedia of Mathematics and its Applications, vol. 27. Cambridge University Press, Cambridge (1987). MR0898871. https://doi.org/10.1017/CBO9780511721434
[7] Buraczewski, D., Damek, E., Mikosch, T.: Stochastic Models with Power-Law Tails. Springer Series in Operations Research and Financial Engineering. Springer (2016). MR3497380. https://doi.org/10.1007/978-3-319-29679-1
[8] Du, J.G., Li, Y.: The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 12(2), 129–142 (1991). MR1108796. https://doi.org/10.1111/j.1467-9892.1991.tb00073.x
[9] Heyer, H.: Structural Aspects in the Theory of Probability, 2nd edn. Series on Multivariate Analysis, vol. 8. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2010). With an additional chapter by Gyula Pap. MR2568013
[10] Hult, H., Samorodnitsky, G.: Tail probabilities for infinite series of regularly varying random vectors. Bernoulli 14(3), 838–864 (2008). MR2537814. https://doi.org/10.3150/08-BEJ125
[11] Kashikar, A.S.: Estimation of growth rate in second order branching process. J. Statist. Plann. Inference 191, 1–12 (2017). MR3679105. https://doi.org/10.1016/j.jspi.2017.06.003
[12] Kashikar, A.S., Deshmukh, S.R.: Probabilistic properties of second order branching process. Ann. Inst. Statist. Math. 67(3), 557–572 (2015). MR3339191. https://doi.org/10.1007/s10463-014-0462-0
[13] Kashikar, A.S., Deshmukh, S.R.: Estimation in second order branching processes with application to swine flu data. Comm. Statist. Theory Methods 45(4), 1031–1046 (2016). MR3459412. https://doi.org/10.1080/03610926.2013.853796
[14] Latour, A.: The multivariate GINAR(p) process. Adv. Appl. Probab. 29(1), 228–248 (1997). MR1432938. https://doi.org/10.2307/1427868
[15] Pénisson, S.: Estimation of the infection parameter of an epidemic modeled by a branching process. Electron. J. Stat. 8(2), 2158–2187 (2014). MR3273622. https://doi.org/10.1214/14-EJS948
[16] Pénisson, S., Jacob, C.: Stochastic methodology for the study of an epidemic decay phase, based on a branching model. Int. J. Stoch. Anal., Article ID 598701, 32 pages (2012). MR2999458. https://doi.org/10.1155/2012/598701
[17] Quine, M.P.: The multi-type Galton–Watson process with immigration. J. Appl. Probab. 7, 411–422 (1970). MR0263168. https://doi.org/10.2307/3211974
[18] Resnick, S.I.: Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer Series in Operations Research and Financial Engineering. Springer (2007). MR2271424