Modern Stochastics: Theory and Applications (VMSTA), Vol. 6, No. 1 (2019), 109–131. ISSN 2351-6046 / 2351-6054. doi:10.15559/18-VMSTA123. Published by VTeX, Mokslininkų g. 2A, 08412 Vilnius, Lithuania.

Research Article

Asymptotics for the sum of three-state Markov dependent random variables

Gabija Liaudanskaitė (gabija.liaudanskaite@mif.stud.vu.lt), Vydas Čekanavičius (vydas.cekanavicius@mif.vu.lt, corresponding author)
Faculty of Mathematics and Informatics, Vilnius University, Naugardukas str. 24, LT-03225, Vilnius, Lithuania

Received 10 August 2018; revised 11 October 2018; accepted 27 October 2018; published online 19 November 2018.
© 2019 The Author(s). Open access article under the CC BY license.

The insurance model in which the amount of claims depends on the state of the insured person (healthy, ill, or dead) and the claims are connected in a Markov chain is investigated. The signed compound Poisson approximation is applied to the aggregate claims distribution after $n \in \mathbb{N}$ periods. Accuracy of order $O(n^{-1})$ and $O(n^{-1/2})$ is obtained for the local and uniform norms, respectively. In a particular case, the accuracy of the estimates in total variation, as well as of the non-uniform estimates, is shown to be at least of order $O(n^{-1})$. The characteristic function method is used. The results can be applied to estimate the probable loss of an insurer and to optimize the insurance premium.

Keywords: signed compound Poisson approximation; insurance model; Markov chain; Kolmogorov norm; local norm; total variation norm; non-uniform estimate. MSC: 60J10.
Introduction

This paper is motivated by the insurance model in which the insured is described by a random variable (rv) with three states (healthy, ill, dead), and the rvs are connected in a Markov chain. We assume that the insurer pays one unit of money in the case of illness and continuously pays $d \in \mathbb{N}$ units in the case of death. We are interested in the aggregate losses of the insurer after $n \in \mathbb{N}$ time periods. More precisely, let $\xi_0, \xi_1, \dots, \xi_n, \dots$ be a non-stationary three-state $\{a_1, a_2, a_3\}$ Markov chain. State $a_1$ corresponds to being healthy, state $a_2$ corresponds to being ill, and state $a_3$ is reached in the case of death. The insurer pays nothing for healthy policy holders, one unit of money for ill individuals, and constantly pays $d$ units of money ($d \in \mathbb{N}$) in the case of death. We denote the distribution of $S_n = f(\xi_1) + \cdots + f(\xi_n)$ ($n \in \mathbb{N}$) by $F_n$, that is, $P(S_n = m) = F_n\{m\}$ for $m \in \mathbb{Z}$. Here $f(a_1) = 0$, $f(a_2) = 1$, $f(a_3) = d$, $d \in \mathbb{N}$. We analyze a slightly simplified model by assuming that the probability of a healthy person dying is equal to zero (i.e. we exclude the cases of sudden death). Even though this assumption diminishes the model's universality, it is quite reasonable, because usually a person is ill for at least one time period and dies only afterwards.

The matrix of transition probabilities $P$ is defined in the following way:
\[
P = \begin{pmatrix} 1-\gamma & \gamma & 0 \\ 1-\alpha-\beta & \beta & \alpha \\ 0 & 0 & 1 \end{pmatrix}, \qquad \alpha, \beta, \gamma \in (0,1).
\]

It is assumed that at the beginning the insured person is healthy. Hence, the initial distribution is given by $P(\xi_0 = a_1) = \pi_1 = 1$, $P(\xi_0 = a_2) = \pi_2 = 0$, $P(\xi_0 = a_3) = \pi_3 = 0$. Observe that our Markov chain contains one absorbing state (death).
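For intuition, the model above is easy to simulate directly. The following minimal sketch (all parameter values and function names are ours, chosen for illustration only and not prescribed by the paper) draws trajectories of the chain started in the healthy state and accumulates the aggregate claim $S_n$:

```python
import random

def simulate_S_n(n, alpha, beta, gamma, d, rng=random.Random(0)):
    """One trajectory of the chain started healthy; returns the aggregate claim S_n."""
    state = 0  # 0 = healthy (a1), 1 = ill (a2), 2 = dead (a3, absorbing)
    s = 0
    for _ in range(n):
        if state == 0:   # healthy -> ill with probability gamma
            state = 1 if rng.random() < gamma else 0
        elif state == 1: # ill -> dead (alpha), stays ill (beta), recovers otherwise
            u = rng.random()
            state = 2 if u < alpha else (1 if u < alpha + beta else 0)
        # payments per period: f(a1) = 0, f(a2) = 1, f(a3) = d
        s += (0, 1, d)[state]
    return s

# 2000 simulated values of S_50 with illustrative parameters
draws = [simulate_S_n(50, 0.3, 0.1, 0.04, 2) for _ in range(2000)]
```

The empirical distribution of `draws` is a crude stand-in for $F_n$ and can be compared against any approximation of it.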

In this paper, we consider triangular arrays of rvs (the scheme of series), i.e. all transition probabilities $\alpha, \beta, \gamma$ can depend on $n \in \mathbb{N}$. Arguably, in insurance models triangular arrays are more natural than the more frequently studied, less general scheme of sequences, in which it is assumed that the probability of becoming ill or dying does not change as time passes.

All results are obtained under the condition
\[
0 < \beta \leq 0.15, \qquad 0 < \gamma \leq 0.05, \qquad \alpha \leq C_0 < 1, \qquad \alpha + \beta < 1. \tag{1}
\]

Here $C_0 \in (0,1)$ is any maximum possible value of $\alpha(n)$, $n \in \mathbb{N}$ (strictly less than 1), i.e. the maximum probability of an ill individual dying, over all time periods $n \in \mathbb{N}$. Condition (1) is not very restrictive: $\beta \leq 0.15$ means that the probability of remaining ill during the next time period does not exceed 15%, and $\gamma \leq 0.05$ means that the probability of a healthy person becoming ill does not exceed 5%; that is, only chronic and epidemic illnesses are excluded.

We denote by $C$ all positive absolute constants, and by $\theta$ any complex number satisfying $|\theta| \leq 1$. The values of $C$ and $\theta$ can vary from line to line, or even within the same line. Sometimes, as in (1), we supply constants with indices. Let $I_k$ denote the distribution concentrated at an integer $k \in \mathbb{Z}$, and set $I = I_0$. Let $\mathcal{M}_{\mathbb{Z}}$ be the set of finite signed measures concentrated on $\mathbb{Z}$. The Fourier transform and the analogue of the distribution function of $M \in \mathcal{M}_{\mathbb{Z}}$ are denoted by $\hat{M}(t)$ ($t \in \mathbb{R}$) and $M(x) := \sum_{j=-\infty}^{x} M\{j\}$, respectively. Similarly, $F_n(x) := F_n\{(-\infty, x]\}$. For $y \in \mathbb{R}$ and $j \in \mathbb{N} = \{1, 2, 3, \dots\}$, we set
\[
\binom{y}{j} := \frac{1}{j!}\, y(y-1)\cdots(y-j+1), \qquad \binom{y}{0} := 1.
\]

If $N, M \in \mathcal{M}_{\mathbb{Z}}$, then products and powers of $N$ and $M$ are understood in the convolution sense, that is, for a set $A \subseteq \mathbb{Z}$,
\[
NM\{A\} = \sum_{k=-\infty}^{\infty} N\{A-k\}\, M\{k\}, \qquad M^0 = I.
\]
The exponential of $M$ is denoted by
\[
e^M = \exp\{M\} := \sum_{k=0}^{\infty} \frac{1}{k!}\, M^k.
\]
We define the local norm, the uniform (Kolmogorov) norm, and the total variation norm of $M$, respectively, by
\[
\|M\|_\infty := \sup_{k \in \mathbb{Z}} |M\{k\}|, \qquad |M|_K := \sup_{x \in \mathbb{R}} \big|M\{(-\infty, x]\}\big|, \qquad \|M\| := \sum_{j=-\infty}^{\infty} |M\{j\}|.
\]
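The convolution product and the three norms above can be sketched concretely for finitely supported signed measures; in this illustrative snippet (the dictionary representation and the function names are ours, not the paper's), a measure on $\mathbb{Z}$ is a `{point: mass}` map:

```python
from collections import defaultdict

def convolve(M, N):
    """Convolution of two finite signed measures on Z given as {point: mass} dicts."""
    out = defaultdict(float)
    for j, mj in M.items():
        for k, nk in N.items():
            out[j + k] += mj * nk
    return dict(out)

def local_norm(M):
    """sup_k |M{k}|."""
    return max(abs(m) for m in M.values())

def total_variation(M):
    """sum_j |M{j}|."""
    return sum(abs(m) for m in M.values())

def kolmogorov(M):
    """sup_x |M{(-inf, x]}| via running partial sums."""
    s, best = 0.0, 0.0
    for k in sorted(M):
        s += M[k]
        best = max(best, abs(s))
    return best

# Example: B = (1-p) I_0 + p I_1 (a Bernoulli law); B*B is binomial(2, p)
p = 0.3
B = {0: 1 - p, 1: p}
B2 = convolve(B, B)
```

For a probability distribution such as `B2`, the total variation and Kolmogorov norms both equal 1, while the local norm picks out the largest point mass.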

In the proofs, we apply the following well-known relations:
\[
\widehat{MN}(t) = \hat{M}(t)\hat{N}(t), \qquad \|MN\|_\infty \leq \|M\|\,\|N\|_\infty, \qquad |MN|_K \leq \|M\|\,|N|_K, \qquad \|MN\| \leq \|M\|\,\|N\|,
\]
\[
|\hat{M}(t)| \leq \|M\|, \qquad \hat{I}_a(t) = e^{ita}, \qquad \hat{I}(t) = 1.
\]

Known results

The compound Poisson approximation is frequently used to approximate aggregate losses in risk models (see, for example, ); however, in those models it is usually assumed that the rvs are independent of the time period $n \in \mathbb{N}$. The compound Poisson approximation to sums of Markov dependent rvs was investigated in . Numerous papers were devoted to the Markov binomial distribution; see  and the references therein. It seems, however, that the case of a Markov chain containing an absorbing state has not been considered so far. Our research is closely related to the paper , in which a non-stationary three-state symmetric Markov chain $\xi_0, \xi_1, \dots, \xi_n, \dots$ was investigated with the matrix of transition probabilities
\[
\begin{pmatrix} a & 1-2a & a \\ b & 1-2b & b \\ a & 1-2a & a \end{pmatrix}, \qquad a, b \in (0, 0.5).
\]

Let $\tilde{S}_n = \tilde{f}(\xi_1) + \cdots + \tilde{f}(\xi_n)$ ($n \in \mathbb{N}$), $\tilde{f}(a_1) = -1$, $\tilde{f}(a_2) = 0$, $\tilde{f}(a_3) = 1$, and let the initial distribution be $P(\xi_0 = a_1) = \pi_1$, $P(\xi_0 = a_2) = \pi_2$, and $P(\xi_0 = a_3) = \pi_3$. Denote the distribution of $\tilde{S}_n$ by $\tilde{F}_n$, and let $\tilde{G}$ be the measure with the Fourier transform
\[
\tilde{g}(t) = \bigg( \pi_1 + \frac{1-2a\cos t}{1-2a}\,\pi_2 + \pi_3 \bigg)\, \frac{1-2(a-b)}{1-2(a-b)-2a(\cos t - 1)}\, \exp\bigg\{ \frac{2nb(1-2a)(\cos t - 1)}{(1-2a+2b)(1-2a\cos t)} \bigg\}.
\]
As shown in , if $a, b \leq 1/30$, then
\[
\big\|\tilde{F}_n - \tilde{G}\big\| \leq C\big( \min\{ n^{-1}, b \} + 0.2^n\, |a-b| \big). \tag{2}
\]

The main part of the approximation $\tilde{G}$ is a compound Poisson distribution with a compounding symmetrized geometric distribution. The accuracy of approximation is at least $O(n^{-1})$. However, due to the symmetry of the distribution and its possible negative values, it is difficult to find a compatible insurance model.

Measures used for approximation

For convenience, we present all Fourier transforms of the measures used in the construction of the approximations in a separate table (Table 1). Note that all measures are denoted by the same capital letters as their Fourier transforms (for example, $\hat{H}(t)$ is the Fourier transform of $H$).

The measures can be easily found from their Fourier transforms using the formula
\[
M\{k\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-ikt}\, \hat{M}(t)\, dt \qquad \text{for all } k \in \mathbb{Z}.
\]
For example, $\hat{H}(t) = \dfrac{(1-\beta)e^{it}}{1-\beta e^{it}}$.

Since $\hat{I}_a(t) = e^{ita}$, for all $k \in \mathbb{Z}$ we have
\[
H\{k\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\, \frac{(1-\beta)e^{it}}{1-\beta e^{it}}\, dt
= \frac{1-\beta}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\, e^{it} \sum_{j=0}^{\infty} \big(\beta e^{it}\big)^j\, dt
\]
\[
= (1-\beta) \sum_{j=0}^{\infty} \beta^j\, \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\, e^{(j+1)it}\, dt
= (1-\beta) \sum_{j=0}^{\infty} \beta^j\, I_{j+1}\{k\},
\]
that is, $H\{k\} = (1-\beta)\beta^{k-1}$ for $k \geq 1$.
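The geometric form of $H$ can be sanity-checked by evaluating the inversion integral numerically; the sketch below (function names ours; $\beta = 0.15$ is an illustrative value) approximates $\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikt}\hat{H}(t)\,dt$ with a midpoint rule, which converges very fast for smooth periodic integrands:

```python
import cmath, math

def H_hat(t, beta):
    """Fourier transform of the geometric measure H."""
    return (1 - beta) * cmath.exp(1j * t) / (1 - beta * cmath.exp(1j * t))

def invert(M_hat, k, steps=20000):
    """(1/2pi) * integral_{-pi}^{pi} e^{-ikt} M_hat(t) dt, midpoint rule."""
    h = 2 * math.pi / steps
    total = 0.0
    for m in range(steps):
        t = -math.pi + (m + 0.5) * h
        total += (cmath.exp(-1j * k * t) * M_hat(t)).real
    return total * h / (2 * math.pi)

beta = 0.15
vals = [invert(lambda t: H_hat(t, beta), k) for k in range(1, 6)]
```

For $k \geq 1$ the recovered masses agree with $(1-\beta)\beta^{k-1}$, and the mass at $k = 0$ vanishes.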

The other measures can be calculated analogously using their Fourier transforms presented in Table 1.

Table 1. Fourier transforms of the used measures.

\[ \hat{H}(t) = \frac{(1-\beta)e^{it}}{1-\beta e^{it}}, \qquad \hat{H}(t)-1 = \frac{e^{it}-1}{1-\beta e^{it}}, \qquad \hat{\Psi}(t) = \frac{(1-\alpha-\beta)e^{it}}{1-\beta e^{it}}, \qquad \hat{\Psi}(t)-1 = \frac{(1-\alpha)e^{it}-1}{1-\beta e^{it}}, \]
\[ \hat{U}(t) = (1-\alpha)e^{it}-1, \qquad \hat{\Delta}(t) = 1 + \hat{A}_1(t)\gamma, \qquad \hat{\Delta}_1(t) = 1 + \hat{A}_1(t)\gamma + \big(\hat{A}_2(t)+\hat{A}_4(t)\big)\gamma^2, \]
\[ \hat{A}_1(t) = \frac{1-\beta}{1+\gamma-\beta}\big(\hat{\Psi}(t)-1\big), \qquad \hat{A}_2(t) = \frac{-\beta(1-\beta)}{(1+\gamma-\beta)^2}\big(\hat{H}(t)-1\big)\big(\hat{\Psi}(t)-1\big), \]
\[ \hat{A}_3(t) = \frac{\beta^2(1-\beta)\big(\hat{H}(t)-1\big)^2\big(\hat{\Psi}(t)-1\big)}{(1+\gamma-\beta)^3}, \qquad \hat{A}_4(t) = \frac{-(1-\beta)^3\big(\hat{\Psi}(t)-1\big)^2}{(1+\gamma-\beta)^3\big(1-\beta e^{it}\big)}, \]
\[ \hat{A}_5(t) = \frac{3\beta(1-\beta)^3\big(\hat{\Psi}(t)-1\big)^2\big(\hat{H}(t)-1\big)}{(1+\gamma-\beta)^4\big(1-\beta e^{it}\big)}, \qquad \hat{A}_6(t) = \frac{2(1-\beta)^5\big(\hat{\Psi}(t)-1\big)^3}{(1+\gamma-\beta)^5\big(1-\beta e^{it}\big)^2}, \]
\[ \hat{A}(t) = 1 + \hat{A}_1(t)\gamma + \hat{A}_2(t)\gamma^2 + \hat{A}_3(t)\gamma^3 + \hat{A}_4(t)\gamma^2 + \hat{A}_5(t)\gamma^3 + \hat{A}_6(t)\gamma^3, \]
\[ \hat{V}(t) = \frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat{\Delta}(t)}{\big(\hat{A}(t)-e^{dit}\big)\big(2\hat{\Delta}(t)-1+\gamma-\beta e^{it}\big)} + \frac{\big(e^{it}-1\big)\big[\gamma\hat{\Delta}(t)-\beta+\gamma(1-\alpha)\big]}{\big(\hat{A}(t)-e^{dit}\big)\big(2\hat{\Delta}(t)-1+\gamma-\beta e^{it}\big)}, \]
\[ \hat{V}_1(t) = \frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat{\Delta}(t)}{\big(\hat{\Delta}_1(t)-e^{dit}\big)\big(2\hat{\Delta}(t)-1+\gamma-\beta e^{it}\big)} + \frac{\big(e^{it}-1\big)\big[\gamma\hat{\Delta}(t)-\beta+\gamma(1-\alpha)\big]}{\big(\hat{\Delta}_1(t)-e^{dit}\big)\big(2\hat{\Delta}(t)-1+\gamma-\beta e^{it}\big)}, \]
\[ \hat{V}_2(t) = \frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat{\Delta}(t)}{\big(\hat{\Delta}_1(t)-e^{dit}\big)\big(2\hat{\Delta}_1(t)-1+\gamma-\beta e^{it}\big)} + \frac{\big(e^{it}-1\big)\big[\gamma\hat{\Delta}(t)-\beta+\gamma(1-\alpha)\big]}{\big(\hat{\Delta}_1(t)-e^{dit}\big)\big(2\hat{\Delta}_1(t)-1+\gamma-\beta e^{it}\big)}, \]
\[ \hat{G}(t) = \exp\bigg\{ \hat{A}(t)-1-\frac{1}{2}\big(\hat{A}_1^2(t)\gamma^2 + 2\hat{A}_1(t)\big(\hat{A}_2(t)+\hat{A}_4(t)\big)\gamma^3\big) + \frac{1}{3}\hat{A}_1^3(t)\gamma^3 \bigg\}, \]
\[ \hat{G}_1(t) = \exp\bigg\{ \hat{A}_1(t)\gamma + \Big(\hat{A}_2(t)+\hat{A}_4(t)-\frac{1}{2}\hat{A}_1^2(t)\Big)\gamma^2 \bigg\}, \]
\[ \hat{E}(t) = \frac{\alpha\gamma\, e^{(n+1)dit}}{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big)-\gamma(1-\alpha-\beta)}. \]
Results

We analyze the scheme of series, when transition probabilities may differ from one time period to another, that is, when they depend on $n \in \mathbb{N}$: $\alpha = \alpha(n)$, $\beta = \beta(n)$, $\gamma = \gamma(n)$. First, we formulate a general approximation result for $F_n$, in which the possible smallness of $\alpha$ and $\gamma$ is taken into account.

Theorem 1. Let condition (1) hold. Then, for all $n = 1, 2, \dots$,
\[
\big|F_n - \big(G^n V + E\big)\big|_K \leq C(d+1)\bigg( \sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha} + (\beta+4\gamma)^n \bigg), \tag{3}
\]
\[
\big\|F_n - \big(G^n V + E\big)\big\|_\infty \leq C(d+1)\bigg( \frac{e^{-Cn\gamma\alpha}}{n} + (\beta+4\gamma)^n \bigg). \tag{4}
\]

Observe that, since $\beta + 4\gamma \leq 0.35$, the second term in (4) tends to zero exponentially.

Unlike (2), our approximation has two components: the first contains the $n$-fold convolution of a signed compound Poisson measure; the second takes into account the probability of death (the absorbing state). The measures of approximation are chosen so that the accuracy of approximation is at least as good as in the Berry–Esseen theorem.

Corollary 1. Let condition (1) hold. Then, for all $n = 1, 2, \dots$,
\[
\big|F_n - \big(G^n V + E\big)\big|_K \leq \frac{C(d+1)}{\sqrt{n}}.
\]

This accuracy is reached when $\gamma\alpha = O(n^{-1})$. If $\alpha, \gamma \geq C_1 > 0$, the accuracy of approximation is exponentially sharp. That prompts a question: is it possible to simplify the structure of the approximation by imposing more restrictive assumptions? The answer is positive when $\alpha$ is uniformly separated from zero for all $n$.

Theorem 2. Let condition (1) hold and $\alpha \geq C_2$. Then, for all $n = 1, 2, \dots$,
\[
\big|F_n - \big(G_1^n V_1 + E\big)\big|_K \leq C(d+1)\big( \gamma e^{-Cn\gamma} + (\beta+4\gamma)^n \big). \tag{5}
\]

Observe that the accuracy of approximation in (5) is at least of order $O(n^{-1})$. This accuracy is reached if $\gamma = O(n^{-1})$. If both probabilities are uniformly separated from zero, $F_n$ is exponentially close to the measure $E$.

Theorem 3. Let condition (1) hold and $\alpha, \gamma \geq C_2$. Then, for all $n = 1, 2, \dots$,
\[
\|F_n - E\| \leq C(d+1)\, e^{-Cn}. \tag{6}
\]

Observe that, if the scheme of sequences is analyzed, all probabilities do not depend on $n$, and hence the conditions of Theorem 3 are satisfied as long as condition (1) holds. Note also that in Theorem 3 the stronger total variation norm is used.

Theorem 4. Let condition (1) hold and $\alpha \geq C_2$. Then, for all $n = 1, 2, \dots$,
\[
\big\|F_n - \big(G_1^n V_2 + E\big)\big\| \leq C(d+1)\big( \gamma e^{-Cn\gamma}(1+\beta/\gamma) + n(\beta+4\gamma)^n \big). \tag{7}
\]

Corollary 2. Let condition (1) hold and $\alpha \geq C_2$. Then, for all $n = 1, 2, \dots$,
\[
\big\|F_n - \big(G_1^n V_2 + E\big)\big\| \leq C(d+1)\, \frac{e^{-Cn\gamma}}{n}\,(1+\beta/\gamma). \tag{8}
\]

The local estimates in Theorems 2, 3, and 4 have the same order as in (5), (6), and (7); hence we do not formulate them separately. In insurance models, tail probabilities are very important; see, for example, . Therefore, we formulate some non-uniform estimates for the case when $\alpha$ is uniformly separated from zero.

Theorem 5. Let condition (1) hold and $\alpha \geq C_2$. Then, for any integer $k \geq 1$ and $n \in \mathbb{N}$,
\[
\big|F_n\{k\} - \big(G_1^n V_2 + E\big)\{k\}\big| \leq \frac{C(d+1)\, e^{-Cn\gamma}\,(\beta+\gamma)}{n\big(\beta + (k+1)\gamma\big)}, \tag{9}
\]
\[
\big|F_n(k) - \big(G_1^n V_2 + E\big)(k)\big| \leq \frac{C d^2\, e^{-Cn\gamma}}{n\big(1 + k\gamma^2\big)}. \tag{10}
\]

The non-uniform estimate for distribution functions (10) is quite inaccurate if $\gamma$ is small. On the other hand, the local non-uniform estimate is at least of order $O(n^{-1}k^{-1})$ when $\beta$ is of the same order as $\gamma$. When $\gamma$ is uniformly separated from zero and $\alpha$ is small, estimate (4) cannot be simplified.

Auxiliary results

We begin with the inversion inequalities.

Lemma 1. Let $M \in \mathcal{M}_{\mathbb{Z}}$.
Then
\[
|M|_K \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{|\hat{M}(t)|}{|e^{it}-1|}\, dt, \tag{11}
\]
\[
\|M\|_\infty \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} |\hat{M}(t)|\, dt. \tag{12}
\]
If, in addition, $\sum_{k \in \mathbb{Z}} |k|\,|M\{k\}| < \infty$, then, for any $a \in \mathbb{R}$ and $b > 0$,
\[
\|M\| \leq (1 + b\pi)^{1/2} \bigg( \frac{1}{2\pi} \int_{-\pi}^{\pi} |\hat{M}(t)|^2 + \frac{1}{b^2}\, \big|\big(e^{-ita}\hat{M}(t)\big)'\big|^2\, dt \bigg)^{1/2}, \tag{13}
\]
\[
|k - a|\,|M\{k\}| \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \big|\big(\hat{M}(t)\, e^{-ita}\big)'\big|\, dt, \tag{14}
\]
\[
|k - a|\,|M(k)| \leq \frac{1}{2\pi} \int_{-\pi}^{\pi} \bigg|\bigg(\frac{\hat{M}(t)}{e^{it}-1}\, e^{-ita}\bigg)'\bigg|\, dt. \tag{15}
\]

Observe that (11) and (15) are trivial if the integrals on the right-hand side are infinite. All inequalities are well known and can be found in , Section 6.1 and Section 6.2; see also  and Lemma 3.3 in .

The characteristic function method is used for the analysis of the model. Therefore, our next step is to obtain $\hat{F}_n(t)$.

Lemma 2. Let condition (1) hold. Then the characteristic function $\hat{F}_n(t)$ can be expressed in the following way:
\[
\hat{F}_n(t) = \hat{\Lambda}_1^n(t)\hat{W}_1(t) + \hat{\Lambda}_2^n(t)\hat{W}_2(t) + \hat{\Lambda}_3^n(t)\hat{W}_3(t). \tag{16}
\]

Here
\[
\hat{\Lambda}_{1,2}(t) = \frac{1-\gamma+\beta e^{it} \pm \sqrt{\hat{D}(t)}}{2}, \qquad \hat{\Lambda}_3(t) = e^{dit}, \qquad \hat{D}(t) = \big(1-\gamma+\beta e^{it}\big)^2 - 4e^{it}\big(\beta-\gamma(1-\alpha)\big),
\]
\[
\hat{W}_{1,2}(t) = \frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat{\Lambda}_{1,2}(t)}{\pm\big(\hat{\Lambda}_{1,2}(t)-e^{dit}\big)\sqrt{\hat{D}(t)}} + \frac{\big(e^{it}-1\big)\big[\gamma\hat{\Lambda}_{1,2}(t)-\beta+\gamma(1-\alpha)\big]}{\pm\big(\hat{\Lambda}_{1,2}(t)-e^{dit}\big)\sqrt{\hat{D}(t)}},
\]
\[
\hat{W}_3(t) = \frac{\alpha\gamma\, e^{dit}}{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big)-\gamma(1-\alpha-\beta)}.
\]

The characteristic function $\hat{F}_n(t)$ can be written as follows, see :
\[
\hat{F}_n(t) = (\pi_1, \pi_2, \pi_3)\big( \hat{\Lambda}_1^n(t)\, y_1 z_1^T + \hat{\Lambda}_2^n(t)\, y_2 z_2^T + \hat{\Lambda}_3^n(t)\, y_3 z_3^T \big)(1, 1, 1)^T. \tag{17}
\]

Expression (16) is known as Perron's formula. A similar expression was used for the Markov binomial distribution; see, for example, . Here $\hat{\Lambda}_j(t)$ ($j = 1, 2, 3$) are the eigenvalues of the following matrix:
\[
\widetilde{P}(t) = \begin{pmatrix} 1-\gamma & \gamma e^{it} & 0 \\ 1-\alpha-\beta & \beta e^{it} & \alpha e^{dit} \\ 0 & 0 & e^{dit} \end{pmatrix}.
\]
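The closed-form eigenvalues can be cross-checked numerically: the sketch below (all parameter values are illustrative assumptions, not taken from the paper) evaluates the characteristic polynomial $\det(\widetilde{P}(t) - \lambda I)$ at each closed-form $\hat{\Lambda}_j(t)$ and finds that it vanishes:

```python
import cmath

# Illustrative parameter values (assumptions for this check only)
alpha, beta, gamma, d, t = 0.3, 0.1, 0.04, 2, 0.7
eit, edit = cmath.exp(1j * t), cmath.exp(1j * d * t)

# P~(t): the transition matrix with payment factors e^{i t f(state)} attached
P = [[1 - gamma,        gamma * eit, 0],
     [1 - alpha - beta, beta * eit,  alpha * edit],
     [0,                0,           edit]]

def char_poly(lam):
    """det(P~(t) - lam*I) for the 3x3 matrix above, by cofactor expansion."""
    M = [[P[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Closed-form eigenvalues: two roots of the quadratic, plus e^{dit}
D = (1 - gamma + beta * eit) ** 2 - 4 * eit * (beta - gamma * (1 - alpha))
lam1 = (1 - gamma + beta * eit + cmath.sqrt(D)) / 2
lam2 = (1 - gamma + beta * eit - cmath.sqrt(D)) / 2
lam3 = edit
```

Since the third row of $\widetilde{P}(t)$ is $(0, 0, e^{dit})$, the matrix is block triangular: two eigenvalues come from the upper-left $2 \times 2$ block and the third is $e^{dit}$.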

We find the eigenvalues by solving the equation $\big|\widetilde{P}(t) - \hat{\Lambda}(t) I\big| = 0$. It is not difficult to prove that
\[
\hat{\Lambda}_{1,2}^2(t) - \hat{\Lambda}_{1,2}(t)\big(1-\gamma+\beta e^{it}\big) + e^{it}\big(\beta-\gamma(1-\alpha)\big) = 0, \tag{18}
\]
and $e^{dit} - \hat{\Lambda}_3(t) = 0$. Hence,
\[
\hat{\Lambda}_{1,2}(t) = \frac{1-\gamma+\beta e^{it} \pm \hat{D}^{1/2}(t)}{2}, \qquad \hat{D}(t) = \big(1-\gamma+\beta e^{it}\big)^2 - 4e^{it}\big(\beta-\gamma(1-\alpha)\big), \qquad \hat{\Lambda}_3(t) = e^{dit}.
\]
Eigenvectors $y_j$ and $z_j$ are obtained by solving the following system of equations:
\[
\widetilde{P}(t)\, y_j = \hat{\Lambda}_j(t)\, y_j, \qquad z_j^T \widetilde{P}(t) = \hat{\Lambda}_j(t)\, z_j^T, \qquad z_j^T y_j = 1. \tag{19}
\]
From the first equation of system (19) we get $y_{j,3} = 0$; hence the other two equations are equivalent because of equation (18). Therefore,
\[
y_j^T = \bigg( y_{j,1},\; \frac{1-\alpha-\beta}{\hat{\Lambda}_j(t)-\beta e^{it}}\, y_{j,1},\; 0 \bigg), \qquad j = 1, 2. \tag{20}
\]
Similarly, from the second equation of system (19) we get
\[
z_j^T = \bigg( z_{j,1},\; \frac{\hat{\Lambda}_j(t)-(1-\gamma)}{1-\alpha-\beta}\, z_{j,1},\; \frac{\alpha e^{dit}\big(\hat{\Lambda}_j(t)-(1-\gamma)\big)}{\big(\hat{\Lambda}_j(t)-e^{dit}\big)(1-\alpha-\beta)}\, z_{j,1} \bigg), \qquad j = 1, 2. \tag{21}
\]
The third equation of system (19) can be written as
\[
y_{j,1} z_{j,1} + \frac{\hat{\Lambda}_j(t)-(1-\gamma)}{\hat{\Lambda}_j(t)-\beta e^{it}}\, y_{j,1} z_{j,1} + 0 = 1, \qquad 1 + \frac{\gamma e^{it}(1-\alpha-\beta)}{\big(\hat{\Lambda}_j(t)-\beta e^{it}\big)^2} = \frac{1}{y_{j,1} z_{j,1}}. \tag{22}
\]
According to our assumption, $(\pi_1, \pi_2, \pi_3) = (1, 0, 0)$. Substituting (20), (21), and (22) into (17), we obtain
\[
\hat{W}_{1,2}(t) = (1,0,0)\, y_j z_j^T (1,1,1)^T = \frac{1 + \dfrac{\hat{\Lambda}_j(t)-(1-\gamma)}{1-\alpha-\beta}\bigg(1 + \dfrac{\alpha e^{dit}}{\hat{\Lambda}_j(t)-e^{dit}}\bigg)}{1 + \dfrac{\gamma e^{it}(1-\alpha-\beta)}{\big(\hat{\Lambda}_j(t)-\beta e^{it}\big)^2}}, \qquad j = 1, 2. \tag{23}
\]
From equation (18) we get
\[
\frac{\hat{\Lambda}_j(t)-(1-\gamma)}{1-\alpha-\beta} = \frac{\gamma e^{it}}{\hat{\Lambda}_j(t)-\beta e^{it}}. \tag{24}
\]
Hence,
\[
\hat{W}_{1,2}(t) = \frac{1 + \dfrac{\gamma e^{it}}{\hat{\Lambda}_{1,2}(t)-\beta e^{it}}\bigg(1 + \dfrac{\alpha e^{dit}}{\hat{\Lambda}_{1,2}(t)-e^{dit}}\bigg)}{1 + \dfrac{(1-\alpha-\beta)\gamma e^{it}}{\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big)^2}}.
\]
Applying equation (18), we prove that the numerator of $\hat{W}_{1,2}(t)$ is equal to
\[
\frac{\big(e^{(d+1)it}-1\big)\big(\beta-\gamma(1-\alpha)\big) - \big(e^{dit}-1\big)\hat{\Lambda}_{1,2}(t)}{\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big)\big(\hat{\Lambda}_{1,2}(t)-e^{dit}\big)} + \frac{\big(e^{it}-1\big)\big[\gamma\hat{\Lambda}_{1,2}(t)-\big(\beta-\gamma(1-\alpha)\big)\big]}{\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big)\big(\hat{\Lambda}_{1,2}(t)-e^{dit}\big)}.
\]
It is easy to check that
\[
\big(1-\gamma-\beta e^{it}\big)^2 + 4\gamma e^{it}(1-\alpha-\beta) = \hat{D}(t). \tag{25}
\]
Similarly,
\[
\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big)^2 = \frac{\big(1-\gamma-\beta e^{it}\big)^2 \pm 2\big(1-\gamma-\beta e^{it}\big)\sqrt{\hat{D}(t)} + \hat{D}(t)}{4}. \tag{26}
\]
Using (25) and (26), we obtain
\[
\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big)^2 + (1-\alpha-\beta)\gamma e^{it} = \frac{\sqrt{\hat{D}(t)}\big(\sqrt{\hat{D}(t)} \pm \big(1-\gamma-\beta e^{it}\big)\big)}{2}. \tag{27}
\]
Notice that
\[
2\big(\hat{\Lambda}_{1,2}(t)-\beta e^{it}\big) = 1-\gamma-\beta e^{it} \pm \sqrt{\hat{D}(t)}.
\]
Substituting (24), (26), and (27) into (23), we complete the proof for $\hat{\Lambda}_{1,2}(t)$ and $\hat{W}_{1,2}(t)$.

Similarly, system (19) is solved with $\hat{\Lambda}_3(t) = e^{dit}$. We get
\[
y_3^T = \bigg( y_{3,1},\; \frac{e^{dit}-(1-\gamma)}{\gamma e^{it}}\, y_{3,1},\; \frac{\big(e^{dit}-\beta e^{it}\big) y_{3,2} - (1-\alpha-\beta)\, y_{3,1}}{\alpha e^{dit}} \bigg), \tag{28}
\]
\[
z_3^T = (0,\; 0,\; z_{3,3}). \tag{29}
\]
Hence,
\[
\frac{1}{y_{3,1} z_{3,3}} = \frac{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big)-\gamma(1-\alpha-\beta)}{\alpha\gamma\, e^{dit}}. \tag{30}
\]
Substituting (28), (29), and (30) into (17), we get
\[
\hat{W}_3(t) = (1,0,0)\, y_3 z_3^T (1,1,1)^T = y_{3,1} z_{3,3} = \frac{\alpha\gamma\, e^{dit}}{\big(e^{(d-1)it}-\beta\big)\big(e^{dit}-(1-\gamma)\big)-\gamma(1-\alpha-\beta)}. \qquad \square
\]

It is not difficult to notice that $|\hat{W}_3(t)|$ equals 1 at some points; for example, $\hat{W}_3(0) = 1$, since
\[
\hat{W}_3(0) = \frac{\alpha\gamma}{(1-\beta)\big(1-(1-\gamma)\big)-\gamma(1-\alpha-\beta)} = \frac{\alpha\gamma}{\alpha\gamma} = 1.
\]
Therefore, one cannot expect $\hat{\Lambda}_3^n(t)\hat{W}_3(t)$ to be small, and we concentrate our research on the possible asymptotic behavior of the other components of $\hat{F}_n(t)$. We begin with a short expansion of $\sqrt{\hat{D}(t)}$.

Observe that $\hat{D}(t)$ can be written in the following way:
\[
\hat{D}(t) = \big(1+\gamma-\beta e^{it}\big)^2 \bigg( 1 + \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg). \tag{31}
\]

Lemma 3. Let condition (1) hold, $|t| \leq \pi$. Then
\[
\sqrt{\hat{D}(t)} = 1+\gamma-\beta e^{it} + 5.81\,\theta\gamma.
\]

$\sqrt{\hat{D}(t)}$ can be expanded and written as
\[
\sqrt{\hat{D}(t)} = \big(1+\gamma-\beta e^{it}\big) \sum_{j=0}^{\infty} \binom{1/2}{j} \bigg( \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg)^j
\]
\[
= \big(1+\gamma-\beta e^{it}\big) + \frac{2\gamma\big((1-\alpha)e^{it}-1\big)}{1+\gamma-\beta e^{it}} + \frac{16\gamma^2\big((1-\alpha)e^{it}-1\big)^2}{\big(1+\gamma-\beta e^{it}\big)^3} \sum_{j=2}^{\infty} \binom{1/2}{j} \bigg( \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg)^{j-2}
\]
\[
= \big(1+\gamma-\beta e^{it}\big) + \frac{2\gamma\big((1-\alpha)e^{it}-1\big)}{1+\gamma-\beta e^{it}} + \frac{2\theta\gamma^2\big|(1-\alpha)e^{it}-1\big|^2}{\big|1+\gamma-\beta e^{it}\big|^3} \sum_{j=0}^{\infty} \bigg| \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg|^j.
\]
Observe that
\[
\bigg| \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg| \leq \frac{8 \cdot 0.05}{(0.85+0.05)^2} \leq 0.5, \qquad \frac{\theta\gamma^2\big|(1-\alpha)e^{it}-1\big|^2}{\big|1+\gamma-\beta e^{it}\big|^3} \sum_{j=0}^{\infty} \bigg| \frac{4\gamma\big((1-\alpha)e^{it}-1\big)}{\big(1+\gamma-\beta e^{it}\big)^2} \bigg|^j \leq 0.55\,\theta\gamma.
\]
Therefore,
\[
\sqrt{\hat{D}(t)} = 1+\gamma-\beta e^{it} + \frac{4\theta\gamma}{0.85} + 2 \cdot 0.55\,\theta\gamma = 1+\gamma-\beta e^{it} + 5.81\,\theta\gamma. \qquad \square
\]

Next we prove that Λˆ2(t) is always small.

Lemma 4. Let condition (1) hold, $|t| \leq \pi$. Then $|\hat{\Lambda}_2(t)| \leq \beta + 4\gamma$.

From Lemma 3 we get
\[
|\hat{\Lambda}_2(t)| = \bigg| \frac{1-\gamma+\beta e^{it} - \sqrt{\hat{D}(t)}}{2} \bigg| = \frac{1}{2}\Big| 1-\gamma+\beta e^{it} - \big(1+\gamma-\beta e^{it} + 5.81\,\theta\gamma\big) \Big| \leq \beta + 4\gamma. \qquad \square
\]

Corollary 3. Let condition (1) hold, $|t| \leq \pi$. Then $|\hat{\Lambda}_2(t)| \leq 0.35$.

The following estimate shows that $\hat{\Lambda}_1$ behaves similarly to the characteristic function of a compound Poisson distribution.

Lemma 5. Let condition (1) hold, $|t| \leq \pi$. Then
\[
|\hat{\Lambda}_1(t)| \leq 1 + 0.4(1-\alpha)\gamma\, \mathrm{Re}\big(\hat{H}(t)-1\big) - 0.2\,\alpha\gamma \leq \exp\big\{ 0.4(1-\alpha)\gamma\, \mathrm{Re}\big(\hat{H}(t)-1\big) - 0.2\,\alpha\gamma \big\}.
\]

It is not difficult to check that
\[
\frac{1}{1+\gamma-\beta e^{it}} = \frac{1-\beta}{1+\gamma-\beta} \cdot \frac{1}{1-\beta e^{it}} - \frac{\beta\gamma}{1+\gamma-\beta e^{it}} \cdot \frac{e^{it}-1}{1-\beta e^{it}} \cdot \frac{1}{1+\gamma-\beta}.
\]
From this identity and (31) it follows that
\[
|\hat{\Lambda}_1(t)| = \bigg| \frac{1-\gamma+\beta e^{it} + \sqrt{\hat{D}(t)}}{2} \bigg| \leq \bigg| 1 + \frac{\gamma(1-\beta)}{1+\gamma-\beta}\big(\hat{\Psi}(t)-1\big) \bigg| + \frac{\beta\gamma^2}{(1+\gamma-\beta)^2}\big|\hat{\Psi}(t)-1\big|\big|e^{it}-1\big| + \frac{2\gamma^2\big|\hat{\Psi}(t)-1\big|^2(1+\beta)^2}{(1+\gamma-\beta)^3}. \tag{34}
\]
Notice that
\[
|\hat{\Psi}(t)|^2 = \big(\mathrm{Re}\,\hat{\Psi}(t)\big)^2 + \big(\mathrm{Im}\,\hat{\Psi}(t)\big)^2 \leq \bigg(1-\frac{\alpha}{1-\beta}\bigg)^2 \leq 1, \qquad \big|\hat{\Psi}(t)-1\big|^2 \leq 2\big(1-\mathrm{Re}\,\hat{\Psi}(t)\big) - \frac{\alpha}{1-\beta}\bigg(2-\frac{\alpha}{1-\beta}\bigg). \tag{35}
\]
For all $0 \leq \nu \leq 1$, we have
\[
\big|1+\nu\big(\hat{\Psi}(t)-1\big)\big| = \big|(1-\nu)+\nu\,\mathrm{Re}\,\hat{\Psi}(t)+i\nu\,\mathrm{Im}\,\hat{\Psi}(t)\big| \leq 1 + \nu(1-\nu)\big(\mathrm{Re}\,\hat{\Psi}(t)-1\big). \tag{36}
\]

Let
\[
\nu = \frac{\gamma(1-\beta)}{1+\gamma-\beta}.
\]
Substituting (35) into (34) and applying inequality (36), we get
\[
|\hat{\Lambda}_1(t)| \leq 1 + \nu(1-\nu)\big(\mathrm{Re}\,\hat{\Psi}(t)-1\big) + \frac{\beta\gamma^2}{(1+\gamma-\beta)^2}\big|\hat{\Psi}(t)-1\big|\big|e^{it}-1\big| + \frac{4\gamma^2(1+\beta)^2}{(1+\gamma-\beta)^3}\big(1-\mathrm{Re}\,\hat{\Psi}(t)\big) - \frac{2\gamma^2(1+\beta)^2}{(1+\gamma-\beta)^3} \cdot \frac{\alpha}{1-\beta}\bigg(2-\frac{\alpha}{1-\beta}\bigg).
\]
Here $|\hat{\Psi}(t)-1|$ can be estimated as $|\hat{\Psi}(t)-1| \leq \frac{2}{1-\beta}$, and $|e^{it}-1|$ can be estimated as
\[
|e^{it}-1| \leq \big|(1-\alpha)e^{it}-1\big| + \alpha = \big|\hat{\Psi}(t)-1\big|\big|1-\beta e^{it}\big| + \alpha \leq (1+\beta)\big|\hat{\Psi}(t)-1\big| + \alpha.
\]
Then
\[
|\hat{\Lambda}_1(t)| \leq 1 + \big(\mathrm{Re}\,\hat{\Psi}(t)-1\big)\frac{\gamma}{1+\gamma-\beta}\bigg( (1-\beta)\bigg(1-\frac{\gamma(1-\beta)}{1+\gamma-\beta}\bigg) - \frac{2\gamma\beta(1+\beta)}{1+\gamma-\beta} - \frac{4\gamma(1+\beta)^2}{(1+\gamma-\beta)^2} \bigg) + \frac{2\alpha\gamma^2}{(1-\beta)(1+\gamma-\beta)}\bigg( \frac{\beta}{1+\gamma-\beta} - \frac{(1+\beta)^2}{(1+\gamma-\beta)^2}\bigg(2-\frac{\alpha}{1-\beta}\bigg) \bigg).
\]

Notice that
\[
\mathrm{Re}\,\hat{\Psi}(t) - 1 = (1-\alpha)\,\mathrm{Re}\big(\hat{H}(t)-1\big) - \alpha\,\frac{1-\beta\cos t}{\big|1-\beta e^{it}\big|^2}. \tag{37}
\]

Finally,
\[
|\hat{\Lambda}_1(t)| \leq 1 + \mathrm{Re}\big(\hat{H}(t)-1\big)\frac{(1-\alpha)\gamma}{1+\gamma-\beta}\bigg( (1-\beta)\bigg(1-\frac{\gamma(1-\beta)}{1+\gamma-\beta}\bigg) - \frac{2\gamma\beta(1+\beta)}{1+\gamma-\beta} - \frac{4\gamma(1+\beta)^2}{(1+\gamma-\beta)^2} \bigg)
\]
\[
- \frac{\alpha\gamma}{1+\gamma-\beta}\bigg[ \frac{1-\beta\cos t}{\big|1-\beta e^{it}\big|^2}\bigg( (1-\beta)\bigg(1-\frac{\gamma(1-\beta)}{1+\gamma-\beta}\bigg) - \frac{2\gamma\beta(1+\beta)}{1+\gamma-\beta} - \frac{4\gamma(1+\beta)^2}{(1+\gamma-\beta)^2} \bigg) - \frac{2\gamma}{1-\beta}\bigg( \frac{\beta}{1+\gamma-\beta} - \frac{(1+\beta)^2}{(1+\gamma-\beta)^2}\bigg(2-\frac{\alpha}{1-\beta}\bigg) \bigg) \bigg]
\]
\[
\leq 1 + 0.4(1-\alpha)\gamma\, \mathrm{Re}\big(\hat{H}(t)-1\big) - 0.2\,\alpha\gamma \leq \exp\big\{ 0.4(1-\alpha)\gamma\, \mathrm{Re}\big(\hat{H}(t)-1\big) - 0.2\,\alpha\gamma \big\}. \qquad \square
\]

Corollary 4. Let condition (1) hold, $|t| \leq \pi$. Then
\[
|\hat{\Lambda}_1(t)| \leq 1 + C\gamma\big(\mathrm{Re}\,\hat{H}(t)-1-\alpha\big) \leq \exp\big\{ C\gamma\big(\mathrm{Re}\,\hat{H}(t)-1-\alpha\big) \big\}.
\]

Next we demonstrate that |Wˆ2(t)| is always small.

Lemma 6. Let condition (1) hold, $|t| \leq \pi$. Then $|\hat{W}_2(t)| \leq 2(d+1)\big|e^{it}-1\big|$.

From Lemma 3 we have
\[
\big|\sqrt{\hat{D}(t)}\big| \geq 1+\gamma-\beta-5.81\gamma \geq 1 - 4.81 \cdot 0.05 - 0.15 \geq 0.6.
\]
By applying Corollary 3, we get $\big|\hat{\Lambda}_2(t)-e^{dit}\big| \geq 1 - |\hat{\Lambda}_2(t)| \geq 1 - 0.35 = 0.65$. Hence,
\[
|\hat{W}_2(t)| \leq \frac{(d+1)\big|e^{it}-1\big|\big( 2\big|\beta-\gamma(1-\alpha)\big| + (1+\gamma)\big|\hat{\Lambda}_2(t)\big| \big)}{0.65 \cdot 0.6} \leq \frac{(d+1)\big|e^{it}-1\big|\big( 2\max\{\beta, \gamma(1-\alpha)\} + (1+\gamma) \cdot 0.35 \big)}{0.39} \leq 2(d+1)\big|e^{it}-1\big|. \qquad \square
\]

To approximate $\hat{W}_1(t)$, we need a longer expansion of $\sqrt{\hat{D}(t)}$.

Lemma 7. Let condition (1) hold, $|t| \leq \pi$. Then
\[
\sqrt{\hat{D}(t)} = 2\hat{A}(t) - 1 + \gamma - \beta e^{it} + C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat{H}(t)\big)^2 + \alpha^4\big).
\]
If also $\alpha \geq C_2$, then
\[
\sqrt{\hat{D}(t)} = 2\hat{\Delta}_1(t) - 1 + \gamma - \beta e^{it} + C\theta\gamma^3.
\]

The expansion of $\sqrt{\hat{D}(t)}$ follows from equation (31) and the expansion of $\frac{1}{1+\gamma-\beta e^{it}}$ used in the proof of Lemma 5. The second equation of this lemma is proved similarly. $\square$

Corollary 5. Let condition (1) hold, $|t| \leq \pi$. Then
\[
\hat{\Lambda}_1(t) = \hat{A}(t) + C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat{H}(t)\big)^2 + \alpha^4\big).
\]

Corollary 6. Let condition (1) hold, $\alpha \geq C_2$, $|t| \leq \pi$. Then
\[
\hat{\Lambda}_1(t) = 1 + \hat{A}_1(t)\gamma + \big(\hat{A}_2(t)+\hat{A}_4(t)\big)\gamma^2 + C\theta\gamma^3.
\]

The following three lemmas are needed for the approximation of W1 .

Lemma 8. Let condition (1) hold, $|t| \leq \pi$. Then
\[
|\hat{A}(t)| \leq 1 + C\gamma\big(\mathrm{Re}\,\hat{H}(t)-1-\alpha\big).
\]
If also $\alpha \geq C_2$, then there exists $C$ such that $|\hat{\Delta}_1(t)| \leq 1 - C\gamma$.

The proof is very similar to the proof of Lemma 5 and, therefore, is omitted.  □

Lemma 9. Let condition (1) hold, $|t| \leq \pi$. Then $\big|\hat{W}_1(t)-\hat{V}(t)\big| \leq C(d+1)\gamma\big|e^{it}-1\big|$.

From Corollary 4 and Lemma 8 it follows that
\[
\big|\hat{\Lambda}_1(t)-e^{dit}\big| \geq C\gamma\big(1-\mathrm{Re}\,\hat{H}(t)+\alpha\big), \tag{41}
\]
\[
\big|\hat{A}(t)-e^{dit}\big| \geq C\gamma\big(1-\mathrm{Re}\,\hat{H}(t)+\alpha\big). \tag{42}
\]

Applying (38), (41), (42), Lemma 7 and Corollary 5, the result follows.  □

Lemma 10. Let condition (1) hold, $\alpha \geq C_2$, $|t| \leq \pi$. Then $\big|\hat{W}_1(t)-\hat{V}_1(t)\big| \leq C(d+1)\gamma\big|e^{it}-1\big|$.

Since $\alpha \geq C_2$,
\[
\big|\hat{\Lambda}_1(t)-e^{dit}\big| \geq C\gamma\big(1-\mathrm{Re}\,\hat{H}(t)+\alpha\big) \geq C\gamma(0 + C_2) \geq C\gamma.
\]
From Corollary 6 it follows that $\big|\hat{\Lambda}_1(t)-\hat{\Delta}_1(t)\big| = C\theta\gamma^3$. Also, from Lemma 8 it follows that $\big|\hat{\Delta}_1(t)-e^{dit}\big| \geq 1-(1-C\gamma) = C\gamma$. Hence, it is easy to check that the inequality of the lemma holds. $\square$

Lemma 11. Let condition (1) hold. Then
\[
\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1(t)\big|^n\, \frac{\big|\hat{W}_1(t)-\hat{V}(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha},
\]
\[
\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1(t)\big|^n\, \big|\hat{W}_1(t)-\hat{V}(t)\big|\, dt \leq \frac{C(d+1)\, e^{-Cn\gamma\alpha}}{n}.
\]

It is obvious that
\[
\mathrm{Re}\,\hat{H}(t) - 1 = \frac{(1+\beta)(\cos t - 1)}{\big|1-\beta e^{it}\big|^2} \leq -2C\sin^2(t/2). \tag{46}
\]

We will use the following simple inequality:
\[
\int_{-\pi}^{\pi} \big|\sin(t/2)\big|^k \exp\big\{-2\lambda\sin^2(t/2)\big\}\, dt \leq C(k)\,\lambda^{-(k+1)/2}. \tag{47}
\]
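The $\lambda^{-(k+1)/2}$ decay rate in this inequality can be illustrated numerically; the sketch below (function name and parameter values are ours) evaluates the integral for $k = 1$ by a midpoint rule and checks that the product with $\lambda^{(k+1)/2} = \lambda$ stays bounded as $\lambda$ grows:

```python
import math

def J(k, lam, steps=200000):
    """Midpoint-rule value of integral_{-pi}^{pi} |sin(t/2)|^k exp(-2*lam*sin^2(t/2)) dt."""
    h = 2 * math.pi / steps
    total = 0.0
    for m in range(steps):
        t = -math.pi + (m + 0.5) * h
        s = abs(math.sin(t / 2))
        total += s ** k * math.exp(-2 * lam * s * s)
    return total * h

# For k = 1 the bound predicts J ~ lam^{-1}; the rescaled values should stay bounded
ratios = [J(1, lam) * lam for lam in (10, 40, 160)]
```

For $k = 1$ a Laplace-type approximation around $t = 0$ even suggests $J(1, \lambda)\lambda \to 1$, which the computed ratios reflect.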

By applying Lemma 5, Lemma 9, (46), and (47), we get
\[
\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1(t)\big|^n\, \frac{\big|\hat{W}_1(t)-\hat{V}(t)\big|}{\big|e^{it}-1\big|}\, dt \leq \int_{-\pi}^{\pi} C(d+1)\gamma \exp\big\{ n\big( 0.4(1-\alpha)\gamma\big(\mathrm{Re}\,\hat{H}(t)-1\big) - 0.2\,\gamma\alpha \big) \big\}\, dt
\]
\[
\leq \int_{-\pi}^{\pi} C(d+1)\gamma \exp\big\{ Cn\gamma\big(\mathrm{Re}\,\hat{H}(t)-1\big) \big\}\, e^{-Cn\gamma\alpha}\, dt \leq C(d+1)\sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha}.
\]

The second inequality of the lemma is proved similarly.  □

Lemma 12. Let condition (1) hold and $\alpha \geq C_2$. Then
\[
\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1(t)\big|^n\, \frac{\big|\hat{W}_1(t)-\hat{V}_1(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\gamma\, e^{-Cn\gamma}.
\]

From Lemma 5 and Lemma 10 it follows that
\[
\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1(t)\big|^n\, \frac{\big|\hat{W}_1(t)-\hat{V}_1(t)\big|}{\big|e^{it}-1\big|}\, dt \leq \int_{-\pi}^{\pi} C(d+1)\gamma \exp\{-0.2\,C_2\gamma n\}\, dt \leq C(d+1)\gamma\, e^{-Cn\gamma}. \qquad \square
\]

Lemma 13. Let condition (1) hold. Then
\[
\int_{-\pi}^{\pi} \big|\hat{V}(t)\big|\, \frac{\big|\hat{\Lambda}_1^n(t)-\hat{G}^n(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\gamma\sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha},
\]
\[
\int_{-\pi}^{\pi} \big|\hat{V}(t)\big|\, \big|\hat{\Lambda}_1^n(t)-\hat{G}^n(t)\big|\, dt \leq \frac{C(d+1)\gamma\, e^{-Cn\gamma\alpha}}{n}.
\]

Notice that
\[
\big|\hat{V}(t)\big| \leq \frac{C(d+1)\big|e^{it}-1\big|}{\gamma\big(1-\mathrm{Re}\,\hat{H}(t)+\alpha\big)}, \tag{50}
\]
\[
\big|\hat{\Lambda}_1^n(t)-\hat{G}^n(t)\big| \leq \big|\hat{\Lambda}_1(t)-\hat{G}(t)\big| \cdot n \cdot \max\big\{ \big|\hat{\Lambda}_1(t)\big|^{n-1},\, \big|\hat{G}(t)\big|^{n-1} \big\}. \tag{51}
\]

From Corollary 4 we have $|\hat{\Lambda}_1(t)| \leq \exp\{C\gamma(\mathrm{Re}\,\hat{H}(t)-1-\alpha)\}$. Taking into account that $|e^{a+bi}| = e^a$, $|\hat{G}(t)|$ can be estimated as
\[
\big|\hat{G}(t)\big| \leq \exp\big\{ C\gamma\big(\mathrm{Re}\,\hat{H}(t)-1-\alpha\big) \big\}.
\]

Using Corollary 5, we have
\[
\big|\hat{\Lambda}_1(t)-\hat{G}(t)\big| = \big|\exp\{\ln\hat{\Lambda}_1(t)\} - \exp\{\ln\hat{G}(t)\}\big| \leq C\big|\ln\hat{\Lambda}_1(t) - \ln\hat{G}(t)\big|
\]
\[
= C\bigg| \big(\hat{\Lambda}_1(t)-1\big) - \frac{\big(\hat{\Lambda}_1(t)-1\big)^2}{2} + \frac{\big(\hat{\Lambda}_1(t)-1\big)^3}{3} + \frac{C\theta\big|\hat{\Lambda}_1(t)-1\big|^4}{4} - \ln\hat{G}(t) \bigg|
\]
\[
= C\bigg| \big(\hat{A}(t)-1\big) - \frac{1}{2}\big(\hat{A}_1^2(t)\gamma^2 + 2\hat{A}_1(t)\big(\hat{A}_2(t)+\hat{A}_4(t)\big)\gamma^3\big) + \frac{1}{3}\hat{A}_1^3(t)\gamma^3 + C\theta\gamma^4\big(\big(1-\mathrm{Re}\,\hat{H}(t)\big)^2+\alpha^4\big) - \ln\hat{G}(t) \bigg|
\]
\[
\leq C\gamma^4\big(\big(1-\mathrm{Re}\,\hat{H}(t)\big)^2 + \alpha^4\big).
\]

By applying (50), (51), and the inequality $x e^{-x} \leq 1$ for all $x > 0$ (applied with $x = 0.5\,Cn\gamma(1-\mathrm{Re}\,\hat{H}(t))$, which absorbs the factor $n\gamma\big(2-\mathrm{Re}\,\hat{H}(t)\big)$ into a constant), we can estimate the following integral:
\[
\int_{-\pi}^{\pi} \big|\hat{V}(t)\big|\, \frac{\big|\hat{\Lambda}_1^n(t)-\hat{G}^n(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\int_{-\pi}^{\pi} n \exp\big\{ nC\gamma\big(\mathrm{Re}\,\hat{H}(t)-1-\alpha\big) \big\}\, \gamma^3\big(\big(1-\mathrm{Re}\,\hat{H}(t)\big)+1\big)\, dt
\]
\[
\leq C(d+1)\int_{-\pi}^{\pi} \gamma^2 \exp\big\{-2Cn\gamma\sin^2(t/2)\big\}\, e^{-Cn\gamma\alpha}\, dt \leq C(d+1)\gamma\sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha}.
\]
The second inequality of this lemma is proved similarly. $\square$

Lemma 14. Let condition (1) hold and $\alpha \geq C_2$. Then
\[
\int_{-\pi}^{\pi} \big|\hat{V}_1(t)\big|\, \frac{\big|\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\gamma\, e^{-Cn\gamma}.
\]
Proof. Since $\alpha \geq C_2$,
\[
\big|\hat{V}_1(t)\big| \leq \frac{C(d+1)\big|e^{it}-1\big|}{\gamma}, \tag{53}
\]
and
\[
\big|\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big| \leq \big|\hat{\Lambda}_1(t)-\hat{G}_1(t)\big| \cdot n \cdot \exp\{-C\gamma(n-1)\}.
\]
$\big|\hat{\Lambda}_1(t)-\hat{G}_1(t)\big|$ is estimated by applying Corollary 6:
\[
\big|\hat{\Lambda}_1(t)-\hat{G}_1(t)\big| \leq C\big|\ln\hat{\Lambda}_1(t) - \ln\hat{G}_1(t)\big| = C\bigg| \big(\hat{\Lambda}_1(t)-1\big) - \frac{\big(\hat{\Lambda}_1(t)-1\big)^2}{2} + \frac{C\theta\big|\hat{\Lambda}_1(t)-1\big|^3}{3} - \ln\hat{G}_1(t) \bigg|
\]
\[
= C\bigg| \hat{A}_1(t)\gamma + \big(\hat{A}_2(t)+\hat{A}_4(t)\big)\gamma^2 - \frac{1}{2}\hat{A}_1^2(t)\gamma^2 + C\theta\gamma^3 - \ln\hat{G}_1(t) \bigg| \leq C\gamma^3. \tag{55}
\]
By applying (53), (55), and the inequality $x e^{-x} \leq 1$ for all $x > 0$, we can estimate the integral:
\[
\int_{-\pi}^{\pi} \big|\hat{V}_1(t)\big|\, \frac{\big|\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big|}{\big|e^{it}-1\big|}\, dt \leq C(d+1)\int_{-\pi}^{\pi} n\gamma^2\, e^{-Cn\gamma}\, dt \leq C(d+1)\int_{-\pi}^{\pi} \gamma\, \big(n\gamma\, e^{-0.5Cn\gamma}\big)\, e^{-0.5Cn\gamma}\, dt \leq C(d+1)\gamma\, e^{-Cn\gamma}. \qquad \square
\]

Lemma 15. Let condition (1) hold, $\alpha \geq C_2$, $|t| \leq \pi$. Then
\[
|\hat{W}_1(t)| \leq C(d+1)\gamma, \qquad |\hat{W}_1'(t)| \leq C(d+1)(1+\beta/\gamma)\gamma, \qquad |\hat{W}_2(t)| \leq C(d+1), \qquad |\hat{W}_2'(t)| \leq C(d+1),
\]
\[
|\hat{V}_2(t)| \leq C(d+1)\gamma, \qquad |\hat{V}_2'(t)| \leq C(d+1)(1+\beta/\gamma)\gamma, \qquad \big|\hat{W}_1(t)-\hat{V}_2(t)\big| \leq C(d+1)\gamma, \qquad \big|\big(\hat{W}_1(t)-\hat{V}_2(t)\big)'\big| \leq C(d+1)\gamma(1+\beta/\gamma),
\]
\[
|\hat{\Lambda}_1(t)| \leq e^{-C\gamma}, \qquad |\hat{G}_1(t)| \leq e^{-C\gamma}, \qquad |\hat{\Lambda}_1'(t)| \leq C\gamma, \qquad |\hat{G}_1'(t)| \leq C\gamma, \qquad |\hat{\Lambda}_2(t)| \leq \beta+4\gamma, \qquad |\hat{\Lambda}_2'(t)| \leq C(\beta+4\gamma),
\]
\[
\big|\hat{\Lambda}_1(t)-\hat{G}_1(t)\big| \leq C\gamma^3, \qquad \big|\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)'\big| \leq C\gamma^2 e^{-Cn\gamma}, \qquad \frac{\big|1-e^{dit}\big|}{\big|\hat{\Lambda}_1(t)-e^{dit}\big|} \leq C, \qquad \frac{\big|1-e^{dit}\big|}{\big|\hat{\Delta}_1(t)-e^{dit}\big|} \leq C.
\]

All inequalities are based on the previously obtained estimates of $|\hat{\Lambda}_1(t)|$, $|\hat{\Lambda}_2(t)|$, $|\hat{W}_2(t)|$, $|\hat{G}_1(t)|$, and on the expansion of $\sqrt{\hat{D}(t)}$. The inequalities containing $\hat{V}_2(t)$ are proved similarly to those for $\hat{V}_1(t)$ (see Lemma 10). $\square$

Proofs

Applying inversion formula (11), Lemma 11, and Lemma 13, we prove
\[
\big|F_n - \big(G^n V + E\big)\big|_K \leq \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\big|\hat{F}_n(t) - \hat{G}^n(t)\hat{V}(t) - \hat{E}(t)\big|}{\big|e^{it}-1\big|}\, dt
\]
\[
\leq \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1^n(t)\big|\, \frac{\big|\hat{W}_1(t)-\hat{V}(t)\big|}{\big|e^{it}-1\big|}\, dt + \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\hat{V}(t)\big|\, \frac{\big|\hat{\Lambda}_1^n(t)-\hat{G}^n(t)\big|}{\big|e^{it}-1\big|}\, dt + \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{\big|\hat{\Lambda}_2^n(t)\hat{W}_2(t)\big|}{\big|e^{it}-1\big|}\, dt
\]
\[
\leq C(d+1)\sqrt{\frac{\gamma}{n}}\, e^{-Cn\gamma\alpha} + C(d+1)(\beta+4\gamma)^n.
\]

The local estimate is obtained analogously by applying inversion formula (12). $\square$

The proof is similar to that of Theorem 1; Lemma 12 and Lemma 14 are applied instead of Lemma 11 and Lemma 13, since $\alpha \geq C_2$. $\square$

Taking into account Corollary 3 and Lemma 15, we get
\[
\big|\hat{\Lambda}_{1,2}^n\hat{W}_{1,2}\big| \leq C(d+1)e^{-Cn},
\]
\[
\big|\big(\hat{\Lambda}_{1,2}^n\hat{W}_{1,2}\big)'\big| \leq \big|\big(\hat{\Lambda}_{1,2}^n\big)'\big|\,\big|\hat{W}_{1,2}\big| + \big|\hat{\Lambda}_{1,2}^n\big|\,\big|\hat{W}_{1,2}'\big| \leq nC(d+1)e^{-C(n-1)} + C(d+1)e^{-Cn} \leq C(d+1)\,n\,e^{-Cn}.
\]

From inversion formula (13) applied with $a = 0$ and $b = 1$ we get
\[
\|F_n - E\| = \big\|\Lambda_1^n W_1 + \Lambda_2^n W_2\big\| \leq \big\|\Lambda_1^n W_1\big\| + \big\|\Lambda_2^n W_2\big\|
\]
\[
\leq (1+\pi)^{1/2}\bigg( \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\hat{\Lambda}_1^n\hat{W}_1\big|^2 + \big|\big(\hat{\Lambda}_1^n\hat{W}_1\big)'\big|^2\, dt \bigg)^{1/2} + (1+\pi)^{1/2}\bigg( \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\hat{\Lambda}_2^n\hat{W}_2\big|^2 + \big|\big(\hat{\Lambda}_2^n\hat{W}_2\big)'\big|^2\, dt \bigg)^{1/2} \leq C(d+1)e^{-Cn}. \qquad \square
\]

\[
\big\|F_n - \big(G_1^n V_2 + E\big)\big\| \leq \big\|\big(\Lambda_1^n - G_1^n\big)W_1\big\| + \big\|G_1^n(W_1 - V_2)\big\| + \big\|\Lambda_2^n W_2\big\|.
\]

From Lemma 15, we get
\[
\big|\hat{\Lambda}_2^n(t)\hat{W}_2(t)\big| \leq C(d+1)(\beta+4\gamma)^n,
\]
\[
\big|\big(\hat{\Lambda}_2^n(t)\hat{W}_2(t)\big)'\big| \leq \big|\big(\hat{\Lambda}_2^n(t)\big)'\hat{W}_2(t)\big| + \big|\hat{\Lambda}_2^n(t)\hat{W}_2'(t)\big| \leq C(d+1)n(\beta+4\gamma)^n + C(d+1)(\beta+4\gamma)^n \leq C(d+1)n(\beta+4\gamma)^n,
\]
\[
\big|\hat{G}_1^n(t)\big(\hat{W}_1(t)-\hat{V}_2(t)\big)\big| \leq C(d+1)\gamma e^{-Cn\gamma},
\]
\[
\big|\big(\hat{G}_1^n(t)\big(\hat{W}_1(t)-\hat{V}_2(t)\big)\big)'\big| \leq \big|\big(\hat{G}_1^n(t)\big)'\big(\hat{W}_1(t)-\hat{V}_2(t)\big)\big| + \big|\hat{G}_1^n(t)\big(\hat{W}_1(t)-\hat{V}_2(t)\big)'\big|
\]
\[
\leq C(d+1)n\gamma^2 e^{-C(n-1)\gamma} + C(d+1)\gamma e^{-Cn\gamma}(1+\beta/\gamma) \leq C(d+1)\gamma e^{-Cn\gamma}(1+\beta/\gamma),
\]
\[
\big|\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)\hat{W}_1(t)\big| \leq n\big|\hat{\Lambda}_1(t)-\hat{G}_1(t)\big|\, e^{-C(n-1)\gamma}\, C(d+1)\gamma \leq C(d+1)\gamma e^{-Cn\gamma},
\]
\[
\big|\big(\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)\hat{W}_1(t)\big)'\big| \leq \big|\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)'\hat{W}_1(t)\big| + \big|\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)\hat{W}_1'(t)\big| \leq C(d+1)\gamma e^{-Cn\gamma}(1+\beta/\gamma).
\]

By applying inversion formula (13) with $a = 0$ and $b = 1$, we prove
\[
\big\|F_n - \big(G_1^n V_2 + E\big)\big\| \leq C(d+1)\big( \gamma e^{-Cn\gamma}(1+\beta/\gamma) + n(\beta+4\gamma)^n \big). \qquad \square
\]

We use the inequalities obtained in the proof of Theorem 4 and inversion formula (14) with $a = 0$. We have
\[
k\big|F_n\{k\} - \big(G_1^n V_2 + E\big)\{k\}\big| \leq \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\big(\hat{W}_1(t)\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)\big)'\big|\, dt + \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\big(\hat{G}_1^n(t)\big(\hat{W}_1(t)-\hat{V}_2(t)\big)\big)'\big|\, dt
\]
\[
+ \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\big(\hat{\Lambda}_2^n(t)\hat{W}_2(t)\big)'\big|\, dt \leq C(d+1)\big( \gamma e^{-Cn\gamma}(1+\beta/\gamma) + n(\beta+4\gamma)^n \big).
\]

Hence,
\[
k(1+\beta/\gamma)^{-1}\big|F_n\{k\} - \big(G_1^n V_2 + E\big)\{k\}\big| \leq \frac{C(d+1)e^{-Cn\gamma}}{n} \qquad \text{and} \qquad \big|F_n\{k\} - \big(G_1^n V_2 + E\big)\{k\}\big| \leq \frac{C(d+1)e^{-Cn\gamma}}{n},
\]
since $|M\{k\}| \leq \|M\|_\infty \leq \|M\|$.

Summing those inequalities, we get
\[
\big|F_n\{k\} - \big(G_1^n V_2 + E\big)\{k\}\big| \leq \frac{C(d+1)e^{-Cn\gamma}}{n\big(1 + k(1+\beta/\gamma)^{-1}\big)} = \frac{C(d+1)e^{-Cn\gamma}(\beta+\gamma)}{n\big(\beta + (k+1)\gamma\big)}.
\]

In order to prove the second inequality of the theorem, we apply inversion formula (15) with $a = 0$:
\[
k\big|F_n(k) - \big(G_1^n V_2 + E\big)(k)\big| \leq \frac{1}{2\pi}\int_{-\pi}^{\pi} \bigg|\bigg(\frac{\hat{W}_1(t)}{e^{it}-1}\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)\bigg)'\bigg|\, dt + \frac{1}{2\pi}\int_{-\pi}^{\pi} \bigg|\bigg(\hat{G}_1^n(t)\,\frac{\hat{W}_1(t)-\hat{V}_2(t)}{e^{it}-1}\bigg)'\bigg|\, dt
\]
\[
+ \frac{1}{2\pi}\int_{-\pi}^{\pi} \bigg|\bigg(\hat{\Lambda}_2^n(t)\,\frac{\hat{W}_2(t)}{e^{it}-1}\bigg)'\bigg|\, dt.
\]

The summands can be estimated by using the inequalities from the proof of Theorem 4 and the identity
\[
\frac{e^{(d+1)it}-1}{e^{it}-1} = 1 + e^{it} + \cdots + e^{dit}:
\]
\[
\bigg|\frac{\hat{W}_1(t)}{e^{it}-1}\bigg|\, \big|\big(\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big)'\big| \leq C(d+1)\gamma^2 e^{-Cn\gamma}, \qquad \bigg|\bigg(\frac{\hat{W}_1(t)}{e^{it}-1}\bigg)'\bigg| \leq C d^2\gamma^2, \qquad \bigg|\bigg(\frac{\hat{W}_2(t)}{e^{it}-1}\bigg)'\bigg| \leq C d^2,
\]
\[
\bigg|\bigg(\frac{\hat{W}_1(t)}{e^{it}-1}\bigg)'\bigg|\, \big|\hat{\Lambda}_1^n(t)-\hat{G}_1^n(t)\big| \leq C d^2\gamma^2 \cdot Cn\gamma^3 e^{-Cn\gamma} \leq C d^2 e^{-Cn\gamma},
\]
\[
\bigg|\hat{G}_1^n(t)\,\frac{\hat{W}_1(t)-\hat{V}_2(t)}{e^{it}-1}\bigg| \leq C(d+1)\gamma e^{-Cn\gamma}, \qquad \bigg|\bigg(\hat{G}_1^n(t)\,\frac{\hat{W}_1(t)-\hat{V}_2(t)}{e^{it}-1}\bigg)'\bigg| \leq C d^2\gamma^2 e^{-Cn\gamma},
\]
\[
\bigg|\hat{\Lambda}_2^n(t)\,\frac{\hat{W}_2(t)}{e^{it}-1}\bigg| \leq C(d+1)e^{-Cn}, \qquad \bigg|\hat{\Lambda}_2^n(t)\bigg(\frac{\hat{W}_2(t)}{e^{it}-1}\bigg)'\bigg| \leq C d^2(\beta+4\gamma)^n.
\]

Thus, we get
\[
k\gamma^2\big|F_n(k) - \big(G_1^n V_2 + E\big)(k)\big| \leq \frac{C d^2 e^{-Cn\gamma}}{n} \qquad \text{and} \qquad \big|F_n(k) - \big(G_1^n V_2 + E\big)(k)\big| \leq \frac{C(d+1)e^{-Cn\gamma}}{n}.
\]

By summing the above inequalities, we arrive at
\[
\big|F_n(k) - \big(G_1^n V_2 + E\big)(k)\big| \leq \frac{C d^2 e^{-Cn\gamma}}{n\big(1 + k\gamma^2\big)}. \qquad \square
\]

References Barbour, A.D., Lindvall, T.: Translated Poisson approximation for Markov chains. J. Theor. Probab. 19(3), 609630 (2006). MR2280512. https://doi.org/10.1007/s10959-006-0047-9 Čekanavičius, V.: Approximation methods in probability theory. Universitext, Springer (2016). MR3467748. https://doi.org/10.1007/978-3-319-34072-2 Čekanavičius, V., Roos, B.: Poisson type approximations for the Markov binomial distribution. Stoch. Process. Appl. 119, 190207 (2009). MR2485024. https://doi.org/10.1016/j.spa.2008.01.008 Čekanavičius, V., Vellaisamy, P.: Compound Poisson and signed compound Poisson approximations to the Markov binomial law. Bernoulli 16(4), 11141136 (2010). MR2759171. https://doi.org/10.3150/09-BEJ246 De Pril, N., Dhaene, J.: Error bounds for compound Poisson approximations of the individual risk model. ASTIN Bull. 22(2), 135148 (1992). https://doi.org/10.2143/AST.22.2.2005111 Erhardsson, T.: Compound Poisson approximation for Markov chains using Stein’s method. Ann. Probab. 27(1), 565596 (1999). MR1681149. https://doi.org/10.1214/aop/1022677272 Gani, J.: On the probability generating function of the sum of Markov-Bernoulli random variables. J. Appl. Probab. (Special vol.) 19A, 321326 (1982). MR0633201. https://doi.org/10.2307/3213571 Gerber, H.U.: Error bounds for the compound Poisson approximation. Insur. Math. Econ. 3, 191194 (1984). MR0752200. https://doi.org/10.1016/0167-6687(84)90062-3 Hipp, C.: Approximation of aggregate claims distributions by compound Poisson distribution. Insur. Math. Econ. 4(4), 227232 (1985). MR0810720. https://doi.org/10.1016/0167-6687(85)90032-0 Hirano, K., Aki, S.: On number of success runs of specified length in a two-state Markov chain. Stat. Sin. 3, 313320 (1993). MR1243389. https://doi.org/10.1239/aap/1029955143 Leipus, R., Šiaulys, J.: On the random max-closure for heavy-tailed random variables. Lith. Math. J. 57(2), 208221 (2017). MR3654985. 
https://doi.org/10.1007/s10986-017-9355-2 Pitts, S.M.: A functional approach to approximations for the individual risk model. ASTIN Bull. 34, 379397 (2004). MR2086451. https://doi.org/10.1017/S051503610001374X Presman, E.L.: Approximation in variation of the distribution of a sum of independent Bernoulli variables with a Poisson law. Theory Probab. Appl. 30(2), 417422 (1986). MR0792634. https://doi.org/10.1137/1130051 Roos, B.: On variational bounds in the compound Poisson approximation of the individual risk model. Insur. Math. Econ. 40, 403414 (2007). MR2310979. https://doi.org/10.1016/j.insmatheco.2006.06.003 Šliogere, J., Čekanavičius, V.: Two limit theorems for Markov binomial distribution. Lith. Math. J. 55(3), 451463 (2015). MR3379037. https://doi.org/10.1007/s10986-015-9291-y Šliogere, J., Čekanavičius, V.: Approximation of symmetric three-state Markov chain by compound Poisson law. Lith. Math. J. 56(3), 417438 (2016). MR3530227. https://doi.org/10.1007/s10986-016-9326-z Wang, K., Gao, M., Yang, Y., Chen, Y.: Asymptotics for the finite-time ruin probability in a discrete-time risk model with dependent insurance and financial risks. Lith. Math. J. 58(1), 113125 (2018). MR3779067. https://doi.org/10.1007/s10986-017-9378-8 Xia, A., Zhang, M.: On approximation of Markov binomial distributions. Bernoulli 15, 13351350 (2009). MR2597595. https://doi.org/10.3150/09-BEJ194 Yang, G., Miao, Y.: Moderate and Large Deviation Estimate for the Markov-Binomial Distribution. Acta Appl. Math. 110, 737747 (2010). MR2610590. https://doi.org/10.1007/s10440-009-9471-z Yang, Y., Wang, Y.: Tail behavior of the product of two dependent random variables with applications to risk theory. Extremes 16(1), 5574 (2013). MR3020177. https://doi.org/10.1007/s10687-012-0153-2 Zhang, H., Liu, Y., Li, B.: Notes on discrete compound Poisson model with applications to risk theory. Insur. Math. Econ. 59, 325336 (2014). MR3283233. https://doi.org/10.1016/j.insmatheco.2014.09.012