Modern Stochastics: Theory and Applications
A note on randomly stopped sums with zero mean increments
Volume 11, Issue 1 (2024), pp. 31–42
Remigijus Leipus, Jonas Šiaulys

https://doi.org/10.15559/23-VMSTA236
Pub. online: 5 December 2023 · Type: Research Article · Open Access

Received: 22 September 2023
Revised: 29 October 2023
Accepted: 3 November 2023
Published: 5 December 2023

Abstract

In this paper, the asymptotics is considered for the distribution tail of a randomly stopped sum ${S_{\nu }}={X_{1}}+\cdots +{X_{\nu }}$ of independent identically distributed consistently varying random variables with zero mean, where ν is a counting random variable independent of $\{{X_{1}},{X_{2}},\dots \}$. Conditions are provided for the relation $\mathbb{P}({S_{\nu }}\gt x)\sim \mathbb{E}\nu \hspace{0.1667em}\mathbb{P}({X_{1}}\gt x)$ to hold as $x\to \infty $, requiring only the finiteness of $\mathbb{E}|{X_{1}}|$. The result improves that of Olvera-Cravioto [14], where the finiteness of the moment $\mathbb{E}|{X_{1}}{|^{r}}$ for some $r\gt 1$ was assumed.

1 Introduction and preliminaries

Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be independent identically distributed (i.i.d.) random variables (r.v.s). Denote a sequence of partial sums $\{{S_{n}},n\geqslant 0\}$ by
(1)
\[ {S_{0}}=0,\hspace{2.5pt}\hspace{2.5pt}{S_{n}}:={X_{1}}+\cdots +{X_{n}},\hspace{2.5pt}n\geqslant 1.\]
In this paper, we consider the randomly stopped sum
\[\begin{aligned}{}{S_{\nu }}& ={X_{1}}+\cdots +{X_{\nu }},\end{aligned}\]
where ν is a counting r.v. taking values in ${\mathbb{N}_{0}}:=\{0,1,2,\dots \}$. We assume that ν is nondegenerate at zero, i.e. $\mathbb{P}(\nu =0)\lt 1$, and that ν is independent of $\{X,{X_{1}},{X_{2}},\dots \}$. Denote by ${F_{X}}$, ${F_{\nu }}$ and ${F_{{S_{\nu }}}}$ the distributions of X, ν and ${S_{\nu }}$, respectively. In the case where the primary r.v.s are heavy-tailed and nonnegative, the standard result states that if ν has a finite mean $\mathbb{E}\nu $ and its distribution tail is lighter than the tail of X, then
(2)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \underset{x\to \infty }{\sim }\mathbb{E}\nu \hspace{0.1667em}\overline{{F_{X}}}(x).\end{aligned}\]
For important contributions, see Stam [15], Daley et al. [4], Embrechts et al. [7], Faÿ et al. [8]. In Section 4 of the last paper, the case of nonnegative regularly varying summands was examined in detail. Note that, in general, relationship (2) can be obtained under different conditions on the r.v.s X and ν (see, e.g., Daley et al. [4]). More precisely, (2) holds under various combinations of heavy-tailed distribution classes, moment conditions on X and ν, and relationships between the distribution tails $\overline{{F_{X}}}$ and $\overline{{F_{\nu }}}$ (typically, $\overline{{F_{\nu }}}(x)=o(\overline{{F_{X}}}(x))$). Usually, when the conditions on ${F_{\nu }}$ are weakened, stronger conditions on ${F_{X}}$ are assumed. In the case of real-valued r.v.s, the conditions for (2) also depend on the sign of the mean $\mu =\mathbb{E}X$, if it exists. For instance, in the case of negative mean, relation (2) holds for the whole class ${\mathcal{S}^{\ast }}$ (see the definition below), which contains most subexponential distributions with finite mean; see Denisov et al. [6, Theorem 1].
In this paper, we ask under what ‘minimal’ moment conditions relation (2) holds in the case of a real-valued consistently varying distribution ${F_{X}}$ with zero mean. Recall that the class of consistently varying distributions (see the definition below) contains the class of regularly varying distributions.
Before formulating and discussing the main result of the paper, we introduce the related subclasses of heavy-tailed distributions, some notions, and known results. We say that a distribution $F=1-\overline{F}$ is on $\mathbb{R}:=(-\infty ,\infty )$ if $\overline{F}(x)\gt 0$ for all $x\in \mathbb{R}$. All limiting relations are assumed to hold as $x\to \infty $ unless stated otherwise. For two eventually positive functions $a(x)$ and $b(x)$, $a(x)\sim b(x)$ means that $\lim a(x)/b(x)=1$; $a(x)\asymp b(x)$ means that $0\lt \liminf a(x)/b(x)\leqslant \limsup a(x)/b(x)\lt \infty $. We denote ${a^{+}}:=\max \{a,0\}$, ${a^{-}}:=-\min \{a,0\}$.
  • A distribution F on $\mathbb{R}$ is said to be heavy-tailed, denoted $F\in \mathcal{H}$, if its Laplace–Stieltjes transform satisfies
    \[\begin{aligned}{}{\int _{-\infty }^{\infty }}{\mathrm{e}^{\delta x}}\mathrm{d}F(x)& =\infty \hspace{2.5pt}\textit{for any}\hspace{2.5pt}\delta \gt 0.\end{aligned}\]
    Otherwise, F is said to be light-tailed.
Next we introduce the heavy-tailed distribution subclasses which will be used in the paper.
  • A distribution F on $\mathbb{R}$ is said to be regularly varying with index $\alpha \geqslant 0$, denoted $F\in \mathcal{R}(\alpha )$, if its tail satisfies
    \[\begin{aligned}{}\lim \frac{\overline{F}(xy)}{\overline{F}(x)}& ={y^{-\alpha }}\hspace{2.5pt}\textit{for any}\hspace{2.5pt}y\gt 0.\end{aligned}\]
    A distribution $F\in \mathcal{R}(0)$ is said to be slowly varying.
  • A distribution F on $\mathbb{R}$ is said to be consistently varying, denoted by $F\in \mathcal{C}$, if
    (3)
    \[\begin{aligned}{}\underset{y\searrow 1}{\lim }\underset{x\to \infty }{\liminf }\frac{\overline{F}(xy)}{\overline{F}(x)}& =1.\end{aligned}\]
  • A distribution F on $\mathbb{R}$ is said to belong to the class of dominatedly varying distributions, denoted $F\in \mathcal{D}$, if
    \[\begin{aligned}{}\limsup \frac{\overline{F}(xy)}{\overline{F}(x)}& \lt \infty \end{aligned}\]
    for all (or, equivalently, for some) $y\in (0,1)$.
It holds that $\mathcal{C}\subset \mathcal{D}\subset \mathcal{H}$.
Next, for a distribution F on $\mathbb{R}$ denote
(4)
\[\begin{aligned}{}{\overline{F}_{\ast }}(y)& :=\underset{x\to \infty }{\liminf }\frac{\overline{F}(xy)}{\overline{F}(x)},\hspace{2.5pt}\hspace{2.5pt}{\overline{F}^{\ast }}(y):=\underset{x\to \infty }{\limsup }\frac{\overline{F}(xy)}{\overline{F}(x)},\hspace{2.5pt}y\gt 1,\end{aligned}\]
and introduce the upper and lower Matuszewska indices by the equalities
\[ {J_{F}^{+}}=-\underset{y\to \infty }{\lim }\frac{\log {\overline{F}_{\ast }}(y)}{\log y},\hspace{2.5pt}\hspace{2.5pt}{J_{F}^{-}}=-\underset{y\to \infty }{\lim }\frac{\log {\overline{F}^{\ast }}(y)}{\log y}.\]
Clearly, $0\leqslant {J_{F}^{-}}\leqslant {J_{F}^{+}}\leqslant \infty $. It is well known that $F\in \mathcal{D}$ if and only if ${J_{F}^{+}}\lt \infty $.
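For the pure power tail $\overline{F}(x)={x^{-\alpha }}$, $x\geqslant 1$, the quotient in (4) equals ${y^{-\alpha }}$ for every x, so both Matuszewska indices coincide with α. The following sketch (illustrative only, not part of the paper; the tail index and evaluation points are arbitrary choices) evaluates the defining quotient numerically:

```python
import math

# Illustrative check: for the Pareto tail F̄(x) = x^(-alpha), x >= 1,
# both Matuszewska indices equal alpha.
alpha = 1.5

def tail(x):
    return x ** (-alpha)

def matuszewska_estimate(y, x=1e6):
    # -log(F̄(xy)/F̄(x)) / log(y); for a pure power tail the liminf and
    # limsup in (4) coincide, so a single large x suffices
    return -math.log(tail(x * y) / tail(x)) / math.log(y)

print(matuszewska_estimate(50.0))   # recovers alpha = 1.5
```

For genuinely dominatedly varying tails the liminf and limsup in (4) may differ, and ${J_{F}^{-}}$ and ${J_{F}^{+}}$ must then be estimated separately.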
  • Set ${F^{+}}(x):=F(x){1_{\{x\geqslant 0\}}}$. A distribution F on $\mathbb{R}$ is said to be subexponential, denoted $F\in \mathcal{S}$, if $\overline{{F^{+}}\ast {F^{+}}}(x)\sim 2\overline{F}(x)$.
Note that $F\in \mathcal{S}$ implies
\[\begin{aligned}{}\overline{{F^{\ast n}}}(x)& \sim n\overline{F}(x)\hspace{2.5pt}\hspace{2.5pt}\text{for all}\hspace{2.5pt}\hspace{2.5pt}n\geqslant 2,\end{aligned}\]
see, e.g., Foss et al. [9, Corollary 3.20].
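The defining relation $\overline{{F^{\ast 2}}}(x)\sim 2\overline{F}(x)$ can be observed by simulation. The sketch below (illustrative only; the Pareto tail, the threshold $x=30$ and the sample size are arbitrary choices) estimates the two-fold convolution tail by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n = 1.5, 1_000_000

# two independent Pareto(alpha) samples on [1, inf); F̄(x) = x^(-alpha)
X1 = (1.0 - rng.random(n)) ** (-1.0 / alpha)
X2 = (1.0 - rng.random(n)) ** (-1.0 / alpha)

x = 30.0
conv_tail = np.mean(X1 + X2 > x)          # Monte Carlo estimate of the F*2 tail
ratio = conv_tail / (2.0 * x ** (-alpha))
print(ratio)  # somewhat above 1 at finite x; tends to 1 as x grows
```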
  • A distribution F on $\mathbb{R}$ with ${m_{F}}:={\textstyle\int _{0}^{\infty }}\overline{F}(u)\mathrm{d}u\in (0,\infty )$ belongs to ${\mathcal{S}^{\ast }}$ (or is strong subexponential) if
    \[\begin{aligned}{}{\int _{0}^{x}}\overline{F}(x-y)\overline{F}(y)\mathrm{d}y& \sim 2{m_{F}}\overline{F}(x).\end{aligned}\]
It holds that $\mathcal{C}\subset {\mathcal{S}^{\ast }}\subset \mathcal{S}$ provided the mean is finite.
More details on the mentioned heavy-tailed classes can be found in the recent book [11].
First, we formulate some known results for the class $\mathcal{C}$.
Proposition 1.
Let $X,{X_{1}},{X_{2}},\dots $ be i.i.d. real-valued r.v.s with the common distribution ${F_{X}}\in \mathcal{C}$ and let ν be an independent counting r.v. Assume that one of the following conditions holds:
  • (a) $\mathbb{E}{\nu ^{p+1}}\lt \infty $ for some $p\gt {J_{{F_{X}}}^{+}}$, or
  • (b) $\mathbb{E}|X|\lt \infty $, $\overline{{F_{\nu }}}(x)=o(\overline{{F_{X}}}(x))$, or
  • (c) $\mathbb{E}|X|\lt \infty $, $\mathbb{E}\nu \lt \infty $ and $\mathbb{E}X\lt 0$.
Then
(5)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \sim \mathbb{E}\nu \overline{{F_{X}}}(x).\end{aligned}\]
Proof. In the case of nonnegative r.v.s, part (a) of the proposition can be found in Leipus and Šiaulys [10, Corollary 3]. We provide a short proof for the case of distributions on $\mathbb{R}$. Write
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& ={\sum \limits_{n=1}^{\infty }}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n),\hspace{2.5pt}x\geqslant 0.\end{aligned}\]
Since $\mathcal{C}\subset \mathcal{S}$, we have $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$ for any $n\geqslant 1$. In addition, as $\mathcal{C}\subset \mathcal{D}$, according to Theorem 3 in Daley et al. [4], for any $p\gt {J_{{F_{X}}}^{+}}$, there exists a finite positive constant C, independent of x and n, such that
(6)
\[\begin{aligned}{}\underset{x\in \mathbb{R}}{\sup }\frac{\overline{{F_{X}^{\ast n}}}(x)}{\overline{{F_{X}}}(x)}& \leqslant C{n^{p+1}}.\end{aligned}\]
This implies (5) by the dominated convergence theorem. Part (b) can be found in Ng et al. [13, Theorem 2.3] or Denisov et al. [6, Corollary 3]. Part (c) follows from Denisov et al. [6, Theorem 1] and relationship $\mathcal{C}\subset {\mathcal{S}^{\ast }}$ (in the case of regularly varying distributions, see Borovkov and Borovkov [2, Theorem 7.1.1]).  □
We focus our attention on the case where $\mathbb{E}X=0$ and show that, in this case, the result in part (b) can be improved by replacing the $o(\cdot )$-condition with an $O(\cdot )$-condition. Note that, in the case of zero mean and in a more general setup, Olvera-Cravioto [14, Theorem 2.1 (b)] obtained the following result.
Proposition 2.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. real-valued r.v.s with the common distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. Assume that ${J_{{F_{X}}}^{-}}\gt 0$, $\mathbb{E}|X{|^{r}}\lt \infty $ for some $r\gt 1$, $\mathbb{E}X=0$ and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$. Then (5) holds.
As noted by Olvera-Cravioto [14], the proof of the result follows the standard heavy-tailed techniques from Nagaev [12], Borovkov [1] (see also Borovkov and Borovkov [2]), based on the exponential bounds for sums of truncated r.v.s. Moreover, it was conjectured that the requirement $\mathbb{E}|X{|^{r}}\lt \infty $, $r\gt 1$ might be weakened with a different proof technique.
In this paper we prove that the result of Proposition 2 indeed holds under the condition $\mathbb{E}|X|\lt \infty $, modifying the proof accordingly. Specifically, some ideas from Cline and Hsing [3], Tang [16] and Danilenko and Šiaulys [5] are used in the proof of the main result. Apparently, an alternative proof of the main result can be constructed using the bounds in Theorem 1 of Tang and Yan [18].

2 Main results

Theorem 1.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the distribution ${F_{X}}\in \mathcal{C}$, and let ν be an independent counting r.v. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, ${J_{{F_{X}}}^{-}}\gt 0$, and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, then (5) holds.
Observe that the conditions $\mathbb{E}|X|\lt \infty $ and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$ imply that $\mathbb{E}\nu \lt \infty $. The statement of the theorem follows from Propositions 3 and 4 below, in which the upper and lower asymptotic bounds are obtained.
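This implication can be verified directly (a short sketch, where c and ${x_{0}}$ denote the constants implicit in the $O(\cdot )$-condition, i.e. $\overline{{F_{\nu }}}(x)\leqslant c\hspace{0.1667em}\overline{{F_{X}}}(x)$ for $x\geqslant {x_{0}}$): since $\mathbb{E}{X^{+}}\leqslant \mathbb{E}|X|\lt \infty $, the tail $\overline{{F_{X}}}$ is integrable over $[0,\infty )$, and hence
\[\begin{aligned}{}\mathbb{E}\nu & ={\int _{0}^{\infty }}\overline{{F_{\nu }}}(u)\hspace{0.1667em}\mathrm{d}u\leqslant {x_{0}}+c{\int _{{x_{0}}}^{\infty }}\overline{{F_{X}}}(u)\hspace{0.1667em}\mathrm{d}u\lt \infty .\end{aligned}\]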
Remark 1.
Note that, in the case of a dominatedly varying distribution ${F_{X}}$ with finite mean, the condition $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$ (both for $\mu \gt 0$ and $\mu \leqslant 0$) is sufficient for the relationship $\overline{{F_{{S_{\nu }}}}}(x)\asymp \overline{{F_{X}}}(x)$ (see, e.g., Leipus and Šiaulys [10], Yang and Gao [19]). Taking into account the closure of the class $\mathcal{D}$ under weak tail equivalence, this yields that the distribution of the random sum ${S_{\nu }}$ is in $\mathcal{D}$.
Proposition 3.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the common distribution ${F_{X}}\in \mathcal{S}$, and let ν be an independent counting r.v. with finite mean $\mathbb{E}\nu $. Then
\[\begin{aligned}{}\liminf \frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \geqslant 1.\end{aligned}\]
Proposition 4.
Under the conditions of Theorem 1,
\[\begin{aligned}{}\limsup \frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \leqslant 1.\end{aligned}\]
From the main theorem we obtain the following statement for regularly varying distributions. To the best of our knowledge, this is a new result.
Corollary 1.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. r.v.s with the distribution ${F_{X}}\in \mathcal{R}(\alpha )$, $\alpha \geqslant 1$, and let ν be an independent counting r.v. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, and $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, then (5) holds.
Note that if ${F_{X}}\in \mathcal{R}(\alpha )$ with $\alpha \gt 1$, then the condition $\mathbb{E}|X|\lt \infty $ is automatically satisfied.
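As a numerical illustration of the asymptotics (5) (a sketch only, not part of the paper; the tail index $\alpha =1.5$, the Poisson parameter, the threshold and the sample size are arbitrary choices), take symmetric Pareto-type summands, so that $\mathbb{E}X=0$ by symmetry and $\overline{{F_{X}}}(x)={x^{-\alpha }}/2$ for $x\geqslant 1$, together with a Poisson counting r.v. ν:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lam, n_sim = 1.5, 3.0, 200_000   # tail index, E[nu], replications

nu = rng.poisson(lam, size=n_sim)       # counting r.v., E[nu] = lam

# symmetric Pareto-type summands: |X| ~ Pareto(alpha) on [1, inf), random sign,
# hence E X = 0 and F̄_X(x) = x^(-alpha)/2 for x >= 1
total = int(nu.sum())
magnitudes = (1.0 - rng.random(total)) ** (-1.0 / alpha)
signs = rng.choice([-1.0, 1.0], size=total)
summands = signs * magnitudes

# group the summands into the n_sim randomly stopped sums S_nu
labels = np.repeat(np.arange(n_sim), nu)
S = np.bincount(labels, weights=summands, minlength=n_sim)

x = 30.0
empirical = np.mean(S > x)
predicted = lam * 0.5 * x ** (-alpha)   # E[nu] * F̄_X(x)
print(empirical / predicted)            # of order 1, approaching 1 as x grows
```

The quotient $\mathbb{P}({S_{\nu }}\gt x)/(\mathbb{E}\nu \hspace{0.1667em}\overline{{F_{X}}}(x))$ is already of order 1 at moderate x under these choices, in line with (5).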

3 Proof of Proposition 3

For $K\in \mathbb{N}$ and large x we have
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& \geqslant \mathbb{P}({S_{\nu }}\gt x,\nu \leqslant K)={\sum \limits_{n=1}^{K}}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n).\end{aligned}\]
Since $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$, we get that
\[\begin{aligned}{}\liminf \frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \geqslant \frac{1}{\mathbb{E}\nu }{\sum \limits_{n=1}^{K}}n\mathbb{P}(\nu =n)=\frac{\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}}{\mathbb{E}\nu }.\end{aligned}\]
The assertion of the proposition now follows from the last estimate by letting K tend to infinity.  □

4 Proof of Proposition 4

Let $K\in \mathbb{N}$ and $\delta \in (0,1)$ be temporarily fixed numbers. For sufficiently large x we have
(7)
\[\begin{aligned}{}\overline{{F_{{S_{\nu }}}}}(x)& =\mathbb{P}({S_{\nu }}\gt x,\nu \leqslant K)+\mathbb{P}\big({S_{\nu }}\gt x,K\lt \nu \leqslant x{\delta ^{-1}}\big)\\ {} & \hspace{1em}+\mathbb{P}\big({S_{\nu }}\gt x,\nu \gt x{\delta ^{-1}}\big)\\ {} & ={\sum \limits_{n=1}^{K}}\mathbb{P}({S_{n}}\gt x)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\big({S_{n}}\gt x,{\cup _{k=1}^{n}}\big\{{X_{k}}\gt x(1-\delta )\big\}\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\big({S_{n}}\gt x,{\cap _{k=1}^{n}}\big\{{X_{k}}\le x(1-\delta )\big\}\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\mathbb{P}\big({S_{\nu }}\gt x,\nu \gt x{\delta ^{-1}}\big)\\ {} & \leqslant {\sum \limits_{n=1}^{K}}\overline{{F_{X}^{\ast n}}}(x)\mathbb{P}(\nu =n)+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}n\overline{{F_{X}}}\big(x(1-\delta )\big)\mathbb{P}(\nu =n)\\ {} & \hspace{1em}+\sum \limits_{K\lt n\leqslant x{\delta ^{-1}}}\mathbb{P}\Bigg({\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\gt x\Bigg)\mathbb{P}(\nu =n)+\overline{{F_{\nu }}}\big(x{\delta ^{-1}}\big)\\ {} & =:{\mathcal{J}_{1}}+{\mathcal{J}_{2}}+{\mathcal{J}_{3}}+{\mathcal{J}_{4}},\end{aligned}\]
where ${\widehat{X}_{k}}:=\min \{{X_{k}},x(1-\delta )\}$.
Since ${F_{X}}\in \mathcal{C}\subset \mathcal{S}$, it holds that $\overline{{F_{X}^{\ast n}}}(x)\sim n\overline{{F_{X}}}(x)$ for any fixed n. Therefore,
(8)
\[\begin{aligned}{}{\mathcal{J}_{1}}& \leqslant (1+\delta )\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}\end{aligned}\]
for sufficiently large $x\geqslant {x_{1}}(K,\delta )$.
In addition,
(9)
\[\begin{aligned}{}{\mathcal{J}_{2}}& \leqslant \overline{{F_{X}}}\big(x(1-\delta )\big)\mathbb{E}\nu {1_{\{\nu \gt K\}}},\end{aligned}\]
(10)
\[\begin{aligned}{}{\mathcal{J}_{3}}& \leqslant \mathbb{E}\nu \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x)}{n}.\end{aligned}\]
Using the bound in Lemma 1 (i) for the class $\mathcal{D}$ and the condition $\overline{{F_{\nu }}}(x)=O(\overline{{F_{X}}}(x))$, we obtain
(11)
\[\begin{aligned}{}{\mathcal{J}_{4}}& =\frac{\overline{{F_{\nu }}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x{\delta ^{-1}})}\hspace{0.1667em}\frac{\overline{{F_{X}}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x)}\hspace{0.1667em}\overline{{F_{X}}}(x)\\ {} & \leqslant {c_{1}}\frac{\overline{{F_{X}}}(x{\delta ^{-1}})}{\overline{{F_{X}}}(x)}\hspace{0.1667em}\overline{{F_{X}}}(x)\leqslant {c_{2}}\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\hspace{0.1667em}\overline{{F_{X}}}(x)\end{aligned}\]
for large $x\geqslant {x_{2}}(\delta )$ with some positive constants ${c_{1}}$ and ${c_{2}}$.
Substituting estimates (8)–(11) into (7), we obtain
\[\begin{aligned}{}\frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \leqslant \max \bigg\{\frac{{\mathcal{J}_{1}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \leqslant K\}}}},\frac{{\mathcal{J}_{2}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu {1_{\{\nu \gt K\}}}}\bigg\}\\ {} & \hspace{1em}+\frac{{\mathcal{J}_{3}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu }+\frac{{\mathcal{J}_{4}}}{\overline{{F_{X}}}(x)\mathbb{E}\nu }\\ {} & \leqslant \max \bigg\{1+\delta ,\frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}+\underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x)}{n\overline{{F_{X}}}(x)}\\ {} & \hspace{1em}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{\hspace{0.1667em}{J_{{F_{X}}}^{-}}/2}}\end{aligned}\]
for $x\geqslant \max \{{x_{1}}(K,\delta ),{x_{2}}(\delta )\}$. Therefore,
\[\begin{aligned}{}\limsup \frac{\overline{{F_{{S_{\nu }}}}}(x)}{\mathbb{E}\nu \overline{{F_{X}}}(x)}& \leqslant \max \bigg\{1+\delta ,\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}\\ {} & \hspace{1em}+\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\limsup \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\\ {} & =\max \bigg\{1+\delta ,\limsup \frac{\overline{{F_{X}}}(x(1-\delta ))}{\overline{{F_{X}}}(x)}\bigg\}+\frac{{c_{2}}}{\mathbb{E}\nu }\hspace{0.1667em}{\delta ^{{J_{{F_{X}}}^{-}}/2}}\end{aligned}\]
according to Lemma 2. The desired upper bound is then obtained taking $\delta \searrow 0$.

5 Auxiliary lemmas

The first auxiliary lemma can be found in Tang and Tsitsiashvili [17, Lemma 3.5].
Lemma 1.
Let the distribution $F\in \mathcal{D}$ with lower and upper Matuszewska indices ${J_{F}^{-}}$ and ${J_{F}^{+}}$, respectively.
  • (i) If ${J_{F}^{-}}\gt 0$, then for any $0\leqslant {p_{1}}\lt {J_{F}^{-}}$ there exist positive constants ${C_{1}}={C_{1}}({p_{1}})$ and ${D_{1}}={D_{1}}({p_{1}})$, such that
    (12)
    \[\begin{aligned}{}\frac{\overline{F}(y)}{\overline{F}(x)}& \geqslant {C_{1}}{\bigg(\frac{x}{y}\bigg)^{{p_{1}}}}\end{aligned}\]
    for all $x\geqslant y\geqslant {D_{1}}$.
  • (ii) For any ${p_{2}}\gt {J_{F}^{+}}\geqslant 0$ there exist positive constants ${C_{2}}={C_{2}}({p_{2}})$ and ${D_{2}}={D_{2}}({p_{2}})$, such that
    (13)
    \[\begin{aligned}{}\frac{\overline{F}(y)}{\overline{F}(x)}& \leqslant {C_{2}}{\bigg(\frac{x}{y}\bigg)^{{p_{2}}}}\end{aligned}\]
    for all $x\geqslant y\geqslant {D_{2}}$.
  • (iii) For any $p\gt {J_{F}^{+}}$ it holds that ${x^{-p}}=o(\overline{F}(x))$.
The following lemma is crucial in the proof of Proposition 4.
Lemma 2.
Let $X,{X_{1}},{X_{2}},\dots \hspace{0.1667em}$ be i.i.d. real-valued r.v.s with the dominatedly varying distribution ${F_{X}}\in \mathcal{D}$. If $\mathbb{E}|X|\lt \infty $, $\mathbb{E}X=0$, then, for any $\delta \in (0,1)$,
\[\begin{aligned}{}\lim \underset{n\leqslant x{\delta ^{-1}}}{\max }\frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}& =0,\end{aligned}\]
where ${\widehat{X}_{k}}:=\min \{{X_{k}},x(1-\delta )\}$.
Proof. For any $\delta \in (0,1)$, set
\[ a=a(x,n):=\max \bigg\{\log \frac{1}{n\hspace{0.1667em}\overline{{F_{X}}}(x(1-\delta ))},1\bigg\},\hspace{2.5pt}x\in \mathbb{R},\hspace{2.5pt}n\in \mathbb{N}.\]
The assumption $\mathbb{E}|X|\lt \infty $ implies that $x\overline{{F_{X}}}(x(1-\delta ))\to 0$ as $x\to \infty $. Since $a(x,n)$ is nonincreasing in n, we get that for any $\delta \in (0,1)$
(14)
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\min }a(x,n)& \geqslant \log \frac{1}{x{\delta ^{-1}}\overline{{F_{X}}}(x(1-\delta ))}\to \infty \end{aligned}\]
and $a(x,n)=\log (1/(n\overline{{F_{X}}}(x(1-\delta ))))$ for large x ($x\geqslant {x_{3}}(\delta )$) and $n\leqslant x{\delta ^{-1}}$.
By the exponential Markov inequality, for any $h,x\gt 0$, we have
\[\begin{aligned}{}\mathbb{P}\Bigg({\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\gt x\Bigg)& \leqslant {\mathrm{e}^{-hx}}\mathbb{E}\exp \Bigg\{h{\sum \limits_{k=1}^{n}}{\widehat{X}_{k}}\Bigg\}\\ {} & ={\mathrm{e}^{-hx}}{\big(1+\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)\big)^{n}}.\end{aligned}\]
Thus, by the inequality $1+z\leqslant {\mathrm{e}^{z}}$, $z\in \mathbb{R}$,
(15)
\[ \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\leqslant \exp \big\{-hx+a+n\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)\big\}.\]
Split the expectation $\mathbb{E}({\mathrm{e}^{h{\widehat{X}_{1}}}}-1)$ as follows:
(16)
\[\begin{aligned}{}\mathbb{E}\big({\mathrm{e}^{h{\widehat{X}_{1}}}}-1\big)& ={\mathcal{K}_{1}}+{\mathcal{K}_{2}}+{\mathcal{K}_{3}}+{\mathcal{K}_{4}},\end{aligned}\]
where
\[\begin{aligned}{}{\mathcal{K}_{1}}& :={\int _{(-\infty ,0]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{2}}& :={\int _{(0,x(1-\delta ){a^{-2}}]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{3}}& :={\int _{(x(1-\delta ){a^{-2}},x(1-\delta )]}}\big({\mathrm{e}^{hu}}-1\big)\mathrm{d}{F_{X}}(u),\\ {} {\mathcal{K}_{4}}& :=\big({\mathrm{e}^{hx(1-\delta )}}-1\big)\overline{{F_{X}}}\big(x(1-\delta )\big).\end{aligned}\]
The inequalities $|{\mathrm{e}^{z}}-1|\leqslant |z|$, $|{\mathrm{e}^{z}}-z-1|\leqslant {z^{2}}/2$, $z\leqslant 0$, imply that
(17)
\[\begin{aligned}{}{\mathcal{K}_{1}}& =h\mathbb{E}X{1_{\{X\leqslant 0\}}}+\mathbb{E}\big({\mathrm{e}^{hX}}-hX-1\big){1_{\{X\leqslant 0\}}}\\ {} & =-h\mathbb{E}{X^{-}}+\mathbb{E}\big({\mathrm{e}^{hX}}-1\big){1_{\{X\leqslant -{h^{-1/4}}\}}}-h\mathbb{E}X{1_{\{X\leqslant -{h^{-1/4}}\}}}\\ {} & \hspace{1em}+\mathbb{E}\big({\mathrm{e}^{hX}}-hX-1\big){1_{\{-{h^{-1/4}}\lt X\leqslant 0\}}}\\ {} & \leqslant -h\mathbb{E}{X^{-}}+2h\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{3/2}}}{2}.\end{aligned}\]
The inequality ${\mathrm{e}^{z}}-1\leqslant z{\mathrm{e}^{z}}$, $z\geqslant 0$, implies that
(18)
\[\begin{aligned}{}{\mathcal{K}_{2}}& \leqslant h{\mathrm{e}^{hx(1-\delta ){a^{-2}}}}{\int _{(0,x(1-\delta ){a^{-2}}]}}u\hspace{0.1667em}\mathrm{d}{F_{X}}(u)\\ {} & \leqslant h{\mathrm{e}^{hx(1-\delta ){a^{-2}}}}\mathbb{E}{X^{+}}.\end{aligned}\]
In addition, observe that
(19)
\[\begin{aligned}{}{\mathcal{K}_{3}},{\mathcal{K}_{4}}& \leqslant {\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta ){a^{-2}}\big).\end{aligned}\]
Substituting estimates (17), (18), (19) into (15)–(16), we get
\[\begin{aligned}{}& \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}\leqslant \exp \big\{2n{\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta ){a^{-2}}\big)\big\}\\ {} & \hspace{2em}\times \exp \bigg\{-hx+a+nh\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}-\mathbb{E}{X^{-}}+{\mathrm{e}^{\frac{hx(1-\delta )}{{a^{2}}}}}\mathbb{E}{X^{+}}\bigg)\bigg\}.\end{aligned}\]
According to Lemma 1 (iii), ${(x(1-\delta ))^{p}}\hspace{0.1667em}\overline{{F_{X}}}(x(1-\delta ))\to \infty $ for any $p\gt {J_{{F_{X}}}^{+}}$. Hence, for large x ($x\geqslant {x_{4}}(\delta ,p)\gt {x_{3}}(\delta )$),
(20)
\[ \underset{1\leqslant n\leqslant x{\delta ^{-1}}}{\max }a(x,n)\leqslant \log \frac{{(x(1-\delta ))^{p}}}{\overline{{F_{X}}}(x(1-\delta )){(x(1-\delta ))^{p}}}\leqslant p\log \big(x(1-\delta )\big).\]
This relation implies that
\[ \underset{1\leqslant n\leqslant x{\delta ^{-1}}}{\min }x(1-\delta ){a^{-2}}\to \infty \]
and, since ${F_{X}}\in \mathcal{D}$, by Lemma 1 (ii), it holds
(21)
\[\begin{aligned}{}\frac{\overline{{F_{X}}}(x(1-\delta ){a^{-2}})}{\overline{{F_{X}}}(x(1-\delta ))}& \leqslant {c_{3}}{a^{2p}}\end{aligned}\]
for any $p\gt {J_{{F_{X}}}^{+}}$, large x $(x\geqslant {x_{5}}(\delta ,p)\gt {x_{4}}(\delta ,p))$ and some positive constant ${c_{3}}={c_{3}}(\delta ,p)$.
Therefore, by condition $\mathbb{E}X=\mathbb{E}{X^{+}}-\mathbb{E}{X^{-}}=0$, we get
(22)
\[\begin{aligned}{}& \frac{\mathbb{P}\big({\textstyle\textstyle\sum _{k=1}^{n}}{\widehat{X}_{k}}\gt x\big)}{n\overline{{F_{X}}}(x(1-\delta ))}\\ {} & \hspace{1em}\leqslant \exp \big\{2{c_{3}}n{a^{2p}}{\mathrm{e}^{hx(1-\delta )}}\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & \hspace{1em}\hspace{1em}\times \exp \bigg\{-hx+a+nh\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}+\big({\mathrm{e}^{\frac{hx(1-\delta )}{{a^{2}}}}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}\\ {} & =:{P_{1}}{P_{2}}\end{aligned}\]
for $h\gt 0$, $n\leqslant x{\delta ^{-1}}$ and large x ($x\geqslant {x_{5}}(\delta ,p)$).
Now, for $x\gt 0$ set
\[\begin{aligned}{}h& =h(x,n):=\max \bigg\{\frac{a(x,n)-2p\log a(x,n)}{x(1-\delta )},\frac{1}{x(1-\delta )}\bigg\}.\end{aligned}\]
By (14), for large x ($x\geqslant {x_{6}}(\delta ,p)\gt {x_{5}}(\delta ,p)$),
\[ h=\frac{a-2p\log a}{x(1-\delta )}.\]
Hence, from (20) we obtain, that for $x\geqslant {x_{6}}(\delta ,p)$
(23)
\[ \underset{n\leqslant x{\delta ^{-1}}}{\max }h(x,n)\leqslant \frac{{\max _{n\leqslant x{\delta ^{-1}}}}a(x,n)}{x(1-\delta )}\leqslant \frac{p\log (x(1-\delta ))}{x(1-\delta )}\to 0.\]
With this choice of h, we obtain that, for large x $(x\geqslant {x_{6}}(\delta ,p))$ and any $n\leqslant x{\delta ^{-1}}$,
(24)
\[\begin{aligned}{}{P_{1}}& =\exp \big\{2{c_{3}}n{a^{2p}}{\mathrm{e}^{a-2p\log a}}\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & =\exp \big\{2{c_{3}}{\mathrm{e}^{a}}n\overline{{F_{X}}}\big(x(1-\delta )\big)\big\}\\ {} & ={\mathrm{e}^{2{c_{3}}}}.\end{aligned}\]
For ${P_{2}}$, we have for large x and $n\leqslant x{\delta ^{-1}}$
(25)
\[\begin{aligned}{}{P_{2}}& =\exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+n\hspace{0.1667em}\frac{a-2p\log a}{x(1-\delta )}\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}\\ {} & \hspace{1em}+\big({\mathrm{e}^{(a-2p\log a){a^{-2}}}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}\\ {} & \leqslant \exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+\frac{a}{\delta (1-\delta )}\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}\\ {} & \hspace{1em}+\big({\mathrm{e}^{1/a}}-1\big)\mathbb{E}{X^{+}}\bigg)\bigg\}.\end{aligned}\]
Since, by (14) and (23), ${\min _{n\leqslant x{\delta ^{-1}}}}a(x,n)\to \infty $ and ${\max _{n\leqslant x{\delta ^{-1}}}}h(x,n)\to 0$, for large x ($x\geqslant {x_{7}}(\delta ,p)\gt {x_{6}}(\delta ,p))$, it holds that
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\max }\bigg(2\mathbb{E}|X|{1_{\{X\leqslant -{h^{-1/4}}\}}}+\frac{{h^{1/2}}}{2}+\big({\mathrm{e}^{1/a}}-1\big)\mathbb{E}{X^{+}}\bigg)& \leqslant \frac{{\delta ^{2}}}{2}.\end{aligned}\]
Substituting this bound into (25), we obtain that, for large x,
\[\begin{aligned}{}\underset{n\leqslant x{\delta ^{-1}}}{\max }{P_{2}}& \leqslant \underset{n\leqslant x{\delta ^{-1}}}{\max }\exp \bigg\{-\frac{a\delta }{1-\delta }+\frac{2p\log a}{1-\delta }+\frac{a\delta }{2(1-\delta )}\bigg\}\\ {} & =\underset{n\leqslant x{\delta ^{-1}}}{\max }\exp \bigg\{-\frac{a\delta -4p\log a}{2(1-\delta )}\bigg\}\to 0.\end{aligned}\]
This, together with (22) and (24), implies the statement of the lemma.  □

References

[1] 
Borovkov, A.A.: Estimates for the distribution of sums and maxima of sums of random variables without the Cramer condition. Sib. Math. J. 41, 811–848 (2000). MR1803562. https://doi.org/10.1007/BF02674739
[2] 
Borovkov, A.A., Borovkov, K.A.: Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions. Cambridge University Press, Cambridge (2008). MR2424161. https://doi.org/10.1017/CBO9780511721397
[3] 
Cline, D.B.H., Hsing, T.: Large Deviation Probabilities for Sums and Maxima of Random Variables with Heavy or Subexponential Tails. Preprint, Texas A&M University (1991)
[4] 
Daley, D.J., Omey, E., Vesilo, R.: The tail behaviour of a random sum of subexponential random variables and vectors. Extremes 10, 21–39 (2007). MR2397550. https://doi.org/10.1007/s10687-007-0033-3
[5] 
Danilenko, S., Šiaulys, J.: Randomly stopped sums of not identically distributed heavy tailed random variables. Stat. Probab. Lett. 113, 84–93 (2016). MR3480399. https://doi.org/10.1016/j.spl.2016.03.001
[6] 
Denisov, D., Foss, S., Korshunov, D.: Asymptotics of randomly stopped sums in the presence of heavy tails. Bernoulli 16, 971–994 (2010). MR2759165. https://doi.org/10.3150/10-BEJ251
[7] 
Embrechts, P., Goldie, C.M., Veraverbeke, N.: Subexponentiality and infinite divisibility. Z. Wahrscheinlichkeitstheorie verw. Geb. 49, 335–347 (1979)
[8] 
Faÿ, G., González-Arévalo, B., Mikosch, T., Samorodnitsky, G.: Modeling teletraffic arrivals by a Poisson cluster process. Queueing Syst. 54, 121–140 (2006). MR2268057. https://doi.org/10.1007/s11134-006-9348-z
[9] 
Foss, S., Korshunov, D., Zachary, S.: An Introduction to Heavy-Tailed and Subexponential Distributions. Springer, New York (2013). MR2810144. https://doi.org/10.1007/978-1-4419-9473-8
[10] 
Leipus, R., Šiaulys, J.: Closure of some heavy-tailed distribution classes under random convolution. Lith. Math. J. 52, 249–258 (2012). MR3020941. https://doi.org/10.1007/s10986-012-9171-7
[11] 
Leipus, R., Šiaulys, J., Konstantinides, D.: Closure Properties for Heavy-Tailed and Related Distributions. Springer, Cham (2023). MR4660026. https://doi.org/10.1007/978-3-031-34553-1
[12] 
Nagaev, S.V.: On the asymptotic behavior of one-sided large deviation probabilities. Theory Probab. Appl. 26, 362–366 (1982). MR0616627
[13] 
Ng, K.W., Tang, Q.H., Yang, H.: Maxima of sums of heavy-tailed random variables. ASTIN Bull. 32, 43–55 (2002). MR1928012. https://doi.org/10.2143/AST.32.1.1013
[14] 
Olvera-Cravioto, M.: Asymptotics for weighted random sums. Adv. Appl. Probab. 44, 1142–1172 (2012). MR3052852
[15] 
Stam, A.J.: Regular variation of the tail of a subordinated probability distribution. Adv. Appl. Probab. 5, 287–307 (1973). MR0339353. https://doi.org/10.2307/1426038
[16] 
Tang, Q.: Insensitivity to negative dependence of the asymptotic behavior of precise large deviations. Electron. J. Probab. 11, 107–120 (2006). MR2217811. https://doi.org/10.1214/EJP.v11-304
[17] 
Tang, Q., Tsitsiashvili, G.: Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stoch. Process. Appl. 108, 299–325 (2003). MR2019056. https://doi.org/10.1016/j.spa.2003.07.001
[18] 
Tang, Q., Yan, J.: A sharp inequality for the tail probabilities of sums of i.i.d. r.v.’s with dominatedly varying tails. Sci. China Ser. A 45, 1006–1011 (2002). MR1942914. https://doi.org/10.1007/BF02879983
[19] 
Yang, Y., Gao, Q.: On closure properties of heavy-tailed distributions for random sums. Lith. Math. J. 54, 366–377 (2014). MR3240977. https://doi.org/10.1007/s10986-014-9249-5