Modern Stochastics: Theory and Applications
Distribution of shifted discrete random walk generated by distinct random variables and applications in ruin theory
Volume 11, Issue 3 (2024), pp. 323–357
Simonas Gervė, Andrius Grigutis

https://doi.org/10.15559/24-VMSTA249
Pub. online: 19 March 2024      Type: Research Article      Open Access

Received: 3 November 2023
Revised: 13 February 2024
Accepted: 22 February 2024
Published: 19 March 2024

Abstract

In this paper, the distribution function
\[ \varphi (u)=\mathbb{P}\Bigg(\underset{n\geqslant 1}{\sup }{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\lt u\Bigg),\]
and the generating function of $\varphi (u+1)$ are set up. We assume that $u\in \mathbb{N}\cup \{0\}$, $\kappa \in \mathbb{N}$, the random walk $\{{\textstyle\sum _{i=1}^{n}}{X_{i}},\hspace{0.1667em}n\in \mathbb{N}\}$ involves $N\in \mathbb{N}$ periodically occurring distributions, and the integer-valued and nonnegative random variables ${X_{1}},{X_{2}},\dots $ are independent. This research generalizes two recent works where $\{\kappa =1,N\in \mathbb{N}\}$ and $\{\kappa \in \mathbb{N},N=1\}$ were considered, respectively. The provided sequence of sums $\{{\textstyle\sum _{i=1}^{n}}({X_{i}}-\kappa ),\hspace{0.1667em}n\in \mathbb{N}\}$ generates the so-called multi-seasonal discrete-time risk model with arbitrary natural premium, and its known distribution enables computing the ultimate time ruin probability $1-\varphi (u)$ or survival probability $\varphi (u)$. The obtained theoretical statements are verified in several computational examples where the values of the survival probability $\varphi (u)$ and its generating function are provided when $\{\kappa =2,\hspace{0.1667em}N=2\}$, $\{\kappa =3,\hspace{0.1667em}N=2\}$, $\{\kappa =5,\hspace{0.1667em}N=10\}$ and ${X_{i}}$ follows the Poisson and some other distributions. A conjecture on the nonsingularity of certain matrices is posed.

1 Introduction

Many probabilistic models estimating the likelihoods of certain events are based on the sequence of sums $\{{\textstyle\sum _{i=1}^{n}}{X_{i}},\hspace{0.1667em}n\in \mathbb{N}\}$, where ${X_{i}}$ are some random variables. This sequence is called a random walk. Random walks are usually visualized as branching trees or certain paths in the plane or space; they occur everywhere from pure mathematics to many applied sciences. For instance, one may refer to the Case–Shiller home price index [9] or, even more generally, to the random walk hypothesis [30]. From a pure mathematics standpoint, one may mention random matrix theory, see, for example, [3, 13, 31] and other related works. Perhaps the closest context where the need to know the distribution of ${\sup _{n\geqslant 1}}{\textstyle\sum _{i=1}^{n}}({X_{i}}-\kappa )$ arises is insurance mathematics. In ruin theory one may assume that the insurer’s wealth ${W_{u}}(n)$ at discrete time moments $n\in \mathbb{N}$ consists of incoming cash premiums and outgoing payoffs (claims), and ${W_{u}}(n)$ admits the representation:
(1)
\[ {W_{u}}(n)=u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}=u-{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa ),\]
where $u\in {\mathbb{N}_{0}}:=\mathbb{N}\cup \{0\}$ is interpreted as the initial surplus ${W_{u}}(0):=u$, $\kappa \in \mathbb{N}$ denotes the arrival rate of premiums paid by customers, and the subtracted sum of random variables represents claims. Here we assume that the random variables ${X_{i}}$ are independent, nonnegative, and integer-valued but not necessarily identically distributed. More precisely, we assume that ${X_{i}}\stackrel{d}{=}{X_{i+N}}$ for all $i\in \mathbb{N}$ and some fixed $N\in \mathbb{N}$. The model (1) can be visualized as a “race” between the deterministic line $u+\kappa n$ and the sum of random variables ${\textstyle\sum _{i=1}^{n}}{X_{i}}$ as n varies, see Figure 1.
Fig. 1.
Lines $1+n$, $1+2n$, $1+3n$, and random walk ${\textstyle\sum _{i=1}^{n}}{X_{i}}{1_{\{i\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}2=1\}}}+{\textstyle\sum _{i=1}^{n}}{Y_{i}}{1_{\{i\hspace{2.5pt}\mathrm{mod}\hspace{2.5pt}2=0\}}}$, where $\mathbb{P}({X_{i}}=0)=0.3$, $\mathbb{P}({X_{i}}=1)=0.1$, $\mathbb{P}({X_{i}}=5)=0.6$ and $\mathbb{P}({Y_{i}}=0)=0.8$, $\mathbb{P}({Y_{i}}=1)=0.1$, $\mathbb{P}({Y_{i}}=10)=0.1$, and n varies from 1 to 20
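The “race” in model (1) is easy to explore numerically. Below is a small Monte Carlo sketch (an illustration added here, not part of the paper's method) reusing the two seasonal claim distributions from Fig. 1; the choices $u=1$, $\kappa = 3$, the 20-step horizon and the trial count are assumptions matching the figure's setting.

```python
import random

random.seed(0)

# Seasonal claim distributions from Fig. 1: odd steps draw X, even steps draw Y
X_VALUES, X_PROBS = [0, 1, 5], [0.3, 0.1, 0.6]
Y_VALUES, Y_PROBS = [0, 1, 10], [0.8, 0.1, 0.1]

def survives(u, kappa, horizon):
    """Simulate W_u(n) = u + kappa*n - claims; survival means W_u(n) > 0 for all n."""
    wealth = u
    for n in range(1, horizon + 1):
        if n % 2 == 1:
            claim = random.choices(X_VALUES, X_PROBS)[0]
        else:
            claim = random.choices(Y_VALUES, Y_PROBS)[0]
        wealth += kappa - claim
        if wealth <= 0:
            return False
    return True

trials = 20_000
est = sum(survives(u=1, kappa=3, horizon=20) for _ in range(trials)) / trials
print(f"estimated phi(1, 20) for kappa = 3: {est:.3f}")
```

Note that ruin at the very first step already occurs whenever ${X_{1}}=5$ (probability 0.6), so the estimate necessarily stays below 0.4.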
We say that the fixed natural number N is the number of periods or seasons and call the model (1) N-seasonal discrete-time risk model. The model (1) with $N=1$ is a discrete version of the continuous-time Cramér–Lundberg model (also known as the classical risk process)
(2)
\[ {\tilde{W}_{u}}(t)=x+ct-{\sum \limits_{i=1}^{{P_{t}}}}{\xi _{i}},\hspace{1em}t\geqslant 0,\]
where, analogously to model (1), $x\geqslant 0$ represents the initial surplus, $c\gt 0$ the premium rate, ${\xi _{i}}$ are independent and identically distributed nonnegative random variables, and ${P_{t}}$ is a Poisson counting process with intensity $\lambda \gt 0$. The model (2) can be further extended, cf. E. Sparre Andersen’s model [2] or the models considered in [6, 7].
Being curious whether initial surplus and collected premiums can always cover incurred claims, for the N-seasonal discrete-time risk model (1) we define the finite time survival probability
(3)
\[ \varphi (u,\hspace{0.1667em}T):=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{T}}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)=\mathbb{P}\Bigg(\underset{1\leqslant n\leqslant T}{\sup }{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\lt u\Bigg),\]
where T is some fixed natural number, and the ultimate time survival probability
(4)
\[ \varphi (u):=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{\infty }}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)=\mathbb{P}\Bigg(\underset{n\geqslant 1}{\sup }{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\lt u\Bigg).\]
Computation of finite time survival (or ruin, $\psi (u,\hspace{0.1667em}T):=1-\varphi (u,\hspace{0.1667em}T))$ probability (3) is far easier than the computation of ultimate time survival (or ruin, $\psi (u):=1-\varphi (u)$) probability (4), see Theorem 4 for the precise expressions of $\varphi (u,\hspace{0.1667em}T)$. Difficulties computing $\varphi (u)$ arise due to $\varphi (\kappa N),\hspace{0.1667em}\varphi (\kappa N+1),\dots $ being expressible via $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$ which are initially unknown, see the formula (5) in the next section. Therefore, the essence of the problem we solve is nothing but finding these initial values $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$. In this work, we demonstrate that the required values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$ satisfy a certain system of linear equations (see the system (16)) whose coefficients are based on certain polynomials and the roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$, where $s\in \mathbb{C}$ and ${G_{{S_{N}}}}(s)$ is the probability-generating function of ${S_{N}}={X_{1}}+\cdots +{X_{N}}$.
Let us briefly overview the history and some fundamental works on the subject and mention a few recent papers. The foundation of ruin theory dates back to 1903, when the Swedish actuary Filip Lundberg published his work [29], republished in 1930 by the Swedish mathematician Harald Cramér, while the random walk formulation as such was first introduced by the English mathematician and biostatistician Karl Pearson [33]. The Cramér–Lundberg risk model (2) was extended by the Danish mathematician Erik Sparre Andersen, who allowed the claim inter-arrival times to have an arbitrary distribution [2]. The next famous works were published in the late eighties by Hans U. Gerber and Elias S. W. Shiu, see [16, 15, 38, 39]. Likewise, in the second half of the twentieth century, there were many sound studies of the random walk by such authors as William Feller, Frank L. Spitzer, David G. Kendall, Félix Pollaczek and others, see [14, 40, 41, 23, 35] and related works. Moving along the timeline to recent decades, one may reference the notable survey [27]. The variety of assumptions on the random walk’s structure in models (1) or (2) (cf. [20, 37]), of numerical characteristics of renewal risk models other than those defined in (3) and (4) (cf. [28, 24]), and of research methods (cf. [42]) makes the recent literature voluminous. Next to the mentioned references, see also [11, 36, 4, 12, 10, 34, 26, 25, 32, 8] for recent work on the subject.

2 Recursive nature of ultimate time survival probability, basic notations, and the net profit condition

This section starts by deriving the basic recursive relation for the ultimate time survival probability. Definition (4), the law of total probability, and rearrangements imply
(5)
\[\begin{aligned}{}& \varphi (u)=\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{\infty }}\big\{{W_{u}}(n)\gt 0\big\}\Bigg)\\ {} & =\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{N}}\Bigg\{u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}\gt 0\Bigg\}\cap {\bigcap \limits_{n=N+1}^{\infty }}\Bigg\{u+\kappa n-{\sum \limits_{i=1}^{n}}{X_{i}}\gt 0\Bigg\}\Bigg)\\ {} & =\mathbb{P}\Bigg({\bigcap \limits_{n=1}^{N}}\Bigg\{{\sum \limits_{i=1}^{n}}{X_{i}}\leqslant u+\kappa n-1\Bigg\}\cap \\ {} & \hspace{1em}\hspace{2.5pt}\cap {\bigcap \limits_{n=N+1}^{\infty }}\Bigg\{u+\kappa N-{\sum \limits_{i=1}^{N}}{X_{i}}+\kappa (n-N)-{\sum \limits_{i=N+1}^{n}}{X_{i}}\gt 0\Bigg\}\Bigg)\\ {} & =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-14.22636pt}\mathbb{P}({X_{1}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{1}})\mathbb{P}({X_{2}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{2}})\cdots \mathbb{P}({X_{N}}\hspace{-0.1667em}=\hspace{-0.1667em}{i_{N}})\varphi \Bigg(u\hspace{-0.1667em}+\hspace{-0.1667em}\kappa N\hspace{-0.1667em}-\hspace{-0.1667em}{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg).\end{aligned}\]
Substituting $u=0$ into the recursive formula (5), we notice that in order to find $\varphi (\kappa N)$ we need to know the values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$. Moreover, if we know the values of $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa N-1)$, the same recurrence (5) allows us to compute $\varphi (u)$ for any $u\geqslant \kappa N$ by substituting $u=0,\hspace{0.1667em}1,\dots $ there. Thus, as mentioned in the introduction, all we need is a way to compute these initial values.
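Recursion (5) is straightforward to mechanize once the initial values are available: the term with ${i_{1}}=\dots ={i_{N}}=0$ is the only one involving $\varphi (u+\kappa N)$, so (5) can be solved for it. A sketch for $N=2$, $\kappa =2$ follows; the two pmfs and the initial values $\varphi (0),\dots ,\varphi (3)$ are placeholders chosen only to show the mechanics (in the actual method the initial values come from system (16)).

```python
from itertools import product

kappa, N = 2, 2
x = [[0.5, 0.3, 0.2],    # hypothetical pmf of X_1: P(X_1 = i)
     [0.6, 0.2, 0.2]]    # hypothetical pmf of X_2

def rhs(u, phi):
    """Right-hand side of recursion (5) for N = 2."""
    total = 0.0
    for i1, i2 in product(range(len(x[0])), range(len(x[1]))):
        if i1 <= u + kappa - 1 and i1 + i2 <= u + 2 * kappa - 1:
            total += x[0][i1] * x[1][i2] * phi[u + kappa * N - i1 - i2]
    return total

def extend(phi_init, steps):
    """Given phi(0..kappa*N - 1), solve (5) for phi(kappa*N), phi(kappa*N + 1), ..."""
    phi = list(phi_init)
    for u in range(steps):
        phi.append(0.0)                      # placeholder for phi(u + kappa*N)
        partial = rhs(u, phi)                # all terms except i1 = i2 = 0
        phi[u + kappa * N] = (phi[u] - partial) / (x[0][0] * x[1][0])
    return phi

phi = extend([0.5, 0.6, 0.7, 0.8], steps=2)  # placeholder initial values
print(phi)
```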
We now define a series of notations. Recalling that we aim to know the distribution of ${\sup _{n\geqslant 1}}{\textstyle\sum _{i=1}^{n}}({X_{i}}-\kappa )$, we define N random variables:
(6)
\[\begin{aligned}{}& {\mathcal{M}_{1}}:=\underset{n\geqslant 1}{\sup }{\Bigg({\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\hspace{0.1667em}{\mathcal{M}_{2}}:=\underset{n\geqslant 2}{\sup }{\Bigg({\sum \limits_{i=2}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\dots ,\\ {} & {\mathcal{M}_{N}}:=\underset{n\geqslant N}{\sup }{\Bigg({\sum \limits_{i=N}^{n}}({X_{i}}-\kappa )\Bigg)^{+}},\end{aligned}\]
where ${x^{+}}=\max \{0,\hspace{0.1667em}x\}$, $x\in \mathbb{R}$, is the positive part function. Clearly, just like ${X_{1}},{X_{2}},\dots ,{X_{N}}$, each of the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ attains values in the set {0, 1, …}.
Let us denote the probability mass functions of ${\mathcal{M}_{j}}$, their generators ${X_{j}}$ and the sum ${S_{N}}={X_{1}}+{X_{2}}+\cdots +{X_{N}}$:
(7)
\[ {m_{i}^{(j)}}:=\mathbb{P}({\mathcal{M}_{j}}=i),\hspace{2.5pt}\hspace{2.5pt}{x_{i}^{(j)}}:=\mathbb{P}({X_{j}}=i),\hspace{2.5pt}\hspace{2.5pt}{s_{i}^{(N)}}:=\mathbb{P}({S_{N}}=i),\]
where $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$ and $i\in {\mathbb{N}_{0}}$. Let ${F_{{X_{j}}}}(i)$ be the distribution function of the random variable ${X_{j}}$, i.e.
\[ {F_{{X_{j}}}}(u)=\mathbb{P}({X_{j}}\leqslant u)={\sum \limits_{i=0}^{u}}{x_{i}^{(j)}},\hspace{1em}j\in \{1,\hspace{0.1667em}2,\dots ,N\},\hspace{0.1667em}u\in {\mathbb{N}_{0}}.\]
In addition, let ${\overline{S}_{1}}(0):=\{s\in \mathbb{C}:|s|\leqslant 1\}$ and ${S_{1}}(0):=\{s\in \mathbb{C}:|s|\lt 1\}$ be the closed and open unit disks in the complex plane centered at the origin, and denote the probability-generating function of a nonnegative and integer-valued random variable X,
\[ {G_{X}}(s):={\sum \limits_{i=0}^{\infty }}{s^{i}}\mathbb{P}(X=i)=\mathbb{E}{s^{X}},\hspace{1em}s\in {\overline{S}_{1}}(0).\]
The definition of the survival probability (4) and the definition of random variable ${\mathcal{M}_{1}}$ imply
(8)
\[ \varphi (u+1)=\mathbb{P}({\mathcal{M}_{1}}\leqslant u)={\sum \limits_{i=0}^{u}}{m_{i}^{(1)}}\hspace{1em}\text{for all}\hspace{2.5pt}u\in {\mathbb{N}_{0}}.\]
Thus, the survival probability computation turns into setting up the distribution function of ${\mathcal{M}_{1}}$. It is simple to explain the core idea of the paper, i.e. how the probabilities ${m_{i}^{(1)}}$, $i\in {\mathbb{N}_{0}}$, are obtained. Let us refer to Feller’s book [14, Theorem on page 198]. The referenced theorem states that if $N=1$ in model (1), i.e. the random walk $\{{\textstyle\sum _{i=1}^{n}}{X_{i}},\hspace{0.1667em}n\in \mathbb{N}\}$ is generated by independent and identically distributed random variables, which are copies of X, then ${({\mathcal{M}_{1}}+X-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{1}}$. For an arbitrary number of periods $N\in \mathbb{N}$, the mentioned distributional property is as follows:
(9)
\[ {({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}\hspace{1em}\text{and}\hspace{1em}{({\mathcal{M}_{j}}+{X_{j-1}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{j-1}},\]
for all $j=2,\hspace{0.1667em}3,\dots ,N$, where ${\tilde{X}_{N}}$ is an independent copy of ${X_{N}}$; see Lemma 2 in Section 5. Metaphorically, the equalities in distribution (9) mean that the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ “can see each other”; more precisely, based on the equalities in (9), we can set up a system of corresponding equalities of probability-generating functions
(10)
\[ \left\{\begin{array}{l}\mathbb{E}{s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{N}}}}\hspace{1em}\\ {} \mathbb{E}{s^{{({\mathcal{M}_{2}}+{X_{1}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{1}}}}\hspace{1em}\\ {} \hspace{2.5pt}\vdots \hspace{1em}\\ {} \mathbb{E}{s^{{({\mathcal{M}_{N}}+{X_{N-1}}-\kappa )^{+}}}}=\mathbb{E}{s^{{\mathcal{M}_{N-1}}}}\hspace{1em}\end{array}\right..\]
The system (10) contains the desired information on ${m_{i}^{(1)}},\hspace{0.1667em}i\in {\mathbb{N}_{0}}$.
In general, the random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ can be extended, i.e. it may happen that $\mathbb{P}({\mathcal{M}_{j}}=\infty )\gt 0$ for some $j=1,\hspace{0.1667em}2,\dots ,N$. However, ${\lim \nolimits_{u\to \infty }}\mathbb{P}({\mathcal{M}_{j}}\lt u)=1$ for all $j=1,\hspace{0.1667em}2,\dots ,N$ if $\mathbb{E}{S_{N}}\lt \kappa N$, see Lemma 1 in Section 5.
The condition $\mathbb{E}{S_{N}}\lt \kappa N$ is called the net profit condition, and it is crucial for the survival probability $\varphi (u)$. Intuitively, an insurer has no chance to survive in the long run if, on average, the incurred claims are greater than or equal to the collected premiums. This can also be well illustrated by the expected value of ${W_{u}}(n)$ in (1). For instance,
\[ \mathbb{E}{W_{u}}(nN)=u+n(-\mathbb{E}{S_{N}}+\kappa N)\lt 0\]
if $\mathbb{E}{S_{N}}\gt \kappa N$ and n is sufficiently large. Consequently, a negative value of ${W_{u}}(n)$ is unavoidable. See Theorem 3 in Section 3 for the precise expressions of the survival probability $\varphi (u),\hspace{0.1667em}u\in {\mathbb{N}_{0}}$, when the net profit condition is violated, i.e. $\mathbb{E}{S_{N}}\geqslant \kappa N$.
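As a concrete check (an illustration added here), take the two seasonal distributions from Fig. 1, so $N=2$: the arithmetic below shows that the net profit condition fails for $\kappa =2$ but holds for $\kappa =3$.

```python
# Means of the Fig. 1 seasonal claim distributions
EX = 0 * 0.3 + 1 * 0.1 + 5 * 0.6    # E X_1 = 3.1
EY = 0 * 0.8 + 1 * 0.1 + 10 * 0.1   # E X_2 = 1.1
ES = EX + EY                        # E S_N = 4.2 for N = 2

N = 2
for kappa in (2, 3):
    # net profit condition: E S_N < kappa * N
    print(kappa, ES < kappa * N)
```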

3 Main results

In this section, based on the previously introduced notation and the observation that our goal is to find the probability mass function of ${\mathcal{M}_{1}}$, we formulate two main theorems for the computation of the ultimate time survival probability under the net profit condition. Theorem 1, implied by system (10), provides the relations between ${m_{i}^{(1)}},\hspace{0.1667em}{m_{i}^{(2)}},\dots ,{m_{i}^{(N)}}$ and ${x_{i}^{(1)}},\hspace{0.1667em}{x_{i}^{(2)}},\dots ,{x_{i}^{(N)}}$ for all $i\in {\mathbb{N}_{0}}$ and lays the foundation for the computation of the ultimate time survival probability $\varphi (u+1)={\textstyle\sum _{i=0}^{u}}{m_{i}^{(1)}},\hspace{0.1667em}u\in {\mathbb{N}_{0}}$.
Theorem 1.
Suppose that the N-seasonal discrete-time risk model (1) is generated by random variables ${X_{1}},\hspace{0.1667em}{X_{2}},\dots ,{X_{N}}$ and the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$ is satisfied. Then the following statements are correct:
  • 1. The probability mass functions of random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ and ${X_{1}},{X_{2}},\dots ,{X_{N}}$ satisfy the relation
    (11)
    \[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}(\kappa -i-j)+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}(\kappa -i-j)\\ {} & \hspace{2em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}i\hspace{-0.1667em}-\hspace{-0.1667em}j)+\cdots +{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}\hspace{-0.1667em}{x_{j}^{(N-1)}}(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}i\hspace{-0.1667em}-\hspace{-0.1667em}j)\\ {} & \hspace{1em}=\kappa N-\mathbb{E}{S_{N}}.\end{aligned}\]
  • 2. If $s\in {\overline{S}_{1}}(0)\setminus \{0\}$, then
    (12)
    \[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)\bigg(1-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa N}}}\bigg),\end{aligned}\]
    and, if $\alpha \in {\overline{S}_{1}}(0)\setminus \{0,\hspace{0.1667em}1\}$, is a root of ${G_{{S_{N}}}}(s)={s^{\kappa N}}$, then
    (13)
    \[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N}}}}(j-i)+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N-1}}}}(j-i)=0.\end{aligned}\]
  • 3. If $\alpha \in {\overline{S}_{1}}(0)\setminus \{0,\hspace{0.1667em}1\}$, is a root of ${G_{{S_{N}}}}(s)={s^{\kappa N}}$ of multiplicity r, $r=2,\hspace{0.1667em}3,\dots ,\kappa N-1$, then
    (14)
    \[\begin{aligned}{}& \hspace{-21.33955pt}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({s^{j}}\big){\bigg|_{s=\alpha }}{F_{{X_{N}}}}(j-i)\\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}}}(s){s^{j-\kappa }}\big){\bigg|_{s=\alpha }}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}+{X_{1}}}}(s){s^{j-2\kappa }}\big){\bigg|_{s=\alpha }}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{-21.33955pt}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s){s^{j-\kappa (N-1)}}\big){\bigg|_{s=\alpha }}\hspace{-0.1667em}{F_{{X_{N-1}}}}(j-i)\hspace{-0.1667em}=\hspace{-0.1667em}0,\end{aligned}\]
    for all $n\in \{1,\hspace{0.1667em}2,\dots ,r-1\}$, where $\frac{{d^{n}}}{d{s^{n}}}(\dots ){|_{s=\alpha }}$ denotes the n-th derivative with respect to s at $s=\alpha $.
  • 4. For $n=\kappa ,\hspace{0.1667em}\kappa +1,\dots $ , the probability mass functions of random variables ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$ and ${X_{1}},{X_{2}},\dots ,{X_{N}}$ satisfy the following system of equations
    (15)
    \[ \hspace{-14.22636pt}\left\{\begin{array}{l@{\hskip10.0pt}l}{m_{n}^{(1)}}{x_{0}^{(N)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(N)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(N)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}}\\ {} {m_{n}^{(2)}}{x_{0}^{(1)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}{1_{\{n=\kappa \}}}\\ {} {m_{n}^{(3)}}{x_{0}^{(2)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(3)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}{1_{\{n=\kappa \}}}\\ {} \hspace{1em}& \hspace{-0.1667em}\hspace{-0.1667em}\hspace{-0.1667em}\vdots \\ {} {m_{n}^{(N)}}{x_{0}^{(N-1)}}\hspace{1em}& \hspace{-7.11317pt}={m_{n-\kappa }^{(N-1)}}\hspace{-0.1667em}-\hspace{-0.1667em}{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(N)}}{x_{n-i}^{(N-1)}}\hspace{-0.1667em}-\hspace{-0.1667em}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}{1_{\{n=\kappa \}}}\end{array}\right.\hspace{-14.22636pt}.\]
Let us comment on how Theorem 1 is used to obtain the distribution of ${\mathcal{M}_{1}}$, i.e. ${m_{i}^{(1)}}$, $i\in {\mathbb{N}_{0}}$. First, we note that the equation ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ has exactly $\kappa N-1$ roots in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$ counted with their multiplicities, see Lemma 4 in Section 5. We denote these roots by ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$. We then create the system of linear equations (see eq. (16)) by replicating the equation (13) $\kappa N-1$ times over the roots ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ and include (11) as the last equation. To illustrate that explicitly, we define the matrices ${\boldsymbol{M}_{1}},\hspace{0.1667em}{\boldsymbol{M}_{2}},\dots ,{\boldsymbol{M}_{N}}$ and ${\boldsymbol{G}_{2}},\hspace{0.1667em}{\boldsymbol{G}_{3}},\dots ,{\boldsymbol{G}_{N}}$:
[Image vmsta249_g002 omitted: definitions of the matrices ${\boldsymbol{M}_{1}},\hspace{0.1667em}{\boldsymbol{M}_{2}},\dots ,{\boldsymbol{M}_{N}}$ and ${\boldsymbol{G}_{2}},\hspace{0.1667em}{\boldsymbol{G}_{3}},\dots ,{\boldsymbol{G}_{N}}$]
and set up the system
[Image vmsta249_g003 omitted: the linear system (16) in matrix form]
where ∘ denotes the Hadamard matrix product, also known as the element-wise product, entry-wise product, or Schur product, i.e. two elements in corresponding positions in two matrices are multiplied, see, for example, [21].
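Once system (16) has delivered the initial probabilities ${m_{0}^{(j)}},\dots ,{m_{\kappa -1}^{(j)}}$, system (15) extends them to all $n\geqslant \kappa $. The sketch below runs one step of (15) for $\kappa =1$, $N=2$; the pmfs and initial values are placeholders, not a solution of (16), so the output only illustrates the index bookkeeping.

```python
kappa, N = 1, 2
x = [[0.6, 0.2, 0.2],   # hypothetical pmf of X_1
     [0.7, 0.2, 0.1]]   # hypothetical pmf of X_2
# placeholder initial values m_0^{(1)}, m_0^{(2)} (in practice: solve (16))
m = [[0.7], [0.8]]

def step(n):
    """Solve the j-th equation of (15) for m_n^{(j)} (0-based season index j)."""
    for j in range(N):
        prev = (j - 1) % N                   # season paired with m^{(j)} in (15)
        value = m[prev][n - kappa]
        value -= sum(m[j][i] * (x[prev][n - i] if n - i < len(x[prev]) else 0.0)
                     for i in range(n))
        if n == kappa:                       # the indicator term in (15)
            value -= sum(m[j][i] * sum(x[prev][:kappa - i]) for i in range(kappa))
        m[j].append(value / x[prev][0])

step(1)
print(m[0][1], m[1][1])
```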
Clearly, the solution of (16) (see Section 4 on solvability and modifications of (16)) gives ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots ,{m_{\kappa -1}^{(1)}}$, while the system (15) (note that (16) also yields ${m_{0}^{(j)}},\hspace{0.1667em}{m_{1}^{(j)}},\dots ,{m_{\kappa -1}^{(j)}}$, $j\in \{2,\hspace{0.1667em}3,\dots ,N\}$) allows us to compute ${m_{\kappa }^{(1)}},{m_{\kappa +1}^{(1)}},\dots ,{m_{\kappa N-1}^{(1)}}$ and consequently $\varphi (1),\varphi (2),\dots ,\varphi (\kappa N)$. Having $\varphi (1),\varphi (2),\dots ,\varphi (\kappa N)$, we can obtain $\varphi (0)$ by setting $u=0$ in recurrence (5) and use the same recurrence (5) to proceed with computing $\varphi (\kappa N+1),\varphi (\kappa N+2),\dots \hspace{0.1667em}$. Of course, the survival probabilities $\varphi (\kappa N+1),\hspace{0.1667em}\varphi (\kappa N+2),\dots $ can be computed by system (15), too. Moreover, we can set up the ultimate time survival probability-generating function. Let
(17)
\[ \Xi (s):={\sum \limits_{i=0}^{\infty }}\varphi (i+1){s^{i}},\hspace{1em}s\in {S_{1}}(0).\]
In view of (8), it is easy to observe that, for $s\in {S_{1}}(0)$,
(18)
\[ \Xi (s)={\sum \limits_{i=0}^{\infty }}\varphi (i+1){s^{i}}={\sum \limits_{i=0}^{\infty }}{s^{i}}{\sum \limits_{j=0}^{i}}{m_{j}^{(1)}}={\sum \limits_{j=0}^{\infty }}{m_{j}^{(1)}}\frac{{s^{j}}}{1-s}=\frac{{G_{{\mathcal{M}_{1}}}}(s)}{1-s}.\]
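Identity (18) can be sanity-checked numerically: dividing ${G_{{\mathcal{M}_{1}}}}(s)$ by $1-s$ turns the pmf coefficients into partial sums, which is exactly (8). A sketch with a hypothetical finite-support pmf for ${\mathcal{M}_{1}}$ (all numbers below are assumptions for illustration):

```python
s = 0.5
m = [0.5, 0.3, 0.15, 0.05]   # hypothetical pmf of M_1

# Left side of (18): series of phi(i+1) * s^i with phi(i+1) = P(M_1 <= i), eq. (8)
phi = [sum(m[:i + 1]) for i in range(len(m))]          # partial sums
lhs = sum((phi[i] if i < len(phi) else 1.0) * s ** i for i in range(200))

# Right side of (18): G_{M_1}(s) / (1 - s)
G = sum(p * s ** i for i, p in enumerate(m))
rhs = G / (1 - s)

print(lhs, rhs)
```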
Then, the ultimate time survival probability-generating function $\Xi (s)$ admits the following expression.
Theorem 2.
Let us assume that the probabilities ${m_{0}^{(j)}},\hspace{0.1667em}{m_{1}^{(j)}},\dots ,{m_{\kappa -1}^{(j)}}$, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, are known beforehand. Then the survival probability-generating function $\Xi (s)$, for $s\in {S_{1}}(0)$ such that ${s^{\kappa N}}\ne {G_{{S_{N}}}}(s)$, admits the following representation
\[\begin{array}{l}\displaystyle \Xi (s)=\frac{{u^{T}}(s)v(s)}{{G_{{S_{N}}}}(s)-{s^{\kappa N}}},\hspace{1em}\textit{where}\hspace{2.5pt}\\ {} \displaystyle u(s)=\left(\begin{array}{c}{s^{\kappa (N-1)}}\\ {} {s^{\kappa (N-2)}}{G_{{X_{1}}}}(s)\\ {} {s^{\kappa (N-3)}}{G_{{X_{1}}+{X_{2}}}}(s)\\ {} \vdots \\ {} {s^{\kappa }}{G_{{X_{1}}+{X_{2}}+\cdots +{X_{N-2}}}}(s)\\ {} {G_{{X_{1}}+{X_{2}}+\cdots +{X_{N-1}}}}(s)\end{array}\right),\hspace{1em}v(s)=\left(\begin{array}{c}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{1}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{2}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(4)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{3}}}}(j-i)\\ {} \vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{N-1}}}}(j-i)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=i}^{\kappa -1}}{s^{j}}{F_{{X_{N}}}}(j-i)\end{array}\right).\end{array}\]
The next theorem states that if the net profit condition is unsatisfied, the ultimate time survival is impossible except in some cases when ${S_{N}}$ is degenerate.
Theorem 3.
Suppose that the N-seasonal discrete-time risk model (1) is generated by random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, and the net profit condition is not satisfied. In this case:
  • 1. If $\mathbb{E}{S_{N}}\gt \kappa N$, then $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
  • 2. If $\mathbb{E}{S_{N}}=\kappa N$ and $\mathbb{P}({S_{N}}=\kappa N)\lt 1$, then $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
  • 3. If $\mathbb{P}({S_{N}}=\kappa N)=1$, then random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$ are degenerate and
    \[\begin{aligned}{}& u+\kappa {n^{\ast }}-{\sum \limits_{k=1}^{{n^{\ast }}}}{X_{k}}\leqslant 0\Rightarrow \varphi (u)=0,\\ {} & u+\kappa {n^{\ast }}-{\sum \limits_{k=1}^{{n^{\ast }}}}{X_{k}}\gt 0\Rightarrow \varphi (u)=1,\end{aligned}\]
    where ${n^{\ast }}$ is the $n\in \{1,2,\dots ,N\}$ at which $\kappa n-{\textstyle\sum _{k=1}^{n}}{X_{k}}$ attains its minimum.
The last theorem provides an algorithm for the computation of finite time survival probability $\varphi (u,\hspace{0.1667em}T)$. Let us define
(19)
\[ {\varphi ^{(j)}}(u,\hspace{0.1667em}T)=\mathbb{P}\Bigg(\underset{1\leqslant n\leqslant T}{\sup }{\sum \limits_{i=1}^{n}}\big({X_{i}^{(j)}}-\kappa \big)\lt u\Bigg),\]
where $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$ and ${X_{i}^{(j)}}={X_{i+j-1}}$. It is easy to observe that ${\varphi ^{(N+j)}}(u,\hspace{0.1667em}T)={\varphi ^{(j)}}(u,\hspace{0.1667em}T)$.
Theorem 4.
For the finite time survival probability (3) of the N-seasonal discrete-time risk model defined in (1), the following holds:
(20)
\[\begin{aligned}{}\varphi (u,1)& =\sum \limits_{i\leqslant u+\kappa -1}{x_{i}^{(1)}},\hspace{1em}\varphi (u,2)=\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}},\dots ,\\ {} \varphi (u,\hspace{0.1667em}N)& =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}},\end{aligned}\]
and
(21)
\[\begin{aligned}{}& \varphi (u,T)\hspace{-0.1667em}=\hspace{-0.1667em}\hspace{-21.33955pt}\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-21.33955pt}\hspace{-0.1667em}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\varphi (u+\kappa N-{i_{1}}-\cdots -{i_{N}},\hspace{0.1667em}T-N),\end{aligned}\]
if $T\in \{N+1,\hspace{0.1667em}N+2,\dots \}$.
Moreover, for the defined probabilities (19), it holds that
(22)
\[\begin{aligned}{}& {\varphi ^{(j)}}(u,\hspace{0.1667em}1)={F_{{X_{j}}}}(u+\kappa -1),\end{aligned}\]
(23)
\[\begin{aligned}{}& {\varphi ^{(j)}}(u,\hspace{0.1667em}T)={\sum \limits_{i=0}^{u+\kappa -1}}{\varphi ^{(j+1)}}(u+\kappa -i,\hspace{0.1667em}T-1){x_{i}^{(j)}},\hspace{1em}T=2,\hspace{0.1667em}3,\dots \hspace{0.1667em}.\end{aligned}\]
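Recursions (22)–(23) translate directly into a memoized routine. The sketch below uses hypothetical two-season pmfs with $\kappa =2$; as a cross-check, with these numbers (20) gives $\varphi (0,2)={x_{0}^{(1)}}+{x_{1}^{(1)}}=0.8$, which the recursion reproduces.

```python
from functools import lru_cache

kappa, N = 2, 2
x = [[0.5, 0.3, 0.2],   # hypothetical pmf of X_1
     [0.6, 0.2, 0.2]]   # hypothetical pmf of X_2

def F(j, u):
    """Distribution function F_{X_{j+1}}(u) (0-based season index j)."""
    return sum(x[j][:max(u + 1, 0)])

@lru_cache(maxsize=None)
def phi_j(j, u, T):
    """phi^{(j+1)}(u, T) via recursions (22) and (23)."""
    if T == 1:
        return F(j, u + kappa - 1)                          # eq. (22)
    return sum(phi_j((j + 1) % N, u + kappa - i, T - 1) * x[j][i]
               for i in range(min(u + kappa, len(x[j]))))   # eq. (23)

print(phi_j(0, 0, 1), phi_j(0, 0, 2), phi_j(0, 1, 5))
```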
The formulated Theorems 1, 2, 3 and 4 are proved in Section 6.

4 Notes on the solution of linear system involving probabilities of ${\mathcal{M}_{1}},{\mathcal{M}_{2}},\dots ,{\mathcal{M}_{N}}$

In general, it is not easy to give an explicit solution of system (16) or even to prove that the system’s determinant of size $\kappa N\times \kappa N$ never equals zero. For instance, if $N=1$ and $\kappa \in \mathbb{N}$, then the system (16) is
(24)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{\kappa -1}}{\alpha _{1}^{j}}{F_{X}}(j)& \hspace{-14.22636pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{\alpha _{1}^{j+1}}{F_{X}}(j)& \hspace{-14.22636pt}\dots & \hspace{-14.22636pt}{\alpha _{1}^{\kappa -1}}{x_{0}}\\ {} \vdots & \hspace{-14.22636pt}\vdots & \hspace{-14.22636pt}\ddots & \hspace{-14.22636pt}\vdots \\ {} {\textstyle\sum \limits_{j=0}^{\kappa -1}}{\alpha _{\kappa -1}^{j}}{F_{X}}(j)& \hspace{-14.22636pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{\alpha _{\kappa -1}^{j+1}}{F_{X}}(j)& \hspace{-14.22636pt}\dots & \hspace{-14.22636pt}{\alpha _{\kappa -1}^{\kappa -1}}{x_{0}}\\ {} {\textstyle\sum \limits_{j=0}^{\kappa -1}}{x_{j}}(\kappa -j)& \hspace{-2.84544pt}{\textstyle\sum \limits_{j=0}^{\kappa -2}}{x_{j}}(\kappa -j-1)& \hspace{-7.11317pt}\dots & \hspace{-14.22636pt}{x_{0}}\end{array}\right)\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} \vdots \\ {} {m_{\kappa -2}^{(1)}}\\ {} {m_{\kappa -1}^{(1)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} \vdots \\ {} 0\\ {} \kappa -\mathbb{E}X\end{array}\right),\end{aligned}\]
where $X\stackrel{d}{=}{X_{1}}$, ${x_{i}}={x_{i}^{(1)}}$ and ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa -1}}$ are the simple roots of ${s^{\kappa }}={G_{X}}(s)$ in ${\overline{S}_{1}}(0)\setminus \{1\}$. Then, if ${x_{0}}\gt 0$, the determinant of the system’s matrix in (24) is a Vandermonde-like one,
\[ \frac{{x_{0}^{\kappa }}}{{(-1)^{\kappa +1}}}{\prod \limits_{j=1}^{\kappa -1}}({\alpha _{j}}-1)\prod \limits_{1\leqslant i\lt j\leqslant \kappa -1}({\alpha _{j}}-{\alpha _{i}})\ne 0,\]
and the probabilities ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots ,{m_{\kappa -1}^{(1)}}$ together with the survival probabilities $\varphi (0),\hspace{0.1667em}\varphi (1),\dots ,\varphi (\kappa )$ admit neat closed-form expressions, see [17, Thm. 4]. On the other hand, if $\kappa =1$ and $N\in \mathbb{N}$, then the system (16) is
(25)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{1}})}{{\alpha _{1}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{1}})}{{\alpha _{1}^{N-1}}}\\ {} {x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{2}})}{{\alpha _{2}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{2}})}{{\alpha _{2}^{N-1}}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {x_{0}^{(N)}}& \frac{{x_{0}^{(1)}}{G_{{X_{N}}}}({\alpha _{N-1}})}{{\alpha _{N-1}}}& \dots & \frac{{x_{0}^{(N-1)}}{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}({\alpha _{N-1}})}{{\alpha _{N-1}^{N-1}}}\\ {} {x_{0}^{(N)}}& {x_{0}^{(1)}}& \dots & {x_{0}^{(N-1)}}\end{array}\right)\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} \vdots \\ {} {m_{0}^{(N-1)}}\\ {} {m_{0}^{(N)}}\end{array}\right)\\ {} & \hspace{227.62204pt}=\left(\begin{array}{c}0\\ {} 0\\ {} \vdots \\ {} 0\\ {} N-\mathbb{E}{S_{N}}\end{array}\right),\end{aligned}\]
where ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{N-1}}$ are the simple roots of ${s^{N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$, see [19]. If $N=3$, the main matrix in (25) is
(26)
\[ A:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(3)}}& \frac{{x_{0}^{(1)}}{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}& \frac{{x_{0}^{(2)}}{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}\\ {} {x_{0}^{(3)}}& \frac{{x_{0}^{(1)}}{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}& \frac{{x_{0}^{(2)}}{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}\\ {} {x_{0}^{(3)}}& {x_{0}^{(1)}}& {x_{0}^{(2)}}\end{array}\right)\]
and one may check that for ${s_{0}^{(3)}}=\mathbb{P}({X_{1}}+{X_{2}}+{X_{3}}=0)\gt 0$, the matrix A is nonsingular if and only if
\[ \bigg(\frac{{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}-1\bigg)\ne \bigg(\frac{{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}-1\bigg)\]
where ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}}$ are the simple roots of ${s^{3}}={G_{{X_{1}}+{X_{2}}+{X_{3}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$. Computer experiments with various choices of the random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, $N\geqslant 3$, have not revealed any example in which the system’s matrix in (16) is singular. These observations motivate the following conjecture.
Conjecture 1.
Assume that ${s_{0}^{(N)}}=\mathbb{P}({X_{1}}+{X_{2}}+\cdots +{X_{N}}=0)\gt 0$ and ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ are the simple roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$. Then, the system’s matrix in (16) is nonsingular for all $\kappa ,\hspace{0.1667em}N\in \mathbb{N}$. In particular, if $N=3$ and $\kappa =1$, then
\[ \frac{{G_{{S_{3}}}}({\alpha _{1}})}{{\alpha _{1}^{3}}}=\frac{{G_{{S_{3}}}}({\alpha _{2}})}{{\alpha _{2}^{3}}}=1\]
implies
\[ \bigg(\frac{{G_{{X_{3}}}}({\alpha _{1}})}{{\alpha _{1}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}-1\bigg)\ne \bigg(\frac{{G_{{X_{3}}}}({\alpha _{2}})}{{\alpha _{2}}}-1\bigg)\bigg(\frac{{G_{{X_{3}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}-1\bigg)\]
and consequently $\det A\ne 0$.
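Conjecture 1 is easy to probe numerically when ${X_{1}},{X_{2}},{X_{3}}$ have finite support, since ${s^{3}}={G_{{S_{3}}}}(s)$ then becomes a polynomial equation. The following sketch (in Python; the three probability mass functions are arbitrarily chosen illustrative assumptions, not taken from the paper) locates the roots ${\alpha _{1}},{\alpha _{2}}$ and evaluates $\det A$ for the matrix (26):

```python
import numpy as np

P = np.polynomial.Polynomial
# hypothetical pmfs of X1, X2, X3 on {0, 1, 2}; E(S_3) = 1.7 < 3 and s_0^(3) > 0
x1, x2, x3 = np.array([0.5, 0.3, 0.2]), np.array([0.6, 0.2, 0.2]), np.array([0.7, 0.2, 0.1])
G1, G2, G3 = P(x1), P(x2), P(x3)
p = G1 * G2 * G3 - P.basis(3)            # G_{S_3}(s) - s^3

# roots in the closed unit disk, excluding the simple root s = 1 (cf. Lemma 4)
alphas = [r for r in p.roots() if abs(r) <= 1 + 1e-9 and abs(r - 1) > 1e-6]
assert len(alphas) == 2

rows = [[x3[0], x1[0] * G3(a) / a, x2[0] * (G3 * G1)(a) / a ** 2] for a in alphas]
A = np.array(rows + [[x3[0], x1[0], x2[0]]])  # the matrix (26)
print(abs(np.linalg.det(A)) > 1e-12)          # nonsingular in this example
```

Trying other admissible probability mass functions in place of the ones above leads to the same outcome, in line with the computer experiments mentioned before the conjecture.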
Let us comment on how the system (16) gets modified if there are multiple roots among ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{\kappa N-1}}$ and/or the random variable ${S_{N}}$ does not attain its “small” values. It is clear that $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$ implies at least one column of zeros in the main matrix of (16). Note that $\mathbb{P}({S_{N}}\geqslant \kappa N)=1$ would violate the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$. Also, $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$ always leaves fewer terms on the right-hand side of the recurrence (5), as some values of the probability mass function then equal zero. For instance, if $\mathbb{P}({S_{N}}\geqslant \kappa N-1)=1$, then $\varphi (0)={s_{\kappa N-1}^{(N)}}\varphi (1)$ and $\varphi (0)$ is the only value we must know in order to find $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $ by the recurrence (5). Thus, when $\mathbb{P}({S_{N}}\geqslant j)=1$ for some $j\in \{1,\hspace{0.1667em}2,\dots ,\kappa N-1\}$, we have to adjust the main matrix in (16) according to the equalities (13) and (11), excluding any columns of zeros.
In addition, when some roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ are of multiplicity $r\in \{2,\hspace{0.1667em}3,\dots ,\kappa N-1\}$, then, to avoid identical rows in (16), we must replace the corresponding rows with the derivatives provided in equality (14).
Once again, computational experiments with various chosen random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$, $N\geqslant 3$, and $\kappa \geqslant 1$ have not produced any example in which such a modified (due to multiple roots and/or ${S_{N}}$ not attaining its “small” values) matrix of system (16) is singular.

5 Lemmas

In this section, we formulate and prove several auxiliary lemmas that are later used to prove theorems formulated in Section 3. Some of the presented lemmas are direct generalizations of statements from [19, Sec. 5], where they are proved for ${X_{j}}-1$, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, while here we need them for ${X_{j}}-\kappa $, $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$, $\kappa \in \mathbb{N}$.
Lemma 1.
If the net profit condition is satisfied, i.e. $\mathbb{E}{S_{N}}\lt \kappa N$, then
\[ \underset{u\to \infty }{\lim }\mathbb{P}({\mathcal{M}_{j}}\lt u)=1,\]
for all $j\in \{1,\hspace{0.1667em}2,\dots ,N\}$.
Proof.
We prove the case $j=1$ only and note that the other cases can be proven similarly.
According to the strong law of large numbers,
\[\begin{aligned}{}& \frac{1}{n}{\sum \limits_{i=1}^{n}}({X_{i}}-\kappa )=\frac{1}{N}\Bigg(\frac{N}{n}{\sum \limits_{\begin{array}{c}i=1\\ {} i\equiv 1\hspace{0.1667em}\mathrm{mod}\hspace{0.1667em}N\end{array}}^{n}}({X_{i}}-\kappa )+\cdots +\frac{N}{n}{\sum \limits_{\begin{array}{c}i=1\\ {} i\equiv N\hspace{0.1667em}\mathrm{mod}\hspace{0.1667em}N\end{array}}^{n}}({X_{i}}-\kappa )\Bigg)\\ {} & \hspace{1em}\underset{n\to \infty }{\longrightarrow }\frac{1}{N}\big((\mathbb{E}{X_{1}}-\kappa )+\cdots +(\mathbb{E}{X_{N}}-\kappa )\big)=\frac{\mathbb{E}{S_{N}}-\kappa N}{N}=:-\mu \lt 0\hspace{2.5pt}\text{a.s.}\end{aligned}\]
Therefore,
\[ \mathbb{P}\Bigg(\underset{j\geqslant n}{\sup }\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg)\underset{n\to \infty }{\longrightarrow }1.\]
Consequently, for any arbitrarily small $\varepsilon \gt 0$, there exists a number ${N_{\varepsilon }}\in \mathbb{N}$ such that
\[\begin{aligned}{}& \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg)\\ {} & \hspace{1em}\geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt -\frac{\mu }{2}\Bigg\}\Bigg)\\ {} & \hspace{1em}\geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=n}^{\infty }}\Bigg\{\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg\}\Bigg)\\ {} & \hspace{1em}=\mathbb{P}\Bigg(\underset{j\geqslant n}{\sup }\Bigg|\frac{1}{j}{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )+\mu \Bigg|\lt \frac{\mu }{2}\Bigg)\geqslant 1-\varepsilon \end{aligned}\]
for all $n\geqslant {N_{\varepsilon }}$.
It follows that for any such ε and any $u\in \mathbb{N}$ we have
\[\begin{aligned}{}\mathbb{P}({\mathcal{M}_{1}}\lt u)& =\mathbb{P}\Bigg({\bigcap \limits_{j=1}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)\\ {} & \geqslant \mathbb{P}\Bigg(\Bigg\{{\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg\}\cap \Bigg\{{\bigcap \limits_{j={N_{\varepsilon }}}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg\}\Bigg)\\ {} & \geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)+\mathbb{P}\Bigg({\bigcap \limits_{j={N_{\varepsilon }}}^{\infty }}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\leqslant 0\Bigg\}\Bigg)-1\\ {} & \geqslant \mathbb{P}\Bigg({\bigcap \limits_{j=1}^{{N_{\varepsilon }}-1}}\Bigg\{{\sum \limits_{i=1}^{j}}({X_{i}}-\kappa )\lt u\Bigg\}\Bigg)-\varepsilon .\end{aligned}\]
The last inequality implies
\[ \underset{u\to \infty }{\lim }\mathbb{P}({\mathcal{M}_{1}}\lt u)\geqslant 1-\varepsilon ,\]
where $\varepsilon \gt 0$ is arbitrarily small, and the assertion of the lemma follows.  □
Lemma 2.
If the net profit condition is satisfied, i.e. $\mathbb{E}{S_{N}}\lt \kappa N$, it holds that ${({\mathcal{M}_{j}}+{X_{j-1}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{j-1}}$, for all $j=2,\hspace{0.1667em}3,\dots ,N$, and ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$, where ${\tilde{X}_{N}}$ is an independent copy of ${X_{N}}$.
Proof.
We prove the equality ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$ only and note that the other ones can be proved by the same arguments. According to Lemma 1, $\mathbb{P}({\mathcal{M}_{1}}\lt \infty )=1$. Let us denote ${\hat{X}_{j}}={X_{j}}-\kappa $ for all $j\in \{1,\hspace{0.1667em}2,\dots ,N-1\}$ and let ${\hat{X}_{N}}$ be an independent copy of ${X_{N}}-\kappa $. Then
\[\begin{aligned}{}& {({\mathcal{M}_{1}}+{\hat{X}_{N}})^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \big\{0,\hspace{0.1667em}\max \{{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\dots \}\big\}+{\hat{X}_{N}}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{0,\hspace{0.1667em}{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\dots \}+{\hat{X}_{N}}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}}+{\hat{X}_{2}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{1}}+{\hat{X}_{2}}+{\hat{X}_{3}},\hspace{0.1667em}\dots \}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}{\big(\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}}+{\hat{X}_{N+2}},\dots \}\big)^{+}}\\ {} & \hspace{1em}\stackrel{d}{=}\max \big\{0,\max \{{\hat{X}_{N}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}},\hspace{0.1667em}{\hat{X}_{N}}+{\hat{X}_{N+1}}+{\hat{X}_{N+2}},\dots \}\big\}\\ {} & \hspace{1em}\stackrel{d}{=}{\mathcal{M}_{N}}.\end{aligned}\]
 □
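The distributional identity of Lemma 2 can also be illustrated by simulation. The sketch below (in Python; the two probability mass functions, the horizon T and the sample size are illustrative assumptions) compares the empirical distribution of ${({\mathcal{M}_{1}}+{\tilde{X}_{2}}-\kappa )^{+}}$ with that of ${\mathcal{M}_{2}}$ when $\kappa =1$ and $N=2$; truncating each supremum at a finite horizon is harmless because of the negative drift:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, T, n = 1, 200, 100_000
support = np.arange(3)
pmfs = [np.array([0.5, 0.3, 0.2]),    # hypothetical pmf of X_i in odd seasons
        np.array([0.6, 0.3, 0.1])]    # hypothetical pmf of X_i in even seasons

def sup_walk(start):
    """max(0, sup_n sum_{i=1}^n (X_i - kappa)) with the seasons starting at `start`."""
    sums = np.zeros(n)
    best = np.zeros(n)
    for t in range(T):
        p = pmfs[(start - 1 + t) % 2]
        sums += rng.choice(support, size=n, p=p) - kappa
        np.maximum(best, sums, out=best)
    return best

M1, M2 = sup_walk(1), sup_walk(2)
X2_copy = rng.choice(support, size=n, p=pmfs[1])   # independent copy of X_2
lhs = np.maximum(M1 + X2_copy - kappa, 0.0)        # (M_1 + X~_2 - kappa)^+
for u in (1, 2, 3):                                # empirical CDFs should agree
    print(u, round(float(np.mean(lhs < u)), 2), round(float(np.mean(M2 < u)), 2))
```

With the sample size above, the two empirical distribution functions agree to within Monte Carlo error at every point of the support.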
Lemma 3.
Let $s\in {\overline{S}_{1}}(0)\setminus \{0\}$ and suppose that the net profit condition holds. Then the probability-generating functions of ${X_{1}},{X_{2}},\dots ,{X_{N}}$ and ${\mathcal{M}_{1}},\hspace{0.1667em}{\mathcal{M}_{2}},\dots ,\hspace{0.1667em}{\mathcal{M}_{N}}$ are related in the following way:
(27)
\[ \hspace{-11.38092pt}\left\{\begin{array}{l@{\hskip10.0pt}l}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{1}}}}(s)-{G_{{X_{1}}}}(s){G_{{\mathcal{M}_{2}}}}(s)\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}({s^{\kappa }}-{s^{i+j}})\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{2}}}}(s)-{G_{{X_{2}}}}(s){G_{{\mathcal{M}_{3}}}}(s)\\ {} \hspace{1em}& \hspace{0.1667em}\hspace{0.1667em}\vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}({s^{\kappa }}-{s^{i+j}})\hspace{-8.0pt}\hspace{1em}& ={s^{\kappa }}{G_{{\mathcal{M}_{N-1}}}}(s)-{G_{{X_{N-1}}}}(s){G_{{\mathcal{M}_{N}}}}(s)\end{array}\right..\]
Proof.
Let us demonstrate how the first equality in (27) is derived and note that the remaining ones follow the same logic.
By Lemma 2, the equality of distributions ${({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}\stackrel{d}{=}{\mathcal{M}_{N}}$ implies the equality of probability-generating functions ${G_{{\mathcal{M}_{N}}}}(s)={G_{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}(s)$, where ${\tilde{X}_{N}}$ denotes an independent copy of ${X_{N}}$. Then, applying the law of total expectation for the last equality, we obtain
\[\begin{aligned}{}{G_{{\mathcal{M}_{N}}}}(s)& =\mathbb{E}{s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}=\mathbb{E}\big(\mathbb{E}\big({s^{{({\mathcal{M}_{1}}+{\tilde{X}_{N}}-\kappa )^{+}}}}|{\mathcal{M}_{1}}\big)\big)\\ {} & ={\sum \limits_{i=0}^{\infty }}{m_{i}^{(1)}}\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}+{G_{{X_{N}}}}(s){s^{-\kappa }}{\sum \limits_{i=\kappa }^{\infty }}{m_{i}^{(1)}}{s^{i}}\\ {} & ={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}\big(\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}-{s^{i-\kappa }}{G_{{X_{N}}}}(s)\big)+{s^{-\kappa }}{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s).\end{aligned}\]
Multiplying both sides of the last equality by ${s^{\kappa }}$ when $s\ne 0$ and observing that
\[ {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}\big({s^{\kappa }}\mathbb{E}{s^{{({X_{N}}-\kappa +i)^{+}}}}-{s^{i}}{G_{{X_{N}}}}(s)\big)={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)\]
we get the desired result.  □
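For instance, when $\kappa =1$, every interior double sum in (27) collapses to the single term with $i=j=0$, and the first equality reduces to
\[ {m_{0}^{(1)}}{x_{0}^{(N)}}(s-1)=s{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s),\]
which matches the form of the system considered in [19].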
The next lemma provides the quantity and location of the roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$.
Lemma 4.
Assume that the net profit condition ${G^{\prime }_{{S_{N}}}}(1)=\mathbb{E}{S_{N}}\lt \kappa N$ is valid. Then there are exactly $\kappa N-1$ roots, counted with their multiplicities, of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$.
Proof.
We follow the proof of [18, Lemma 9]. Due to the estimate
\[ |{G_{{S_{N}}}}(s)|\leqslant 1\lt \lambda |s{|^{\kappa N}}\]
on the boundary $|s|=1$ when $\lambda \gt 1$, Rouché’s theorem implies that both functions ${G_{{S_{N}}}}(s)-\lambda {s^{\kappa N}}$ and $\lambda {s^{\kappa N}}$ have the same number of roots in $|s|\lt 1$, and this number is $\kappa N$ by the fundamental theorem of algebra. As $\lambda \to {1^{+}}$, some roots of ${G_{{S_{N}}}}(s)-\lambda {s^{\kappa N}}$ remain in $|s|\lt 1$ and some migrate to the boundary $|s|=1$. Obviously, $s=1$ is always a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$, and it is a simple root because the net profit condition holds, i.e.
\[ {\big({G_{{S_{N}}}}(s)-{s^{\kappa N}}\big)^{\prime }}{|_{s=1}}=\mathbb{E}{S_{N}}-\kappa N\lt 0.\]
Thus, there remain $\kappa N-1$ roots of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$; additionally, one can say that s, such that $|s|=1$, $s\ne 1$, is a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s)$ if the greatest common divisor of $\kappa N$ and all the powers of s appearing in ${G_{{S_{N}}}}(s)$ is greater than one.  □
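For probability mass functions with finite support, the root count of Lemma 4 is easy to confirm numerically, since ${s^{\kappa N}}-{G_{{S_{N}}}}(s)$ is then a polynomial. A sketch for $\kappa =2$, $N=2$ (in Python; the two probability mass functions are illustrative assumptions):

```python
import numpy as np

P = np.polynomial.Polynomial
kappa, N = 2, 2
G1 = P([0.4, 0.3, 0.2, 0.1])          # hypothetical pgf of X_1, E X_1 = 1.0
G2 = P([0.5, 0.3, 0.1, 0.1])          # hypothetical pgf of X_2, E X_2 = 0.8
GS = G1 * G2                          # pgf of S_2 = X_1 + X_2
assert GS.deriv()(1.0) < kappa * N    # net profit condition E S_2 < 4

p = GS - P.basis(kappa * N)           # G_{S_2}(s) - s^4, a degree-6 polynomial
inside = [r for r in p.roots() if abs(r) <= 1 + 1e-9]
print(len(inside))                    # kappa*N = 4 roots in the closed disk, s = 1 among them
```

Here four of the six polynomial roots lie in the closed unit disk: the simple root $s=1$ and $\kappa N-1=3$ roots in $s\in {\overline{S}_{1}}(0)\setminus \{1\}$, exactly as the lemma asserts.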

6 Proofs

In this section, we prove all of the theorems formulated in Section 3.
Proof of Theorem 1.
We first prove equality (12). To derive (12), we use the system of equations (27) from Lemma 3. By the conditions of Lemma 3, $s\ne 0$, and we rearrange the system (27) by multiplying its first equality by 1, the second one by ${G_{{X_{N}}}}(s)/{s^{\kappa }}$, the third one by ${G_{{X_{N}}+{X_{1}}}}(s)/{s^{2\kappa }}$, and so on up to the last equality, which is multiplied by ${G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)/{s^{\kappa (N-1)}}$. We then add up all these equations and obtain
(28)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big({s^{\kappa }}-{s^{i+j}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big({s^{\kappa }}-{s^{i+j}}\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)+\frac{{G_{{X_{N}}}}(s)}{{s^{\kappa }}}\big({s^{\kappa }}{G_{{\mathcal{M}_{1}}}}(s)-{G_{{X_{1}}}}(s){G_{{\mathcal{M}_{2}}}}(s)\big)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(s)}{{s^{2\kappa }}}\big({s^{\kappa }}{G_{{\mathcal{M}_{2}}}}(s)-{G_{{X_{2}}}}(s){G_{{\mathcal{M}_{3}}}}(s)\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(s)}{{s^{\kappa (N-1)}}}\big({s^{\kappa }}{G_{{\mathcal{M}_{N-1}}}}(s)-{G_{{X_{N-1}}}}(s){G_{{\mathcal{M}_{N}}}}(s)\big)\\ {} & \hspace{1em}={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa (N-1)}}}{G_{{\mathcal{M}_{N}}}}(s)={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)\bigg(1-\frac{{G_{{S_{N}}}}(s)}{{s^{\kappa N}}}\bigg).\end{aligned}\]
Here we have used the fact that if random variables X and Y are independent, then their probability-generating functions satisfy
\[ {G_{X+Y}}(s)={G_{X}}(s){G_{Y}}(s).\]
Thus, the equality (12) is proved. We now derive (13).
It is obvious that the right-hand side of (28) equals zero if we set $s=\alpha $, where α is a root of ${G_{{S_{N}}}}(s)={s^{\kappa N}},\hspace{0.1667em}s\in {\overline{S}_{1}}(0)\setminus \{1\}$. We then divide both sides of (28) by $\alpha -1$, noting that
\[ \frac{{\alpha ^{\kappa }}-{\alpha ^{i+j}}}{\alpha -1}={\alpha ^{j+i}}+{\alpha ^{j+i+1}}+\cdots +{\alpha ^{\kappa -1}},\hspace{1em}\alpha \ne 1,\]
and get
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}{\sum \limits_{l=j+i}^{\kappa -1}}{\alpha ^{l}}\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N}}}}(j-i)+\frac{{G_{{X_{N}}}}(\alpha )}{{\alpha ^{\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{1}}}}(j-i)\\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}}}(\alpha )}{{\alpha ^{2\kappa }}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{2}}}}(j-i)+\cdots \\ {} & \hspace{1em}\hspace{1em}+\frac{{G_{{X_{N}}+{X_{1}}+\cdots +{X_{N-2}}}}(\alpha )}{{\alpha ^{\kappa (N-1)}}}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=i}^{\kappa -1}}{\alpha ^{j}}{F_{{X_{N-1}}}}(j-i)=0,\end{aligned}\]
which is the claimed equality (13).
We now consider equation (14). Since $s\ne 1$, we divide both sides of (28) by $s-1$ and rewrite its right-hand side as
\[ {s^{\kappa (1-N)}}{G_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa N}}-{G_{{S_{N}}}}(s)\big)/(s-1).\]
Clearly, the derivatives
\[ \frac{{d^{n}}}{d{s^{n}}}\big({s^{\kappa (1-N)}}{G_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa N}}-{G_{{S_{N}}}}(s)\big)/(s-1)\big){\bigg|_{s=\alpha }}=0\]
for all $n\in \{1,\hspace{0.1667em}2,\dots ,r-1\}$ if α is a root of ${s^{\kappa N}}={G_{{S_{N}}}}(s),\hspace{0.1667em}s\in {\overline{S}_{1}}(0)\setminus \{1\}$ of multiplicity $r\in \{2,\hspace{0.1667em}3,\dots ,\kappa N-1\}$. Thus, the equality (14) is nothing but the n-th derivative with respect to s of equation (28) (divided by $s-1$) at $s=\alpha $.
We now prove equality (11) in Theorem 1. Differentiating both sides of equation (28) with respect to s gives
(29)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)+\cdots \\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big(\kappa {s^{\kappa -1}}-(i+j){s^{i+j-1}}\big)\\ {} & \hspace{1em}={G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)\\ {} & \hspace{1em}\hspace{1em}+{G_{{\mathcal{M}_{N}}}}(s)\big(\kappa {s^{\kappa -1}}-{G_{{S_{N}}}}(s)\kappa (1-N){s^{\kappa (1-N)-1}}-{G^{\prime }_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big).\end{aligned}\]
We continue the proof by letting $s\to {1^{-}}$ in (29). Because the net profit condition holds, i.e. $\mathbb{E}{S_{N}}\lt \kappa N$, and $\mathbb{P}({\mathcal{M}_{N}}\lt \infty )=1$, we obtain
(30)
\[\begin{aligned}{}& {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big(\kappa -(i+j)\big)+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}\big(\kappa -(i+j)\big)\\ {} & \hspace{1em}\hspace{1em}+{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}\big(\kappa \hspace{-0.1667em}-\hspace{-0.1667em}(i\hspace{-0.1667em}+\hspace{-0.1667em}j)\big)\hspace{-0.1667em}+\hspace{-0.1667em}\cdots \hspace{-0.1667em}+\hspace{-0.1667em}{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}\big(\kappa -(i+j)\big)\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }{G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)+\kappa N-\mathbb{E}{S_{N}}.\end{aligned}\]
To compute the limit in (30), there are two separate cases to examine: $\mathbb{E}{\mathcal{M}_{N}}\lt \infty $ and $\mathbb{E}{\mathcal{M}_{N}}=\infty $. If $\mathbb{E}{\mathcal{M}_{N}}\lt \infty $, then the limit in (30) is zero. However, this limit is zero even if $\mathbb{E}{\mathcal{M}_{N}}=\infty $. Indeed, if $\mathbb{E}{\mathcal{M}_{N}}=\infty $, then by L’Hospital’s rule we get
\[\begin{aligned}{}& \underset{s\to {1^{-}}}{\lim }{G^{\prime }_{{\mathcal{M}_{N}}}}(s)\big({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}\big)=\underset{s\to {1^{-}}}{\lim }\frac{{s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}}}{1/{G^{\prime }_{{\mathcal{M}_{N}}}}(s)}\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }\frac{{({s^{\kappa }}-{G_{{S_{N}}}}(s){s^{\kappa (1-N)}})^{\prime }}}{{(1/{G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{\prime }}}\\ {} & \hspace{1em}=\underset{s\to {1^{-}}}{\lim }\big(\kappa {s^{\kappa -1}}-{G^{\prime }_{{S_{N}}}}(s){s^{\kappa (1-N)}}-{G_{{S_{N}}}}(s)\kappa (1-N){s^{\kappa (1-N)-1}}\big)\frac{{({G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{2}}}{-{G^{\prime\prime }_{{\mathcal{M}_{N}}}}(s)}\\ {} & \hspace{1em}=(\kappa N-\mathbb{E}{S_{N}})\cdot 0=0,\end{aligned}\]
because
(31)
\[ \underset{s\to {1^{-}}}{\lim }\frac{{({G^{\prime }_{{\mathcal{M}_{N}}}}(s))^{2}}}{{G^{\prime\prime }_{{\mathcal{M}_{N}}}}(s)}=0,\]
see [19, Lem. 5]. Thus, the limit in (30) is zero, and the equality (11) in Theorem 1 follows.
It remains to prove the equalities in system (15). In short, every equality in system (15) is the corresponding equality from (27) expanded at $s=0$. Let us demonstrate the derivation of the first equality in (15) in detail and note that the remaining ones are derived analogously. We need to show that the first equality in (27)
(32)
\[ {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)={s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)-{G_{{X_{N}}}}(s){G_{{\mathcal{M}_{1}}}}(s)\]
implies (the first one in (15))
\[ {m_{n}^{(1)}}{x_{0}^{(N)}}={m_{n-\kappa }^{(N)}}-{\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(N)}}-{\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}},\hspace{1em}n=\kappa ,\hspace{0.1667em}\kappa +1,\dots ,\]
or, equivalently,
(33)
\[ {\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}}={m_{n-\kappa }^{(N)}}-{\sum \limits_{i=0}^{n}}{m_{i}^{(1)}}{x_{n-i}^{(N)}},\hspace{1em}n=\kappa ,\hspace{0.1667em}\kappa +1,\dots \hspace{0.1667em}.\]
Equality (33) is implied by (32) because of the following equalities:
\[\begin{aligned}{}\frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\Bigg({\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}\big({s^{\kappa }}-{s^{i+j}}\big)\Bigg){\bigg|_{s=0}}& ={\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}{1_{\{n=\kappa \}}},\\ {} \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big({s^{\kappa }}{G_{{\mathcal{M}_{N}}}}(s)\big){\bigg|_{s=0}}& ={m_{n-\kappa }^{(N)}},\\ {} \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big({G_{{\mathcal{M}_{1}}}}(s){G_{{X_{N}}}}(s)\big){\bigg|_{s=0}}& ={\sum \limits_{i=0}^{n}}{m_{i}^{(1)}}{x_{n-i}^{(N)}},\end{aligned}\]
when $n=\kappa ,\hspace{0.1667em}\kappa +1,\dots \hspace{0.1667em}$. The proof of Theorem 1 is finished.  □
Proof of Theorem 2.
Let us rewrite the system (27) as
(34)
\[\begin{aligned}{}& \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{s^{\kappa }}& -{G_{{X_{1}}}}(s)& 0& \dots & 0& 0\\ {} 0& {s^{\kappa }}& -{G_{{X_{2}}}}(s)& \dots & 0& 0\\ {} \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ {} 0& 0& 0& \dots & {s^{\kappa }}& -{G_{{X_{N-1}}}}(s)\\ {} -{G_{{X_{N}}}}(s)& 0& 0& \dots & 0& {s^{\kappa }}\end{array}\right)\left(\begin{array}{c}{G_{{\mathcal{M}_{1}}}}(s)\\ {} {G_{{\mathcal{M}_{2}}}}(s)\\ {} \vdots \\ {} {G_{{\mathcal{M}_{N-1}}}}(s)\\ {} {G_{{\mathcal{M}_{N}}}}(s)\end{array}\right)\\ {} & \hspace{1em}=\left(\begin{array}{c}{\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(1)}}({s^{\kappa }}-{s^{i+j}})\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(3)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(2)}}({s^{\kappa }}-{s^{i+j}})\\ {} \vdots \\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(N)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N-1)}}({s^{\kappa }}-{s^{i+j}})\\ {} {\textstyle\sum \limits_{i=0}^{\kappa -1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{\kappa -1-i}}{x_{j}^{(N)}}({s^{\kappa }}-{s^{i+j}})\end{array}\right)\end{aligned}\]
and denote this system by $AB=C$. The determinant of the main matrix in (34) is
\[ \det A={s^{\kappa N}}-{G_{{S_{N}}}}(s).\]
Thus, the main matrix in (34) is invertible for all s such that ${s^{\kappa N}}\ne {G_{{S_{N}}}}(s)$, in which case $B={A^{-1}}C$. Therefore, these observations and equality (18) imply
\[ \Xi (s)=\frac{{G_{{\mathcal{M}_{1}}}}(s)}{1-s}=\frac{1}{{s^{\kappa N}}-{G_{{S_{N}}}}(s)}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\boldsymbol{M}_{11}}& {\boldsymbol{M}_{21}}& \dots & {\boldsymbol{M}_{N1}}\end{array}\right)\frac{C}{1-s},\]
where ${\boldsymbol{M}_{11}},\hspace{0.1667em}{\boldsymbol{M}_{21}},\dots ,{\boldsymbol{M}_{N1}}$ are the minors of A and C is the column vector of the right-hand side of (34).  □
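The determinant identity $\det A={s^{\kappa N}}-{G_{{S_{N}}}}(s)$ used above is easy to verify numerically for small N. A sketch for $N=3$, $\kappa =2$ (in Python; the probability mass functions and the evaluation points s are illustrative assumptions):

```python
import numpy as np

P = np.polynomial.Polynomial
kappa, N = 2, 3
G = [P([0.5, 0.3, 0.2]), P([0.6, 0.2, 0.2]), P([0.7, 0.2, 0.1])]  # hypothetical pgfs
GS = G[0] * G[1] * G[2]                                           # pgf of S_3

for s in (0.4, -0.7, 0.9 + 0.3j):   # a few evaluation points
    # the cyclic bidiagonal matrix of system (34) for N = 3
    A = np.array([[s ** kappa, -G[0](s), 0],
                  [0, s ** kappa, -G[1](s)],
                  [-G[2](s), 0, s ** kappa]])
    assert np.isclose(np.linalg.det(A), s ** (kappa * N) - GS(s))
print("determinant identity confirmed")
```

Expanding $\det A$ along the first column shows the same thing symbolically: the only two nonzero contributions are ${s^{\kappa N}}$ and $-{G_{{X_{1}}}}(s){G_{{X_{2}}}}(s)\cdots {G_{{X_{N}}}}(s)$.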
Proof of Theorem 3.
We first show that $\mathbb{E}{S_{N}}\gt \kappa N$ implies $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. The recurrence (5) yields
(35)
\[\begin{aligned}{}& \varphi (u)\\ {} & =\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa (N-1)-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-28.45274pt}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\cdots \mathbb{P}({X_{N}}={i_{N}})\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & ={\sum \limits_{i=1}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)\\ {} & \hspace{1em}-\sum \limits_{\substack{{i_{1}}\leqslant u+\kappa -1\\ {} {i_{1}}+{i_{2}}\leqslant u+2\kappa -1\\ {} \vdots \\ {} u+\kappa (N-1)\leqslant {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{-28.45274pt}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & \hspace{1em}\hspace{1em}\vdots \\ {} & \hspace{1em}-\sum \limits_{\substack{u+\kappa \leqslant {i_{1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}\leqslant u+\kappa N-1\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N-1}}\leqslant u+\kappa N-1\\ {} {i_{1}}+{i_{2}}+\cdots +{i_{N}}\leqslant u+\kappa N-1}}\hspace{0.0pt}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\cdots {x_{{i_{N}}}^{(N)}}\hspace{0.1667em}\varphi \Bigg(u+\kappa N-{\sum \limits_{j=1}^{N}}{i_{j}}\Bigg)\\ {} & ={\sum \limits_{i=1}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{i=1}^{\kappa (N-1)}}{\mu _{i}}(u)\varphi (i),\end{aligned}\]
where ${\mu _{i}}(u)$ for each $i\in \{1,\hspace{0.1667em}2,\dots ,\kappa (N-1)\}$ are coefficients consisting of products of the probability mass functions of the random variables ${X_{1}},{X_{2}},\dots ,{X_{N}}$. For instance, if $N=2$ and $\kappa =1$, then
\[ \varphi (u)=\sum \limits_{\substack{{i_{1}}\leqslant u\\ {} {i_{1}}+{i_{2}}\leqslant u+1}}{x_{{i_{1}}}^{(1)}}{x_{{i_{2}}}^{(2)}}\varphi (u+2-{i_{1}}-{i_{2}})={\sum \limits_{i=1}^{u+2}}{s_{u+2-i}^{(2)}}\varphi (i)-{x_{u+1}^{(1)}}{x_{0}^{(2)}}\varphi (1).\]
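Since this rearrangement is purely algebraic, it holds with an arbitrary sequence in place of φ, which makes it easy to verify numerically. A sketch for $N=2$, $\kappa =1$ (in Python; the probability mass functions and the stand-in values for φ are randomly generated, purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
u = 3
x1 = rng.dirichlet(np.ones(8))      # hypothetical pmf of X_1 on {0,...,7}
x2 = rng.dirichlet(np.ones(8))      # hypothetical pmf of X_2 on {0,...,7}
phi = rng.random(u + 3)             # arbitrary stand-ins for phi(0),...,phi(u+2)

# left-hand side: sum over i1 <= u and i1 + i2 <= u + 1
lhs = sum(x1[i] * x2[j] * phi[u + 2 - i - j]
          for i in range(u + 1) for j in range(u + 2 - i))

s2 = np.convolve(x1, x2)            # pmf of S_2 = X_1 + X_2
rhs = sum(s2[u + 2 - i] * phi[i] for i in range(1, u + 3)) - x1[u + 1] * x2[0] * phi[1]
assert np.isclose(lhs, rhs)
print("recurrence rearrangement confirmed")
```

The subtracted term ${x_{u+1}^{(1)}}{x_{0}^{(2)}}\varphi (1)$ is precisely the single pair $({i_{1}},{i_{2}})=(u+1,0)$ that the full convolution admits but the constraint ${i_{1}}\leqslant u$ forbids.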
Setting ${\mu _{0}}(u):={s_{u+\kappa N}^{(N)}}$ and ${\mu _{j}}(u):=0$ when $j\gt \kappa (N-1)$, the equality in (35) becomes
\[ \varphi (u)={\sum \limits_{i=0}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{i=0}^{\kappa N-1}}{\mu _{i}}(u)\varphi (i).\]
Summing both sides of the last equality over u from 0 to some sufficiently large natural number v, we obtain
(36)
\[ {\sum \limits_{u=0}^{v}}\varphi (u)={\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{u+\kappa N}}{s_{u+\kappa N-i}^{(N)}}\varphi (i)-{\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{\kappa N-1}}{\mu _{i}}(u)\varphi (i).\]
We now change the order of summation in (36),
\[ {\sum \limits_{u=0}^{v}}{\sum \limits_{i=0}^{u+\kappa N}}(\cdot )={\sum \limits_{i=0}^{\kappa N-1}}{\sum \limits_{u=0}^{v}}(\cdot )+{\sum \limits_{i=\kappa N}^{v+\kappa N}}{\sum \limits_{u=i-\kappa N}^{v}}(\cdot ),\]
and obtain
\[\begin{aligned}{}& {\sum \limits_{u=0}^{v+\kappa N}}\varphi (u)-{\sum \limits_{u=v+1}^{v+\kappa N}}\varphi (u)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=0}^{v}}{s_{u+\kappa N-i}^{(N)}}+{\sum \limits_{i=\kappa N}^{v+\kappa N}}\varphi (i){\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}-{\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=0}^{v}}{\mu _{i}}(u).\end{aligned}\]
Subtracting ${\textstyle\sum _{i=0}^{v+\kappa N}}\varphi (i){\textstyle\sum _{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}$ from both sides of the last equation and rearranging, we get
\[\begin{aligned}{}& {\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\Bigg(1-{\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}\Bigg)-{\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{v}}{s_{u+\kappa N-i}^{(N)}}-{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg)-{\sum \limits_{i=0}^{\kappa N-1}}\varphi (i){\sum \limits_{u=i-\kappa N}^{v}}{s_{u+\kappa N-i}^{(N)}}\end{aligned}\]
or
\[\begin{aligned}{}& {\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)-{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\Bigg(1-{\sum \limits_{u=0}^{v+\kappa N-i}}{s_{u}^{(N)}}\Bigg)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{\kappa N-i-1}}{s_{u}^{(N)}}+{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg),\end{aligned}\]
which implies
(37)
\[\begin{aligned}{}\hspace{-14.22636pt}& {\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)-{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg(\mathbb{P}({S_{N}}\leqslant \kappa N-i-1)+{\sum \limits_{u=0}^{v}}{\mu _{i}}(u)\Bigg).\end{aligned}\]
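The summation-order swap used to reach (37) can be verified numerically with an arbitrary summand; the function f and the values of κ, N, v below are arbitrary test choices.

```python
# Check: sum_{u=0}^{v} sum_{i=0}^{u+kN} f(u,i)
#      = sum_{i=0}^{kN-1} sum_{u=0}^{v} f(u,i)
#      + sum_{i=kN}^{v+kN} sum_{u=i-kN}^{v} f(u,i).
kappa, N, v = 2, 3, 7
kN = kappa * N

def f(u, i):
    return (u + 1) * (i + 2) ** 2  # any function of (u, i) works here

lhs = sum(f(u, i) for u in range(v + 1) for i in range(u + kN + 1))
rhs = (sum(f(u, i) for i in range(kN) for u in range(v + 1))
       + sum(f(u, i) for i in range(kN, v + kN + 1)
             for u in range(i - kN, v + 1)))
assert lhs == rhs
```

Both sides enumerate exactly the pairs $(u,i)$ with $0\leqslant u\leqslant v$ and $0\leqslant i\leqslant u+\kappa N$, so the equality is an identity of finite sums.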
Clearly, the definition of the survival probability (4) implies that $\varphi (u)$ is a nondecreasing function, i.e. $\varphi (u)\leqslant \varphi (u+1)$ for all $u\in {\mathbb{N}_{0}}$. Thus, there exists a nonnegative limit $\varphi (\infty ):={\lim \nolimits_{u\to \infty }}\varphi (u)$ and $\varphi (\infty )=1$ if the net profit condition $\mathbb{E}{S_{N}}\lt \kappa N$ holds, see Lemma 1. We now let $v\to \infty $ in both sides of (37). For the first sum in (37) we obtain
(38)
\[ \underset{v\to \infty }{\lim }{\sum \limits_{i=v+1}^{v+\kappa N}}\varphi (i)=\underset{v\to \infty }{\lim }\big(\varphi (v+1)+\cdots +\varphi (v+\kappa N)\big)=\varphi (\infty )\kappa N,\]
and for the second
(39)
\[ \underset{v\to \infty }{\lim }{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)=\varphi (\infty )\mathbb{E}{S_{N}}.\]
Indeed, let us recall that $\mathbb{E}X={\textstyle\sum _{i=0}^{\infty }}\mathbb{P}(X\gt i)$, where X is a nonnegative integer-valued random variable. Then, the upper bound of (39) is
\[\begin{aligned}{}& \underset{v\to \infty }{\lim }{\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\leqslant \underset{v\to \infty }{\lim }\varphi (v+\kappa N){\sum \limits_{i=0}^{v+\kappa N}}\mathbb{P}({S_{N}}\hspace{-0.1667em}\gt \hspace{-0.1667em}v\hspace{-0.1667em}+\hspace{-0.1667em}\kappa N-i)\\ {} & \hspace{1em}=\underset{v\to \infty }{\lim }\varphi (v+\kappa N){\sum \limits_{i=0}^{v+\kappa N}}\mathbb{P}({S_{N}}\gt i)=\varphi (\infty )\mathbb{E}{S_{N}},\end{aligned}\]
while the lower bound coincides due to the inequality
\[\begin{aligned}{}& {\sum \limits_{i=0}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}={\sum \limits_{i=0}^{M}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)+{\sum \limits_{i=M+1}^{v+\kappa N}}\varphi (i)\mathbb{P}({S_{N}}\gt v+\kappa N-i)\\ {} & \hspace{1em}\geqslant \underset{i\geqslant M+1}{\inf }\varphi (i){\sum \limits_{i=0}^{v+\kappa N-M-1}}\mathbb{P}({S_{N}}\gt i),\end{aligned}\]
where M is some fixed and sufficiently large natural number. Thus, when $v\to \infty $, the equality in (37) is
(40)
\[ \varphi (\infty )(\kappa N-\mathbb{E}{S_{N}})={\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg(\mathbb{P}({S_{N}}\leqslant \kappa N-i-1)+{\sum \limits_{u=0}^{\infty }}{\mu _{i}}(u)\Bigg).\]
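The tail-sum identity $\mathbb{E}X={\textstyle\sum _{i\geqslant 0}}\mathbb{P}(X\gt i)$ used in the limit above is easy to confirm on a small example in exact arithmetic; the pmf below is arbitrary.

```python
from fractions import Fraction

# An arbitrary pmf of a nonnegative integer-valued random variable.
pmf = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(3, 8), 5: Fraction(1, 8)}
assert sum(pmf.values()) == 1

mean = sum(k * p for k, p in pmf.items())
# P(X > i) = 0 for i >= max support point, so the tail sum is finite.
tail_sum = sum(sum(p for k, p in pmf.items() if k > i)
               for i in range(max(pmf)))
assert mean == tail_sum  # E X = sum_{i>=0} P(X > i)
```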
If $\mathbb{E}{S_{N}}\gt \kappa N$, the nonnegative right-hand side of (40) implies $\varphi (\infty )=0$ and consequently $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. Thus, survival is impossible if $\mathbb{E}{S_{N}}\gt \kappa N$.
Let us now consider the case when $\mathbb{E}{S_{N}}=\kappa N$ and $\mathbb{P}({S_{N}}=\kappa N)\lt 1$. Then at least one of the probabilities ${s_{0}^{(N)}},\hspace{0.1667em}{s_{1}^{(N)}},\dots ,{s_{\kappa N-1}^{(N)}}$ is positive, because otherwise $\mathbb{E}{S_{N}}\gt \kappa N$. Hence, from (40),
(41)
\[ {\sum \limits_{i=0}^{\kappa N-1}}\varphi (i)\Bigg({\sum \limits_{u=0}^{\kappa N-i-1}}{s_{u}^{(N)}}+{\sum \limits_{u=0}^{\infty }}{\mu _{i}}(u)\Bigg)=0.\]
If ${s_{0}^{(N)}}\gt 0$, then (41) implies $\varphi (0)=\varphi (1)=\cdots =\varphi (\kappa N-1)=0$. Using recurrence (5), we can show that $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. If ${s_{0}^{(N)}}=0$ and ${s_{1}^{(N)}}\gt 0$, then (41) implies that $\varphi (0)=\varphi (1)=\cdots =\varphi (\kappa N-2)=0$ and, once again, recurrence (5) shows that $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$. Arguing in the same way, we proceed up to ${s_{0}^{(N)}}={s_{1}^{(N)}}=\cdots ={s_{\kappa N-2}^{(N)}}=0,\hspace{0.1667em}{s_{\kappa N-1}^{(N)}}\gt 0$ and observe that in all of these cases (41) and recurrence (5) yield $\varphi (u)=0$ for all $u\in {\mathbb{N}_{0}}$.
Finally, let us consider the case when $\mathbb{P}({S_{N}}=\kappa N)=1$. If ${S_{N}}=\kappa N$ with probability one, the random variables ${X_{1}},\hspace{0.1667em}{X_{2}},\dots ,{X_{N}}$ are degenerate, meaning that ${X_{i}}\equiv {c_{i}}$ for all $i\in \{1,\hspace{0.1667em}2,\dots ,N\}$, where ${c_{i}}\in \{0,\hspace{0.1667em}1,\dots ,\kappa N\}$ and ${c_{1}}+{c_{2}}+\cdots +{c_{N}}=\kappa N$. Thus, the model (1) becomes completely deterministic. Moreover, ${W_{u}}(n)={W_{u}}(n+N)$ for all $n\in {\mathbb{N}_{0}}$, $N\in \mathbb{N}$, and it is sufficient to check whether the lowest value among ${W_{u}}(1),\dots ,{W_{u}}(N)$ is larger than zero.  □
Proof of Theorem 4.
The proof of equalities (20) and (21) is nearly the same as the derivation of recurrence (5). Equalities (22) and (23) are implied by [5, Thm. 1].  □

7 Numerical examples

In this section, we illustrate the applicability of the theorems formulated in Section 3. All the necessary computations in this section are performed using Wolfram Mathematica [22]. Notice that some of the examples considered here also appear in [1, Sec. 4], where the ultimate time survival probability was obtained by computing the limits of certain recurrent sequences. Therefore, in some examples we check whether the obtained values of $\varphi (u)$ match the previously known ones obtained by different methods.
We say that a random variable X is distributed according to the shifted Poisson distribution $\mathcal{P}(\lambda ,\hspace{0.1667em}\xi )$ with parameters $\lambda \gt 0$ and $\xi \in {\mathbb{N}_{0}}$, if
\[ \mathbb{P}(X=m)={e^{-\lambda }}\frac{{\lambda ^{m-\xi }}}{(m-\xi )!},\hspace{1em}m=\xi ,\hspace{0.1667em}\xi +1,\dots \hspace{0.1667em}.\]
One can check the following facts for the shifted Poisson distribution:
(42)
\[ \mathbb{E}X=\lambda +\xi ,\]
(43)
\[ {G_{X}}(s)={s^{\xi }}{e^{\lambda (s-1)}}.\]
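These two facts for $X\sim \mathcal{P}(\lambda ,\hspace{0.1667em}\xi )$, namely $\mathbb{E}X=\lambda +\xi $ and ${G_{X}}(s)={s^{\xi }}{e^{\lambda (s-1)}}$, can be confirmed numerically by truncating the defining series; the parameters and the truncation point below are arbitrary choices.

```python
import math

# pmf of the shifted Poisson P(lam, xi): P(X = m) for m >= xi.
def pmf(m, lam, xi):
    return 0.0 if m < xi else math.exp(-lam) * lam ** (m - xi) / math.factorial(m - xi)

lam, xi, trunc = 2.0, 3, 80  # truncation point for the infinite series

mean = sum(m * pmf(m, lam, xi) for m in range(xi, trunc))
assert abs(mean - (lam + xi)) < 1e-9  # E X = lambda + xi

s = 0.5
pgf = sum(s ** m * pmf(m, lam, xi) for m in range(xi, trunc))
assert abs(pgf - s ** xi * math.exp(lam * (s - 1))) < 1e-9  # G_X(s)
```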
Example 1.
Let $\kappa =2$ and consider the bi-seasonal ($N=2$) discrete time risk model (1) where ${X_{1}}\sim \mathcal{P}(1,\hspace{0.1667em}0)$ and ${X_{2}}\sim \mathcal{P}(2,\hspace{0.1667em}0)$. We set up the survival probability-generating function $\Xi (s)$ and compute $\varphi (u)$, when $u=0,1,\dots ,15$.
Let us observe that in the considered example the net profit condition is satisfied: $\mathbb{E}{S_{2}}=3\lt 4=\kappa N$. Solving the equation ${G_{{S_{2}}}}(s)={e^{3(s-1)}}={s^{4}}$ when $s\in {\overline{S}_{1}}(0)\setminus \{1\}$, we get ${\alpha _{1}}:=-0.3605,{\alpha _{2}}:=-0.1294+0.4087i,{\alpha _{3}}:=-0.1294-0.4087i$. Since all of the solutions ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\hspace{0.1667em}{\alpha _{3}}$ are simple and none of them is equal to 0, following the description beneath Theorem 1 in Section 3, we set up the matrices ${\boldsymbol{M}_{1}},{\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$:
\[\begin{array}{l}\displaystyle {\boldsymbol{M}_{1}}=\left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(2)}}{\alpha _{1}}+{x_{0}^{(2)}}({\alpha _{1}}+1)& {x_{0}^{(2)}}{\alpha _{1}}\\ {} {x_{1}^{(2)}}{\alpha _{2}}+{x_{0}^{(2)}}({\alpha _{2}}+1)& {x_{0}^{(2)}}{\alpha _{2}}\\ {} {x_{1}^{(2)}}{\alpha _{3}}+{x_{0}^{(2)}}({\alpha _{3}}+1)& {x_{0}^{(2)}}{\alpha _{3}}\\ {} {x_{1}^{(2)}}+2{x_{0}^{(2)}}& {x_{0}^{(2)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{M}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(1)}}{\alpha _{1}}+{x_{0}^{(1)}}({\alpha _{1}}+1)& {x_{0}^{(1)}}{\alpha _{1}}\\ {} {x_{1}^{(1)}}{\alpha _{2}}+{x_{0}^{(1)}}({\alpha _{2}}+1)& {x_{0}^{(1)}}{\alpha _{2}}\\ {} {x_{1}^{(1)}}{\alpha _{3}}+{x_{0}^{(1)}}({\alpha _{3}}+1)& {x_{0}^{(1)}}{\alpha _{3}}\\ {} {x_{1}^{(1)}}+2{x_{0}^{(1)}}& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{G}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c}\frac{{G_{{X_{2}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{1}})}{{\alpha _{1}^{2}}}\\ {} \frac{{G_{{X_{2}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{2}})}{{\alpha _{2}^{2}}}\\ {} \frac{{G_{{X_{2}}}}({\alpha _{3}})}{{\alpha _{3}^{2}}}& \frac{{G_{{X_{2}}}}({\alpha _{3}})}{{\alpha _{3}^{2}}}\\ {} 1& 1\end{array}\right)\end{array}\]
and the system
\[ {\left(\begin{array}{c@{\hskip10.0pt}c}{\boldsymbol{M}_{1}}& {\boldsymbol{M}_{2}}\circ {\boldsymbol{G}_{2}}\end{array}\right)_{4\times 4}}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0\\ {} 0\\ {} 1\end{array}\right),\]
Solving this system, we obtain ${m_{0}^{(1)}}=0.6501$, ${m_{1}^{(1)}}=0.1395$, ${m_{0}^{(2)}}=0.5083$, ${m_{1}^{(2)}}=0.1855$. Then, $\varphi (1)={m_{0}^{(1)}}=0.6501$ and $\varphi (2)={m_{0}^{(1)}}+{m_{1}^{(1)}}=0.7896$. We can then use the system (15) to find ${m_{2}^{(1)}},{m_{3}^{(1)}},\dots \hspace{0.1667em}$, and consequently $\varphi (3),\varphi (4),\dots $ due to equality (8). In the considered case, the system (15) is
\[ \left\{\begin{array}{l}{m_{n}^{(2)}}=\bigg({m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(1)}}{1_{\{n=2\}}}\bigg)/{x_{0}^{(1)}}\hspace{1em}\\ {} {m_{n}^{(1)}}=\bigg({m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(2)}}{1_{\{n=2\}}}\bigg)/{x_{0}^{(2)}}\hspace{1em}\end{array}\right.,\]
$n=2,\hspace{0.1667em}3,\dots \hspace{0.1667em}$. Having $\varphi (1)$, $\varphi (2)$, $\varphi (3)$ and $\varphi (4)$ we use the recurrence (5) in order to find $\varphi (0)$
\[\begin{aligned}{}\varphi (0)& =\sum \limits_{\substack{{i_{1}}\leqslant 1\\ {} {i_{1}}+{i_{2}}\leqslant 3}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (4-{i_{1}}-{i_{2}})\\ {} & ={x_{0}^{(1)}}{x_{0}^{(2)}}\varphi (4)+\big({x_{0}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{0}^{(2)}}\big)\varphi (3)+\big({x_{0}^{(1)}}{x_{2}^{(2)}}+{x_{1}^{(1)}}{x_{1}^{(2)}}\big)\varphi (2)\\ {} & \hspace{1em}+\big({x_{0}^{(1)}}{x_{3}^{(2)}}+{x_{1}^{(1)}}{x_{2}^{(2)}}\big)\varphi (1).\end{aligned}\]
Let us recall that the recurrence (5) can be used to compute $\varphi (u)$ when $u\geqslant 5$.
We provide the obtained survival probabilities in Table 1.
The provided values of $\varphi (u)$ match the ones given in [1, Table 1], where they were obtained by a different method.
Table 1.
Survival probabilities for $\kappa =2$, $N=2$, ${X_{1}}\sim \mathcal{P}(1,0)$ and ${X_{2}}\sim \mathcal{P}(2,0)$
vmsta249_g005.jpg
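As an independent cross-check of the roots used above (the paper's computations are done in Wolfram Mathematica), a short Newton iteration in plain Python recovers ${\alpha _{1}},{\alpha _{2}},{\alpha _{3}}$; the starting points are ad hoc guesses near the reported values.

```python
import cmath

# Newton's method for e^{3(s-1)} - s^4 = 0 in the complex plane.
def newton(s, steps=60):
    for _ in range(steps):
        f = cmath.exp(3 * (s - 1)) - s ** 4
        df = 3 * cmath.exp(3 * (s - 1)) - 4 * s ** 3
        s = s - f / df
    return s

roots = [newton(s0) for s0 in (-0.36, -0.13 + 0.41j, -0.13 - 0.41j)]
for r in roots:
    assert abs(cmath.exp(3 * (r - 1)) - r ** 4) < 1e-12  # residual is tiny
    assert abs(r) < 1                                    # inside the unit circle
```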
Based on Theorem 2, we now set up the survival probability-generating function $\Xi (s)$, i.e.
\[ \frac{1}{n!}\frac{{d^{n}}}{d{s^{n}}}\big(\Xi (s)\big){\bigg|_{s=0}}=\varphi (n+1),\hspace{1em}n=0,1,\dots \hspace{0.1667em}.\]
So, having ${m_{0}^{(1)}}$, ${m_{1}^{(1)}}$, ${m_{0}^{(2)}}$, ${m_{1}^{(2)}}$ and omitting the elementary rearrangements, we get
\[ \Xi (s)=\frac{(0.187+0.442s){s^{2}}+(0.0224+0.104s){e^{s}}}{{e^{3(s-1)}}-{s^{4}}},\hspace{1em}|s|\lt 1,\hspace{0.1667em}{e^{3(s-1)}}\ne {s^{4}}.\]
Example 2.
Let us consider the model (1) when $\kappa =2$, $N=2$, ${X_{1}}\sim \mathcal{P}(1,\hspace{0.1667em}1)$ and ${X_{2}}\sim \mathcal{P}(9/10,\hspace{0.1667em}1)$. We find $\varphi (u)$ when $u=0,1,\dots ,50$ and set up the survival probability-generating function $\Xi (s)$.
According to (42) and (43), we check that the net profit condition is satisfied: $\mathbb{E}{S_{2}}=1+1+0.9+1=3.9\lt 4=\kappa N$. The probability-generating function of ${S_{2}}={X_{1}}+{X_{2}}$ is ${G_{{S_{2}}}}(s)={s^{2}}{e^{1.9(s-1)}}$ and the equation ${G_{{S_{2}}}}(s)={s^{4}}$ has one nonzero solution inside the unit circle: $\alpha =-0.2928$. Since ${x_{0}^{(1)}}={x_{0}^{(2)}}=0$, we use (11) and (13) to set up the system
(44)
\[ \left(\begin{array}{c@{\hskip10.0pt}c}{x_{1}^{(2)}}\alpha & \frac{{G_{{X_{2}}}}(\alpha )}{\alpha }{x_{1}^{(1)}}\\ {} {x_{1}^{(2)}}& {x_{1}^{(1)}}\end{array}\right)\times \left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{0}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0.1\end{array}\right).\]
It is easy to see that system (44), as in the previous example, can be expressed using matrices ${\boldsymbol{M}_{1}},{\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$:
\[ {\boldsymbol{M}_{1}}=\left(\begin{array}{c}{x_{1}^{(2)}}\alpha \\ {} {x_{1}^{(2)}}\end{array}\right),\hspace{0.1667em}{\boldsymbol{M}_{2}}=\left(\begin{array}{c}{x_{1}^{(1)}}\alpha \\ {} {x_{1}^{(1)}}\end{array}\right),\hspace{0.1667em}{\boldsymbol{G}_{2}}=\left(\begin{array}{c}\frac{{G_{{X_{2}}}}(\alpha )}{{\alpha ^{2}}}\\ {} 1\end{array}\right).\]
The system (44) implies ${m_{0}^{(1)}}=0.1270$, ${m_{0}^{(2)}}=0.1315$ and consequently, $\varphi (1)={m_{0}^{(1)}}=0.1270$. To continue computing $\varphi (u)={\textstyle\sum _{i=0}^{u-1}}{m_{i}^{(1)}}$, $u\geqslant 2$, we use (15), which in this particular case is
\[ \left\{\begin{array}{l}{m_{n}^{(2)}}{x_{0}^{(1)}}={m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(1)}}{1_{\{n=2\}}}\hspace{1em}\\ {} {m_{n}^{(1)}}{x_{0}^{(2)}}={m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{\textstyle\sum \limits_{i=0}^{1}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{1-i}}{x_{j}^{(2)}}{1_{\{n=2\}}}\hspace{1em}\end{array}\right.,\hspace{0.1667em}n=2,\hspace{0.1667em}3,\dots \]
or, equivalently,
\[ \left\{\begin{array}{l}{m_{n-1}^{(2)}}=\bigg({m_{n-2}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-2}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{m_{0}^{(2)}}{x_{1}^{(1)}}{1_{\{n=2\}}}\bigg)/{x_{1}^{(1)}}\hspace{1em}\\ {} {m_{n-1}^{(1)}}=\bigg({m_{n-2}^{(2)}}-{\textstyle\sum \limits_{i=0}^{n-2}}{m_{i}^{(1)}}{x_{n-i}^{(2)}}-{m_{0}^{(1)}}{x_{1}^{(2)}}{1_{\{n=2\}}}\bigg)/{x_{1}^{(2)}}\hspace{1em}\end{array}\right.,\hspace{0.1667em}n=2,\hspace{0.1667em}3,\dots \hspace{0.1667em}.\]
Substituting $n=2,\hspace{0.1667em}3,\dots $ into the last two equations, we obtain ${m_{1}^{(1)}},{m_{2}^{(1)}},\dots \hspace{0.1667em}$. The survival probability $\varphi (0)$ is found using recurrence (5):
\[ \varphi (0)\hspace{-0.1667em}=\hspace{-0.1667em}\sum \limits_{\substack{{i_{1}}\leqslant 1\\ {} {i_{1}}+{i_{2}}\leqslant 3}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (4-{i_{1}}-{i_{2}})={x_{1}^{(1)}}{x_{1}^{(2)}}\varphi (2)+{x_{1}^{(1)}}{x_{2}^{(2)}}\varphi (1).\]
After completing all the necessary arithmetic, we get survival probabilities which are provided in Table 2.
Once again, the results in Table 2 match the ones presented in [1, Table 3], where the numbers were obtained differently, i.e. by computing limits of certain recurrent sequences.
Table 2.
Survival probabilities for $\kappa =2$, $N=2$, ${X_{1}}\sim \mathcal{P}(1,1)$ and ${X_{2}}\sim \mathcal{P}(9/10,1)$
u 0 1 2 3 4 5 10 20 30 40 50
$\varphi (u)$ 0.048 0.127 0.209 0.286 0.355 0.417 0.649 0.873 0.954 0.983 0.994
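As a sanity check of Table 2, the last recursion can be iterated numerically starting from the rounded seeds ${m_{0}^{(1)}}=0.1270$, ${m_{0}^{(2)}}=0.1315$ obtained from (44). Because the seeds are rounded to four digits, the computed $\varphi (u)$ slowly drifts from the table as u grows.

```python
import math

def x(j, lam):  # pmf of the shifted Poisson P(lam, 1)
    return 0.0 if j < 1 else math.exp(-lam) * lam ** (j - 1) / math.factorial(j - 1)

x1 = lambda j: x(j, 1.0)   # X_1 ~ P(1, 1)
x2 = lambda j: x(j, 0.9)   # X_2 ~ P(9/10, 1)

m1, m2 = [0.1270], [0.1315]  # rounded m_0^{(1)}, m_0^{(2)} from system (44)
for n in range(2, 12):
    m2.append((m1[n - 2] - sum(m2[i] * x1(n - i) for i in range(n - 1))
               - m2[0] * x1(1) * (n == 2)) / x1(1))
    m1.append((m2[n - 2] - sum(m1[i] * x2(n - i) for i in range(n - 1))
               - m1[0] * x2(1) * (n == 2)) / x2(1))

def phi(u):
    return sum(m1[:u])  # phi(u) = m_0^{(1)} + ... + m_{u-1}^{(1)}

assert abs(phi(2) - 0.209) < 3e-3
assert abs(phi(3) - 0.286) < 3e-3
```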
Theorem 2 yields the following survival probability-generating function:
\[ \Xi (s)=\frac{0.0516{e^{s-1}}+0.0484s}{{e^{1.9(s-1)}}-{s^{2}}},\hspace{1em}s\in {S_{1}}(0),\hspace{0.1667em}{e^{1.9(s-1)}}\ne {s^{2}}.\]
Example 3.
Let us consider the bi-seasonal model (1) with $\kappa =3$ where claims are represented by two independent random variables ${X_{1}}$ and ${X_{2}}$, whose distributions are given in Table 3 and Table 4.
Table 3.
Probability distribution of random variable ${X_{1}}$
vmsta249_g006.jpg
Table 4.
Probability distribution of random variable ${X_{2}}$
vmsta249_g007.jpg
We find the survival probability $\varphi (u)$ for all $u\in {\mathbb{N}_{0}}$ and its generating function $\Xi (s)$.
It is easy to observe that $\mathbb{E}{S_{2}}=2.4\lt 6$. Thus, the net profit condition is valid. The probability-generating function of the sum ${X_{1}}+{X_{2}}$ is
\[\begin{aligned}{}{G_{{S_{2}}}}(s)& =\big(0.4096+0.4096s+0.1536{s^{2}}+0.0256{s^{3}}+0.0016{s^{4}}\big)\\ {} & \hspace{1em}\times \big(0.04+0.32s+0.64{s^{2}}\big).\end{aligned}\]
Solving ${G_{{S_{2}}}}(s)={s^{6}}$, we obtain the following roots inside the unit circle:
\[ {\alpha _{1}}=-\frac{4}{11},\hspace{0.1667em}{\alpha _{2}}=-0.2250,\hspace{0.1667em}{\alpha _{3}}=-0.0154-0.7423i,\hspace{0.1667em}{\alpha _{4}}=-0.0154+0.7423i.\]
Note that the complex roots always occur in conjugate pairs due to ${G_{{S_{N}}}}(\overline{s})-{\overline{s}^{\kappa N}}=\overline{{G_{{S_{N}}}}(s)-{s^{\kappa N}}}$, where the over-line denotes complex conjugation. According to Lemma 4, there must be one root of multiplicity two, and one may check that ${\alpha _{1}}$ is that root.
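Both claims about ${\alpha _{1}}=-4/11$ can be confirmed in exact arithmetic. Note that the expanded polynomial ${G_{{S_{2}}}}(s)$ above factors as ${(0.8+0.2s)^{4}}{(0.2+0.8s)^{2}}$ (multiplying out recovers the printed coefficients); this factorisation is our observation, not stated in the text.

```python
from fractions import Fraction

a = Fraction(-4, 11)

def g(s):   # G_{S_2}(s) - s^6 with G_{S_2}(s) = (4/5 + s/5)^4 (1/5 + 4s/5)^2
    return (Fraction(4, 5) + Fraction(1, 5) * s) ** 4 \
         * (Fraction(1, 5) + Fraction(4, 5) * s) ** 2 - s ** 6

def dg(s):  # derivative of G_{S_2}(s) - s^6 (product rule)
    p = Fraction(4, 5) + Fraction(1, 5) * s
    q = Fraction(1, 5) + Fraction(4, 5) * s
    return Fraction(4, 5) * p ** 3 * q ** 2 + Fraction(8, 5) * p ** 4 * q - 6 * s ** 5

assert g(a) == 0 and dg(a) == 0  # alpha_1 is a root of multiplicity (at least) two
assert g(Fraction(1)) == 0       # s = 1 is always a root
```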
We then employ (14) to create the modified versions of ${\boldsymbol{M}_{1}}$, ${\boldsymbol{M}_{2}}$ and ${\boldsymbol{G}_{2}}$. Let ${\tilde{\boldsymbol{M}}_{1}}$, ${\tilde{\boldsymbol{M}}_{2}}$ and ${\tilde{\boldsymbol{G}}_{2}}$ be
\[\begin{array}{l}\displaystyle {\tilde{\boldsymbol{M}}_{1}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\alpha _{1}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{1}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{1}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{1}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{1}^{2}}\\ {} {\alpha _{2}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{2}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{2}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{2}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{2}^{2}}\\ {} {\alpha _{3}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{3}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{3}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{3}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{3}^{2}}\\ {} {\alpha _{4}^{2}}{F_{{X_{2}}}}(2)+{\alpha _{4}}{F_{{X_{2}}}}(1)+{x_{0}^{(2)}}& {\alpha _{4}^{2}}{F_{{X_{2}}}}(1)+{\alpha _{4}}{x_{0}^{(2)}}& {x_{0}^{(2)}}{\alpha _{4}^{2}}\\ {} {F_{{X_{2}}}}(1)+2{\alpha _{1}}{F_{{X_{2}}}}(2)& {x_{0}^{(2)}}+2{\alpha _{1}}{F_{{X_{2}}}}(1)& 2{x_{0}^{(2)}}{\alpha _{1}}\\ {} {x_{2}^{(2)}}+2{x_{1}^{(2)}}+3{x_{0}^{(2)}}& {x_{1}^{(2)}}+2{x_{0}^{(2)}}& {x_{0}^{(2)}}\end{array}\right),\\ {} \displaystyle {\tilde{\boldsymbol{M}}_{2}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\alpha _{1}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{1}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{1}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{1}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{1}^{2}}\\ {} {\alpha _{2}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{2}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{2}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{2}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{2}^{2}}\\ {} {\alpha _{3}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{3}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{3}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{3}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{3}^{2}}\\ {} {\alpha _{4}^{2}}{F_{{X_{1}}}}(2)+{\alpha _{4}}{F_{{X_{1}}}}(1)+{x_{0}^{(1)}}& {\alpha _{4}^{2}}{F_{{X_{1}}}}(1)+{\alpha _{4}}{x_{0}^{(1)}}& {x_{0}^{(1)}}{\alpha _{4}^{2}}\\ {} {\tilde{M}_{5,\hspace{0.1667em}1}}& {\tilde{M}_{5,\hspace{0.1667em}2}}& {\tilde{M}_{5,\hspace{0.1667em}3}}\\ {} 
3{x_{0}^{(1)}}+2{x_{1}^{(1)}}+{x_{2}^{(1)}}& 2{x_{0}^{(1)}}+{x_{1}^{(1)}}& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}1}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}2}}& {\tilde{\boldsymbol{M}}_{5,\hspace{0.1667em}3}}\end{array}\right)=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\Big(\frac{{G_{{X_{2}}}}(s)}{{s^{2}}}\Big)^{\prime }}{\Big|_{s={\alpha _{1}}}}& {\Big(\frac{{G_{{X_{2}}}}(s)}{s}\Big)^{\prime }}{\Big|_{s={\alpha _{1}}}}& {G^{\prime }_{{X_{2}}}}(s){|_{s={\alpha _{1}}}}\end{array}\right)\\ {} \displaystyle \hspace{1em}\hspace{1em}\hspace{1em}\hspace{1em}\times \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{x_{0}^{(1)}}& 0& 0\\ {} {F_{{X_{1}}}}(1)& {x_{0}^{(1)}}& 0\\ {} {F_{{X_{1}}}}(2)& {F_{{X_{1}}}}(1)& {x_{0}^{(1)}}\end{array}\right),\\ {} \displaystyle {\tilde{\boldsymbol{G}}_{2}}:=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}& {G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}& {G_{{X_{2}}}}({\alpha _{1}})/{\alpha _{1}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}& {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}& {G_{{X_{2}}}}({\alpha _{2}})/{\alpha _{2}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}& {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}& {G_{{X_{2}}}}({\alpha _{3}})/{\alpha _{3}^{3}}\\ {} {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}& {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}& {G_{{X_{2}}}}({\alpha _{4}})/{\alpha _{4}^{3}}\\ {} 1& 1& 1\\ {} 1& 1& 1\end{array}\right).\end{array}\]
Then
\[ {\left(\begin{array}{c@{\hskip10.0pt}c}{\tilde{\boldsymbol{M}}_{1}}& {\tilde{\boldsymbol{M}}_{2}}\circ {\tilde{\boldsymbol{G}}_{2}}\end{array}\right)_{6\times 6}}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{2}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\\ {} {m_{2}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0\\ {} 0\\ {} 0\\ {} 0\\ {} 0\\ {} 3.6\end{array}\right)\hspace{1em}\Rightarrow \hspace{1em}\left(\begin{array}{c}{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} {m_{2}^{(1)}}\\ {} {m_{0}^{(2)}}\\ {} {m_{1}^{(2)}}\\ {} {m_{2}^{(2)}}\end{array}\right)=\left(\begin{array}{c}0.9984\\ {} 0.0016\\ {} 0\\ {} 1\\ {} 0\\ {} 0\end{array}\right).\]
It follows that $\varphi (1)={m_{0}^{(1)}}=0.9984$, $\varphi (2)={m_{0}^{(1)}}+{m_{1}^{(1)}}=1$ and consequently $\varphi (u)=1$ for all $u\geqslant 3$. Therefore,
\[\begin{aligned}{}\varphi (0)& =\sum \limits_{\substack{{i_{1}}\leqslant 2\\ {} {i_{1}}+{i_{2}}\leqslant 5}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\hspace{0.1667em}\varphi (6-{i_{1}}-{i_{2}})\\ {} & ={x_{0}^{(1)}}{x_{0}^{(2)}}\varphi (6)+\big({x_{0}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{0}^{(2)}}\big)\varphi (5)\\ {} & \hspace{1em}+\big({x_{0}^{(1)}}{x_{2}^{(2)}}+{x_{1}^{(1)}}{x_{1}^{(2)}}+{x_{2}^{(1)}}{x_{0}^{(2)}}\big)\varphi (4)\\ {} & \hspace{1em}+\big({x_{2}^{(1)}}{x_{1}^{(2)}}+{x_{1}^{(1)}}{x_{2}^{(2)}}\big)\varphi (3)+{x_{2}^{(1)}}{x_{2}^{(2)}}\varphi (2)=0.9728.\end{aligned}\]
The correctness of these results can be verified in the following way. If the initial surplus is $u=1$, ruin can only occur at the first moment of time and only if $1+3\cdot 1-{X_{1}}\leqslant 0$, i.e. ${X_{1}}=4$. Thus, $\varphi (1)=1-\mathbb{P}({X_{1}}=4)=1-0.0016=0.9984$. If the initial surplus is $u\geqslant 2$, then ruin never occurs. There are two reasons for that. First, at the first moment of time the insurer’s wealth never drops below one. Moreover, every two periods the insurer earns 6 units of currency, and that is the maximum amount of claims that the insurer can suffer during two consecutive periods. The value of $\varphi (0)$ is also logical, as with no initial capital ruin can occur only if ${X_{1}}=3$ or ${X_{1}}=4$; thus, $\varphi (0)=1-\mathbb{P}({X_{1}}=3)-\mathbb{P}({X_{1}}=4)=1-0.0256-0.0016=0.9728$.
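This sanity check can be scripted in exact arithmetic. The identification ${X_{1}}\sim \operatorname{Binomial}(4,1/5)$ is read off from the degree-4 factor of ${G_{{S_{2}}}}(s)$ above (Tables 3 and 4 are not reproduced here), so it is an inference rather than a statement of the text.

```python
from fractions import Fraction
from math import comb

# pmf of X_1 under the inferred identification X_1 ~ Binomial(4, 1/5):
# its pgf (4/5 + s/5)^4 expands to 0.4096 + 0.4096 s + 0.1536 s^2 + 0.0256 s^3 + 0.0016 s^4.
p = Fraction(1, 5)
pmf_x1 = [comb(4, k) * p ** k * (1 - p) ** (4 - k) for k in range(5)]

assert pmf_x1 == [Fraction(256, 625), Fraction(256, 625), Fraction(96, 625),
                  Fraction(16, 625), Fraction(1, 625)]
assert 1 - pmf_x1[4] == Fraction(624, 625)              # phi(1) = 0.9984
assert 1 - pmf_x1[3] - pmf_x1[4] == Fraction(608, 625)  # phi(0) = 0.9728
```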
The generating function of $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $ in the considered case is simply
\[ \Xi (s)=0.9984+s+{s^{2}}+{s^{3}}+\cdots =\frac{1}{1-s}-0.0016,\hspace{1em}s\in {S_{1}}(0).\]
One may verify that Theorem 2 produces the same result.
Example 4.
In the last example, we consider a ten-season model with a premium rate of 5, i.e. $N=10$, $\kappa =5$, and we assume the claims to be generated by independent random variables ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,\hspace{0.1667em}2,\dots ,10\}$, where $\mathcal{P}(\lambda ,\hspace{0.1667em}0)$ denotes the Poisson distribution with parameter λ. We compute both the finite time survival probability $\varphi (u,T)$ and the ultimate time survival probability $\varphi (u)$, and provide the form of the ultimate time survival probability-generating function $\Xi (s)$.
Let us verify that the net profit condition is satisfied:
\[ \mathbb{E}{S_{10}}={\sum \limits_{k=1}^{10}}\mathbb{E}{X_{k}}={\sum \limits_{k=1}^{10}}\bigg(\frac{k}{k+1}+4\bigg)=\frac{1330009}{27720}\approx 47.9801\lt 50.\]
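The exact value of $\mathbb{E}{S_{10}}$, and the constant appearing on the right-hand side of system (45) below, are convenient to confirm with rational arithmetic:

```python
from fractions import Fraction

# E S_10 = sum_{k=1}^{10} (k/(k+1) + 4), computed exactly.
es10 = sum(Fraction(k, k + 1) + 4 for k in range(1, 11))
assert es10 == Fraction(1330009, 27720)
assert es10 < 50                                 # net profit condition
assert 50 - es10 == Fraction(55991, 27720)       # kappa*N - E S_10
```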
We now apply Theorem 1. The equation
\[ {G_{{S_{10}}}}(s)={e^{1330009(s-1)/27720}}={s^{50}}\]
has 49 simple roots inside the unit circle, which are depicted in Figure 2.
vmsta249_g008.jpg
Fig. 2.
Roots of ${s^{50}}={G_{{S_{10}}}}(s)$, when ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,\hspace{0.1667em}2,\dots ,10\}$
Denoting these roots by ${\alpha _{1}},\hspace{0.1667em}{\alpha _{2}},\dots ,{\alpha _{49}}$ we set up matrices ${\boldsymbol{M}_{1}}$, ${\boldsymbol{M}_{2}}$, $\dots \hspace{0.1667em}$, ${\boldsymbol{M}_{10}}$ and ${\boldsymbol{G}_{2}}$, ${\boldsymbol{G}_{3}}$, $\dots \hspace{0.1667em}$, ${\boldsymbol{G}_{10}}$:
\[\begin{array}{l}\displaystyle {\boldsymbol{M}_{1}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{10}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{10}}}}(j-1)& \dots & {x_{0}^{(10)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{10}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{10}}}}(j-1)& \dots & {x_{0}^{(10)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(10)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(10)}}(5-j-1)& \dots & {x_{0}^{(10)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{M}_{2}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{1}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{1}}}}(j-1)& \dots & {x_{0}^{(1)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{1}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{1}}}}(j-1)& \dots & {x_{0}^{(1)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(1)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(1)}}(5-j-1)& \dots & {x_{0}^{(1)}}\end{array}\right),\dots ,\\ {} \displaystyle {\boldsymbol{M}_{10}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\textstyle\sum \limits_{j=0}^{4}}{\alpha _{1}^{j}}{F_{{X_{9}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{1}^{j}}{F_{{X_{9}}}}(j-1)& \dots & {x_{0}^{(9)}}{\alpha _{1}^{4}}\\ {} \vdots & \vdots & \ddots & \vdots \\ {} {\textstyle\sum \limits_{j=0}^{4}}{\alpha _{49}^{j}}{F_{{X_{9}}}}(j)& {\textstyle\sum \limits_{j=1}^{4}}{\alpha _{49}^{j}}{F_{{X_{9}}}}(j-1)& \dots & {x_{0}^{(9)}}{\alpha _{49}^{4}}\\ {} {\textstyle\sum \limits_{j=0}^{4}}{x_{j}^{(9)}}(5-j)& {\textstyle\sum \limits_{j=0}^{3}}{x_{j}^{(9)}}(5-j-1)& \dots & {x_{0}^{(9)}}\end{array}\right),\\ {} \displaystyle {\boldsymbol{G}_{2}}\hspace{-0.1667em}=\hspace{-0.1667em}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}}}({\alpha _{1}})}{{\alpha _{1}^{5}}}& \dots & \frac{{G_{{X_{10}}}}({\alpha _{1}})}{{\alpha _{1}^{5}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}}}({\alpha _{49}})}{{\alpha _{49}^{5}}}& \dots & \frac{{G_{{X_{10}}}}({\alpha _{49}})}{{\alpha _{49}^{5}}}\\ {} 1& \dots & 1\end{array}\right),{\boldsymbol{G}_{3}}\hspace{-0.1667em}=\hspace{-0.1667em}\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{10}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{1}})}{{\alpha _{1}^{10}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{49}})}{{\alpha _{49}^{10}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}}}({\alpha _{49}})}{{\alpha _{49}^{10}}}\\ {} 1& \dots & 1\end{array}\right),\dots ,\\ {} \displaystyle {\boldsymbol{G}_{10}}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}\frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{1}})}{{\alpha _{1}^{45}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{1}})}{{\alpha _{1}^{45}}}\\ {} \vdots & \ddots & \vdots \\ {} \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{49}})}{{\alpha _{49}^{45}}}& \dots & \frac{{G_{{X_{10}}+{X_{1}}+\cdots +{X_{8}}}}({\alpha _{49}})}{{\alpha _{49}^{45}}}\\ {} 1& \dots & 1\end{array}\right).\end{array}\]
Solving the system
(45)
\[ {\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\boldsymbol{M}_{1}}& {\boldsymbol{M}_{2}}\circ {\boldsymbol{G}_{2}}& \dots & {\boldsymbol{M}_{10}}\circ {\boldsymbol{G}_{10}}\end{array}\right)_{50\times 50}}{\left(\substack{{m_{0}^{(1)}}\\ {} {m_{1}^{(1)}}\\ {} \vdots \\ {} {m_{4}^{(1)}}\\ {} \vdots \\ {} {m_{0}^{(10)}}\\ {} {m_{1}^{(10)}}\\ {} \vdots \\ {} {m_{4}^{(10)}}}\right)_{50\times 1}}={\left(\substack{0\\ {} 0\\ {} \vdots \\ {} 0\\ {} \frac{55991}{27720}}\right)_{50\times 1}}\]
we obtain ${m_{0}^{(1)}}=0.1821$, ${m_{1}^{(1)}}=0.0604$, ${m_{2}^{(1)}}=0.0583$, ${m_{3}^{(1)}}=0.0545$, ${m_{4}^{(1)}}=0.0504$.
Therefore, using (8):
\[\begin{aligned}{}\varphi (1)& ={m_{0}^{(1)}}=0.1821,\\ {} \varphi (2)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}=0.2425,\\ {} \varphi (3)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}=0.3009,\\ {} \varphi (4)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}+{m_{3}^{(1)}}=0.3554,\\ {} \varphi (5)& ={m_{0}^{(1)}}+{m_{1}^{(1)}}+{m_{2}^{(1)}}+{m_{3}^{(1)}}+{m_{4}^{(1)}}=0.4058.\end{aligned}\]
Employing system (15) we find the remaining probabilities ${m_{5}^{(1)}},{m_{6}^{(1)}},\dots \hspace{0.1667em}$:
\[ \left\{\begin{array}{l@{\hskip10.0pt}l}{m_{n}^{(2)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(1)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(2)}}{x_{n-i}^{(1)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(2)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(1)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(1)}}\\ {} \hspace{1em}& \hspace{-7.0pt}\hspace{2.5pt}\vdots \\ {} {m_{n}^{(10)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(9)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(10)}}{x_{n-i}^{(9)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(10)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(9)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(9)}}\\ {} {m_{n}^{(1)}}\hspace{1em}& \hspace{-7.0pt}=\bigg({m_{n-5}^{(10)}}-{\textstyle\sum \limits_{i=0}^{n-1}}{m_{i}^{(1)}}{x_{n-i}^{(10)}}-{\textstyle\sum \limits_{i=0}^{4}}{m_{i}^{(1)}}{\textstyle\sum \limits_{j=0}^{4-i}}{x_{j}^{(10)}}{1_{\{n=5\}}}\bigg)/{x_{0}^{(10)}}\end{array}\right.\]
$n=5,6,\dots $ . We substitute the obtained probabilities ${m_{0}^{(1)}},\hspace{0.1667em}{m_{1}^{(1)}},\dots $ into (8) and compute $\varphi (6),\varphi (7),\dots $ . Finally, $\varphi (0)$ can be found using (5):
\[ \varphi (0)=\sum \limits_{\substack{{i_{1}}\leqslant 4\\ {} {i_{1}}+{i_{2}}\leqslant 9\\ {} {i_{1}}+{i_{2}}+{i_{3}}\leqslant 14\\ {} \vdots \\ {} {i_{1}}+{i_{2}}+\cdots +{i_{10}}\leqslant 49}}\mathbb{P}({X_{1}}={i_{1}})\mathbb{P}({X_{2}}={i_{2}})\cdots \mathbb{P}({X_{10}}={i_{10}})\varphi \Bigg(50-{\sum \limits_{j=1}^{10}}{i_{j}}\Bigg).\]
The final results, including the finite-time ruin probabilities computed by Theorem 4, rounded to three decimal places, are provided in Table 5.
Table 5.
Survival probabilities for $\kappa =5$, $N=10$, ${X_{k}}\sim \mathcal{P}(k/(k+1)+4,\hspace{0.1667em}0)$, $k\in \{1,2,\dots ,10\}$
T $u=0$ $u=1$ $u=2$ $u=3$ $u=4$ $u=5$ $u=10$ $u=20$ $u=30$
1 0.532 0.703 0.831 0.913 0.960 0.983 1 1 1
2 0.424 0.587 0.727 0.831 0.902 0.946 0.999 1 1
3 0.368 0.520 0.657 0.767 0.849 0.906 0.995 1 1
4 0.332 0.474 0.606 0.717 0.804 0.869 0.988 1 1
5 0.306 0.440 0.567 0.677 0.766 0.834 0.979 1 1
10 0.235 0.343 0.450 0.548 0.635 0.708 0.921 0.998 1
20 0.200 0.294 0.389 0.478 0.558 0.629 0.863 0.990 1
30 0.179 0.264 0.350 0.432 0.507 0.575 0.814 0.979 0.999
∞ 0.125 0.182 0.243 0.301 0.355 0.406 0.605 0.826 0.923
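The first rows of Table 5 admit a direct check: for $T=1$, survival means ${X_{1}}\leqslant u+4$, and for $T=2$ additionally ${X_{1}}+{X_{2}}\leqslant u+9$, so the entries are finite Poisson sums. A short Python verification of the $T=1$ row and of the entry $T=2$, $u=0$ (the helper names are ours):

```python
from math import exp, factorial

def pois_pmf(lam, k):
    return exp(-lam) * lam ** k / factorial(k)

def pois_cdf(lam, k):
    return sum(pois_pmf(lam, j) for j in range(k + 1))

lam1, lam2 = 4 + 1 / 2, 4 + 2 / 3      # means of X_1 and X_2

# T = 1: survival iff X_1 - 5 < u, i.e. X_1 <= u + 4
row_T1 = [round(pois_cdf(lam1, u + 4), 3) for u in range(6)]
print(row_T1)  # matches the T = 1 row of Table 5 for u = 0, ..., 5

# T = 2, u = 0: survival iff X_1 <= 4 and X_1 + X_2 <= 9
phi_2_0 = sum(pois_pmf(lam1, i) * pois_cdf(lam2, 9 - i) for i in range(5))
print(round(phi_2_0, 3))  # 0.424, agreeing with Table 5
```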
The generating function of the survival probabilities $\varphi (1),\hspace{0.1667em}\varphi (2),\dots $, with ${m_{i}^{(j)}}$, $i=0,1,\dots ,4$, $j=1,\hspace{0.1667em}2,\dots ,10$, obtained from system (45), is, for $s\in {S_{1}}(0)$ such that ${e^{{a_{10}}(s-1)}}\ne {s^{50}}$,
\[\begin{array}{l}\displaystyle \Xi (s)=\frac{{u^{T}}(s)v(s)}{{e^{{a_{10}}(s-1)}}-{s^{50}}},\\ {} \displaystyle u(s)=\left(\begin{array}{c}{s^{45}}\\ {} {s^{40}}{e^{{a_{1}}(s-1)}}\\ {} {s^{35}}{e^{{a_{2}}(s-1)}}\\ {} \vdots \\ {} {s^{5}}{e^{{a_{8}}(s-1)}}\\ {} {e^{{a_{9}}(s-1)}}\end{array}\right),\hspace{1em}v(s)=\left(\begin{array}{c}{e^{-{\lambda _{1}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(2)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{1}^{l}}/l!\\ {} {e^{-{\lambda _{2}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(3)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{2}^{l}}/l!\\ {} \vdots \\ {} {e^{-{\lambda _{9}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(10)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{9}^{l}}/l!\\ {} {e^{-{\lambda _{10}}}}{\textstyle\textstyle\sum _{i=0}^{4}}{m_{i}^{(1)}}{\textstyle\textstyle\sum _{j=i}^{4}}{s^{j}}{\textstyle\textstyle\sum _{l=0}^{j-i}}{\lambda _{10}^{l}}/l!\end{array}\right),\end{array}\]
where ${a_{n}}=4n+{\textstyle\sum _{k=0}^{n}}k/(k+1)$ and ${\lambda _{n}}=4+n/(n+1)$ when $n=1,\hspace{0.1667em}2,\dots ,10$.
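Since the $k=0$ summand vanishes, ${a_{n}}=4n+{\textstyle\sum _{k=1}^{n}}k/(k+1)={\textstyle\sum _{k=1}^{n}}{\lambda _{k}}$; in particular ${e^{{a_{10}}(s-1)}}$ is the probability generating function of the total claim amount ${X_{1}}+\cdots +{X_{10}}$ of one period, while ${s^{50}}$ corresponds to the period's total premium $\kappa N=50$, and ${a_{10}}\approx 47.98\lt 50$ is consistent with the net profit condition. A quick exact check of the identity in Python (variable names are ours):

```python
from fractions import Fraction

# lambda_n = 4 + n/(n+1) and a_n = 4n + sum_{k=1}^{n} k/(k+1), computed exactly
lam = [Fraction(4) + Fraction(n, n + 1) for n in range(1, 11)]
a = [4 * n + sum(Fraction(k, k + 1) for k in range(1, n + 1)) for n in range(1, 11)]

# a_n coincides with the accumulated Poisson mean lambda_1 + ... + lambda_n
assert all(a[i] == sum(lam[: i + 1]) for i in range(10))

print(float(a[-1]))  # a_10 ~ 47.98 < 50 = kappa * N
```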

Acknowledgement

The authors are grateful to the anonymous referee for the careful review, deep mathematical insights, and positive evaluation of the work. Sincere thanks go also to the editorial office and the publisher for their work in getting this article published.

Footnotes

1 Implemented using the RandomChoice function in Wolfram Mathematica [22].
2 The direction of the proof of (31) was originally suggested by Fedor Petrov.

References

[1] 
Alencenovič, A., Grigutis, A.: Bi-seasonal discrete time risk model with income rate two. Commun. Stat., Theory Methods 52(17), 6161–6178 (2023). MR4611568. https://doi.org/10.1080/03610926.2022.2026962
[2] 
Andersen, E.S.: On the collective theory of risk in case of contagion between the claims. In: Transactions of the XVth International Congress of Actuaries, vol. 2, pp. 219–229 (1957)
[3] 
Arguin, L.P., Hartung, L., Kistler, N.: High points of a random model of the Riemann-zeta function and Gaussian multiplicative chaos. Stoch. Process. Appl. 151, 174–190 (2022). MR4441506. https://doi.org/10.1016/j.spa.2022.04.017
[4] 
Asmussen, S., Albrecher, H.: Ruin Probabilities, 2nd edn. Advanced Series on Statistical Science and Applied Probability. World Scientific Publishing Company (2010). MR2766220. https://doi.org/10.1142/9789814282536
[5] 
Blaževičius, K., Bieliauskienė, E., Šiaulys, J.: Finite-time ruin probability in the inhomogeneous claim case. Lith. Math. J. 50, 260–270 (2010). MR2719562. https://doi.org/10.1007/s10986-010-9084-2
[6] 
Bohun, V., Marynych, A.: Random walks with sticky barriers. Mod. Stoch. Theory Appl. 9(3), 245–263 (2022). MR4462023. https://doi.org/10.15559/22-vmsta202
[7] 
Buraczewski, D., Dong, C., Iksanov, A., Marynych, A.: Critical branching processes in a sparse random environment. Mod. Stoch. Theory Appl. 10(4), 397–411 (2023). MR4655407. https://doi.org/10.15559/23-vmsta231
[8] 
Cang, Y., Yang, Y., Shi, X.: A note on the uniform asymptotic behavior of the finite-time ruin probability in a nonstandard renewal risk model. Lith. Math. J. 60, 161–172 (2020). MR4110665. https://doi.org/10.1007/s10986-020-09473-x
[9] 
Case, K.E., Shiller, R.J.: The efficiency of the market for single-family homes. Am. Econ. Rev. 79(1), 125–137 (1989)
[10] 
Castañer, A., Claramunt, M.M., Gathy, M., Lefèvre, C., Mármol, M.: Ruin problems for a discrete time risk model with non-homogeneous conditions. Scand. Actuar. J. 2013(2), 83–102 (2013). MR3041119. https://doi.org/10.1080/03461238.2010.546144
[11] 
Dembo, A., Peres, Y., Rosen, J., Zeitouni, O.: Thick points for planar Brownian motion and the Erdős-Taylor conjecture on random walk. Acta Math. 186(2), 239–270 (2001). MR1846031. https://doi.org/10.1007/BF02401841
[12] 
Dickson, D.C.M.: On numerical evaluation of finite time survival probabilities. Br. Actuar. J. 5(3), 575–584 (1999). https://doi.org/10.1017/S135732170000057X
[13] 
Edelman, A., Rao, N.R.: Random matrix theory. Acta Numer. 14, 233–297 (2005). MR2168344. https://doi.org/10.1017/S0962492904000236
[14] 
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2, 2nd edn. Wiley (1971). MR0270403
[15] 
Gerber, H.: Mathematical fun with ruin theory. Insur. Math. Econ. 7(1), 15–23 (1988). MR0971860. https://doi.org/10.1016/0167-6687(88)90091-1
[16] 
Gerber, H.: Mathematical fun with the compound binomial process. ASTIN Bull. 18(2), 161–168 (1988). https://doi.org/10.2143/AST.18.2.2014949
[17] 
Grigutis, A.: Exact expression of ultimate time survival probability in homogeneous discrete-time risk model. AIMS Math. 8(3), 5181–5199 (2023). MR4525843. https://doi.org/10.3934/math.2023260
[18] 
Grigutis, A., Jankauskas, J.: On $2\times 2$ determinants originating from survival probabilities in homogeneous discrete time risk model. Results Math. 77(5), 204 (2022). MR4470312. https://doi.org/10.1007/s00025-022-01736-y
[19] 
Grigutis, A., Jankauskas, J., Šiaulys, J.: Multi-seasonal discrete time risk model revisited. Lith. Math. J. 63, 466–486 (2023). MR4691924. https://doi.org/10.1007/s10986-023-09613-z
[20] 
Grigutis, A., Nakliuda, A.: Note on the bi-risk discrete time risk model with income rate two. Mod. Stoch. Theory Appl. 9(4), 401–412 (2022). MR4510380. https://doi.org/10.15559/22-vmsta209
[21] 
Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (2012). MR2978290
[22] 
Wolfram Research, Inc.: Mathematica, Version 14.0. Champaign, IL (2024)
[23] 
Kendall, D.G.: The genealogy of genealogy: branching processes before (and after) 1873. Bull. Lond. Math. Soc. 7(3), 225–253 (1975). MR0426186. https://doi.org/10.1112/blms/7.3.225
[24] 
Landriault, D.: On a generalization of the expected discounted penalty function in a discrete-time insurance risk model. Appl. Stoch. Models Bus. Ind. 24(6), 525–539 (2008). MR2473024. https://doi.org/10.1002/asmb.713
[25] 
Lefèvre, C., Simon, M.: Schur-constant and related dependence models, with application to ruin probabilities. Methodol. Comput. Appl. Probab. 23, 317–339 (2021). MR4224918. https://doi.org/10.1007/s11009-019-09744-2
[26] 
Li, S., Huang, F., Jin, C.: Joint distributions of some ruin related quantities in the compound binomial risk model. Stoch. Models 29(4), 518–539 (2013). MR3175857. https://doi.org/10.1080/15326349.2013.847610
[27] 
Li, S., Lu, Y., Garrido, J.: A review of discrete-time risk models. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 103, 321–337 (2009). MR2582636. https://doi.org/10.1007/BF03191910
[28] 
Losidis, S.: Covariance between the forward recurrence time and the number of renewals. Mod. Stoch. Theory Appl. 9(1), 1–16 (2022). MR4388707. https://doi.org/10.15559/21-vmsta194
[29] 
Lundberg, F.: I. Approximerad Framställning af Sannolikhetsfunktionen; II. Återförsäkring af Kollektivrisker. Dissertation thesis (1903). In Swedish.
[30] 
Malkiel, B.G.: A Random Walk Down Wall Street. The Best Investment Tactic for the New Century. Norton & Company (2011)
[31] 
Martinsson, P.G., Tropp, J.A.: Randomized numerical linear algebra: Foundations and algorithms. Acta Numer. 29, 403–572 (2020). MR4189294. https://doi.org/10.1017/s0962492920000021
[32] 
Miao, Y., Sendova, K.P., Jones, B.L.: On a risk model with dual seasonalities. North Am. Actuar. J. 27(1), 166–184 (2023). MR4562596. https://doi.org/10.1080/10920277.2022.2068611
[33] 
Pearson, K.: The problem of the random walk. Nature 72, 294 (1905). https://doi.org/10.1038/072294b0
[34] 
Picard, P., Lefèvre, C.: Probabilité de ruine éventuelle dans un modèle de risque à temps discret. J. Appl. Probab. 40(3), 543–556 (2003). MR1993252. https://doi.org/10.1239/jap/1059060887
[35] 
Pollaczek, F.: Order statistics of partial sums of mutually independent random variables. J. Appl. Probab. 12(2), 390–395 (1975). MR0378092. https://doi.org/10.2307/3212456
[36] 
Raducan, A.M., Vernic, R., Zbaganu, G.: Recursive calculation of ruin probabilities at or before claim instants for non-identically distributed claims. ASTIN Bull. 45(2), 421–443 (2015). MR3394025. https://doi.org/10.1017/asb.2014.30
[37] 
Santana, D.J., Rincón, L.: Ruin probabilities as functions of the roots of a polynomial. Mod. Stoch. Theory Appl. 10(3), 247–266 (2023). MR4608187. https://doi.org/10.15559/23-vmsta226
[38] 
Shiu, E.: Calculation of the probability of eventual ruin by Beekman's convolution series. Insur. Math. Econ. 7(1), 41–47 (1988). MR0971864. https://doi.org/10.1016/0167-6687(88)90095-9
[39] 
Shiu, E.: Ruin probability by operational calculus. Insur. Math. Econ. 8(3), 243–249 (1989). MR1031374. https://doi.org/10.1016/0167-6687(89)90060-7
[40] 
Spitzer, F.: A combinatorial lemma and its application to probability theory. Trans. Am. Math. Soc. 82, 323–339 (1956). MR0079851. https://doi.org/10.2307/1993051
[41] 
Spitzer, F.: Principles of Random Walk. Graduate Texts in Mathematics. Springer, Germany (1988). MR0388547
[42] 
Tzaninis, S.M.: Applications of a change of measures technique for compound mixed renewal processes to the ruin problem. Mod. Stoch. Theory Appl. 9(1), 45–64 (2022). MR4388709. https://doi.org/10.15559/21-vmsta192