Modern Stochastics: Theory and Applications
On occupation time for on-off processes with multiple off-states
Volume 9, Issue 4 (2022), pp. 413–430
Chaoran Hu, Vladimir Pozdnyakov, Jun Yan

https://doi.org/10.15559/22-VMSTA210
Pub. online: 21 June 2022 | Type: Research Article | Open Access

Received: 15 January 2022
Revised: 9 April 2022
Accepted: 5 June 2022
Published: 21 June 2022

Abstract

The need to model a Markov renewal on-off process with multiple off-states arises in many applications such as economics, physics, and engineering. Characterization of the occupation time of one specific off-state marginally, or of two off-states jointly, is crucial for understanding such processes. The exact marginal and joint distributions of the off-state occupation times are derived. The theoretical results are confirmed numerically in a simulation study. A special case when all holding times have the Lévy distribution is considered for the possibility of simplification of the formulas.

1 Introduction

Markov renewal on-off processes, also known as alternating renewal processes or telegraph processes, arise in applications in a variety of fields. Classic alternating renewal processes were first studied in [2, 12] and [11] with applications to animal ethology, maintenance of electronic equipment, and communication engineering, respectively. Recently, the telegraph process and its variations have been employed in various fields of application such as pricing in mathematical finance and insurance (see, e.g., [9, 6–8, 16]), modelling the propagation of a damped wave in physics (see, e.g., [4, 10]), and inventory and storage models in engineering [18]. In certain applications, the off-state of an on-off process has multiple types, which raises new questions about its properties, such as the distribution of the occupation time in a specific off-state type marginally or in multiple off-state types jointly. Our interest in on-off processes with multiple off-states is motivated by our recent work in animal movement modeling, where a predator can have different nonmoving states such as resting or handling a kill [15, 14]. The occupation time in the handling state is important for ecologists to understand the behavior of predators.
To fix ideas, consider a server that has two different types of failures, each requiring a different time and cost to repair. One basic question is: if the server is on at time 0, what is the distribution of the time spent on fixing type-one failures by time $t>0$? We model the server state process with the following Markov renewal process. The process starts in state 0 (the on-state) and spends there a random holding time drawn from a given absolutely continuous distribution. When the first holding time is over, we flip an asymmetric coin to decide which type of failure (off-state 1 or off-state 2) comes next. The holding times in the off-states are also absolutely continuous, generally with different distributions. Once the second holding time is over, the process returns to the on-state (state 0), and then the construction is repeated.
The occupation time of a specific type of off-state is our focus. For the on-state, the distribution of the occupation time is known from the results on the telegraph process [13, 5, 17, 19]. Indeed, if we collapse the two off-states into one, then the resulting process is a regular on-off process (or alternating renewal process) whose off-state holding time distribution is a mixture of the two original off-state holding time distributions. If the different types of repair require different resources, however, we want to know the distribution of the occupation time in a particular type of off-state. For that task, the results on classical telegraph processes are not directly applicable. Moreover, assume that the cost of being in an off-state is proportional to its occupation time and differs across off-states. If we want to know the distribution of the total cost of repairs of all kinds by a fixed time, then we need the joint distribution of the two types of off-state occupation times.
Our problem is related to but different from some recent works. An extension of the telegraph process to a process with three states is studied in [1], where within a renewal cycle all three states are visited in a given deterministic order. In our case, however, we have an on-off process with two off-states; only two states (the on-state and an off-state) are visited in each renewal cycle, but the off-state is chosen randomly. Another related work is [3], where there are two states, but at each random epoch the new state is determined by the outcome of a random trial, and, as a result, the process can stay in the same state.
The key idea in studying the occupation time of a specific off-state is to exploit a certain periodicity of our Markov renewal process. In fact, when analyzing any Markov renewal process, it is convenient to condition on returns to a certain state. For the on-off process with two off-states, state 0 has an additional nice property: the number of steps between two consecutive visits to state 0 is not random, and it is always equal to 2. We derive the marginal distribution of the occupation time of a specific off-state first, and then we modify our derivation to obtain the joint distribution of the two off-state occupation times. Of course, one can get the marginal distribution by integrating the joint one. Our approach, however, is easier to follow.
This article is arranged as follows. In Section 2, the formal construction of the on-off process with two off-states and the definition of its occupation times are provided. The marginal and joint distributions of the off-state occupation times are derived in Sections 3 and 4. In Section 5, a special case with holding times modeled by the Lévy distribution is discussed along with an application to server maintenance cost. Additional technical discussion is provided in Section 6.

2 Algorithmic construction of on-off process with two off-states

Suppose that we are given the following collection of independent sequences of nonnegative random variables:
  • 1. ${\{{U_{k}}\}_{k\ge 1}}$ are independent identically distributed (iid) positive random variables with absolutely continuous cumulative distribution function (cdf) ${F_{U}}$ and probability density function (pdf) ${f_{U}}$ (these random variables will be used as the holding times when the server is up, state 0);
  • 2. ${\{{S_{k}}\}_{k\ge 1}}$ are iid positive random variables with absolutely continuous cdf ${F_{S}}$ and pdf ${f_{S}}$ (holding times when the server is down for short repairs, state 1);
  • 3. ${\{{L_{k}}\}_{k\ge 1}}$ are iid positive random variables with absolutely continuous cdf ${F_{L}}$ and pdf ${f_{L}}$ (holding times when the server is down for long repairs, state 2);
  • 4. ${\{{\xi _{k}}\}_{k\ge 1}}$ are iid random variables with $\Pr ({\xi _{k}}=1)={p_{1}}>0$ and $\Pr ({\xi _{k}}=0)={p_{2}}=1-{p_{1}}$.
Now, we present our construction of the on-off process with two off-states, $\{X(t),t\ge 0\}$, with state-space $\{0,1,2\}$.
  • 1. Initialize with $X(0)=0$ and ${T_{0}}=0$.
  • 2. For cycles $i=1,2,\dots $:
    • (a) Let ${T_{2i-1}}={U_{i}}+{T_{2i-2}}$, and $X(t)=0$ for all $t\in [{T_{2i-2}},{T_{2i-1}})$.
    • (b) If ${\xi _{i}}=1$ then ${T_{2i}}={T_{2i-1}}+{S_{i}}$, and $X(t)=1$ for all $t\in [{T_{2i-1}},{T_{2i}})$; otherwise, ${T_{2i}}={T_{2i-1}}+{L_{i}}$, and $X(t)=2$ for all $t\in [{T_{2i-1}},{T_{2i}})$.
That is, the process starts in state 0 and stays in this state for ${U_{1}}$ time units. Then it switches to state 1 or 2, as decided by the random variable ${\xi _{1}}$. Depending on the value of ${\xi _{1}}$, it stays in state 1 or 2 for ${S_{1}}$ or ${L_{1}}$ time units, respectively. Then the process switches back to state 0. The first renewal cycle is finished, and the procedure is repeated. A realization of the on-off process is given in Figure 1.
Fig. 1. A realization (first three cycles) of the on-off process with ${\xi _{1}}=1$, ${\xi _{2}}=0$, ${\xi _{3}}=0$
Note also that ${T_{n}}\to \infty $ with probability 1, because it is bounded from below by a sum of n iid positive random variables. This means that process $X(t)$ is well-defined for all $t>0$.
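To make the construction concrete, here is a minimal simulation sketch in Python. The function name simulate_path and the exponential holding-time samplers are our own illustrative choices, not part of the model; any samplers for ${U_{k}}$, ${S_{k}}$, ${L_{k}}$ could be plugged in.

import numpy as np

rng = np.random.default_rng(1)

def simulate_path(t_max, draw_U, draw_S, draw_L, p1):
    """Sketch of the algorithmic construction: returns the jump epochs
    T_0 < T_1 < ... and the state held on each interval [T_i, T_{i+1})."""
    T, states = [0.0], []
    while T[-1] <= t_max:
        T.append(T[-1] + draw_U())           # step (a): on-state holding time
        states.append(0)
        off = 1 if rng.random() < p1 else 2  # step (b): coin flip xi_i
        T.append(T[-1] + (draw_S() if off == 1 else draw_L()))
        states.append(off)
    return np.array(T), np.array(states)

# hypothetical exponential holding times with means 1, 0.25 and 0.5
T, states = simulate_path(30.0,
                          lambda: rng.exponential(1.0),
                          lambda: rng.exponential(0.25),
                          lambda: rng.exponential(0.5),
                          p1=0.9)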
If all the holding times are exponentially distributed (with rates ${\lambda _{i}}$, $i=0,1,2$), then $X(t)$ is a continuous time Markov chain with the state space $\{0,1,2\}$ and initial distribution $\Pr (X(0)=0)=1$. Its transition rate matrix is given by
\[ \left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}-{\lambda _{0}}& {\lambda _{0}}\hspace{0.1667em}{p_{1}}& {\lambda _{0}}\hspace{0.1667em}{p_{2}}\\ {} {\lambda _{1}}& -{\lambda _{1}}& 0\\ {} {\lambda _{2}}& 0& -{\lambda _{2}}\end{array}\right)\]
where again ${p_{1}},{p_{2}},{\lambda _{0}},{\lambda _{1}},{\lambda _{2}}>0$ and ${p_{1}}+{p_{2}}=1$.
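In this exponential special case the marginal state probabilities $\Pr (X(t)=j)$ can be read off the matrix exponential of the rate matrix. A small sketch, with hypothetical rates, might look as follows.

import numpy as np
from scipy.linalg import expm

# hypothetical rates and branching probabilities
lam0, lam1, lam2, p1 = 1.0, 4.0, 2.0, 0.9
p2 = 1.0 - p1

# transition rate matrix of X(t) in the exponential case
Q = np.array([[-lam0, lam0 * p1, lam0 * p2],
              [ lam1,     -lam1,       0.0],
              [ lam2,       0.0,     -lam2]])

t = 30.0
P_t = expm(Q * t)    # transition probability matrix over [0, t]
print(P_t[0])        # Pr(X(t) = j | X(0) = 0), j = 0, 1, 2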
The occupation times in state 0, state 1, and state 2 are, respectively,
\[\begin{aligned}{}U(t)& ={\int _{0}^{t}}{1_{\{X(s)=0\}}}\mathrm{d}s,\\ {} S(t)& ={\int _{0}^{t}}{1_{\{X(s)=1\}}}\mathrm{d}s,\\ {} L(t)& ={\int _{0}^{t}}{1_{\{X(s)=2\}}}\mathrm{d}s.\end{aligned}\]
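Given a realized path stored as jump epochs and the state held on each interval (as in the simulation sketch above), the occupation times are just sums of interval lengths. The helper below is a sketch; the toy path is purely illustrative.

import numpy as np

def occupation_times(T, states, t):
    """U(t), S(t), L(t) for a path in state states[i] on [T[i], T[i+1));
    assumes the last epoch T[-1] is at least t."""
    occ = np.zeros(3)
    for left, right, s in zip(T[:-1], T[1:], states):
        occ[s] += max(0.0, min(right, t) - min(left, t))
    return occ  # occ[0] = U(t), occ[1] = S(t), occ[2] = L(t)

# toy path: on for 2, short repair for 1, on for 2, long repair for 1
T = np.array([0.0, 2.0, 3.0, 5.0, 6.0])
states = np.array([0, 1, 0, 2])
print(occupation_times(T, states, t=4.5))   # expect [3.5, 1.0, 0.0]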
The corresponding defective marginal densities of $S(t)$ and $L(t)$ are denoted as
\[\begin{aligned}{}{p_{Sj}}(s,t)& =\Pr (S(t)\in \mathrm{d}s,X(t)=j)/\mathrm{d}s,\\ {} {p_{Lj}}(s,t)& =\Pr (L(t)\in \mathrm{d}s,X(t)=j)/\mathrm{d}s,\end{aligned}\]
where $t\ge 0$, $0<s<t$, $j=0,1,2$. The defective joint two-dimensional density of $S(t)$ and $L(t)$ for $u,v>0$, $u+v<t$ is denoted as
\[ {p_{SLj}}(u,v,t)=\frac{1}{\mathrm{d}u\mathrm{d}v}\Pr (S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,X(t)=j),\]
where $j=0,1,2$. The densities are defective in the following sense: since both occupation times $S(t)$ and $L(t)$ have an atom at 0, all three sums
\[\begin{array}{l}\displaystyle {\sum \limits_{j=0}^{2}}{\int _{0}^{t}}{p_{Sj}}(s,t)\mathrm{d}s=\Pr (S(t)>0)=1-\Pr (S(t)=0),\\ {} \displaystyle {\sum \limits_{j=0}^{2}}{\int _{0}^{t}}{p_{Lj}}(s,t)\mathrm{d}s=\Pr (L(t)>0)=1-\Pr (L(t)=0),\end{array}\]
and
\[\begin{aligned}{}{\sum \limits_{j=0}^{2}}{\iint _{u,v>0,\hspace{0.1667em}u+v<t}}{p_{SLj}}(u,v,t)\mathrm{d}u\mathrm{d}v=& \Pr (S(t)>0,L(t)>0)\\ {} =& 1-\Pr (S(t)=0\hspace{2.5pt}\text{or}\hspace{2.5pt}L(t)=0)\end{aligned}\]
are less than 1.

3 Marginal distribution of occupation time of an off-state

To derive the distribution of occupation time we will need some auxiliary random variables. Let $N(t)$ be the number of cycles (or returns to the on-state) by time t. Formally, for $n\ge 0$,
(1)
\[ N(t)=n\hspace{1em}\hspace{1em}\hspace{2.5pt}\text{iff}\hspace{2.5pt}\hspace{1em}\hspace{1em}{T_{2n}}\le t<{T_{2n+2}}.\]
Finally, let ${D_{k}}={\xi _{k}}{S_{k}}+(1-{\xi _{k}}){L_{k}}$, $k\ge 1$. These random variables are associated with the off-state holding time of the regular on-off process when two off-states are combined.
Our formulas include convolutions of different distributions. We will use the following notation. If we are given a cdf $G(\cdot )$ and its pdf $g(\cdot )$, then ${G^{(n)}}(\cdot )$ denotes n-fold convolution of $G(\cdot )$, and ${g^{(n)}}(\cdot )$ denotes the n-fold convolution of $g(\cdot )$. If we are given two cdfs $G(\cdot )$ and $H(\cdot )$ with pdfs $g(\cdot )$ and $h(\cdot )$, then $G\ast H(\cdot )$ denotes the convolution of cdfs $G(\cdot )$ and $H(\cdot )$, and $g\ast h(\cdot )$ denotes the convolution of pdfs $g(\cdot )$ and $h(\cdot )$.
We also use the following conventions. Any summation over the empty set is 0. Zero-fold convolution ${G^{(0)}}(\cdot )$ is the cdf of a random variable that is equal to 0 with probability 1. Finally, ${g^{(k)}}\ast {h^{(0)}}(\cdot )={g^{(k)}}(\cdot )$ for any $k\ge 1$.
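The convolution powers appearing below generally have no closed form, but they can be approximated numerically. One rough possibility, used here only for illustration, is to represent the pdfs on a uniform grid and build ${F_{U}^{(n)}}\ast {F_{L}^{(m)}}(t)$ by repeated discrete convolution; the helper names and the exponential example are our own.

import numpy as np

dx, N = 0.01, 4000                  # grid step and number of points
x = dx * np.arange(N)

def conv(f, g):
    """Discrete approximation of the pdf of a sum, given pdfs on the grid x."""
    return np.convolve(f, g)[:N] * dx

def nfold(f, n):
    """Approximate n-fold convolution f^(n); n = 0 gives a point mass at 0."""
    out = np.zeros(N); out[0] = 1.0 / dx    # crude Dirac delta at 0
    for _ in range(n):
        out = conv(out, f)
    return out

def conv_cdf(fU, nU, fL, nL, t):
    """Approximate F_U^(nU) * F_L^(nL)(t) on the grid."""
    pdf_sum = conv(nfold(fU, nU), nfold(fL, nL))
    return np.sum(pdf_sum[x <= t]) * dx

# example with hypothetical exponential pdfs on the grid
fU = np.exp(-x)               # rate-1 exponential pdf
fL = 2.0 * np.exp(-2.0 * x)   # rate-2 exponential pdf
print(conv_cdf(fU, 2, fL, 1, t=5.0))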
Theorem 1.
Let $t\ge 0$ and $0<s<t$. Then
\[\begin{aligned}{}& \Pr (S(t)=0,X(t)=0)\\ {} =& {\sum \limits_{n=0}^{\infty }}\left[{F_{U}^{(n)}}\ast {F_{L}^{(n)}}(t)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n)}}(t)\right]{p_{2}^{n}},\\ {} & \Pr (S(t)=0,X(t)=1)=0,\\ {} & \Pr (S(t)=0,X(t)=2)\\ {} =& {\sum \limits_{n=0}^{\infty }}\left[{F_{U}^{(n+1)}}\ast {F_{L}^{(n)}}(t)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n+1)}}(t)\right]{p_{2}^{n+1}},\end{aligned}\]
and
\[\begin{aligned}{}& {p_{S0}}(s,t)={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k}}{f_{S}^{(k)}}(s)\\ {} & \times \left[{F_{U}^{(n)}}\ast {F_{L}^{(n-k)}}(t-s)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n-k)}}(t-s)\right],\\ {} & {p_{S1}}(s,t)={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}{f_{U}^{(n+1)}}\ast {f_{L}^{(n-k)}}(t-s)\\ {} & \times \left[{F_{S}^{(k)}}(s)-{F_{S}^{(k+1)}}(s)\right],\\ {} & {p_{S2}}(s,t)={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k+1}}{f_{S}^{(k)}}(s)\\ {} & \times \left[{F_{U}^{(n+1)}}\ast {F_{L}^{(n-k)}}(t-s)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n-k+1)}}(t-s)\right].\end{aligned}\]
Proof.
Recall that, as defined in Section 2, ${\{{\xi _{k}}\}_{k\ge 1}}$ are iid and independent of the holding-time sequences. First, note that the distribution of $S(t)$ has an atom at 0. Indeed, if ${U_{1}}>t$ or all the failures that occur before t are of the second type, then $S(t)=0$. More specifically, by conditioning on the number of returns to the on-state, $N(t)$, we obtain that
\[\begin{aligned}{}& \Pr (S(t)=0,X(t)=0)\\ {} & ={\sum \limits_{n=0}^{\infty }}\Pr \left(S(t)=0,X(t)=0,N(t)=n\right)\\ {} & =\Pr ({U_{1}}>t)+{\sum \limits_{n=1}^{\infty }}\Pr \left({\sum \limits_{j=1}^{n}}({U_{j}}+{L_{j}})\le t,\right.\\ {} & \hspace{2em}\left.{\sum \limits_{j=1}^{n}}({U_{j}}+{L_{j}})+{U_{n+1}}>t,{\sum \limits_{j=1}^{n}}{\xi _{j}}=0\right)\\ {} & =\Pr ({U_{1}}>t)+{\sum \limits_{n=1}^{\infty }}\Pr \left({\sum \limits_{j=1}^{n}}({U_{j}}+{L_{j}})\le t,\right.\\ {} & \hspace{2em}\left.{\sum \limits_{j=1}^{n}}({U_{j}}+{L_{j}})+{U_{n+1}}>t\right){p_{2}^{n}}\\ {} & =\Pr ({U_{1}}>t)+{\sum \limits_{n=1}^{\infty }}\left[\Pr \left({\sum \limits_{j=1}^{n}}{U_{j}}+{\sum \limits_{j=1}^{n}}{L_{j}}\le t\right)\right.\\ {} & \hspace{2em}\left.-\Pr \left({\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{n}}{L_{j}}\le t\right)\right]{p_{2}^{n}}\\ {} & =(1-{F_{U}}(t))\\ {} & +{\sum \limits_{n=1}^{\infty }}\left[{F_{U}^{(n)}}\ast {F_{L}^{(n)}}(t)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n)}}(t)\right]{p_{2}^{n}}\\ {} & ={\sum \limits_{n=0}^{\infty }}\left[{F_{U}^{(n)}}\ast {F_{L}^{(n)}}(t)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n)}}(t)\right]{p_{2}^{n}}.\end{aligned}\]
Next, again by conditioning on $N(t)$, we get that, for $0<s<t$,
\[\begin{aligned}{}& \Pr (S(t)\in \mathrm{d}s,X(t)=0)\\ {} & ={\sum \limits_{n=1}^{\infty }}\Pr \left(S(t)\in \mathrm{d}s,X(t)=0,N(t)=n\right).\end{aligned}\]
Note that the summation starts from 1, because $X(t)=0$ and $N(t)=0$ implies that ${U_{1}}>t$, and, therefore, $S(t)=0$.
The next step is to fix the number of switches to failures of type 1. Since the total number of switches is n and there is at least one failure of type 1, we have
\[\begin{aligned}{}& \Pr (S(t)\in \mathrm{d}s,X(t)=0,N(t)=n)\\ {} & ={\sum \limits_{k=1}^{n}}\Pr \left(S(t)\in \mathrm{d}s,X(t)=0,N(t)=n,{\sum \limits_{j=1}^{n}}{\xi _{j}}=k\right).\end{aligned}\]
Because
\[\begin{aligned}{}X(t)=0,& N(t)=n\\ {} & \text{iff}\\ {} {\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})\le t,& {\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})+{U_{n+1}}>t,\end{aligned}\]
and it really does not matter during which cycles the switches to failures of type 1 occur, we find that
\[\begin{aligned}{}& \Pr \Big(S(t)\in \mathrm{d}s,X(t)=0,N(t)=n,{\sum \limits_{j=1}^{n}}{\xi _{j}}=k\Big)\\ {} & =\Pr \left(S(t)\in \mathrm{d}s,{\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})\le t,\right.\\ {} & \left.\hspace{2em}{\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})+{U_{n+1}}>t,{\sum \limits_{j=1}^{n}}{\xi _{j}}=k\right)\\ {} & =\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\Pr \left(S(t)\in \mathrm{d}s,{\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})\le t,\right.\\ {} & \left.\hspace{2em}{\sum \limits_{j=1}^{n}}({U_{j}}+{D_{j}})+{U_{n+1}}>t,{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0\right).\end{aligned}\]
Next, observe that in this case $S(t)={\textstyle\sum _{j=1}^{k}}{S_{j}}$. Using independence between ${\{{\xi _{k}}\}_{k\ge 1}}$ and the holding time sequences we have
\[\begin{aligned}{}& \Pr \Big(S(t)\in \mathrm{d}s,X(t)=0,N(t)=n,{\sum \limits_{j=1}^{n}}{\xi _{j}}=k\Big)\\ {} & =\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k}}\\ {} & \times \Pr \left({\sum \limits_{j=1}^{k}}{S_{j}}\in \mathrm{d}s,{\sum \limits_{j=1}^{n}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t,\right.\\ {} & \left.\hspace{2em}\hspace{2em}\hspace{1em}{\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}>t\right)\\ {} & =\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k}}\\ {} & \times \Pr \left({\sum \limits_{j=1}^{k}}{S_{j}}\in \mathrm{d}s,{\sum \limits_{j=1}^{n}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t-s,\right.\\ {} & \left.\hspace{2em}\hspace{2em}\hspace{1em}{\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}>t-s\right).\end{aligned}\]
Finally, independence of holding time sequences gives us that
\[\begin{aligned}{}& \Pr \left({\sum \limits_{j=1}^{k}}{S_{j}}\in \mathrm{d}s,{\sum \limits_{j=1}^{n}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t-s,\right.\\ {} & \left.\hspace{2em}\hspace{2em}\hspace{2em}{\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}>t-s\right)\\ {} & =\Pr \left({\sum \limits_{j=1}^{k}}{S_{j}}\in \mathrm{d}s\right)\left[\Pr \left({\sum \limits_{j=1}^{n}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t-s\right)\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\left.-\Pr \left({\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t-s\right)\right]\\ {} & ={f_{S}^{(k)}}(s)\\ {} & \times \left[{F_{U}^{(n)}}\ast {F_{L}^{(n-k)}}(t-s)-{F_{U}^{(n+1)}}\ast {F_{L}^{(n-k)}}(t-s)\right]\mathrm{d}s.\end{aligned}\]
Now let us consider the case when $X(t)=1$ (at time t the server is down for a short repair). One difference is that there is no atom in this case. As before, for $0<s<t$ we have
\[\begin{aligned}{}& \phantom{=\hspace{0.2778em}}\Pr (S(t)\in \mathrm{d}s,X(t)=1)\\ {} & ={\sum \limits_{n=0}^{\infty }}\Pr \left(S(t)\in \mathrm{d}s,X(t)=1,N(t)=n\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n}}\Pr \left(S(t)\in \mathrm{d}s,X(t)=1,N(t)=n,\phantom{{\sum \limits_{j=1}^{n}}}\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\left.{\sum \limits_{j=1}^{n}}{\xi _{j}}=k,{\xi _{n+1}}=1\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\Pr \left(S(t)\in \mathrm{d}s,X(t)=1,N(t)=n,\phantom{{\sum \limits_{j=1}^{k}}}\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\left.{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,{\xi _{n+1}}=1\right).\end{aligned}\]
Note that event
\[\begin{array}{r}\displaystyle \Big\{S(t)\in \mathrm{d}s,X(t)=1,N(t)=n,{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,\\ {} \displaystyle {\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,{\xi _{n+1}}=1\Big\}\end{array}\]
can be rewritten as
\[\begin{aligned}{}\Big\{S(t)\in \mathrm{d}s,\hspace{0.2778em}& {\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t,\\ {} & {\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}+{S_{n+1}}>t,\\ {} & \hspace{0.2778em}{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,\hspace{0.2778em}{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,\hspace{0.2778em}{\xi _{n+1}}=1\Big\}.\end{aligned}\]
Since $S(t)=t-{\textstyle\sum _{j=1}^{n+1}}{U_{j}}-{\textstyle\sum _{j=k+1}^{n}}{L_{j}}$, we finally obtain
\[\begin{aligned}{}& \Pr (S(t)\in \mathrm{d}s,X(t)=1)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}\Pr \left({\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\in t-\mathrm{d}s,\right.\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\left.{\sum \limits_{j=1}^{k}}{S_{j}}\le s,{\sum \limits_{j=1}^{k}}{S_{j}}+{S_{n+1}}>s\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}{f_{U}^{(n+1)}}\ast {f_{L}^{(n-k)}}(t-s)\\ {} & \times \left[{F_{S}^{(k)}}(s)-{F_{S}^{(k+1)}}(s)\right]\mathrm{d}s.\end{aligned}\]
The other formulas can be derived in a similar way.  □
Theorem 1 gives us the distribution of the occupation time in state 1. Since both off-states enter our story in a completely symmetric way, the formulas for the occupation time in state 2 can be obtained by interchanging states 1 and 2 in Theorem 1.
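For a quick numerical illustration of Theorem 1, suppose all holding times are exponential; then ${F_{U}^{(n)}}$ and ${F_{L}^{(n)}}$ are gamma cdfs, and the atom $\Pr (S(t)=0,X(t)=0)$ can be evaluated by truncating the series. The sketch below uses hypothetical rates and a truncation level of our choosing.

from scipy import stats
from scipy.integrate import quad

lam0, lam2, p2, t = 1.0, 0.5, 0.3, 5.0   # hypothetical rates of U and L, Pr(xi = 0), horizon

def conv_cdf(nU, nL, x):
    # cdf at x of a sum of nU iid Exp(lam0) and nL iid Exp(lam2) variables,
    # i.e. F_U^(nU) * F_L^(nL)(x) in the notation of Theorem 1
    if nU == 0 and nL == 0:
        return 1.0
    if nL == 0:
        return stats.gamma.cdf(x, a=nU, scale=1.0 / lam0)
    if nU == 0:
        return stats.gamma.cdf(x, a=nL, scale=1.0 / lam2)
    integrand = lambda y: (stats.gamma.pdf(y, a=nU, scale=1.0 / lam0)
                           * stats.gamma.cdf(x - y, a=nL, scale=1.0 / lam2))
    return quad(integrand, 0.0, x)[0]

# truncated series for Pr(S(t) = 0, X(t) = 0) from Theorem 1
atom = sum((conv_cdf(n, n, t) - conv_cdf(n + 1, n, t)) * p2 ** n for n in range(30))
print(atom)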

4 Joint distribution of off-state occupation times

As mentioned in the introduction, if we are interested in the distribution of the total cost, then we need the joint distribution of off-state occupation times. More specifically, assume that the total cost $C(t)$ is a linear function of occupation times, that is,
\[ C(t)={\alpha _{0}}U(t)+{\alpha _{1}}S(t)+{\alpha _{2}}L(t),\]
where ${\alpha _{0}}$, ${\alpha _{1}}$, and ${\alpha _{2}}$ are positive constants representing the maintenance cost per unit of time for states 0, 1, and 2, respectively. Then the distribution of $C(t)$ is fully determined by the joint distribution of $S(t)$ and $L(t)$. Note that we do not need the joint distribution of all three occupation times, because $U(t)=t-S(t)-L(t)$.
Let us also note here that the cost process $\{C(t),t\ge 0\}$ is similar to what is known in the literature as an integrated telegraph process. Indeed, $C(t)$ is a linearly increasing process. It goes up with velocity ${\alpha _{0}}$ if $X(t)$ is in state 0. When the process $X(t)$ switches to one of the two off-states, $C(t)$ goes up with velocity ${\alpha _{1}}$ or ${\alpha _{2}}$ (depending on the off-state).
Since both $S(t)$ and $L(t)$ have atoms at 0, for every value of $X(t)$ we have four cases: (1) both occupation times are 0; (2) $S(t)=0$, $L(t)>0$; (3) $S(t)>0$, $L(t)=0$; and (4) both $S(t)$ and $L(t)$ are strictly greater than 0. In total, we have 12 formulas. Some of them are trivial. For instance, event $\{S(t)=0,X(t)=1\}$ has probability 0, therefore, the corresponding defective one-dimensional density of $L(t)$ is also 0. Moreover, since there is a certain symmetry between $S(t)$ and $L(t)$, some formulas can be found by interchanging state 1 and 2. That is why in the following theorem we have only 5 formulas.
Theorem 2.
Let $u,v>0$ and $u+v<t$. Then
\[\begin{aligned}{}& \Pr (S(t)=0,L(t)=0,X(t)=0)=1-{F_{U}}(t),\\ {} & \Pr (S(t)=0,L(t)\in \mathrm{d}v,X(t)=0)/\mathrm{d}v\\ {} & ={\sum \limits_{n=1}^{\infty }}{f_{L}^{(n)}}(v)\left[{F_{U}^{(n)}}(t-v)-{F_{U}^{(n+1)}}(t-v)\right]{p_{2}^{n}},\\ {} & \Pr (S(t)\in \mathrm{d}u,L(t)=0,X(t)=1)/\mathrm{d}u\\ {} & ={\sum \limits_{n=0}^{\infty }}{p_{1}^{n+1}}{f_{U}^{(n+1)}}(t-u)\left[{F_{S}^{(n)}}(u)-{F_{S}^{(n+1)}}(u)\right],\end{aligned}\]
and
\[\begin{aligned}{}& {p_{SL0}}(u,v,t)={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k}}{f_{S}^{(k)}}(u){f_{L}^{(n-k)}}(v)\\ {} & \times \left[{F_{U}^{(n)}}(t-u-v)-{F_{U}^{(n+1)}}(t-u-v)\right],\\ {} & {p_{SL1}}(u,v,t)={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}\\ {} & \times {f_{U}^{(n+1)}}(t-u-v){f_{L}^{(n-k)}}(v)\left[{F_{S}^{(k)}}(u)-{F_{S}^{(k+1)}}(u)\right].\end{aligned}\]
Proof.
We will only derive the formula for ${p_{SL1}}(u,v,t)$. The remaining formulas can be obtained by similar modifications of the proof of Theorem 1.
As before, we start with partitioning with respect to events $\{N(t)=n\}$. More specifically, for $u,v>0$ and $0<u+v<t$ we have
\[\begin{aligned}{}& {p_{SL1}}(u,v,t)\mathrm{d}u\mathrm{d}v\\ {} & ={\sum \limits_{n=0}^{\infty }}\Pr \left(S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,X(t)=1,N(t)=n\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\Pr \left(S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,X(t)=1,\phantom{{\sum \limits_{j=1}^{n}}{\xi _{j}}=k}\right.\\ {} & \left.\hspace{2em}\hspace{2em}N(t)=n,{\sum \limits_{j=1}^{n}}{\xi _{j}}=k,{\xi _{n+1}}=1\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\Pr \left(S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,X(t)=1,\phantom{{\sum \limits_{j=1}^{n}}{\xi _{j}}=k}\right.\\ {} & \left.\hspace{2em}\hspace{2em}N(t)=n,{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,{\xi _{n+1}}=1\right).\end{aligned}\]
Note that now the upper limit of the inner summation is $n-1$, because ${\textstyle\sum _{j=1}^{n}}{\xi _{j}}=n$ and ${\xi _{n+1}}=1$ implies that $L(t)=0$.
Next, observe that event
\[\begin{array}{r}\displaystyle \Big\{S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,X(t)=1,N(t)=n,\\ {} \displaystyle {\sum \limits_{j=1}^{k}}{\xi _{j}}=k,{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,{\xi _{n+1}}=1\Big\}\end{array}\]
can be rewritten as
\[\begin{aligned}{}\Big\{& S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,\hspace{0.2778em}{\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}\le t,\\ {} & \hspace{2em}\hspace{2em}{\sum \limits_{j=1}^{n+1}}{U_{j}}+{\sum \limits_{j=1}^{k}}{S_{j}}+{\sum \limits_{j=k+1}^{n}}{L_{j}}+{S_{n+1}}>t,\\ {} & \hspace{2em}\hspace{2em}{\sum \limits_{j=1}^{k}}{\xi _{j}}=k,{\sum \limits_{j=k+1}^{n}}{\xi _{j}}=0,{\xi _{n+1}}=1\Big\}.\end{aligned}\]
Finally, taking into account that in the case when $\{X(t)=1\}$ occupation time $S(t)=t-{\textstyle\sum _{j=1}^{n+1}}{U_{j}}-{\textstyle\sum _{j=k+1}^{n}}{L_{j}}$ and occupation time $L(t)={\textstyle\sum _{j=k+1}^{n}}{L_{j}}$, we get that
\[\begin{aligned}{}& {p_{SL1}}(u,v,t)\mathrm{d}u\mathrm{d}v={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}\\ {} & \times \Pr \left({\sum \limits_{j=1}^{n+1}}{U_{j}}\in t-\mathrm{d}u-\mathrm{d}v,\right.\\ {} & \left.\hspace{2em}\hspace{2em}{\sum \limits_{j=k+1}^{n}}{L_{j}}\in \mathrm{d}v,{\sum \limits_{j=1}^{k}}{S_{j}}\le u,{\sum \limits_{j=1}^{k}}{S_{j}}+{S_{n+1}}>u\right)\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k+1}}{p_{2}^{n-k}}{f_{U}^{(n+1)}}(t-u-v){f_{L}^{(n-k)}}(v)\\ {} & \times \left[{F_{S}^{(k)}}(u)-{F_{S}^{(k+1)}}(u)\right]\mathrm{d}u\mathrm{d}v.\end{aligned}\]
 □
One can verify now that, for instance,
\[\begin{aligned}{}& {p_{S1}}(u,t)={\sum \limits_{n=0}^{\infty }}{p_{1}^{n+1}}{f_{U}^{(n+1)}}(t-u)\left[{F_{S}^{(n)}}(u)-{F_{S}^{(n+1)}}(u)\right]\\ {} & +{\int _{0}^{t-u}}{p_{SL1}}(u,v,t)\mathrm{d}v.\end{aligned}\]
Using the symmetry between $S(t)$ and $L(t)$, we can also get that
\[\begin{aligned}{}& {p_{SL2}}(u,v,t)={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{2}^{k+1}}{p_{1}^{n-k}}\\ {} & \hspace{1em}\times {f_{U}^{(n+1)}}(t-u-v){f_{S}^{(n-k)}}(u)\left[{F_{L}^{(k)}}(v)-{F_{L}^{(k+1)}}(v)\right].\end{aligned}\]
Remark 1.
Theorem 1 and Theorem 2 can be generalized to the case of multiple off-states. We give an example with three off-states here. Define iid random variables ${\{{M_{k}}\}_{k\ge 1}}$ with cdf ${F_{M}}$ and pdf ${f_{M}}$ as the holding times for the third off-state (state 3, medium repairs). Then ${\{{\xi _{k}}\}_{k\ge 1}}$ become iid random variables with $\Pr ({\xi _{k}}=S)={p_{1}}$, $\Pr ({\xi _{k}}=L)={p_{2}}$, $\Pr ({\xi _{k}}=M)={p_{3}}$, and ${p_{1}}+{p_{2}}+{p_{3}}=1$. Accordingly, the occupation time in the third off-state is $M(t)={\textstyle\int _{0}^{t}}{1_{\{X(s)=3\}}}\mathrm{d}s$. By the same technique, we can get the marginal and joint distributions of the occupation times. The only difference is that binomial probabilities are replaced with multinomial ones. For example,
\[\begin{aligned}{}& {p_{S0}}(s,t)=\Pr (S(t)\in \mathrm{d}s,X(t)=0)/\mathrm{d}s\\ {} & ={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}{\sum \limits_{{k^{\ast }}=0}^{n-k}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\left(\genfrac{}{}{0.0pt}{}{n-k}{{k^{\ast }}}\right){p_{1}^{k}}{p_{2}^{{k^{\ast }}}}{p_{3}^{n-k-{k^{\ast }}}}{f_{S}^{(k)}}(s)\\ {} & \times \left[{F_{U}^{(n)}}\ast {F_{L}^{({k^{\ast }})}}\ast {F_{M}^{(n-k-{k^{\ast }})}}(t-s)\right.\\ {} & \hspace{2em}\left.-{F_{U}^{(n+1)}}\ast {F_{L}^{({k^{\ast }})}}\ast {F_{M}^{(n-k-{k^{\ast }})}}(t-s)\right],\end{aligned}\]
and
\[\begin{aligned}{}& {p_{SLM1}}(u,v,w;t)\\ {} & =\Pr (S(t)\in \mathrm{d}u,L(t)\in \mathrm{d}v,M(t)\in \mathrm{d}w,X(t)=1)/\mathrm{d}u\mathrm{d}v\mathrm{d}w\\ {} & ={\sum \limits_{n=0}^{\infty }}{\sum \limits_{k=0}^{n-1}}{\sum \limits_{{k^{\ast }}=1}^{n-k-1}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right)\left(\genfrac{}{}{0.0pt}{}{n-k}{{k^{\ast }}}\right){p_{1}^{k}}{p_{2}^{{k^{\ast }}}}{p_{3}^{n-k-{k^{\ast }}}}\\ {} & \times {f_{U}^{(n+1)}}(t-v-w-u){f_{L}^{({k^{\ast }})}}(v){f_{M}^{(n-k-{k^{\ast }})}}(w)\\ {} & \times \left[{F_{S}^{(k)}}(u)-{F_{S}^{(k+1)}}(u)\right].\end{aligned}\]
All other formulas can be derived similarly.
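For completeness, here is how the simulation sketch from Section 2 might be extended to an arbitrary number of off-state types; the categorical draw plays the role of the multinomial switching above. Function names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def simulate_multi(t_max, draw_on, off_samplers, probs):
    """Sketch with K off-state types: off_samplers[j] draws the holding time
    of off-state j+1, which is chosen with probability probs[j]."""
    T, states = [0.0], []
    K = len(off_samplers)
    while T[-1] <= t_max:
        T.append(T[-1] + draw_on()); states.append(0)
        j = rng.choice(K, p=probs)                 # categorical draw of the off-state
        T.append(T[-1] + off_samplers[j]()); states.append(j + 1)
    return np.array(T), np.array(states)

# hypothetical example with three off-states (short, long, medium repairs)
T, states = simulate_multi(
    30.0,
    lambda: rng.exponential(1.0),
    [lambda: rng.exponential(0.2), lambda: rng.exponential(0.5), lambda: rng.exponential(0.3)],
    probs=[0.6, 0.1, 0.3])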

5 Special case: Lévy distribution

In this section we consider a special case when all the holding times have the Lévy distribution (with location parameter 0 and possibly different scale parameters). The Lévy distribution with scale parameter ${c^{2}}$ has pdf
\[ {g_{c}}(x)=\frac{c}{\sqrt{2\pi }}\frac{{e^{-\frac{{c^{2}}}{2x}}}}{{x^{3/2}}},\hspace{1em}x>0,\]
and cdf
\[ {G_{c}}(x)=\frac{2}{\sqrt{\pi }}{\int _{\frac{c}{\sqrt{2x}}}^{\infty }}{e^{-{t^{2}}}}\mathrm{d}t,\hspace{1em}x>0.\]
The Lévy distribution is heavy-tailed with infinite expectation. The median is given by $0.5{c^{2}}{\left({\mathtt{erfc}^{-1}}(0.5)\right)^{-2}}\approx 2.198112{c^{2}}$, where $\mathtt{erfc}$ is the complementary error function:
\[ \mathtt{erfc}(x)=\frac{2}{\sqrt{\pi }}{\int _{x}^{\infty }}{e^{-{t^{2}}}}\hspace{0.1667em}\mathrm{d}t.\]
The parametrization that we use is a bit unusual, but it allows us to shorten our notation for convolutions. More specifically, as a member of the family of stable distributions with stability index 1/2, the Lévy distribution is closed under convolution in the following way:
\[ {G_{{c_{1}}}}\ast {G_{{c_{2}}}}(x)={G_{{c_{1}}+{c_{2}}}}(x).\]
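In terms of standard software, the paper's scale parameter ${c^{2}}$ should correspond to the scale argument of scipy.stats.levy (with location 0). The sketch below checks the convolution closure by Monte Carlo; the parameter values are arbitrary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
c1, c2 = 0.8, 1.3     # paper's parameters c; scipy's scale is c^2

x = stats.levy.rvs(scale=c1**2, size=200_000, random_state=rng)
y = stats.levy.rvs(scale=c2**2, size=200_000, random_state=rng)

q = np.array([1.0, 5.0, 20.0, 100.0])
empirical = ((x + y)[:, None] <= q).mean(axis=0)       # empirical cdf of the sum
theoretical = stats.levy.cdf(q, scale=(c1 + c2)**2)    # G_{c1 + c2} at the same points
print(np.round(empirical, 3), np.round(theoretical, 3))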
Let ${c_{U}^{2}}$, ${c_{S}^{2}}$, and ${c_{L}^{2}}$ be the scale parameters for states 0, 1 and 2, respectively. Then the formulas from Theorem 1 and Theorem 2 do not involve any convolutions. For instance, in this case we have that
\[\begin{aligned}{}& {p_{S2}}(s,t)={\sum \limits_{n=1}^{\infty }}{\sum \limits_{k=1}^{n}}\left(\genfrac{}{}{0.0pt}{}{n}{k}\right){p_{1}^{k}}{p_{2}^{n-k+1}}{g_{k{c_{S}}}}(s)\\ {} & \times \left[{G_{(n+1){c_{U}}+(n-k){c_{L}}}}\left(t-s\right)-{G_{(n+1){c_{U}}+(n-k+1){c_{L}}}}\left(t-s\right)\right].\end{aligned}\]
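As an illustration, this series can be evaluated by straightforward truncation. The sketch below (with the example parameter values introduced later in this section and a truncation level of our choosing) follows the formula term by term.

from math import comb
from scipy import stats

cU, cS, cL, p1 = 1.785, 0.097, 0.275, 0.9
p2 = 1.0 - p1

def G(c, x):   # Lévy cdf in the paper's parametrization (scipy scale = c^2)
    return stats.levy.cdf(x, scale=c**2)

def g(c, x):   # corresponding pdf
    return stats.levy.pdf(x, scale=c**2)

def p_S2(s, t, n_max=60):
    """Truncated series for the defective density p_{S2}(s, t) in the Lévy case."""
    total = 0.0
    for n in range(1, n_max + 1):
        for k in range(1, n + 1):
            w = comb(n, k) * p1**k * p2**(n - k + 1)
            total += w * g(k * cS, s) * (
                G((n + 1) * cU + (n - k) * cL, t - s)
                - G((n + 1) * cU + (n - k + 1) * cL, t - s))
    return total

print(p_S2(0.2, 30.0))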
The expectations and variances of the off-state occupation times are given by
\[\begin{aligned}{}& \mathrm{E}(S(t))={\sum \limits_{j=0}^{2}}{\int _{0}^{t}}s{p_{Sj}}(s,t)\mathrm{d}s,\\ {} & \mathrm{Var}(S(t))={\sum \limits_{j=0}^{2}}{\int _{0}^{t}}{s^{2}}{p_{Sj}}(s,t)\mathrm{d}s-{[\mathrm{E}(S(t))]^{2}},\\ {} & \mathrm{E}(L(t))={\sum \limits_{j=0}^{2}}{\int _{0}^{t}}s{p_{Lj}}(s,t)\mathrm{d}s,\\ {} & \mathrm{Var}(L(t))={\sum \limits_{j=0}^{2}}{\int _{0}^{t}}{s^{2}}{p_{Lj}}(s,t)\mathrm{d}s-{[\mathrm{E}(L(t))]^{2}}.\end{aligned}\]
Note that the discrete component of the occupation times is not used for these calculations, because the atoms are at 0. The covariance of $S(t)$ and $L(t)$ is given by
\[\begin{aligned}{}\mathrm{Cov}(S(t),L(t))& ={\sum \limits_{j=0}^{2}}{\iint _{u,v>0,\hspace{0.1667em}u+v<t}}uv{p_{SLj}}(u,v,t)\mathrm{d}u\mathrm{d}v\\ {} & -\mathrm{E}(S(t))\mathrm{E}(L(t)).\end{aligned}\]
As an example, we consider a server with median holding times of 7 days, 0.5 hours, and 4 hours for the on-state, the short off-state, and the long off-state, respectively. Using one day as the time unit, the model parameters are ${c_{U}}\approx 1.785$, ${c_{S}}\approx 0.097$, and ${c_{L}}\approx 0.275$. Assume also that the total repair cost is given by
\[ C(t)=S(t)+2L(t),\]
that is, long repairs are twice as costly as short ones.
Fig. 2. Defective densities ${p_{Sj}}(s,t)$ and ${p_{Lj}}(s,t)$, $j\in \{0,1,2\}$, with $t=30$, ${c_{U}}\approx 1.785$, ${c_{S}}\approx 0.097$, ${c_{L}}\approx 0.275$, and ${p_{1}}=0.9$
Fig. 3. Contour plots of the joint densities ${p_{SLj}}(u,v,t)$, $j\in \{0,1,2\}$, with $t=30$, ${c_{U}}\approx 1.785$, ${c_{S}}\approx 0.097$, ${c_{L}}\approx 0.275$, and ${p_{1}}=0.9$
Figure 2 presents the defective marginal densities ${p_{Sj}}(s,t)$ and ${p_{Lj}}(s,t)$ when $t=30$ days and ${p_{1}}=0.9$ (that is, the less serious breakdowns occur nine times as often as the serious ones). The marginal densities of both occupation times are severely defective when $X(30)=0$ (the process is in the on-state). As we mentioned above, $S(30)$ and $L(30)$ have atoms at $s=0$:
\[\begin{aligned}{}& \Pr (S(30)=0,X(30)=0)\approx 0.280;\\ {} & \Pr (L(30)=0,X(30)=0)\approx 0.798;\\ {} & \Pr (S(30)=0,X(30)=1)\approx 0.000;\\ {} & \Pr (L(30)=0,X(30)=1)\approx 0.034;\\ {} & \Pr (S(30)=0,X(30)=2)\approx 0.004;\\ {} & \Pr (L(30)=0,X(30)=2)\approx 0.000.\end{aligned}\]
Figure 3 shows the contour plots of the defective joint densities ${p_{SLj}}(u,v,t)$ with the same parameter setup, where the negative association of the two occupation times is obvious. We also ran a simulation study to numerically confirm the correctness of Theorem 1 and Theorem 2. A total of $1,000,000$ realizations of the above on-off process were generated, and the empirical results are consistent with the theoretical densities (not shown).
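A check of this kind can be reproduced along the following lines; this is only a rough sketch with a much smaller sample size, and the sample-path routine is our own.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
cU, cS, cL, p1, t = 1.785, 0.097, 0.275, 0.9, 30.0

def one_realization():
    """S(t), L(t) for one simulated path with Lévy holding times."""
    clock, S, L = 0.0, 0.0, 0.0
    while clock < t:
        clock += stats.levy.rvs(scale=cU**2, random_state=rng)   # on-state holding time
        if clock >= t:
            break
        if rng.random() < p1:
            d = stats.levy.rvs(scale=cS**2, random_state=rng)    # short repair
            S += min(d, t - clock)
        else:
            d = stats.levy.rvs(scale=cL**2, random_state=rng)    # long repair
            L += min(d, t - clock)
        clock += d
    return S, L

draws = np.array([one_realization() for _ in range(20_000)])
S, L = draws[:, 0], draws[:, 1]
# compare with Table 1 (t = 30, p1 = 0.90)
print(S.mean(), S.var(), L.mean(), L.var(), np.corrcoef(S, L)[0, 1])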
Table 1 summarizes the expectation and variance of the two types of off-state occupation times and the associated repair cost for $t\in \{30,60\}$ days and ${p_{1}}\in \{0.70,0.75,\dots ,0.95,0.99\}$. As ${p_{1}}$ increases, the expectation and the variance go up for the less serious breakdown time but go down for the more serious breakdown time. The total repair cost has lower expectation and variance for larger values of ${p_{1}}$. The correlation of the two off-state occupation times is negative, with a magnitude decreasing as ${p_{1}}$ increases. When t is doubled, the expectations of the occupation times and of the total repair cost are slightly more than doubled, because the process always starts from the on-state with a random holding time.
Table 1.
Summaries of off-state occupation times and total repair cost with ${c_{U}}\approx 1.785$, ${c_{S}}\approx 0.097$, and ${c_{L}}\approx 0.275$ as ${p_{1}}$ increases
t ${p_{1}}$ $\mathrm{E}(S(t))$ $\mathrm{Var}(S(t))$ $\mathrm{E}(L(t))$ $\mathrm{Var}(L(t))$ $\mathrm{E}(C(t))$ $\mathrm{Var}(C(t))$ $\rho (S(t),L(t))$
30 0.70 0.81 10.82 0.95 13.01 2.71 60.99 −0.040
0.75 0.87 11.60 0.79 10.95 2.46 53.71 −0.037
0.80 0.93 12.38 0.64 8.85 2.20 46.32 −0.035
0.85 0.99 13.15 0.48 6.70 1.95 38.80 −0.031
0.90 1.05 13.93 0.32 4.51 1.69 31.16 −0.026
0.95 1.11 14.70 0.16 2.28 1.43 23.38 −0.019
0.99 1.16 15.32 0.03 0.46 1.23 17.07 −0.009
60 0.70 1.75 48.20 2.08 57.96 5.92 271.58 −0.040
0.75 1.88 51.68 1.74 48.81 5.37 239.32 −0.038
0.80 2.02 55.16 1.40 39.46 4.82 206.48 −0.035
0.85 2.15 58.64 1.05 29.91 4.26 173.04 −0.031
0.90 2.29 62.13 0.70 20.16 3.70 139.01 −0.026
0.95 2.42 65.62 0.35 10.19 3.13 104.37 −0.019
0.99 2.53 68.41 0.07 2.05 2.67 76.21 −0.009
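As a quick consistency check on Table 1, note that $\mathrm{E}(C(t))=\mathrm{E}(S(t))+2\mathrm{E}(L(t))$ and $\mathrm{Var}(C(t))=\mathrm{Var}(S(t))+4\mathrm{Var}(L(t))+4\mathrm{Cov}(S(t),L(t))$. For example, with $t=30$ and ${p_{1}}=0.90$ the table gives $\mathrm{E}(C(t))\approx 1.05+2\times 0.32=1.69$ and $\mathrm{Var}(C(t))\approx 13.93+4\times 4.51+4\times (-0.026)\sqrt{13.93\times 4.51}\approx 31.2$, in agreement with the reported values.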

6 Concluding remarks

The marginal distribution of an off-state occupation time could be obtained by collapsing the on-state with the other off-state and using results on the telegraph process. Nonetheless, this approach would lead to more complicated formulas because the holding time of the new collapsed state is distributed as an infinite mixture of convolutions. Further, this technique cannot be employed if we need the joint distribution of the two off-state occupation times.
The total repair cost process $C(t)$ can also be viewed as a telegraph process governed by the on-off process with multiple off-states. That is, $C(t)$ is the time-t position of a particle that moves with speed ${\alpha _{j}}$ whenever $X(t)=j$, $j=0,1,2$. The formulas for the joint distribution of $X(t)$ and the occupation times can be useful for parameter estimation when the process $C(t)$ is discretely observed. In particular, if $X(t)$ is also Markov (that is, all the holding times are exponential), then efficient likelihood estimation is possible with the help of tools for hidden Markov models [14].

References

[1] 
Bshouty, D., Di Crescenzo, A., Martinucci, B., Zacks, S.: Generalized telegraph process with random delays. J. Appl. Probab. 49, 850–865 (2012). MR3012104. https://doi.org/10.1017/s002190020000958x
[2] 
Cane, V.R.: Behavior sequences as semi-Markov chains. J. R. Stat. Soc., Ser. B 21, 36–58 (1959). MR0109095
[3] 
Crimaldi, I., Di Crescenzo, A., Iuliano, A., Martinucci, B.: A generalized telegraph process with velocity driven by random trials. Adv. Appl. Probab. 45(4), 1111–1136 (2013). MR3161299. https://doi.org/10.1239/aap/1386857860
[4] 
De Gregorio, A., Macci, C.: Large deviation principles for telegraph processes. Stat. Probab. Lett. 82(11), 1874–1882 (2012). MR2970286
[5] 
Di Crescenzo, A.: On random motions with velocities alternating at Erlang-distributed random times. Adv. Appl. Probab. 33, 690–701 (2001). MR1860096. https://doi.org/10.1239/aap/1005091360
[6] 
Di Crescenzo, A., Iuliano, A., Martinucci, B., Zacks, S.: Generalized telegraph process with random jumps. J. Appl. Probab. 50(2), 450–463 (2013). MR3102492
[7] 
Di Crescenzo, A., Martinucci, B., Zacks, S.: On the geometric Brownian motion with alternating trend. In: Perna, C., Sibillo, M. (eds.) Mathematical and Statistical Methods for Actuarial Sciences and Finance, pp. 81–85 (2014).
[8] 
Di Crescenzo, A., Zacks, S.: Probability law and flow function of Brownian motion driven by a generalized telegraph process. Methodol. Comput. Appl. Probab. 17(3), 761–780 (2015). MR3377859. https://doi.org/10.1007/s11009-013-9392-1
[9] 
Kolesnik, A.D., Ratanov, N.: Telegraph Processes and Option Pricing. Springer Briefs in Statistics. Springer (2013). MR3115087. https://doi.org/10.1007/978-3-642-40526-6
[10] 
Macci, C.: Large deviations for some non-standard telegraph processes. Stat. Probab. Lett. 110, 119–127 (2016). MR3474745
[11] 
Newman, D.S.: On the probability distribution of a filtered random telegraph signal. Ann. Math. Stat. 39(3), 890–896 (1968). MR0230564. https://doi.org/10.1214/aoms/1177698321
[12] 
Page, E.S.: Theoretical considerations of routine maintenance. Comput. J. 2(4), 199–204 (1960).
[13] 
Perry, D., Stadje, W., Zacks, S.: First-exit times for increasing compound processes. Commun. Stat., Stoch. Models 15(5), 977–992 (1999). MR1721237. https://doi.org/10.1080/15326349908807571
[14] 
Pozdnyakov, V., Elbroch, L.M., Hu, C., Meyer, T., Yan, J.: On estimation for Brownian motion governed by telegraph process with multiple off states. Methodol. Comput. Appl. Probab. 22, 1275–1291 (2020). MR4129134. https://doi.org/10.1007/s11009-020-09774-1
[15] 
Pozdnyakov, V., Elbroch, L.M., Labarga, A., Meyer, T., Yan, J.: Discretely observed Brownian motion governed by telegraph process: estimation. Methodol. Comput. Appl. Probab. 21(3), 907–920 (2019). MR4001858. https://doi.org/10.1007/s11009-017-9547-6
[16] 
Ratanov, N.: Piecewise linear process with renewal starting points. Stat. Probab. Lett. 131, 78–86 (2017). MR3706699. https://doi.org/10.1016/j.spl.2017.08.010
[17] 
Stadje, W., Zacks, S.: Telegraph processes with random velocities. J. Appl. Probab. 41, 665–678 (2004). MR2074815. https://doi.org/10.1239/jap/1091543417
[18] 
Xu, Y., De, S.K., Zacks, S.: Exact distribution of intermittently changing positive and negative compound Poisson process driven by an alternating renewal process and related functions. Probab. Eng. Inf. Sci. 29(3), 385–397 (2015). MR3355610
[19] 
Zacks, S.: Generalized integrated telegraph processes and the distribution of related stopping times. J. Appl. Probab. 41(2), 497–507 (2004). MR2052587. https://doi.org/10.1239/jap/1082999081