Modern Stochastics: Theory and Applications
An estimate for an expectation of the simultaneous renewal for time-inhomogeneous Markov chains
Volume 3, Issue 4 (2016), pp. 315–323
Vitaliy Golomoziy  

https://doi.org/10.15559/16-VMSTA68
Pub. online: 23 December 2016      Type: Research Article      Open Access

Received: 28 November 2016
Revised: 6 December 2016
Accepted: 7 December 2016
Published: 23 December 2016

Abstract

In this paper, we consider two time-inhomogeneous Markov chains ${X_{t}^{(l)}}$, $l\in \{1,2\}$, with discrete time on a general state space. We assume the existence of some renewal set C and investigate the time of simultaneous renewal, that is, the first positive time when the chains hit the set C simultaneously. The initial distributions for both chains may be arbitrary. Under the condition of stochastic domination and nonlattice condition for both renewal processes, we derive an upper bound for the expectation of the simultaneous renewal time. Such a bound was calculated for two time-inhomogeneous birth–death Markov chains.

1 Introduction

1.1 Overview

Simultaneous renewal is an important topic for practical applications of Markov chains. Although it is of independent interest, for example in queuing theory, we study it because it plays an essential role in the coupling construction, which can be used to derive stability estimates for the n-step transition probabilities and other results such as the law of large numbers and limit theorems.
For example, [5, 6] show how a stability estimate can be calculated using the coupling method for two time-inhomogeneous Markov chains with discrete time on a general state space. Good examples of applications of the coupling method (for both homogeneous and inhomogeneous Markov chains) are given in [2, 3].
It is worth mentioning that the coupling construction for time-inhomogeneous chains is slightly different from its classical setup (see, e.g., [16, 17]). Such a time-inhomogeneous coupling for a general state space can be found in [5]. Its modification, called the maximal coupling, can be used for a discrete space. More information about maximal coupling and its application to stability in the time-homogeneous case can be found in [14, 15].
For maximal coupling and its application to stability in the time-inhomogeneous case, see [10–12].
The crucial problem in applying the results of the papers listed above was the calculation of the expectation of the coupling moment derived from the simultaneous renewal, but no good estimates for the expectation of the simultaneous renewal time were available in the time-inhomogeneous case.
For the time-homogeneous case, the paper [13] proposes such an estimate based on the Daley inequality (see [1]).
In [9], we derived conditions (see Thm. 3.1) that guarantee that the expectation of the simultaneous renewal time is finite, but no practical estimates for this expectation were given there.
In [8], we derived an analogue of the Daley inequality that is used in this paper. The key condition for this inequality is the finiteness of the second moment of the stochastic dominant of the original renewal process, which is why this is a crucial condition for the construction of the estimate.

1.2 Definitions and notation

We consider two independent time-inhomogeneous Markov chains with discrete time and general state space $(E,\mathfrak{E})$. We assume that both chains are defined on the same probability space $(\varOmega ,\mathfrak{F},\mathbb{P})$. Denote these chains as $({X_{n}^{(1)}})$, $({X_{n}^{(2)}})$, $n\ge 0$. We use the following notation for the one-step transition probabilities:
(1)
\[P_{lt}(x,A)=\mathbb{P}\big\{{X_{t+1}^{(l)}}\in A\big|{X_{t}^{(l)}}=x\big\},\]
where $x\in E$ is an arbitrary element, $l\in \{1,2\}$, and $A\in \mathfrak{E}$ is an arbitrary set.
We continue to use the definitions and notation from [9]. We consider some set $C\in \mathfrak{E}$, and our goal is to find an upper bound for the expectation of the first time of visiting the set C by both chains.
Define the renewal intervals
(2)
\[\begin{array}{c}{\theta _{0}^{(l)}}=\inf \big\{t\ge 0:{X_{t}^{(l)}}\in C\big\},\\{} {\theta _{n}^{(l)}}=\inf \big\{t\ge 1:{X_{{\theta _{0}^{(l)}}+\cdots +{\theta _{n-1}^{(l)}}+t}^{(l)}}\in C\big\},\end{array}\]
where $l\in \{1,2\}$, $n\ge 1$, and renewal times
(3)
\[{\tau _{n}^{(l)}}=\sum \limits_{k=0}^{n}{\theta _{k}^{(l)}},\hspace{1em}l\in \{1,2\},\hspace{2.5pt}n\ge 1.\]
Then we can define the renewal probabilities
(4)
\[{g_{n}^{(t,l)}}=\mathbb{P}\big\{{\theta _{k}^{(l)}}=n\big|{\tau _{k-1}^{(l)}}=t\big\},\hspace{1em}\hspace{2.5pt}l\in \{1,2\},\hspace{2.5pt}n\ge 1.\]
It is worth mentioning that, in general, ${g_{n}^{(t,l)}}$ also depends on the state $x={X_{t}^{(l)}}\in C$ at the renewal time, since the chain may hit different states inside C. However, we will omit x for simplicity. We refer the reader to [9] for more details about definition (4).
Let us define the renewal sequence recursively:
(5)
\[\begin{array}{c}{u_{0}^{(t,l)}}=1,\\{} {u_{n}^{(t,l)}}=\displaystyle \sum \limits_{k=0}^{n-1}{u_{k}^{(t,l)}}{g_{n-k}^{(t+k,l)}}.\end{array}\]
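For readers who wish to compute the renewal sequence numerically, the following minimal Python sketch (not part of the original article) implements the recursion (5); the function name renewal_sequence and the convention that the renewal probabilities are supplied as a callable g(t, n) are illustrative assumptions.

# Minimal sketch (not from the article): compute u_0^{(t)}, ..., u_N^{(t)}
# from the recursion (5), for one chain and a fixed starting time t.
# The renewal probabilities are supplied as a callable g(t, n) = g_n^{(t)}.
def renewal_sequence(g, t, N):
    u = [1.0]  # u_0^{(t)} = 1
    for n in range(1, N + 1):
        # u_n^{(t)} = sum_{k=0}^{n-1} u_k^{(t)} * g_{n-k}^{(t+k)}
        u.append(sum(u[k] * g(t + k, n - k) for k in range(n)))
    return u

# Example: time-independent geometric probabilities g_n = q (1-q)^{n-1}
# give u_n = q for every n >= 1, which illustrates Condition A below with gamma = q.
print(renewal_sequence(lambda t, n: 0.3 * 0.7 ** (n - 1), t=0, N=5))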
The time of simultaneous hitting the set C is defined as
(6)
\[T=\inf \big\{t>0:\hspace{2.5pt}\exists m,n,\hspace{2.5pt}t={\tau _{m}^{(1)}}={\tau _{n}^{(2)}}\big\}.\]
The notion of the overshoot or excess is defined as follows:
(7)
\[{R_{n}^{(l)}}=\inf \big\{t>n:\hspace{2.5pt}{X_{t}^{(l)}}\in C\big\},\hspace{1em}l\in \{1,2\}.\]
It is, in fact, the next time after n when the chain ${X}^{(l)}$ hits the set C.
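As a companion illustration (again not part of the original article), definitions (6) and (7) translate directly into code once the renewal times of the two chains are available, for instance from a simulation; the function names below are hypothetical.

# Illustrative helpers for definitions (6) and (7); tau1 and tau2 are iterables of
# simulated renewal times tau_m^{(1)} and tau_n^{(2)} of the two chains.
def simultaneous_hitting_time(tau1, tau2):
    common = set(tau1) & set(tau2)
    common.discard(0)                       # definition (6) requires t > 0
    return min(common) if common else None  # None if no common renewal was observed

def overshoot(renewal_times, n):
    # R_n from (7): the first renewal time strictly after n.
    later = [t for t in renewal_times if t > n]
    return min(later) if later else None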

2 Estimate for the expectation of the simultaneous hitting time

First, we need to impose a condition on ${u_{n}^{(t,l)}}$ that guarantees its separation from 0. In the time-homogeneous case, this follows from the renewal theorem, but for the time-inhomogeneous case, there is no such theorem. Therefore, we need the following condition.
Condition A. There are a constant $\gamma >0$ and a number $n_{0}\ge 0$ such that, for all t, l and $n\ge n_{0}$,
(8)
\[{u_{n}^{(t,l)}}\ge \gamma .\]
It is important that this condition also guarantees a certain “regularity” of the chain in terms of periodicity: periodic chains obviously do not satisfy it.
There are various theorems that allow us to check Condition A in practice. See, for example, [7], Theorems 4.1, 4.2, 4.3. We will later use some of them.
We need a condition of the stochastic domination in order to apply Theorem 3.1 from [8].
Condition B. Distributions $({g_{n}^{(t,l)}})$ are stochastically dominated by some sequence $(\hat{g}_{n})$, $\hat{g}_{n}\ge 0$, which means that
(9)
\[{G_{n}^{(t,l)}}=\sum \limits_{k>n}{g_{k}^{(t,l)}}\le \hat{G}_{n}=\sum \limits_{k>n}\hat{g}_{k}\]
and that the stochastic dominant $(\hat{g}_{n})$ has finite first and second moments
(10)
\[\begin{array}{c}\hat{\mu }_{1}=\displaystyle \sum \limits_{k\ge 1}k\hat{g}_{k}=\displaystyle \sum \limits_{k\ge 0}\hat{G}_{k}<\infty ,\\{} \hat{\mu }_{2}=\displaystyle \sum \limits_{k\ge 1}{k}^{2}\hat{g}_{k}<\infty .\end{array}\]
The sequence $\hat{G}_{n}$ is nonincreasing because $\hat{g}_{n}\ge 0$.
It is worth mentioning that we do not require $(\hat{g}_{n})$ to be a probability distribution, that is, the total mass $\sum _{k\ge 1}\hat{g}_{k}$ does not necessarily equal 1.
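In practice, Condition B can be verified numerically on truncated sequences. The sketch below is an illustration, not part of the article: it compares the tails in (9) up to a chosen truncation level and computes the moments (10); all names and the truncation convention are assumptions.

# Illustrative check of Condition B on truncated sequences g = [g_1, ..., g_N],
# ghat = [ghat_1, ..., ghat_N]; index k in the math corresponds to position k-1.
def tail(seq, n):
    return sum(seq[n:])            # G_n = sum_{k > n} g_k

def dominates(ghat, g_sequences, N):
    # (9): Ghat_n must bound G_n^{(t,l)} for every supplied sequence and n = 0..N.
    return all(tail(ghat, n) >= tail(g, n) for g in g_sequences for n in range(N + 1))

def moments(ghat):
    mu1 = sum(k * gk for k, gk in enumerate(ghat, start=1))      # first moment in (10)
    mu2 = sum(k * k * gk for k, gk in enumerate(ghat, start=1))  # second moment in (10)
    return mu1, mu2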
Theorem 1.
Assume that Conditions A and B hold for the chains $({X_{n}^{(l)}})$, $l\in \{1,2\}$, defined above and for the renewal sequences generated by them. Then the expectation of the simultaneous hitting time for the set C satisfies the inequality
(11)
\[\mathbb{E}[T]\le \mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\mathbb{E}\big[{\theta _{0}^{(2)}}\big]+\frac{M}{\gamma },\]
where
(12)
\[M=\hat{\mu }_{2}+\hat{\mu }_{1}\big({\gamma }^{-1}+n_{0}\big).\]
Proof.
Let us recall the notation from [9]:
\[\nu _{0}:=\min \big\{j\ge 1:{\tau _{j}^{(1)}}>n_{0}\big\},\]
\[B_{0}:={\tau _{\nu _{0}}^{(1)}},\]
\[\nu _{1}:=\min \big\{j\ge \nu _{0}:{\tau _{j}^{(2)}}-{\tau _{\nu _{0}}^{(1)}}>n_{0},\hspace{2.5pt}\text{or}\hspace{2.5pt}{\tau _{j}^{(2)}}-{\tau _{\nu _{0}}^{(1)}}=0\big\},\]
\[B_{1}:={\tau _{\nu _{1}}^{(2)}}-{\tau _{\nu _{0}}^{(1)}},\]
and further on
\[\nu _{2m}:=\min \big\{j\ge \nu _{2m-1}:\hspace{2.5pt}{\tau _{j}^{(1)}}-{\tau _{\nu _{2m-1}}^{(2)}}>n_{0},\hspace{2.5pt}\text{or}\hspace{2.5pt}{\tau _{j}^{(1)}}-{\tau _{\nu _{2m-1}}^{(2)}}=0\big\},\]
\[B_{2m}:={\tau _{\nu _{2m}}^{(1)}}-{\tau _{\nu _{2m-1}}^{(2)}},\]
\[\nu _{2m+1}:=\min \big\{j\ge \nu _{2m}:\hspace{2.5pt}{\tau _{j}^{(2)}}-{\tau _{\nu _{2m}}^{(1)}}>n_{0},\hspace{2.5pt}\text{or}\hspace{2.5pt}{\tau _{j}^{(2)}}-{\tau _{\nu _{2m}}^{(1)}}=0\big\},\]
\[B_{2m+1}:={\tau _{\nu _{2m+1}}^{(2)}}-{\tau _{\nu _{2m}}^{(1)}}.\]
The moments $\nu _{k}$ are called coupling trials. Let us define $\tau =\min \{n\ge 1:B_{n}=0\}$ and the sequence of sigma-fields $\mathfrak{B}_{n}$, $n\ge 0$, by
\[\mathfrak{B}_{n}=\sigma \big[B_{k},\nu _{k},{\tau _{j}^{l}},\hspace{2.5pt}k\le n,\hspace{2.5pt}j\le \nu _{n}\big].\]
We will use the same idea as in Theorem 5.1 from [9].
First, we assume that ${\theta _{0}^{(2)}}=0$, which means that the second chain starts from the set C.
The following estimate for the time T follows directly from the definitions:
(13)
\[T\le {\theta _{0}^{(1)}}+\sum \limits_{n=0}^{\tau }B_{n}={\theta _{0}^{(1)}}+\sum \limits_{n\ge 0}B_{n}\mathbb{1}_{\tau >n}.\]
Using Lemma 1 and the fact that $\{\tau >n-1\}\in \mathfrak{B}_{n-1}$, we can derive the following inequality:
(14)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}[B_{n}\mathbb{1}_{\tau >n}|\mathfrak{B}_{n-1}]& \displaystyle =\mathbb{E}[B_{n}\mathbb{1}_{\tau \ge n}|\mathfrak{B}_{n-1}]+\mathbb{E}[0\cdot \mathbb{1}_{\tau =n}|\mathfrak{B}_{n-1}]\\{} & \displaystyle =\mathbb{1}_{\tau \ge n}\mathbb{E}[B_{n}|\mathfrak{B}_{n-1}]\le \mathbb{1}_{\tau \ge n}M.\end{array}\]
Lemma 8.5 from [9] implies
(15)
\[\mathbb{P}\{\tau >n\}\le {(1-\gamma )}^{n}.\]
Taking the unconditional expectation of both sides of (14) and applying (15), we obtain
(16)
\[\mathbb{E}[B_{n}\mathbb{1}_{\tau >n}]\le M\mathbb{P}\{\tau >n\}\le M{(1-\gamma )}^{n}.\]
Applying this inequality to (13), we have
(17)
\[\begin{array}{r@{\hskip0pt}l}\displaystyle \mathbb{E}[T]& \displaystyle \le \mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\mathbb{E}\Bigg[\sum \limits_{n=0}^{\tau }B_{n}\Bigg]=\mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\sum \limits_{n\ge 0}\mathbb{E}[B_{n}\mathbb{1}_{\tau >n}]\\{} & \displaystyle \le \mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\sum \limits_{n\ge 0}M{(1-\gamma )}^{n}=\mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\frac{M}{\gamma }.\end{array}\]
Now, we have to get rid of the assumption ${\theta _{0}^{(2)}}=0$. The same calculations as in [9] after formula (20) give us
\[\mathbb{E}[T]\le \mathbb{E}\big[\max \big({\theta _{0}^{(1)}},{\theta _{0}^{(2)}}\big)\big]+\frac{M}{\gamma }\le \mathbb{E}\big[{\theta _{0}^{(1)}}\big]+\mathbb{E}\big[{\theta _{0}^{(2)}}\big]+\frac{M}{\gamma }.\]
 □
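To illustrate how the bound of Theorem 1 is used in computations, the following small helper (an illustration with hypothetical naming, not part of the article) evaluates (11)–(12) from the ingredients γ, n_0, μ̂_1, μ̂_2 and the expected first hitting times.

# Evaluate the bound (11)-(12) of Theorem 1 (illustrative helper).
def simultaneous_renewal_bound(mu1_hat, mu2_hat, gamma, n0, e_theta1=0.0, e_theta2=0.0):
    M = mu2_hat + mu1_hat * (1.0 / gamma + n0)   # (12)
    return e_theta1 + e_theta2 + M / gamma       # (11)

When both chains start inside the set C, the expected first hitting times vanish and the bound reduces to M/γ; with n_0 = 0 this is exactly the form appearing in Theorem 2 below.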

3 Application to the birth–death processes

Consider two time-inhomogeneous processes ${X}^{(1)}$ and ${X}^{(2)}$ with the following transition probabilities on the t-th step:
(18)
\[P_{t}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\alpha _{t0}& 1-\alpha _{t0}& 0& 0& 0& \dots \\{} \alpha _{t1}& 0& 1-\alpha _{t1}& 0& 0& \dots \\{} 0& \alpha _{t2}& 0& 1-\alpha _{t2}& 0& \dots \\{} \dots \end{array}\right)\]
and
(19)
\[Q_{t}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\beta _{t0}& 1-\beta _{t0}& 0& 0& 0& \dots \\{} \beta _{t1}& 0& 1-\beta _{t1}& 0& 0& \dots \\{} 0& \beta _{t2}& 0& 1-\beta _{t2}& 0& \dots \\{} \dots \end{array}\right).\]
We would like to estimate the expectation of the simultaneous renewal time by applying Theorem 1, so we have to check the regularity Condition A and the domination Condition B.
We will need the second moment of the dominating distribution, which is difficult to derive for the chains ${X}^{(1)}$ and ${X}^{(2)}$. So the idea is to construct a domination sequence based on some simple homogeneous Markov chain whose renewal sequence is well studied and whose second moment can be calculated easily. The chain closest to the birth–death chains considered here is a random walk on the half-line.
The domination sequence based on such a random walk is constructed in Lemma 2, and Lemma 3 gives its first and second moments, which we need for Theorem 1.
Next, we check the regularity Condition A. First, we assume that, for every t, ${g_{1}^{(t,1)}}=\alpha _{t0}>0$ and ${g_{1}^{(t,2)}}=\beta _{t0}>0$, and
(20)
\[\gamma _{0}=\underset{t}{\inf }\{\alpha _{t0},\beta _{t0}\}>0.\]
We will use the Corollary to Theorem 4.2 from [7] in order to check Condition A. It says that if ${g_{1}^{(t)}}>0$ and a domination sequence exists, then Condition A holds. Moreover, its proof (see [7, p. 12], the inequality for $F(G)$) contains an estimate for γ:
(21)
\[\gamma =\exp \big(\hat{\mu }\ln (\gamma _{0})/\gamma _{0}\big).\]
Finally, we can state the following result.
Theorem 2.
Assume that, for the chains with transition probabilities $P_{t}$ and $Q_{t}$ defined above, condition (20) holds and that there exists p satisfying condition (28) for both chains ${X}^{(1)}$ and ${X}^{(2)}$. If both chains start from the zero state, then the expectation of their simultaneous renewal time satisfies the inequality
(22)
\[\mathbb{E}[T]\le \hat{\mu }_{2}/\gamma +\hat{\mu }_{1}/{\gamma }^{2},\]
where $\hat{\mu }_{1},\hat{\mu }_{2}$ are defined in Lemma 3, and γ is defined in (21).
Proof.
The statement of the theorem follows from Theorem 1, applied to the chains ${X}^{(1)}$ and ${X}^{(2)}$ with the domination sequence constructed in Lemma 2, the constant γ defined in (21), and the moments $\hat{\mu }_{1}$ and $\hat{\mu }_{2}$ calculated in Lemma 3. Since both chains start from the zero state, which belongs to $C=\{0\}$, we have ${\theta _{0}^{(1)}}={\theta _{0}^{(2)}}=0$, and the bound (11)–(12) with $n_{0}=0$ reduces to (22).  □
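A hedged numeric illustration of Theorem 2 (not from the article): the values of p and γ_0 below are placeholders, condition (28) still has to be checked for the actual chains, and μ̂ in (21) is taken here to be the first moment μ̂_1 of the dominating sequence.

import math

def theorem2_bound(p, gamma0):
    # Moments of the dominating sequence from Lemma 3 / formula (34).
    mu1_hat = 2.0 / (2 * p - 1) + 1.0
    mu2_hat = ((2 + 8 * p * (1 - p) / (1 - 4 * p * (1 - p))) / (2 * p - 1)
               + 2.0 / (2 * p - 1) + 1.0)
    gamma = math.exp(mu1_hat * math.log(gamma0) / gamma0)   # estimate (21)
    return mu2_hat / gamma + mu1_hat / gamma ** 2           # bound (22)

# Placeholder values: p = 0.7, gamma0 = 0.5 (both assumed, not from the article).
print(theorem2_bound(0.7, 0.5))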

4 Auxiliary results

Lemma 1.
We have the inequality
(23)
\[\mathbb{E}[B_{n}|\mathfrak{B}_{n-1}]\le M,\]
for $n\ge 1$, where M is defined in (12).
Proof.
From Lemma 8.3 of [9] we can derive:
(24)
\[\begin{array}{c}\mathbb{E}[B_{2n+1}|\mathfrak{B}_{2n}]=\displaystyle \sum \limits_{t}\mathbb{E}\big[{R_{t+n_{0}}^{(2)}}\big]\mathbb{1}_{{\tau _{\nu _{2n}}^{(1)}}=t},\\{} \mathbb{E}[B_{2n}|\mathfrak{B}_{2n-1}]=\displaystyle \sum \limits_{t}\mathbb{E}\big[{R_{t+n_{0}}^{(1)}}\big]\mathbb{1}_{{\tau _{\nu _{2n-1}}^{(2)}}=t}.\end{array}\]
At the same time, taking into account the domination Condition B, Theorem 3.1 from [8] gives us the inequality
(25)
\[\mathbb{E}\big[{R_{m}^{(l)}}\big]\le M,\hspace{1em}m\ge 0,\hspace{2.5pt}l\in \{1,2\}.\]
Substituting (25) into (24) yields the required result (23).  □
Lemma 2.
Consider the following time-inhomogeneous birth–death Markov chain $Z_{t}$ with the transition probabilities on the t-th step
(26)
\[P_{t}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}\alpha _{t0}& 1-\alpha _{t0}& 0& 0& 0& \dots \\{} \alpha _{t1}& 0& 1-\alpha _{t1}& 0& 0& \dots \\{} 0& \alpha _{t2}& 0& 1-\alpha _{t2}& 0& \dots \\{} \dots \end{array}\right),\]
and the time-homogeneous random walk $\hat{Z}_{t}$ with the transition probability matrix
(27)
\[\hat{P}=\left(\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c@{\hskip10.0pt}c}0& 1& 0& 0& 0& \dots \\{} p& 0& 1-p& 0& 0& \dots \\{} 0& p& 0& 1-p& 0& \dots \\{} \dots ,\end{array}\right).\]
Let
\[C=\{0\},\]
and let ${g_{n}^{(t)}}$ be the distribution of the first return to 0 after the time t for the chain Z, given that it is at state 0 at the moment t.
Assume that there exists some p such that, for all t, i, s, j, the following inequalities hold:
(28)
\[\begin{array}{c}p>1/2,\\{} p(1-p)\ge (1-\alpha _{ti})\alpha _{sj},\hspace{1em}\forall t,s,i,j.\end{array}\]
Denote by $f_{n}$ the renewal probability for the chain $\hat{Z}_{t}$ ($f_{n}$ is the probability of the first return to 0 for the chain $\hat{Z}$ started at 0):
(29)
\[f_{n}=\hat{\mathbb{P}}\{\hat{Z}_{0}=0,\hat{Z}_{1}\ne 0,\dots ,\hat{Z}_{n-1}\ne 0,\hat{Z}_{n}=0\},\]
and let $\hat{g}_{n}=f_{n}/p$, $n>1$, and $\hat{g}_{1}=1$.
Then the sequence $(\hat{g}_{n})_{n\ge 1}$ stochastically dominates ${g_{n}^{(t)}}$, or, in other words,
(30)
\[{G_{k}^{(t)}}\le \hat{G}_{k},\]
where ${G_{n}^{(t)}}=\sum _{k>n}{g_{k}^{(t)}}$, $\hat{G}_{n}=\sum _{k>n}\hat{g}_{k}$, and $\mathbb{P}$, $\mathbb{E}$ and $\hat{\mathbb{P}}$, $\hat{\mathbb{E}}$ are the probabilities and expectations on the canonical probability space for the chains Z and $\hat{Z}$, respectively.
Proof.
First of all, notice that $(\hat{g}_{n})$ is not a probability distribution. However, this causes no difficulty since the domination sequence in our construction does not have to be a distribution.
We will show that
(31)
\[\hat{g}_{n}\ge {g_{n}^{(t)}},\]
for all $t,n$.
Let us start with $n=1$:
\[{g_{1}^{(t)}}=\alpha _{t0}\le 1=\hat{g}_{1}.\]
For $n>1$, consider the event
\[A_{(2n)t}=\{Z_{t}=0,Z_{t+1}\ne 0,\dots ,Z_{t+2n-1}\ne 0,Z_{t+2n}=0\}.\]
It can be interpreted as a set of trajectories $\omega =(\omega _{1},\dots ,\omega _{2n})$, $\omega _{i}\in \{+1,-1\}$, where $\omega _{i}=+1$ if $Z_{t+i}$ goes up and $\omega _{i}=-1$ otherwise. It is clear that, in order to return to 0 at time $2n$, there should be exactly n steps up (the first one must be a step up) and n steps down. It is worth mentioning that not every trajectory of length $2n$ that has n steps up and n steps down belongs to $A_{(2n)t}$ because some of them might visit 0 before $2n+t$, which is not allowed for $A_{(2n)t}$. The exact number of such trajectories in $A_{(2n)t}$ is unknown and not important for this proof. What is important is that each $\omega \in A_{(2n)t}$ corresponds to the same trajectory for the chain $\hat{Z}$. This means that summing $\hat{\mathbb{P}}\{\omega \}$ over all $\omega \in A_{(2n)t}$ gives the probability $f_{2n}$. Strictly speaking, the chains Z and $\hat{Z}$ are defined on different probability spaces, but there is an obvious correspondence between the trajectories, and the difference is only in the probabilities. So we can use the same symbol ω for both.
Since ω has exactly n steps up and n steps down, its probability is a product of n factors of the form $(1-\alpha _{t_{i}n_{i}})$ (corresponding to steps up) and n factors of the form $\alpha _{t_{j},n_{i}+1}$ (corresponding to steps down), $i,j=\overline{1,n}$. Notice that some of the $n_{i}$ may coincide.
This means that, after reordering, the probability of such ω can be presented as
(32)
\[\mathbb{P}\{\omega \}=\prod \big((1-\alpha _{t_{i}n_{i}})\alpha _{t_{j}n_{i}+1}\big),\]
for some $t_{i},n_{i},t_{j}$. We emphasize again that the terms in that product may repeat, but this is not important for this proof.
At the same time, it follows from condition (28) that, for any indexes, $\alpha _{tn}(1-\alpha _{sm})\le p(1-p)$, and therefore
(33)
\[\mathbb{P}\{\omega \}\le {\big(p(1-p)\big)}^{n}\le {\big(p(1-p)\big)}^{n-1}=\hat{\mathbb{P}}\{\omega \}/p.\]
Summing over all such ω, we obtain (31) for $n>1$.  □
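The dominating sequence of Lemma 2 can also be written down explicitly: for the random walk $\hat{Z}$ the first-return probabilities $f_{2n}$ admit the classical Catalan-number form (a standard fact about simple random walks, not stated in the article), which gives the following Python sketch; the function names are illustrative.

from math import comb

def f(n, p):
    # f_n from (29): nonzero only for even n >= 2; the number of first-return
    # paths of length 2m from 0 back to 0 is the Catalan number C_{m-1}.
    if n < 2 or n % 2:
        return 0.0
    m = n // 2
    catalan = comb(2 * m - 2, m - 1) // m
    return catalan * p ** m * (1 - p) ** (m - 1)

def g_hat(n, p):
    # Dominating sequence of Lemma 2: ghat_1 = 1 and ghat_n = f_n / p for n > 1.
    return 1.0 if n == 1 else f(n, p) / p

# For p > 1/2 the total mass sum_n g_hat(n, p) equals 1 + 1/p > 1, illustrating
# the remark that (ghat_n) need not be a probability distribution.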
Lemma 3.
The sequence $(\hat{g}_{n})$ defined in Lemma 2 has finite first and second moments
(34)
\[\begin{array}{c}\hat{\mu }_{1}=2/(2p-1)+1,\\{} \hat{\mu }_{2}={(2p-1)}^{-1}\bigg(2+\displaystyle \frac{8p(1-p)}{1-4p(1-p)}\bigg)+2/(2p-1)+1.\end{array}\]
Proof.
First we note that since $\hat{g}_{n}=f_{n}/p$, $n>1$, and $\hat{g}_{1}=1=1+f_{1}$, we have $\hat{\mu }_{1}=\mu _{1}/p+1$ and $\hat{\mu }_{2}=\mu _{2}/p+1$, where $\mu _{1}$ and $\mu _{2}$ are the expectation and the second moment for the probability distribution $f_{n},n\ge 1$.
The generating function $F(z)$ for the distribution $f_{n}$ equals (see, e.g., [4, Ch. XIII])
(35)
\[F(z)=\frac{1-\sqrt{1-4p(1-p){z}^{2}}}{2(1-p)}.\]
So, $\mu _{1}={F^{\prime }}(1)$ and $\mu _{2}={F^{\prime\prime }}(1)+\mu _{1}$. A direct computation gives ${F^{\prime }}(1)=2p/(2p-1)$ and ${F^{\prime\prime }}(1)=2p/(2p-1)+8{p}^{2}(1-p)/{(2p-1)}^{3}$; substituting these into $\hat{\mu }_{1}=\mu _{1}/p+1$ and $\hat{\mu }_{2}=\mu _{2}/p+1$ yields (34).  □
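As a quick sanity check of Lemma 3 (an illustration assuming the sympy library; not part of the article), one can differentiate F(z) from (35) at z = 1 for a sample value of p and apply μ̂_i = μ_i/p + 1 as in the proof.

import sympy as sp

p_val = sp.Rational(3, 4)           # sample value p = 3/4 (an assumption)
z = sp.symbols('z')
F = (1 - sp.sqrt(1 - 4 * p_val * (1 - p_val) * z ** 2)) / (2 * (1 - p_val))
mu1 = sp.diff(F, z).subs(z, 1)               # mu_1 = F'(1)
mu2 = sp.diff(F, z, 2).subs(z, 1) + mu1      # mu_2 = F''(1) + mu_1
mu1_hat = sp.simplify(mu1 / p_val + 1)       # expected: 2/(2p-1) + 1 = 5 at p = 3/4
mu2_hat = sp.simplify(mu2 / p_val + 1)       # expected: 21 at p = 3/4, matching (34)
print(mu1_hat, mu2_hat)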

References

[1] 
Daley, D.: Tight bounds for the renewal function of a random walk. Ann. Probab. 8, 615–621 (1980). MR0573298
[2] 
Douc, R., Moulines, E., Soulier, P.: Practical drift conditions for subgeometric rates of convergence. Ann. Appl. Probab. 14, 1353–1377 (2004). MR2071426. doi:10.1214/105051604000000323
[3] 
Douc, R., Moulines, E., Soulier, P.: Quantitative bounds on convergence of time-inhomogeneous Markov chains. Ann. Appl. Probab. 14, 1643–1665 (2004). MR2099647. doi:10.1214/105051604000000620
[4] 
Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 1. John Wiley and Sons (1957). MR0088081
[5] 
Golomoziy, V.: A subgeometric estimate of the stability for time-homogeneous Markov chains. Theory Probab. Math. Stat. 81, 35–50 (2010). MR2667308. doi:10.1090/S0094-9000-2010-00808-8
[6] 
Golomoziy, V.: An estimate of the stability for nonhomogeneous Markov chains under classical minorization condition. Theory Probab. Math. Stat. 88, 35–49 (2014). MR3112633. doi:10.1090/S0094-9000-2014-00917-5
[7] 
Golomoziy, V.: An inequality for the coupling moment in the case of two inhomogeneous Markov chains. Theory Probab. Math. Stat. 90, 43–56 (2015)
[8] 
Golomoziy, V.: An estimate for an expectation of the excess of the renewal sequence generated by the non-homogeneous Markov chain under a condition of existence square-integrable stochastic dominant. Theory Probab. Math. Stat. 94, 50–59 (2016) (in Ukrainian)
[9] 
Golomoziy, V., Kartashov, M.: On the integrability of the coupling moment for time-inhomogeneous Markov chains. Theory Probab. Math. Stat. 89, 1–12 (2014)
[10] 
Golomoziy, V., Kartashov, M.: Maximal coupling and stability of discrete non-homogeneous Markov chains. Theory Probab. Math. Stat. 91, 17–27 (2015). MR3364120
[11] 
Golomoziy, V., Kartashov, M.: Maximal coupling and v-stability of discrete inhomogeneous Markov chains. Theory Probab. Math. Stat. 93, 22–33 (2015) (in Ukrainian)
[12] 
Golomoziy, V., Kartashov, M., Kartashov, Y.: Impact of the stress factor on the price of widow’s pensions. Proofs. Theory Probab. Math. Stat. 92, 17–22 (2016)
[13] 
Kartashov, M., Golomoziy, V.: The mean coupling time for independent discrete renewal processes. Theory Probab. Math. Stat. 84, 79–86 (2012). MR2857418. doi:10.1090/S0094-9000-2012-00855-7
[14] 
Kartashov, M., Golomoziy, V.: Maximal coupling procedure and stability of discrete Markov chains. I. Theory Probab. Math. Stat. 86, 93–104 (2013). MR2986452. doi:10.1090/S0094-9000-2013-00891-6
[15] 
Kartashov, M., Golomoziy, V.: Maximal coupling procedure and stability of discrete Markov chains. II. Theory Probab. Math. Stat. 87, 65–78 (2013). MR3241447. doi:10.1090/S0094-9000-2014-00905-9
[16] 
Lindvall, T.: Lectures on the Coupling Method. John Wiley and Sons (1992). MR1180522
[17] 
Thorisson, H.: Coupling, Stationarity, and Regeneration. Springer, New York (2000). MR1741181. doi:10.1007/978-1-4612-1236-2