Modern Stochastics: Theory and Applications



Submartingale condition for weak convergence for semi-Markov processes
Vitaliy Golomoziy

https://doi.org/10.15559/26-VMSTA293
Pub. online: 24 February 2025    Type: Research Article    Open Access

Received: 21 September 2025
Revised: 27 December 2025
Accepted: 10 February 2026
Published: 24 February 2025

Abstract

In this paper, we consider a modified version of a well-known submartingale condition for the weak convergence of probability measures, adapted to the semi-Markov case. In this setting, it is convenient to work with an embedded Markov chain and the filtration generated by jump times. We demonstrate that a straightforward restatement of the classical result is not valid, and that an additional condition is required.

1 Introduction

The submartingale condition for weak convergence was introduced in the celebrated book [4], Theorem 1.4.6, and is stated as follows:
Theorem 1.
[D. Stroock, S. Varadhan] Let $\Omega =C([0,\infty );{\mathbb{R}^{d}})$ be the space of continuous functions on $[0,\infty )$ with values in ${\mathbb{R}^{d}}$, and let ${({ℳ_{t}})_{t\ge 0}}$ be the corresponding canonical filtration. Let 𝒫 be a family of probability measures on $(\Omega ,{({ℳ_{t}})_{t\ge 0}})$, such that for any non-negative $f\in {C_{0}^{\infty }}({\mathbb{R}^{d}})$ there exists a constant ${A_{f}}$ such that, for all $P\in 𝒫$, the stochastic process
\[ {\left(f(x(t))+{A_{f}}t,{ℳ_{t}}\right)_{t\ge 0}}\]
is a non-negative P-submartingale. Assume also that ${A_{f}}$ can be selected in such a way that it works for all translations of f (i.e. if $g(x)=f(x-a)$, $a\in {\mathbb{R}^{d}}$, then ${A_{g}}={A_{f}}$). Assume further that
(1)
\[ \underset{c\to \infty }{\lim }\underset{P\in 𝒫}{\sup }P(|x(0)|\ge c)=0.\]
Then the family 𝒫 is weakly precompact (and thus tight).
This condition is useful in many situations, especially in connection with diffusion processes and the associated martingale problem. However, in the theory of semi-Markov processes we typically face a discrete-time martingale problem.
For instance, it is used as a main tool for establishing weak convergence in [2] (see Preface, page vii). Next, we give some basic definitions and recall the main facts about semi-Markov processes. We will use [2] as the main source, and we restrict our attention to processes with values in ${\mathbb{R}^{d}}$.
Definition 1.
A function $Q:{\mathbb{R}^{d}}\times ℬ\times [0,\infty )\to [0,1]$, where ℬ is the Borel σ-field on ${\mathbb{R}^{d}}$, is called a semi-Markov kernel on $({\mathbb{R}^{d}},ℬ)$ if
  • For every $x\in {\mathbb{R}^{d}}$ and $B\in ℬ$, the function $Q(x,B,\cdot )$ is a non-decreasing, right-continuous real function such that $Q(x,B,0)=0$.
  • For every $t\ge 0$, $Q(\cdot ,\cdot ,t)$ is a sub-Markov kernel on $({\mathbb{R}^{d}},ℬ)$.
  • $P(\cdot ,\cdot )=Q(\cdot ,\cdot ,\infty )$ is a Markov kernel on $({\mathbb{R}^{d}},ℬ)$.
Definition 2.
An ${\mathbb{R}^{d}}$-valued Markov renewal process is a two-component, time-homogeneous Markov chain $({x_{n}},{\tau _{n}})$, $n\ge 0$, taking values in ${\mathbb{R}^{d}}\times [0,\infty )$, whose transition probability is defined by a semi-Markov kernel Q as follows:
\[ \mathbb{P}\hspace{-0.1667em}\left({x_{n+1}}\in B,\hspace{3.33333pt}{\tau _{n+1}}-{\tau _{n}}\le t\hspace{3.33333pt}|\hspace{3.33333pt}{𝒢_{n}}\right)=Q({x_{n}},B,t),\]
for any integer $n\ge 0$, real $t\ge 0$, and Borel set $B\subset {\mathbb{R}^{d}}$. We assume that ${\tau _{0}}=0$. Here ${\left({𝒢_{n}}\right)_{n\ge 0}}$ is the natural filtration generated by $\{({x_{n}},{\tau _{n}}):n\ge 0\}$.
In what follows we assume that the semi-Markov kernel $Q(x,B,t)$ admits a representation
(2)
\[ Q(x,B,t)=P(x,B){F_{x}}(t),\]
where $P(x,B)$ is a Markov kernel, and ${F_{x}}$ is a distribution function for every x.
Thus, ${\tau _{n+1}}-{\tau _{n}}$ is a holding time, and when (2) holds, its conditional distribution given ${x_{n}}=x$ is ${F_{x}}$. We denote its mean by
(3)
\[ m(x)={\int _{0}^{\infty }}t\hspace{0.1667em}{F_{x}}(dt)={\int _{0}^{\infty }}\big(1-{F_{x}}(t)\big)\hspace{0.1667em}dt.\]
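The second equality in (3) is the standard tail-integral formula for the mean. A quick numerical check, assuming (purely for illustration, not a choice made in the paper) an exponential holding-time distribution ${F_{x}}(t)=1-{e^{-\lambda t}}$ with mean $1/\lambda$:

```python
import math

# Numerical sketch: verify that the two expressions in (3) agree for an
# exponential holding-time distribution F_x(t) = 1 - exp(-lam*t), whose
# density is lam*exp(-lam*t) and whose mean is 1/lam.  (The exponential
# choice is an illustrative assumption.)
def mean_from_density(lam, h=1e-4, t_max=40.0):
    # m(x) = \int_0^infty t F_x(dt), approximated by a Riemann sum
    return sum(t * lam * math.exp(-lam * t) * h
               for t in (i * h for i in range(int(t_max / h))))

def mean_from_tail(lam, h=1e-4, t_max=40.0):
    # m(x) = \int_0^infty (1 - F_x(t)) dt = \int_0^infty exp(-lam*t) dt
    return sum(math.exp(-lam * t) * h
               for t in (i * h for i in range(int(t_max / h))))

lam = 2.0
m1, m2 = mean_from_density(lam), mean_from_tail(lam)
print(m1, m2)  # both close to 1/lam = 0.5
```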
Definition 3.
A semi-Markov process associated with the Markov renewal process $({x_{n}},{\tau _{n}})$, $n\ge 0$, is the stochastic process
\[ x(t)={x_{\nu (t)}},\hspace{1em}t\ge 0,\]
where
\[ \nu (t)=\sup \{n\ge 0:{\tau _{n}}\le t\},\hspace{1em}t\ge 0,\]
is the counting process of jumps.
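Definitions 2 and 3 translate directly into a simulation. The sketch below uses a one-dimensional Gaussian random-walk kernel $P$ and exponential holding times ${F_{x}}$ with rate $q(x)=1+|x|$; both modelling choices are our own illustrative assumptions, not taken from the paper.

```python
import random

# Minimal simulation sketch of a semi-Markov process (Definition 3) built
# from its Markov renewal process (Definition 2).  The kernel P and the
# holding-time distributions F_x below are hypothetical examples.
random.seed(0)

def step_kernel(x):
    # P(x, dy): a standard Gaussian step from the current state (d = 1)
    return x + random.gauss(0.0, 1.0)

def holding_time(x):
    # F_x: exponential with rate q(x) = 1 + |x|, so m(x) = 1 / (1 + |x|)
    return random.expovariate(1.0 + abs(x))

def sample_renewal_process(x0, horizon):
    """Markov renewal process (x_n, tau_n) with tau_0 = 0, run past `horizon`."""
    xs, taus = [x0], [0.0]
    while taus[-1] <= horizon:
        taus.append(taus[-1] + holding_time(xs[-1]))
        xs.append(step_kernel(xs[-1]))
    return xs, taus

def semi_markov_value(xs, taus, t):
    """x(t) = x_{nu(t)}, where nu(t) = sup{n >= 0 : tau_n <= t}."""
    n = max(i for i, tau in enumerate(taus) if tau <= t)
    return xs[n]

xs, taus = sample_renewal_process(0.0, horizon=5.0)
print(semi_markov_value(xs, taus, 0.0))  # equals the initial state 0.0
```

The trajectory is piecewise constant and right-continuous by construction, which is why the paper works in the Skorokhod space $D[0,\infty )$ later on.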
We also introduce the continuous version of ${\tau _{n}}$ by
\[ \tau (s)={\tau _{\nu (s)}},\]
and the continuous filtration generated by the semi-Markov process by
\[ {ℱ_{t}}=\sigma \big(x(s),\tau (s),0\le s\le t\big).\]
Definition 4.
Let $q(x)=1/m(x)$. We define the compensating operator $\mathbb{L}$ of the Markov renewal process $({x_{n}},{\tau _{n}})$, $n\ge 0$ (or of the associated semi-Markov process $x(t)$, $t\ge 0$) by
\[ \mathbb{L}\varphi (x,t)=q(x)\hspace{-0.1667em}\left[{\int _{0}^{\infty }}{F_{x}}(ds){\int _{{\mathbb{R}^{d}}}}P(x,dy)\hspace{0.1667em}\varphi (y,t+s)-\varphi (x,t)\right],\]
where $\varphi (x,t)$ is a function from an appropriate class of test functions. By convention, we set $\mathbb{L}\varphi (x,t)=0$ when $q(x)=0$.
When $\varphi (x,t)=\varphi (x)$ does not depend on t, this expression reduces to
\[ \mathbb{L}\varphi (x)=q(x)\left({\int _{{\mathbb{R}^{d}}}}P(x,dy)\hspace{0.1667em}\varphi (y)-\varphi (x)\right).\]
Finally, from Proposition 1.4 in [2], we know that the discrete-time process
\[ {Z_{n}^{\varphi }}:=\varphi ({x_{n}},{\tau _{n}})-{\sum \limits_{i=1}^{n}}({\tau _{i}}-{\tau _{i-1}})\hspace{0.1667em}\mathbb{L}\varphi ({x_{i-1}},{\tau _{i-1}}),\hspace{1em}n\ge 0,\]
is a martingale with respect to the discrete filtration ${𝒢_{n}}=\sigma ({x_{k}},{\tau _{k}}:0\le k\le n)$, $n\ge 0$.
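The martingale property of ${Z_{n}^{\varphi }}$ can be sanity-checked by Monte Carlo. The two-state chain below, with a deterministic swap kernel and exponential holding times, is an illustrative assumption of ours (Proposition 1.4 in [2] covers far more general processes); the check is that the sample mean of ${Z_{3}^{\varphi }}$ stays near ${Z_{0}^{\varphi }}=\varphi ({x_{0}})$.

```python
import random

# Monte Carlo sanity check (an illustrative sketch) of the martingale
# property of Z_n^phi = phi(x_n) - sum_i (tau_i - tau_{i-1}) * L phi(x_{i-1})
# for a hypothetical two-state semi-Markov chain: states {0, 1}, the embedded
# chain deterministically alternates, and holding times are Exp(q[x]).
random.seed(1)

q = [1.0, 2.0]          # jump rates, so the mean holding time is m(x) = 1/q[x]
phi = [0.0, 1.0]        # test function on the two states

def L(x):
    # compensating operator for phi independent of t:
    # L phi(x) = q(x) * (sum_y P(x, y) phi(y) - phi(x)); here P swaps states
    return q[x] * (phi[1 - x] - phi[x])

def sample_Z(n_steps, x0=0):
    x, compensator = x0, 0.0
    for _ in range(n_steps):
        theta = random.expovariate(q[x])   # holding time tau_i - tau_{i-1}
        compensator += theta * L(x)
        x = 1 - x                          # embedded chain moves to the other state
    return phi[x] - compensator            # Z_n^phi

n_samples = 200_000
mean_Z3 = sum(sample_Z(3) for _ in range(n_samples)) / n_samples
print(mean_Z3)  # close to Z_0^phi = phi(x_0) = 0, as the martingale property predicts
```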
In applications, we typically use this fact to establish a discrete-time version of the submartingale condition of Theorem 1. Namely, we can establish the following:
Condition D. Let 𝒰 be an index set, and let ${x^{u}}(t)$, $u\in 𝒰$, be a family of semi-Markov processes (with associated Markov renewal processes $({x_{n}^{u}},{\tau _{n}^{u}})$, $n\ge 0$). Assume that condition (1) holds, and that for any non-negative $\varphi \in {C_{0}^{\infty }}({\mathbb{R}^{d}})$ there exists a constant ${A_{\varphi }}\ge 0$, the same for all translations of φ, such that for every $u\in 𝒰$ the discrete-time process
\[ {\big(\varphi ({x_{n}^{u}})+{A_{\varphi }}{\tau _{n}^{u}}\big)_{n\ge 0}}\]
is a non-negative submartingale with respect to its natural filtration ${({𝒢_{n}^{u}})_{n\ge 0}}$.
The question is whether Condition D is sufficient for tightness. The answer is no, as we show in Section 3, so an additional condition is required. We introduce and discuss this condition in Theorem 2. We will see that it is essential to ensure that, uniformly over the family, the expected waiting time until the next jump becomes arbitrarily small; in other words, the frequency of jumps must increase. We also show that this condition indeed holds in some important and typical applications.
This paper is organized as follows. Section 2 contains the main result, and Section 3 presents a counterexample that justifies the additional condition in Theorem 2. Finally, Section 4 is devoted to a special case—families of semi-Markov processes obtained by scaling a single given process in space and time—which are important examples in the theory of semi-Markov approximations.

2 Main result

In this section we assume that $\Omega =D[0,\infty )$ is the Skorokhod space equipped with the standard Borel σ-field $ℬ(\Omega )$ (with respect to the Skorokhod topology; see [1], Chapter 16 for details).
Note that any semi-Markov process defined through a Markov renewal process ${({x_{n}},{\tau _{n}})_{n\ge 0}}$ has trajectories in a subspace ${\Omega ^{\ast }}$ consisting of functions that have at most one accumulation point of jump times. In other words, $\omega \in {\Omega ^{\ast }}$ if and only if there exists a non-decreasing sequence
\[ 0\le {t_{0}}\le {t_{1}}\le \cdots \le {t_{n}}\le \cdots \]
such that $\omega (t-)\ne \omega (t)$ if and only if $t\in \{{t_{n}}:n\ge 0\}$. For such $\omega \in {\Omega ^{\ast }}$, we define a non-decreasing sequence of jump times by ${\tau _{n}}(\omega )={t_{n}}$. For all other $\omega \in \Omega \setminus {\Omega ^{\ast }}$, we set ${\tau _{n}}(\omega )=0$. Since ${\Omega ^{\ast }}$ is Borel-measurable, this construction yields a non-decreasing sequence of stopping times with respect to the natural coordinate filtration.
To simplify notation, we adopt the following convention. Let ${({\xi _{n}})_{n\ge 0}}$ be a sequence of random variables and let $\tau \ge 0$ be an integer-valued random variable. We will write ${\xi _{\tau }}(\omega )$ for ${\xi _{\tau (\omega )}}(\omega )$.
Theorem 2.
Let ${\left({\zeta ^{u}}(t)\right)_{t\ge 0}}$, $u\in \{0,1,2,\dots \}$, be a sequence of semi-Markov processes with values in ${\mathbb{R}^{d}}$, and let $𝒫={\{{P^{u}}\}_{u\ge 0}}$ be the corresponding family of distributions on the Skorokhod space $(\Omega ,ℬ(\Omega ))$. Let $\{{\tau _{n}}(\omega ):n\ge 0\}$ be the non-decreasing sequence of jump times defined above, and define the corresponding value at jump time ${\tau _{n}}$ by
\[ {X_{n}}(\omega )=\omega ({\tau _{n}}(\omega )).\]
Assume the following conditions hold.
  • (i) For every $T\gt 0$,
    (4)
    \[ \underset{a\to \infty }{\lim }\underset{u}{\limsup }{P^{u}}\hspace{-0.1667em}\left(\left\{\omega \in \Omega :\underset{t\in [0,T]}{\sup }\big|\omega (t)\big|\ge a\right\}\right)=0.\]
  • (ii) Assume that for every non-negative $f\in {C_{0}^{\infty }}({\mathbb{R}^{d}})$ there exists a constant ${A_{f}}\ge 0$ such that the discrete-time process
    \[ {\big(f({X_{n}})+{A_{f}}{\tau _{n}}\big)_{n\ge 0}}\]
    is a non-negative submartingale with respect to the filtration ${𝒢_{n}}=\sigma ({X_{k}},{\tau _{k}}:0\le k\le n)$. Assume also that the choice of ${A_{f}}$ can be made so that it works for all translates of f.
  • (iii) For any n and t, denote the next jump after ${\tau _{n}}+t$ by
    (5)
    \[ {\hat{\tau }_{n}}(t)={\hat{\tau }_{n}}(t;\omega )=\underset{m\gt n}{\inf }\{{\tau _{m}}(\omega ):{\tau _{m}}(\omega )\gt {\tau _{n}}(\omega )+t\},\]
    and define the conditional expectation of the time between ${\tau _{n}}+t$ and the next jump by
    (6)
    \[ {d_{x}^{u}}(t)={E^{u}}\hspace{-0.1667em}\big[{\hat{\tau }_{n}}(t)-{\tau _{n}}-t\hspace{0.1667em}\big|\hspace{0.1667em}{X_{n}}=x\big],\hspace{1em}x\in {\mathbb{R}^{d}}.\]
    Assume that
    (7)
    \[ \underset{t\to 0}{\lim }\underset{u\to \infty }{\limsup }\underset{x\in {\mathbb{R}^{d}}}{\sup }{d_{x}^{u}}(t)=0.\]
Then the family 𝒫 is tight in $D[0,\infty )$.
Remark 1.
Note that ${d_{x}^{u}}(t)$ does not depend on n, since the pair $({X_{n}},{\tau _{n}})$ forms a time-homogeneous, discrete-time Markov chain, and ${\hat{\tau }_{n}}(t)-{\tau _{n}}$ is independent of ${\tau _{k}}$, $k\le n$, and depends only on the state ${X_{n}}$.
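To see what condition (iii) requires in the simplest setting, consider the following computation (our own illustration; the state-independent exponential rate ${\lambda _{u}}$ is an assumption, not a hypothesis of the theorem):

```latex
% Illustrative special case: suppose the u-th process has state-independent
% exponential holding times with rate \lambda_u.  The jump times then form a
% Poisson flow of rate \lambda_u, and by the memoryless property the overshoot
% beyond \tau_n + t is again exponential:
\[
  {\hat{\tau }_{n}}(t)-{\tau _{n}}-t \sim \operatorname{Exp}({\lambda _{u}})
  \quad\Longrightarrow\quad
  {d_{x}^{u}}(t)=\frac{1}{{\lambda _{u}}}
  \quad\text{for all } x \text{ and } t.
\]
% Hence the left-hand side of (7) equals \limsup_u 1/\lambda_u, and condition
% (iii) holds if and only if \lambda_u \to \infty: the jump frequency must
% grow along the family, exactly as discussed in the Introduction.
```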
Proof.
The proof follows the steps of the original proof of Theorem 1.4.6 from [4]. First, for a function $y\in \Omega =D[0,\infty )$ and $\delta \gt 0$, we define
\[ {w^{\prime }_{y}}(\delta ;T)=\underset{\{0={t_{0}}\lt {t_{1}}\lt \cdots \lt {t_{n}}=T\}}{\inf }\underset{1\le i\le n}{\max }\underset{s,t\in [{t_{i-1}},{t_{i}})}{\sup }|y(t)-y(s)|,\]
where the infimum is taken over all sets $\{{t_{0}},\dots ,{t_{n}}\}$ such that $0={t_{0}}\lt {t_{1}}\lt \cdots \lt {t_{n}}=T$ (n is arbitrary) and ${\min _{1\le i\lt n}}({t_{i}}-{t_{i-1}})\gt \delta $ (see [1], p. 171 for details).
From [1], Theorem 16.8, we know that the family of distributions 𝒫 is tight if and only if condition (4) holds, together with the following condition:
(8)
\[ \underset{\delta \to 0}{\lim }\underset{u}{\limsup }{P^{u}}\hspace{-0.1667em}\left(\left\{y\in \Omega :{w^{\prime }_{y}}(\delta ;T)\ge \rho \right\}\right)=0,\]
for all $\rho \gt 0$ and $T\gt 0$.
Thus, our goal is to prove (8). In what follows we assume that both T and ρ are fixed positive numbers. Following the proof of Theorem 1.4.6 from [4], we define for $\omega \in \Omega $:
(9)
\[ {s_{0}}=0,\hspace{2em}{s_{n}}(\omega )=\inf \hspace{-0.1667em}\left\{t\ge {s_{n-1}}(\omega ):\big|\omega (t)-\omega ({s_{n-1}})\big|\ge \rho /4\right\}.\]
Denote
\[ N=N(\omega )=\min \{n:{s_{n+1}}(\omega )\gt T\},\]
and
\[ {\Delta _{\rho }}(T;\omega )=\min \{{s_{n}}(\omega )-{s_{n-1}}(\omega ):1\le n\le N(\omega )\}.\]
Next, we make a crucial observation: each ${P^{u}}$ is concentrated on piecewise constant functions in $D[0,\infty )$, so the times ${s_{n}}$ necessarily coincide with some of the jump times ${\tau _{m}}$. Hence we can define ${\nu _{n}}(\omega )$ as the integer such that
\[ {s_{n}}(\omega )={\tau _{{\nu _{n}}}}(\omega ),\]
where we use the convention ${\tau _{{\nu _{n}}}}(\omega )={\tau _{{\nu _{n}}(\omega )}}(\omega )$ introduced at the beginning of this section. Note that ${\nu _{n}}$ is a stopping time with respect to the filtration ${({𝒢_{m}})_{m\ge 0}}$.
We then observe that
\[ {P^{u}}\hspace{-0.1667em}\left(\left\{y\in \Omega :{w^{\prime }_{y}}(\delta ,T)\ge \rho \right\}\right)\le {P^{u}}\hspace{-0.1667em}\left({\Delta _{\rho }}(T)\le \delta \right),\]
which follows directly from the definitions (the argument literally repeats the proof of Lemma 1.4.1 in [4]).
For each $\tilde{\omega }\in \Omega $, let ${Q_{\tilde{\omega }}^{u}}$ be a regular conditional probability,
\[ {Q_{\tilde{\omega }}^{u}}(\cdot )={P^{u}}(\cdot \mid {𝒢_{{\nu _{n}}}})(\tilde{\omega }),\]
whose existence is guaranteed by Theorems 1.1.6 and 1.1.8 in [4]. The corresponding expectation will be denoted by ${E_{\tilde{\omega }}^{u}}$, so that
(10)
\[ {E_{\tilde{\omega }}^{u}}[\xi ]={\int _{\Omega }}\xi (\omega )\hspace{0.1667em}{Q_{\tilde{\omega }}^{u}}(d\omega )={E^{u}}\hspace{-0.1667em}\left[\xi \hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{n}}}}\right](\tilde{\omega }).\]
Let us now choose $f\in {C_{0}^{\infty }}({\mathbb{R}^{d}})$ such that $f(0)=1$, $f(x)=0$ for $|x|\ge \rho /4$, and $0\le f\le 1$. Define
\[ \tilde{f}(x)=\tilde{f}(x,\tilde{\omega })=f\hspace{-0.1667em}\left(x-{X_{{\nu _{n}}}}(\tilde{\omega })\right),\]
that is, a random translation of f by ${X_{{\nu _{n}}}}(\tilde{\omega })$.
Let ${\gamma _{n,\delta }}$ be the index of the first jump after ${\tau _{{\nu _{n}}}}+\delta $, so that ${\hat{\tau }_{{\nu _{n}}}}(\delta )={\tau _{{\gamma _{n,\delta }}}}$, where ${\hat{\tau }_{n}}(t)$ was defined in (5). Note that ${\gamma _{n,\delta }}$ is also a stopping time with respect to the filtration ${\left({𝒢_{m}}\right)_{m\ge 0}}$.
Fix an arbitrary $q\in {\mathbb{Q}^{d}}$, a d-dimensional vector of rational coordinates. Put ${f_{q}}(x)=f(x-q)$ and note that ${A_{{f_{q}}}}={A_{f}}$ by the translation-invariance property of ${A_{f}}$ (see condition (ii)).
Define
\[ {\kappa _{n,\delta }}={\kappa _{n,\delta }}(\omega )={\nu _{n+1}}(\omega )\wedge {\gamma _{n,\delta }}(\omega ).\]
Note that ${\kappa _{n,\delta }}$ is a ${\left({𝒢_{m}}\right)_{m\ge 0}}$-stopping time and that ${\kappa _{n,\delta }}\ge {\nu _{n}}$ ${P^{u}}$-a.s. for all u. In what follows, we will write κ and γ to mean ${\kappa _{n,\delta }}$ and ${\gamma _{n,\delta }}$ respectively, as n and δ are fixed.
By Proposition IV.5.5 from [3] and condition (ii) of the theorem (namely, the submartingale property of the process $f({X_{n}})+{A_{f}}{\tau _{n}}$), we may conclude that there exists a null set ${F_{q}}\in ℬ(\Omega )$ such that for all ${\omega ^{\prime }}\notin {F_{q}}$,
(11)
\[ {E^{u}}\hspace{-0.1667em}\left[{f_{q}}\hspace{-0.1667em}\left({X_{\kappa }}\right)+{A_{f}}{\tau _{\kappa }}\hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{n}}}}\right]({\omega ^{\prime }})\ge {f_{q}}\hspace{-0.1667em}\left({X_{{\nu _{n}}}}({\omega ^{\prime }})\right)+{A_{f}}{\tau _{{\nu _{n}}}}({\omega ^{\prime }}).\]
Let us define the null set
\[ F=\bigcup \limits_{q}{F_{q}},\]
and fix an arbitrary $\tilde{\omega }\notin F$. For this fixed $\tilde{\omega }$ and arbitrary $\varepsilon \gt 0$, we can find a rational vector $\tilde{q}=\tilde{q}(\varepsilon ,\tilde{\omega })\in {\mathbb{Q}^{d}}$ such that
\[ \underset{x\in {\mathbb{R}^{d}}}{\sup }\big|\tilde{f}(x)-{f_{\tilde{q}}}(x)\big|=\underset{x\in {\mathbb{R}^{d}}}{\sup }\big|f\hspace{-0.1667em}\left(x-{X_{{\nu _{n}}}}(\tilde{\omega })\right)-f\hspace{-0.1667em}\left(x-\tilde{q}\right)\big|\lt \varepsilon .\]
This is possible because f has compact support and is therefore uniformly continuous. It is clear that $\tilde{q}(\cdot ,\varepsilon )$ is ${𝒢_{{\nu _{n}}}}$-measurable as a function of ω, since it is completely determined by ${X_{{\nu _{n}}}}(\omega )$.
Now, using (11), we obtain
\[\begin{aligned}{}{E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[\tilde{f}({X_{\kappa }})+{A_{f}}{\tau _{\kappa }}\right]& ={E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[\big(\tilde{f}({X_{\kappa }})-{f_{\tilde{q}}}({X_{\kappa }})\big)+{f_{\tilde{q}}}({X_{\kappa }})+{A_{f}}{\tau _{\kappa }}\right]\\ {} & \ge {E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[{f_{\tilde{q}}}({X_{\kappa }})+{A_{f}}{\tau _{\kappa }}\right]-\varepsilon \\ {} & \ge {f_{\tilde{q}}}\hspace{-0.1667em}\left({X_{{\nu _{n}}}}(\tilde{\omega })\right)+{A_{f}}{\tau _{{\nu _{n}}}}(\tilde{\omega })-\varepsilon \\ {} & \ge \tilde{f}\hspace{-0.1667em}\left({X_{{\nu _{n}}}}(\tilde{\omega })\right)+{A_{f}}{\tau _{{\nu _{n}}}}(\tilde{\omega })-2\varepsilon ,\end{aligned}\]
where ${E_{\tilde{\omega }}^{u}}$ is defined in (10).
Note that $\tilde{\omega }$ is fixed, so ${\nu _{n}}(\tilde{\omega })$ is a non-random integer. Also observe that $\tilde{f}({X_{{\nu _{n}}}}(\tilde{\omega }))=f(0)=1$. Finally, it is impossible that ${\tau _{{\nu _{n+1}}}}\in ({\tau _{{\nu _{n}}}}+\delta ,{\hat{\tau }_{{\nu _{n}}}}(\delta ))$, since ${\tau _{{\nu _{n+1}}}}$ is a jump time, and ${\hat{\tau }_{{\nu _{n}}}}(\delta )$ is the first jump time after ${\tau _{{\nu _{n}}}}+\delta $.
Thus, we have a.s.
\[ {E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[\tilde{f}({X_{\kappa }})+{A_{f}}\big({\tau _{\kappa }}-{\tau _{{\nu _{n}}}}(\tilde{\omega })\big)\right]\ge 1-2\varepsilon .\]
Hence,
\[\begin{aligned}{}{E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[1-\tilde{f}({X_{\kappa }})\right]& \le {E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[{A_{f}}\big({\tau _{\kappa }}-{\tau _{{\nu _{n}}}}(\tilde{\omega })\big)\right]+2\varepsilon \\ {} & \le {A_{f}}\hspace{-0.1667em}\left(\delta +{E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[{\hat{\tau }_{{\nu _{n}}}}(\delta )-{\tau _{{\nu _{n}}}}-\delta \right]\right)+2\varepsilon \\ {} & ={A_{f}}\delta +{A_{f}}{d_{{X_{{\nu _{n}}}}(\tilde{\omega })}^{u}}(\delta )+2\varepsilon \\ {} & \le {A_{f}}\hspace{-0.1667em}\left(\delta +\underset{x}{\sup }{d_{x}^{u}}(\delta )\right)+2\varepsilon ,\end{aligned}\]
where ${d_{x}^{u}}(\delta )$ is defined in (6), and we used the fact that
\[ {\tau _{\kappa }}-{\tau _{{\nu _{n}}}}={\tau _{{\nu _{n+1}}\wedge \gamma }}-{\tau _{{\nu _{n}}}}\le {\tau _{\gamma }}-{\tau _{{\nu _{n}}}}={\hat{\tau }_{{\nu _{n}}}}(\delta )-{\tau _{{\nu _{n}}}}.\]
Note that $0\le 1-\tilde{f}\le 1$.
Define
\[ {B_{\tilde{\omega }}}=\left\{\omega \in \Omega :{\tau _{{\nu _{n+1}}}}(\omega )\le {\tau _{{\nu _{n}}}}(\tilde{\omega })+\delta \right\}\in {𝒢_{{\nu _{n+1}}}}.\]
We know that for a regular conditional probability, ${Q_{\tilde{\omega }}^{u}}(C)=1$ for $C\in ℬ(\Omega )$ if and only if $\tilde{\omega }\in C$ (see [4], p. 16). Put
\[ {C_{\tilde{\omega }}}=\{\omega \in \Omega :{\nu _{n}}(\omega )={\nu _{n}}(\tilde{\omega })\},\]
and note that $\tilde{\omega }\in {C_{\tilde{\omega }}}$.
Using this fact and the definition of f, we obtain
\[\begin{aligned}{}{E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[\tilde{f}({X_{\kappa }}){𝟙_{{B_{\tilde{\omega }}}}}\right]& ={\int _{{B_{\tilde{\omega }}}}}f\hspace{-0.1667em}\left({X_{{\nu _{n+1}}}}(\omega )-{X_{{\nu _{n}}}}(\tilde{\omega })\right)\hspace{0.1667em}{Q_{\tilde{\omega }}^{u}}(d\omega )\\ {} & ={\int _{{B_{\tilde{\omega }}}\cap {C_{\tilde{\omega }}}}}f\hspace{-0.1667em}\left({X_{{\nu _{n+1}}}}(\omega )-{X_{{\nu _{n}}}}(\tilde{\omega })\right)\hspace{0.1667em}{Q_{\tilde{\omega }}^{u}}(d\omega )\\ {} & ={\int _{{B_{\tilde{\omega }}}\cap {C_{\tilde{\omega }}}}}f\hspace{-0.1667em}\left({X_{{\nu _{n+1}}}}(\omega )-{X_{{\nu _{n}}}}(\omega )\right)\hspace{0.1667em}{Q_{\tilde{\omega }}^{u}}(d\omega )=0,\end{aligned}\]
since
\[ \big|{X_{{\nu _{n+1}}}}(\omega )-{X_{{\nu _{n}}}}(\omega )\big|\ge \rho /4.\]
Thus, we obtain
\[\begin{aligned}{}{A_{f}}\hspace{-0.1667em}\left(\delta +\underset{x}{\sup }{d_{x}^{u}}(\delta )\right)+2\varepsilon & \ge {E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[(1-\tilde{f}({X_{\kappa }}))\big({𝟙_{{B_{\tilde{\omega }}}}}+{𝟙_{\Omega \setminus {B_{\tilde{\omega }}}}}\big)\right]\\ {} & ={Q_{\tilde{\omega }}^{u}}({B_{\tilde{\omega }}})+{E_{\tilde{\omega }}^{u}}\hspace{-0.1667em}\left[\big(1-\tilde{f}({X_{\gamma }})\big){𝟙_{\Omega \setminus {B_{\tilde{\omega }}}}}\right]\\ {} & \ge {Q_{\tilde{\omega }}^{u}}({B_{\tilde{\omega }}}),\end{aligned}\]
so we arrive at the inequality
(12)
\[ {A_{f}}\hspace{-0.1667em}\left(\delta +\underset{x}{\sup }{d_{x}^{u}}(\delta )\right)+2\varepsilon \ge {Q_{\tilde{\omega }}^{u}}({B_{\tilde{\omega }}})={P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{n+1}}}}\le {\tau _{{\nu _{n}}}}(\tilde{\omega })+\delta \hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{n}}}}\right)(\tilde{\omega }).\]
Next, for every $k\gt 0$ we can write
\[\begin{aligned}{}{P^{u}}\hspace{-0.1667em}\left(\{\omega :{\Delta _{\rho }}(T;\omega )\le \delta \}\right)& \le {P^{u}}\hspace{-0.1667em}\left(\underset{1\le i\le k}{\min }\big({\tau _{{\nu _{i}}}}-{\tau _{{\nu _{i-1}}}}\big)\le \delta \right)+{P^{u}}(N\gt k)\\ {} & \le {\sum \limits_{i=1}^{k}}{P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i}}}}-{\tau _{{\nu _{i-1}}}}\le \delta \right)+{P^{u}}(N\gt k)\\ {} & \le {\sum \limits_{i=1}^{k}}{E^{u}}\hspace{-0.1667em}\left[{P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i}}}}-{\tau _{{\nu _{i-1}}}}\le \delta \hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i-1}}}}\right)\right]+{P^{u}}(N\gt k)\\ {} & \le k\hspace{-0.1667em}\left[{A_{f}}\hspace{-0.1667em}\left(\delta +\underset{x}{\sup }{d_{x}^{u}}(\delta )\right)+2\varepsilon \right]+{P^{u}}(N\gt k),\end{aligned}\]
where we used inequality (12) and the fact that ${A_{f}}$ can be chosen so that it works for all translations of f.
Next, we write for any ${t_{0}}\gt 0$:
\[\begin{aligned}{}{E^{u}}\hspace{-0.1667em}\left[{e^{-({\tau _{{\nu _{i+1}}}}-{\tau _{{\nu _{i}}}})}}\hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i}}}}\right]& \le {P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i+1}}}}-{\tau _{{\nu _{i}}}}\le {t_{0}}\hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i}}}}\right)+{e^{-{t_{0}}}}{P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i+1}}}}-{\tau _{{\nu _{i}}}}\gt {t_{0}}\hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i}}}}\right)\\ {} & \le {e^{-{t_{0}}}}+(1-{e^{-{t_{0}}}})\hspace{0.1667em}{P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i+1}}}}-{\tau _{{\nu _{i}}}}\le {t_{0}}\hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i}}}}\right)\\ {} & \le {e^{-{t_{0}}}}+(1-{e^{-{t_{0}}}})\hspace{-0.1667em}\left[{A_{f}}\left({t_{0}}+\underset{x}{\sup }{d_{x}^{u}}({t_{0}})\right)+2\varepsilon \right],\end{aligned}\]
where we used (12).
From condition (iii) of the theorem we know that
\[ \underset{u\to \infty }{\limsup }\underset{x}{\sup }{d_{x}^{u}}({t_{0}})\to 0,\hspace{2em}{t_{0}}\to 0,\]
so we can choose ${u_{0}}$, ${t_{0}}$ and ε such that
\[ \lambda :={e^{-{t_{0}}}}+(1-{e^{-{t_{0}}}})\hspace{-0.1667em}\left[{A_{f}}\big({t_{0}}+\underset{x}{\sup }{d_{x}^{u}}({t_{0}})\big)+2\varepsilon \right]\lt 1,\]
for all $u\gt {u_{0}}$.
Next, by Lemma 1.4.5 from [4], we conclude that for $u\gt {u_{0}}$,
\[ {P^{u}}(N\gt k)\le {e^{T}}{\lambda ^{k}},\hspace{2em}k\ge 0.\]
Thus, in order to prove the theorem, it remains to show that
\[ \underset{\delta \to 0}{\lim }\underset{u\to \infty }{\limsup }{P^{u}}\hspace{-0.1667em}\left(\{\omega :{\Delta _{\rho }^{u}}(T;\omega )\le \delta \}\right)=0,\]
which, by the comparison with ${w^{\prime }}$ established above, implies condition (8).
Indeed, we have
\[\begin{aligned}{}\underset{\delta \to 0}{\lim }\underset{u\to \infty }{\limsup }& {P^{u}}\hspace{-0.1667em}\left(\{\omega :{\Delta _{\rho }^{u}}(T;\omega )\lt \delta \}\right)\\ {} & \le \underset{\delta \to 0}{\lim }\underset{u\to \infty }{\limsup }\Big(k\big[{A_{f}}(\delta +\underset{x}{\sup }{d_{x}^{u}}(\delta ))+2\varepsilon \big]+{P^{u}}(N\gt k)\Big)\\ {} & \le 2k\varepsilon +{e^{T}}{\lambda ^{k}},\end{aligned}\]
where in the last step we used condition (iii).
Now, we set $\varepsilon \to 0$ and then $k\to \infty $. Thus, condition (8) holds, and the theorem is proven.  □
Remark 2.
Condition (iii) can be replaced with a somewhat opposite assumption: Condition (iv). Assume that there exists a real number $a\gt 0$ such that for every $x\in {\mathbb{R}^{d}}$,
(13)
\[ \underset{u}{\limsup }{P^{u}}\hspace{-0.1667em}\left({\tau _{n+1}}-{\tau _{n}}\lt a\hspace{0.1667em}|\hspace{0.1667em}{X_{n}}=x\right)=0.\]
This condition means that asymptotically there are no jumps on the interval $({\tau _{n}},{\tau _{n}}+a)$. The proof of tightness is trivial in this case, since equation (8) takes the form (here we follow the notation from the proof)
\[\begin{aligned}{}\underset{\delta \to 0}{\lim }\underset{u\to \infty }{\limsup }& {P^{u}}\hspace{-0.1667em}\left(\{\omega :{\Delta _{\rho }^{u}}(T;\omega )\le \delta \}\right)\\ {} & \le \underset{\delta \to 0}{\lim }\underset{u\to \infty }{\limsup }\left({\sum \limits_{i=1}^{k}}{E^{u}}\hspace{-0.1667em}\left[{P^{u}}\hspace{-0.1667em}\left({\tau _{{\nu _{i}}}}-{\tau _{{\nu _{i-1}}}}\le \delta \hspace{0.1667em}|\hspace{0.1667em}{𝒢_{{\nu _{i-1}}}}\right)\right]+{P^{u}}(N\gt k)\right)\\ {} & =\underset{u\to \infty }{\limsup }{P^{u}}(N\gt k)\hspace{2.83862pt}\longrightarrow \hspace{2.83862pt}0,\hspace{2em}k\to \infty .\end{aligned}\]

3 Discussion and examples

In the previous section we saw that each of Conditions (iii) or (iv) guarantees tightness (assuming the submartingale conditions (i) and (ii)). At the same time, it is clear that neither Condition (iii) nor Condition (iv) is sufficient on its own. The following example shows two things:
  • These conditions cannot be omitted entirely;
  • Tightness fails in situations where Conditions (iii) and (iv) are mixed in some sense.
Consider the sequence of semi-Markov processes ${\zeta _{n}}(t)$, $n\in \{1,2,3,\dots \}$. Fix n. Let ${\zeta _{n}}(t)$ start at 0 and make a jump to 1 at one of two times, $1/n$ or 1, each with probability $1/2$. After that, ${\zeta _{n}}(t)$ remains equal to 1 forever.
To show that tightness does not hold, we use the same criterion from [1], Theorem 16.8, that was applied in the proof of Theorem 2. Theorem 16.8 states that a family of probability measures is tight if and only if both conditions (4) and (8) hold. We will show that condition (8) fails in this case.
Recall the definition of ${w^{\prime }}$ from the proof of Theorem 2. For $y\in D[0,\infty )$ and $\delta \gt 0$ define
\[ {w^{\prime }_{y}}(\delta ;T)=\underset{\{0={t_{0}}\lt {t_{1}}\lt \cdots \lt {t_{n}}=T\}}{\inf }\underset{1\le i\le n}{\max }\underset{s,t\in [{t_{i-1}},{t_{i}})}{\sup }|y(t)-y(s)|,\]
where the infimum is taken over all partitions $\{{t_{0}},\dots ,{t_{n}}\}$ such that $0={t_{0}}\lt {t_{1}}\lt \cdots \lt {t_{n}}=T$ (n arbitrary) and ${\min _{1\le i\lt n}}({t_{i}}-{t_{i-1}})\gt \delta $.
Our goal is to show that
\[ \underset{\delta \to 0}{\lim }\underset{n\to \infty }{\limsup }{P^{n}}\hspace{-0.1667em}\left(\left\{y\in \Omega :{w^{\prime }_{y}}(\delta ;T)\gt \rho \right\}\right)\gt 0.\]
Indeed, fix arbitrary $\rho \in (0,1)$ and $\delta \gt 0$, and choose n large enough so that $1/n\lt \delta $. With probability $1/2$ the jump occurs at time $1/n$; since any admissible partition has ${t_{1}}\gt \delta \gt 1/n$, the first interval $[0,{t_{1}})$ then contains the jump, so ${w^{\prime }_{y}}(\delta ;T)=1\gt \rho $. Hence
\[ \underset{n\to \infty }{\limsup }{P^{n}}\hspace{-0.1667em}\left(\left\{y\in \Omega :{w^{\prime }_{y}}(\delta ;T)\gt \rho \right\}\right)=\frac{1}{2}.\]
On the other hand, the submartingale property does hold. It suffices to check it at the first jump:
\[ f(0)\le {E^{n}}[f(1)+{A_{f}}{\tau _{1}}]=f(1)+\frac{1}{2}\big({A_{f}}/n+{A_{f}}\big),\]
which holds, uniformly in n and for all translations of f, with ${A_{f}}=4\sup |f|$.
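The two-point jump construction can be evaluated exactly in a few lines. The closed form for ${w^{\prime }}$ of a single-jump step path used below is our own elementary observation, and $T=2$ is an arbitrary illustrative choice:

```python
# Sketch of the counterexample: zeta_n jumps from 0 to 1 at time 1/n or 1,
# each with probability 1/2, and stays at 1 afterwards.  For a single-jump
# step path on [0, T] the modulus w'_y(delta; T) equals 1 when the jump time
# s satisfies s <= delta (every partition with mesh > delta traps the jump in
# its first interval [0, t_1)), and 0 otherwise (take t_1 = s).
def w_prime_single_jump(s, delta, T=2.0):
    return 1.0 if s <= delta else 0.0

def prob_w_prime_exceeds(n, delta, rho):
    # P^n(w' > rho): average over the two equally likely jump times 1/n and 1
    jump_times = [1.0 / n, 1.0]
    return sum(1 for s in jump_times
               if w_prime_single_jump(s, delta) > rho) / len(jump_times)

rho, delta = 0.5, 0.1
# once 1/n < delta, the probability is exactly 1/2, so the limsup over n is 1/2
print([prob_w_prime_exceeds(n, delta, rho) for n in (2, 5, 20, 100)])
# -> [0.0, 0.0, 0.5, 0.5]
```

This makes the failure of (8) concrete: no matter how small δ is, half of the trajectories eventually jump inside $[0,\delta ]$.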
Next, we demonstrate how Condition (iii) can be verified in a situation where a family of semi-Markov processes is obtained via a time-scale and space-scale transformation of a single process, which is typical in diffusion and averaging approximations.

4 Space-time scaled semi-Markov processes

Assume the semi-Markov process $x(t)$ is given in the sense of Definition 3, with associated Markov renewal process ${({x_{n}},{\tau _{n}})_{n\ge 0}}$ and an associated family of holding-time distributions $\{{F_{x}}:x\in {\mathbb{R}^{d}}\}$ as in (2). Let ${\theta _{n}}={\tau _{n+1}}-{\tau _{n}}$. Note that, by (2), ${\theta _{n}}$ depends only on ${x_{n}}$ and does not depend on n or ${\tau _{n}}$.
We then generate a family of processes ${({\zeta ^{n}}(t))_{n\ge 1}}$ defined by
\[ {\zeta ^{n}}(t)=\frac{x({a_{n}}t)}{{b_{n}}},\hspace{2em}t\ge 0,\]
where ${\{{a_{n}}\}_{n\ge 1}}$ and ${\{{b_{n}}\}_{n\ge 1}}$ are two increasing sequences of positive numbers such that ${a_{n}}\wedge {b_{n}}\to \infty $ as $n\to \infty $. Typical examples are ${b_{n}}=n$ and ${a_{n}}={n^{2}}$ for diffusion schemes, or ${a_{n}}=n$ for averaging schemes (see [2] for details). We denote by $({X_{j}^{n}},{\tau _{j}^{n}})$ a Markov renewal process that is associated with the semi-Markov process ${({\zeta ^{n}}(t))_{t\ge 0}}$.
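The scaling acts on the renewal pairs coordinatewise: $({X_{j}^{n}},{\tau _{j}^{n}})=({x_{j}}/{b_{n}},{\tau _{j}}/{a_{n}})$, since ${\zeta ^{n}}$ jumps exactly when ${a_{n}}t$ passes some ${\tau _{j}}$. A minimal sketch (the numeric trajectory below is made up for illustration):

```python
# Space-time scaling of a Markov renewal trajectory: given (x_j, tau_j),
# the scaled process zeta_n(t) = x(a_n * t) / b_n has renewal pairs
# (X_j^n, tau_j^n) = (x_j / b_n, tau_j / a_n).
def scale_renewal(xs, taus, a_n, b_n):
    return [x / b_n for x in xs], [tau / a_n for tau in taus]

# diffusion-type scaling b_n = n, a_n = n**2 with n = 2 (cf. [2]);
# the trajectory values are hypothetical
xs, taus = [0.0, 1.0, -1.0], [0.0, 0.5, 1.75]
Xs, Ts = scale_renewal(xs, taus, a_n=4.0, b_n=2.0)
print(Xs, Ts)  # -> [0.0, 0.5, -0.5] [0.0, 0.125, 0.4375]
```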
It follows from the construction that all processes ${(x(t))_{t\ge 0}}$, ${({x_{j}},\hspace{-0.1667em}{\tau _{j}})_{j\ge 0}}$, ${({\zeta ^{n}}(t))_{t\ge 0}}$, and ${({X_{j}^{n}},{\tau _{j}^{n}})_{j\ge 0}}$ are defined on the same probability space. The next theorem gives a condition that allows one to verify condition (iii).
Theorem 3.
Let ${({\zeta ^{n}})_{n\ge 1}}$ be the family of space-time-scaled semi-Markov processes defined above. Assume the following condition holds:
(14)
\[ \underset{t\to 0}{\lim }\underset{n\to \infty }{\limsup }\frac{1}{{a_{n}}}\underset{x}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{x}}(r)}{{\bar{F}_{x}}({a_{n}}t)}\hspace{0.1667em}dr=0,\]
where ${\bar{F}_{x}}=1-{F_{x}}$ is the tail distribution function associated with ${F_{x}}$. Then condition (iii) of Theorem 2 holds.
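Condition (14) can be checked numerically in a hypothetical special case: if ${\bar{F}_{x}}(r)={e^{-\lambda (x)r}}$ with ${\inf _{x}}\lambda (x)\gt 0$ (an exponential-tail assumption of ours, not required by the theorem), the inner integral equals $1/\lambda (x)$ for every cutoff, so the whole expression in (14) is of order $1/{a_{n}}$ and vanishes as $n\to \infty $.

```python
import math

# Numerical illustration of condition (14) for exponential tails
# F_bar_x(r) = exp(-lam(x) * r): the inner integral
#   \int_{a_n t}^infty F_bar_x(r) / F_bar_x(a_n t) dr
# equals 1/lam(x) regardless of the cutoff a_n * t, so the full expression
# reduces to sup_x (1 / lam(x)) / a_n -> 0 as a_n -> infinity.
def inner_integral(lam, lower, h=1e-3, r_max=60.0):
    # Riemann-sum approximation of \int_{lower}^infty e^{-lam r} / e^{-lam lower} dr
    return sum(math.exp(-lam * r) / math.exp(-lam * lower) * h
               for r in (lower + i * h for i in range(int((r_max - lower) / h))))

lam = 0.5
for lower in (0.0, 1.0, 7.0):
    print(round(inner_integral(lam, lower), 2))  # each close to 1/lam = 2.0
```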
Proof.
Let ${P_{x,s}}(\mathrm{d}y,\mathrm{d}t)$ be the transition probability of the Markov chain ${({x_{n}},{\tau _{n}})_{n\ge 0}}$, and let ${P_{x,s}^{j}}(\mathrm{d}y,\mathrm{d}t)$ denote the corresponding j-step transition probability.
Consider the measurable space $(\Omega ,ℱ)$, where
\[ \Omega =\big({\mathbb{R}^{d}}\times [0,\infty )\big)^{\infty }\]
is a countable product space and ℱ is the corresponding cylinder σ-field. Let
\[ \omega =\big(({\omega _{00}},{\omega _{01}}),({\omega _{10}},{\omega _{11}}),\dots \big)\in \Omega .\]
Define the random variables
\[ {X_{j}}(\omega )={\omega _{j0}},\hspace{2em}{\tau _{j}}(\omega )={\omega _{j1}},\hspace{2em}{X_{j}^{n}}=\frac{{X_{j}}}{{b_{n}}},\hspace{2em}{\tau _{j}^{n}}=\frac{{\tau _{j}}}{{a_{n}}}.\]
For every pair $(x,s)\in {\mathbb{R}^{d}}\times [0,\infty )$ we may define a probability measure ${\mathbb{P}_{x,s}}$ on $(\Omega ,ℱ)$ such that the finite-dimensional distributions of the sequence ${({X_{j}},{\tau _{j}})_{j\ge 0}}$ coincide with those of the original sequence ${({x_{j}},{\tau _{j}})_{j\ge 0}}$ starting at $(x,s)$. The same is true for the scaled sequence ${({X_{j}^{n}},{\tau _{j}^{n}})_{j\ge 0}}$. In fact, not only the finite-dimensional distributions coincide, but also the distributions of the entire sequences as random elements of $(\Omega ,ℱ)$; however, for our purposes it suffices to work with finite-dimensional distributions.
Condition (iii) may then be restated as
(15)
\[ \underset{n}{\limsup }\underset{x,s}{\sup }{\mathbb{E}_{x,s}}\left[{\hat{\tau }_{0}^{n}}(t)-s-t\right]\to 0,\hspace{1em}t\to 0.\]
Moreover, it is enough to prove the convergence for $s=0$
(16)
\[ \underset{n}{\limsup }\underset{x}{\sup }{\mathbb{E}_{x,0}}\left[{\hat{\tau }_{0}^{n}}(t)-t\right]\to 0,\hspace{1em}t\to 0,\]
where ${\hat{\tau }_{0}^{n}}(t)={\inf _{m\gt 0}}\{{\tau _{m}^{n}}:{\tau _{m}^{n}}\gt {\tau _{0}^{n}}+t\}$.
In what follows we will simplify the notation by denoting
\[ {\mathbb{E}_{x}}={\mathbb{E}_{x,0}},\hspace{3.33333pt}{d_{x}^{n}}(t)={\mathbb{E}_{x}}\left[{\hat{\tau }_{0}^{n}}(t)-t\right].\]
We can write
\[\begin{aligned}{}{d_{x}^{n}}(t)& ={\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\hat{\tau }_{0}^{n}}(t)-t\right]={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\hat{\tau }_{0}^{n}}(t)-t;\hspace{0.1667em}{\tau _{j}^{n}}\le t\lt {\tau _{j+1}^{n}}\right]\\ {} & ={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\tau _{j+1}^{n}}-t;\hspace{0.1667em}{\tau _{j}^{n}}\le t\lt {\tau _{j+1}^{n}}\right]\\ {} & ={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\tau _{j+1}^{n}}-t;\hspace{0.1667em}{\tau _{j}^{n}}\le t\lt {\tau _{j+1}^{n}}\hspace{0.1667em}|\hspace{0.1667em}{X_{j}^{n}},{\tau _{j}^{n}}\right]\right]\\ {} & ={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\theta _{j}^{n}}-(t-{\tau _{j}^{n}});\hspace{0.1667em}{\theta _{j}^{n}}\gt (t-{\tau _{j}^{n}})\hspace{0.1667em}|\hspace{0.1667em}{X_{j}^{n}},{\tau _{j}^{n}}\right]{𝟙_{\{{\tau _{j}^{n}}\le t\}}}\right],\end{aligned}\]
where we used the relation ${\theta _{j}^{n}}={\tau _{j+1}^{n}}-{\tau _{j}^{n}}$. Thus, by the Markov property,
\[\begin{aligned}{}{d_{x}^{n}}(t)& ={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{E}_{{X_{j}^{n}},{\tau _{j}^{n}}}}\hspace{-0.1667em}\left[{\theta _{0}^{n}}-(t-{\tau _{j}^{n}});\hspace{0.1667em}{\theta _{0}^{n}}\gt t-{\tau _{j}^{n}}\right]{𝟙_{\{{\tau _{j}^{n}}\le t\}}}\right]\\ {} & ={\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{E}_{{X_{j}^{n}},{\tau _{j}^{n}}}}\hspace{-0.1667em}\left[\frac{{\theta _{0}}}{{a_{n}}}-\hspace{-0.1667em}\left(t-\frac{{\tau _{j}}}{{a_{n}}}\right);\hspace{0.1667em}\frac{{\theta _{0}}}{{a_{n}}}\gt \hspace{-0.1667em}\left(t-\frac{{\tau _{j}}}{{a_{n}}}\right)\right]{𝟙_{\{{\tau _{j}}\le {a_{n}}t\}}}\right]\\ {} & =\frac{1}{{a_{n}}}{\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{E}_{{X_{j}^{n}},{\tau _{j}^{n}}}}\hspace{-0.1667em}\left[{\theta _{0}}-({a_{n}}t-{\tau _{j}});\hspace{0.1667em}{\theta _{0}}\gt {a_{n}}t-{\tau _{j}}\right]{𝟙_{\{{\tau _{j}}\le {a_{n}}t\}}}\right]\\ {} & =\frac{1}{{a_{n}}}{\sum \limits_{j=0}^{\infty }}{\int _{{\mathbb{R}^{d}}}}{\int _{0}^{{a_{n}}t}}{\mathbb{E}_{y/{b_{n}}}}\hspace{-0.1667em}\left[{\theta _{0}}-({a_{n}}t-s);\hspace{0.1667em}{\theta _{0}}\gt {a_{n}}t-s\right]{P_{x,0}^{j}}(dy,ds),\end{aligned}\]
where we used the equalities ${X_{j}^{n}}={X_{j}}/{b_{n}}$ and ${\tau _{j}^{n}}={\tau _{j}}/{a_{n}}$, together with the fact that ${\theta _{0}}$ does not depend on ${\tau _{0}}$. Hence we may omit the second subscript and write ${\mathbb{E}_{y/{b_{n}}}}$ instead of ${\mathbb{E}_{y/{b_{n}},s/{a_{n}}}}$.
Finally, since ${\theta _{0}}$ is a nonnegative random variable, for every $z\in {\mathbb{R}^{d}}$
\[\begin{aligned}{}{\mathbb{E}_{z}}\hspace{-0.1667em}& \left[{\theta _{0}}-({a_{n}}t-s);\hspace{3.33333pt}{\theta _{0}}\gt {a_{n}}t-s\right]\\ {} & =\left({\int _{0}^{\infty }}{\mathbb{P}_{z}}\hspace{-0.1667em}\left({\theta _{0}}\gt r+({a_{n}}t-s)\hspace{0.1667em}|\hspace{0.1667em}{\theta _{0}}\gt {a_{n}}t-s\right)dr\right){\mathbb{P}_{z}}({\theta _{0}}\gt {a_{n}}t-s)\\ {} & =\left({\int _{0}^{\infty }}\frac{1-{F_{z}}(r+({a_{n}}t-s))}{1-{F_{z}}({a_{n}}t-s)}\hspace{0.1667em}dr\right)\big(1-{F_{z}}({a_{n}}t-s)\big)\\ {} & =\left({\int _{0}^{\infty }}\frac{{\bar{F}_{z}}(r+({a_{n}}t-s))}{{\bar{F}_{z}}({a_{n}}t-s)}\hspace{0.1667em}dr\right){\bar{F}_{z}}({a_{n}}t-s).\end{aligned}\]
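The computation above is the standard identity ${\mathbb{E}_{z}}\left[{({\theta _{0}}-u)^{+}}\right]={\int _{u}^{\infty }}{\bar{F}_{z}}(r)\,dr$ with $u={a_{n}}t-s$. As a quick numerical sanity check outside the proof, the identity can be verified by Monte Carlo for an exponential holding time, where the right-hand side has the closed form ${e^{-\alpha u}}/\alpha $ (a sketch; the helper name and all parameters are illustrative):

```python
import math
import random

def check_tail_identity(alpha=1.0, u=0.5, n_samples=400_000, seed=2):
    """Monte Carlo check of E[(theta - u)^+] = int_u^inf Fbar(r) dr
    for theta ~ Exp(alpha); the right-hand side equals
    exp(-alpha*u)/alpha in closed form."""
    rng = random.Random(seed)
    est = sum(max(rng.expovariate(alpha) - u, 0.0)
              for _ in range(n_samples)) / n_samples
    exact = math.exp(-alpha * u) / alpha
    return est, exact

est, exact = check_tail_identity()
print(est, exact)  # the two values should nearly agree
```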
Thus, we can continue
\[\begin{aligned}{}{d_{x}^{n}}(t)& =\frac{1}{{a_{n}}}{\sum \limits_{j=0}^{\infty }}{\int _{{\mathbb{R}^{d}}}}{\int _{0}^{{a_{n}}t}}{\int _{0}^{\infty }}\frac{{\bar{F}_{y/{b_{n}}}}(r+{a_{n}}t-s)}{{\bar{F}_{y/{b_{n}}}}({a_{n}}t-s)}\hspace{0.1667em}dr\hspace{0.1667em}{\bar{F}_{y/{b_{n}}}}({a_{n}}t-s)\hspace{0.1667em}{P_{x,0}^{j}}(dy,ds)\\ {} & =\frac{1}{{a_{n}}}{\sum \limits_{j=0}^{\infty }}{\int _{{\mathbb{R}^{d}}}}{\int _{0}^{{a_{n}}t}}{\int _{{a_{n}}t-s}^{\infty }}\frac{{\bar{F}_{y/{b_{n}}}}(r)}{{\bar{F}_{y/{b_{n}}}}({a_{n}}t-s)}\hspace{0.1667em}dr\hspace{0.1667em}{\bar{F}_{y/{b_{n}}}}({a_{n}}t-s)\hspace{0.1667em}{P_{x,0}^{j}}(dy,ds)\\ {} & \le \left(\frac{1}{{a_{n}}}\underset{y}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{y}}(r)}{{\bar{F}_{y}}({a_{n}}t)}\hspace{0.1667em}dr\right){\sum \limits_{j=0}^{\infty }}{\int _{{\mathbb{R}^{d}}}}{\int _{0}^{{a_{n}}t}}{\bar{F}_{y/{b_{n}}}}({a_{n}}t-s)\hspace{0.1667em}{P_{x,0}^{j}}(dy,ds)\\ {} & =\left(\frac{1}{{a_{n}}}\underset{y}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{y}}(r)}{{\bar{F}_{y}}({a_{n}}t)}\hspace{0.1667em}dr\right){\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{P}_{{X_{j}},{\tau _{j}}}}({\theta _{0}}\gt {a_{n}}t-{\tau _{j}})\hspace{0.1667em}{𝟙_{\{{\tau _{j}}\le {a_{n}}t\}}}\right]\\ {} & =\left(\frac{1}{{a_{n}}}\underset{y}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{y}}(r)}{{\bar{F}_{y}}({a_{n}}t)}\hspace{0.1667em}dr\right){\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{P}_{x}}({\tau _{j+1}}\gt {a_{n}}t\mid {X_{j}},{\tau _{j}})\hspace{0.1667em}{𝟙_{\{{\tau _{j}}\le {a_{n}}t\}}}\right]\\ {} & =\left(\frac{1}{{a_{n}}}\underset{y}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{y}}(r)}{{\bar{F}_{y}}({a_{n}}t)}\hspace{0.1667em}dr\right){\sum \limits_{j=0}^{\infty }}{\mathbb{E}_{x}}\hspace{-0.1667em}\left[{\mathbb{P}_{x}}({\tau _{j}}\le {a_{n}}t\lt {\tau _{j+1}})\right]\\ {} & =\frac{1}{{a_{n}}}\underset{y}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{y}}(r)}{{\bar{F}_{y}}({a_{n}}t)}\hspace{0.1667em}dr.\end{aligned}\]
Thus,
\[ \underset{t\to 0}{\lim }\underset{n\to \infty }{\limsup }\underset{x}{\sup }{d_{x}^{n}}(t)=\underset{t\to 0}{\lim }\underset{n\to \infty }{\limsup }\frac{1}{{a_{n}}}\underset{x}{\sup }{\int _{{a_{n}}t}^{\infty }}\frac{{\bar{F}_{x}}(r)}{{\bar{F}_{x}}({a_{n}}t)}\hspace{0.1667em}dr=0,\]
by condition (14).  □
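To illustrate the quantity ${d_{x}^{n}}(t)$ just bounded, consider i.i.d. exponential holding times ${F_{x}}=F=\mathrm{Exp}(\alpha )$, so that the scaled holding times are $\mathrm{Exp}(\alpha {a_{n}})$. By memorylessness the overshoot of the first scaled jump time past t is again $\mathrm{Exp}(\alpha {a_{n}})$, hence ${d_{x}^{n}}(t)=1/(\alpha {a_{n}})$, attaining the bound above with equality. A minimal Monte Carlo sketch (the function name, rate, scaling, and sample size are illustrative choices, not from the paper):

```python
import random

def overshoot_mc(alpha=2.0, a_n=5.0, t=0.4, n_paths=200_000, seed=1):
    """Estimate d^n(t) = E[tau_hat_0^n(t) - t] for i.i.d. Exp(alpha)
    holding times scaled by a_n, i.e. scaled holding times are
    Exp(alpha * a_n); tau_hat_0^n(t) is the first scaled jump time
    strictly greater than t (starting from s = 0)."""
    rng = random.Random(seed)
    rate = alpha * a_n
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s <= t:                   # walk until a jump passes t
            s += rng.expovariate(rate)  # next scaled holding time
        total += s - t                  # overshoot beyond t
    return total / n_paths

# By memorylessness the overshoot is Exp(alpha * a_n), so the
# estimate should be close to 1/(alpha * a_n) = 0.1.
print(overshoot_mc())
```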
Remark 3.
As an example, consider the case when ${F_{x}}=F$ does not depend on x (so that all holding times are identically distributed). In this situation, condition (14) holds if the tail of F decays exponentially or polynomially (with exponent $\alpha \gt 1$).
Indeed, if
\[ {c_{1}}{e^{-\alpha u}}\le \bar{F}(u)\le {c_{2}}{e^{-\alpha u}},\]
for some constants ${c_{1}},{c_{2}}\gt 0$ and $\alpha \gt 0$, then
\[ \frac{1}{{a_{n}}}{\int _{{a_{n}}t}^{\infty }}\frac{\bar{F}(r)}{\bar{F}({a_{n}}t)}\hspace{0.1667em}dr\le \frac{{c_{2}}}{{c_{1}}}\frac{1}{{a_{n}}}{\int _{{a_{n}}t}^{\infty }}{e^{-\alpha r+\alpha {a_{n}}t}}\hspace{0.1667em}dr=\frac{{c_{2}}}{\alpha {c_{1}}}\frac{1}{{a_{n}}}\to 0,\hspace{1em}n\to \infty .\]
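For the pure exponential tail $\bar{F}(u)={e^{-\alpha u}}$ (the case ${c_{1}}={c_{2}}=1$) the scaled integral equals $1/(\alpha {a_{n}})$ exactly, independently of t, which can be checked by direct quadrature (a sketch; the function name, truncation length, and grid size are ad hoc choices):

```python
import math

def exp_tail_integral(alpha=2.0, a_n=10.0, t=0.3, length=40.0, steps=200_000):
    """Midpoint Riemann sum for
       (1/a_n) * int_{a_n t}^inf exp(-alpha r) / exp(-alpha a_n t) dr,
    truncated `length` units past the lower limit; after substituting
    u = r - a_n t this is (1/a_n) * int_0^length exp(-alpha u) du."""
    h = length / steps
    total = sum(math.exp(-alpha * ((k + 0.5) * h)) * h for k in range(steps))
    return total / a_n

# Exact value is 1/(alpha * a_n) = 0.05, independent of t.
print(exp_tail_integral())
```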
If instead, for some constants ${c_{1}},{c_{2}}\gt 0$ and $\alpha \gt 1$,
\[ {c_{1}}{u^{-\alpha }}\le \bar{F}(u)\le {c_{2}}{u^{-\alpha }},\]
then
\[ \frac{1}{{a_{n}}}{\int _{{a_{n}}t}^{\infty }}\frac{\bar{F}(r)}{\bar{F}({a_{n}}t)}\hspace{0.1667em}dr\le \frac{{c_{2}}{({a_{n}}t)^{\alpha }}}{{c_{1}}{a_{n}}}{\int _{{a_{n}}t}^{\infty }}{r^{-\alpha }}dr=\frac{{c_{2}}}{(\alpha -1){c_{1}}}t\to 0,\hspace{1em}t\to 0.\]
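Similarly, for the pure polynomial tail $\bar{F}(u)={u^{-\alpha }}$ the scaled integral equals $t/(\alpha -1)$ exactly, independently of ${a_{n}}$, and vanishes as $t\to 0$. The substitution $r={a_{n}}t/v$ maps the integral to $t{\int _{0}^{1}}{v^{\alpha -2}}\,dv$, which is easy to evaluate numerically (a sketch with illustrative parameters; the midpoint rule is reliable here for $\alpha \ge 2$):

```python
def poly_tail_integral(alpha=3.0, a_n=5.0, t=0.2, steps=100_000):
    """Evaluate (1/a_n) * int_{a_n t}^inf (r/(a_n t))**(-alpha) dr.
    Substituting r = a_n * t / v maps it to t * int_0^1 v**(alpha-2) dv,
    computed here by a midpoint sum."""
    h = 1.0 / steps
    total = sum(((k + 0.5) * h) ** (alpha - 2) * h for k in range(steps))
    return t * total

# Exact value is t/(alpha - 1) = 0.1, independent of a_n.
print(poly_tail_integral())
```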

References

[1] Billingsley, P.: Convergence of Probability Measures, 2nd edn. John Wiley & Sons, Inc., New York (1999) MR1700749. https://doi.org/10.1002/9780470316962
[2] Koroliuk, V., Limnios, N.: Stochastic Systems in Merging Phase Space. World Scientific Publishing Co. Pte. Ltd., Singapore (2005) MR2205562. https://doi.org/10.1142/9789812703125
[3] Neveu, J.: Mathematical Foundations of the Calculus of Probability. Holden-Day, San Francisco (1965) MR0198505
[4] Stroock, D.W., Varadhan, S.R.S.: Multidimensional Diffusion Processes. Springer, Berlin Heidelberg (2006) MR2190038