Random time-changes and asymptotic results for a class of continuous-time Markov chains on integers with alternating rates

We consider continuous-time Markov chains on integers which allow transitions to adjacent states only, with alternating rates. Processes of this kind are useful in the study of chain molecular diffusions. We give explicit formulas for the probability generating functions, and also for the means, variances and state probabilities of the random variables of the process. Moreover we study independent random time-changes with the inverse of the stable subordinator, the stable subordinator and the tempered stable subordinator. We also present some asymptotic results in the fashion of large deviations. These results generalize those presented in [Journal of Statistical Physics 154 (2014), 1352–1364].


Introduction
Random walks in continuous time are widely employed in several fields of both theoretical and applied interest. In this paper we consider a class of continuous-time Markov chains on integers, called the basic model, which can have transitions to adjacent states only, with alternating transition rates; namely we assume that all odd states share the same transition rates, and likewise all even states. We also consider some independent random time-changes of the basic model.
Markov chains with alternating rates are useful in the study of chain molecular diffusions. We recall the paper [31], where a molecule is modeled as a freely-jointed chain of two regularly alternating kinds of atoms, which have alternating jump rates. Another reference is [6], where a simple birth-death process with alternating rates has been studied as a model for an infinitely long chain of atoms joined by links which are subject to random alternating shocks. Recent results on the transient probabilities of such a model, also in the presence of suitable reflecting or absorbing states, are provided in [32,33] and [34].
In this paper we also consider independent random time-changes of the basic model, which provide more flexible versions of the chemical models in the references cited above. More precisely we consider the inverse of the stable subordinator or, alternatively, the (possibly tempered) stable subordinator. In the first case the particle is subject to a sort of trapping and delaying effect; in the second case, by contrast, we allow positive jumps in the time argument, which produces a possible rushing effect.
We start with a more rigorous presentation of the basic model in terms of its generator. In general we consider a continuous-time Markov chain {X(t) : t ≥ 0} on Z (where Z is the set of integers), and we consider the state probabilities

p_{k,n}(t) := P(X(t) = n | X(0) = k), (1)

which satisfy the condition p_{k,n}(0) = 1_{{k=n}}; the generator G = (g_{k,n})_{k,n∈Z} of {X(t) : t ≥ 0} is defined by

g_{k,n} := lim_{t→0} (p_{k,n}(t) − p_{k,n}(0))/t.
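As a concrete illustration of such a chain, the following sketch simulates it with a standard Gillespie scheme. The assignment of the rates (here a1, b1 for even states and a2, b2 for odd states, for up and down jumps respectively) is a hypothetical parametrization chosen for this example only, and is not claimed to match the parameters (α₁, α₂, β₁, β₂) of the results below.

```python
import random

def simulate_chain(k, t_max, a1, b1, a2, b2, rng):
    """Gillespie simulation of a continuous-time Markov chain on the
    integers with nearest-neighbour jumps and alternating rates.
    Assumed convention (for illustration only): even states jump up
    with rate a1 and down with rate b1; odd states use a2 and b2."""
    x, t = k, 0.0
    while True:
        up, down = (a1, b1) if x % 2 == 0 else (a2, b2)
        total = up + down
        t += rng.expovariate(total)          # exponential holding time
        if t > t_max:
            return x                         # state occupied at time t_max
        x += 1 if rng.random() < up / total else -1

# With symmetric rates the empirical mean of X(t) stays near zero.
rng = random.Random(42)
samples = [simulate_chain(0, 5.0, 1.0, 1.0, 1.0, 1.0, rng) for _ in range(2000)]
```

With asymmetric rates (up rates larger than down rates, say) the empirical mean drifts upward accordingly.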
In particular we extend the results in [10] by giving explicit expressions of the probability generating function, mean and variance of X(t) (for each fixed t > 0), and we study the asymptotic behavior (as t → ∞) in the fashion of large deviations. Here we also give explicit expressions of the state probabilities.
Moreover we consider some random time-changes of the basic model {X(t) : t ≥ 0}, with independent processes. This is motivated by the great interest that the theory of random time-changes (and subordination) has been receiving since [5] (see also [30]). In particular this theory allows one to construct non-standard models which are useful for possible applications in different fields; indeed, in many circumstances, the process is more realistically assumed to evolve according to a random (so-called operational) time, instead of the usual deterministic one. For instance, in applications to finance, the particle jumps usually represent price changes separated by a random waiting time between trades; then a time-changed version captures the role of information flow and activity time in modeling price changes (see e.g. [17]). Similarly, in applications to hydrology, the velocity irregularities caused by a heterogeneous porous medium can be described by heavy-tailed particle jumps, whereas suitable assumptions concerning the distribution of the waiting times allow one to model particle sticking or trapping (see e.g. [4]).
In both cases, i.e. for both {X(T_ν(t)) : t ≥ 0} and {X(S_{ν,μ}(t)) : t ≥ 0}, we provide expressions for the state probabilities in terms of the generalized Fox-Wright function. We recall [23] among the references with the inverse of the stable subordinator, and [15,27] and [28] among the references with the tempered stable subordinator. Typically these two random time-changes are associated with some generalized derivative in the literature; namely the Caputo left fractional derivative (see, for example, (2.4.14) and (2.4.15) in [18]) in the first case, and the shifted fractional derivative (see (6) in [1]; see also (17) in [1] for the connections with the fractional Riemann-Liouville derivative) in the second case.
We also try to extend the large deviation results for {X(t) : t ≥ 0} to the cases with a random time-change considered in this paper. It is useful to remark that all the large deviation principles in this paper are proved by applications of the Gärtner Ellis Theorem; moreover these large deviation principles yield the convergence (at least in probability) to the values at which the large deviation rate functions uniquely vanish. Thus, motivated by potential applications, when dealing with large deviation principles with the same speed function, we compare the rate functions to establish whether we have a faster or slower convergence (if they are comparable). In conclusion the evaluation of the rate functions can be an important task, in particular when they are given in terms of a variational formula (as happens with the application of the Gärtner Ellis Theorem).
The applications of the Gärtner Ellis Theorem are based on suitable limits of moment generating functions. So, in view of the applications of this theorem, we study the probability generating functions of the random variables of the processes; in particular the formulas obtained for {X(T_ν(t)) : t ≥ 0} have some analogies with many results in the literature for other time-fractional processes (for instance the probability generating functions are expressed in terms of the Mittag-Leffler function), with both continuous and discrete state space (see, for example, [22,14,2] and [16]). For {X(T_ν(t)) : t ≥ 0} we can consider large deviations only (the difficulties in obtaining a moderate deviation result are briefly discussed); moreover we compute (and plot) different large deviation rate functions for various choices of ν ∈ (0, 1) and we conclude that the smaller ν is, the faster X^ν(t)/t converges to zero (as t → ∞). For {X(S_{ν,μ}(t)) : t ≥ 0} we can obtain large and moderate deviations for the tempered case μ > 0 only; in fact in this case we can apply the Gärtner Ellis Theorem because we have light-tailed random variables (namely the moment generating functions of the involved random variables are finite in a neighborhood of the origin).
There are some references in the literature with applications of the Gärtner Ellis Theorem to time-changed processes. However there are very few cases where the random time-change is given by the inverse of the stable subordinator; see e.g. [13] and [35] where the time-changed processes are fractional Brownian motions (see also [20] and [25] for other asymptotic results for time-changed Gaussian processes with inverse stable subordinators). We are not aware of any other references where the time-changed process takes values on Z.
We conclude with the outline of the paper. Section 2 is devoted to some preliminaries on large deviations. In Section 3 we present the results for the basic model, i.e. the (non-fractional) process {X(t) : t ≥ 0}. Then we present some results for the process {X(t) : t ≥ 0} with random time-changes: the case with the inverse of the stable subordinator is studied in Section 4, and the case with the (possibly tempered) stable subordinator is studied in Section 5. We conclude with the short final Section 6 devoted to some conclusions. We also present a final appendix (Section A) with the expressions of the state probabilities.

Preliminaries on large deviations
Some results in this paper concern the theory of large deviations; so, in this section, we recall some preliminaries (see e.g. [7], pages 4–5). A family of probability measures {π_t : t > 0} on a topological space Y satisfies the large deviation principle (LDP for short) with rate function I and speed function v_t if lim_{t→+∞} v_t = +∞,

lim inf_{t→+∞} (1/v_t) log π_t(O) ≥ −inf_{y∈O} I(y) for all open sets O,

and

lim sup_{t→+∞} (1/v_t) log π_t(C) ≤ −inf_{y∈C} I(y) for all closed sets C.

A rate function is said to be good if all its level sets {y ∈ Y : I(y) ≤ η} (for η ≥ 0) are compact. We also present moderate deviation results. This terminology is used when, for each family of positive numbers {a_t : t > 0} such that a_t → 0 and v_t a_t → ∞, we have a family of laws of centered random variables (which depend on a_t) which satisfies the LDP with speed function 1/a_t, governed by the same quadratic rate function which uniquely vanishes at zero (for every choice of {a_t : t > 0}). More precisely we have a rate function J(y) = y²/(2σ²), for some σ² > 0. Typically moderate deviations fill the gap between the convergence to zero of the centered random variables and the convergence in distribution to a centered Normal distribution with variance σ².
The main large deviation tool used in this paper is the Gärtner Ellis Theorem (see e.g. Theorem 2.3.6 in [7]).
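Since the Gärtner Ellis Theorem delivers the rate function as the Legendre transform of the limiting (logarithmic) moment generating function, a small numerical sketch of that transform may be helpful. The function Lam below is an illustrative convex cumulant (that of a continuous-time symmetric walk with unit up and down rates), not the specific function studied in this paper.

```python
import math

def legendre(Lambda, y, lo=-20.0, hi=20.0, iters=200):
    """Evaluate the Legendre transform Lambda*(y) = sup_g (g*y - Lambda(g))
    by ternary search; valid because g -> g*y - Lambda(g) is concave
    whenever Lambda is convex."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if m1 * y - Lambda(m1) < m2 * y - Lambda(m2):
            lo = m1
        else:
            hi = m2
    g = 0.5 * (lo + hi)
    return g * y - Lambda(g)

def Lam(g):
    # Illustrative cumulant limit: symmetric walk with unit up/down rates.
    return math.exp(g) + math.exp(-g) - 2.0
```

The transform vanishes exactly at y = Lam'(0) = 0, the point of concentration, and is strictly positive elsewhere.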

Results for the basic model (non-fractional case)
In this section we present the results for the basic model. Some of them will be used for the models with random time-changes in the next sections. We start with some non-asymptotic results, where t is fixed, which concern probability generating functions, means and variances. In the second part we present the asymptotic results, namely large (and moderate) deviation results as t → ∞.
In particular the probability generating functions {F_k(·, t) : k ∈ Z, t ≥ 0} are important in both parts; they are defined by F_k(z, t) := Σ_{n∈Z} p_{k,n}(t) z^n, where {p_{k,n}(t) : k, n ∈ Z, t ≥ 0} are the state probabilities in (1).
We also have to consider the function Λ : R → R defined in (2), where h is the function in (3). Remark 3.1. The non-asymptotic results presented below depend on k = X(0), and we have different formulations when k is odd or even. In particular we can pass from one case to the other by exchanging (α₁, α₂) and (β₁, β₂). On the contrary k is negligible for the asymptotic results; in fact h(z; α, β) = h(z; β, α), and analogous properties hold for the function Λ, for its first derivative Λ′ and for its second derivative Λ″.
The function Λ is the analogue of the function in equation (14) in [10], and it plays a crucial role in the proofs of the large (and moderate) deviation results. However we refer to this function also for the non-asymptotic results in order to have simpler expressions; in particular we refer to the derivatives Λ′(0) and Λ″(0), and therefore we present the following lemma.

Lemma 3.1. Let Λ be the function in (2). Then the derivatives Λ′(0) and Λ″(0) admit explicit expressions in terms of the rates α₁, α₂, β₁, β₂.
Proof. The desired equalities can be checked with some cumbersome computations. Here we only remark that it is useful to verify the equalities in terms of the function h and its derivatives.

Non-asymptotic results
In this section we present explicit formulas for the probability generating functions (see Proposition 3.1), and for the means and variances (see Proposition 3.2). In both propositions one can check what we said in Remark 3.1 about the exchange of (α₁, α₂) and (β₁, β₂).
In view of this we present some preliminaries. It is known that the state probabilities solve a system of differential equations. If we consider the decomposition (4), where G_k and H_k are suitable generating functions, and the matrix A in (6), then the equations (5) can be rewritten in a compact matrix form (7). We start with the probability generating functions.
Proof. The main part of the proof consists of the computation of the matrix exponential e^{At}, where A is the matrix in (6); finally we easily conclude by taking into account (4) and (7).
(where h is defined by (3)); moreover it is known that we can find an invertible matrix S which diagonalizes A, with the eigenvalues ĥ₋(z) and ĥ₊(z) in (9); both S and its inverse admit explicit expressions.

Then the desired matrix exponential e^{At} can be computed; moreover, after some computations, we obtain explicit expressions for its entries u_{ij}(z, t). We complete the proof noting that, by (4) and (7), we have a similar equality for F_{2k}(z, t) together with F_{2k+1}(z, t) = z^{2k+1}(u_{12}(z, t) + u_{22}(z, t)); these equalities yield the desired expressions. In the next proposition we give the mean and the variance; in particular we refer to Λ′(0) and Λ″(0) given in Lemma 3.1.
Proof. The desired expressions of the mean and of the variance can be obtained with suitable (well-known) formulas in terms of the derivatives of F_k(z, t) with respect to z evaluated at z = 1; these values can be computed by considering the explicit formula for F_k(z, t) in Proposition 3.1. The computations are cumbersome and we omit the details.

Asymptotic results
In this section we present Propositions 3.3 and 3.4, which generalize Propositions 3.1 and 3.2 in [10]. In both cases we apply the Gärtner Ellis Theorem, and we use the probability generating function in Proposition 3.1. Actually the proof of Proposition 3.4 here is slightly different from the proof of Proposition 3.2 in [10]. We also give some brief comments on the interest of these results (whatever k ∈ Z we choose). Proposition 3.3 allows us to say that X(t)/t converges in probability to Λ′(0) (as t → ∞); moreover, for every measurable set A such that Λ′(0) ∉ Ā, roughly speaking P(X(t)/t ∈ A | X(0) = k) decays exponentially fast with a rate given by inf_{y∈A} Λ*(y), where Λ* is the large deviation rate function. On the other hand Proposition 3.4 provides a class of LDPs that fill the gap between the convergence of X(t)/t to Λ′(0) cited above, and the weak convergence of (X(t) − E[X(t)|X(0) = k])/√t to the centered Normal distribution with variance Λ″(0).
Proof. We can simply adapt the proof of Proposition 3.1 in [10]. The details are omitted.

Proposition 3.4. Let {a_t : t > 0} be such that a_t → 0 and ta_t → +∞ (as t → +∞). Then, for all k ∈ Z, the family of laws of the random variables √(a_t/t) (X(t) − E[X(t)|X(0) = k]) satisfies the LDP with speed 1/a_t and rate function J(y) = y²/(2Λ″(0)).
We remark that the proof reduces to checking the limit (10). As far as the right hand side of (10) is concerned, we take into account Proposition 3.1 for the moment generating function and Proposition 3.2 for the mean; then, by (2), and by considering the second order Taylor formula for the function Λ around the origin, we obtain a remainder o(γ²/(ta_t)) such that o(γ²/(ta_t))/(γ²/(ta_t)) → 0 (as t → +∞), and (10) is checked. These limits give a generalization of the analogous limits in [10].
Results for the case with the inverse of the stable subordinator

In this section we consider the time-changed process {X^ν(t) := X_1(T_ν(t)) : t ≥ 0}, where {T_ν(t) : t ≥ 0} is the inverse of the stable subordinator, independent of a version {X_1(t) : t ≥ 0} of the non-fractional process studied above. This random time-change is of interest when we study a chain molecular diffusion and, for some reasons (for instance some environmental conditions), we need to refer to a modification of the basic model with a sort of trapping and delaying effect.
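The trapping and delaying effect can be made concrete with a small sketch: the inverse process is the first-passage functional T(t) = inf{s : S(s) > t}, so every jump of the subordinator S becomes a flat stretch of T during which the time-changed chain is frozen. The toy step path below is hypothetical; simulating an actual stable subordinator would additionally require sampling one-sided stable laws (e.g. by the Chambers-Mallows-Stuck method).

```python
import bisect

def inverse_path(times, values, t):
    """First-passage inverse T(t) = inf{s : S(s) > t} of a nondecreasing
    right-continuous step path given by S(times[i]) = values[i]."""
    i = bisect.bisect_right(values, t)
    return times[i] if i < len(times) else float("inf")

# Toy path: S jumps from 1 to 5 at time 2, so T(t) is frozen at 2
# for every t in [1, 5) -- the trapping/delaying effect.
times, values = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 5.0, 6.0]
```

Flat stretches of T correspond exactly to the jumps of S, and T(t) = ∞ once t exceeds the last recorded value of the path.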
So, in view of what follows, we recall some preliminaries. We start with the definition of the Mittag-Leffler function (see e.g. [26], page 17): E_ν(x) := Σ_{j≥0} x^j/Γ(νj + 1). Then we have

E[e^{γ T_ν(t)}] = E_ν(γ t^ν) for all γ ∈ R. (12)

In some references this formula is stated assuming that γ ≤ 0, but this restriction is not needed because we can refer to the analytic continuation of the Laplace transform with complex argument. We also recall that formula (24) in [21] provides a version of (12) for t = 1 (in that formula there is −s in place of γ, and s ∈ C).
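A truncated-series evaluation of the Mittag-Leffler function is straightforward and can be used to check its values and growth numerically; the function name and the truncation strategy below are ours.

```python
import math

def mittag_leffler(nu, x, tol=1e-15, max_terms=170):
    """Truncated series E_nu(x) = sum_{j >= 0} x**j / Gamma(nu*j + 1);
    the series converges for every real x, and the cap on the Gamma
    argument avoids overflow in math.gamma."""
    total = 0.0
    for j in range(max_terms):
        g = nu * j + 1.0
        if g > 170.0:                  # math.gamma overflows beyond ~171
            break
        term = x ** j / math.gamma(g)
        total += term
        if j > 0 and abs(term) < tol:  # remaining tail is negligible
            break
    return total
```

For instance E_1 is the exponential function and E_{1/2}(x) = e^{x²} erfc(−x), which give two easy sanity checks; moreover (1/t) log E_ν(t^ν) approaches 1 for large t, a special case of the growth exploited in the asymptotic results below.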

Probability generating function
Now we prove Proposition 4.1, which provides an expression for the probability generating functions {F^ν_k(·, t) : k ∈ Z, t ≥ 0}, where F^ν_k(z, t) := Σ_{n∈Z} p^ν_{k,n}(t) z^n and {p^ν_{k,n}(t) : k, n ∈ Z, t ≥ 0} are the state probabilities defined by p^ν_{k,n}(t) := P(X(T_ν(t)) = n | X(0) = k). Obviously Proposition 4.1 is the analogue of Proposition 3.1 (and we can recover it by setting ν = 1).

Proposition 4.1. For z > 0 we have
where c_k(z) is as in (8) and ĥ±(z) are the eigenvalues in (9).
Proof. We recall that T_ν(0) = 0. Then, if we refer to the expression of the probability generating functions {F_k(·, t) : k ∈ Z, t ≥ 0} in Proposition 3.1, and take into account the moment generating function in (12), after some manipulations we obtain an expression which coincides with the one in the statement of the proposition.

Asymptotic results
In this section we present Proposition 4.2, which is the analogue of Proposition 3.3. Unfortunately we cannot present a moderate deviation result, namely we cannot present the analogue of Proposition 3.4; see the discussion in Remark 4.1.
Proof. We want to apply the Gärtner Ellis Theorem and, for all γ ∈ R, we have to take the limit of (1/t) log F^ν_k(e^γ, t) (as t → ∞). Obviously we consider the expression of the function F^ν_k(z, t) in Proposition 4.1. Firstly, if ν ∈ (0, 1), we have

Λ_ν(γ) := lim_{t→∞} (1/t) log F^ν_k(e^γ, t) = (Λ(γ))^{1/ν} if Λ(γ) > 0, and 0 otherwise; (14)

this can be checked noting that ĥ₋(z) < 0 and ĥ₊(e^γ) = Λ(γ) (for all γ ∈ R), by taking into account the limit

lim_{t→∞} (1/t) log E_ν(c t^ν) = 0 if c ≤ 0, and c^{1/ν} if c > 0

(this limit can be seen as a consequence of an expansion of the Mittag-Leffler function; see (1.8.27) in [18] with α = ν and β = 1), and by considering a suitable application of Lemma 1.2.15 in [7].
Moreover the function Λ_ν in the limit (14) is nonnegative and attains its minimum, equal to zero, at the points of the set {γ ∈ R : Λ(γ) ≤ 0}; we recall that this set reduces to the single point γ = 0 if and only if Λ′(0) = 0. Thus we can apply the Gärtner Ellis Theorem (because the function in the limit is finite everywhere and differentiable), and the desired LDP holds. Remark 4.1. We have some difficulties in extending Proposition 3.4 to the time-fractional case. In fact, if a moderate deviation result holds, we expect it to be governed by the rate function J_ν(y) := y²/(2Λ″_ν(0)), where Λ″_ν(0) is the second derivative at the origin γ = 0 of Λ_ν, assuming that such a value exists and is finite. On the contrary Λ″_ν(0) exists only if ν ∈ (0, 1/2], and it is equal to zero. So, in such a case, we should have J_ν(0) = 0 and J_ν(y) = ∞ for y ≠ 0, and this rate function is not interesting; in fact it is the largest rate function that we can have for a family that converges to zero (for instance this rate function comes up when we have constant random variables converging to zero).
Thus, by combining these two statements, there exists δ > 0 such that, for 0 < |y| < δ, the rate function corresponding to ν₁ is strictly larger than the one corresponding to ν₂ (see Figure 2, where Λ′(0) = 0 and some specific values of ν are considered). In conclusion we can say that X^{ν₁}(t)/t converges to zero faster than X^{ν₂}(t)/t (as t → ∞).

Probability generating function
Now we prove Proposition 5.1, which provides an expression for the probability generating functions {F̃^{ν,μ}_k(·, t) : k ∈ Z, t ≥ 0} of the process time-changed with the tempered stable subordinator {S_{ν,μ}(t) : t ≥ 0}. Obviously Proposition 5.1 is the analogue of Propositions 3.1 and 4.1. The condition ĥ₊(z) ≤ μ will be discussed after the proof.
Proof. We recall that S_{ν,μ}(0) = 0. Then, if we refer to the expression of the probability generating functions {F_k(·, t) : k ∈ Z, t ≥ 0} in Proposition 3.1, and take into account the moment generating function in (17), after some manipulations we get the desired expression (we recall that ĥ₋(z) < 0) if ĥ₊(z) ≤ μ (and infinity otherwise). So we can easily check that this coincides with the expression in the statement of the proposition.
We conclude this section with a brief discussion on the condition ĥ₊(z) ≤ μ for μ ≥ 0. For z > 0 we can make this condition explicit by (9) and (3); then, after some easy computations, it can be checked that it is equivalent to m₋(μ) ≤ z ≤ m₊(μ) for suitable bounds m₋(μ) and m₊(μ). In particular, for the case μ = 0, we have m₋(0) = 1 and/or m₊(0) = 1, and they are both equal to 1 if and only if α₁β₁ = α₂β₂ or, equivalently, Λ′(0) = 0 by Lemma 3.1.
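Under the usual normalization of the tempered stable subordinator, the limiting cumulant t⁻¹ log E[e^{γ S_{ν,μ}(t)}] equals μ^ν − (μ − γ)^ν for γ ≤ μ and is infinite otherwise; the normalization is an assumption of ours, to be matched against (17). The sketch below encodes this function and its finiteness threshold, which is exactly the light-tail property available in the tempered case μ > 0.

```python
def tempered_stable_exponent(nu, mu, g):
    """Limiting cumulant (1/t) * log E[exp(g * S_{nu,mu}(t))] under the
    usual normalization: mu**nu - (mu - g)**nu, finite iff g <= mu."""
    if g > mu:
        return float("inf")
    return mu ** nu - (mu - g) ** nu
```

For μ = 0 the cumulant is finite only for g ≤ 0 (so in no neighborhood of the origin), which is consistent with the restriction to μ > 0 for the moderate deviation results.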

Asymptotic results
In this section we present Propositions 5.2 and 5.3. The first one is the analogue of Proposition 3.3, and it involves a function Λ̃_{ν,μ} defined in the limit (22) in terms of the function Λ in (2); it provides an LDP for the family P(X^{ν,μ}(t)/t ∈ · | X^{ν,μ}(0) = k). This can be checked noting that ĥ₋(z) < 0, ĥ₊(e^γ) = Λ(γ) (for all γ ∈ R), and by considering a suitable application of Lemma 1.2.15 in [7].
The function Λ̃_{ν,μ} in the limit (22) is essentially smooth (see e.g. Definition 2.3.5 in [7]); in fact it is finite in a neighborhood of the origin, differentiable in the interior of the set D := {γ ∈ R : Λ̃_{ν,μ}(γ) < ∞}, and steep (namely Λ̃′_{ν,μ}(γ_n) → ∞ for every sequence {γ_n : n ≥ 1} in the interior of D which converges to a boundary point of the interior of D) because, if γ₀ is such that Λ(γ₀) = μ, the derivative Λ̃′_{ν,μ}(γ) tends to infinity as γ tends to γ₀. Then we can apply the Gärtner Ellis Theorem (in fact the function Λ̃_{ν,μ} is also lower semi-continuous), and the desired LDP holds.

Conclusions
In this paper we study continuous-time Markov chains on integers which allow transitions to adjacent states only, with alternating rates. We present some explicit formulas