Compositions of Poisson and Gamma processes

In the paper we study models of time-changed Poisson and Skellam-type processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. We obtain the probability distributions of the considered time-changed processes in explicit form and discuss their properties.


Introduction
Stochastic processes with random time, and more general compositions of processes, are quite popular topics of recent studies, both in the theory of stochastic processes and in various applied areas. In particular, in financial mathematics, models with a random clock (or time change) allow one to capture more realistically the relationship between calendar time and the activity of financial markets. Models with random time appear in reliability and queuing theory and in biological, ecological and medical research; note also that, for solving some problems of statistical estimation, sampling of a stochastic process at random times or along the trajectory of another process can be used. Some examples of applications are described, for example, in [5].
In the present paper we study various compositions of Poisson and Gamma processes. We only consider the cases when processes used for compositions are independent.
The Poisson process directed by a Gamma process and, conversely, the Gamma process directed by a Poisson process can be encountered, for example, in the book by W. Feller [6], where the distributions of these processes are presented as particular examples within the general framework of directed processes.
Time-changed Poisson processes have been investigated extensively in the literature. We mention, for example, the recent comprehensive study undertaken in [16], concerned with the processes N(H_f(t)), where N(t) is a Poisson process and H_f(t) is an arbitrary subordinator with Laplace exponent f, independent of N(t). Particular emphasis has been placed on the cases where f(u) = u^α, α ∈ (0, 1) (space-fractional Poisson process), f(u) = (u + θ)^α − θ^α, α ∈ (0, 1), θ > 0 (tempered Poisson process) and f(u) = log(1 + u) (negative binomial process); in the last case we have the Poisson process with a Gamma subordinator.
The most intensively studied models of Poisson processes with time change are two fractional extensions of the Poisson process, namely, the space-fractional and the time-fractional Poisson processes, obtained by choosing a stable subordinator or its inverse process in the role of time, respectively. The literature devoted to these processes is rather voluminous; we refer, e.g., to the recent papers [7,15,12] (and references therein), and to the paper [14], where the correlation structure of time-changed Lévy processes was investigated and the correlation of the time-fractional Poisson process was discussed among a variety of other examples. The most recent results concern non-homogeneous fractional Poisson processes (see, e.g., [13] and references therein).
Interesting models of processes are based on the use of the difference of two Poisson processes, the so-called Skellam processes, and their generalizations via time change. An investigation of these models can be found, for example, in [2]; we also refer to the paper [10], where fractional Skellam processes were introduced and studied.
In the present paper we study time-changed Poisson and Skellam processes, where the role of time is played by compound Poisson-Gamma subordinators and their inverse (or first passage time) processes. Some motivation for our study is presented in Remarks 1 and 2 in Section 3.
We obtain in explicit form the probability distributions of the considered time-changed processes and their first and second order moments.
In particular, in the case where the time change is performed by means of the compound Poisson-exponential subordinator and its inverse process, the corresponding probability distributions of time-changed Poisson and Skellam processes are presented in terms of generalized Mittag-Leffler functions.
We also find a relation, in the form of a differential equation, between the distribution of the Poisson process time-changed by the Poisson-exponential process and the distribution of the Poisson process time-changed by the inverse Poisson-exponential process.
The paper is organized as follows. In Section 2 we recall the definitions of the processes which will be considered in the paper. In Section 3 we discuss the main features of the compound Poisson-Gamma process G_N(t). In Section 4 we study Poisson and Skellam-type processes time-changed by the process G_N(t); in Section 5 we investigate time-changed Poisson and Skellam-type processes where the role of time is played by the inverse process of G_N(t), and we also discuss some properties of the inverse processes. The Appendix contains details of the derivation of some results stated in Section 5.

Preliminaries
In this section we recall the definitions of the processes which will be considered in the paper (see, e.g., [1,3]).
The Poisson process N(t) with intensity parameter λ > 0 is a Lévy process with values in N ∪ {0} such that each N(t) has Poisson distribution with parameter λt, that is,

P{N(t) = k} = e^{−λt} (λt)^k / k!, k = 0, 1, 2, . . . ;

the Lévy measure of N(t) can be written in the form ν(du) = λ δ_{1}(du), where δ_{1} is the Dirac measure concentrated at 1.

The Gamma process G(t) with parameters α, β > 0 is a Lévy process such that each G(t) follows the Gamma distribution Γ(αt, β), that is, has the density

g(x, t) = (β^{αt} / Γ(αt)) x^{αt−1} e^{−βx}, x > 0.

The Lévy measure of G(t) is ν(du) = α u^{−1} e^{−βu} du and the corresponding Bernštein function is f(u) = α log(1 + u/β).

The Skellam process is defined as

S(t) = N_1(t) − N_2(t), t ≥ 0,

where N_1(t), t ≥ 0, and N_2(t), t ≥ 0, are two independent Poisson processes with intensities λ_1 > 0 and λ_2 > 0, respectively. The probability mass function of S(t) is given by

P{S(t) = k} = e^{−(λ_1+λ_2)t} (λ_1/λ_2)^{k/2} I_k(2t√(λ_1λ_2)), k ∈ Z, (1)

where I_k is the modified Bessel function of the first kind [20]:

I_k(z) = Σ_{m=0}^∞ (z/2)^{2m+k} / (m! (m + k)!).

The Skellam process is a Lévy process; its Lévy measure is the linear combination of two Dirac measures, ν(du) = λ_1 δ_{1}(du) + λ_2 δ_{−1}(du), and the moment generating function is given by

E e^{θS(t)} = e^{−t(λ_1 + λ_2 − λ_1 e^θ − λ_2 e^{−θ})}, θ ∈ R.

Skellam processes are considered, for example, in [2]; the Skellam distribution was introduced and studied in [19] and [9].
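As a quick numerical sanity check of the Skellam probability mass function (1), the following sketch (assuming NumPy and SciPy are available; `iv` denotes the modified Bessel function I_k, and the parameter values are arbitrary) verifies that the probabilities sum to one and reproduce the mean (λ_1 − λ_2)t:

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind I_k

def skellam_pmf(k, t, lam1, lam2):
    """P{S(t) = k} for the Skellam process S(t) = N1(t) - N2(t)."""
    return (np.exp(-(lam1 + lam2) * t)
            * (lam1 / lam2) ** (k / 2)
            * iv(k, 2 * t * np.sqrt(lam1 * lam2)))

lam1, lam2, t = 1.5, 0.7, 2.0
ks = np.arange(-60, 61)        # truncation: tails beyond |k| = 60 are negligible here
p = skellam_pmf(ks, t, lam1, lam2)
total = p.sum()                # ~ 1
mean = (ks * p).sum()          # ~ (lam1 - lam2) * t = 1.6
```

Note that iv(−k, z) = iv(k, z) for integer k, so the same expression serves for negative k.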
We will consider Skellam processes with time change,

S_I(t) = S(X(t)) = N_1(X(t)) − N_2(X(t)),

where X(t) is a subordinator independent of N_1(t) and N_2(t), and will call such processes time-changed Skellam processes of type I. We will also consider the processes of the form

S_II(t) = N_1(X_1(t)) − N_2(X_2(t)),

where N_1(t), N_2(t) are two independent Poisson processes with intensities λ_1 > 0 and λ_2 > 0, and X_1(t), X_2(t) are two independent copies of a subordinator X(t), which are also independent of N_1(t) and N_2(t); we will call the process S_II(t) a time-changed Skellam process of type II.
To represent distributions and other characteristics of the processes considered in the next sections, we will use some special functions, besides the modified Bessel function introduced above. Namely, we will use the Wright function

Φ(ρ, δ, z) = Σ_{k=0}^∞ z^k / (k! Γ(ρk + δ)); (3)

for δ = 0, (3) is simplified as follows:

Φ(ρ, 0, z) = Σ_{k=1}^∞ z^k / (k! Γ(ρk));

the two-parameter generalized Mittag-Leffler function

E_{ρ,δ}(z) = Σ_{k=0}^∞ z^k / Γ(ρk + δ);

and the three-parameter generalized Mittag-Leffler function

E^γ_{ρ,δ}(z) = Σ_{k=0}^∞ (γ)_k z^k / (k! Γ(ρk + δ)), (6)

z ∈ C, ρ, δ, γ ∈ C, with Re(ρ) > 0, Re(δ) > 0, Re(γ) > 0, where (γ)_k = γ(γ + 1) · · · (γ + k − 1) is the Pochhammer symbol (see, e.g., [8] for definitions and properties of these functions).
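For numerical work with these series, direct truncation is usually sufficient for moderate |z|. The sketch below (a minimal Python implementation; the function name and truncation level are our own choices, not taken from the paper) evaluates the three-parameter function and checks it against the classical special cases E^1_{1,1}(z) = e^z and E^1_{1,2}(z) = (e^z − 1)/z:

```python
import math

def ml3(z, rho, delta, gamma, terms=100):
    """Three-parameter (Prabhakar) generalized Mittag-Leffler function:
    E^gamma_{rho,delta}(z) = sum_{k>=0} (gamma)_k z^k / (k! Gamma(rho k + delta)),
    evaluated by direct truncation of the series."""
    s, poch = 0.0, 1.0          # poch holds the Pochhammer symbol (gamma)_k
    for k in range(terms):
        term = poch * z**k / (math.factorial(k) * math.gamma(rho * k + delta))
        s += term
        if k > 0 and abs(term) < 1e-17 * abs(s):
            break               # series has converged to double precision
        poch *= gamma + k
    return s

val1 = ml3(1.0, 1.0, 1.0, 1.0)  # ~ e
val2 = ml3(1.0, 1.0, 2.0, 1.0)  # ~ e - 1
```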

Compound Poisson-Gamma process
The first example of a composition of Poisson and Gamma processes which we consider in the paper is the compound Poisson-Gamma process. This is a well known process; however, here we would like to focus on some of its important features. Let N(t) be a Poisson process and {G_n, n ≥ 1} be a sequence of i.i.d. Gamma random variables independent of N(t). Then the compound Poisson process with Gamma distributed jumps is defined as

Y(t) = Σ_{n=1}^{N(t)} G_n, t ≥ 0.

This process can also be represented as Y(t) = G(N(t)), that is, as a time-changed Gamma process, where the role of time is played by the Poisson process. Let us denote this process by G_N(t). Let N(t) and G(t) have parameters λ and (α, β), respectively. The process G_N(t) is a Lévy process with Laplace exponent (or Bernštein function) of the form

f(u) = λ(1 − (β/(β + u))^α),

and the corresponding Lévy measure is

ν(du) = λ (β^α/Γ(α)) u^{α−1} e^{−βu} du. (7)

The transition probability measure of the process G_N(t) can be written in the closed form:

P{G_N(t) ∈ ds} = e^{−λt} δ_{0}(ds) + e^{−λt−βs} (1/s) Φ(α, 0, λt(βs)^α) ds; (8)

therefore, the probability law of G_N(t) has an atom e^{−λt} at zero, that is, a discrete part P{G_N(t) = 0} = e^{−λt}, and the density of the absolutely continuous part can be expressed in terms of the Wright function.
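The atom at zero and the first moment are easy to check by simulation. The following sketch (NumPy assumed; the parameter values are arbitrary illustrative choices) simulates G_N(t) directly as a compound Poisson sum, using the fact that the sum of N i.i.d. Γ(α, β) variables is Γ(Nα, β):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, beta, t = 1.2, 2.0, 3.0, 1.5   # arbitrary illustrative parameters
n_paths = 200_000

# G_N(t) = G_1 + ... + G_{N(t)}: the sum of N i.i.d. Gamma(alpha, rate beta)
# jumps is Gamma(N * alpha, rate beta); paths with N = 0 stay at zero (the atom)
N = rng.poisson(lam * t, size=n_paths)
G = np.zeros(n_paths)
pos = N > 0
G[pos] = rng.gamma(shape=alpha * N[pos], scale=1.0 / beta)

atom = np.mean(G == 0.0)   # discrete part: close to exp(-lam * t)
mean = G.mean()            # close to lam * t * alpha / beta
```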
In particular, when α = n, n ∈ N, we have a Poisson-Erlang process, which we will denote by G^{(n)}_N(t), and for α = 1 we have a Poisson process with exponentially distributed jumps. We will denote this last process by E_N(t). Its Lévy measure is ν(du) = λβ e^{−βu} du and its Laplace exponent is

f(u) = λu/(β + u).

The transition probability measure of E_N(t) is given by

P{E_N(t) ∈ ds} = e^{−λt} δ_{0}(ds) + e^{−λt−βs} √(λβt/s) I_1(2√(λβts)) ds.

Consider now subordinators whose Lévy measures belong to the family

ν(du) = C u^{−a−1} e^{−bu} du, u > 0, C > 0. (9)

Note that: (i) for the range a ∈ (0, 1) and b > 0 we obtain the tempered stable subordinators; (ii) the limiting case a ∈ (0, 1), b = 0 corresponds to stable subordinators; (iii) the case a = 0, b > 0 corresponds to Gamma subordinators; (iv) for a < 0 we have compound Poisson-Gamma subordinators.
The probability distributions of the above subordinators can be written in closed form in case (iii); in case (i) for a = 1/2, when we have the inverse Gaussian subordinators; and in case (iv).
In the paper [16] a deep and detailed investigation was performed for time-changed Poisson processes where the role of time is played by the subordinators from the above cases (i)-(iii) (and, as we have already pointed out in the introduction, the most studied in the literature is case (ii)). The mentioned paper actually deals with general time-changed processes of the form N(H_f(t)), where N(t) is a Poisson process and H_f(t) is an arbitrary subordinator with Laplace exponent f, independent of N(t). This general construction falls into the framework of Bochner subordination (see, e.g., the book by Sato [18]) and can be studied by means of different approaches.
With the present paper we intend to complement the study undertaken in [16] with one more example. We consider the time change by means of subordinators corresponding to the above case (iv); therefore, subordination related to the measures (9) will be covered for the whole range of parameters. Note also that an attractive feature of these processes is the closed form of their distributions, which allows one to perform exact calculations for the characteristics of the corresponding time-changed processes.
We also study Skellam processes with time change by means of compound Poisson-Gamma subordinators; in this part our results are close to the corresponding results of the paper [10].
In our paper we develop (following [16,10], among others) an approach for studying time-changed Poisson processes via the investigation of their distributional properties, with the use of the form and properties of the distributions of the processes involved.
It would also be interesting to study the above mentioned processes within the framework of Bochner subordination via the semigroup approach. We leave this topic for future research, as well as the study of other characteristics of the processes considered in the present paper and their comparison with related results existing in the literature.
Note that in our exposition, for convenience, we will write the Lévy measures (9) in the case of compound Poisson-Gamma subordinators in the form (7), with α > 0 and the transparent meaning of the parameters.
Remark 2. It is well known that the composition of two independent stable subordinators is again a stable subordinator. If S_i(t), i = 1, 2, are two independent stable subordinators with Laplace exponents c_i u^{a_i}, i = 1, 2, then S_1(S_2(t)) is the stable subordinator with index a_1 a_2, since its Laplace exponent is c_2 c_1^{a_2} u^{a_1 a_2}.
More generally, the iterated composition of stable subordinators S_1, . . . , S_k with indices a_1, . . . , a_k is the stable subordinator with index ∏_{i=1}^k a_i. In the paper [11] it was shown that one specific subclass of Poisson-Gamma subordinators has a property similar to the property of stable subordinators described above, that is, if two processes belong to this class, so does their composition. Namely, this is the class of compound Poisson processes with exponentially distributed jumps and parameters λ = β = 1/a; therefore, the Lévy measure is of the form ν(du) = (1/a^2) e^{−u/a} du and the corresponding Bernštein function is f_a(u) = u/(1 + au). Denote such processes by E^a_N(t). Then, as can be easily checked (see also [11]), the composition of two independent processes E^{a_1}_N and E^{a_2}_N is a process of the same class with parameter a_1 + a_2; in particular, since Σ_{n=1}^∞ 1/2^n = 1, the process E^1_N(t) can be represented as an infinite composition of independent subordinators E^{1/2^n}_N. This interesting feature of the processes E^a_N(t) deserves further investigation.
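The closure of this class under composition can be seen on the level of Bernštein functions: the Laplace exponent of the composition E^{a_1}_N(E^{a_2}_N(t)) is f_{a_2} ∘ f_{a_1}, and f_{a_2}(f_{a_1}(u)) = u/(1 + (a_1 + a_2)u) = f_{a_1+a_2}(u). A numerical sketch of this identity (plain Python, parameter values arbitrary):

```python
def f(a, u):
    """Bernstein function of the process E^a_N: f_a(u) = u / (1 + a u)."""
    return u / (1.0 + a * u)

a1, a2 = 0.5, 0.25
for u in (0.1, 1.0, 7.3):
    lhs = f(a2, f(a1, u))      # exponent of the composed subordinator
    rhs = f(a1 + a2, u)        # exponent of E^{a1+a2}_N
    assert abs(lhs - rhs) < 1e-12
```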

Compound Poisson-Gamma process as time change
Let N_1(t) be the Poisson process with intensity λ_1. Consider the time-changed process X(t) = N_1(G_N(t)), where G_N(t) is independent of N_1(t).

Theorem 1. The probability mass function of the process X(t) = N_1(G_N(t)) is given by

p_k(t) = e^{−λt} [ 1_{{k=0}} + (λ_1^k/k!) Σ_{n=1}^∞ ((λt)^n/n!) (Γ(nα + k)/Γ(nα)) β^{nα}/(β + λ_1)^{nα+k} ], k ≥ 0.

The probabilities p_k(t), k ≥ 0, satisfy a system of difference-differential equations, which follows as a particular case of the general governing equation presented in Theorem 2.1 of [16].

In the case α = 1, that is, when the process G_N(t) becomes E_N(t), the compound Poisson process with exponentially distributed jumps, the probabilities p_k(t), k ≥ 0, can be represented via the generalized Mittag-Leffler function, as stated in the next theorem.
Theorem 2. The probability mass function of the process X(t) = N_1(E_N(t)) is given by

p_k(t) = e^{−λt} [ 1_{{k=0}} + (λ_1/(β + λ_1))^k (λβt/(β + λ_1)) E^{k+1}_{1,2}(λβt/(β + λ_1)) ], k ≥ 0, (12)

and the probabilities p_k(t), k ≥ 0, satisfy the corresponding equation of Theorem 1 with α = 1.

Proof of Theorem 1. The probability mass function of the process X(t) = N_1(G_N(t)) can be obtained by standard conditioning arguments (see, e.g., the general result for subordinated Lévy processes in [18], Theorem 30.1).
For k ≥ 1 we obtain:

p_k(t) = ∫_0^∞ e^{−λ_1 s} ((λ_1 s)^k/k!) P{G_N(t) ∈ ds}
= (λ_1^k/k!) ∫_0^∞ s^{k−1} e^{−λ_1 s} e^{−λt−βs} Φ(α, 0, λt(βs)^α) ds
= e^{−λt} (λ_1^k/k!) Σ_{n=1}^∞ ((λt)^n/(n! Γ(nα))) β^{nα} ∫_0^∞ s^{nα+k−1} e^{−(β+λ_1)s} ds
= e^{−λt} (λ_1^k/k!) Σ_{n=1}^∞ ((λt)^n/n!) (Γ(nα + k)/Γ(nα)) β^{nα}/(β + λ_1)^{nα+k}.

For k = 0 the atom of G_N(t) at zero contributes the additional term e^{−λt}.

The governing equation for the probabilities p_k(t) = P(N_1(G_N(t)) = k) follows as a particular case of the general equation presented in Theorem 2.1 of [16] for the probabilities P(N(H(t)) = k), where H(t) is an arbitrary subordinator.
Proof of Theorem 2. We obtain the statements of Theorem 2 as consequences of the corresponding statements of Theorem 1, using the expansion (6) of the three-parameter generalized Mittag-Leffler function.
Remark 3. The moments of the process N_1(G_N(t)) can be calculated, for example, from the moment generating function, which is given by:

E e^{θN_1(G_N(t))} = e^{−λt(1 − (β/(β + λ_1(1 − e^θ)))^α)}, (15)

for θ ∈ R such that β + λ_1(1 − e^θ) ≠ 0. We have, in particular,

E N_1(G_N(t)) = λ_1 λαt/β, Var N_1(G_N(t)) = λ_1 λαt/β + λ_1^2 λα(α + 1)t/β^2.

Expressions for the probabilities p_k(t) and for the moments were also calculated in [11] using the probability generating function of the process N_1(G_N(t)). In [11] the covariance function was also obtained:

Cov(N_1(G_N(t)), N_1(G_N(s))) = (λ_1 λα/β + λ_1^2 λα(α + 1)/β^2) min(t, s). (16)

A very detailed study of the time-changed processes N(H_f(t)), with H_f(t) being an arbitrary subordinator, independent of the Poisson process N(t), with Laplace exponent f(u), is presented in [16]. In particular, it was shown therein that the probability generating function can be written in the form G(u, t) = e^{−tf(λ(1−u))}, where λ is the parameter of the process N(t).
Note also that in order to compute the first two moments and the covariance function of time-changed Lévy processes the following result, stated in [14] as Theorem 2.1, can be used. If X(t) is a homogeneous Lévy process with X(0) = 0, Y(t) is a nondecreasing process independent of X(t) and Z(t) = X(Y(t)), then

E Z(t) = E X(1) U(t), (17)

provided that EX(t) and U(t) = EY(t) exist; and if X(t) and Y(t) have finite second moments, then

Var Z(t) = Var X(1) U(t) + (E X(1))^2 Var Y(t), (19)

Cov(Z(t), Z(s)) = Var X(1) U(min(t, s)) + (E X(1))^2 Cov(Y(t), Y(s)). (20)

We will use the above formulas further in the paper.
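As an illustration, the identity E Z(t) = E X(1) U(t) from Theorem 2.1 of [14] can be checked by Monte Carlo for Z(t) = N_1(G_N(t)), where U(t) = E G_N(t) = λαt/β. A sketch in Python (NumPy assumed; parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam1 = 0.8                                  # intensity of the outer process N1
lam, alpha, beta, t = 1.2, 2.0, 3.0, 1.5    # parameters of G_N(t)
n = 200_000

# sample G_N(t); given G_N(t), the value N1(G_N(t)) is Poisson with mean lam1 * G_N(t)
Npts = rng.poisson(lam * t, size=n)
G = np.zeros(n)
pos = Npts > 0
G[pos] = rng.gamma(shape=alpha * Npts[pos], scale=1.0 / beta)
Z = rng.poisson(lam1 * G)

# theory: E Z(t) = E N1(1) * E G_N(t) = lam1 * lam * alpha * t / beta
mean_theory = lam1 * lam * alpha * t / beta   # = 0.96 here
```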

Remark 4.
Let E^a(t) = E^a_N(t) be the compound Poisson-exponential process with Laplace exponent f_a(u) = u/(1 + au), which we discussed in Remark 2 above, and let N^a(t) = N_1(E^a(t)). In view of Remark 2, for the processes N^a(t) we have the following property concerning double and iterated compositions:

N^{a_1}(E^{a_2}(t)) = N^{a_1+a_2}(t) (in distribution). (21)

This property makes the processes N^a(t) similar to the space-fractional Poisson processes, which are obtained as N(S_α(t)), with S_α(t) being a stable subordinator, as was shown in the papers [15,7]; relation (21) is referred to in [7] as the auto-conservative property. This property deserves further investigation.
Remark 5. Note that the marginal laws of the time-changed processes obtained in Theorems 1 and 2 (and in the theorems that follow) can be considered as new classes of distributions; in particular, (12) and (32) below represent three-parameter distributions involving the generalized Mittag-Leffler functions.
We now consider Skellam processes S(t) with time change, where the role of time is played by compound Poisson-Gamma subordinators G_N(t) with Laplace exponent

f(u) = λ(1 − (β/(β + u))^α).

Let the process S(t) have parameters λ_1 and λ_2 and let us first consider the time-changed Skellam process of type I, that is, the process

S_I(t) = S(G_N(t)) = N_1(G_N(t)) − N_2(G_N(t)),

where N_1(t), N_2(t) and G_N(t) are mutually independent.
Theorem 3. Let S_I(t) = S(G_N(t)). Then the probabilities r_k(t) = P(S_I(t) = k), k ∈ Z, are given by

r_k(t) = e^{−λt} 1_{{k=0}} + (λ_1/λ_2)^{k/2} ∫_0^∞ e^{−(λ_1+λ_2)u} I_k(2u√(λ_1λ_2)) e^{−λt−βu} (1/u) Φ(α, 0, λt(βu)^α) du. (22)

The moment generating function of S_I(t) has the following form:

E e^{θS_I(t)} = e^{−λt(1 − (β/(β + λ_1 + λ_2 − λ_1e^θ − λ_2e^{−θ}))^α)},

for θ such that β + λ_1 + λ_2 − λ_1e^θ − λ_2e^{−θ} ≠ 0.

Remark 6. For the case α = 1, that is, S_I(t) = S(E_N(t)), the corresponding formulas follow from (22) by using the identity Φ(1, 0, z) = √z I_1(2√z).

Proof of Theorem 3. Using conditioning arguments, we can write:

r_k(t) = ∫_0^∞ s_k(u) P{G_N(t) ∈ du},

where s_k(u) = P(S(u) = k). Inserting the expressions for s_k(u) and P{G_N(t) ∈ du}, which are given by formulas (1) and (8) respectively, we come to (22). The moment generating function is obtained by the same conditioning, using the moment generating function of S(t).

Remark 7. The mean, variance and covariance function of the Skellam process S_I(t) = S(G_N(t)) can be calculated with the use of the general result for time-changed Lévy processes stated in [14], Theorem 2.1 (see our Remark 3, formulas (17)-(20)), and the expressions for the mean, variance and covariance of the Skellam process, which are

E S(t) = (λ_1 − λ_2)t, Var S(t) = (λ_1 + λ_2)t, Cov(S(t), S(s)) = (λ_1 + λ_2) min(t, s).
We obtain:

E S_I(t) = (λ_1 − λ_2)λαt/β,
Var S_I(t) = (λ_1 + λ_2)λαt/β + (λ_1 − λ_2)^2 λα(α + 1)t/β^2,
Cov(S_I(t), S_I(s)) = ((λ_1 + λ_2)λα/β + (λ_1 − λ_2)^2 λα(α + 1)/β^2) min(t, s).

Consider now the time-changed Skellam process of type II, where the role of time is played by the subordinator X(t) = E_N(t) with Laplace exponent f(u) = λu/(β + u), that is, the process

S_II(t) = N_1(X_1(t)) − N_2(X_2(t)), (23)

where X_1(t) and X_2(t) are independent copies of X(t), independent of N_1(t), N_2(t).
Theorem 4. Let S_II(t) be the time-changed Skellam process of type II given by (23). Its probability mass function is given by formula (24) for k ≥ 0 and by formula (25) for k < 0. The moment generating function is

E e^{θS_II(t)} = e^{−λt λ_1(1−e^θ)/(β+λ_1(1−e^θ))} e^{−λt λ_2(1−e^{−θ})/(β+λ_2(1−e^{−θ}))},

for θ such that β + λ_1(1 − e^θ) ≠ 0 and β + λ_2(1 − e^{−θ}) ≠ 0.
Proof of Theorem 4. Using the independence of N_1(X_1(t)) and N_2(X_2(t)), we can write:

P(S_II(t) = k) = Σ_{m=max(0,−k)}^∞ P(N_1(X_1(t)) = k + m) P(N_2(X_2(t)) = m), (26)

and then we use the expressions for the probabilities of N_i(E_N(t)) given in Theorem 2.
For k ≥ 0, inserting these expressions into (26), we obtain (24); in the analogous way we come to (25). In view of the independence of N_1(X_1(t)) and N_2(X_2(t)), the moment generating function is obtained as the product Ee^{θS_II(t)} = Ee^{θN_1(X_1(t))} Ee^{−θN_2(X_2(t))}, and then we use expression (15) with α = 1.
Remark 8. The moments of S_II(t) can be calculated using the moment generating function given in Theorem 4, or using the independence of the processes N_i(X_i(t)), i = 1, 2, and the corresponding expressions for the moments of N_i(X_i(t)), i = 1, 2. Since N_1(X_1(t)) and N_2(X_2(t)) are independent, we can also easily obtain the covariance function as follows:

Cov(S_II(t), S_II(s)) = Cov(N_1(X_1(t)), N_1(X_1(s))) + Cov(N_2(X_2(t)), N_2(X_2(s))),

where the expressions for the covariance function of the processes N_i(E_N(t)) are used (see formula (16) with α = 1).
Inverse compound Poisson-Gamma process as time change

Inverse compound Poisson-exponential process
We first consider the process E_N(t), the compound Poisson process with exponentially distributed jumps, with Laplace exponent f(u) = λu/(β + u). Define the inverse process (or first passage time):

Y(t) = inf{s ≥ 0 : E_N(s) > t}, t ≥ 0. (27)

It is known (see, e.g., [4]) that the process Y(t) has the density

h(s, t) = λ e^{−λs−βt} I_0(2√(λβst)), s > 0, (28)

and its Laplace transform is

E e^{−θY(t)} = (λ/(λ + θ)) e^{−βtθ/(λ+θ)}, (29)

which can also be verified directly using (28). The moments of Y(t) can be easily found by direct calculations. We have

E Y(t) = (βt + 1)/λ, Var Y(t) = (2βt + 1)/λ^2. (30)

For example, the first moment is obtained by integrating s h(s, t) over (0, ∞) with the use of the series representation of I_0. The covariance function of the process Y(t) can be calculated using the results on the moments of inverse subordinators stated in [21].
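The density h(s, t) and the moment formulas can be cross-checked numerically. A sketch in Python (SciPy assumed; `ive` is the exponentially scaled Bessel function, used here to avoid overflow for large arguments, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import ive      # ive(0, x) = I_0(x) * exp(-x) for x >= 0
from scipy.integrate import quad

lam, beta, t = 2.0, 3.0, 1.5       # arbitrary illustrative parameters

def h(s):
    """Density of Y(t): lam * exp(-lam*s - beta*t) * I_0(2*sqrt(lam*beta*s*t))."""
    x = 2.0 * np.sqrt(lam * beta * s * t)
    return lam * ive(0, x) * np.exp(x - lam * s - beta * t)

total, _ = quad(h, 0, np.inf)                   # ~ 1: Y(t) has no atom for t > 0
mean, _ = quad(lambda s: s * h(s), 0, np.inf)   # ~ (beta*t + 1)/lam = 2.75
```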
Lemma 1. The covariance function of the process Y(t) is given by

Cov(Y(t), Y(s)) = (1/λ^2)(2β min(t, s) + 1). (31)

The proof of Lemma 1 is given in Appendix A.1.

Remark 9.
It is known that, generally, an inverse subordinator is a process with non-stationary, non-independent increments (see, e.g., [21]). We have not investigated this question here for the process Y(t); however, we can observe for Y(t) the same relation between the expressions for the variance and the covariance, namely Cov(Y(t), Y(s)) = Var Y(min(t, s)), as that which holds for processes with stationary independent increments.
Remark 10. Note that, for inverse processes, an important role is played by the function U(t) = EY(t), which is called the renewal function. This function can be calculated using the following formula for its Laplace transform:

Ũ(s) = ∫_0^∞ e^{−st} U(t) dt = 1/(s f(s)),

where f(s) is the Laplace exponent of the subordinator for which Y(t) is the inverse (see [21], formula (13)). In our case we obtain

Ũ(s) = (β + s)/(λs^2) = β/(λs^2) + 1/(λs),

and by inverting this Laplace transform we come again to the expression for the first moment given in (30). The function U(t) characterizes the distribution of the inverse process Y(t), since all the moments of Y(t) (and the mixed moments as well) can be expressed (by recursion) in terms of U(t) (see [21]).
Let N 1 (t) be the Poisson process with intensity λ 1 . Consider the time-changed process Z(t) = N 1 (Y (t)), where Y (t) is the inverse process given by (27), independent of N 1 (t).
Theorem 5. The probability mass function of the process Z(t) = N_1(Y(t)) is given by

p^I_k(t) = P(Z(t) = k) = (λ/(λ + λ_1)) (λ_1/(λ + λ_1))^k e^{−βt} E^{k+1}_{1,1}(λβt/(λ + λ_1)), k ≥ 0; (32)

the mean and variance are:

E Z(t) = λ_1(βt + 1)/λ, Var Z(t) = λ_1(βt + 1)/λ + λ_1^2(2βt + 1)/λ^2,

and the covariance function has the following form:

Cov(Z(t), Z(s)) = λ_1(β min(t, s) + 1)/λ + λ_1^2(2β min(t, s) + 1)/λ^2.

The Laplace transform is given by

E e^{−θZ(t)} = (λ/(λ + λ_1(1 − e^{−θ}))) exp{−βt λ_1(1 − e^{−θ})/(λ + λ_1(1 − e^{−θ}))}, θ > 0.

Proof of Theorem 5. The probabilities p^I_k(t) = P(Z(t) = k) can be obtained by means of the following calculations:

p^I_k(t) = ∫_0^∞ e^{−λ_1 s} ((λ_1 s)^k/k!) h(s, t) ds
= λ e^{−βt} (λ_1^k/k!) Σ_{j=0}^∞ ((λβt)^j/(j!)^2) ∫_0^∞ s^{k+j} e^{−(λ+λ_1)s} ds
= (λ/(λ + λ_1)) (λ_1/(λ + λ_1))^k e^{−βt} Σ_{j=0}^∞ ((k + 1)_j/(j! Γ(j + 1))) (λβt/(λ + λ_1))^j,

which gives (32). The mean and variance can be calculated using formulas (17), (19) and the expressions (30). The Laplace transform is obtained by noting that E e^{−θZ(t)} = E e^{−λ_1(1−e^{−θ})Y(t)} and using the Laplace transform of Y(t).
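The distribution of N_1(Y(t)) can also be checked by numerically integrating the Poisson probabilities against the density h(s, t): the probabilities should sum to one and give the mean λ_1 E Y(t) = λ_1(βt + 1)/λ. A Python sketch (SciPy assumed; parameter values and the truncation at k = 60 are arbitrary choices):

```python
import numpy as np
from scipy.special import ive, gammaln
from scipy.integrate import quad

lam, beta, lam1, t = 2.0, 3.0, 1.5, 1.0   # arbitrary illustrative parameters

def h(s):
    """Density of the inverse process Y(t), via the scaled Bessel function."""
    x = 2.0 * np.sqrt(lam * beta * s * t)
    return lam * ive(0, x) * np.exp(x - lam * s - beta * t)

def pois(k, x):
    """Poisson(k; x) weight, computed in log space to avoid overflow."""
    return np.exp(k * np.log(x) - x - gammaln(k + 1)) if x > 0 else float(k == 0)

def p(k):
    """P(N1(Y(t)) = k), by conditioning on Y(t)."""
    return quad(lambda s: pois(k, lam1 * s) * h(s), 0, np.inf)[0]

total = sum(p(k) for k in range(60))       # ~ 1
mean = sum(k * p(k) for k in range(60))    # ~ lam1 * (beta*t + 1)/lam = 3.0
```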
We now state the relationship, in the form of a system of differential equations, between the marginal distributions of the processes N 1 (E N (t)) and N 1 (Y (t)), with Y (t) being the inverse process for E N (t).
Introduce the following notation. Let

p^E_k(t) = P(N_1(E_N(t)) = k) for E_N(t) = E_β(N_λ(t));

for p̃^E_k(t) we change the order of the parameters λ and β, that is, the parameters in E and N are interchanged, and λ and β are now the parameters of E and N respectively: p̃^E_k(t) = P(N_1(E_N(t)) = k) for E_N(t) = E_λ(N_β(t)). For the inverse processes we denote:

p^I_k(t) = P(N_1(Y(t)) = k), where Y(t) is the inverse process for E_N(t) = E_β(N_λ(t));
p̃^I_k(t) = P(N_1(Ỹ(t)) = k), where Ỹ(t) is the inverse process for E_N(t) = E_λ(N_β(t)).
Theorem 6. The probabilities p^E_k(t), p̃^E_k(t) and p^I_k(t), p̃^I_k(t) satisfy the differential equations (40) and (41); if λ = β, then p^E_k(t) and p^I_k(t) satisfy equation (42).

Proof. Firstly, we represent the derivative (d/dt)p̃^E_k(t) in the form (43). Next we use a relation which can be easily checked by direct calculations; therefore, (43) can be written in the form (44). Comparing the second term on the right-hand side of (44) with the expression for p^I_k(t), we come to (40). By analogous reasoning we derive (41), and taking λ = β in (40) or in (41), we obtain (42).
Remark 11. We note that in addition to (40)-(42) some other relations can be written; in particular, the probabilities p^I_k(t) can be represented recursively via p^E_k(t): if λ = β, such a representation can be deduced from (42) and (14). Consider now Skellam processes with time change, where the role of time is played by the inverse process Y(t) given by (27).
Let the Skellam process S(t) have parameters λ_1 and λ_2, and let us consider the process

S_I(t) = S(Y(t)) = N_1(Y(t)) − N_2(Y(t)), (45)

where N_1(t), N_2(t) and Y(t) are independent.
Theorem 7. Let S_I(t) be the Skellam process of type I given by (45). Then the probabilities r_k(t) = P(S_I(t) = k), k ∈ Z, are given by

r_k(t) = (λ_1/λ_2)^{k/2} λ e^{−βt} ∫_0^∞ e^{−(λ+λ_1+λ_2)u} I_k(2u√(λ_1λ_2)) I_0(2√(λβtu)) du.

The moment generating function is given by

E e^{θS_I(t)} = (λ/(λ + f_s(−θ))) exp{−βt f_s(−θ)/(λ + f_s(−θ))},

for θ such that λ + f_s(−θ) ≠ 0, where f_s(θ) = λ_1 + λ_2 − λ_1e^{−θ} − λ_2e^{θ} is the Laplace exponent of the initial Skellam process S(t).
Proof. Using conditioning arguments, we write:

r_k(t) = ∫_0^∞ s_k(u) h(u, t) du,

and then we insert the expressions for s_k(u) and h(u, t), which are given by formulas (1) and (28) respectively. The moment generating function is calculated in the same manner, for θ such that λ + λ_1 + λ_2 − λ_1e^θ − λ_2e^{−θ} ≠ 0.
Remark 12. The expressions for the mean, variance and covariance function of the Skellam process S(Y(t)) can be calculated analogously to the corresponding calculations for the process S(G_N(t)) (see Remark 7). We obtain:

E S_I(t) = (λ_1 − λ_2)(βt + 1)/λ,
Var S_I(t) = (λ_1 + λ_2)(βt + 1)/λ + (λ_1 − λ_2)^2(2βt + 1)/λ^2,
Cov(S_I(t), S_I(s)) = (λ_1 + λ_2)(β min(t, s) + 1)/λ + (λ_1 − λ_2)^2(2β min(t, s) + 1)/λ^2.

Consider the time-changed Skellam process of type II:

S_II(t) = N_1(Y_1(t)) − N_2(Y_2(t)), (47)

where Y_1(t) and Y_2(t) are independent copies of the inverse process Y(t), independent of N_1(t), N_2(t).
Theorem 8. Let S_II(t) be the time-changed Skellam process of type II given by (47). Its probability mass function is obtained by inserting the probabilities from Theorem 5 into the convolution formula (26). The moment generating function is given by

E e^{θS_II(t)} = (λ/(λ + λ_1(1 − e^θ))) e^{−βtλ_1(1−e^θ)/(λ+λ_1(1−e^θ))} · (λ/(λ + λ_2(1 − e^{−θ}))) e^{−βtλ_2(1−e^{−θ})/(λ+λ_2(1−e^{−θ}))},

for θ such that λ + λ_1(1 − e^θ) ≠ 0 and λ + λ_2(1 − e^{−θ}) ≠ 0.
Proof. Analogously to the proof of Theorem 4, we write the expression for P(S_II(t) = k) in the form (26), and then we use the expressions for the probabilities P(N_1(Y(t)) = k) from Theorem 5. The moment generating function is obtained as the product Ee^{θS_II(t)} = Ee^{θN_1(Y_1(t))} Ee^{−θN_2(Y_2(t))}, and then we use expression (35).
Remark 13. The moments of S_II(t) can be calculated using the moment generating function given in Theorem 8, or using the independence of the processes N_i(Y_i(t)), i = 1, 2, and the corresponding expressions for the moments of N_i(Y_i(t)), i = 1, 2. In view of the mutual independence of N_1(Y_1(t)) and N_2(Y_2(t)), and using the formula (31), we obtain the covariance function:

Cov(S_II(t), S_II(s)) = ((λ_1 + λ_2)/λ)(β min(t, s) + 1) + ((λ_1^2 + λ_2^2)/λ^2)(2β min(t, s) + 1).

Inverse compound Poisson-Erlang process
Consider now the compound Poisson-Erlang process G^{(n)}_N(t), that is, the Poisson-Gamma process with α = n, and define the corresponding inverse process:

Y^{(n)}(t) = inf{s ≥ 0 : G^{(n)}_N(s) > t}, t ≥ 0. (48)
For this case the density of the inverse process can be written in closed form (formula (49)). This formula (in a different set of parameters) was obtained in [17] (Theorem 3.1) by developing the approach presented in [4]. This approach is based on calculating and inverting Laplace transforms, taking into account the relationship between the Laplace transforms of a direct process and its inverse process. We refer to [4,17] for details (see also Appendix A.2). It should be noted that for compound Poisson-Gamma processes with a non-integer parameter α, inverting the Laplace transforms within this approach leads to complicated infinite integrals (see again [17]). The Laplace transform of the process Y^{(n)}(t) can be represented in the following form:

E e^{−θY^{(n)}(t)} = (λ/(λ + θ)) e^{−βt} Σ_{k=1}^n (βt)^{k−1} E_{n,k}(λ(βt)^n/(λ + θ))

(see (60) in Appendix A.2). With direct calculations, using the known form of the density of Y^{(n)}(t), we find the following expressions for the moments: for p ≥ 1,

E (Y^{(n)}(t))^p = (p!/λ^p) e^{−βt} Σ_{k=1}^n (βt)^{k−1} E^{p+1}_{n,k}((βt)^n)

(see the details of the calculations in Appendix A.2).

Remark 14.
Using arguments similar to those in [17] (see the proof of Lemma 3.11 therein), we can also derive another expression (50) for the first moment, in terms of the two-parameter generalized Mittag-Leffler function. From (50) we can see that EY^{(n)}(t) has linear behavior with respect to t as t → ∞ (formula (51)), which is indeed to be expected, since the following general result holds for subordinators with finite mean: the mean of their first passage time exhibits linear behavior for large times (see, for example, [21] and references therein).
The details of derivation of (50) and (51) are presented in Appendix A.2.
Consider the time-changed process Z^{(n)}(t) = N_1(Y^{(n)}(t)), where Y^{(n)}(t) is the inverse process given by (48), independent of N_1(t).
Theorem 9. The probability mass function of the process Z^{(n)}(t) = N_1(Y^{(n)}(t)) can be written in closed form, and the Laplace transform is given by

E e^{−θZ^{(n)}(t)} = E e^{−λ_1(1−e^{−θ})Y^{(n)}(t)}, θ > 0,

with the right-hand side evaluated by means of (60).
Proof. The proof is similar to that of Theorem 5; in particular, the probability mass function is obtained by integrating the Poisson probabilities against the density (49) of Y^{(n)}(t).

Remark 15. The first two moments of the process Z^{(n)}(t) can be calculated using formulas (17) and (19), and we can see that, similarly to EY^{(n)}(t), the mean EZ^{(n)}(t) = λ_1 EY^{(n)}(t) has linear behavior as t → ∞.

Let the Skellam process S(t) have parameters λ_1 and λ_2 and let us consider the process

S^{(n)}_I(t) = S(Y^{(n)}(t)),

where N_1(t), N_2(t) and Y^{(n)}(t) are independent.
Theorem 10. Let S^{(n)}_I(t) be the Skellam process defined above. Its moment generating function is given by

E e^{θS^{(n)}_I(t)} = E e^{−f_s(−θ)Y^{(n)}(t)},

for θ such that λ + f_s(−θ) ≠ 0, where f_s(θ) is the Laplace exponent of the initial Skellam process S(t); the right-hand side can be evaluated by means of (60).
Proof. The proof is analogous to that of Theorem 7.
Consider the time-changed Skellam process of type II:

S^{(n)}_II(t) = N_1(Y_1(t)) − N_2(Y_2(t)),

where Y_1(t) and Y_2(t) are independent copies of the inverse process Y^{(n)}(t), independent of N_1(t), N_2(t).
Proof. The proof is analogous to that of Theorem 8.
Remark 16. The covariance structure of the Skellam processes considered in this section appears to have a complicated form, and we postpone this issue to future research.
A.1 Covariance function of the process Y(t)

Let U(t_1, t_2, m_1, m_2) = E[Y^{m_1}(t_1) Y^{m_2}(t_2)] and let Ũ(u_1, u_2, m_1, m_2) be the Laplace transform of U(t_1, t_2, m_1, m_2). Then, in these notations, EY(t_1)Y(t_2) = U(t_1, t_2, 1, 1), and from Theorem 3.1, formula (17), of [21] we have an expression for Ũ(u_1, u_2, 1, 1) in terms of Ũ(u_1, u_2, 1, 0), the Laplace transform of U(t_1, t_2, 1, 0). The inverse Laplace transform can be found by the following calculations: for the relevant function we write the inverse Laplace transform in the form (56) and continue the calculations by inserting (56) into (55). In this way we obtain an expression for EY(t_1)Y(t_2), and therefore for the covariance Cov(Y(t), Y(s)) = EY(t)Y(s) − EY(t)EY(s) of the process Y(t). Using the expression U(t) = (1/λ)(βt + 1), we find the formula stated in Lemma 1.

A.2 Marginal distribution and moments of the process Y^{(n)}(t)
We present some details of the derivation of the expression for the probability density of the inverse Poisson-Erlang process Y^{(n)}(t) introduced by formula (48) in Section 5.2. The inverse Poisson-Erlang process was considered in [17] and its probability density function (p.d.f.) was presented in Theorem 3.1 therein. However, in [17] a different parametrization of the Poisson-Erlang process was used in comparison with that used in our paper.
For the convenience of the reader and to make the paper self-contained, we present here some details of the calculations, following the general approach developed in [4].
Introduce the Laplace transforms related to the process G^{(n)}_N(t). Then the relation (57) holds (see [4], Section 8.4, formula (3)). This formula holds, in fact, for more general compound Poisson processes. In the case of a compound Poisson process with jumps having the p.d.f. g(x), it is possible to write the exact expression for l*(v, u), and formula (57) takes the form:

q*(v, u) = f̂(v)(1 − ĝ(u)) / (u(1 − f̂(v)ĝ(u))), (58)

where f(x) is the p.d.f. of the exponential distribution of the waiting times, and f̂(v) and ĝ(u) are the Laplace transforms of f and g respectively (see formula (4), Section 8.4 in [4]).
For the case of the Poisson-Erlang process, f(x) is the p.d.f. of the exponential distribution and g(x) is the p.d.f. of the Erlang distribution. Inserting the expressions for f̂(v) and ĝ(u) into (58), we finally obtain:

q*(v, u) = λ((β + u)^n − β^n) / (u((λ + v)(β + u)^n − λβ^n)). (59)

One special case where the inversion of (59) can be easily performed was considered in [4], namely, the case when f and g are both exponential; in this case we come to the p.d.f. of the inverse compound Poisson-exponential process (see formula (28) in Section 5.1). An exact result is also available for the inverse Poisson-Erlang process. This result was stated in Theorem 3.1 of [17]; for its proof, the inverse Laplace transforms of (59) were calculated successively with respect to the variables v and u.
We next obtain the expressions for the moments of the process Y^{(n)}(t). Using the known form of the probability density of the process Y^{(n)}(t), we obtain, for p ≥ 1:

E (Y^{(n)}(t))^p = (p!/λ^p) e^{−βt} Σ_{k=1}^n (βt)^{k−1} E^{p+1}_{n,k}((βt)^n).