Arithmetic properties of multiplicative integer-valued perturbed random walks

Let $(\xi_1, \eta_1)$, $(\xi_2, \eta_2),\ldots$ be independent identically distributed $\mathbb{N}^2$-valued random vectors with arbitrarily dependent components. The sequence $(\Theta_k)_{k\in\mathbb{N}}$ defined by $\Theta_k=\Pi_{k-1}\cdot\eta_k$, where $\Pi_0=1$ and $\Pi_k=\xi_1\cdot\ldots\cdot \xi_{k}$ for $k\in\mathbb{N}$, is called a multiplicative perturbed random walk. We study arithmetic properties of the random sets $\{\Pi_1,\Pi_2,\ldots, \Pi_k\}\subset \mathbb{N}$ and $\{\Theta_1,\Theta_2,\ldots, \Theta_k\}\subset \mathbb{N}$, $k\in\mathbb{N}$. In particular, we derive distributional limit theorems for their prime counts and for the least common multiple.
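As a quick illustration (not part of the paper), the defining recursions $\Pi_k = \Pi_{k-1}\xi_k$ and $\Theta_k = \Pi_{k-1}\eta_k$ can be simulated directly. The sketch below is in Python; the sampler of the pair $(\xi, \eta)$ (here $\eta = \xi + 1$) is an arbitrary illustrative choice of dependent components.

```python
import random

def simulate_multiplicative_prw(n, sample_pair, seed=0):
    """Simulate Pi_k = xi_1 * ... * xi_k and Theta_k = Pi_{k-1} * eta_k
    for k = 1..n; `sample_pair` draws one copy of the pair (xi, eta)."""
    rng = random.Random(seed)
    pi, pis, thetas = 1, [], []
    for _ in range(n):
        xi, eta = sample_pair(rng)
        thetas.append(pi * eta)  # Theta_k uses Pi_{k-1}, before the update
        pi *= xi
        pis.append(pi)           # Pi_k
    return pis, thetas

def sample_pair(rng):
    # hypothetical dependent components: eta = xi + 1
    xi = rng.randint(1, 3)
    return xi, xi + 1

pis, thetas = simulate_multiplicative_prw(5, sample_pair)
```

Any joint sampler of an $\mathbb{N}^2$-valued pair can be plugged in for `sample_pair`.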

To set the scene we first introduce some necessary notation. Let $\mathcal{P}$ denote the set of prime numbers. For an integer $n \in \mathbb{N}$ and $p \in \mathcal{P}$, let $\lambda_p(n)$ denote the multiplicity of the prime $p$ in the prime decomposition of $n$, that is, $n = \prod_{p\in\mathcal{P}} p^{\lambda_p(n)}$.
For every $p \in \mathcal{P}$, the function $\lambda_p : \mathbb{N} \to \mathbb{N}_0$ is totally additive in the sense that $\lambda_p(mn) = \lambda_p(m) + \lambda_p(n)$ for all $m, n \in \mathbb{N}$.
The set of functions $(\lambda_p)_{p\in\mathcal{P}}$ is a basic building block from which many other arithmetic functions can be constructed. For example, with $\mathrm{GCD}(A)$ and $\mathrm{LCM}(A)$ denoting the greatest common divisor and the least common multiple of a set $A \subset \mathbb{N}$, respectively, we have $\mathrm{GCD}(A) = \prod_{p\in\mathcal{P}} p^{\min_{n\in A} \lambda_p(n)}$ and $\mathrm{LCM}(A) = \prod_{p\in\mathcal{P}} p^{\max_{n\in A} \lambda_p(n)}$.
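Since $\mathrm{GCD}$ and $\mathrm{LCM}$ are determined by the prime multiplicities, they can be computed directly from the functions $\lambda_p$. A minimal Python sketch (trial division, adequate only for small inputs):

```python
from math import prod

def lam(p, n):
    """lambda_p(n): the multiplicity of the prime p in n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def primes_up_to(m):
    """All primes <= m (trial division; fine for small m)."""
    return [q for q in range(2, m + 1)
            if all(q % d for d in range(2, int(q ** 0.5) + 1))]

def gcd_of(A):
    # GCD(A) = prod_p p^{min_{n in A} lambda_p(n)}
    return prod(p ** min(lam(p, n) for n in A) for p in primes_up_to(max(A)))

def lcm_of(A):
    # LCM(A) = prod_p p^{max_{n in A} lambda_p(n)}
    return prod(p ** max(lam(p, n) for n in A) for p in primes_up_to(max(A)))
```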
The listed arithmetic functions applied either to $A = \{\Pi_1, \ldots, \Pi_n\}$ or $A = \{\Theta_1, \ldots, \Theta_n\}$ are the main objects of investigation in the present paper. From the additivity of $\lambda_p$ we infer
$$\lambda_p(\Pi_k) = \sum_{i=1}^{k} \lambda_p(\xi_i) =: S_k(p), \quad k \in \mathbb{N}_0, \qquad (1)$$
and
$$\lambda_p(\Theta_k) = S_{k-1}(p) + \lambda_p(\eta_k) =: T_k(p), \quad k \in \mathbb{N}. \qquad (2)$$
Fix any $p \in \mathcal{P}$. Formulae (1) and (2) demonstrate that $S(p) := (S_k(p))_{k\in\mathbb{N}_0}$ is a standard additive random walk with generic step $\lambda_p(\xi)$, whereas the sequence $T(p) := (T_k(p))_{k\in\mathbb{N}}$ is a particular instance of an additive perturbed random walk, see [6], generated by the pair $(\lambda_p(\xi), \lambda_p(\eta))$.
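The random-walk structure of the prime counts is easy to verify numerically: by additivity, $\lambda_p(\Pi_k)$ accumulates the steps $\lambda_p(\xi_i)$, while $\lambda_p(\Theta_k)$ adds the perturbation $\lambda_p(\eta_k)$ to $\lambda_p(\Pi_{k-1})$. A sketch with a fixed (arbitrarily chosen) realization of $(\xi_k)$ and $(\eta_k)$:

```python
def lam(p, n):
    """lambda_p(n): multiplicity of the prime p in n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

xis = [2, 6, 3, 10]    # a fixed realization of xi_1, ..., xi_4
etas = [4, 3, 9, 5]    # a fixed realization of eta_1, ..., eta_4

Pi = [1]               # Pi_0 = 1
for x in xis:
    Pi.append(Pi[-1] * x)
Theta = [Pi[k] * etas[k] for k in range(4)]   # Theta_{k+1} = Pi_k * eta_{k+1}

for p in (2, 3, 5):
    # S_k(p) = sum_{i <= k} lam_p(xi_i) coincides with lam_p(Pi_k)
    S = [0]
    for x in xis:
        S.append(S[-1] + lam(p, x))
    assert all(S[k] == lam(p, Pi[k]) for k in range(5))
    # T_{k+1}(p) = S_k(p) + lam_p(eta_{k+1}) coincides with lam_p(Theta_{k+1})
    assert all(S[k] + lam(p, etas[k]) == lam(p, Theta[k]) for k in range(4))
```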

Main results
2.1. Distributional properties of the prime counts $(\lambda_p(\xi), \lambda_p(\eta))$. As is suggested by (1) and (2), the first step in the analysis of $S(p)$ and $T(p)$ should be the derivation of the joint distribution of $(\lambda_p(\xi), \lambda_p(\eta))_{p\in\mathcal{P}}$. The next lemma confirms that the finite-dimensional distributions of the infinite vector $(\lambda_p(\xi), \lambda_p(\eta))_{p\in\mathcal{P}}$ are expressible via the probability mass function of $(\xi, \eta)$. However, the obtained formulae are not easy to handle except in some special cases. For $i, j \in \mathbb{N}$, put

Lemma 1. Fix $p \in \mathcal{P}$ and nonnegative integers $(k_q)_{q\in\mathcal{P},\, q\le p}$ and $(\ell_q)_{q\in\mathcal{P},\, q\le p}$. Then

$w_{Ki, Lj}$, where $K := \prod_{q\le p,\, q\in\mathcal{P}} q^{k_q}$ and $L := \prod_{q\le p,\, q\in\mathcal{P}} q^{\ell_q}$.
Obviously, if $\xi$ and $\eta$ are independent, then

We proceed with a series of examples.
Example 1. For $\alpha > 1$, let $\mathbb{P}\{\xi = n\} = \dfrac{n^{-\alpha}}{\zeta(\alpha)}$, $n \in \mathbb{N}$, where $\zeta$ is the Riemann zeta function. Then $(\lambda_p(\xi))_{p\in\mathcal{P}}$ are mutually independent and $\mathbb{P}\{\lambda_p(\xi) \ge k\} = p^{-\alpha k}$, $k \in \mathbb{N}_0$, which means that $\lambda_p(\xi)$ has a geometric distribution on $\mathbb{N}_0$ with parameter $p^{-\alpha}$.
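For the zeta-type law $\mathbb{P}\{\xi = n\} = n^{-\alpha}/\zeta(\alpha)$ (the natural reading of the example above), the geometric tail $\mathbb{P}\{\lambda_p(\xi) \ge k\} = p^{-\alpha k}$ can be checked by truncated summation, since $\{\lambda_p(\xi) \ge k\} = \{p^k \text{ divides } \xi\}$. The truncation level below is an assumption of the sketch:

```python
from math import isclose

ALPHA, N_TRUNC = 2.0, 200000   # alpha and the truncation level are assumptions

def zeta_trunc(alpha, m=N_TRUNC):
    """Truncated Riemann zeta sum."""
    return sum(n ** -alpha for n in range(1, m + 1))

def prob_power_divides(p, k, alpha=ALPHA, m=N_TRUNC):
    """P{lambda_p(xi) >= k} = P{p^k | xi} under P{xi = n} = n^{-alpha}/zeta(alpha)."""
    num = sum(n ** -alpha for n in range(p ** k, m + 1, p ** k))
    return num / zeta_trunc(alpha, m)

# geometric tails: P{lambda_p(xi) >= k} should be close to p^{-alpha k}
assert isclose(prob_power_divides(2, 1), 2 ** -2.0, rel_tol=1e-3)
assert isclose(prob_power_divides(3, 2), 3 ** -4.0, rel_tol=1e-3)
```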
Example 2. For $\beta \in (0, 1)$,

Example 3. Let $\mathrm{Poi}(\lambda)$ be a random variable with the Poisson distribution with parameter $\lambda$ and put

where ${}_0F_{pk}$ is the generalized hypergeometric function, see Chapter 16 in [10].
In all examples above, the distribution of $\lambda_p(\xi)$, for every fixed $p \in \mathcal{P}$, is extremely light-tailed. It is not that difficult to construct 'weird' distributions under which all $\lambda_p(\xi)$ have infinite expectations.
Example 4. Let $(g_p)_{p\in\mathcal{P}}$ be any probability distribution supported by $\mathcal{P}$ with $g_p > 0$, and let $(t_k)_{k\in\mathbb{N}_0}$ be any probability distribution on $\mathbb{N}$ such that $\sum_{k=1}^{\infty} k t_k = \infty$ and $t_k > 0$. Define a probability distribution $h$ on $Q := \bigcup_{p\in\mathcal{P}} \{p, p^2, \ldots\}$ by

If $\xi$ is a random variable with distribution $h$, then

This example can be modified by taking $g := \sum_{p\in\mathcal{P}} g_p < 1$ and charging all points of $\mathbb{N} \setminus Q$ (this set contains $1$ and all integers having at least two distinct prime factors) with arbitrary positive masses of total weight $1 - g$. The obtained probability distribution charges all points of $\mathbb{N}$ and still possesses the property that all $\lambda_p$'s have infinite expectations.
Let $X$ be a random variable taking values in $\mathbb{N}$. Since $\log X = \sum_{p\in\mathcal{P}} \lambda_p(X) \log p$, the finiteness of $\mathbb{E}[\log^k X]$ for some $k \in \mathbb{N}$ entails $\mathbb{E}[(\lambda_p(X))^k] < \infty$ for all $p \in \mathcal{P}$. It is also clear that the converse implication is false in general. When $k = 1$, the inequality $\mathbb{E}[\log X] < \infty$ is equivalent to $\sum_{p\in\mathcal{P}} \mathbb{E}[\lambda_p(X)] \log p < \infty$. As we have seen in the above examples, checking that $\mathbb{E}[(\lambda_p(X))^k] < \infty$ might be a much more difficult task than proving the stronger assumption $\mathbb{E}[\log^k X] < \infty$. Thus, we shall mostly work under moment conditions on $\log \xi$ and $\log \eta$.
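The identity $\log X = \sum_{p\in\mathcal{P}} \lambda_p(X)\log p$ underlying this discussion is elementary to confirm numerically:

```python
from math import log, isclose

def prime_multiplicities(n):
    """Return {p: lambda_p(n)} by trial division."""
    fs, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fs[d] = fs.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

# log n = sum_p lambda_p(n) log p, checked on a few integers
for n in (2, 60, 1024, 9699690):
    assert isclose(log(n),
                   sum(k * log(p) for p, k in prime_multiplicities(n).items()))
```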
Our standing assumption throughout the paper is

which, by the above reasoning, implies $\mathbb{E}[\lambda_p(\xi)] < \infty$ for all $p \in \mathcal{P}$.

Limit theorems for S(p) and T (p)
From Donsker's invariance principle we immediately obtain the following proposition. Let $D := D([0, \infty), \mathbb{R})$ be the Skorokhod space endowed with the standard $J_1$-topology.
on the product space $D^{\mathbb{N}}$, where, for all $n \in \mathbb{N}$ and all

According to the proof of Proposition 1.3.13 in [6], see pp. 28–29 therein, the following holds true for the perturbed random walks $T(p)$, $p \in \mathcal{P}$. Note that (4) is clearly sufficient for (5).
From the continuous mapping theorem, under the assumptions of Proposition 2 we infer

see Proposition 1.3.13 in [6]. Formula (7), for a fixed $p \in \mathcal{P}$, belongs to the realm of limit theorems for the maximum of a single additive perturbed random walk. This circle of problems is well understood, see Section 1.3.3 in [6] and [7], in the situation when the underlying additive standard random walk is centered and attracted to a stable Lévy process. In our setting the perturbed random walks $(T_k(p))_{k\in\mathbb{N}}$ and $(T_k(q))_{k\in\mathbb{N}}$ are dependent whenever $p, q \in \mathcal{P}$, $p \neq q$, which makes the derivation of joint limit theorems harder and leads to various asymptotic regimes.
In the next result we shall assume that $\eta$ dominates $\xi$ in the sense that the asymptotic behavior of $\max_{1\le k\le n} T_k(p)$ is regulated by the perturbations $(\lambda_p(\eta_k))_{k\le n}$ for all $p \in \mathcal{P}_0$, where $\mathcal{P}_0$ is a finite subset of the prime numbers, and these $p$'s dominate all other primes.

Theorem 6. Assume (4). Suppose further that there exists a finite set $\mathcal{P}_0 \subseteq \mathcal{P}$, $d := |\mathcal{P}_0|$, such that the distributional tail of $(\lambda_p(\eta))_{p\in\mathcal{P}_0}$ is regularly varying at infinity in the following sense: for some positive function $(a(t))_{t>0}$ and a measure on the space of locally finite measures on $(0, \infty]^d$ endowed with the vague topology,

Finally, suppose $\mathbb{E}[\lambda_p(\eta)] < \infty$ for $p \in \mathcal{P} \setminus \mathcal{P}_0$. Then

where $((M_p(u))_{u\ge 0})_{p\in\mathcal{P}_0}$ is a multivariate extremal process defined by

Here the pairs $(t_k, y_k)$ are the atoms of a Poisson point process on $[0, \infty) \times (0, \infty]^d$ with intensity measure $\mathrm{LEB} \otimes \nu$, and the supremum is taken coordinatewise. Moreover,

2.3. Limit theorems for the LCM. The results of the previous section will be applied below to the analysis of

A moment's reflection shows that the analysis of $\diamond_n$ is trivial. Indeed, by definition, $\Pi_{n-1}$ divides $\Pi_n$ and thereupon $\diamond_n = \Pi_n$ for $n \in \mathbb{N}$. Thus, assuming that $\sigma_\xi^2 := \mathrm{Var}(\log \xi) \in (0, \infty)$, an application of Donsker's functional limit theorem yields

on the Skorokhod space $D$, where $(W(u))_{u\ge 0}$ is a standard Brownian motion.
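The observation that $\mathrm{LCM}(\Pi_1, \ldots, \Pi_n) = \Pi_n$, because $\Pi_{n-1}$ divides $\Pi_n$, is immediate to confirm by simulation; the step distribution below is an arbitrary illustrative choice (Python 3.9+ for `math.lcm`):

```python
from math import lcm
import random

rng = random.Random(7)
pi, running_lcm = 1, 1
for _ in range(50):
    pi *= rng.randint(1, 6)             # Pi_n = Pi_{n-1} * xi_n
    running_lcm = lcm(running_lcm, pi)  # LCM(Pi_1, ..., Pi_n), updated step by step
    assert running_lcm == pi            # diamond_n = Pi_n since Pi_{n-1} | Pi_n
```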
The simple structure of the sequence $(\diamond_n)_{n\in\mathbb{N}}$ breaks down completely upon introducing the perturbations $(\eta_k)$, which makes the analysis of $(\times_n)$ a much harder problem.
For instance, it contains as a special case the problem of studying the LCM of an independent sample, which is itself highly non-trivial. Note that
$$\log \times_n = \log \prod_{p\in\mathcal{P}} p^{\max_{1\le k\le n}(\lambda_p(\xi_1)+\cdots+\lambda_p(\xi_{k-1})+\lambda_p(\eta_k))} = \sum_{p\in\mathcal{P}} \max_{1\le k\le n} T_k(p) \log p,$$
which shows that the asymptotics of $\times_n$ is intimately connected with the behavior of $\max_{1\le k\le n} T_k(p)$, $p \in \mathcal{P}$.
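The displayed identity for $\log \times_n$ can be checked numerically by computing the LCM of the $\Theta_k$'s both directly and through the prime-wise maxima of $T_k(p)$. The bounded supports of $\xi$ and $\eta$ below are illustrative assumptions that keep the relevant primes to $\{2, 3, 5, 7\}$:

```python
from math import lcm, prod
import random

def lam(p, n):
    """lambda_p(n): multiplicity of the prime p in n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

rng = random.Random(3)
n = 30
xis = [rng.randint(1, 8) for _ in range(n)]
etas = [rng.randint(1, 8) for _ in range(n)]

Pi = [1]
for x in xis:
    Pi.append(Pi[-1] * x)
Theta = [Pi[k] * etas[k] for k in range(n)]   # Theta_{k+1} = Pi_k * eta_{k+1}

# with xi, eta <= 8, only the primes 2, 3, 5, 7 can occur
direct = lcm(*Theta)
via_max_T = prod(
    p ** max(lam(p, Pi[k]) + lam(p, etas[k]) for k in range(n))  # max_k T_k(p)
    for p in (2, 3, 5, 7)
)
assert direct == via_max_T
```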
As one can guess from Theorem 5, in a 'typical' situation relation (14) holds with $\log \times_{\lfloor tu\rfloor}$ replacing $\log \diamond_{\lfloor tu\rfloor}$. The following heuristics suggest the right form of the assumptions ensuring that the perturbations $(\eta_k)_{k\in\mathbb{N}}$ have an asymptotically negligible impact on $\log \times_n$. Take a prime $p \in \mathcal{P}$. Its contribution to $\log \times_n$ (up to a factor $\log p$) is $\max_{1\le k\le n} T_k(p)$. According to Theorem 5, this maximum is asymptotically the same as $S_n(p)$. However, as $p$ gets large, the mean step $\mathbb{E}[\lambda_p(\xi)]$ of the random walk $S_{n-1}(p)$ becomes small because of the identity

Thus, for large $p \in \mathcal{P}$, the remainder $\max_{1\le k\le n} T_k(p) - S_{n-1}(p)$ can, in principle, become larger than $S_{n-1}(p)$ itself if the tail of $\lambda_p(\eta)$ is sufficiently heavy. In order to rule out this possibility, we introduce the following deterministic sets:

and bound the rate of growth of $\max_{1\le k\le n} \lambda_p(\eta_k)$ for all $p \in \mathcal{P}_2(n)$. It is important to note that under assumption (8) it holds

Therefore, if $\mathbb{E}[\log X] < \infty$ for some random variable $X$, then the relation

holds true.
and the following two conditions

and

where

The condition (18) can be replaced by a stronger one which involves only the distribution of $\eta$, namely

Taking into account (16) and the fact that $\mathbb{E}[\log \eta] < \infty$, assumption (20) is nothing other than a condition on the speed of convergence of the series

Example 8. In the setting of Example 1, let $\xi$ and $\eta$ be arbitrarily dependent with

From the chain of relations
where we have used the prime number theorem for the asymptotic equivalence. Thus, (20) holds.

In the setting of Theorem 6 the situation is much simpler in the sense that almost no extra assumptions are needed to derive a limit theorem for $\times_n$.
Theorem 9. Under the same assumptions as in Theorem 6 and assuming additionally that

Note that it is allowed to take $\xi = 1$ in Theorem 9, which yields the following limit theorem for the LCM of independent integer-valued random variables.

Corollary 1. Under the same assumptions on η as in Theorem 6 it holds
Remark 4. The results presented in Theorems 7 and 9 are a contribution to a popular topic in probabilistic number theory, namely, the asymptotic analysis of the LCM of various random sets. For random sets comprised of independent random variables uniformly distributed on $\{1, 2, \ldots, n\}$ this problem has been addressed in [2, 3, 4, 5, 9]. Some models with a more sophisticated dependence structure have been studied in [1] and [8].

Limit theorems for coupled perturbed random walks
Theorems 5 and 6 will be derived from general limit theorems for the maxima of arbitrary additive perturbed random walks indexed by parameters ranging in a countable set, in the situation when the underlying additive standard random walks are positively divergent and attracted to a Brownian motion.
is an additive standard random walk. For each $r \in A$, the sequence $(T^*_k(r))_{k\in\mathbb{N}}$ defined by $T^*_k(r) := S^*_{k-1}(r) + Y_k(r)$, $k \in \mathbb{N}$, is an additive perturbed random walk. The sequence $((T^*_k(r))_{k\in\mathbb{N}})_{r\in A}$ is a collection of (generally) dependent additive perturbed random walks.
where, for all $n \in \mathbb{N}$ and arbitrary $r_1 < r_2 < \cdots < r_n$ with $r_i \in A$, $i \le n$, $(W_{r_1}(u), \ldots, W_{r_n}(u))_{u\ge 0}$ is an $n$-dimensional centered Wiener process with covariance matrix $C = (C_{i,j})_{1\le i,j\le n}$ with entries $C_{i,j} = C_{j,i} = \mathrm{Cov}(X(r_i), X(r_j))$.
Proof. We shall prove the equivalent statement that, as $t \to \infty$,

which differs from (23) by a shift of the subscript $k$. By the multidimensional Donsker theorem,

in the product topology of $D^{\mathbb{N}}$. Fix any $r \in A$ and write $\max_{0\le k\le \lfloor tu\rfloor}$

In view of (24) the proof is complete once we can show that

Let $(X_0(r), Y_0(r))$ be a copy of $(X(r), Y(r))$ which is independent of $(X_k(r), Y_k(r))_{k\in\mathbb{N}}$. Since the collection $((X_1(r), Y_1(r)), \ldots, (X_{n+1}(r), Y_{n+1}(r)))$ has the same distribution as $((X_{n+1}(r), Y_{n+1}(r)), \ldots, (X_1(r), Y_1(r)))$,

has the same distribution as

By assumption, $\mathbb{E}(-S^*_1(r)) \in (-\infty, 0)$ and $\mathbb{E}(Y(r) - X(r))^+ < \infty$. Hence, by Theorem 1.2.1 and Remark 1.2.3 in [6],

As a consequence, the a.s. limit

is a.s. finite. This completes the proof of (25).
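The a.s. finiteness invoked at the end of the proof reflects a general fact: for a positively divergent random walk with integrable perturbations, the running maximum of $Y_{k+1}(r) - S^*_k(r)$ stabilizes after finitely many steps. A simulation sketch (the exponential perturbations and uniform steps are purely illustrative assumptions):

```python
import random

# running maximum of Y_{k+1} - S*_k for a walk with positive drift;
# the chosen distributions satisfy E[X] > 0 and E(Y - X)^+ < infinity
rng = random.Random(11)
S, best, snapshots = 0.0, float("-inf"), []
for k in range(20000):
    Y = rng.expovariate(1.0)    # perturbation Y_{k+1}
    best = max(best, Y - S)     # max_{0 <= j <= k} (Y_{j+1} - S*_j)
    S += rng.uniform(0.0, 1.0)  # step X_{k+1} with mean 1/2 > 0
    if k in (199, 1999, 19999):
        snapshots.append(best)

# once S has drifted far upward, the running maximum no longer changes
assert snapshots[1] == snapshots[2]
```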
The assumption (8) on the space of locally finite measures on $(0, \infty]^d$ endowed with the vague topology. If

where $((M_r(u))_{u\ge 0})_{r\in A_0}$ is defined as in (12). Moreover,

Proof. According to Corollary 5.18 in [11],

$\max_{1\le k\le \lfloor tu\rfloor} Y_k(r)$, $a(t)$

in the product topology of $D^{\mathbb{N}}$. The function $(a(t))_{t\ge 0}$ is regularly varying at infinity with index $1/\alpha > 1$. Thus, by the law of large numbers, for all $r \in A$,

and (27) follows from the inequalities $\min_{1\le k\le \lfloor tu\rfloor}$

In view of (29) and (30), to prove (28) it suffices to check that

for every fixed $r \in A \setminus A_0$. This, in turn, follows from

which is a consequence of the assumption $\mathbb{E}[|Y(r)|] < \infty$, $r \in A \setminus A_0$, and the Borel–Cantelli lemma.

Proof of Theorem 7
We aim at proving that

Let $(\xi_0, \eta_0)$ be an independent copy of $(\xi, \eta)$ which is also independent of $(\xi_n, \eta_n)_{n\in\mathbb{N}}$. By the same reasoning as we have used in the proof of (25) we obtain

Taking into account $\sum_{p\in\mathcal{P}} \lambda_p(\eta_0) \log p = \log \eta_0$, we see that (31) is a consequence of

Since, for every fixed $p \in \mathcal{P}$, max

by assumption (8), it suffices to check that, for every fixed $\varepsilon > 0$,

In order to check (34) we divide the sum into two disjoint parts, with summations over $\mathcal{P}_1(n)$ and $\mathcal{P}_2(n)$. For the first sum, by Markov's inequality, we obtain

where the last estimate is a consequence of Erickson's inequality for renewal functions, see Eq. (6.5) in [6]. Further, since for $p \in \mathcal{P}_1(n)$,

The right-hand side converges to $0$, as $M \to \infty$, by (17). For the sum over $\mathcal{P}_2(n)$ the derivation is simpler. By Markov's inequality,

and the right-hand side tends to zero as $n \to \infty$ in view of (18). The proof is complete.

Proof of Theorem 9
From Theorem 6, with the aid of the continuous mapping theorem, we conclude that