Detecting independence of random vectors: generalized distance covariance and Gaussian covariance

Distance covariance is a quantity to measure the dependence of two random vectors. We show that the original concept introduced and developed by Székely, Rizzo and Bakirov can be embedded into a more general framework based on symmetric Lévy measures and the corresponding real-valued continuous negative definite functions. The Lévy measures replace the weight functions used in the original definition of distance covariance. All essential properties of distance covariance are preserved in this new framework. From a practical point of view this allows less restrictive moment conditions on the underlying random variables and one can use distance functions other than the Euclidean distance, e.g. the Minkowski distance. Most importantly, it serves as the basic building block for distance multivariance, a quantity to measure and estimate dependence of multiple random vectors, which is introduced in a follow-up paper [Distance Multivariance: New dependence measures for random vectors (submitted). Revised version of arXiv:1711.07775v1] to the present article.


Introduction
The concept of distance covariance was introduced by Székely, Rizzo and Bakirov [37] as a measure of dependence between two random vectors of arbitrary dimensions. Their starting point is to consider a weighted $L^2$-integral of the difference of the (joint) characteristic functions $f_X$, $f_Y$ and $f_{(X,Y)}$ of the ($\mathbb{R}^m$- and $\mathbb{R}^n$-valued) random variables $X$, $Y$ and $(X,Y)$,
$$V^2(X, Y; w) = \iint \big| f_{(X,Y)}(s,t) - f_X(s)\, f_Y(t) \big|^2\, w(s,t)\, ds\, dt. \tag{1}$$
The weight $w$ is given by $w(s,t) := c_{\alpha,m} |s|^{-m-\alpha}\, c_{\alpha,n} |t|^{-n-\alpha}$ for $\alpha \in (0,2)$.
We are going to embed this into a more general framework. In order to illustrate the new features of our approach we need to recall some results on distance covariance. Among several other interesting properties, [37] shows that distance covariance characterizes independence, in the sense that $V^2(X, Y; w) = 0$ if, and only if, $X$ and $Y$ are independent. Moreover, they show that in the case $\alpha = 1$ the distance covariance ${}^N V^2(X, Y; w)$ of the empirical distributions of two samples $(x_1, x_2, \dots, x_N)$ and $(y_1, y_2, \dots, y_N)$ takes a surprisingly simple form. It can be represented as
$${}^N V^2(X, Y; w) = \frac{1}{N^2} \sum_{k,l=1}^N A_{kl} B_{kl}, \tag{2}$$
where $A$ and $B$ are double centrings (cf. Lemma 4.2) of the Euclidean distance matrices of the samples, i.e. of $(|x_k - x_l|)_{k,l=1,\dots,N}$ and $(|y_k - y_l|)_{k,l=1,\dots,N}$. If $\alpha \neq 1$ then the Euclidean distance has to be replaced by its power with exponent $\alpha$. The connection between the weight function $w$ in (1) and the (centred) Euclidean distance matrices in (2) is given by the Lévy–Khintchine representation of negative definite functions, i.e.
$$|x|^\alpha = c_p \int_{\mathbb{R}^p \setminus \{0\}} \big(1 - \cos(x \cdot s)\big)\, |s|^{-p-\alpha}\, ds,$$
where $c_p$ is a suitable constant, cf. Section 2.1, Table 1. Finally, the representation (2) of ${}^N V^2(X, Y; w)$ and its asymptotic properties as $N \to \infty$ are used by Székely, Rizzo and Bakirov to develop a statistical test for independence in [37].
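The representation (2) makes the statistic easy to compute in practice. The following NumPy sketch is our own illustration of the double-centring recipe for $\alpha = 1$ (the authors' reference implementation is the R package multivariance); all function names are ours.

```python
import numpy as np

def pairwise_dist(z):
    """Euclidean distance matrix (|z_k - z_l|)_{k,l=1,...,N} for a sample of shape (N, d)."""
    diff = z[:, None, :] - z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def double_center(a):
    """Double centring: subtract row and column means, add back the grand mean."""
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def sample_dcov2(x, y):
    """Squared sample distance covariance (2) with alpha = 1."""
    A = double_center(pairwise_dist(x))
    B = double_center(pairwise_dist(y))
    return (A * B).mean()
```

For $\alpha \neq 1$ one would raise the entries of the distance matrices to the power $\alpha$ before centring, as described above.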
Yet another interesting representation of distance covariance is given in the follow-up paper [34]: Let $(X^{\mathrm{cop}}, Y^{\mathrm{cop}})$ be an independent copy of $(X, Y)$ and let $W$ and $W'$ be Brownian random fields on $\mathbb{R}^m$ and $\mathbb{R}^n$, independent from each other and from $X, Y, X^{\mathrm{cop}}, Y^{\mathrm{cop}}$. The paper [34] defines the Brownian covariance
$$W^2(X, Y) = E\big[ X_W\, X^{\mathrm{cop}}_W\, Y_{W'}\, Y^{\mathrm{cop}}_{W'} \big], \tag{3}$$
where $X_W := W(X) - E[W(X) \mid W]$ for any random variable $X$ and random field $W$ with matching dimensions. Surprisingly, as shown in [34], Brownian covariance coincides with distance covariance, i.e. $W^2(X, Y) = V^2(X, Y; w)$ when $\alpha = 1$ is chosen for the kernel $w$.
The paper [34] was accompanied by a series of discussion papers [25, 6, 22, 11, 15, 18, 26, 17, 35] where various extensions, applications and open questions were suggested. Let us highlight the three problems which we are going to address:
a) Can the weight function $w$ in (1) be replaced by other weight functions? (Cf. [15, 18])
b) Can the Euclidean distance (or its $\alpha$-power) in (2) be replaced by other distances? (Cf. [22, 18, 23])
c) Can the Brownian random fields $W, W'$ in (3) be replaced by other random fields? (Cf. [22, 26])
While insights and partial results on these questions can be found in all of the mentioned discussion papers, a definitive and unifying answer was missing for a long time. In the present paper we propose a generalization of distance covariance which resolves these closely related questions. In a follow-up paper [9] we extend our results to the detection of independence of $d$ random variables $(X_1, X_2, \dots, X_d)$, answering a question of [15, 1].
More precisely, we introduce in Definition 3.1 the generalized distance covariance
$$V^2(X, Y) = \iint \big| f_{(X,Y)}(s,t) - f_X(s)\, f_Y(t) \big|^2\, \mu(ds)\, \nu(dt),$$
where $\mu$ and $\nu$ are symmetric Lévy measures, as a natural extension of the distance covariance of Székely et al. [37]. The Lévy measures $\mu$ and $\nu$ are linked to negative definite functions $\Phi$ and $\Psi$ by the well-known Lévy–Khintchine representation, cf. Section 2, where examples and important properties of negative definite functions are discussed. In Section 3 we show that several different representations (related to [23]) of $V^2(X, Y)$ in terms of the functions $\Phi$ and $\Psi$ can be given. In Section 4 we turn to the finite-sample properties of generalized distance covariance and show that the representation (2) of ${}^N V^2(X, Y)$ remains valid, with the Euclidean distance matrices replaced by the matrices $(\Phi(x_k - x_l))_{k,l=1,\dots,N}$ and $(\Psi(y_k - y_l))_{k,l=1,\dots,N}$. We also show asymptotic properties of ${}^N V^2(X, Y)$ as $N$ tends to infinity, paralleling those of [34, 36] for Euclidean distance covariance. After some remarks on uniqueness and normalization, we show in Section 7 that the representation (3) also remains valid when the Brownian random fields $W$ and $W'$ are replaced by centred Gaussian random fields $G_\Phi$ and $G_\Psi$ with covariance kernel
$$E\big[ G_\Phi(x)\, G_\Phi(x') \big] = \Phi(x) + \Phi(x') - \Phi(x - x'),$$
and analogously for $G_\Psi$.
To use generalized distance covariance (and distance multivariance) in applications, all necessary functions and tests are provided in the R package multivariance [8]. Extensive examples and simulations can be found in [7]; therefore we concentrate in the current paper on the theoretical foundations.
Notation. Most of our notation is standard or self-explanatory. Throughout we use positive (and negative) in the non-strict sense, i.e. $x \ge 0$ (resp. $x \le 0$), and we write $a \vee b = \max\{a, b\}$ and $a \wedge b = \min\{a, b\}$ for the maximum and minimum. For a vector $x \in \mathbb{R}^d$ the Euclidean norm is denoted by $|x|$.

Fundamental results
In this section we collect some tools and concepts which will be needed in the sequel.

Negative definite functions
A function $\Theta : \mathbb{R}^d \to \mathbb{C}$ is called negative definite (in the sense of Schoenberg) if the matrix $\big(\Theta(x_i) + \overline{\Theta(x_j)} - \Theta(x_i - x_j)\big)_{i,j} \in \mathbb{C}^{m \times m}$ is positive semidefinite hermitian for every $m \in \mathbb{N}$ and $x_1, \dots, x_m \in \mathbb{R}^d$. It is not hard to see, cf. Berg & Forst [3] or Jacob [19], that this is equivalent to saying that $\Theta(0) \ge 0$, $\Theta(-x) = \overline{\Theta(x)}$ and the matrix $\big(-\Theta(x_i - x_j)\big)_{i,j}$ is positive semidefinite on the subspace $\big\{ \lambda \in \mathbb{C}^m : \sum_{i=1}^m \lambda_i = 0 \big\}$. Because of this equivalence, the function $-\Theta$ is also called conditionally positive definite (and some authors call $\Theta$ conditionally negative definite).
Negative definite functions appear naturally in several contexts, for instance in probability theory as characteristic exponents (i.e. logarithms of characteristic functions) of infinitely divisible laws or Lévy processes, cf. Sato [28] or [10], in harmonic analysis in connection with non-local operators, cf. Berg & Forst [3] or Jacob [19], and in geometry when it comes to characterizing certain metrics in Euclidean spaces, cf. Benyamini & Lindenstrauss [2].
If $\Theta$ is continuous, the assertions a)–d) are also equivalent to

e) $\Theta$ has the following integral representation
$$\Theta(x) = \Theta(0) + i\, l \cdot x + x \cdot Qx + \int_{\mathbb{R}^d \setminus \{0\}} \big(1 - e^{i\, r \cdot x} + i\, r \cdot x\, \mathbf{1}_{(0,1)}(|r|)\big)\, \rho(dr), \tag{4}$$
where $l \in \mathbb{R}^d$, $Q \in \mathbb{R}^{d \times d}$ is symmetric and positive semidefinite, and $\rho$ is a measure on $\mathbb{R}^d \setminus \{0\}$.

We will frequently use the abbreviation cndf instead of continuous negative definite function. The representation (4) is the Lévy–Khintchine formula and any measure $\rho$ satisfying
$$\int_{\mathbb{R}^d \setminus \{0\}} \big(1 \wedge |r|^2\big)\, \rho(dr) < \infty \tag{5}$$
is commonly called a Lévy measure. To keep notation simple, we will write $\int \dots \rho(dr)$ for $\int_{\mathbb{R}^d \setminus \{0\}} \dots \rho(dr)$. The triplet $(l, Q, \rho)$ uniquely determines $\Theta$; moreover, $\Theta$ is real (hence, positive) if, and only if, $l = 0$ and $\rho$ is symmetric, i.e. $\rho(B) = \rho(-B)$ for any Borel set $B \subset \mathbb{R}^d \setminus \{0\}$. In this case (4) becomes
$$\Theta(x) = \Theta(0) + x \cdot Qx + \int \big(1 - \cos(r \cdot x)\big)\, \rho(dr). \tag{6}$$
Using the representation (4) it is straightforward to see that $\sup_x |\Theta(x)| < \infty$ if $\rho$ is a finite measure, i.e. $\rho(\mathbb{R}^d \setminus \{0\}) < \infty$, and $Q = 0$. The converse is also true, see [29, pp. 1390–1391, Lem. 6.2].
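As a concrete one-dimensional instance of (6): for $\Theta(x) = |x|$ the Lévy measure is $\pi^{-1} |r|^{-2}\, dr$, so $|x| = \frac{1}{\pi} \int_{\mathbb{R}} (1 - \cos(xr))\, r^{-2}\, dr$. The short numerical check below is our own sketch (the $1/L$ tail correction approximates the truncated part of the integral):

```python
import numpy as np

# Numerical check of |x| = (1/pi) * int_R (1 - cos(x r)) / r^2 dr, at x = 1.
L = 200.0
r = np.linspace(1e-8, L, 2_000_001)            # positive half-axis only
integrand = (1.0 - np.cos(r)) / r ** 2         # x = 1; bounded near 0 (~ 1/2)
trapezoid = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
half = trapezoid + 1.0 / L                     # tail int_L^inf ~ 1/L (the cosine part averages out)
total = 2.0 * half                             # the integrand is even
print(total)                                   # close to pi * |x| = pi
```

The quadrature reproduces $\pi |x|$ up to the truncation error, illustrating how the weight $|r|^{-2}$ of (1) and the cndf $|x|$ are linked by (6).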
Table 1 contains some examples of continuous negative definite functions along with the corresponding Lévy measures and infinitely divisible laws.
A measure $\rho$ on a topological space $X$ is said to have full (topological) support, if $\rho(G) > 0$ for any open set $G \subset X$; for Lévy measures the relevant space is $X = \mathbb{R}^d \setminus \{0\}$.

Lemma 2.2. For $p \in (1, 2]$ the Minkowski distance $\ell_p(x) := \big(\sum_{i=1}^d |x_i|^p\big)^{1/p}$ is a continuous negative definite function on $\mathbb{R}^d$ whose Lévy measure has full support; for $p = 1$ the Lévy measure of $\ell_1$ is concentrated on the coordinate axes, hence it does not have full support.
It is interesting to note that the Minkowski distances $\ell_p$ for $p > 2$ and $d \ge 2$ are never negative definite functions. This is a consequence of Schoenberg's problem, cf. Zastavnyi [40, p. 56, Eq. (3)].
Proof of Lemma 2.2. Since each $x_i \mapsto |x_i|^p$ is a one-dimensional continuous negative definite function, we can use the formula (6) to see that
$$\ell_p^p(x) = \sum_{i=1}^d |x_i|^p = \sum_{i=1}^d c_p \int_{\mathbb{R} \setminus \{0\}} \big(1 - \cos(x_i r)\big)\, |r|^{-1-p}\, dr,$$
where $c_p$ is the constant of the one-dimensional $p$-stable Lévy measure, cf. Table 1.
This means that $\ell_p^p$ is itself a continuous negative definite function, but its Lévy measure is concentrated on the coordinate axes. Writing $\ell_p(x) = f_p\big(\ell_p^p(x)\big)$ with $f_p(t) := t^{1/p}$ shows that $\ell_p$ can be represented as a combination of the Bernstein function $f_p$ and the negative definite function $\ell_p^p$. In other words, $\ell_p$ is subordinate to $\ell_p^p$ in the sense of Bochner (cf. Sato [28, Chap. 30] or [30, Chap. 5, Chap. 13.1]) and it is possible to find the corresponding Lévy–Khintchine representation, cf. [28, Thm. 30.1]. In this representation $x_i \mapsto g_t(x_i)$ is the probability density of the random variable $t^p X$, where $X$ is a one-dimensional, symmetric $1/p$-stable random variable.
Although the $1/p$-stable density is known explicitly only for $1/p \in \{1, 2\}$, one can show (this follows, e.g. from [28, Thm. 15.10]) that it is strictly positive, i.e. the Lévy measure of $\ell_p$, $p \in (1, 2)$, has full support. For $p = 1$ the measure does not have full support, since it is concentrated on the coordinate axes. For $p = 2$, note that $\ell_2(x) = |x|$ corresponds to the Cauchy distribution with Lévy measure given in Table 1, which has full support.
Every real-valued cndf $\Theta$ with $\Theta(0) = 0$ satisfies
$$\sqrt{\Theta(x + y)} \le \sqrt{\Theta(x)} + \sqrt{\Theta(y)} \tag{7}$$
and, consequently,
$$\Theta(x + y) \le 2\big(\Theta(x) + \Theta(y)\big). \tag{8}$$
Using a standard argument, e.g. [10, p. 44], we can derive from (7), (8) that cndfs grow at most quadratically as $x \to \infty$,
$$\Theta(x) \le c_\Theta\big(1 + |x|^2\big). \tag{9}$$
We will assume that $\Theta(0) = 0$ is the only zero of the function $\Theta$; incidentally, this means that $x \mapsto e^{-\Theta(x)}$ is the characteristic function of a(n infinitely divisible) random variable the distribution of which is non-lattice. This and (7) show that $(x, y) \mapsto \sqrt{\Theta(x - y)}$ is a metric on $\mathbb{R}^d$ and that $(x, y) \mapsto \Theta(x - y)$ is a quasi-metric, i.e. a function which enjoys all properties of a metric, but the triangle inequality holds with a multiplicative constant $c > 1$. Metric measure spaces of this type have been investigated by Jacob et al. [20]. Historically, the notion of negative definiteness was introduced by I. J. Schoenberg [31] in a geometric context: he observed that for a real-valued cndf $\Theta$ the function $d_\Theta(x, y) := \sqrt{\Theta(x - y)}$ is a metric on $\mathbb{R}^d$ and that these are the only metrics such that $(\mathbb{R}^d, d_\Theta)$ can be isometrically embedded into a Hilbert space. In other words: $d_\Theta$ behaves like a standard Euclidean metric in a possibly infinite-dimensional space.

Measuring independence of random variables with metrics
Let $X, Y$ be random variables with values in $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, and write $L(X)$ and $L(Y)$ for the corresponding probability laws. For any metric $d(\cdot,\cdot)$ defined on the family of $(m+n)$-dimensional probability distributions we have
$$X, Y \text{ are independent} \iff d\big(L(X, Y),\, L(X) \otimes L(Y)\big) = 0. \tag{10}$$
This equivalence can obviously be extended to finitely many random variables $X_1, \dots, X_n$. Moreover, the random variables $X_i$, $i = 1, \dots, n$, are independent if, and only if, $(X_1, \dots, X_{k-1})$ and $X_k$ are independent for all $2 \le k \le n$. In other words: $X_1, \dots, X_n$ are independent if, and only if, equivalences of the type (10) hold for metrics on the corresponding families of finite-dimensional distributions, $k = 2, \dots, n$. Thus, as in (10), only the concept of independence of pairs of random variables is needed. In [9, Sec. 3.1] we use a variant of this idea to characterize multivariate independence. Thus (10) is a good starting point for the construction of (new) estimators for independence. For this it is crucial that the (empirical) distance be computationally feasible. For discrete distributions with finitely many values this yields the classical chi-squared test of independence (using the $\chi^2$-distance). For more general distributions other commonly used distances (e.g. relative entropy, Hellinger distance, total variation, Prokhorov distance, Wasserstein distance) might be employed (e.g. [4]), provided that they are computationally feasible. It turns out that the latter is, in particular, satisfied by the following distance.
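For distributions with finitely many values, the empirical version of (10) with the $\chi^2$-distance leads to the classical chi-squared statistic. A minimal sketch of this special case (our illustration; the function name is ours):

```python
import numpy as np

def chi2_independence_stat(table):
    """Chi-squared statistic comparing the empirical joint law (a contingency table)
    with the product of its marginals: sum over cells of (observed - expected)^2 / expected."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return ((table - expected) ** 2 / expected).sum()
```

The statistic is zero exactly when the contingency table factorizes into its marginals, which is the empirical analogue of the right-hand side of (10).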
Definition 2.3. Let $U, V$ be $d$-dimensional random variables and denote by $f_U$, $f_V$ their characteristic functions. For any symmetric measure $\rho$ on $\mathbb{R}^d \setminus \{0\}$ with full support we define the distance
$$d_\rho\big(L(U), L(V)\big) := \left( \int \big| f_U(r) - f_V(r) \big|^2\, \rho(dr) \right)^{1/2}. \tag{11}$$

The assumption that $\rho$ has full support, i.e. $\rho(G) > 0$ for every nonempty open set $G \subset \mathbb{R}^d \setminus \{0\}$, ensures that $d_\rho$ is a metric. The symmetry assumption on $\rho$ is not essential since the integrand appearing in (11) is even; therefore, we can always replace $\rho(dr)$ by its symmetrization $\frac{1}{2}\big(\rho(dr) + \rho(-dr)\big)$. Currently it is unknown how the fact that the Lévy measure $\rho$ has full support can be expressed in terms of the cndf $\Theta(u) = \int (1 - \cos(u \cdot r))\, \rho(dr)$ given by $\rho$, see (6).
Note that $d_\rho\big(L(U), L(V)\big) < \infty$ whenever $E|U| + E|V| < \infty$. Indeed, $d_\rho(L(U), L(V)) < \infty$ follows from the integrability properties (5) of the Lévy measure $\rho$ and the elementary estimates
$$\big| f_U(r) - f_V(r) \big| \le 2 \quad\text{and}\quad \big| f_U(r) - f_V(r) \big| \le \big|1 - f_U(r)\big| + \big|1 - f_V(r)\big| \le |r|\,\big(E|U| + E|V|\big).$$
We obtain further sufficient conditions for $d_\rho(L(U), L(V)) < \infty$ in terms of moments of the real-valued cndf $\Theta$, see (6), whose Lévy measure is $\rho$ and with $Q = 0$.

Proposition 2.4. Let $\rho$ be a symmetric Lévy measure on $\mathbb{R}^d \setminus \{0\}$ with full support and denote by $\Theta(u) = \int \big(1 - \cos(u \cdot r)\big)\, \rho(dr)$, $u \in \mathbb{R}^d$, the real-valued cndf with Lévy triplet $(l = 0, Q = 0, \rho)$. For all $d$-dimensional random variables $U, V$ the following assertions hold:
a) If $(U', V')$ is an i.i.d. copy of $(U, V)$, then
$$E\Theta(U - U') + E\Theta(V - V') \le 2\, E\Theta(U - V'). \tag{12}$$
b) Let $U'$ be an i.i.d. copy of $U$. Then
$$E\Theta(U - U') \le 4\, E\Theta(U) \quad\text{and, for independent } U, V', \quad E\Theta(U - V') \le 2\, E\Theta(U) + 2\, E\Theta(V'). \tag{13}$$
c) We have
$$d_\rho\big(L(U), L(V)\big)^2 \le 4\big(E\Theta(U) + E\Theta(V)\big). \tag{14}$$
d) Let $(U', V')$ be an i.i.d. copy of $(U, V)$ and assume $E\Theta(U) + E\Theta(V) < \infty$.
Then the following equality holds and all terms are finite:
$$d_\rho\big(L(U), L(V)\big)^2 = 2\, E\Theta(U - V') - E\Theta(U - U') - E\Theta(V - V'). \tag{15}$$

Proof. Let us assume first that $\rho$ is a finite Lévy measure, thus $\Theta$ a bounded cndf. We denote by $U'$ an i.i.d. copy of $U$. Since $L(U - U')$ is symmetric, we can use Tonelli's theorem to get
$$\int \big| f_U(r) \big|^2\, \rho(dr) = \int E\cos\big(r \cdot (U - U')\big)\, \rho(dr) = \rho\big(\mathbb{R}^d \setminus \{0\}\big) - E\Theta(U - U').$$
Now we consider an i.i.d. copy $(U', V')$ of $(U, V)$ and use the above equality in (11). This yields
$$d_\rho\big(L(U), L(V)\big)^2 = 2\, E\Theta(U - V') - E\Theta(U - U') - E\Theta(V - V').$$
This proves (15) and, since $d_\rho(L(U), L(V)) \ge 0$, also (12). Combining Part b) and (15) yields (14), while (13) immediately follows from the subadditivity of a cndf (8).
If $\rho$ is an arbitrary Lévy measure, its truncation $\rho_\epsilon(dr) := \mathbf{1}_{(\epsilon, \infty)}(|r|)\, \rho(dr)$ is a finite Lévy measure and the corresponding cndf $\Theta_\epsilon$ is bounded. In particular, we have a)–d) for $\rho_\epsilon$ and $\Theta_\epsilon$. Using monotone convergence we get $d_{\rho_\epsilon}(L(U), L(V)) \uparrow d_\rho(L(U), L(V))$ as $\epsilon \downarrow 0$. Again by monotone convergence we see that the assertions a)–c) remain valid for general Lévy measures, if we allow the expressions to attain values in $[0, \infty]$. Because of (13), the moment condition assumed in Part d) ensures that the limits are finite, and (15) carries over to the general situation.
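For empirical distributions and a finite Lévy measure, both sides of (15) can be evaluated exactly, so the identity can be checked numerically. In the sketch below (ours) we take the two-point measure $\rho = \delta_{+1} + \delta_{-1}$, for which $\Theta(u) = 2(1 - \cos u)$, and compare the $L^2(\rho)$-distance of the empirical characteristic functions with the moment representation:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=200)          # sample defining the empirical law of U
v = rng.normal(size=200) + 0.5    # sample defining the empirical law of V

theta = lambda z: 2.0 * (1.0 - np.cos(z))   # cndf of rho = delta_{+1} + delta_{-1}

def emp_cf(sample, r):
    """Empirical characteristic function at the point r."""
    return np.exp(1j * r * sample).mean()

# Left-hand side of (15): integral of |f_U - f_V|^2 against rho (a two-point sum here).
lhs = sum(abs(emp_cf(u, r) - emp_cf(v, r)) ** 2 for r in (1.0, -1.0))

# Right-hand side of (15): 2 E Theta(U - V') - E Theta(U - U') - E Theta(V - V'),
# with expectations under the empirical laws, i.e. double means over the samples.
rhs = (2 * theta(u[:, None] - v[None, :]).mean()
         - theta(u[:, None] - u[None, :]).mean()
         - theta(v[:, None] - v[None, :]).mean())

print(lhs, rhs)
```

Both expressions agree to machine precision, since (15) is an exact identity for the empirical laws.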
Remark 2.5. a) Since $U$ and $V$ play symmetric roles in (17), it is clear that the left-hand side of (15) is always defined; the right-hand side of (15) needs attention. If we do not assume the moment condition $E\Theta(U) + E\Theta(V) < \infty$, we still have
$$d_\rho\big(L(U), L(V)\big)^2 = \lim_{\epsilon \to 0} \Big( 2\, E\Theta_\epsilon(U - V') - E\Theta_\epsilon(U - U') - E\Theta_\epsilon(V - V') \Big),$$
but it is not clear whether the limits exist for each term.
b) The moment condition $E\Theta(U) + E\Theta(V) < \infty$ is sharp in the sense that it follows from $E\Theta(U - V') < \infty$: Since $U$ and $V'$ are independent, Tonelli's theorem entails that $E\Theta(u - V') < \infty$ for some $u \in \mathbb{R}^d$. Using the symmetry and sub-additivity of $\sqrt{\Theta}$, see (8), we get
$$E\Theta(V') = E\Theta\big((V' - u) + u\big) \le 2\, E\Theta(u - V') + 2\, \Theta(u) < \infty,$$
and $E\Theta(U) < \infty$ follows similarly.
c) Since a cndf $\Theta$ grows at most quadratically at infinity, see (9), it is clear that $E\Theta(U) + E\Theta(V) < \infty$ holds whenever $U$ and $V$ have finite second moments. One should compare this to the condition $E|U| + E|V| < \infty$, which guarantees $d_\rho(L(U), L(V)) < \infty$, but not necessarily the finiteness of the terms appearing in the representation (15).
d) As described at the beginning of this section, a measure of independence of $X_1, \dots, X_n$ is given by $d_\rho\big(L(X_1, \dots, X_n),\, \bigotimes_{i=1}^n L(X_i)\big)$. This can be estimated by empirical estimators for (15). For the 1-stable (i.e. Cauchy) cndf, see Table 1, this direct approach to (multivariate) independence has recently been proposed by [21], but the exact estimators become computationally challenging even for small samples. A further approximation recovers a computationally feasible estimation, resulting in a loss of power compared with our approach, cf. [7].
It is worth mentioning that the metric $d_\rho$ can be used to describe convergence in distribution.

Lemma 2.6. Let $\rho$ be a finite symmetric measure with full support; then $d_\rho$ given in (11) is a metric which characterizes convergence in distribution, i.e. for random variables $X_n$, $n \in \mathbb{N}$, and $X$ one has
$$X_n \xrightarrow{\;d\;} X \iff \lim_{n \to \infty} d_\rho\big(L(X_n), L(X)\big) = 0.$$

The proof below shows that the implication "$\Leftarrow$" does not need the finiteness of the Lévy measure $\rho$.
Proof. Convergence in distribution implies pointwise convergence of the characteristic functions. Therefore, we see by dominated convergence, using the obvious estimate $|f_{X_n}(r) - f_X(r)|^2 \le 4$ and the finiteness of $\rho$, that $d_\rho(L(X_n), L(X)) \to 0$. Conversely, assume that $\lim_{n \to \infty} d_\rho(L(X_n), L(X)) = 0$. If we interpret this as convergence in $L^2(\rho)$, we see that there is a Lebesgue a.e. convergent subsequence $f_{X_{n(k)}} \to f_X$; since $f_{X_{n(k)}}$ and $f_X$ are characteristic functions, this convergence is already pointwise, hence locally uniform, see Sasvári [27, Thm. 1.5.2]. By Lévy's continuity theorem, this entails the convergence in distribution of the corresponding random variables. Since the limit does not depend on the subsequence, the whole sequence must converge in distribution.
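A toy illustration of Lemma 2.6 (ours): for the Dirac measures $L(X_n) = \delta_{1/n}$ and $L(X) = \delta_0$, which clearly converge in distribution, the distance $d_\rho$ with the finite measure $\rho = \delta_{+1} + \delta_{-1}$ can be computed in closed form from the characteristic functions, and it decreases to $0$.

```python
import numpy as np

def d_rho_point_masses(a, b):
    """d_rho between the Dirac measures at a and b, for rho = delta_{+1} + delta_{-1};
    the characteristic function of delta_a is r -> exp(i r a)."""
    return np.sqrt(sum(abs(np.exp(1j * r * a) - np.exp(1j * r * b)) ** 2 for r in (1.0, -1.0)))

dists = [d_rho_point_masses(1.0 / n, 0.0) for n in (1, 10, 100, 1000)]
print(dists)  # decreasing towards 0
```

Here $d_\rho(\delta_{1/n}, \delta_0)^2 = 4(1 - \cos(1/n)) \approx n^{-2}$, matching the convergence in distribution.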

An elementary estimate for log-moments
Later on we need certain log-moments of the norm of a random vector. The following lemma allows us to formulate these moment conditions in terms of the coordinates of the vector.
Lemma 2.7. Let $X, Y$ be one-dimensional random variables and $\epsilon > 0$. Then $E \log^{1+\epsilon}\big(1 + X^2 + Y^2\big)$ is finite if, and only if, the moments $E \log^{1+\epsilon}\big(1 + X^2\big)$ and $E \log^{1+\epsilon}\big(1 + Y^2\big)$ are finite.

Proof. If both single moments are finite, we can use the elementary estimate
$$\log\big(1 + x^2 + y^2\big) \le \log\big((1 + x^2)(1 + y^2)\big) = \log\big(1 + x^2\big) + \log\big(1 + y^2\big)$$
together with $(a + b)^{1+\epsilon} \le 2^\epsilon\big(a^{1+\epsilon} + b^{1+\epsilon}\big)$. Conversely, assume that $E \log^{1+\epsilon}(1 + X^2 + Y^2) < \infty$. Since $\log(1 + x^2) \le \log(1 + x^2 + y^2)$, we get $E \log^{1+\epsilon}(1 + X^2) < \infty$, and $E \log^{1+\epsilon}(1 + Y^2) < \infty$ follows similarly.

Generalized distance covariance
Székely et al. [37, 34] introduced distance covariance for two random variables $X$ and $Y$ with values in $\mathbb{R}^m$ and $\mathbb{R}^n$ as
$$V^2(X, Y; w) = \iint \big| f_{(X,Y)}(s,t) - f_X(s)\, f_Y(t) \big|^2\, w(s,t)\, ds\, dt$$
with the weight $w(s,t) = w_{\alpha,m}(s)\, w_{\alpha,n}(t)$, where $w_{\alpha,m}(s) = c(m, \alpha)\, |s|^{-m-\alpha}$, $m, n \in \mathbb{N}$, $\alpha \in (0, 2)$. It is well known from the study of infinitely divisible distributions (see also Székely & Rizzo [33]) that $w_{\alpha,m}(s)$ is the density of an $m$-dimensional $\alpha$-stable Lévy measure, and the corresponding cndf is just $|s|^\alpha$. We are going to extend distance covariance to products of Lévy measures.

Definition 3.1. Let $X$ and $Y$ be random variables with values in $\mathbb{R}^m$ and $\mathbb{R}^n$ and $\rho := \mu \otimes \nu$ where $\mu$ and $\nu$ are symmetric Lévy measures on $\mathbb{R}^m \setminus \{0\}$ and $\mathbb{R}^n \setminus \{0\}$, both having full support. The generalized distance covariance $V(X, Y)$ is defined as
$$V^2(X, Y) := d_\rho\big(L(X, Y),\, L(X) \otimes L(Y)\big)^2 = \iint \big| f_{(X,Y)}(s,t) - f_X(s)\, f_Y(t) \big|^2\, \mu(ds)\, \nu(dt), \tag{22}$$
and, in view of the discussion in Section 2.2, this can be written in the form (15) with $U = (X_1, Y_1)$ and $V' = (X_2, Y_3)$, where the $(X_i, Y_i)$ are i.i.d. copies of $(X, Y)$. It is clear that the product measure $\rho$ inherits the properties "symmetry" and "full support" from its marginals $\mu$ and $\nu$.
From the discussion following Definition 2.3 we immediately get the next lemma.

Lemma 3.2. Let $V^2(X, Y)$ be the generalized distance covariance of the $m$- resp. $n$-dimensional random variables $X$ and $Y$, cf. (22). The random variables $X$ and $Y$ are independent if, and only if, $V^2(X, Y) = 0$.

Generalized distance covariance with finite Lévy measures
Fix the dimensions $m, n \in \mathbb{N}$, set $d := m + n$, and assume that the measure $\rho$ is of the form $\rho = \mu \otimes \nu$ where $\mu$ and $\nu$ are finite symmetric Lévy measures on $\mathbb{R}^m \setminus \{0\}$ and $\mathbb{R}^n \setminus \{0\}$, respectively. If we integrate the elementary estimate
$$1 \wedge \big(|s|^2 + |t|^2\big) \le \big(1 \wedge |s|^2\big) + \big(1 \wedge |t|^2\big)$$
with respect to $\rho(ds, dt) = \mu(ds)\, \nu(dt)$, it follows that $\rho$ is a Lévy measure if $\mu$ and $\nu$ are finite Lévy measures. We also assume that $\mu$ and $\nu$, hence $\rho$, have full support.
Since $V(X, Y)$ is given by the metric $d_\rho$ in the sense of Section 2.2, we can use all results from the previous section to derive various representations of generalized distance covariance.
We write $\Phi$, $\Psi$ and $\Theta$ for the bounded cndfs induced by $\mu(ds)$, $\nu(dt)$ and $\rho(dr) = \mu(ds)\, \nu(dt)$, i.e.
$$\Phi(x) := \int \big(1 - \cos(x \cdot s)\big)\, \mu(ds), \quad \Psi(y) := \int \big(1 - \cos(y \cdot t)\big)\, \nu(dt), \quad \Theta(u) := \int \big(1 - \cos(u \cdot r)\big)\, \rho(dr), \tag{24}$$
with $u = (x, y) \in \mathbb{R}^{m+n}$ and $r = (s, t) \in \mathbb{R}^{m+n}$. The symmetry in each variable and the elementary identity $1 - \cos(a + b) = (1 - \cos a) + (1 - \cos b) - (1 - \cos a)(1 - \cos b) + \sin a \sin b$ yield
$$\Theta(x, y) = \nu\big(\mathbb{R}^n \setminus \{0\}\big)\, \Phi(x) + \mu\big(\mathbb{R}^m \setminus \{0\}\big)\, \Psi(y) - \Phi(x)\, \Psi(y). \tag{25}$$
We can now easily apply the results from Section 2.2. In order to do so, we consider six i.i.d. copies $(X_i, Y_i)$, $i = 1, \dots, 6$, of the random vector $(X, Y)$, and set $U := (X_1, Y_1)$, $U' := (X_2, Y_2)$, $V := (X_3, Y_4)$ and $V' := (X_5, Y_6)$. This is a convenient way to say that $U, U' \sim L(X, Y)$, $V, V' \sim L(X) \otimes L(Y)$, and $U, U', V$ and $V'$ are independent. The following formulae follow directly from Proposition 2.4.d) and Remark 2.5.a).
Proposition 3.3. Let $X$ and $Y$ be random variables with values in $\mathbb{R}^m$ and $\mathbb{R}^n$ and assume that $\mu, \nu$ are finite symmetric Lévy measures on $\mathbb{R}^m \setminus \{0\}$ and $\mathbb{R}^n \setminus \{0\}$ with full support. Generalized distance covariance has the following representations:
$$V^2(X, Y) = 2\, E\Theta(U - V') - E\Theta(U - U') - E\Theta(V - V')$$
$$= E\big[\Phi(X_1 - X_2)\, \Psi(Y_1 - Y_2)\big] + E\big[\Phi(X_1 - X_2)\big]\, E\big[\Psi(Y_1 - Y_2)\big] - 2\, E\big[\Phi(X_1 - X_2)\, \Psi(Y_1 - Y_3)\big].$$
The latter equality follows from (25) since the terms depending only on one of the variables cancel, as the random variables $(X_i, Y_i)$ are i.i.d. This gives rise to various further representations of $V(X, Y)$.

Corollary 3.4. Let $(X, Y)$, $(X_i, Y_i)$, $\Phi$, $\Psi$ and $\mu, \nu$ be as in Proposition 3.3. Generalized distance covariance has the further representations (29)–(32).

Corollary 3.4 shows, in particular, that $V(X, Y)$ can be written as a function $g$ of suitable conditional expectations of $\Phi$ and $\Psi$; for instance, the formula (30) follows in this way.

Corollary 3.5. Let $(X, Y)$, $(X_i, Y_i)$, $\Phi$, $\Psi$ and $\mu, \nu$ be as in Proposition 3.3 and write
$$\overline{\Phi} := \Phi - E(\Phi \mid X_1) - E(\Phi \mid X_4) + E\Phi \quad\text{and}\quad \overline{\Psi} := \Psi - E(\Psi \mid Y_1) - E(\Psi \mid Y_4) + E\Psi$$
for the "doubly centred" versions of $\Phi = \Phi(X_1 - X_4)$ and $\Psi = \Psi(Y_1 - Y_4)$. Generalized distance covariance has the following representation:
$$V^2(X, Y) = E\big[\overline{\Phi}\; \overline{\Psi}\big]. \tag{33}$$

Proof. Denote by $\overline{\Phi}$ and $\overline{\Psi}$ the centred random variables appearing in (33). Clearly, $E(\overline{\Phi} \mid X_1) = E(\overline{\Phi} \mid X_4) = 0$. Thus, since $(X_4, Y_4)$ is independent of $(X_1, Y_1)$, we have $E(\overline{\Phi} \mid X_1, Y_1) = E(\overline{\Phi} \mid X_1) = 0$, i.e. $\overline{\Phi}$ is orthogonal to the $L^2$-space of $Y_1$-measurable functions; in a similar fashion, using the tower property and the independence of $(X_1, Y_1)$ and $Y_4$, we see that $\overline{\Phi}$ is orthogonal to the $Y_4$-measurable functions, so that $E\big[\overline{\Phi} \cdot E(\Psi \mid Y_1)\big] = 0$ and $E\big[\overline{\Phi} \cdot E(\Psi \mid Y_4)\big] = 0$. Hence $E[\overline{\Phi}\, \overline{\Psi}] = E[\overline{\Phi}\, \Psi]$; expanding $\overline{\Phi}$ in this expectation gives the representation of Proposition 3.3, and the assertion follows because of (33).
In Section 4 we will encounter further representations of the generalized distance covariance if $X$ and $Y$ have discrete distributions with finitely many values, as is the case for empirical distributions.
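For empirical distributions the moment representation of Proposition 3.3 can be evaluated directly from the matrices $(\Phi(x_k - x_l))$ and $(\Psi(y_k - y_l))$, and it agrees exactly with a double-centring formula of the type (2). The sketch below is our own illustration, using the cndfs $\Phi(x) = |x|$ (cf. Table 1) and $\Psi(y) = \log(1 + |y|^2)$ (a classical cndf); all helper names are ours.

```python
import numpy as np

def cndf_matrix(z, cndf):
    """Matrix (cndf(z_k - z_l))_{k,l} for a sample of shape (N, d)."""
    return cndf(z[:, None, :] - z[None, :, :])

def double_center(a):
    """Subtract row and column means, add back the grand mean."""
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def gen_dcov2_centered(a, b):
    """Double-centring form: mean of the entrywise product of the centred matrices."""
    return (double_center(a) * double_center(b)).mean()

def gen_dcov2_moments(a, b):
    """Moment form of Proposition 3.3 for empirical laws:
    E[Phi Psi] + E[Phi] E[Psi] - 2 E[Phi(X1-X2) Psi(Y1-Y3)]."""
    return (a * b).mean() + a.mean() * b.mean() - 2 * (a.mean(axis=1) * b.mean(axis=1)).mean()

rng = np.random.default_rng(2)
x = rng.normal(size=(40, 2))
y = x + 0.3 * rng.normal(size=(40, 2))                 # a dependent sample

phi = lambda u: np.sqrt((u ** 2).sum(axis=-1))         # Phi(x) = |x|
psi = lambda u: np.log1p((u ** 2).sum(axis=-1))        # Psi(y) = log(1 + |y|^2)

a = cndf_matrix(x, phi)
b = cndf_matrix(y, psi)
print(gen_dcov2_centered(a, b), gen_dcov2_moments(a, b))  # the two forms agree
```

The agreement of the two forms is an algebraic identity for symmetric matrices, which is the finite-sample analogue of (2).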

Generalized distance covariance with arbitrary Lévy measures
So far, we have been considering finite Lévy measures $\mu, \nu$ and bounded cndfs $\Phi$ and $\Psi$, cf. (24). We will now extend our considerations to products of unbounded Lévy measures. The measure $\rho := \mu \otimes \nu$ satisfies the integrability condition
$$\iint \big(1 \wedge |s|^2\big)\big(1 \wedge |t|^2\big)\, \mu(ds)\, \nu(dt) < \infty.$$
Other than in the case of finite marginals, $\rho$ is no longer a Lévy measure, see the footnote on page 364. Thus, the function $\Theta$ defined in (24) need not be a cndf and we cannot directly apply Proposition 2.4; instead we need the following result ensuring the finiteness of $V(X, Y)$.

Lemma 3.6. Let $X, X'$ be i.i.d. random variables on $\mathbb{R}^m$ and $Y, Y'$ be i.i.d. random variables on $\mathbb{R}^n$; let $\mu$ and $\nu$ be symmetric Lévy measures on $\mathbb{R}^m$ and $\mathbb{R}^n$ with full support and with corresponding cndfs $\Phi$ and $\Psi$ as in (24). Then
$$V^2(X, Y) \le E\Phi(X - X')\; E\Psi(Y - Y') \le 16\, E\Phi(X)\; E\Psi(Y). \tag{37}$$

Proof. Following Székely et al. [37, p. 2772] we get
$$\big| f_{(X,Y)}(s,t) - f_X(s) f_Y(t) \big|^2 \le \big(1 - |f_X(s)|^2\big)\big(1 - |f_Y(t)|^2\big)$$
and
$$\int \big(1 - |f_X(s)|^2\big)\, \mu(ds) = E\Phi(X - X').$$
Using (16) for $\rho = \mu$, $\Theta = \Phi$, $U = X$ and (13) for the bound $E\Phi(X - X') \le 4\, E\Phi(X)$, together with an analogous argument for $\nu$ and $Y$, yields the bound (37).
Looking at the various representations (29)–(33) of $V(X, Y)$, it is clear that these make sense as soon as all expectations in these expressions are finite, i.e. some moment condition in terms of $\Phi$ and $\Psi$ should be enough to ensure the finiteness of $V(X, Y)$ and of all terms appearing in the respective representations.
In order to use the results of the previous section we fix $\epsilon > 0$ and consider the finite symmetric Lévy measures
$$\mu_\epsilon(ds) := \mathbf{1}_{(\epsilon, \infty)}(|s|)\, \mu(ds) \quad\text{and}\quad \nu_\epsilon(dt) := \mathbf{1}_{(\epsilon, \infty)}(|t|)\, \nu(dt) \tag{40}$$
and the corresponding cndfs $\Phi_\epsilon$ and $\Psi_\epsilon$ given by (24); the product measure $\rho_\epsilon := \mu_\epsilon \otimes \nu_\epsilon$ is a finite Lévy measure and the corresponding cndf $\Theta_\epsilon$ is also bounded (it can be expressed by $\Phi_\epsilon$ and $\Psi_\epsilon$ through the formula (25)). This allows us to derive the representations (29)–(33) for each $\epsilon > 0$ and with $\Phi_\epsilon$ and $\Psi_\epsilon$. Since $\Phi = \sup_\epsilon \Phi_\epsilon$ and $\Psi = \sup_\epsilon \Psi_\epsilon$, we can use monotone convergence to get the representations for the cndfs $\Phi$ and $\Psi$ with Lévy measures $\mu$ and $\nu$, respectively. Of course, this requires the existence of certain (mixed) $\Phi$-$\Psi$ moments of the random variables $(X, Y)$.

Theorem 3.7. Let $\mu$ and $\nu$ be symmetric Lévy measures on $\mathbb{R}^m \setminus \{0\}$ and $\mathbb{R}^n \setminus \{0\}$ with full support and corresponding cndfs $\Phi$ and $\Psi$ given by (24). For any random vector $(X, Y)$ with values in $\mathbb{R}^{m+n}$ satisfying the moment condition
$$E\Phi(X) + E\Psi(Y) < \infty \tag{41}$$
the generalized distance covariance $V(X, Y)$ is finite; if, in addition,
$$E\big[\Phi(X)\, \Psi(Y)\big] < \infty, \tag{42}$$
then also the representations (29)–(33) hold with all terms finite.

Proof. We only have to check the finiteness. Using Lemma 3.6, we see that (41) guarantees that $V(X, Y) < \infty$. The finiteness of (all the terms appearing in) the representations (29)–(33) follows from the monotone convergence argument, since the moment condition (42) ensures the finiteness of the limiting expectations.

Remark. a) The condition (42) follows from $E\Phi^p(X) + E\Psi^q(Y) < \infty$ for some $p, q > 1$ with $\frac{1}{p} + \frac{1}{q} = 1$. If one of $\Psi$ or $\Phi$ is bounded then (41) implies (42), and if both are bounded then the expectations are trivially finite. Since continuous negative definite functions grow at most quadratically, we see that (41) and (42) also follow if $E\big[(1 + |X|^2)(1 + |Y|^2)\big] < \infty$.

b) A slightly different set-up was employed by Lyons [23]: If the cndfs $\Phi$ and $\Psi$ are subadditive, then the expectation in (32) is finite. This is a serious restriction on the class of cndfs since subadditivity means that $\Phi$ and $\Psi$ can grow at most linearly at infinity, whereas general cndfs grow at most quadratically, cf. (9). Note, however, that square roots of real cndfs are always subadditive, cf. (7).
For the last equality we use that a and b are symmetric matrices.
We will now show that ${}^N V^2$ is a consistent estimator for $V^2(X, Y)$.
Theorem 4.4 (Consistency). Let $(X_i, Y_i)_{i=1,\dots,N}$ be i.i.d. copies of $(X, Y)$ and assume $E[\Phi(X) + \Psi(Y)] < \infty$. Then $\lim_{N \to \infty} {}^N V^2(X, Y) = V^2(X, Y)$ almost surely.

Proof. The moment condition $E[\Phi(X) + \Psi(Y)] < \infty$ ensures that the generalized distance covariance $V^2(X, Y)$ is finite, cf. Lemma 3.6. Define $\mu_\epsilon$ and $\nu_\epsilon$ as in (40), and write $V^2_\epsilon(X, Y)$ for the corresponding generalized distance covariance and ${}^N V^2_\epsilon(X, Y)$ for its estimator. By the triangle inequality we obtain
$$\big| {}^N V^2 - V^2 \big| \le \big| V^2 - V^2_\epsilon \big| + \big| V^2_\epsilon - {}^N V^2_\epsilon \big| + \big| {}^N V^2_\epsilon - {}^N V^2 \big|.$$
We consider the three terms on the right-hand side separately. The first term vanishes as $\epsilon \to 0$, since $V^2_\epsilon \to V^2$ by monotone convergence. For each $\epsilon > 0$, the second term converges to zero as $N \to \infty$, since $\lim_{N \to \infty} {}^N V^2_\epsilon = V^2_\epsilon$ a.s. by the strong law of large numbers (SLLN) for V-statistics; note that this is applicable since the functions $\Phi_\epsilon$ and $\Psi_\epsilon$ are bounded (because of the finiteness of the Lévy measures $\mu_\epsilon$ and $\nu_\epsilon$).
For the third term we set $\mu^\epsilon := \mu - \mu_\epsilon$, $\nu^\epsilon := \nu - \nu_\epsilon$ and write $\Phi^\epsilon$, $\Psi^\epsilon$ for the corresponding continuous negative definite functions. Lemma 3.6 yields the corresponding bound for this term. From the representation (24) we know that $\Phi^\epsilon(x) \le \Phi(x)$, hence also $E\Phi^\epsilon(X) \le E\Phi(X)$, and this is finite by assumption. Therefore, we can use monotone convergence to conclude that $\lim_{\epsilon \to 0} E\Phi^\epsilon(X) = 0$. Thus, the classical SLLN applies and proves that the third term vanishes, uniformly in $N$, as $\epsilon \to 0$.

Following Csörgő [13] we obtain for $N \to \infty$ the limit (67), where $B$ is a Brownian bridge; as in [13, Eq. (3.2)] one can show that the limit is a Gaussian process indexed by $\mathbb{R}^{m+n}$. The limit (67) is continuous if, and only if, a rather complicated tail condition is satisfied [13, Thm. 3.1]; Csörgő [14, p. 294] shows that this condition is implied by the simpler moment condition (57), cf. Lemma 2.7.

Pick $\epsilon > 0$, set $\delta := 1/\epsilon$ and define $\mu_{\epsilon,\delta}(A) := \mu\big(A \cap \{\epsilon \le |s| < \delta\}\big)$ and $\mu^{\epsilon,\delta} := \mu - \mu_{\epsilon,\delta}$; the measures $\nu_{\epsilon,\delta}$ and $\nu^{\epsilon,\delta}$ are defined analogously. Note that $h \mapsto \|h\|^2_{\mu_{\epsilon,\delta} \otimes \nu_{\epsilon,\delta}}$ is continuous on $C_T$. Thus, the continuous mapping theorem implies the convergence of the correspondingly truncated statistics. By the triangle inequality, it remains to show that the first and last terms on the right-hand side vanish uniformly as $\epsilon \to 0$; this follows from the dominated convergence theorem. The other summands are dealt with similarly. The result follows since the convergence in (75) and (78) is uniform in $N$.
We still have to prove (63).

Lemma 4.6. In the setting of (the proof of) Theorem 4.5, (63) holds.

Proof. Using the double-centring formula and the independence of the random variables $(X_1, \dots, X_N)$ and $(Y_1, \dots, Y_N)$, a lengthy but otherwise straightforward calculation (an analogous formula holds for the $Y_i$), summing over $k, j = 1, \dots, N$ and distinguishing between the cases $k = j$ and $k \neq j$, finally gives the claim, and the lemma follows.

The integrand in (22) is symmetric; hence we can always symmetrize any non-symmetric measure $\rho$ without changing the value of the integral. Thus, the function $\Theta$ is well-defined and symmetric in each variable. The corresponding generalized distance covariance $V^2(X, Y)$ can be expressed by (28), if the expectations on the right-hand side are finite. Note, however, that the nice and computationally feasible representations of $V^2(X, Y)$ make essential use of the factorization of $\rho$, which means that they are no longer available in this setting. Let $X$ and $Y$ be random variables with values in $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, such that the relevant expectations are finite. A direct calculation of (28), using $\Theta(0, y) = \Theta(x, 0) = 0$ for $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$, gives an expression for $V^2(X, Y)$ in terms of $\Theta$ and the i.i.d. copies $(X_i, Y_i)$, $i = 1, \dots, 6$, of the random vector $(X, Y)$. Now suppose that $V^2(X, Y)$ is homogeneous and/or rotationally invariant, i.e.
for some $\alpha, \beta \in (0, 2)$, all scalars $a, b > 0$ and all orthogonal matrices. The homogeneity, (86), yields $\Theta(x, y) = |x|^\alpha |y|^\beta\, \Theta(x/|x|, y/|y|)$, and the rotational invariance, (87), shows that $\Theta(x/|x|, y/|y|)$ is a constant. In particular, homogeneity of degree $\alpha = \beta = 1$ and rotational invariance yield that $\Theta(x, y) = \mathrm{const} \cdot |x| \cdot |y|$. Since the Lévy–Khintchine formula furnishes a one-to-one correspondence between the cndf and its Lévy triplet, see (the comments following) Theorem 2.1, this determines $\rho$ uniquely: it factorizes into two Cauchy Lévy measures. This means that, even in a larger class of weights, the assumptions (86) and (87) imply a unique (up to a constant) choice of weights, and we have recovered Székely and Rizzo's uniqueness result from [36].
Theorem. Let $V^2(X, Y) = \big\| f_{(X,Y)} - f_X f_Y \big\|^2_{L^2(\rho)}$ be a generalized distance covariance as in Definition 2.3 and assume that the symmetric measure $\rho$ satisfies the integrability condition (81). If $V^2(X, Y)$ is homogeneous of order $\alpha \in (0, 2)$ and $\beta \in (0, 2)$ and rotationally invariant in each argument, then the measure $\rho$ defining $V^2(X, Y)$ factorizes into two rotationally symmetric stable Lévy measures,
$$\rho(ds, dt) = c(\alpha, m)\, c(\beta, n)\, |s|^{-m-\alpha}\, |t|^{-n-\beta}\, ds\, dt.$$
For completeness, let us mention that the constants $c(\alpha, m)$ and $c(\beta, n)$ are the normalizing constants of the $\alpha$- and $\beta$-stable Lévy measures, cf. Table 1.

Generalized distance correlation
We continue our discussion in the setting of Section 3.2. Let $\rho = \mu \otimes \nu$ and assume that $\mu$ and $\nu$ are symmetric Lévy measures on $\mathbb{R}^m \setminus \{0\}$ and $\mathbb{R}^n \setminus \{0\}$, each with full support. For $m$- and $n$-dimensional random variables $X$ and $Y$ the generalized distance covariance is, cf. Definition 3.1,
$$V^2(X, Y) = \iint \big| f_{(X,Y)}(s,t) - f_X(s)\, f_Y(t) \big|^2\, \mu(ds)\, \nu(dt).$$
We set $V^2(X) := V^2(X, X)$ and $V^2(Y) := V^2(Y, Y)$ and define generalized distance correlation as
$$R^2(X, Y) := \frac{V^2(X, Y)}{\sqrt{V^2(X)\, V^2(Y)}} \quad\text{if } V^2(X)\, V^2(Y) \neq 0,$$
and $R(X, Y) := 0$ otherwise. Using the Cauchy–Schwarz inequality it follows from (38) that, whenever $R(X, Y)$ is well defined, one has $0 \le R(X, Y) \le 1$. The sample distance correlation is given by the same expression with $V^2$ replaced by the estimator ${}^N V^2$.

Gaussian covariance

For a symmetric Lévy measure with corresponding continuous negative definite function $\Phi : \mathbb{R}^m \to \mathbb{R}$ let $G_\Phi$ be the mean-zero Gaussian field indexed by $\mathbb{R}^m$ with
$$E\big[ G_\Phi(x)\, G_\Phi(x') \big] = \Phi(x) + \Phi(x') - \Phi(x - x').$$
Analogously we define the random field $G_\Psi$. For a random variable $Z$ with values in $\mathbb{R}^d$ and for a Gaussian random field $G$ indexed by $\mathbb{R}^d$ we set $Z_G := G(Z) - E\big[G(Z) \mid G\big]$.

Definition 7.1. Let $G_\Phi$, $G_\Psi$ be mean-zero Gaussian random fields indexed by $\mathbb{R}^m$ and $\mathbb{R}^n$ and with covariance structure given by the cndfs $\Phi$ and $\Psi$, respectively. For any two $m$- and $n$-dimensional random variables $X$ and $Y$ the Gaussian covariance is defined as
$$\mathcal{G}^2(X, Y) := E\big[ (X_1)_{G_\Phi}\, (X_2)_{G_\Phi}\, (Y_1)_{G_\Psi}\, (Y_2)_{G_\Psi} \big],$$
where $(X_1, Y_1)$, $(X_2, Y_2)$ are i.i.d. copies of $(X, Y)$.
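The kernel $E[G_\Phi(x) G_\Phi(x')] = \Phi(x) + \Phi(x') - \Phi(x - x')$ is positive semidefinite precisely because $\Phi$ is negative definite, so the field $G_\Phi$ exists. A quick numerical sanity check of this fact is sketched below (ours, for $\Phi(x) = |x|$):

```python
import numpy as np

rng = np.random.default_rng(3)
pts = rng.normal(size=(30, 2))                      # evaluation points x_1, ..., x_30
phi = lambda u: np.sqrt((u ** 2).sum(axis=-1))      # Phi(x) = |x|, a real cndf

# Covariance matrix of (G_Phi(x_1), ..., G_Phi(x_30)): Phi(x) + Phi(x') - Phi(x - x').
K = phi(pts)[:, None] + phi(pts)[None, :] - phi(pts[:, None, :] - pts[None, :, :])

eigmin = np.linalg.eigvalsh(K).min()
print(eigmin)   # >= 0 up to rounding, so K is a valid covariance matrix

# Given K, the field can be sampled at the chosen points:
sample = rng.multivariate_normal(np.zeros(len(pts)), K)
```

Sampling the field at finitely many points in this way is also how a Monte Carlo comparison of Gaussian covariance and generalized distance covariance could be set up.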
We can now identify Gaussian covariance and generalized distance covariance.
Theorem 7.2 (Gaussian covariance is generalized distance covariance). Assume that $X$ and $Y$ satisfy $E\Phi(X) + E\Psi(Y) < \infty$. If $G_\Phi$ and $G_\Psi$ are independent, then the Gaussian covariance coincides with the generalized distance covariance, $\mathcal{G}^2(X, Y) = V^2(X, Y)$.

Proof. The proof is similar to Székely & Rizzo [34, Thm. 8]. By conditioning and the independence of $G_\Phi$ and $G_\Psi$, the expectation defining the Gaussian covariance can be evaluated first in the Gaussian fields. Using $E\big(G_\Phi(x) G_\Phi(x')\big) = \Phi(x) + \Phi(x') - \Phi(x - x') =: \varphi(x, x')$ yields (100), where the second equality is due to cancellations. An analogous calculation for $\psi(y, y') := \Psi(y) + \Psi(y') - \Psi(y - y')$ turns (100) into (35). For (100) we have to make sure that all appearing (conditional) expectations are finite. In this calculation we use first the independence of $G_\Phi$ and $G_\Psi$, the conditional Cauchy–Schwarz inequality and the fact that the random variables $(X_1, Y_1)$ and $(X_2, Y_2)$ are i.i.d. In the final estimate we use again the Cauchy–Schwarz inequality. In order to see that the right-hand side is finite, we note that (96) and (97) yield $E\big[G_\Phi(X)^2\big] = 2\, E\Phi(X) < \infty$. A similar estimate for $Y$ completes the proof.

Conclusion
We have shown that the concept of distance covariance introduced by Székely et al. [37] can be embedded into a more general framework based on Lévy measures, cf. Section 3. In this generalized setting the key results for statistical applications are the convergence of the estimators and the fact that the limit distribution of the (scaled) estimators is also known, cf. Section 4. Moreover, and for applications this is of major importance, the estimators have the numerically efficient representation (50).
The results allow the use of generalized distance covariance in the tests for independence developed for distance covariance, e.g. tests based on a general Gaussian quadratic form estimate or resampling tests. The test statistic is the scaled statistic discussed in Corollary 4.8. Using the quadratic form estimate (see [37] for details), its p-value can be estimated by $1 - F(T)$ where $F$ is the distribution function of the chi-squared distribution with 1 degree of freedom. This test and resampling tests are studied in detail in [9] and [7], respectively. In addition, these papers contain illustrative examples which show that the new flexibility provided by the choice of the Lévy measures (equivalently: by the choice of the continuous negative definite function) can be used to improve the power of these tests. Moreover, new test procedures using distance covariance and its generalizations are developed in [5].
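The resampling variant can be sketched as follows (our own illustration with the Euclidean cndf; [8] provides the production implementation): permuting one sample emulates the null hypothesis of independence, and the p-value is the fraction of permuted statistics that exceed the observed one.

```python
import numpy as np

def pairwise_dist(z):
    diff = z[:, None, :] - z[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def double_center(a):
    return a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()

def dcov2(x, y):
    """Squared sample distance covariance, alpha = 1."""
    return (double_center(pairwise_dist(x)) * double_center(pairwise_dist(y))).mean()

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """Monte Carlo permutation p-value for independence based on the sample statistic."""
    rng = np.random.default_rng(seed)
    observed = dcov2(x, y)
    exceed = sum(dcov2(x, y[rng.permutation(len(y))]) >= observed for _ in range(n_perm))
    return (1 + exceed) / (1 + n_perm)

rng = np.random.default_rng(4)
x = rng.normal(size=(60, 1))
y_dep = x + 0.1 * rng.normal(size=(60, 1))   # strongly dependent
print(permutation_pvalue(x, y_dep))          # small p-value
```

With dependent data the observed statistic is rarely exceeded by the permuted ones, so the p-value is close to its minimal value $1/(1 + n_{\mathrm{perm}})$.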
Finally, the presented results are also the foundation for a new approach to testing and measuring multivariate dependence, i.e. the mutual (in)dependence of an arbitrary number of random vectors. This approach is developed in [9], accompanied by extensive examples and further applications in [7]. All functions required for the use of generalized distance covariance in applications are implemented in the R package multivariance [8].

Table 1. Some real-valued continuous negative definite functions (cndfs) on $\mathbb{R}^d$ and the corresponding Lévy measures and infinitely divisible distributions (IDD)