Distance between exact and approximate distributions of partial maxima under power normalization

We obtain the distance between the exact and approximate distributions of partial maxima of a random sample under power normalization. It is observed that the Hellinger distance and the variational distance between the exact and approximate distributions of partial maxima under power normalization are the same as the corresponding distances under linear normalization.


Introduction
Let X_1, X_2, . . . , X_n be independent and identically distributed (iid) random variables with common distribution function (df) F, and let M_n = max(X_1, X_2, . . . , X_n), n ≥ 1. Then F is said to belong to the max domain of attraction of a nondegenerate df H under power normalization (denoted by F ∈ D_p(H)) if there exist constants α_n > 0 and β_n > 0, n ≥ 1, such that

lim_{n→∞} P( |M_n/α_n|^{1/β_n} sign(M_n) ≤ x ) = H(x)        (1)

for all x in the set of continuity points of H, where sign(x) = −1, 0, or 1 according as x < 0, x = 0, or x > 0. The limit df H in (1) is called a p-max stable law, and we refer to [5] for details.
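To illustrate (1) in the simplest case, take F to be the uniform df on (0, 1); the normalizing constants α_n = 1 and β_n = 1/n used below are one convenient choice, made here only for illustration. Then, for 0 < x < 1,

\[
P\Bigl(\Bigl|\tfrac{M_n}{\alpha_n}\Bigr|^{1/\beta_n}\,\mathrm{sign}(M_n)\le x\Bigr)
  = P\bigl(M_n^{\,n}\le x\bigr)
  = P\bigl(M_n\le x^{1/n}\bigr)
  = \bigl(x^{1/n}\bigr)^{n} = x ,
\]

so that F ∈ D_p(H_{2,1}), where H_{2,1}(x) = x on (0, 1) is the uniform law recalled in the next paragraph.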
The p-max stable laws. Two dfs F and G are said to be of the same p-type if F(x) = G(A|x|^B sign(x)), x ∈ R, for some positive constants A, B. The p-max stable laws are p-types of one of the following six laws with parameter α > 0:

H_{1,α}(x) = exp(−(log x)^{−α}) for x > 1, and 0 otherwise;
H_{2,α}(x) = exp(−(−log x)^{α}) for 0 < x < 1, 0 for x ≤ 0, and 1 for x ≥ 1;
H_{3,α}(x) = exp(−(−log(−x))^{−α}) for −1 < x < 0, 0 for x ≤ −1, and 1 for x ≥ 0;
H_{4,α}(x) = exp(−(log(−x))^{α}) for x < −1, and 1 for x ≥ −1;
H_5(x) = exp(−1/x) for x > 0, and 0 otherwise;
H_6(x) = exp(x) for x < 0, and 1 otherwise.

Note that H_{2,1}(·) is the uniform distribution over (0, 1). Necessary and sufficient conditions for a df F to belong to D_p(H) for each of the six p-types of p-max stable laws were given in [5] (see also [3]). As in [8], we define the generalized log-Pareto distribution (glogPd) as W(x) = 1 + log H(x) for x with 1/e ≤ H(x) ≤ 1, where H is a p-max stable law; the corresponding dfs W and their probability density functions (pdfs) are obtained from this definition for each of the six p-types, with the pdfs equal to 0 for the remaining values of x. See also [1] and [9] for more details on generalized log-Pareto distributions. Von Mises-type sufficient conditions for p-max stable laws were obtained in [6].
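As a concrete instance of the definition W(x) = 1 + log H(x), worked out here only as an illustration, take H = H_{2,1}, so that 1/e ≤ H_{2,1}(x) ≤ 1 exactly when e^{−1} ≤ x ≤ 1. Then

\[
W(x) = 1 + \log x , \qquad e^{-1}\le x\le 1 ,
\qquad\text{with pdf}\qquad
w(x) = \frac{1}{x} , \qquad e^{-1} < x < 1 ,
\]

and w(x) = 0 for the remaining values of x. Likewise, for the law exp(−1/x), x > 0, listed above, the same definition yields W(x) = 1 − 1/x, x ≥ 1, which is the standard Pareto df.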
Von Mises-type parameterization of generalized log-Pareto distributions. The von Mises-type parameterization for generalized log-Pareto distributions is given by dfs V_1 and V_2 defined for x > 0 with 1 + γ log x > 0 whenever γ ≥ 0, where the case γ = 0 is interpreted as the limit as γ → 0. Let v_1 and v_2 denote the densities of V_1 and V_2, respectively. The dfs of generalized log-Pareto distributions can be regained from V_1 and V_2 by identities valid for 0 < x and γ < 0.

Graphical representation of generalized log-Pareto pdfs. In Fig. 1, observe that the pdfs v_1 approach the standard Pareto pdf as γ ↓ 0, and the pdfs v_2 approach the standard uniform pdf as γ ↑ 0 (see the sketch below for the first of these limits).

The Hellinger distance, also called the Bhattacharyya distance, is used to quantify the similarity between two probability distributions; it is defined in terms of the Hellinger integral introduced in [4]. In view of statistical applications, the distance between the exact and the limiting distributions is measured using the Hellinger distance. Inference procedures based on the Hellinger distance provide alternatives to likelihood-based methods; minimum Hellinger distance estimation with inlier modification was studied in [7]. In [10], the weak convergence of distributions of extreme order statistics (defined later in Section 2) was examined.
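The following limit calculation sketches why v_1 tends to the standard Pareto pdf as γ ↓ 0. It assumes, purely for illustration, one natural von Mises-type form, V_{1,γ}(x) = 1 − (1 + γ log x)^{−1/γ} for x > 1 and γ > 0, which is consistent with the constraint 1 + γ log x > 0 stated above; other parameterizations are possible. Under this assumption,

\[
v_{1,\gamma}(x)
  = \frac{d}{dx}\Bigl[1-(1+\gamma\log x)^{-1/\gamma}\Bigr]
  = \frac{1}{x}\,(1+\gamma\log x)^{-1/\gamma-1}
  \;\longrightarrow\; \frac{1}{x}\,e^{-\log x} = \frac{1}{x^{2}} ,
  \qquad \gamma\downarrow 0 ,
\]

for each fixed x > 1, and 1/x^2 is the pdf of the standard Pareto df 1 − 1/x, x ≥ 1.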
In the next section, we study the Hellinger and variational distances between the exact and asymptotic distributions of power-normalized partial maxima of a random sample. The results obtained here are similar to those in [10].

Hellinger and variational distances for sample maxima
We recall a few definitions for convenience.
Weak domain of attraction. If a df F satisfies (1), then F is said to belong to the weak domain of attraction of H under power normalization. The variational distance between two dfs F and G is sup_B |P_F(B) − P_G(B)|, where the sup is taken over all Borel sets B on R and P_F, P_G denote the probability measures induced by F and G.
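When F and G are absolutely continuous with densities f and g, this supremum over Borel sets reduces to an integral of the densities; the following standard identity is recorded for convenience:

\[
\sup_{B}\Bigl|\int_{B} f(x)\,dx-\int_{B} g(x)\,dx\Bigr|
  = \int_{\{f>g\}}\bigl(f(x)-g(x)\bigr)\,dx
  = \frac{1}{2}\int_{\mathbb{R}}\bigl|f(x)-g(x)\bigr|\,dx ,
\]

the supremum being attained at the Borel set B = {x : f(x) > g(x)}.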
Limit law for the kth largest order statistic [5]. Let X_{1:n} ≤ · · · ≤ X_{n:n} denote the order statistics from a random sample X_1, . . . , X_n, and for i = 1, . . . , 6, let

lim_{n→∞} P( |X_{n:n}/α_n|^{1/β_n} sign(X_{n:n}) ≤ x ) = H_i(x).

Then it is well known that, for integer k ≥ 1,

lim_{n→∞} P( |X_{n−k+1:n}/α_n|^{1/β_n} sign(X_{n−k+1:n}) ≤ x ) = H_i(x) Σ_{j=0}^{k−1} (−log H_i(x))^j / j!

(a worked instance for k = 2 appears below).

Hellinger distance [10]. Given dfs F and G with Lebesgue densities f and g, the Hellinger distance between F and G, denoted H*(F, G), is defined as

H*(F, G) = ( ∫ ( √f(x) − √g(x) )^2 dx )^{1/2}.

The results in this section will be proved for the p-max stable law H_{2,1}(·), and the other cases can be deduced by using a suitable transformation T(x) = T_{i,α}(x) relating H_{2,1}(·) to the other p-max stable laws. We assume that the underlying pdf f is of the form f(x) = w(x)e^{g(x)}, where g(x) → 0 as x → r(H) = sup{x : H(x) < 1}, the right extremity of H. Equivalently, we may use the representation f(x) = w(x)(1 + g*(x)) by writing f(x) = w(x)e^{g(x)} = w(x)(1 + (e^{g(x)} − 1)), with g(x) → 0 as x → r(F).
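As a worked instance of the limit law for the kth largest order statistic, take k = 2, let F be the uniform df on (0, 1), and use the illustrative choice α_n = 1, β_n = 1/n from the Introduction. Since P(X_{n−1:n} ≤ t) = F^n(t) + nF^{n−1}(t)(1 − F(t)), we get, for 0 < x < 1,

\[
P\bigl(|X_{n-1:n}|^{\,n}\le x\bigr)
  = x + n\,x^{(n-1)/n}\bigl(1-x^{1/n}\bigr)
  \;\longrightarrow\; x\bigl(1-\log x\bigr) , \qquad n\to\infty ,
\]

because x^{(n−1)/n} → x and n(1 − x^{1/n}) → −log x; this agrees with H_{2,1}(x) Σ_{j=0}^{1} (−log H_{2,1}(x))^j/j! = x(1 − log x).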

The following result is on the Hellinger distance; its proof is similar to that of Theorem 5.2.5 of [10] and hence is omitted.

Theorem 1. Let H be a p-max stable law as in (1), and let F be an absolutely continuous df with pdf f such that f(x) > 0 for x_0 < x < r(F) and f(x) = 0 otherwise. Assume that r(F) = r(H). Then

where c > 0 is a universal constant.

Theorem 2. Suppose that H is a p-max stable law as in (1),

where L and δ are positive constants. If F_n(x) = F(A_n|x|^{B_n} sign(x)) with

where D is a constant depending only on x_0, L, and δ.
Theorem 4 below gives the variational distance between the exact and approximate distributions of power-normalized partial maxima. To prove it, we use the following result, whose proof is similar to that of Theorem 5.5.4 of [10] and hence is omitted.