Modern Stochastics: Theory and Applications
The generalization of several classical estimators for a positive extreme value index
Marijus Vaičiulis
https://doi.org/10.15559/26-VMSTA296
Pub. online: 19 March 2026      Type: Research Article      Open Access

Received: 6 September 2025
Revised: 9 February 2026
Accepted: 26 February 2026
Published: 19 March 2026

Abstract

In this paper, we introduce a family of semi-parametric estimators for the positive extreme value index γ, parameterized by two tuning parameters. The asymptotic normality of the introduced estimators is proved. It is shown that a particular subfamily of the newly introduced estimators (with one tuning parameter) has quite good asymptotic properties and dominates several previously introduced estimators. A small-scale Monte-Carlo simulation study is included. The performance of this parameterized subfamily of estimators is also illustrated on exchange rate data sets for two currency pairs.

1 Introduction and main results

Let ${X_{1}},\dots ,{X_{n}}$ be independent and identically distributed (i.i.d.) random variables with a common distribution function F. Suppose that the right tail $1-F$ is a regularly varying function with index $-1/\gamma $ (written $1-F\in {\mathrm{RV}_{-1/\gamma }}$), that is, for $x\gt 0$,
(1)
\[ \underset{t\to \infty }{\lim }\frac{1-F(tx)}{1-F(t)}={x^{-1/\gamma }},\]
where $\gamma \gt 0$ is the positive extreme value index (EVI). Put
\[ U(t)=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& 0\lt t\le 1\\ {} \inf \left\{x:\hspace{3.33333pt}F(x)\ge 1-(1/t)\right\},& t\gt 1.\end{array}\right.\]
Condition (1) is equivalent to
(2)
\[ \underset{t\to \infty }{\lim }\left(\ln \left(U(tx)\right)-\ln \left(U(t)\right)\right)=\gamma \ln (x),\hspace{1em}x\gt 0,\]
see, e.g. pg. 73 in [5]. For a large class of quantile type functions U satisfying (2) there exists a function $A(t)\to 0$ of constant sign for large values of t, such that
(3)
\[ \underset{t\to \infty }{\lim }\frac{\ln \left(U(tx)\right)-\ln \left(U(t)\right)-\gamma \ln (x)}{A(t)}={h_{\rho }}(x)\]
for every $x\gt 0$, where $\rho \le 0$ is a second order parameter and
\[ {h_{\rho }}(x)=\left\{\begin{array}{l@{\hskip10.0pt}l}\ln (x),& \rho =0,\\ {} ({x^{\rho }}-1)/\rho ,& \rho \lt 0.\end{array}\right.\]
Many of the existing estimators of a positive extreme value index (see, e.g., [18, 9, 8]) are constructed from the parameterized statistics
\[ {M_{n}}(k,r)=\left\{\begin{array}{l@{\hskip10.0pt}l}1,& r=0,\\ {} \frac{1}{k}{\textstyle\textstyle\sum _{i=1}^{k}}{\left(\ln \left({X_{n-i+1,n}}\right)-\ln \left({X_{n-k,n}}\right)\right)^{r}},& r\gt 0,\end{array}\right.\]
where ${X_{1,n}}\le {X_{2,n}}\le \cdots \le {X_{n,n}}$ denote the ascending order statistics of ${X_{1}},\dots ,{X_{n}}$. The statistic ${M_{n}}(k,1)$ is the classical Hill estimator ([18]), while ${M_{n}}(k,2)$ was introduced in [8]. The asymptotic properties (including weak consistency and asymptotic normality) of ${M_{n}}(k,r)$ for real positive r were considered in [13]. To learn more about estimators for a positive extreme value index, we refer to the review [10], where more than one hundred estimators are collected.
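As an illustration, the statistics ${M_{n}}(k,r)$ are simple to compute from a sample. The following Python sketch (function and variable names are ours) evaluates ${M_{n}}(k,r)$ and checks the Hill estimator ${M_{n}}(k,1)$ on simulated strict Pareto data, for which (8) gives the limit $\gamma \Gamma (2)=\gamma $.

```python
import math
import random

def m_n(x, k, r):
    """Statistic M_n(k, r): the r-th empirical moment of the log-excesses
    over the (k+1)-th largest order statistic; M_n(k, 0) := 1."""
    if r == 0:
        return 1.0
    xs = sorted(x)                       # ascending order statistics
    n = len(xs)
    log_xk = math.log(xs[n - k - 1])     # ln(X_{n-k,n})
    return sum((math.log(xs[n - i]) - log_xk) ** r for i in range(1, k + 1)) / k

# Strict Pareto data with gamma = 0.5: X = (1 - U)^(-gamma), U ~ Uniform(0, 1),
# so 1 - F(x) = x^(-1/gamma) exactly and M_n(k, 1) should be close to gamma.
random.seed(1)
gamma = 0.5
sample = [(1.0 - random.random()) ** (-gamma) for _ in range(20000)]
hill = m_n(sample, 2000, 1)              # the Hill estimator M_n(k, 1)
```

For strict Pareto data the Hill estimator is exactly unbiased, so with $k=2000$ the value of `hill` should deviate from 0.5 only by roughly $\gamma /\sqrt{k}\approx 0.011$.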
Several papers are devoted to the investigation of the asymptotic properties of estimators (with two tuning parameters), defined by
(4)
\[ {\hat{\gamma }_{n}}(k,{r_{1}},{r_{2}})=\frac{\Gamma ({r_{1}})}{{M_{n}}(k,{r_{1}}-1)}{\left(\frac{{M_{n}}(k,{r_{1}}{r_{2}})}{\Gamma ({r_{1}}{r_{2}}+1)}\right)^{1/{r_{2}}}},\hspace{1em}{r_{1}}\ge 1,{r_{2}}\gt 0,\]
where $\Gamma (1+r)$, $r\ge 0$, denotes the gamma function defined by the integral $\Gamma (1+r)={\textstyle\int _{0}^{\infty }}{t^{r}}\exp \{-t\}\mathrm{d}t$. The estimators (4) were presented in [3]. It is important to note that the family of estimators (4) generalizes several classical estimators. The estimator ${\hat{\gamma }_{n}}(k,1,1)$ coincides with the Hill estimator, while the estimator ${\hat{\gamma }_{n}}(k,1,2)=\sqrt{{M_{n}}(k,2)/2}$ was proposed in [9] as an alternative to the Hill estimator. Also, the estimator ${\hat{\gamma }_{n}}(k,2,1)={M_{n}}(k,2)/\left(2{M_{n}}(k,1)\right)$ is the moment ratio estimator, introduced by de Vries; see [6].
For completeness, we recall the result regarding the asymptotic normality of estimators (4).
Theorem 1 ([3]).
Suppose that ${X_{1}},\dots ,{X_{n}}$ are i.i.d. random variables whose quantile function U satisfies condition (3) with $\gamma \gt 0$ and $\rho \le 0$. Let the sequence of integers $k={k_{n}}$ be such that
(5)
\[ {k_{n}}\to \infty ,\hspace{3.57777pt}{k_{n}}/n\to 0,\hspace{1em}\hspace{3.57777pt}n\to \infty \]
and further assume that
(6)
\[ \underset{n\to \infty }{\lim }\sqrt{{k_{n}}}A\left(\frac{n}{{k_{n}}}\right)=\mu \]
with μ finite. Then
(7)
\[ \sqrt{{k_{n}}}\left({\hat{\gamma }_{n}}({k_{n}},{r_{1}},{r_{2}})-\gamma \right)\hspace{3.57777pt}\stackrel{\mathrm{d}}{\to }\hspace{3.57777pt}\mathcal{N}\left(\mu \nu (\rho ,{r_{1}},{r_{2}}),{\gamma ^{2}}{\sigma ^{2}}({r_{1}},{r_{2}})\right),\hspace{1em}n\to \infty ,\]
where $\stackrel{\mathrm{d}}{\to }$ denotes the convergence in distribution, $\mathcal{N}$ is a normal distribution, and
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \nu (\rho ,{r_{1}},{r_{2}})& \displaystyle =& \displaystyle \left\{\begin{array}{l@{\hskip10.0pt}l}1,& \rho =0,\\ {} (1/(\rho {r_{2}}))\left\{\frac{1}{{(1-\rho )^{{r_{1}}{r_{2}}}}}-\frac{{r_{2}}}{{(1-\rho )^{{r_{1}}-1}}}+({r_{2}}-1)\right\},& \rho \lt 0,\end{array}\right.\\ {} \displaystyle {\sigma ^{2}}({r_{1}},{r_{2}})& \displaystyle =& \displaystyle \frac{1}{{r_{2}^{2}}}\bigg\{\frac{2\Gamma (2{r_{1}}{r_{2}})}{{r_{1}}{r_{2}}{\Gamma ^{2}}({r_{1}}{r_{2}})}+\frac{{r_{2}^{2}}\Gamma (2{r_{1}}-1)}{{\Gamma ^{2}}({r_{1}})}\\ {} & & \displaystyle \hspace{1em}\hspace{1em}-\frac{2\Gamma ({r_{1}}(1+{r_{2}}))}{{r_{1}}\Gamma ({r_{1}})\Gamma ({r_{1}}{r_{2}})}-{({r_{2}}-1)^{2}}\bigg\}.\end{array}\]
The subfamily ${\hat{\gamma }_{n}}(k,1,r)$, $r\gt 0$ of the family of estimators (4) was investigated in [12]. The subfamilies ${\hat{\gamma }_{n}}(k,r,2)$, $r\ge 1$ and ${\hat{\gamma }_{n}}(k,r,1)$, $r\ge 1$ of (4) were considered in [4] and [22], respectively.
Recall from Prop. 1 in [4] that if (2) and (5) hold, then for $r\gt 0$,
(8)
\[ {M_{n}}(k,r)\hspace{3.33333pt}\stackrel{\mathrm{p}}{\to }\hspace{3.33333pt}{\gamma ^{r}}\Gamma (r+1),\hspace{1em}n\to \infty .\]
Motivated by the construction of the moment ratio estimator, we consider the ratio ${M_{n}}(k,{r_{2}})/{M_{n}}(k,{r_{1}})$, $0\le {r_{1}}\lt {r_{2}}$. Combining (8) with Slutsky’s theorem (see, e.g., [24]), we find that ${M_{n}}(k,{r_{2}})/{M_{n}}(k,{r_{1}})$ converges in probability to ${\gamma ^{{r_{2}}-{r_{1}}}}\Gamma ({r_{2}}+1)/\Gamma ({r_{1}}+1)$ as $n\to \infty $. This leads to the new family of semi-parametric estimators of $\gamma \gt 0$:
(9)
\[ {\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}})={\left(\frac{\Gamma ({r_{1}}+1){M_{n}}(k,{r_{2}})}{\Gamma ({r_{2}}+1){M_{n}}(k,{r_{1}})}\right)^{1/({r_{2}}-{r_{1}})}},\hspace{1em}0\le {r_{1}}\lt {r_{2}}.\]
It should be noted that the family of estimators (9) generalizes the Hill estimator (${r_{1}}=0$, ${r_{2}}=1$), the alternative Hill estimator (${r_{1}}=0$, ${r_{2}}=2$) and the moment ratio estimator (${r_{1}}=1$, ${r_{2}}=2$).
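A direct implementation of (9) is a one-liner on top of the ${M_{n}}$ statistic. In the Python sketch below (names are ours) we verify numerically that the choices $({r_{1}},{r_{2}})=(0,1)$ and $(1,2)$ reproduce the Hill and moment ratio estimators exactly.

```python
import math
import random

def m_n(x, k, r):
    """M_n(k, r) as defined in Section 1; M_n(k, 0) := 1."""
    if r == 0:
        return 1.0
    xs = sorted(x)
    n = len(xs)
    log_xk = math.log(xs[n - k - 1])
    return sum((math.log(xs[n - i]) - log_xk) ** r for i in range(1, k + 1)) / k

def gamma_hat_1(x, k, r1, r2):
    """The estimator family (9), 0 <= r1 < r2."""
    num = math.gamma(r1 + 1) * m_n(x, k, r2)
    den = math.gamma(r2 + 1) * m_n(x, k, r1)
    return (num / den) ** (1.0 / (r2 - r1))

random.seed(2)
sample = [(1.0 - random.random()) ** (-0.5) for _ in range(5000)]
k = 500
hill = m_n(sample, k, 1)                                   # (r1, r2) = (0, 1)
mom_ratio = m_n(sample, k, 2) / (2.0 * m_n(sample, k, 1))  # (r1, r2) = (1, 2)
```

Since $\Gamma (1)=\Gamma (2)=1$ and $\Gamma (3)=2$, both special cases agree with `gamma_hat_1` up to floating-point rounding.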
Our main result states that the estimators in (9) are asymptotically normal for $\gamma \gt 0$.
Theorem 2.
Under the conditions of Theorem 1,
(10)
\[ \sqrt{{k_{n}}}\left({\hat{\gamma }_{n}^{(1)}}({k_{n}},{r_{1}},{r_{2}})-\gamma \right)\hspace{3.57777pt}\stackrel{\mathrm{d}}{\to }\hspace{3.57777pt}\mathcal{N}\left(\mu {\nu _{1}}(\rho ,{r_{1}},{r_{2}}),{\gamma ^{2}}{\sigma _{1}^{2}}({r_{1}},{r_{2}})\right),\hspace{1em}n\to \infty ,\]
where
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\nu _{1}}(\rho ,{r_{1}},{r_{2}})& \displaystyle =& \displaystyle \left\{\begin{array}{l@{\hskip10.0pt}l}1,& \rho =0,\\ {} \frac{{(1-\rho )^{-{r_{1}}}}-{(1-\rho )^{-{r_{2}}}}}{(-\rho )({r_{2}}-{r_{1}})},& \rho \lt 0,\end{array}\right.\\ {} \displaystyle {\sigma _{1}^{2}}({r_{1}},{r_{2}})& \displaystyle =& \displaystyle \frac{1}{{({r_{2}}-{r_{1}})^{2}}}\left\{\frac{\Gamma (1+2{r_{1}})}{{\Gamma ^{2}}(1+{r_{1}})}-\frac{2\Gamma (1+{r_{1}}+{r_{2}})}{\Gamma (1+{r_{1}})\Gamma (1+{r_{2}})}+\frac{\Gamma (1+2{r_{2}})}{{\Gamma ^{2}}(1+{r_{2}})}\right\}.\end{array}\]
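The bias and variance factors of Theorem 2 are straightforward to code. In the sketch below (names are ours), the Hill case $({r_{1}},{r_{2}})=(0,1)$ serves as a sanity check: the factors reduce to the known values $1/(1-\rho )$ and 1.

```python
import math

def nu_1(rho, r1, r2):
    """Asymptotic bias factor nu_1(rho, r1, r2) of Theorem 2."""
    if rho == 0:
        return 1.0
    return ((1 - rho) ** (-r1) - (1 - rho) ** (-r2)) / ((-rho) * (r2 - r1))

def sigma_1_sq(r1, r2):
    """Asymptotic variance factor sigma_1^2(r1, r2) of Theorem 2."""
    g = math.gamma
    return (g(1 + 2 * r1) / g(1 + r1) ** 2
            - 2 * g(1 + r1 + r2) / (g(1 + r1) * g(1 + r2))
            + g(1 + 2 * r2) / g(1 + r2) ** 2) / (r2 - r1) ** 2
```

For $({r_{1}},{r_{2}})=(0,1)$ one gets `sigma_1_sq(0, 1) == 1.0`, and `nu_1(rho, 0, 1)` matches $1/(1-\rho )$ up to floating-point rounding.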
Clearly,
(11)
\[ {\nu _{1}}(\rho ,{r_{1}},{r_{2}})\ne 0\]
for any $\rho \lt 0$ and $0\le {r_{1}}\lt {r_{2}}$. Recall that $A(t)\to 0$ as $t\to \infty $. Thus, under assumption (5), the asymptotic bias ${\nu _{1}}(\rho ,{r_{1}},{r_{2}})A(n/k)$ tends to zero as $n\to \infty $; nevertheless, estimators satisfying (11) are referred to as asymptotically biased; see, e.g. [19].
Next, we consider three families of asymptotically biased estimators for $\gamma \gt 0$. Namely, by taking ${r_{2}}=2{r_{1}}$ in (9) we obtain the estimators
\[ {\hat{\gamma }_{n}^{(1)}}(k,r)={\left(\frac{\Gamma (r+1){M_{n}}(k,2r)}{\Gamma (2r+1){M_{n}}(k,r)}\right)^{1/r}},\hspace{1em}r\gt 0.\]
Using a slight modification of the comparison scheme for biased estimators (proposed in [6]), we will compare the estimator ${\hat{\gamma }_{n}^{(1)}}(k,r)$ with the following estimators:
\[ {\hat{\gamma }_{n}^{(2)}}(k,r)={\hat{\gamma }_{n}}(k,1,r),\hspace{3.33333pt}r\gt 0,\hspace{1em}{\hat{\gamma }_{n}^{(3)}}(k,r)={\hat{\gamma }_{n}}(k,r,1),\hspace{3.33333pt}r\ge 1.\]
In [4] it is noted that ${\hat{\gamma }_{n}}(k,r,2)$, $r\ge 1$ are asymptotically unbiased estimators, and thus we exclude them from our comparison. We refer to [21] for a discussion of the difficulties related to comparing an asymptotically unbiased estimator with an asymptotically biased one. Taking into account that ${\hat{\gamma }_{n}^{(2)}}(k,r)={\hat{\gamma }_{n}^{(1)}}(k,0,r)$, $r\gt 0$ and ${\hat{\gamma }_{n}^{(3)}}(k,r)={\hat{\gamma }_{n}^{(1)}}(k,r-1,r)$, $r\ge 1$, the asymptotic normality of the estimators ${\hat{\gamma }_{n}^{(\ell )}}(k,r)$, $\ell =1,2,3$ follows directly from Theorem 2.
Corollary 1.
Under the conditions of Theorem 1,
\[ \sqrt{{k_{n}}}\left({\hat{\gamma }_{n}^{(\ell )}}({k_{n}},r)-\gamma \right)\hspace{3.57777pt}\stackrel{\mathrm{d}}{\to }\hspace{3.57777pt}\mathcal{N}\left(\mu {\lambda _{\ell }}(\rho ,r),{\gamma ^{2}}{\varsigma _{\ell }^{2}}(r)\right),\hspace{1em}n\to \infty ,(\ell =1,2,3),\]
where
\[ {\lambda _{1}}(\rho ,r)={\nu _{1}}(\rho ,r,2r),\hspace{3.57777pt}{\lambda _{2}}(\rho ,r)={\nu _{1}}(\rho ,0,r),\hspace{3.57777pt}{\lambda _{3}}(\rho ,r)={\nu _{1}}(\rho ,r-1,r)\]
and
\[ {\varsigma _{1}^{2}}(r)={\sigma _{1}^{2}}(r,2r),\hspace{3.57777pt}{\varsigma _{2}^{2}}(r)={\sigma _{1}^{2}}(0,r),\hspace{3.57777pt}{\varsigma _{3}^{2}}(r)={\sigma _{1}^{2}}(r-1,r).\]
The paper is organized as follows. In the next section, we compare the estimators ${\hat{\gamma }_{n}^{(\ell )}}(k,r)$, $\ell =1,2,3$ theoretically. In Section 3, a small-scale simulation study is undertaken. In Section 4, an application to exchange rate data is presented to illustrate the behavior of the estimator ${\hat{\gamma }_{n}^{(1)}}(k,r)$. Section 5 contains conclusions, while all proofs are collected in Section 6.

2 Comparison of the estimators ${\hat{\gamma }_{n}^{(1)}}({k_{n}},r)$ and ${\hat{\gamma }_{n}^{(\ell )}}({k_{n}},r)$, $\ell =2,3$

The asymptotic second moment of ${\hat{\gamma }_{n}^{(\ell )}}(k,r)$ is ${A^{2}}\left(n/k\right){\lambda _{\ell }^{2}}(\rho ,r)+{\gamma ^{2}}{\varsigma _{\ell }^{2}}(r)/k$; see [6] for the definition of the asymptotic second moment. First, following [20] (see also [1]), we fix r and find the asymptotic behavior (as $n\to \infty $) of the so-called minimal mean squared error
(12)
\[ \mathrm{MMSE}\left({\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }},r)\right)=\underset{{k_{n}}}{\inf }\left\{{A^{2}}\left(n/{k_{n}}\right){\lambda _{\ell }^{2}}(\rho ,r)+\frac{{\gamma ^{2}}{\varsigma _{\ell }^{2}}(r)}{{k_{n}}}\right\},\]
where ${k_{n,\ell }^{\ast }}$ is the minimizing sequence.
Further, we assume the classical restriction $\rho \lt 0$, which eliminates the case where $|A|$ is a slowly varying function at infinity. In addition, under the assumption $\rho \lt 0$ in (3), there exists a positive decreasing function $a\in {\mathrm{RV}_{2\rho -1}}$ such that
(13)
\[ {A^{2}}(t)\sim {\int _{t}^{\infty }}a(\tau )\mathrm{d}\tau ,\hspace{1em}t\to \infty ,\]
see [7].
Note that ${\lambda _{\ell }}(\rho ,r)\ne 0$ in (12) is an essential assumption. It allows us to balance the rate of decay of squared asymptotic bias and asymptotic variance.
By applying Lemma 2.8 in [7], we get that the minimizing sequence ${k_{n,\ell }^{\ast }}={k_{n,\ell }^{\ast }}(r)$ satisfies the relation
(14)
\[ {k_{n,\ell }^{\ast }}(r)\sim {\left(\frac{{\gamma ^{2}}{\varsigma _{\ell }^{2}}(r)}{{\lambda _{\ell }^{2}}(\rho ,r)}\right)^{1/(1-2\rho )}}\frac{n}{{a^{\gets }}(1/n)},\hspace{1em}n\to \infty ,\hspace{1em}\ell =1,2,3,\]
where ${a^{\gets }}$ is the inverse function of a. Following the lines in [6] it can be proven that
(15)
\[ {k_{n,\ell }^{\ast }}(r){A^{2}}\left(\frac{n}{{k_{n,\ell }^{\ast }}(r)}\right)\sim \frac{{\varsigma _{\ell }^{2}}(r)}{-2\rho {\lambda _{\ell }^{2}}(\rho ,r)},\hspace{1em}n\to \infty .\]
Substituting (14)–(15) into (12) we get
(16)
\[ \mathrm{MMSE}\left({\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }},r)\right)\sim \frac{1-2\rho }{-2\rho }{\left({\lambda _{\ell }^{2}}(\rho ,r){\left({\gamma ^{2}}{\varsigma _{\ell }^{2}}(r)\right)^{-2\rho }}\right)^{1/(1-2\rho )}}\frac{{a^{\gets }}(1/n)}{n},\]
as $n\to \infty $. The next step is to minimize the right-hand side of (16) with respect to r, or equivalently, to minimize the product
(17)
\[ {\lambda _{\ell }^{2}}(\rho ,r){\left({\varsigma _{\ell }^{2}}(r)\right)^{-2\rho }}\]
with respect to r. Using Wolfram Mathematica 10.4 the minimization of (17) is performed numerically. If product (17) attains its minimum at a point $r={r_{\ell }^{\ast }}$, then ${r_{\ell }^{\ast }}$ is called the optimal choice of the tuning parameter r. The graphs of the optimal choices ${r_{\ell }^{\ast }}={r_{\ell }^{\ast }}(\rho )$, $\ell =1,2,3$ are provided in Figures 1 and 2.
Fig. 1.
Graph $\{(\rho ,{r_{1}^{\ast }}(\rho )),-5\le \rho \lt 0\}$
The graphs of ${r_{\ell }^{\ast }}(\rho )$, $\ell =1,2,3$ are plotted using a set $\{-i/100,\hspace{3.33333pt}1\le i\le 500\}$ of values of ρ. Consider ${r_{\ell }^{\ast }}(\rho )$, $\ell =1,2,3$ as functions of ρ on $[-i/100,-1/100]$, $1\lt i\le 500$. Then the difference ${r_{\ell }^{\ast }}(-1/100)-{r_{\ell }^{\ast }}(-i/100)$ is the width of the range of ${r_{\ell }^{\ast }}(\rho )$ on $[-i/100,-1/100]$. For example, taking $i=100$ we find that the widths of the range of ${r_{1}^{\ast }}(\rho )$, ${r_{2}^{\ast }}(\rho )$ and ${r_{3}^{\ast }}(\rho )$ are 0.38, 0.73 and 0.58, respectively. Calculating the widths of the range of ${r_{1}^{\ast }}(\rho )$, ${r_{2}^{\ast }}(\rho )$ and ${r_{3}^{\ast }}(\rho )$ for values $i\gt 100$ allows us to conclude that the width of the range of ${r_{1}^{\ast }}(\rho )$ is smaller than the widths of the range of ${r_{2}^{\ast }}(\rho )$ and ${r_{3}^{\ast }}(\rho )$. Thus, ${r_{1}^{\ast }}(\rho )$ is less sensitive to the estimation of the parameter ρ than the other two optimal choices.
Fig. 2.
Graphs $\{(\rho ,{r_{2}^{\ast }}(\rho )),-5\le \rho \lt 0\}$ (left) and $\{(\rho ,{r_{3}^{\ast }}(\rho )),-5\le \rho \lt 0\}$ (right)
Now we can compare the estimator ${\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})$ with the estimators
\[ {\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }}),\hspace{1em}\ell =2,3.\]
Following [6], we write
\[ \mathrm{RMMSE}\left({\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})|{\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})\right),\hspace{1em}\ell =2,3\]
for the limit of the ratio of the minimal mean squared errors of ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$ and ${\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})$. Using (16) we get that
\[ \mathrm{RMMSE}\left({\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})|{\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})\right)={\phi _{\ell ,1}}(\rho ),\hspace{1em}\ell =2,3,\]
where
\[ {\phi _{\ell ,1}}(\rho )={\left(\frac{{\lambda _{\ell }^{2}}(\rho ,{r_{\ell }^{\ast }})}{{\lambda _{1}^{2}}(\rho ,{r_{1}^{\ast }})}{\left(\frac{{\varsigma _{\ell }^{2}}({r_{\ell }^{\ast }})}{{\varsigma _{1}^{2}}({r_{1}^{\ast }})}\right)^{-2\rho }}\right)^{1/(1-2\rho )}}.\]
We say that the estimator ${\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})$ outperforms ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$ on a half-line $\{(\rho ,\gamma ):\rho ={\rho _{0}},\gamma \gt 0\}$ if ${\phi _{\ell ,1}}({\rho _{0}})\gt 1$.
Graphs of the functions ${\phi _{2,1}}(\rho )$, $-5\le \rho \lt 0$ and ${\phi _{3,1}}(\rho )$, $-5\le \rho \lt 0$ are presented in Fig. 3. From these one can deduce that the estimator ${\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})$ outperforms the estimators ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$, $\ell =2,3$ in the region $\{(\gamma ,\rho ):\hspace{3.33333pt}\gamma \gt 0,-5\le \rho \lt 0\}$.
We end this section with a comparison of the estimators
\[ {\hat{\gamma }_{n}}(k,{r_{1}},{r_{2}})\hspace{1em}\mathrm{and}\hspace{1em}{\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}}).\]
Theorem 1 in [3] states that for any $\rho \lt 0$ and ${r_{2}}\gt 1$ there is a value ${\bar{r}_{1}}={\bar{r}_{1}}(\rho )$ such that $\nu (\rho ,{\bar{r}_{1}}(\rho ),{r_{2}})=0$, where $\nu (\rho ,{r_{1}},{r_{2}})$ is the same as in Theorem 1. As for the case $0\lt {r_{2}}\le 1$, we add the following statement: $\nu (\rho ,{r_{1}},{r_{2}})\ne 0$ for any $\rho \lt 0$ and $({r_{1}},{r_{2}})\in [1,\infty )\times (0,1]$; see Lemma 1 in Section 6. To exclude asymptotically unbiased estimators from consideration, we compare the estimators (9) with ${\hat{\gamma }_{n}}(k,{r_{1}},{r_{2}})$, ${r_{1}}\ge 1$, $0\lt {r_{2}}\le 1$. In fact, this comparison reduces to the comparison of the estimators ${\hat{\gamma }_{n}^{(1)}}({\tilde{K}_{n}}({\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho )),{\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))$ and ${\hat{\gamma }_{n}}({\tilde{k}_{n}}({\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho )),{\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))$, where
\[ ({\tilde{K}_{n}}({\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho )),{\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))\hspace{3.33333pt}\mathrm{and}\hspace{3.33333pt}({\tilde{k}_{n}}({\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho )),{\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))\]
are optimal choices for estimators ${\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}})$ and ${\hat{\gamma }_{n}}(k,{r_{1}},{r_{2}})$, respectively.
Fig. 3.
Graphs of the functions ${\phi _{2,1}}(\rho )$ (left) and ${\phi _{3,1}}(\rho )$ (right)
One can verify that the limit of the ratio of the minimal mean squared errors of
\[ {\hat{\gamma }_{n}}({\tilde{k}_{n}}({\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho )),{\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))\hspace{3.33333pt}\mathrm{and}\hspace{3.33333pt}{\hat{\gamma }_{n}^{(1)}}({\tilde{K}_{n}}({\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho )),{\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))\]
equals
\[ \phi (\rho )={\left(\frac{{\nu ^{2}}(\rho ,{\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))}{{\nu _{1}^{2}}(\rho ,{\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))}{\left(\frac{{\sigma ^{2}}({\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))}{{\sigma _{1}^{2}}({\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))}\right)^{-2\rho }}\right)^{1/(1-2\rho )}}.\]
Numerically we obtain that the estimator ${\hat{\gamma }_{n}^{(1)}}({\tilde{K}_{n}}({\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho )),{\tilde{R}_{1}}(\rho ),{\tilde{R}_{2}}(\rho ))$ dominates the estimator ${\hat{\gamma }_{n}}({\tilde{k}_{n}}({\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho )),{\tilde{r}_{1}}(\rho ),{\tilde{r}_{2}}(\rho ))$ in the area $\{(\gamma ,\rho ):\gamma \gt 0,-5\le \rho \lt 0\}$, see Fig. 4.
Fig. 4.
Graph of the function $\phi (\rho )$

3 Monte-Carlo simulations

Let ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$, $\ell =1,2,3$ be the estimators discussed in Section 2. In this section, we use numerical simulations to verify the theoretical result provided in Fig. 3. For this purpose, i.i.d. samples ${X_{1}},\dots ,{X_{n}}$ of size $n=1000$ were simulated $N=500$ times from the Burr distribution with several values of the positive extreme value index γ and the second order parameter $\rho \lt 0$.
We recall that the Burr (type XII) distribution has a distribution function $F(x)=1-{\left(1+{x^{-\rho /\gamma }}\right)^{1/\rho }}$, $x\ge 0$, while the appropriate quantile type function has the form $U(t)={t^{\gamma }}{\left(1-{t^{\rho }}\right)^{-\gamma /\rho }}$, $t\gt 1$.
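Inverting the Burr d.f. gives a simple sampler: solving $F(x)=p$ yields $x={\left({(1-p)^{\rho }}-1\right)^{-\gamma /\rho }}=U(1/(1-p))$. A Python sketch (names are ours):

```python
import random

def burr_quantile(p, gamma, rho):
    """Inverse of the Burr(XII) d.f. F(x) = 1 - (1 + x^(-rho/gamma))^(1/rho)."""
    return ((1.0 - p) ** rho - 1.0) ** (-gamma / rho)

def u_quantile(t, gamma, rho):
    """Quantile type function U(t) = t^gamma (1 - t^rho)^(-gamma/rho), t > 1."""
    return t ** gamma * (1.0 - t ** rho) ** (-gamma / rho)

def burr_sample(n, gamma, rho, rng):
    """i.i.d. Burr observations by inverse-transform sampling."""
    return [burr_quantile(rng.random(), gamma, rho) for _ in range(n)]

rng = random.Random(3)
x = burr_sample(1000, 1.0, -1.0, rng)
```

For $\gamma =1$, $\rho =-1$ the quantile is explicit, $x=p/(1-p)$, so the median is 1 and $U(100)=100\cdot 0.99=99$, which makes the sampler easy to check.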
The Burr distribution belongs to Hall’s class of Pareto-type distributions ([16, 17]), i.e., its quantile type function $U(t)$ satisfies the relation
(18)
\[ U(t)=C{t^{\gamma }}\left(1+\frac{\gamma \beta }{\rho }{t^{\rho }}+o\left({t^{\rho }}\right)\right),\hspace{1em}t\to \infty \]
with $(C,\beta )=(1,1)$.
It is well known that under assumption (18), condition (3) is satisfied with $A(t)=\beta \gamma {t^{\rho }}$. Now, using (13), one can find that
(19)
\[ {a^{\gets }}(t)={\left(-2\rho {\beta ^{2}}{\gamma ^{2}}\right)^{1/(1-2\rho )}}{t^{1/(2\rho -1)}}.\]
By substituting (19) into (14) we get
(20)
\[ {k_{n,\ell }^{\ast }}(r)\sim {\left(\frac{{\varsigma _{\ell }^{2}}(r)}{-2\rho {\beta ^{2}}{\lambda _{\ell }^{2}}(\rho ,r)}\right)^{1/(1-2\rho )}}{n^{-2\rho /(1-2\rho )}},\hspace{1em}n\to \infty ,\hspace{1em}\ell =1,2,3.\]
Put
\[ {T_{n}}(k,\tau )=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{{\left({M_{n}}(k,1)\right)^{\tau }}-{\left({M_{n}}(k,2)/2\right)^{\tau /2}}}{{\left({M_{n}}(k,2)/2\right)^{\tau /2}}-{\left({M_{n}}(k,3)/6\right)^{\tau /3}}},& \tau \gt 0,\\ {} \frac{\ln \left({M_{n}}(k,1)\right)-\ln {\left({M_{n}}(k,2)/2\right)^{1/2}}}{\ln {\left({M_{n}}(k,2)/2\right)^{1/2}}-\ln {\left({M_{n}}(k,3)/6\right)^{1/3}}},& \tau =0.\end{array}\right.\]
To estimate the second order parameter ρ we apply the estimator
\[ {\hat{\rho }_{n}}(\kappa ,\tau )=-\left|\frac{3({T_{n}}(\kappa ,\tau )-1)}{{T_{n}}(\kappa ,\tau )-3}\right|.\]
This estimator was introduced in [11]. To decide which value (0 or 1) of the parameter τ to take in the above mentioned estimator, we implemented the algorithm provided in [15]. To estimate the parameter β we use the estimator
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {\hat{\beta }_{n}}(\kappa )={\left(\frac{\kappa }{n}\right)^{{\hat{\rho }_{n}}(\kappa ,\tau )}}\times \\ {} & & \displaystyle \times \frac{\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{\left(\frac{i}{\kappa }\right)^{-{\hat{\rho }_{n}}(\kappa ,\tau )}}\right)\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{V_{i}}\right)-\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{\left(\frac{i}{\kappa }\right)^{-{\hat{\rho }_{n}}(\kappa ,\tau )}}{V_{i}}\right)}{\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{(i/\kappa )^{-{\hat{\rho }_{n}}(\kappa ,\tau )}}\right)\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{\left(\frac{i}{\kappa }\right)^{-{\hat{\rho }_{n}}(\kappa ,\tau )}}{V_{i}}\right)-\left(\frac{1}{\kappa }{\textstyle\textstyle\sum _{i=1}^{\kappa }}{\left(\frac{i}{\kappa }\right)^{-2{\hat{\rho }_{n}}(\kappa ,\tau )}}{V_{i}}\right)},\end{array}\]
where ${V_{i}}=i\left(\ln ({X_{n-i+1,n}})-\ln ({X_{n-i,n}})\right)$, $1\le i\le \kappa $, which was introduced in [14]. Following the recommendations in [2], for both estimators we used $\kappa =[{n^{0.995}}]$, where $[\cdot ]$ denotes the integer part.
Replacing β and ρ by the estimators ${\hat{\beta }_{n}}(\kappa )$ and ${\hat{\rho }_{n}}(\kappa ,\tau )$ in (20), we obtain the empirical counterpart of ${k_{n,\ell }^{\ast }}(r)$:
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\hat{k}_{n,\ell }^{\ast }}(r)& \displaystyle =& \displaystyle \bigg[{\left(\frac{{\varsigma _{\ell }^{2}}(r)}{-2{\hat{\rho }_{n}}(\kappa ,\tau ){\hat{\beta }_{n}^{2}}(\kappa ){\lambda _{\ell }^{2}}({\hat{\rho }_{n}}(\kappa ,\tau ),r)}\right)^{1/(1-2{\hat{\rho }_{n}}(\kappa ,\tau ))}}\times \\ {} & & \displaystyle \times {n^{-2{\hat{\rho }_{n}}(\kappa ,\tau )/(1-2{\hat{\rho }_{n}}(\kappa ,\tau ))}}\bigg],\hspace{1em}\ell =1,2,3.\end{array}\]
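Given estimates of ρ and β, this empirical level amounts to one line of arithmetic. A sketch (the function name and the illustrative factor values are ours; the latter correspond to $\ell =1$, $r=1$, $\rho =-1$, for which ${\lambda _{1}}={\nu _{1}}(-1,1,2)=1/4$ and ${\varsigma _{1}^{2}}={\sigma _{1}^{2}}(1,2)=2$):

```python
def k_star(n, rho, beta, lambda_sq, varsigma_sq):
    """Empirical optimal number of top order statistics, cf. (20):
    truncates (varsigma^2 / (-2 rho beta^2 lambda^2))^(1/(1-2 rho))
              * n^(-2 rho / (1 - 2 rho)) to an integer."""
    base = varsigma_sq / (-2.0 * rho * beta ** 2 * lambda_sq)
    return int(base ** (1.0 / (1.0 - 2.0 * rho))
               * n ** (-2.0 * rho / (1.0 - 2.0 * rho)))

# Illustration: n = 1000, rho = -1, beta = 1, lambda^2 = 1/16, varsigma^2 = 2,
# giving 16^(1/3) * 1000^(2/3) = 251.98..., truncated to 251.
k = k_star(1000, -1.0, 1.0, 0.0625, 2.0)
```

In practice `rho` and `beta` are the estimates $\hat{\rho }_{n}(\kappa ,\tau )$ and $\hat{\beta }_{n}(\kappa )$, and the factors ${\lambda _{\ell }^{2}}$, ${\varsigma _{\ell }^{2}}$ are evaluated at the estimated ρ, exactly as in the algorithm below.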
To calculate the estimators ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$, $\ell =1,2,3$ we use the following algorithm:
1. Estimate the parameters β and ρ using the estimators ${\hat{\beta }_{n}}(\kappa )$ and ${\hat{\rho }_{n}}(\kappa ,\tau )$, respectively.
2. Find numerically ${r_{\ell }^{\ast }}=\mathrm{argmin}\left\{r\gt 0:\hspace{3.33333pt}{\lambda _{\ell }^{2}}({\hat{\rho }_{n}}(\kappa ,\tau ),r){\left({\varsigma _{\ell }^{2}}(r)\right)^{-2{\hat{\rho }_{n}}(\kappa ,\tau )}}\right\}$ for $\ell =1,2$ and ${r_{3}^{\ast }}=\mathrm{argmin}\left\{r\gt 1:\hspace{3.33333pt}{\lambda _{3}^{2}}({\hat{\rho }_{n}}(\kappa ,\tau ),r){\left({\varsigma _{3}^{2}}(r)\right)^{-2{\hat{\rho }_{n}}(\kappa ,\tau )}}\right\}$.
3. Compute ${\hat{k}_{n,\ell }^{\ast }}({r_{\ell }^{\ast }})$ and then find estimate ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$.
The results of the simulations are summarized in Fig. 5. For the Burr distribution, we took the parameters ρ and γ from the intervals $[-4,0)$ and $(0,2]$, respectively. We divided the rectangle $[-4,0)\times (0,2]$ into the squares
\[ {V_{i,j}}=[-0.1(i+1),-0.1i)\times (0.1j,0.1(j+1)],\hspace{3.33333pt}i=0,1,\dots ,39,\hspace{3.33333pt}j=0,1,\dots ,19.\]
Taking the true values of the parameters ρ and γ as the coordinates of the center of each square ${V_{i,j}}$, we performed Monte-Carlo simulations. Let $EMS{E_{\ell }}(\rho ,\gamma )$ denote the empirical MSE of the estimator ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$ when observations are simulated from the Burr distribution with true parameters ρ and γ. The square ${V_{i,j}}$ is colored black, gray or white if
\[\begin{array}{r}\displaystyle EMS{E_{1}}(-0.1i-0.05,0.1j+0.05)\lt EMS{E_{\ell }}(-0.1i-0.05,0.1j+0.05),\hspace{3.33333pt}\ell =2,3,\\ {} \displaystyle EMS{E_{2}}(-0.1i-0.05,0.1j+0.05)\lt EMS{E_{\ell }}(-0.1i-0.05,0.1j+0.05),\hspace{3.33333pt}\ell =1,3,\\ {} \displaystyle EMS{E_{3}}(-0.1i-0.05,0.1j+0.05)\lt EMS{E_{\ell }}(-0.1i-0.05,0.1j+0.05),\hspace{3.33333pt}\ell =1,2,\end{array}\]
respectively.
Fig. 5.
Graphical comparison of the estimators ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$, $\ell =1,2,3$
The graphical result in Fig. 5 indicates that an additional simulation is needed for the case $\rho =-1$. We consider a model of i.i.d. observations from the Fréchet distribution with d.f. $F(x)=\exp \left\{-{x^{-1/\gamma }}\right\}$, $x\gt 0$. We also consider a stable model, i.e., observations that are absolute values of i.i.d. stable random variables with characteristic function $\varphi (t)=\exp \left\{-|t{|^{1/\gamma }}\right\}$. The results of these simulations are presented in Table 1 and Table 2, respectively. The best result in each column is presented in bold.
Table 1.
Comparison of the root EMSEs for the Fréchet model
γ 0.25 0.50 0.75 1.00 1.25 1.50 1.75
$\sqrt{EMS{E_{1}}(-1,\gamma )}$ 0.0249 0.0500 0.0703 0.1050 0.1233 0.1502 0.1748
$\sqrt{EMS{E_{2}}(-1,\gamma )}$ 0.0251 0.0505 0.0711 0.1051 0.1240 0.1517 0.1749
$\sqrt{EMS{E_{3}}(-1,\gamma )}$ 0.0299 0.0597 0.0846 0.1232 0.1492 0.1769 0.2063
Table 2.
Comparison of the root EMSEs for the stable model
γ 0.75 1.00 1.25 1.50 1.75 2.00 2.25
$\sqrt{EMS{E_{1}}(-1,\gamma )}$ 0.0806 0.0916 0.1245 0.1518 0.1732 0.2111 0.2211
$\sqrt{EMS{E_{2}}(-1,\gamma )}$ 0.0816 0.0919 0.1258 0.1521 0.1755 0.2130 0.2218
$\sqrt{EMS{E_{3}}(-1,\gamma )}$ 0.0708 0.0931 0.1467 0.1869 0.2088 0.2485 0.2660
The findings of our Monte-Carlo simulations are the following:
  • 1. The graphical results in Fig. 5 allow us to conclude that the comparison of the estimators under consideration does not depend on the parameter γ. This reflects the theoretical results: the functions ${\phi _{2,1}}(\rho )$ and ${\phi _{3,1}}(\rho )$ also do not depend on γ.
  • 2. The newly proposed estimator ${\hat{\gamma }_{n}^{(1)}}({k_{n,1}^{\ast }}({r_{1}^{\ast }}),{r_{1}^{\ast }})$ outperforms the estimators ${\hat{\gamma }_{n}^{(\ell )}}({k_{n,\ell }^{\ast }}({r_{\ell }^{\ast }}),{r_{\ell }^{\ast }})$, $\ell =2,3$ when $-4\le \rho \le -1$; see Fig. 5 and Tables 1–2. However, its dominance is not substantial; see Tables 1–2.

4 A practical example - exchange rate data

Here we deal with the daily exchange rates of the U.S. Dollar expressed in Chinese Yuan (CNY) and South Korean Won (KRW). We chose samples of size $n=1248$ covering a period of almost five years, from August 24, 2020, to August 22, 2025. These daily data are taken from the Federal Reserve Bank of St. Louis (https://fred.stlouisfed.org). We analyze the absolute values of the logarithmic returns
\[ {R_{t}^{(i)}}=\left|\ln (E{R_{t}^{(i)}}/E{R_{t-1}^{(i)}})\right|,\hspace{1em}2\le t\le n,\hspace{3.33333pt}i=1,2,\]
where $E{R_{t}^{(1)}}$ denotes the USD/CNY rate and $E{R_{t}^{(2)}}$ the USD/KRW rate at time t. Zero and very small log-returns are removed using the rule ${R_{t}^{(i)}}\lt 0.0001$. After this deletion, the time series ${R_{t}^{(1)}}$ and ${R_{t}^{(2)}}$ contain ${n_{1}}=1175$ and ${n_{2}}=1234$ observations, respectively; see Fig. 6.
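This preprocessing step is elementary; a Python sketch (with a toy rate series of ours in place of the FRED data):

```python
import math

def abs_log_returns(rates, threshold=0.0001):
    """Absolute log-returns |ln(ER_t / ER_{t-1})| with zero and very
    small values (below the threshold) removed."""
    raw = [abs(math.log(rates[t] / rates[t - 1])) for t in range(1, len(rates))]
    return [v for v in raw if v >= threshold]

# Toy series: an up day, a flat day (its zero return is removed), a down day.
rates = [7.00, 7.07, 7.07, 7.00]
returns = abs_log_returns(rates)       # two returns of about 0.00995 each
```

The flat day produces a zero log-return and is dropped by the threshold rule, mirroring the reduction from $n$ raw observations to ${n_{1}}$ and ${n_{2}}$ above.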
Fig. 6.
Time series ${R_{t}^{(1)}}$, $1\le t\le 1175$ (left) and ${R_{t}^{(2)}}$, $1\le t\le 1234$ (right)
We applied the algorithm (with $\ell =1,2,3$) provided in Section 3 to calculate the estimates of the parameter γ. The estimates of $(\rho ,\beta )$ are $(-0.712,1.040)$ and $(-0.697,1.045)$ for the time series ${R_{t}^{(1)}}$, $1\le t\le 1175$ and ${R_{t}^{(2)}}$, $1\le t\le 1234$, respectively. The resulting estimates of γ are collected in Table 3.
It is worth noting that one of the stylized econometric facts states that, in most cases, the absolute values of log-returns of economic and financial data exhibit heavy-tail phenomena and their distribution satisfies (1) with extreme value index $1/4\le \gamma \le 1/2$. The estimates of γ presented in Table 3 allow us to conclude that the heavy-tail effect is present in both time series ${R_{t}^{(1)}}$, $1\le t\le 1175$ and ${R_{t}^{(2)}}$, $1\le t\le 1234$. Moreover, none of the estimates contradicts the mentioned stylized fact.
Table 3.
Estimates of the parameter γ
time series ${\hat{\gamma }_{n}^{(1)}}$ ${\hat{\gamma }_{n}^{(2)}}$ ${\hat{\gamma }_{n}^{(3)}}$
${R_{t}^{(1)}}$ 0.346 0.339 0.407
${R_{t}^{(2)}}$ 0.281 0.287 0.327
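The estimates in Table 3 are of the form ${\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}})=Q({M_{n}}(k,{r_{1}}),{M_{n}}(k,{r_{2}}))$ with Q as in the proof of Theorem 2 (Section 6). The sketch below assumes that ${M_{n}}(k,r)$ is the r-th empirical moment of the log-excesses, as in [13]; the function names and the Pareto test sample are ours, not from the paper:

```python
import math
import random

def M_n(x, k, r):
    """r-th empirical moment of the log-excesses over the (k+1)-th largest value
    (the Gomes-Martins statistic convention is assumed here)."""
    xs = sorted(x, reverse=True)
    base = math.log(xs[k])                       # log of X_{n-k:n}
    return sum((math.log(xs[i]) - base) ** r for i in range(k)) / k

def gamma_hat(x, k, r1, r2):
    """gamma_hat^(1)(k, r1, r2) = Q(M_n(k, r1), M_n(k, r2)), 0 < r1 < r2."""
    m1, m2 = M_n(x, k, r1), M_n(x, k, r2)
    return ((math.gamma(r1 + 1) * m2) / (math.gamma(r2 + 1) * m1)) ** (1.0 / (r2 - r1))

# Sanity check on an exact Pareto sample with gamma = 0.5, i.e. 1 - F(x) = x^{-2}.
random.seed(1)
sample = [random.random() ** -0.5 for _ in range(5000)]
print(gamma_hat(sample, k=500, r1=0.5, r2=1.5))  # should be close to 0.5
```

For an exact Pareto tail the log-excesses are exponential, so ${M_{n}}(k,r)$ is close to $\kappa (r)={\gamma ^{r}}\Gamma (1+r)$ and the estimate is close to the true γ.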

5 Conclusions

We introduced a new family of estimators for a positive EVI, parameterized by two tuning parameters. This family is quite rich: it generalizes several classical estimators and parameterized families of estimators previously presented in the statistical literature. We proved the asymptotic normality of the newly introduced estimators, which allows us to compare them theoretically, in the sense of the asymptotic MMSE, with other estimators for a positive EVI. It is shown that the family of newly introduced estimators ${\hat{\gamma }_{n}^{(1)}}(k,r)$ (at the optimal levels of its parameters) dominates the families of estimators proposed in [13] and [22].
For practical purposes, we discussed a subfamily of the newly introduced estimators. This subfamily depends on a single tuning parameter whose optimal value, viewed as a function of the second-order parameter ρ, varies over a rather narrow range. Since reliable estimation of the parameter ρ remains difficult, this low sensitivity to the estimated value of ρ is a significant advantage over other estimators for a positive EVI. The performance of the considered subfamily has been demonstrated in a small-scale Monte-Carlo study and on two real data sets.

6 Proofs

To prove Theorem 2 we apply a result from [13], adapted for our purposes. A generalization of the following theorem can be found in [23].
Theorem 3 ([13]).
Suppose that the assumptions of Theorem 1 hold. Suppose also that $0\le {r_{1}}\lt {r_{2}}$. Then
(21)
\[ \sqrt{{k_{n}}}\left({M_{n}}({k_{n}},{r_{1}})-\kappa ({r_{1}}),{M_{n}}({k_{n}},{r_{2}})-\kappa ({r_{2}})\right)\hspace{3.57777pt}\stackrel{\mathrm{d}}{\to }\hspace{3.57777pt}\left(\mu \upsilon ({r_{1}})+{Y_{{r_{1}}}},\mu \upsilon ({r_{2}})+{Y_{{r_{2}}}}\right),\]
as $n\to \infty $. Here
\[ \kappa (r)={\gamma ^{r}}\Gamma (1+r),\hspace{1em}\upsilon (r)=\left\{\begin{array}{l@{\hskip10.0pt}l}r{\gamma ^{r-1}}\Gamma (1+r),& \rho =0,\\ {} {\gamma ^{r-1}}\Gamma (1+r)\frac{1-{(1-\rho )^{-r}}}{-\rho },& \rho \lt 0,\end{array}\right.\]
while ${Y_{{r_{1}}}}$ and ${Y_{{r_{2}}}}$ are jointly normal random variables with zero means and a covariance matrix
\[ S({r_{1}},{r_{2}})=\left(\begin{array}{c@{\hskip10.0pt}c}s({r_{1}},{r_{1}})& s({r_{1}},{r_{2}})\\ {} s({r_{1}},{r_{2}})& s({r_{2}},{r_{2}})\end{array}\right),\]
where $s({r_{1}},{r_{2}})={\gamma ^{{r_{1}}+{r_{2}}}}\left(\Gamma (1+{r_{1}}+{r_{2}})-\Gamma (1+{r_{1}})\Gamma (1+{r_{2}})\right)$.
We are now ready to prove Theorem 2.
Proof.
In [13] the asymptotic normality of ${\hat{\gamma }_{n}^{(1)}}(k,0,r)$, $r\gt 0$ is proved. Thus, it is enough to prove (10) in the case $0\lt {r_{1}}\lt {r_{2}}$. We rewrite ${\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}})$ as follows:
\[ {\hat{\gamma }_{n}^{(1)}}(k,{r_{1}},{r_{2}})=Q\left({M_{n}}(k,{r_{1}}),{M_{n}}(k,{r_{2}})\right),\hspace{1em}Q(x,y)={\left(\frac{\Gamma ({r_{1}}+1)y}{\Gamma ({r_{2}}+1)x}\right)^{1/({r_{2}}-{r_{1}})}}.\]
Denoting
\[ {\xi _{1}}(x,y)=\frac{\partial Q(x,y)}{\partial x},\hspace{1em}{\xi _{2}}(x,y)=\frac{\partial Q(x,y)}{\partial y}\]
it is not difficult to get that
\[ {\xi _{1}}(x,y)=-\frac{Q(x,y)}{({r_{2}}-{r_{1}})x},\hspace{1em}{\xi _{2}}(x,y)=\frac{Q(x,y)}{({r_{2}}-{r_{1}})y}.\]
It follows immediately that
\[ \underset{x\to \kappa ({r_{1}}),y\to \kappa ({r_{2}})}{\lim }{\xi _{i}}(x,y)={\xi _{i}}(\kappa ({r_{1}}),\kappa ({r_{2}})),\hspace{1em}i=1,2,\]
where
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle {\xi _{1}}(\kappa ({r_{1}}),\kappa ({r_{2}}))& \displaystyle =& \displaystyle -\frac{{\gamma ^{1-{r_{1}}}}}{({r_{2}}-{r_{1}})\Gamma (1+{r_{1}})},\\ {} \displaystyle {\xi _{2}}(\kappa ({r_{1}}),\kappa ({r_{2}}))& \displaystyle =& \displaystyle \frac{{\gamma ^{1-{r_{2}}}}}{({r_{2}}-{r_{1}})\Gamma (1+{r_{2}})}.\end{array}\]
Thus, the first-order partial derivatives of the function $Q(x,y)$ exist for $(x,y)$ in a neighborhood of $(\kappa ({r_{1}}),\kappa ({r_{2}}))$ and are continuous at $(\kappa ({r_{1}}),\kappa ({r_{2}}))$. Hence the multivariate delta method (see, e.g., Theorem 3.1 in [24]) immediately yields (10). The quantities ${\nu _{1}}(\rho ,{r_{1}},{r_{2}})$ and ${\sigma _{1}^{2}}({r_{1}},{r_{2}})$ in (10) are calculated using
\[ {\nu _{1}}(\rho ,{r_{1}},{r_{2}})=\left(\begin{array}{c@{\hskip10.0pt}c}{\xi _{1}}(\kappa ({r_{1}}),\kappa ({r_{2}}))& {\xi _{2}}(\kappa ({r_{1}}),\kappa ({r_{2}}))\end{array}\right)\left(\begin{array}{c}\upsilon ({r_{1}})\\ {} \upsilon ({r_{2}})\end{array}\right)\]
and
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {\gamma ^{2}}{\sigma _{1}^{2}}({r_{1}},{r_{2}})\\ {} & & \displaystyle \hspace{1em}\hspace{1em}=\left(\begin{array}{c@{\hskip10.0pt}c}{\xi _{1}}(\kappa ({r_{1}}),\kappa ({r_{2}}))& {\xi _{2}}(\kappa ({r_{1}}),\kappa ({r_{2}}))\end{array}\right)S({r_{1}},{r_{2}})\left(\begin{array}{c}{\xi _{1}}(\kappa ({r_{1}}),\kappa ({r_{2}}))\\ {} {\xi _{2}}(\kappa ({r_{1}}),\kappa ({r_{2}}))\end{array}\right).\end{array}\]
This ends the proof of Theorem 2.  □
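The closed-form partial derivatives ${\xi _{1}}$, ${\xi _{2}}$ used in the delta-method step above can be checked against central finite differences (a numerical sanity check, not part of the proof; all names below are ours):

```python
import math

def Q(x, y, r1, r2):
    # Q(x, y) from the proof of Theorem 2
    return ((math.gamma(r1 + 1) * y) / (math.gamma(r2 + 1) * x)) ** (1.0 / (r2 - r1))

def central_diff(f, x, y, wrt, h=1e-6):
    # central finite-difference approximation of a first partial derivative
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

r1, r2, g = 0.5, 1.5, 0.4                  # g plays the role of gamma
x0 = g ** r1 * math.gamma(1.0 + r1)        # kappa(r1)
y0 = g ** r2 * math.gamma(1.0 + r2)        # kappa(r2)
q = Q(x0, y0, r1, r2)                      # Q(kappa(r1), kappa(r2)) = gamma

xi1 = -q / ((r2 - r1) * x0)                # closed form for xi_1 at (kappa(r1), kappa(r2))
xi2 = q / ((r2 - r1) * y0)                 # closed form for xi_2
f = lambda a, b: Q(a, b, r1, r2)
print(xi1, central_diff(f, x0, y0, "x"))
print(xi2, central_diff(f, x0, y0, "y"))
```

The check also confirms the identity $Q(\kappa ({r_{1}}),\kappa ({r_{2}}))=\gamma $, which underlies the consistency of the estimator.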
Lemma 1.
For any $\rho \lt 0$ and $({r_{1}},{r_{2}})\in [1,\infty )\times (0,1]$,
(22)
\[ \nu (\rho ,{r_{1}},{r_{2}})\gt 0,\]
where $\nu (\rho ,{r_{1}},{r_{2}})$ is the same as in Theorem 1.
Proof.
Obviously, $\nu (\rho ,{r_{1}},1)\gt 0$; see ${\lambda _{3}}(\rho ,r)$ in Corollary 1.
Focusing on the case $0\lt {r_{2}}\lt 1$, we consider $\nu (\rho ,{r_{1}},{r_{2}})$ as a function of ${r_{1}}$. From the proof of Theorem 1 in [3] we know that
(23)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \underset{{r_{1}}\downarrow 1}{\lim }\nu (\rho ,{r_{1}},{r_{2}})& \displaystyle =& \displaystyle \frac{1-{(1-\rho )^{{r_{2}}}}}{{r_{2}}\rho {(1-\rho )^{{r_{2}}}}}\gt 0,\end{array}\]
(24)
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}\displaystyle \underset{{r_{1}}\to \infty }{\lim }\nu (\rho ,{r_{1}},{r_{2}})& \displaystyle =& \displaystyle \frac{{r_{2}}-1}{\rho {r_{2}}}\gt 0\end{array}\]
for any $\rho \lt 0$ and $0\lt {r_{2}}\lt 1$.
We have for $\rho \lt 0$,
(25)
\[ \frac{\mathrm{d}\nu (\rho ,{r_{1}},{r_{2}})}{\mathrm{d}{r_{1}}}=\frac{{(1-\rho )^{-{r_{1}}-{r_{1}}{r_{2}}}}\ln (1-\rho )}{-\rho }\left({(1-\rho )^{{r_{1}}}}-{(1-\rho )^{{r_{1}}{r_{2}}+1}}\right).\]
If $0\lt {r_{2}}\lt 1$ and ${r_{1}}\gt 1/(1-{r_{2}})$, then the right-hand side of (25) is strictly positive. This, together with (23), implies (22). Similarly, one can check that (22) holds in the case $0\lt {r_{2}}\lt 1$ and ${r_{1}}\lt 1/(1-{r_{2}})$. It remains to consider the case $0\lt {r_{2}}\lt 1$ and ${r_{1}}=1/(1-{r_{2}})$, for which the derivative in (25) equals zero. Let $\epsilon \gt 0$ be such that $1\lt 1/(1-{r_{2}}+\epsilon )$. Then the inequalities
\[\begin{array}{r@{\hskip10.0pt}c@{\hskip10.0pt}l}& & \displaystyle {(1-\rho )^{{r_{1}}}}-{(1-\rho )^{{r_{1}}{r_{2}}+1}}{\bigg|_{{r_{1}}=1/(1-{r_{2}}+\epsilon )}}\\ {} & & \displaystyle \hspace{1em}\hspace{1em}={(1-\rho )^{1/(1-{r_{2}}+\epsilon )}}-{(1-\rho )^{(1+\epsilon )/(1-{r_{2}}+\epsilon )}}\lt 0,\\ {} & & \displaystyle {(1-\rho )^{{r_{1}}}}-{(1-\rho )^{{r_{1}}{r_{2}}+1}}{\bigg|_{{r_{1}}=1/(1-{r_{2}}-\epsilon )}}\\ {} & & \displaystyle \hspace{1em}\hspace{1em}={(1-\rho )^{1/(1-{r_{2}}-\epsilon )}}-{(1-\rho )^{(1-\epsilon )/(1-{r_{2}}-\epsilon )}}\gt 0\end{array}\]
yield that $\nu (\rho ,{r_{1}},{r_{2}})$ attains its minimum at ${r_{1}}=1/(1-{r_{2}})$. Noting that
\[ \nu (\rho ,1/(1-{r_{2}}),{r_{2}})=\frac{1-{r_{2}}}{-\rho {r_{2}}}\left(1-{(1-\rho )^{-{r_{2}}/(1-{r_{2}})}}\right)\gt 0\]
ends the proof of (22). The lemma is proved.  □
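Lemma 1 lends itself to a quick numerical sanity check. The sketch below evaluates $\nu (\rho ,{r_{1}},{r_{2}})$ for $\rho \lt 0$ (the closed-form expression from Theorem 1) on a grid in $[1,\infty )\times (0,1]$ and compares its value at ${r_{1}}=1/(1-{r_{2}})$ with the closed form from the proof; the grid and the code are ours:

```python
import math

def nu(rho, r1, r2):
    # nu(rho, r1, r2) for rho < 0, as given in Theorem 1
    b = 1.0 - rho
    return (b ** (-r1 * r2) - r2 * b ** (-(r1 - 1.0)) + (r2 - 1.0)) / (rho * r2)

# Positivity on a grid: rho in [-5, -0.1], r1 in [1, 11], r2 in (0, 1].
grid_ok = all(
    nu(-0.1 * i, 1.0 + 0.25 * j, 0.05 * m) > 0.0
    for i in range(1, 51)
    for j in range(0, 41)
    for m in range(1, 21)
)
print(grid_ok)

# Value at the minimizer r1 = 1/(1 - r2) versus the closed form from the proof.
rho, r2 = -1.0, 0.5
r1_min = 1.0 / (1.0 - r2)
closed = (1.0 - r2) / (-rho * r2) * (1.0 - (1.0 - rho) ** (-r2 / (1.0 - r2)))
print(nu(rho, r1_min, r2), closed)
```

Both printed values agree, as the proof predicts.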
Remark 1.
The estimators ${\hat{\gamma }_{n}^{(1)}}(k,1/(1-{r_{2}}),{r_{2}})$, $0\lt {r_{2}}\lt 1$ (considered in the proof of Lemma 1) coincide with the family of estimators ${\hat{\gamma }_{n}}(k,1,r)$, $r\gt 0$.

References

[1] 
Brilhante, M.F., Gomes, M.I., Pestana, D.: A simple generalization of the Hill estimator. Comput. Stat. Data Anal. 57, 518–535 (2013) MR2981106. https://doi.org/10.1016/j.csda.2012.07.019
[2] 
Caeiro, F., Gomes, M.I.: Minimum-variance reduced-bias tail index and high quantile estimation. REVSTAT 6, 1–20 (2008) MR2386296
[3] 
Caeiro, F., Gomes, M.I.: Bias reduction in the estimation of parameters of rare events. Theory Stoch. Process. 8, 67–76 (2002) MR2026256
[4] 
Caeiro, F., Gomes, M.I.: A class of asymptotically unbiased semi-parametric estimators of the tail index. Test 11, 345–364 (2002) MR1947602. https://doi.org/10.1007/BF02595711
[5] 
de Haan, L., Ferreira, A.: Extreme Value Theory: An Introduction. Springer, New York (2006) MR2234156. https://doi.org/10.1007/0-387-34471-3
[6] 
de Haan, L., Peng, L.: Comparison of tail index estimators. Stat. Neerl. 52, 60–70 (1998) MR1615558. https://doi.org/10.1111/1467-9574.00068
[7] 
Dekkers, A.L.M., de Haan, L.: Optimal choice of sample fraction in extreme-value estimation. J. Multivar. Anal. 47, 173–195 (1993) MR1247373. https://doi.org/10.1006/jmva.1993.1078
[8] 
Dekkers, A.L.M., Einmahl, J.H.J., de Haan, L.: A moment estimator for the index of an extreme-value distribution. Ann. Stat. 17, 1833–1855 (1989) MR1026315. https://doi.org/10.1214/aos/1176347397
[9] 
Draisma, G., de Haan, L., Peng, L., Pereira, T.T.: A bootstrap-based method to achieve optimality in estimating the extreme-value index. Extremes 2, 367–404 (1999) MR1776855. https://doi.org/10.1023/A:1009900215680
[10] 
Fedotenkov, I.: A review of more than one hundred Pareto-tail index estimators. Statistica 80, 245–299 (2020)
[11] 
Fraga Alves, M.I., Gomes, M.I., de Haan, L.: A new class of semi-parametric estimators of the second order parameter. Port. Math. 60, 193–214 (2003) MR1984031
[12] 
Gomes, M.I., Martins, M.J.: Efficient alternatives to the Hill estimator. In: Proceedings of the Workshop V.E.L.A. - Extreme Values and Additive Laws, pp. 40–43 (1999)
[13] 
Gomes, M.I., Martins, M.J.: Alternatives to Hill’s estimator - asymptotic versus finite sample behaviour. J. Stat. Plan. Inference 93, 161–180 (2001) MR1822394. https://doi.org/10.1016/S0378-3758(00)00201-9
[14] 
Gomes, M.I., Martins, M.J.: Asymptotically unbiased estimators of the tail index based on external estimation of the second order parameter. Extremes 5, 5–31 (2002) MR1947785. https://doi.org/10.1023/A:1020925908039
[15] 
Gomes, M.I., Pestana, D., Caeiro, F.: A note on the asymptotic variance at optimal levels of a bias-corrected hill estimator. Stat. Probab. Lett. 79, 295–303 (2009) MR2493012. https://doi.org/10.1016/j.spl.2008.08.016
[16] 
Hall, P.: On some simple estimates of an exponent of regular variation. J. R. Stat. Soc. Ser. B 44, 37–42 (1982) MR0655370
[17] 
Hall, P., Welsh, A.H.: Adaptive estimates of parameters of regular variation. Ann. Stat. 13, 331–341 (1985) MR0773171. https://doi.org/10.1214/aos/1176346596
[18] 
Hill, B.M.: A simple general approach to inference about the tail of a distribution. Ann. Stat. 3, 1163–1174 (1975) MR0378204
[19] 
Oliveira, O.A., Gomes, M.I., Fraga Alves, M.I.: Improvements in the estimation of a heavy tail. REVSTAT 4, 81–109 (2006) MR2259366
[20] 
Paulauskas, V., Vaičiulis, M.: On the improvement of Hill and some other estimators. Lith. Math. J. 53, 336–355 (2013) MR3097309. https://doi.org/10.1007/s10986-013-9212-x
[21] 
Paulauskas, V., Vaičiulis, M.: Comparison of the several parameterized estimators for the positive extreme value index. J. Stat. Comput. Simul. 87, 1342–1362 (2017) MR3621952. https://doi.org/10.1080/00949655.2016.1263303
[22] 
Penalva, H., Caeiro, F., Gomes, M.I., Neves, M.M.: An efficient naive generalisation of the Hill estimator: discrepancy between asymptotic and finite sample behaviour. Notas e Comunicações CEAUL 2 (2016)
[23] 
Vaičiulis, M.: A multivariate limit theorem for generalized Hill statistics. Lith. Math. J. 65, 117–133 (2025) MR4885701. https://doi.org/10.1007/s10986-025-09658-2
[24] 
Van Der Vaart, A.W.: Asymptotic Statistics. Cambridge University Press (1998) MR1652247. https://doi.org/10.1017/CBO9780511802256

Copyright
© 2026 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Asymptotic normality, extreme value index, Hall class, Hill estimator

MSC2020
62F12, 62G32, 60F05

