Simulation paradoxes related to a fractional Brownian motion with small Hurst index

We consider the simulation of sample paths of a fractional Brownian motion with small values of the Hurst index and estimate the behavior of the expected maximum. We prove that, for each fixed $N$, the error of approximation $\mathbf {E}\max_{t\in[0,1]}B^H(t)-\mathbf {E}\max_{i=\overline{1,N}}B^H(i/N)$ grows rapidly to $\infty$ as the Hurst index tends to 0.

Due to self-similarity, we have that, for all $T>0$,
$$\max_{t\in[0,T]}B^H(t)\ \stackrel{d}{=}\ T^H\max_{t\in[0,1]}B^H(t).$$
Based on this invariance of distributions, it suffices to investigate the properties of the fractional Brownian motion over the time interval $[0,1]$ only.
In this paper, we consider the behavior of the maximum functional $\max_{t\in[0,1]}B^H(t)$ for small values of the Hurst index.
It should be noted that the fractional Brownian motion with $H=1/2$ is the Wiener process $\{W(t),\ t\ge0\}$. The distribution of $\max_{t\in[0,1]}W(t)$ is known. Namely,
$$\mathbf{P}\Bigl(\max_{t\in[0,1]}W(t)\le x\Bigr)=\sqrt{\frac{2}{\pi}}\int_0^x e^{-y^2/2}\,dy,\quad x\ge0,$$
and, therefore, $\mathbf{E}\max_{t\in[0,1]}W(t)=\sqrt{2/\pi}$. Many papers are devoted to the distribution of the maximum functional of the fractional Brownian motion, where usually asymptotic properties for large values of the time horizon $T$ are considered. For example, Molchan [5] found the asymptotic behavior of small-ball probabilities for the maximum of the fractional Brownian motion. Talagrand [8] obtained lower bounds for the expected maximum of the fractional Brownian motion. In several works, the distribution of the maximum is investigated when the Hurst index $H$ is close to $1/2$. In particular, this case was considered by Sinai [6] and recently by Delorme and Wiese [4].
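The value $\mathbf{E}\max_{t\in[0,1]}W(t)=\sqrt{2/\pi}\approx0.798$ can be checked empirically. The following sketch (an illustration, not part of the paper's computations; the function name and parameters are ours) approximates Wiener paths by scaled random walks; the discrete maximum slightly underestimates the continuous one.

```python
import numpy as np

def mean_max_wiener(n_paths=2000, n_steps=5000, seed=1):
    """Monte Carlo estimate of E max_{t in [0,1]} W(t) = sqrt(2/pi) ~ 0.798,
    approximating each Wiener path by a random walk with n_steps steps."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        # partial sums of N(0, 1/n_steps) increments approximate W on [0,1]
        path = np.cumsum(rng.standard_normal(n_steps)) / np.sqrt(n_steps)
        total += max(path.max(), 0.0)  # the path starts at W(0) = 0
    return total / n_paths
```

The discretization bias is of order $1/\sqrt{n_\text{steps}}$, so the estimate falls slightly below $0.798$.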
Currently, an analytical expression for the distribution of the maximum of the fractional Brownian motion remains unknown. Moreover, the exact value of the expectation of such a functional is unknown too.
From the paper of Borovkov et al. [1] we know the following lower bound:
$$\mathbf{E}\max_{t\in[0,1]}B^H(t)\ \ge\ \frac{1}{\sqrt{4H\pi e\ln 2}}.\qquad(1)$$
On the other hand, we may get an approximate value of the expected maximum using Monte Carlo simulations. That is, for sufficiently large $N$,
$$\mathbf{E}\max_{t\in[0,1]}B^H(t)\ \approx\ \mathbf{E}\max_{i=\overline{1,N}}B^H(i/N).\qquad(2)$$
The authors of [1] also obtained an upper bound (4) for the error
$$\Delta_N:=\mathbf{E}\max_{t\in[0,1]}B^H(t)-\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)$$
of approximation (2), valid for $N\ge2^{1/H}$. The implementation of approximation (2) has technical limitations. Given modern computer capabilities, we assume that $N\le2^{20}\approx10^6$. Under such conditions, inequality (4) is applicable only when $H\ge0.05$, in which case it gives $\Delta_N<11.18$.
In this article, we perform Monte Carlo simulations and estimate $\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)$. Also, we investigate the behavior of $\Delta_N$ for small values of the Hurst index $H$ and show that, for a fixed $N$, the approximation error $\Delta_N\to+\infty$ as $H\to0$. For the rate of this convergence, when $N=2^{20}$, we prove the inequality $\Delta_N>c_1H^{-1/2}-c_2$, $H\in(0,1)$, where the constants $c_1=0.2055$ and $c_2=3.4452$ are calculated numerically. Thus, when the values of $H$ are small, approximation (2) is not appropriate for the evaluation of $\mathbf{E}\max_{t\in[0,1]}B^H(t)$.
The article is organized as follows. The first section presents the computational methodology. The second section presents the results of computing the expected maximum of the fractional Brownian motion. In the third section, we obtain a lower bound for the error $\Delta_N$ and calculate the constants $c_1$ and $c_2$.

Simulation of a vector $(B^H(i/N))_{1\le i\le N}$
Let us briefly consider the method proposed by Wood and Chan [9]. Let $G$ be the autocovariance matrix of the increments of $(B^H(1/N),\dots,B^H(N/N))$; it is a Toeplitz matrix and can be embedded in a circulant $m\times m$ matrix $C$.

Proposition 1. Let $m=2^{1+\nu}$, where $2^\nu$ is the minimum power of 2 not less than $N$. Then the matrix $C$ admits the representation $C=QJQ^{*}$, where $J$ is the diagonal matrix of eigenvalues of $C$, and $Q$ is the unitary discrete Fourier matrix with elements $q_{jk}=m^{-1/2}e^{-2\pi \mathrm{i}jk/m}$; the eigenvalues $\lambda_k$ of $C$ are given by the discrete Fourier transform of the first row of $C$.

Since $Q$ is unitary, we can set $Y=QJ^{1/2}Q^{*}Z$, where $Z\sim N(0,I_m)$. Therefore, we get $Y\sim N(0,C)$. Thus, the distribution of the first $N$ coordinates of $Y$ coincides with that of the target Gaussian vector. The method of Wood and Chan is exact and has complexity $O(N\log N)$. A more detailed description of the algorithm, a comparison with other methods of simulation of the fractional Brownian motion, and program code are contained in the paper [3]. For reasons of computational efficiency, the simulations in the present paper are performed by the method of Wood and Chan.
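The circulant-embedding construction can be sketched as follows. This is a simplified variant in the spirit of the Wood–Chan method, written with NumPy FFTs; it is not the authors' code, and for brevity the embedding size is fixed at $m=2N$ (which is nonnegative definite for fractional Gaussian noise with $H\le1/2$, the regime of interest here).

```python
import numpy as np

def fbm_grid(N, H, rng):
    """Sample (B^H(1/N), ..., B^H(N/N)) by circulant embedding of the
    autocovariance of the increments (fractional Gaussian noise).
    A sketch in the spirit of Wood-Chan, not the authors' exact code."""
    # fGn autocovariance on the unit-step grid, lags 0..N
    k = np.arange(N + 1, dtype=float)
    g = 0.5 * (np.abs(k - 1)**(2*H) - 2*k**(2*H) + (k + 1)**(2*H))
    # first row of the circulant embedding C, size m = 2N:
    # (g_0, ..., g_N, g_{N-1}, ..., g_1)
    row = np.concatenate([g, g[-2:0:-1]])
    lam = np.fft.fft(row).real          # eigenvalues of C (row is symmetric)
    lam = np.clip(lam, 0.0, None)       # guard against round-off negatives
    m = row.size
    # complex Gaussian noise; real part of Y below is exactly N(0, C)
    V = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    Y = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * V)
    fgn = Y.real[:N]                    # unit-step fGn sample
    return np.cumsum(fgn) * N**(-H)     # rescale step to 1/N and integrate
```

The cost is dominated by two FFTs of length $2N$, matching the stated $O(N\log N)$ complexity.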
The estimate of the mean value $\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)$ is a sample mean over a sample of size $n$. That is why the total complexity of the algorithm is $O(nN\log N)$.
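The sample-mean estimator itself can be sketched as follows. For brevity, this hypothetical example factorizes the exact fBm covariance by a Cholesky decomposition ($O(N^3)$) instead of the $O(N\log N)$ Wood–Chan method, so it is practical only for moderate $N$; the function name and parameters are ours.

```python
import numpy as np

def expected_max_estimate(N, H, n, seed=0):
    """Sample-mean estimate of E max_{i=1..N} B^H(i/N), using a Cholesky
    factor of the exact fBm covariance (illustration only; the paper uses
    the O(N log N) Wood-Chan simulation instead)."""
    t = np.arange(1, N + 1) / N
    # Cov(B^H(s), B^H(t)) = (s^{2H} + t^{2H} - |s-t|^{2H}) / 2
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))  # jitter for round-off
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal((n, N)) @ L.T      # n fBm grid paths
    return samples.max(axis=1).mean()
```

For $H=1/2$ this reproduces the Wiener case, where the discrete expected maximum lies slightly below $\sqrt{2/\pi}$.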

Clark's method
Instead of generating samples and computing sample means, one can use the method of Clark [2] for approximating the expected maximum of a Gaussian vector: the coordinates are folded in one at a time, and at each step the running maximum is replaced by a Gaussian random variable with the same first two moments.
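A minimal sketch of Clark's approximation, assuming the standard sequential moment-matching form of the method (the function name and numerical guards are ours):

```python
import math
import numpy as np

def clark_expected_max(mu, cov):
    """Clark's sequential approximation to E max_i X_i for X ~ N(mu, cov):
    fold variables in one by one, replacing the running maximum by a
    Gaussian with matched mean and variance (exact for two variables)."""
    ncdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    npdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    cov = np.asarray(cov, dtype=float)
    m1, s1 = float(mu[0]), cov[0, 0]   # moments of the running maximum M
    r = cov[0].copy()                  # Cov(M, X_j) for all j
    for j in range(1, len(mu)):
        m2, s2 = float(mu[j]), cov[j, j]
        a = math.sqrt(max(s1 + s2 - 2.0 * r[j], 1e-300))
        alpha = (m1 - m2) / a
        Phi, phi = ncdf(alpha), npdf(alpha)
        # Clark's formulas for the first two moments of max(M, X_j)
        mean = m1 * Phi + m2 * (1.0 - Phi) + a * phi
        second = ((m1 * m1 + s1) * Phi + (m2 * m2 + s2) * (1.0 - Phi)
                  + (m1 + m2) * a * phi)
        r = r * Phi + cov[j] * (1.0 - Phi)   # Cov(max(M, X_j), X_k)
        m1, s1 = mean, max(second - mean * mean, 0.0)
    return m1
```

For two i.i.d. standard normals the method is exact and returns $1/\sqrt{\pi}$; for larger independent families it remains a close approximation.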

Computing the expected maximum
In this section, we present results of approximate computing of $\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)$ by generating random samples and by applying Clark's method. Also, we compare the computational results obtained by these two methods.

Approximation error of $\frac{1}{N}\sum_{i=0}^{N}B^H(i/N)$

We compute the sample mean and variance of (7), i.e., of the grid average $\frac{1}{N}\sum_{i=0}^{N}B^H(i/N)$. The theoretical moments of (7) are known: its mean is zero, and its variance tends to the variance of $\int_0^1 B^H(t)\,dt$ as $N\to\infty$.
The sample moments of (7) for $H\in\{10^{-4}(1+4i),\ i=\overline{0,24}\}$ and $N\in\{2^j,\ j=\overline{8,19}\}$ are presented in Figs. 1 and 2. In the figures, the lines indicate the theoretical moments and the confidence intervals corresponding to the reliability of 95%. The data confirm the correctness of the calculations of (7) with the reliability of 95% even for small values of $H$.
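Such a check of the grid average can be sketched as follows. Integrating the fBm covariance $\frac12(t^{2H}+s^{2H}-|t-s|^{2H})$ over the unit square gives $\operatorname{Var}\int_0^1 B^H(t)\,dt=\frac{1}{2H+2}$, which the variance of the grid average approaches. The Cholesky-based sampler below is our illustration, not the paper's code.

```python
import numpy as np

def grid_average_moments(N, H, n, seed=3):
    """Sample mean and variance of the grid average (1/N) sum_i B^H(i/N);
    the variance should be close to 1/(2H+2), the variance of the
    continuous-time average of B^H over [0,1]."""
    t = np.arange(1, N + 1) / N
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))
    rng = np.random.default_rng(seed)
    avg = (rng.standard_normal((n, N)) @ L.T).mean(axis=1)
    return avg.mean(), avg.var()
```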

Bounds for the approximation error
In this section, we find bounds for the error of approximation (2). As noted before, $\mathbf{E}\max_{t\in[0,1]}B^H(t)\ge(4H\pi e\ln2)^{-1/2}$. It is expected that the obtained sample means of the maximum functional (6) also satisfy this constraint. In Fig. 3, the sample means and the values of $(4H\pi e\ln2)^{-1/2}$ are presented. As one can see, the inequality is violated for small values of $H$. There are two possible explanations of this fact: either there is a significant error in the calculations, or the approximation error $\Delta_N$ grows rapidly as $H\to0$. Let us verify these two explanations.
From [1, Theorem 4.2] we get that the expectation of the maximum functional (6) grows as $H\to0$ and has the limit
$$\lim_{H\to0}\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)=\frac{1}{\sqrt{2}}\,\mathbf{E}\Bigl(\max_{i=\overline{1,N}}\xi_i\Bigr)^{+},\qquad(8)$$
where $\xi_1,\dots,\xi_N$ are i.i.d. random variables, $\xi_1\sim N(0,1)$, and $x^+:=\max\{0,x\}$. Moreover, the rate of convergence in (8) is also obtained in [1] (inequality (9)). The right-hand side of (9) does not exceed 0.1 when $N=2^{20}$ and $H<0.0038$. We apply two approaches to calculate $\frac{1}{\sqrt2}\mathbf{E}\bigl(\max_{i=\overline{1,N}}\xi_i\bigr)^{+}$. The first one is Monte Carlo simulation.
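The Monte Carlo approach can be sketched as follows (a minimal illustration; the function name and defaults are ours).

```python
import numpy as np

def mc_half_max_plus(N, n, seed=5):
    """Sample mean of (1/sqrt(2)) * (max_{i<=N} xi_i)^+ over n replications,
    with xi_i i.i.d. standard normal. Drawing one row of normals at a time
    keeps the memory footprint at O(N)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n):
        total += max(rng.standard_normal(N).max(), 0.0)  # the positive part
    return total / (n * np.sqrt(2))
```

For small $N$ the target values are known in closed form, e.g. $\frac{1}{\sqrt2}\mathbf{E}\,\xi_1^+=\frac{1}{2\sqrt{\pi}}\approx0.2821$ for $N=1$.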
The sample means of $\frac{1}{\sqrt2}\bigl(\max_{i=\overline{1,N}}\xi_i\bigr)^{+}$ are presented in Table 2 for several sample sizes $n$.
As we see, when the sample size increases 20 times, the sample means differ by at most 0.33% for each $N$. Therefore, to ensure the accuracy of calculations, it suffices to put $n=1000$. Under such conditions, technical resources allow us to calculate the sample means for larger values of $N$. In Table 3, the values of the sample means are presented for $N\in\{2^{20},2^{21},2^{22},2^{23},2^{24},2^{25}\}$. Instead of generating random samples, we may calculate the value of $\frac{1}{\sqrt2}\mathbf{E}\bigl(\max_{i=\overline{1,N}}\xi_i\bigr)^{+}$ directly.

Proposition 2. For any $N\ge1$,
$$\frac{1}{\sqrt{2}}\,\mathbf{E}\Bigl(\max_{i=\overline{1,N}}\xi_i\Bigr)^{+}=N\int_{0.5}^{1}\operatorname{erf}^{(-1)}(2z-1)\,z^{N-1}\,dz,\qquad(10)$$
where $\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-y^{2}}\,dy$, $x\in\mathbb{R}$, and $\operatorname{erf}^{(-1)}$ is the inverse function of the error function $\operatorname{erf}$.
Proof. The proposition follows straightforwardly by quantile transformation.
We immediately get the following corollary.

Corollary 1.
For any $H\in(0,1)$ and $N\ge1$, we have inequality (11). The integrand in (11) is not an elementary function, but its values are tabulated, and there exist methods for its numerical computation. For the present paper, the integral $N\int_{0.5}^{1}\operatorname{erf}^{(-1)}(2z-1)\,z^{N-1}\,dz$ is calculated numerically, and the corresponding values are presented in Tables 2 and 3. While maintaining the accuracy of the calculations, the maximum possible value of $N$ is $2^{31}$, for which the value of the integral reaches 4.390.
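The second approach, numerical evaluation of the integral in (10), can be sketched with standard-library tools only. After the substitution $z=\Phi(x)$ (so that $\operatorname{erf}^{(-1)}(2z-1)=x/\sqrt2$), the integral becomes $\frac{N}{\sqrt2}\int_0^\infty x\,\varphi(x)\,\Phi(x)^{N-1}\,dx$, which removes the endpoint singularity at $z=1$. The truncation point and grid size below are our choices, not the paper's.

```python
import math
from statistics import NormalDist

def half_expected_max_plus(N, grid=20000, xmax=12.0):
    """(1/sqrt(2)) * E (max_{i<=N} xi_i)^+ for i.i.d. standard normals,
    via trapezoidal quadrature of N/sqrt(2) * int_0^xmax x phi(x) Phi(x)^{N-1} dx
    (equivalent to the erf^{-1} integral after the substitution z = Phi(x))."""
    nd = NormalDist()
    h = xmax / grid
    total = 0.0
    for k in range(grid + 1):
        x = k * h
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        f = x * phi * nd.cdf(x) ** (N - 1)
        w = 0.5 if k in (0, grid) else 1.0   # trapezoidal endpoint weights
        total += w * f
    return N * h * total / math.sqrt(2.0)
```

For $N=1$ and $N=2$ the exact values are $\frac{1}{2\sqrt\pi}\approx0.28209$ and $\approx0.48157$, which the quadrature reproduces.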
The values of $\frac{1}{\sqrt2}\mathbf{E}\bigl(\max_{i=\overline{1,N}}\xi_i\bigr)^{+}$, obtained by the two methods, differ by at most 0.44% when $N\le2^{24}$. When $N=2^{20}$, the absolute error of the numerical computation of (10) is less than $1.3\times10^{-5}$, which yields a numerical form of inequality (11) for $N=2^{20}$. Let us return to the lower bound for $\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)$. By Sudakov's inequality [1, 7] we have
$$\mathbf{E}\max_{i=\overline{1,N}}B^H(i/N)\ \ge\ \frac{N^{-H}\sqrt{\ln N}}{\sqrt{2\pi\ln 2}}.\qquad(13)$$
Moreover, the maximum of the right-hand side of (13) over $N$ equals $(4H\pi e\ln2)^{-1/2}$ and is reached when $N=[e^{1/(2H)}]$. The values of the lower bound are presented in Table 4.
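The claim about the maximizer of the right-hand side of (13) can be verified numerically. The closed form used below, $N^{-H}\sqrt{\ln N}/\sqrt{2\pi\ln2}$, is our reconstruction of that right-hand side, chosen to be consistent with the stated maximum $(4H\pi e\ln2)^{-1/2}$ at $N=[e^{1/(2H)}]$; treat it as an assumption.

```python
import math

def sudakov_rhs(N, H):
    """Assumed numeric form of the right-hand side of (13)."""
    return N**(-H) * math.sqrt(math.log(N)) / math.sqrt(2 * math.pi * math.log(2))

def best_lower_bound(H):
    """Maximize the RHS of (13) over integers near the analytic optimum
    N = e^{1/(2H)}; the result should match (4 H pi e ln 2)^{-1/2}."""
    n_star = max(2, round(math.exp(1.0 / (2.0 * H))))
    candidates = range(max(2, n_star - 5), n_star + 6)
    return max(sudakov_rhs(n, H) for n in candidates)
```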
Therefore, even with small values of the parameter $H$, the simulation does not lead to a contradiction. Now let us find a lower bound for the approximation error $\Delta_N$. We prove the following proposition.

Proposition 3. Let $N=2^{20}$. Then, for all $H\in(0,1)$,
$$\Delta_N>c_1H^{-1/2}-c_2,$$
where the constants $c_1=0.2055$ and $c_2=3.4452$ are calculated numerically.

From this it follows that, for a fixed $N$, the approximation error $\Delta_N\to+\infty$ as $H\to0$. We also have the following evident corollaries.
Thus, we conclude that the estimation of $\mathbf{E}\max_{t\in[0,1]}B^H(t)$ by Monte Carlo simulations leads to significant errors for small values of the parameter $H$.