Modern Stochastics: Theory and Applications


On a bound of the absolute constant in the Berry–Esseen inequality for i.i.d. Bernoulli random variables
Volume 5, Issue 3 (2018), pp. 385–410
Anatolii Zolotukhin   Sergei Nagaev   Vladimir Chebotarev  

https://doi.org/10.15559/18-VMSTA113
Pub. online: 14 September 2018      Type: Research Article      Open Access

Received: 30 January 2018
Revised: 22 August 2018
Accepted: 25 August 2018
Published: 14 September 2018

Abstract

It is shown that the absolute constant in the Berry–Esseen inequality for i.i.d. Bernoulli random variables is strictly less than the Esseen constant if $1\le n\le 500000$, where n is the number of summands. This result is obtained with the help of both a supercomputer and an interpolation theorem, which is proved in the paper as well. In addition, applying the method developed by S. Nagaev and V. Chebotarev in 2009–2011, an upper bound is obtained for the absolute constant in the Berry–Esseen inequality in the case under consideration, which differs from the Esseen constant by no more than 0.06%. As an auxiliary result, we prove a bound in the local de Moivre–Laplace theorem which has a simple and explicit form.
Although the best possible result was obtained by J. Schulz in 2016, we propose our own approach to the problem of finding the absolute constant in the Berry–Esseen inequality for two-point distributions, since this approach, combining analytical methods with the use of computers, could be useful in solving other mathematical problems.

1 Introduction

Let us consider the class V of all probability distributions on the real line $\mathbb{R}$ that have zero mean, unit variance and a finite third absolute moment. Let $X,\hspace{0.1667em}{X_{1}},\hspace{0.1667em}{X_{2}},\hspace{0.1667em}\dots \hspace{0.1667em},{X_{n}}$ be i.i.d. random variables, where the distribution of X belongs to V. Denote
\[ \varPhi (x)=\frac{1}{\sqrt{2\pi }}{\underset{-\infty }{\overset{x}{\int }}}{e}^{-{t}^{2}/2}\hspace{0.1667em}dt,\hspace{2em}{\beta _{3}}=\mathbf{E}|X{|}^{3}.\]
According to the Berry–Esseen inequality [2, 5], there exists an absolute constant ${C_{0}}$ such that for all $n=1,\hspace{0.1667em}2,\hspace{0.1667em}\dots \hspace{0.2778em}$,
(1)
\[ \underset{x\in \mathbb{R}}{\sup }\Bigg|\mathbf{P}\Bigg(\frac{1}{\sqrt{n}}{\sum \limits_{j=1}^{n}}{X_{j}}<x\Bigg)-\varPhi (x)\Bigg|\le \frac{{C_{0}}{\beta _{3}}}{\sqrt{n}}.\]
The first upper bounds for the constant ${C_{0}}$ were obtained by C.-G. Esseen [5] (1942), H. Bergström [1] (1949) and K. Takano [30] (1951).
In 1956 C.-G. Esseen [6] showed that
(2)
\[ \underset{n\to \infty }{\lim }\frac{\sqrt{n}}{{\beta _{3}}}\underset{x\in \mathbb{R}}{\sup }\Bigg|\mathbf{P}\Bigg(\frac{1}{\sqrt{n}}{\sum \limits_{j=1}^{n}}{X_{j}}<x\Bigg)-\varPhi (x)\Bigg|\le {C_{E}},\]
where ${C_{E}}=\frac{3+\sqrt{10}}{6\sqrt{2\pi }}=0.409732\hspace{0.1667em}\dots \hspace{0.2778em}$. He also found a two-point distribution for which equality holds in (2), and proved its uniqueness (up to a reflection).
Consequently, ${C_{0}}\ge {C_{E}}$. The result of Esseen served as an argument for the conjecture
(3)
\[ {C_{0}}={C_{E}},\]
which V.M. Zolotarev advanced in 1966 [38]. Whether the conjecture is correct remains an open question.
Since then, a number of upper bounds for ${C_{0}}$ have been obtained. A historical review can be found, for example, in [11, 17, 28]. We only note that recent results in this field were obtained by I.S. Tyurin (see, for example, [31–35]), V.Yu. Korolev and I.G. Shevtsova (see, for example, [11, 13]), and I.G. Shevtsova (see, for example, [25–29]). The best upper estimate known to date belongs to Shevtsova: ${C_{0}}\le 0.469$ [28]. Note that, beginning with the estimates in [38, 39], calculations have played an essential role in obtaining upper bounds; because of the large amount of computation, the use of computers became necessary.
The present paper is devoted to estimation of ${C_{0}}$ in the particular case of i.i.d. Bernoulli random variables. In this case we will use the notation ${C_{02}}$ instead of ${C_{0}}$. Let us recall the chronology of the results along these lines.
In 2007 C. Hipp and L. Mattner published an analytical proof of the inequality ${C_{02}}\le \frac{1}{\sqrt{2\pi }}$ in the symmetric case [8].
In 2009 the second and third authors of the present paper suggested a compound method in which a refinement of the C.L.T. for i.i.d. Bernoulli random variables is combined with direct calculations [17]. In the asymmetric case this method allows one to obtain majorants for ${C_{02}}$ arbitrarily close to ${C_{E}}$, provided that a sufficiently powerful computer is available. The main content of the preprint [17] was published in 2011–2012 in the papers [18, 19], where the bound ${C_{02}}<0.4215$ was proved.
In 2015 we obtained the bound
(4)
\[ {C_{02}}\le 0.4099539,\]
by applying the same approach as in [17–19], with the only difference that a supercomputer was used instead of an ordinary PC. We announced bound (4) in [20] but, for a number of reasons, delayed publishing the proof until now. While the present work was in preparation, we detected a small inaccuracy in the calculations: bound (4) must be increased by ${10}^{-7}$. Thus the following statement is true.
Theorem 1.
The bound
(5)
\[ {C_{02}}\le 0.409954\]
holds.
Meanwhile, in 2016 J. Schulz [23] obtained the unimprovable result: if the symmetry condition is violated, then ${C_{02}}={C_{E}}$. As was to be expected, Schulz's proof turned out to be very long and complicated. It should be said that computer-based methods and analytical methods complement each other. The former cannot lead to a final result, but they require much less effort; moreover, they allow one to predict the exact result and thus facilitate theoretical research.

2 Shortly about the proof of Theorem 1

2.1 Some notations. On the choice of the left boundary of the interval for p

Let $X,\hspace{0.1667em}{X_{1}},\hspace{0.1667em}{X_{2}},\dots ,\hspace{0.1667em}{X_{n}}$ be a sequence of independent random variables with the same distribution:
(6)
\[ \mathbf{P}(X\hspace{-0.1667em}=\hspace{-0.1667em}1)\hspace{-0.1667em}=\hspace{-0.1667em}p,\hspace{1em}\mathbf{P}(X\hspace{-0.1667em}=\hspace{-0.1667em}0)=q=1-p.\]
In what follows we use the following notations,
(7)
\[\begin{aligned}{}& \hspace{-0.1667em}{F_{n,p}}(x)\hspace{-0.1667em}=\hspace{-0.1667em}\mathbf{P}\Bigg({\sum \limits_{i=1}^{n}}{X_{i}}<x\Bigg),\hspace{1em}{G_{n,p}}(x)\hspace{-0.1667em}=\hspace{-0.1667em}\varPhi \bigg(\frac{x-np}{\sqrt{npq}}\bigg),\\{} & {\Delta _{n}}(p)\hspace{-0.1667em}=\hspace{-0.1667em}\underset{x\in \mathbb{R}}{\sup }|{F_{n,p}}(x)-{G_{n,p}}(x)|,\hspace{1em}\varrho (p)\hspace{-0.1667em}=\hspace{-0.1667em}\frac{\mathbf{E}|X-p{|}^{3}}{{(\mathbf{E}{(X-p)}^{2})}^{3/2}}\hspace{-0.1667em}=\hspace{-0.1667em}\frac{{p}^{2}+{q}^{2}}{\sqrt{pq}},\\{} & {T_{n}}(p)\hspace{-0.1667em}=\hspace{-0.1667em}\frac{{\Delta _{n}}(p)\sqrt{n}}{\varrho (p)},\hspace{1em}\mathcal{E}(p)=\frac{2-p}{3\sqrt{2\pi }\hspace{0.1667em}[{p}^{2}+{(1-p)}^{2}]}.\end{aligned}\]
Obviously,
(8)
\[ {C_{02}}=\underset{n\ge 1}{\sup }\underset{p\in (0,0.5]}{\sup }{T_{n}}(p).\]
In this paper we solve, in particular, the problem of computing the sequence $T(n)=\underset{p\in (0,0.5]}{\sup }{T_{n}}(p)$ for all n such that $1\le n\le {N_{0}}$. Here and in what follows,
\[ {N_{0}}=5\cdot {10}^{5}.\]
Note that for fixed n and p, the quantity $\underset{x\in \mathbb{R}}{\sup }|{F_{n,p}}(x)-{G_{n,p}}(x)|$ is attained at some discontinuity point of the function ${F_{n,p}}(x)$ (see Lemma 2). We consider distribution functions that are continuous from the left. Consequently,
(9)
\[ {\Delta _{n}}(p)=\underset{0\le i\le n}{\max }{\Delta _{n,i}}(p),\]
where i runs over the integers, and ${\Delta _{n,i}}(p)=\max \{|{F_{n,p}}(i)-{G_{n,p}}(i)|,\hspace{0.1667em}|{F_{n,p}}(i+1)-{G_{n,p}}(i)|\}$.
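For moderate n, the quantities in (7) and (9) are straightforward to evaluate directly. The following minimal Python sketch (our illustration only; the computations reported below were done in C with MPI) computes ${\Delta _{n}}(p)$ over the discontinuity points of ${F_{n,p}}$ and then ${T_{n}}(p)$:

```python
from math import comb, erf, sqrt

def Phi(x):
    # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Delta_n(n, p):
    # Delta_n(p) via (9): maximum over the discontinuity points i = 0..n
    # of the left-continuous binomial c.d.f. F_{n,p}
    q = 1.0 - p
    sigma = sqrt(n * p * q)
    pmf = [comb(n, k) * p**k * q**(n - k) for k in range(n + 1)]
    F = [0.0]                      # F[i] = P(S_n < i)
    for w in pmf:
        F.append(F[-1] + w)
    best = 0.0
    for i in range(n + 1):
        G = Phi((i - n * p) / sigma)
        best = max(best, abs(F[i] - G), abs(F[i + 1] - G))
    return best

def T_n(n, p):
    # T_n(p) = sqrt(n) * Delta_n(p) / rho(p), rho(p) = (p^2 + q^2) / sqrt(pq)
    q = 1.0 - p
    rho = (p * p + q * q) / sqrt(p * q)
    return sqrt(n) * Delta_n(n, p) / rho
```

For $n=1$ and $p=0.5$ this returns ${T_{1}}(0.5)=\varPhi (1)-0.5\approx 0.3413$.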
Note also that we can vary the parameter p in a narrower interval than $[0,0.5]$, namely, in
\[ I:=[0.1689,0.5].\]
This conclusion follows from the next statement.
Lemma 1.
If $0<p\le 0.1689$, then for all $n\hspace{-0.1667em}\ge \hspace{-0.1667em}1$,
(10)
\[ {T_{n}}(p)\hspace{-0.1667em}<\hspace{-0.1667em}0.4096.\]
Lemma 1 is proved in Section 4 with the help of a modification of the Berry–Esseen inequality (with numerical constants) obtained in [10, 12].
Remark 1.
By the same method that is used to prove inequality (10), the estimate ${T_{n}}(p)\le 0.369$ was obtained in [19] for the case $0<p<0.02$, $n\ge 1$ (see the proof of (1.37) in [19]); there an earlier estimate of V. Korolev and I. Shevtsova [11] is used instead of [10, 12]. Note that the use of modified Berry–Esseen-type inequalities, obtained in [10, 12, 11], is not necessary for estimating ${T_{n}}(p)$ when p is close to 0.
An alternative approach, using the Poisson approximation, was proposed in the preprint [17]. Let us explain the essence of this method.
An alternative bound is found in the domain $\{(p,n):\hspace{0.1667em}0.258\le \lambda \le 6,\hspace{0.1667em}n\ge 200\}$, where $\lambda =np$. Under these conditions $p\le 0.03$, i.e. p is small. Consequently, the error arising from replacing the binomial distribution by the Poisson distribution ${\varPi _{\lambda }}$ with parameter λ is small.
Next, the distance $d({\varPi _{\lambda }},{G_{\lambda }})$ between ${\varPi _{\lambda }}$ and normal distribution ${G_{\lambda }}$ with the mean λ and the variance λ is estimated, where $d(U,V)=\underset{x\in \mathbb{R}}{\sup }|U(x)-V(x)|$ for any distribution functions $U(x)$ and $V(x)$. Then the estimate of the distance between ${G_{\lambda }}$ and the normal distribution ${G_{n,p}}$ with the mean λ and variance $npq$ is deduced. Summing the obtained estimates, we arrive at an estimate for the distance between the original binomial distribution and ${G_{n,p}}$. As a result, in [17, Lemma 7.8, Theorem 7.2] we derive the estimate ${T_{n}}(p)<0.3607$, which is valid for all points $(p,n)$ in the indicated domain.
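The chain of approximations just described is easy to illustrate numerically. The following Python sketch is ours, not taken from [17]; since the binomial and Poisson distribution functions jump only at integers, evaluating them at lattice points gives the exact sup-distance between them, and a lower approximation of the distance to the continuous ${G_{\lambda }}$. We take $n=300$, $p=0.01$, $\lambda =3$:

```python
from math import comb, erf, exp, factorial, sqrt

def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binom_cdf(n, p, i):
    # P(S_n < i), left-continuous convention
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i))

def poisson_cdf(lam, i):
    # P(Pi_lambda < i)
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(i))

def sup_dist(u, v, pts):
    # d(U, V) evaluated over the lattice points
    return max(abs(u(i) - v(i)) for i in pts)

n, p = 300, 0.01
lam = n * p
pts = range(0, 41)
d1 = sup_dist(lambda i: binom_cdf(n, p, i),
              lambda i: poisson_cdf(lam, i), pts)          # binomial vs Pi_lambda
d2 = sup_dist(lambda i: poisson_cdf(lam, i),
              lambda i: Phi((i - lam) / sqrt(lam)), pts)   # Pi_lambda vs G_lambda
```

Both distances are small, in line with the two-step estimation scheme described above.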

2.2 On calculations

Define
\[ {C_{02}}(N)=\underset{1\le n\le N}{\max }\underset{p\in (0,0.5]}{\sup }{T_{n}}(p),\hspace{1em}{\overline{C}_{02}}(N)=\underset{n\ge N}{\sup }\underset{p\in (0,0.5]}{\sup }{T_{n}}(p).\]
Obviously, ${C_{02}}=\max \{{C_{02}}(N),{\overline{C}_{02}}(N+1)\}$ for every $N\ge 1$.
It was proved in [19] that ${\overline{C}_{02}}(200)<0.4215$. By that time it was shown with the help of a computer (see the preprint [9]) that ${C_{02}}(200)<0.4096$, i.e.
(11)
\[ {C_{02}}(200)<{C_{E}},\]
and thus, ${C_{02}}<0.4215$ for all $n\ge 1$.
A few words about bound (11). By (8), to obtain ${C_{02}}(N)$ it suffices to calculate $T(n)=\underset{p\in (0,0.5]}{\sup }{T_{n}}(p)$ for every $1\le n\le N$ and then find $\underset{1\le n\le N}{\max }T(n)$. The calculation of $T(n)$ reduces to two problems: the first is to calculate $\underset{{p_{j}}\in S}{\max }{T_{n}}({p_{j}})$, where S is a grid on $(0,0.5]$, and the second is to estimate ${T_{n}}(p)$ at the intermediate points p. Both problems were solved in [9] for $1\le n\le 200$.
It should be noted that, according to the method, the quantity ${C_{02}}(N)$ is calculated (with some accuracy), while ${\overline{C}_{02}}(N)$ is estimated from above. In both cases a computer is required. The power of an ordinary PC suffices for calculating majorants of ${\overline{C}_{02}}(N)$, whereas calculating ${C_{02}}(N)$ requires a supercomputer if N is sufficiently large. Moreover, to draw a convincing conclusion from the computer calculations of ${C_{02}}(N)$, an additional result of interpolation type is required; in our paper, Theorem 2 plays this role.
Denote by S the uniform grid on I with the step $h={10}^{-12}$. The values of ${T_{n}}({p_{j}})$ for all ${p_{j}}\in S$ and $1\le n\le {N_{0}}$ were calculated on a supercomputer.
The result of the calculations: for all $1\le n\le {N_{0}}$,
(12)
\[ \underset{{p_{j}}\in S}{\max }{T_{n}}({p_{j}})={T_{{N_{1}}}}(\overline{p})=0.40973212897643\dots <0.40973213.\]
The algorithm is a triple loop: a loop over the parameter i (see (9)) is nested in a loop over the parameter p, which in turn is nested in a loop over the parameter n.
With the growth of n, the computation time increased rapidly. For example, for $2000\le n\le 2100$ the calculations took more than 3 hours on a computer with a Core2Duo E6400 processor. For $2101\le n\le {N_{0}}$ the calculations were carried out on the Blue Gene/P supercomputer.
It follows from [20, Corollary 7] that for $n>200$, in the loop over i one can take not all values of i from 0 to n, but only those satisfying the inequality
\[ np-(\nu +1)\sqrt{npq}\hspace{-0.1667em}\le \hspace{-0.1667em}i\le np+\nu \sqrt{npq},\]
where $\nu \hspace{-0.1667em}=\hspace{-0.1667em}\sqrt{3+\sqrt{6}}$. This led to a significant reduction of the computation time. Information about the computer time (excluding time spent waiting in the queue) is given in Table 1.
Table 1.
Dependence of computer time on n (supercomputer Blue Gene/P)
$n\in [{N_{1}},{N_{2}}]$: computer time
$[10000,11024]$: 3 min
$[30000,50000]$: 2 hrs 5 min
$[300000,320000]$: 4 hrs 50 min
$[490000,{N_{0}}]$: 7 hrs
Calculations were carried out on the Blue Gene/P supercomputer of the Faculty of Computational Mathematics and Cybernetics of Lomonosov Moscow State University. After some changes in the algorithm, the calculations for $490000\le n\le {N_{0}}$ were also performed on the CC FEB RAS Computing Cluster [41]; the corresponding computer time was 6 hours 40 minutes.
The program is written in C+MPI and registered [40].
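The reduced inner loop described above can be sketched as follows (a Python illustration of the constraint from [20, Corollary 7], not the registered C+MPI program; the window is clipped to $[0,n]$):

```python
from math import ceil, floor, sqrt

NU = sqrt(3.0 + sqrt(6.0))   # nu = sqrt(3 + sqrt(6)) from [20, Corollary 7]

def i_range(n, p):
    # integers i with np - (nu+1)*sigma <= i <= np + nu*sigma, clipped to [0, n]
    sigma = sqrt(n * p * (1.0 - p))
    lo = max(0, ceil(n * p - (NU + 1.0) * sigma))
    hi = min(n, floor(n * p + NU * sigma))
    return range(lo, hi + 1)
```

For $n={10}^{4}$, $p=0.3$ the window contains about 260 values of i instead of $n+1$, since its length is of order $(2\nu +1)\sqrt{npq}$, i.e. $O(\sqrt{n})$.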

2.3 Interpolation type results

Let ${p}^{\ast }\in (0,0.5)$. Consider a uniform grid on $[{p}^{\ast },0.5]$ with a step h. The following statement allows one to estimate the value of the function $\frac{1}{\varrho (p)}\hspace{0.1667em}{\Delta _{n,k}}(p)$ at an arbitrary point of the interval $[{p}^{\ast },0.5]$ via the value of this function at the nearest grid node and h.
Denote
(13)
\[ {c_{1}}\hspace{-0.1667em}=\hspace{-0.1667em}0.516,\hspace{1em}{c_{2}}\hspace{-0.1667em}=\hspace{-0.1667em}0.121,\hspace{1em}{c_{3}}\hspace{-0.1667em}=\hspace{-0.1667em}0.271.\]
Theorem 2.
Let $0<{p}^{\ast }<p\le 0.5$, and let ${p^{\hspace{0.1667em}\prime }}$ be the node of a grid with a step h on the interval $[{p}^{\ast },0.5]$ that is closest to p. Then for all $n\ge 1$ and $0\le k\le n$,
\[ \bigg|\frac{1}{\varrho (p)}\hspace{0.1667em}{\Delta _{n,k}}(p)-\frac{1}{\varrho ({p^{\prime }})}\hspace{0.1667em}{\Delta _{n,k}}\big({p^{\prime }}\big)\bigg|\le \frac{h}{2}\hspace{0.1667em}L\big({p}^{\ast }\big),\]
where
(14)
\[ L(p)\hspace{-0.1667em}=\hspace{-0.1667em}\frac{1}{(1-2pq)\sqrt{pq}}\bigg(\frac{{c_{1}}}{p}+{c_{2}}+{c_{3}}\hspace{0.1667em}\frac{(1-2p)(1+2pq)}{1-2pq}\bigg).\]
The next statement follows from Theorem 2. Note that without it the proof of Theorem 1 would be incomplete.
Corollary 1.
If $p\in I$, and ${p^{\prime }}$ is a node of the grid S, closest to p, then for all $1\le n\le {N_{0}}$,
\[ \big|{T_{n}}(p)-{T_{n}}\big({p^{\prime }}\big)\big|\le 4.6\cdot {10}^{-9}.\]
Proof.
It follows from Theorem 2 that for $0\le k\le n\le {N_{0}}$,
(15)
\[ \bigg|\frac{\sqrt{n}}{\varrho (p)}\hspace{0.1667em}{\Delta _{n,k}}(p)-\frac{\sqrt{n}}{\varrho ({p^{\prime }})}\hspace{0.1667em}{\Delta _{n,k}}\big({p^{\prime }}\big)\bigg|\le \sqrt{{N_{0}}}\hspace{0.1667em}\frac{1}{2}\hspace{0.1667em}{10}^{-12}\hspace{0.1667em}L(0.1689).\]
Since $L(0.1689)<12.98$, the right-hand side of inequality (15) is majorized by the number $4.6\cdot {10}^{-9}$. This implies the statement of Corollary 1.  □
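The numerical step of this proof is easy to reproduce. The following Python lines (an independent check using only formula (14)) confirm that $L(0.1689)<12.98$ and that the right-hand side of (15) is smaller than $4.6\cdot {10}^{-9}$:

```python
from math import sqrt

c1, c2, c3 = 0.516, 0.121, 0.271   # the constants (13)

def L(p):
    # L(p) as defined in (14)
    pq = p * (1.0 - p)
    return ((c1 / p + c2
             + c3 * (1 - 2 * p) * (1 + 2 * pq) / (1 - 2 * pq))
            / ((1 - 2 * pq) * sqrt(pq)))

N0, h = 5 * 10**5, 1e-12
rhs = sqrt(N0) * 0.5 * h * L(0.1689)   # right-hand side of (15)
```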

2.4 On the proof of Theorem 1

It follows from (12), Corollary 1 and Lemma 1 that for all $1\le n\le {N_{0}}$ and $p\in (0,0.5]$ the inequality ${T_{n}}(p)<0.4097321346<{C_{E}}$ holds (for details, see (64)). It is easy to verify that this inequality is true for $p\in (0.5,1)$ as well. Together with the estimates for $n>{N_{0}}$ obtained in Section 4, this yields inequality (5), that is, Theorem 1.

2.5 About structure of the paper

The structure of the paper is as follows. The proof of Theorem 2, the main analytical result of the paper, is given in Section 3. The proof consists of 12 lemmas.
In Section 4, Theorem 1 is proved. The section consists of three subsections. In the first one, the formulation of Theorem 1.1 of [19] is given, and several corollaries of it are deduced. The second subsection discusses the connection between the result of K. Neammanee [21], who refined and generalized Uspensky's estimate [36], and the problem of estimating ${C_{02}}$. It is shown that from the result of K. Neammanee one can obtain the same estimate for ${C_{02}}$ as ours, but for a much larger N. This means that calculating ${C_{02}}(N)$ would require much more computing time if Neammanee's estimate were used.
In the third subsection, we give, in particular, the proof of Lemma 1.

3 Proof of Theorem 2

We need the following statement, which we give without proof.
Lemma 2.
Let $G(x)$ be a distribution function with a finite number of discontinuity points, and ${G_{0}}(x)$ a continuous distribution function. Denote $\delta (x)=G(x)-{G_{0}}(x)$. There exists a discontinuity point ${x_{0}}$ of $G(x)$ such that the magnitude $\underset{x}{\sup }|\delta (x)|$ is attained in the following sense: if G is continuous from the left, then $\underset{x}{\sup }|\delta (x)|=\max \{\delta ({x_{0}}+),\hspace{0.1667em}-\delta ({x_{0}})\}$, and if G is continuous from the right, then $\underset{x}{\sup }|\delta (x)|=\max \{\delta ({x_{0}}),\hspace{0.1667em}-\delta ({x_{0}}-)\}$.
Define $f(t)=\mathbf{E}{e}^{it(X-p)}\equiv q{e}^{-itp}+p{e}^{itq}$.
Lemma 3.
For all $t\in \mathbb{R}$,
\[ |f(t)|\le \exp \bigg\{-2pq\hspace{0.1667em}{\sin }^{2}\frac{t}{2}\bigg\}.\]
Proof.
Taking into account the difference in the notations, we obtain the statement of Lemma 3 from [19, Lemma 8].  □
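Lemma 3 can also be verified directly: $|f(t){|}^{2}={p}^{2}+{q}^{2}+2pq\cos t=1-4pq{\sin }^{2}(t/2)$, and the elementary inequality $1-u\le {e}^{-u}$ yields the bound. A numerical sanity check in Python (ours; the tolerance guards only against rounding):

```python
import cmath
from math import exp, sin

def f(t, p):
    # f(t) = E e^{it(X-p)} = q e^{-itp} + p e^{itq}
    q = 1.0 - p
    return q * cmath.exp(-1j * t * p) + p * cmath.exp(1j * t * q)

def lemma3_holds(p, ts, eps=1e-12):
    # check |f(t)| <= exp(-2pq sin^2(t/2)) on the given points
    return all(abs(f(t, p)) <= exp(-2.0 * p * (1.0 - p) * sin(t / 2.0)**2) + eps
               for t in ts)
```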
Further, we will use the following notations:
\[ \sigma =\sqrt{npq},\hspace{1em}{\beta _{3}}(p)=\mathbf{E}|X-p{|}^{3},\]
Y is a standard normal random variable. Note that $\varrho (p)=\frac{{\beta _{3}}(p)}{{(pq)}^{3/2}}$.
Lemma 4.
The following bound is true for all $n\ge 2$,
\[ {\int _{|t|\le \pi }}|{f}^{n}(t)-{e}^{-npq{t}^{2}/2}|\hspace{0.1667em}dt<\frac{1}{{\sigma }^{2}}\hspace{0.1667em}\bigg(f(p,n)+\pi {\sigma }^{2}{e}^{-{\sigma }^{2}}+\frac{4}{\pi }\hspace{0.1667em}{e}^{-{\pi }^{2}{\sigma }^{2}/8}\bigg),\]
where
\[ f(p,n)=\big({p}^{2}+{q}^{2}\big)\hspace{0.1667em}\frac{{\pi }^{4}}{96}\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{2}+\frac{3{\pi }^{5}\sqrt{\pi pq}}{{2}^{10}\sqrt{n}}\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{5/2}.\]
Proof.
Using the equalities ${e}^{-pq{t}^{2}/2}=\mathbf{E}{e}^{it\sqrt{pq}\hspace{0.1667em}Y}$, $\mathbf{E}{(X-p)}^{j}=\mathbf{E}{(Y\sqrt{pq}\hspace{0.1667em})}^{j}$, $j=0,1,2$, and the Taylor formula, we get
(16)
\[\begin{array}{cc}& \displaystyle |f(t)-{e}^{-pq{t}^{2}/2}|\hspace{-0.1667em}=\hspace{-0.1667em}\Bigg|\mathbf{E}\Bigg[{\sum \limits_{j=1}^{2}}\frac{{(it(X-p))}^{j}}{j!}+\frac{{(it(X-p))}^{3}}{2}\hspace{-0.1667em}{\int _{0}^{1}}\hspace{-0.1667em}\hspace{-0.1667em}{(1-\theta )}^{2}\hspace{0.1667em}{e}^{it\theta (X-p)}\hspace{0.1667em}d\theta \Bigg]\\{} & \displaystyle -\mathbf{E}\Bigg[{\sum \limits_{j=1}^{3}}\frac{1}{j!}\hspace{0.1667em}{(it\sqrt{pq}\hspace{0.1667em}Y)}^{j}+\frac{{(it\sqrt{pq}\hspace{0.1667em}Y)}^{4}}{3!}{\int _{0}^{1}}{(1-\theta )}^{3}\hspace{0.1667em}{e}^{it\theta \sqrt{pq}\hspace{0.1667em}Y}\hspace{0.1667em}d\theta \Bigg]\Bigg|\\{} & \displaystyle =\Bigg|\mathbf{E}\Bigg[\frac{{(it(X-p))}^{3}}{2}{\int _{0}^{1}}{(1-\theta )}^{2}\hspace{0.1667em}{e}^{it\theta (X-p)}\hspace{0.1667em}d\theta \\{} & \displaystyle -\frac{{(it\sqrt{pq}\hspace{0.1667em}Y)}^{4}}{3!}{\int _{0}^{1}}{(1-\theta )}^{3}\hspace{0.1667em}{e}^{it\theta \sqrt{pq}\hspace{0.1667em}Y}\hspace{0.1667em}d\theta \Bigg]\Bigg|\le \frac{|t{|}^{3}}{6}\hspace{0.1667em}{\beta _{3}}(p)+\frac{{t}^{4}}{8}\hspace{0.1667em}{(pq)}^{2}.\end{array}\]
Since for $|x|\le \frac{\pi }{4}$ the inequality $|\sin x|\ge \frac{2\sqrt{2}\hspace{0.1667em}|x|}{\pi }$ is fulfilled, then with the help of Lemma 3 we arrive at the following bound for $|t|\le \pi /2$,
\[ |f(t)|\le \exp \big\{-2pq{\sin }^{2}(t/2)\big\}\le \exp \bigg\{-\frac{4{t}^{2}pq}{{\pi }^{2}}\bigg\}.\]
Then, taking into account the elementary equality ${a}^{n}-{b}^{n}=(a-b){\sum \limits_{j=0}^{n-1}}{a}^{j}{b}^{n-1-j}$ and the estimate (16), we obtain for $|t|\le \pi /2$ that
\[\begin{array}{c}\displaystyle |{f}^{n}(t)-{e}^{-npq{t}^{2}/2}|\le |f(t)-{e}^{-pq{t}^{2}/2}|{\sum \limits_{j=0}^{n-1}}|f(t){|}^{j}{e}^{-(n-1-j){t}^{2}pq/2}\le \\{} \displaystyle \le \bigg(\frac{|t{|}^{3}}{6}\hspace{0.1667em}{\beta _{3}}(p)+\frac{{t}^{4}}{8}\hspace{0.1667em}{(pq)}^{2}\bigg)\hspace{0.1667em}{\sum \limits_{j=0}^{n-1}}\exp \big\{\big[j\hspace{0.1667em}\big(1-8/{\pi }^{2}\big)-(n-1)\big]{t}^{2}pq/2\big\}\le \\{} \displaystyle \le \bigg(\frac{|t{|}^{3}}{6}\hspace{0.1667em}{\beta _{3}}(p)+\frac{{t}^{4}}{8}\hspace{0.1667em}{(pq)}^{2}\bigg)\hspace{0.1667em}n\hspace{0.1667em}\exp \bigg\{-\frac{4(n-1){t}^{2}pq}{{\pi }^{2}}\bigg\}.\end{array}\]
Using the well-known formulas $\mathbf{E}|Y{|}^{3}=\frac{4}{\sqrt{2\pi }}$ and $\mathbf{E}{Y}^{4}=3$, we deduce from the previous inequality that for $n\ge 2$,
(17)
\[\begin{array}{cc}& \displaystyle \underset{|t|\le \pi /2}{\int }|{f}^{n}(t)-{e}^{-npq{t}^{2}/2}|\hspace{0.1667em}dt\le n\sqrt{2\pi }\hspace{0.1667em}\bigg(\frac{{\beta _{3}}(p)}{6{m}^{2}}\hspace{0.1667em}\mathbf{E}|Y{|}^{3}+\frac{{(pq)}^{2}}{8{m}^{5/2}}\hspace{0.1667em}\mathbf{E}{Y}^{4}\bigg){\bigg|_{m=\frac{8(n-1)pq}{{\pi }^{2}}}}\\{} & \displaystyle =n\bigg(\frac{{\pi }^{4}\varrho (p)}{96\sqrt{pq}\hspace{0.1667em}{(n-1)}^{2}}+\frac{3{\pi }^{5}\sqrt{\pi }}{{2}^{10}\sqrt{pq}\hspace{0.1667em}{(n-1)}^{5/2}}\bigg)=\frac{f(p,n)}{{\sigma }^{2}}.\end{array}\]
Applying Lemma 3 again, we get
(18)
\[ \underset{\pi /2\le |t|\le \pi }{\int }|{f}^{n}(t)|\hspace{0.1667em}dt\le 2{\int _{\pi /2}^{\pi }}{e}^{-2{\sigma }^{2}\hspace{0.1667em}{\sin }^{2}(t/2)}dt<\pi \hspace{0.1667em}{e}^{-{\sigma }^{2}}.\]
Moreover, by virtue of the known inequality
(19)
\[ {\int _{c}^{\infty }}{e}^{-{t}^{2}/2}\hspace{0.1667em}dt\le \frac{1}{c}\hspace{0.1667em}{e}^{-{c}^{2}/2},\]
which holds for every $c>0$, we have
(20)
\[ {\int _{|t|\ge \pi /2}}{e}^{-{\sigma }^{2}{t}^{2}/2}\hspace{0.1667em}dt\le \frac{4}{\pi {\sigma }^{2}}\hspace{0.1667em}{e}^{-{\sigma }^{2}{\pi }^{2}/8}.\]
Collecting the estimates (17)–(20), we obtain the statement of Lemma 4.  □
Denote
\[ {P_{n}}(k)={C_{n}^{k}}{p}^{k}{q}^{n-k},\hspace{1em}{\delta _{n}}(k,p)={P_{n}}(k)-\frac{1}{\sqrt{npq}}\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sqrt{npq}}\bigg).\]
Lemma 5.
For every $n\ge 1$ and $0\le k\le n$ the following bound holds,
(21)
\[ |{\delta _{n}}(k,p)|<\min \bigg\{\frac{1}{\sigma \sqrt{2e}},\frac{{c_{1}}}{{\sigma }^{2}}\bigg\},\]
where ${c_{1}}$ is defined in (13).
Proof.
It was proved in [7] that ${P_{n}}(k)\le \frac{1}{\sqrt{2enpq}}$. Moreover, $\frac{1}{\sqrt{npq}}\hspace{0.1667em}\varphi (\frac{k-np}{\sqrt{npq}})\le \frac{1}{\sqrt{2\pi npq}}$. Hence,
(22)
\[ |{\delta _{n}}(k,p)|\le \frac{1}{\sqrt{2enpq}}=\frac{1}{\sigma \sqrt{2e}}.\]
Let us find another bound for ${\delta _{n}}(k,p)$. Let $\sigma >1$. Then $n>\frac{1}{pq}\ge 4$, i.e. $n\ge 5$.
By the inversion formula for integer random variables,
\[ {P_{n}}(k)=\frac{1}{2\pi }{\int _{-\pi }^{\pi }}{\big(q+{e}^{it}p\big)}^{n}\hspace{0.1667em}{e}^{-itk}\hspace{0.1667em}dt=\frac{1}{2\pi }{\int _{-\pi }^{\pi }}{f}^{n}(t)\hspace{0.1667em}{e}^{-it(k-np)}\hspace{0.1667em}dt.\]
Moreover, by the inversion formula for densities,
\[ \frac{1}{\sigma }\hspace{0.1667em}\varphi \bigg(\frac{x-\mu }{\sigma }\bigg)=\frac{1}{2\pi }{\int _{-\infty }^{\infty }}{e}^{-{t}^{2}{\sigma }^{2}/2-it(x-\mu )}\hspace{0.1667em}dt.\]
Consequently,
(23)
\[ {\delta _{n}}(k,p)=\frac{1}{2\pi }\hspace{0.1667em}({J_{1}}-{J_{2}}),\]
where
\[ {J_{1}}={\int _{-\pi }^{\pi }}\big[{f}^{n}(t)-{e}^{-{\sigma }^{2}{t}^{2}/2}\big]\hspace{0.1667em}{e}^{-it(k-np)}\hspace{0.1667em}dt,\hspace{1em}{J_{2}}={\int _{|t|\ge \pi }}{e}^{-{\sigma }^{2}{t}^{2}/2}\hspace{0.1667em}{e}^{-it(k-np)}\hspace{0.1667em}dt.\]
Note that the function $f(p,n)$ from Lemma 4 decreases in n. Hence, $f(p,n)\le f(p,5)$. It is not hard to verify that $\underset{p\in [0,1]}{\max }f(p,5)<1.707$. Thus, for $\sigma >1$,
\[ |{J_{1}}|\le \frac{1}{{\sigma }^{2}}\bigg(1.707+\frac{\pi }{e}+\frac{4}{\pi }\hspace{0.1667em}{e}^{-{\pi }^{2}/8}\bigg)<\frac{3.234}{{\sigma }^{2}}.\]
Using inequality (19), we get the estimate
\[ |{J_{2}}|\le \frac{2}{\pi {\sigma }^{2}}\hspace{0.1667em}{e}^{-{\pi }^{2}{\sigma }^{2}/2}<\frac{0.005}{{\sigma }^{2}}.\]
Thus, we get from (23) that for $\sigma >1$,
(24)
\[ |{\delta _{n}}(k,p)|\le \frac{3.24}{2\pi {\sigma }^{2}}<\frac{0.516}{{\sigma }^{2}}.\]
Since $\frac{1}{\sigma \sqrt{2e}}\le \frac{{c_{1}}}{{\sigma }^{2}}$ for $0<\sigma \le {c_{1}}\sqrt{2e}=1.203\dots \hspace{0.2778em}$, and ${c_{1}}\sqrt{2e}>1$, the statement of Lemma 5 follows from (22) and (24).  □
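The bound (21) can be checked numerically. The following Python sketch (our illustration, with arbitrarily chosen n and p) compares $|{\delta _{n}}(k,p)|$ with the right-hand side of (21):

```python
from math import comb, e, exp, pi, sqrt

def delta(n, k, p):
    # delta_n(k, p) = P_n(k) - (1/sigma) * phi((k - np)/sigma)
    q = 1.0 - p
    sigma = sqrt(n * p * q)
    x = (k - n * p) / sigma
    return comb(n, k) * p**k * q**(n - k) - exp(-x * x / 2.0) / (sigma * sqrt(2.0 * pi))

def bound(n, p):
    # right-hand side of (21) with c1 = 0.516
    sigma2 = n * p * (1.0 - p)
    return min(1.0 / sqrt(2.0 * e * sigma2), 0.516 / sigma2)
```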
Lemma 6.
The following equality holds,
(25)
\[ \frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(x)=-\frac{x(1-2p)+np}{2pq\sqrt{npq}}\hspace{0.1667em}\varphi \bigg(\frac{x-np}{\sqrt{npq}}\bigg).\]
Proof.
We have
\[\begin{aligned}{}& \frac{d}{dp}{p}^{-1/2}{(1-p)}^{-1/2}=-\frac{q-p}{2pq\sqrt{pq}},\\{} & \frac{d}{dp}{p}^{1/2}{(1-p)}^{-1/2}=\frac{1}{2}{p}^{-1/2}{(1-p)}^{-1/2}+\frac{1}{2}{p}^{1/2}{(1-p)}^{-3/2}=\frac{1}{2q\sqrt{pq}}.\end{aligned}\]
Hence,
\[ \frac{\partial }{\partial p}\hspace{0.1667em}\frac{x-np}{\sqrt{npq}}=-\frac{x(q-p)}{2pq\sqrt{npq}}-\frac{\sqrt{n}}{2q\sqrt{pq}}=-\frac{x(q-p)+np}{2pq\sqrt{npq}},\]
and we arrive at (25).  □
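Formula (25) can be checked against a central finite difference in p. The following Python sketch does this for the arbitrarily chosen values $n=50$, $p=0.3$, $x=17$:

```python
from math import erf, exp, pi, sqrt

def G(n, p, x):
    # G_{n,p}(x) = Phi((x - np) / sqrt(npq))
    s = (x - n * p) / sqrt(n * p * (1.0 - p))
    return 0.5 * (1.0 + erf(s / sqrt(2.0)))

def dG_dp(n, p, x):
    # right-hand side of (25)
    q = 1.0 - p
    sigma = sqrt(n * p * q)
    s = (x - n * p) / sigma
    phi = exp(-s * s / 2.0) / sqrt(2.0 * pi)
    return -(x * (1.0 - 2.0 * p) + n * p) / (2.0 * p * q * sigma) * phi

h = 1e-6
numeric = (G(50, 0.3 + h, 17) - G(50, 0.3 - h, 17)) / (2.0 * h)
```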
Lemma 7.
For all $n\ge 1$ and $0\le k\le n$ the following bound holds,
\[ \bigg|\frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k+1)-\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)\bigg|\le {L_{1}}(p)\equiv \frac{1}{pq}\bigg(\frac{{c_{1}}}{q}+{c_{2}}\bigg).\]
Proof.
It is shown in [22] that
\[ \frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k+1)=-n{C_{n-1}^{k}}{p}^{k}{q}^{n-1-k}=-\frac{n-k}{q}{P_{n}}(k).\]
By Lemma 5,
(26)
\[ \frac{n-k}{q}\hspace{0.2778em}\bigg|{P_{n}}(k)-\frac{1}{\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)\bigg|\le \frac{n\hspace{0.1667em}{c_{1}}}{q{\sigma }^{2}}=\frac{{c_{1}}}{p{q}^{2}}.\]
In turn, it follows from Lemma 6 that
(27)
\[\begin{array}{cc}& \displaystyle \frac{n-k}{q\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)+\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)\\{} & \displaystyle =\bigg(\frac{n-k}{q\sigma }-\frac{k(1-2p)+np}{2pq\sigma }\bigg)\hspace{0.1667em}\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)=-\frac{k-np}{2pq\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg).\end{array}\]
Since
\[\begin{array}{c}\displaystyle \frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k+1)-\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)=-\frac{n-k}{q}\hspace{0.2778em}\bigg[{P_{n}}(k)-\frac{1}{\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)\bigg]\\{} \displaystyle -\bigg[\frac{n-k}{q\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)+\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)\bigg]\end{array}\]
and $\underset{x}{\max }|x|\varphi (x)=\frac{1}{\sqrt{2\pi e}}<0.242$, the statement of the lemma follows from (26) and (27).  □
Lemma 8.
For all $n\ge 1$ and $0\le k\le n$ the following bound holds,
\[ \bigg|\frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k)-\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)\bigg|\le {L_{2}}(p)\equiv \frac{1}{pq}\bigg(\frac{{c_{1}}}{p}+{c_{2}}\bigg),\]
where ${c_{1}}$, ${c_{2}}$ are from (13).
Proof.
Similarly to the proof of Lemma 7 we obtain
(28)
\[\begin{aligned}{}& \frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k)=-n{C_{n-1}^{k-1}}{p}^{k-1}{q}^{n-k}=-\frac{k}{p}{P_{n}}(k),\\{} & \frac{k}{p}\hspace{0.2778em}\bigg|{P_{n}}(k)-\frac{1}{\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)\bigg|\le \frac{k\hspace{0.1667em}{c_{1}}}{p{\sigma }^{2}}\le \frac{{c_{1}}}{{p}^{2}q}.\end{aligned}\]
Hence,
\[ \frac{\partial }{\partial p}\hspace{0.1667em}{F_{n,p}}(k)-\frac{\partial }{\partial p}\hspace{0.1667em}{G_{n,p}}(k)=-\frac{k}{p}\hspace{0.2778em}\bigg[{P_{n}}(k)-\frac{1}{\sigma }\hspace{0.1667em}\varphi \bigg(\frac{k-np}{\sigma }\bigg)\bigg]-\frac{k-np}{2pq\sigma }\varphi \bigg(\frac{k-np}{\sigma }\bigg).\]
Since the absolute value of the last summand on the right-hand side of this equality is less than $\frac{0.121}{pq}$, the statement of the lemma follows from (28).  □
Lemma 9.
For every $0<p<0.5$,
(29)
\[ \frac{d}{dp}\frac{1}{\varrho (p)}=\frac{1}{2}\hspace{0.1667em}A(p):=\frac{1}{2}\hspace{0.1667em}\frac{(1-2p)(1+2pq)}{\sqrt{pq}{(1-2pq)}^{2}}.\]
Proof.
The lemma follows from the equalities:
\[\begin{aligned}{}& \frac{d}{dp}\hspace{0.1667em}\frac{1}{\varrho (p)}=\frac{d}{dx}\hspace{0.1667em}\frac{x}{1\hspace{0.1667em}-\hspace{0.1667em}2{x}^{2}}{\bigg|_{x=\sqrt{pq}}}\times \frac{d}{dp}\sqrt{p(1\hspace{0.1667em}-\hspace{0.1667em}p)},\hspace{0.2778em}\hspace{0.2778em}\frac{d}{dp}\sqrt{p(1\hspace{0.1667em}-\hspace{0.1667em}p)}=\frac{1-2p}{2\sqrt{pq}},\\{} & \frac{d}{dx}\hspace{0.1667em}\frac{x}{1-2{x}^{2}}=\frac{1}{1-2{x}^{2}}+\frac{4{x}^{2}}{{(1-2{x}^{2})}^{2}}=\frac{1+2{x}^{2}}{{(1-2{x}^{2})}^{2}}.\end{aligned}\]
 □
Lemma 10.
The function $A(p)$ decreases on the interval $(0,0.5)$.
Proof.
Denote $x=x(p)=p(1-p)$, ${A_{1}}(t)=\frac{\sqrt{1-4t}\hspace{0.1667em}(1+2t)}{\sqrt{t}\hspace{0.1667em}{(1-2t)}^{2}}$. Taking into account the equality $1-2p=\sqrt{1-4pq}$, we obtain $A(p)={A_{1}}(x)$.
Since $x(p)$ increases for $0<p<0.5$, it remains to prove that the function ${A_{1}}(x)$ decreases for $0<x<0.25$. We have
\[ \frac{d}{dx}\hspace{0.1667em}\ln {A_{1}}(x)=\frac{-2}{1-4x}+\frac{2}{1+2x}-\frac{1}{2x}+\frac{4}{1-2x}=-\frac{32{x}^{3}+36{x}^{2}-12x+1}{2x(1-4x)(1-4{x}^{2})}.\]
On the interval $[0,0.25]$ the polynomial ${A_{2}}(x)\equiv 32{x}^{3}+36{x}^{2}-12x+1$ has the single minimum point ${x_{1}}=\frac{-3+\sqrt{17}}{8}=0.140\dots \hspace{0.2778em}$. Since ${A_{2}}({x_{1}})=0.11\dots >0$, we have $\frac{d}{dx}\hspace{0.1667em}\ln {A_{1}}(x)<0$ for $0\le x<0.25$, i.e. the function ${A_{1}}(x)$ decreases on $(0,0.25)$. The lemma is proved.  □
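The claims about ${A_{2}}$ are easy to verify numerically (a Python check of the proof above):

```python
from math import sqrt

def A2(x):
    # A_2(x) = 32x^3 + 36x^2 - 12x + 1 from the proof of Lemma 10
    return 32.0 * x**3 + 36.0 * x**2 - 12.0 * x + 1.0

x1 = (-3.0 + sqrt(17.0)) / 8.0   # the critical point of A_2 on [0, 0.25]
```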
Lemma 11.
The function $L(p)$, defined in (14), decreases on $[0,0.5]$.
Proof.
Taking into account the equality ${p}^{2}+{q}^{2}=1-2pq$, it is not difficult to see that
(30)
\[ L(p)=\frac{1}{\varrho (p)}\hspace{0.1667em}{L_{2}}(p)+{c_{3}}\hspace{0.1667em}A(p).\]
According to Lemma 10, the function $A(p)$ decreases. Consequently, it remains to prove that the function ${L_{3}}(p):=\frac{1}{\varrho (p)}\hspace{0.1667em}{L_{2}}(p)=\frac{{c_{1}}+{c_{2}}p}{p\sqrt{pq}(1-2pq)}$ decreases on $[0,0.5]$. We have
\[\begin{array}{c}\displaystyle \frac{d}{dp}\hspace{0.1667em}\ln {L_{3}}(p)=\frac{{c_{2}}}{{c_{1}}+{c_{2}}p}-\frac{3}{2p}+\frac{1}{2(1-p)}+\frac{2(1-2p)}{1-2p+2{p}^{2}}\\{} \displaystyle =\frac{{A_{3}}(p)}{2pq({c_{1}}+{c_{2}}p)(1-2pq)},\end{array}\]
where ${A_{3}}(p)=-3{c_{1}}+(14{c_{1}}-{c_{2}})p-(26{c_{1}}-8{c_{2}}){p}^{2}+(16{c_{1}}-18{c_{2}}){p}^{3}+12{c_{2}}{p}^{4}$. Let us prove that
(31)
\[ {A_{3}}(p)<0,\hspace{1em}0<p<0.5.\]
We have
\[\begin{aligned}{}& {A^{\prime }_{3}}(p)=14{c_{1}}-{c_{2}}-4(13{c_{1}}-4{c_{2}})p+6(8{c_{1}}-9{c_{2}}){p}^{2}+48{c_{2}}{p}^{3},\\{} & {A^{\prime\prime }_{3}}(p)=-4(13{c_{1}}-4{c_{2}})+12(8{c_{1}}-9{c_{2}})p+144{c_{2}}{p}^{2}.\end{aligned}\]
As a result of calculations, we find that the equation ${A^{\prime }_{3}}(p)=0$ has the single root ${p_{0}}=0.478287\dots $ on $[0,0.5]$. The roots of the equation ${A^{\prime\prime }_{3}}(p)=0$ have the form
\[ {p_{1,2}}=\frac{1}{24{c_{2}}}\hspace{0.1667em}\big(-8{c_{1}}+9{c_{2}}\pm \sqrt{{(8{c_{1}}-9{c_{2}})}^{2}+16{c_{2}}(13{c_{1}}-4{c_{2}})}\hspace{0.1667em}\big),\]
and are equal to ${p_{1}}=-2.6\dots \hspace{0.2778em}$ and ${p_{2}}=0.54\dots \hspace{0.2778em}$, respectively. Since the interval $[0,0.5]$ lies between these roots, ${A^{\prime\prime }_{3}}(p)<0$ for $p\in [0,0.5]$. Thus, the function ${A_{3}}(p)$, considered on $[0,0.5]$, attains its maximum value at the point ${p_{0}}$. Since ${A_{3}}({p_{0}})=-0.257\dots \hspace{0.2778em}$, inequality (31) is proved. This implies that ${L_{3}}(p)$ decreases on $(0,0.5)$.  □
Let $f(x)$ be an arbitrary function. Denote by ${D}^{+}f(x)$ and ${D}^{-}f(x)$ its right-hand and left-hand derivatives, respectively (if they exist).
Lemma 12.
Let $g(x)=\max \{{f_{1}}(x),\hspace{0.1667em}{f_{2}}(x)\}$, where ${f_{1}}(x)$ and ${f_{2}}(x)$ are functions differentiable on a finite interval $(a,b)$. Then at every point $x\in (a,b)$ both one-sided derivatives ${D}^{+}g(x)$ and ${D}^{-}g(x)$ exist, and each of them coincides with either ${f^{\prime }_{1}}(x)$ or ${f^{\prime }_{2}}(x)$.
Proof.
Let x be a point such that ${f_{1}}(x)\ne {f_{2}}(x)$. Then the function g is differentiable at x, and in this case the statement of the lemma is trivial.
Now let for a point $x\in (a,b)$,
(32)
\[ {f_{1}}(x)={f_{2}}(x).\]
First, consider the case ${f^{\prime }_{1}}(x)\ne {f^{\prime }_{2}}(x)$. Suppose, for instance, that ${f^{\prime }_{1}}(x)>{f^{\prime }_{2}}(x)$. Then there exists ${h_{0}}>0$ such that
(33)
\[\begin{aligned}{}& {f_{1}}(x+h)>{f_{2}}(x+h),\hspace{1em}0<h\le {h_{0}},\end{aligned}\]
(34)
\[\begin{aligned}{}& {f_{2}}(x+h)>{f_{1}}(x+h),\hspace{1em}-{h_{0}}\le h<0.\end{aligned}\]
From differentiability of the functions ${f_{1}}$ and ${f_{2}}$ it follows that for $h\to 0$,
(35)
\[ {f_{i}}(x+h)={f_{i}}(x)+{f^{\prime }_{i}}(x)h+o(h),\hspace{1em}i=1,2.\]
Then using (33) we obtain the equality
\[ g(x+h)={f_{1}}(x+h)={f_{1}}(x)+{f^{\prime }_{1}}(x)h+o(h),\hspace{1em}h>0,\]
and using (34),
\[ g(x+h)={f_{2}}(x+h)={f_{2}}(x)+{f^{\prime }_{2}}(x)h+o(h),\hspace{1em}h<0.\]
Thus, existence of ${D}^{+}g(x)$ and ${D}^{-}g(x)$ follows.
Now let
(36)
\[ {f^{\prime }_{1}}(x)={f^{\prime }_{2}}(x).\]
It follows from (32), (35) and (36) that for $h\to 0$,
\[ g(x+h)={f_{i}}(x)+{f^{\prime }_{i}}(x)h+o(h),\hspace{1em}i=1,2.\]
Hence, ${g^{\prime }}(x)={f^{\prime }_{1}}(x)={f^{\prime }_{2}}(x)$. The lemma is proved.  □
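A small numerical illustration of Lemma 12 (with hypothetical functions ${f_{1}}(x)=x$, ${f_{2}}(x)={x}^{2}$, which are not from the text): at the crossing point $x=1$ we have ${f^{\prime }_{1}}(1)=1\ne {f^{\prime }_{2}}(1)=2$, and the one-sided difference quotients of $g=\max \{{f_{1}},{f_{2}}\}$ approach exactly these two derivative values.

```python
# hypothetical example functions with f1(1) = f2(1) but f1'(1) != f2'(1)
f1 = lambda x: x
f2 = lambda x: x * x
g = lambda x: max(f1(x), f2(x))

h = 1e-6
d_plus = (g(1 + h) - g(1)) / h    # right-hand quotient -> f2'(1) = 2
d_minus = (g(1) - g(1 - h)) / h   # left-hand quotient  -> f1'(1) = 1
assert abs(d_plus - 2) < 1e-3
assert abs(d_minus - 1) < 1e-3
```

On the right of $x=1$ the function with the larger derivative dominates, on the left the other one does, exactly as in the proof of the lemma.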
Denote
\[ \varrho =\varrho (p),\hspace{1em}{q_{i}}=1-{p_{i}},\hspace{1em}{\varrho _{i}}=\varrho ({p_{i}})\equiv \frac{\omega ({p_{i}})}{\sqrt{{p_{i}}{q_{i}}}}.\]
Lemma 13.
Let $0<{p_{1}}<p<{p_{2}}\le 0.5$. Then for all $n\ge 1$ and $\hspace{0.2778em}0\le k\le n$,
(37)
\[ \bigg|\frac{1}{\varrho }\hspace{0.1667em}{\Delta _{n,k}}(p)-\frac{1}{{\varrho _{1}}}\hspace{0.1667em}{\Delta _{n,k}}({p_{1}})\bigg|\le L({p_{1}})\hspace{0.1667em}(p-{p_{1}}),\hspace{1em}\hspace{0.2778em}\]
and
(38)
\[ \bigg|\frac{1}{\varrho }{\Delta _{n,k}}(p)-\frac{1}{{\varrho _{2}}}{\Delta _{n,k}}({p_{2}})\bigg|<L({p_{1}})({p_{2}}-p).\]
Proof.
Note that ${\Delta _{n,k}}(p)<0.541$ (see [3]). Consequently,
(39)
\[ \bigg|\frac{1}{\varrho }\hspace{0.1667em}{\Delta _{n,k}}(p)-\frac{1}{{\varrho _{1}}}\hspace{0.1667em}{\Delta _{n,k}}({p_{1}})\bigg|\le \frac{1}{{\varrho _{1}}}\hspace{0.1667em}|{\Delta _{n,k}}(p)-{\Delta _{n,k}}({p_{1}})|+0.541\bigg(\frac{1}{\varrho }-\frac{1}{{\varrho _{1}}}\bigg)\hspace{0.2778em}.\]
It is obvious that ${F_{n,p}}(k)$ and ${G_{n,p}}(k)$, considered as functions of the argument p, are differentiable. Then, according to Lemma 12, the one-sided derivatives of the function ${\Delta _{n,k}}(p)$ exist at each point $p\in [0,0.5]$ and coincide with $\frac{\partial }{\partial p}({F_{n,p}}(k+1)-{G_{n,p}}(k))$ or $\frac{\partial }{\partial p}({G_{n,p}}(k)-{F_{n,p}}(k))$.
Taking into account that ${L_{1}}(p)\le {L_{2}}(p)$ for $0<p\le 0.5$, we obtain from Lemmas 7 and 8
(40)
\[\begin{array}{cc}& \displaystyle |{\Delta _{n,k}}(p)-{\Delta _{n,k}}({p_{1}})|\le (p-{p_{1}})\underset{{p_{1}}\le s\le p}{\max }|{D}^{+}{\Delta _{n,k}}(s)|\\{} & \displaystyle \le (p-{p_{1}})\underset{{p_{1}}\le s\le p}{\max }{L_{2}}(s).\end{array}\]
The function ${L_{2}}(s)$ decreases on $(0,\hspace{0.1667em}0.5]$. Hence,
(41)
\[ \underset{{p_{1}}\le s\le p}{\max }{L_{2}}(s)={L_{2}}({p_{1}}).\]
The inequality
(42)
\[ \frac{1}{{\varrho _{1}}}\hspace{0.1667em}|{\Delta _{n,k}}(p)-{\Delta _{n,k}}({p_{1}})|\le \frac{p-{p_{1}}}{{\varrho _{1}}}\hspace{0.1667em}{L_{2}}({p_{1}})\]
follows from (40) and (41). Taking into account Lemmas 9 and 10, we have
(43)
\[ \frac{1}{\varrho }-\frac{1}{{\varrho _{1}}}\le (p-{p_{1}})\hspace{0.1667em}\underset{{p_{1}}<s<p}{\max }\frac{d}{ds}\frac{1}{\varrho (s)}<{2}^{-1}A({p_{1}})(p-{p_{1}}).\]
Collecting the estimates (39), (42), (43), we obtain with the help of (30) that for $0\le {p_{1}}<p\le 0.5$,
(44)
\[\begin{array}{cc}& \displaystyle \bigg|\frac{1}{\varrho }\hspace{0.1667em}{\Delta _{n,k}}(p)-\frac{1}{{\varrho _{1}}}\hspace{0.1667em}{\Delta _{n,k}}({p_{1}})\bigg|\le (p-{p_{1}})\bigg(\frac{1}{{\varrho _{1}}}\hspace{0.1667em}{L_{2}}({p_{1}})+0.271\hspace{0.1667em}A({p_{1}})\bigg)\\{} & \displaystyle =(p-{p_{1}})L({p_{1}}).\end{array}\]
Hence, for $0<p<{p_{2}}\le 0.5$,
(45)
\[ \bigg|\frac{1}{\varrho }{\Delta _{n,k}}(p)-\frac{1}{{\varrho _{2}}}{\Delta _{n,k}}({p_{2}})\bigg|<({p_{2}}-p)L(p).\]
Inequality (37) coincides with (44), and inequality (38) follows from (45) and Lemma 11. Lemma 13 is proved.  □
Proof of Theorem 2.
It follows from the definition of ${p^{\prime }}$ that either $0<p-{p^{\hspace{0.1667em}\prime }}<h/2$ or $0<{p^{\hspace{0.1667em}\prime }}-p<h/2$. In the first case the statement of the theorem follows from (37) and Lemma 11, and in the second one from (38) and Lemma 11 again.  □

4 Proof of Theorem 1

4.1 Theorem 1.1 of [19] and some of its consequences

First we formulate Theorem 1.1 from [19]. To do this, we need to introduce a considerable amount of notation from [19]:
\[\begin{aligned}{}& {\omega _{3}}(p)=q-p,\hspace{1em}{\omega _{4}}(p)=|{q}^{3}+{p}^{3}-3pq|,\hspace{1em}{\omega _{5}}(p)={q}^{4}-{p}^{4},\\{} & {\omega _{6}}(p)={q}^{5}+{p}^{5}+15{(pq)}^{2},\\{} & {K_{1}}(p,n)=\frac{{\omega _{3}}(p)}{4\sigma \sqrt{2\pi }(n-1)}\hspace{0.1667em}\bigg(1+\frac{1}{4(n-1)}\bigg)+\frac{{\omega _{4}}(p)}{12{\sigma }^{2}\pi }\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{2}\\{} & \hspace{85.35826pt}+\hspace{0.1667em}\frac{{\omega _{5}}(p)}{40{\sigma }^{3}\sqrt{2\pi }}\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{5/2}+\frac{{\omega _{6}}(p)}{90{\sigma }^{4}\pi }\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{3};\end{aligned}\]
\[\begin{array}{r@{\hskip0pt}l@{\hskip0pt}r@{\hskip0pt}l}\displaystyle \omega (p)& \displaystyle ={p}^{2}+{q}^{2},\hspace{2em}& \displaystyle \zeta (p)& \displaystyle ={\bigg(\frac{\omega (p)}{6}\bigg)}^{2/3},\hspace{1em}e(n,p)=\exp \bigg\{\frac{1}{24{\sigma }^{2/3}{\zeta }^{2}(p)}\bigg\},\\{} \displaystyle {e_{5}}& \displaystyle =0.0277905,\hspace{2em}& \displaystyle {\widetilde{\omega }_{5}}(p)& \displaystyle ={p}^{4}+{q}^{4}+5!\hspace{0.1667em}{e_{5}}{(pq)}^{3/2},\end{array}\]
\[\begin{aligned}{}& {V_{6}}(p)={\omega _{3}^{2}}(p),\hspace{1em}{V_{7}}(p)={\omega _{3}}(p){\omega _{4}}(p),\hspace{1em}{V_{8}}(p)=\frac{2{\widetilde{\omega }_{5}}(p){\omega _{3}}(p)}{5!3!}+{\bigg(\frac{{\omega _{4}}(p)}{4!}\bigg)}^{2},\\{} & {V_{9}}(p)={\widetilde{\omega }_{5}}(p){\omega _{4}}(p),\hspace{1em}{V_{10}}(p)={\widetilde{\omega }_{5}^{2}}(p),\hspace{1em}{A_{k}}(n)={\bigg(\frac{n}{n-2}\bigg)}^{k/2}\hspace{0.1667em}\frac{n-1}{n},\end{aligned}\]
\[\begin{array}{l@{\hskip10.0pt}l@{\hskip10.0pt}l@{\hskip10.0pt}l@{\hskip10.0pt}l}{\gamma _{6}}=\frac{1}{9},& \hspace{1em}{\gamma _{7}}=\frac{5\sqrt{2\pi }}{96},& \hspace{1em}{\gamma _{8}}=24,& \hspace{1em}{\gamma _{9}}=\frac{7\sqrt{2\pi }}{4!\hspace{0.1667em}16},& \hspace{1em}{\gamma _{10}}=\frac{{2}^{6}\cdot 3}{{(5!)}^{2}},\\{} {\widetilde{\gamma }_{6}}=\frac{2}{3},& \hspace{1em}{\widetilde{\gamma }_{7}}=\frac{7}{8},& \hspace{1em}{\widetilde{\gamma }_{8}}=\frac{10}{9},& \hspace{1em}{\widetilde{\gamma }_{9}}=\frac{11}{8},& \hspace{1em}{\widetilde{\gamma }_{10}}=\frac{5}{3},\end{array}\]
\[ {K_{2}}(p,n)=\frac{1}{\pi \sigma }{\sum \limits_{j=1}^{5}}\frac{{\gamma _{j+5}}\hspace{0.1667em}{A_{j+5}}(n)\hspace{0.1667em}{V_{j+5}}(p)}{{\sigma }^{j}}\hspace{0.1667em}\bigg[1+\frac{{\widetilde{\gamma }_{j+5}}\hspace{0.1667em}e(n,p)\hspace{0.1667em}n}{{\sigma }^{2}\hspace{0.1667em}(n-2)}\bigg];\]
\[ {A_{1}}=5.405,\hspace{1em}{A_{2}}=7.521,\hspace{1em}{A_{3}}=5.233,\hspace{1em}\mu =\frac{3{\pi }^{2}-16}{{\pi }^{4}},\]
\[ \chi (p,n)=\frac{2\zeta (p)}{{\sigma }^{2/3}}\hspace{0.2778em}\hspace{0.1667em}\text{if}\hspace{0.2778em}\hspace{0.1667em}p\in (0,0.085),\hspace{0.2778em}\hspace{0.2778em}\text{and}\hspace{0.2778em}\hspace{0.2778em}\chi (p,n)=0\hspace{0.2778em}\hspace{0.1667em}\text{if}\hspace{0.2778em}\hspace{0.1667em}p\in [0.085,0.5],\]
\[\begin{aligned}{}{K_{3}}(p,n)& =\frac{1}{\pi }\hspace{0.1667em}\bigg\{\frac{1}{12{\sigma }^{2}}+\bigg(\frac{1}{36}+\frac{\mu }{8}\bigg)\hspace{0.1667em}\frac{1}{{\sigma }^{4}}+\bigg(\frac{1}{36}\hspace{0.1667em}{e}^{{A_{1}}/6}+\frac{\mu }{8}\bigg)\hspace{0.1667em}\frac{1}{{\sigma }^{6}}+\frac{5\mu }{24}\hspace{0.1667em}{e}^{{A_{2}}/6}\hspace{0.1667em}\frac{1}{{\sigma }^{8}}\\{} & +\hspace{0.1667em}\frac{1}{3}\hspace{0.1667em}\exp \bigg\{-\sigma \sqrt{{A_{1}}}+\frac{{A_{1}}}{6}\bigg\}+(\pi -2)\mu \exp \bigg\{-\sigma \sqrt{{A_{2}}}+\frac{{A_{2}}}{6}\bigg\}\\{} & +\hspace{0.1667em}\exp \bigg\{-\sigma \sqrt{{A_{3}}}+\frac{{A_{3}}}{6}\bigg\}\frac{1}{4}\hspace{0.1667em}\ln \bigg(\frac{{\pi }^{4}{\sigma }^{2}}{4{A_{3}}}\bigg)\\{} & +\hspace{0.1667em}\exp \bigg\{-\frac{{\sigma }^{2/3}}{2\zeta (p)}\bigg\}\bigg[\frac{2\zeta (p)}{{\sigma }^{2/3}}+{e}^{{A_{3}}/6}\hspace{0.1667em}\frac{1+\chi (p,n)}{24\hspace{0.1667em}\zeta (p)\hspace{0.1667em}{\sigma }^{4/3}}\bigg]\bigg\};\end{aligned}\]
(46)
\[ R(p,n)={K_{1}}(p,n)+{K_{2}}(p,n)+{K_{3}}(p,n).\]
Theorem A ([19, Theorem 1.1]).
Let
(47)
\[ \frac{4}{n}\le p\le 0.5,\hspace{1em}n\ge 200.\]
Then
(48)
\[ {\Delta _{n}}(p)\le \frac{\varrho (p)}{\sqrt{n}}\hspace{0.1667em}\mathcal{E}(p)+R(p,n),\]
and the sequence ${R_{0}}(p,n):=\frac{\sqrt{n}}{\varrho (p)}\hspace{0.1667em}R(p,n)$ tends to zero for every $0<p\le 0.5$, decreasing in n.
Denote
\[ E(p,n)=\mathcal{E}(p)+{R_{0}}(p,n).\]
Figure 1 shows the relative positions of the following functions: $E(p,n)$ for $n=200$ and 800, $\mathcal{E}(p)$ and ${T_{n}}(p){|_{n=50}}$. Note that, as a consequence of the definition of the binomial distribution, the behavior of these functions is symmetric with respect to $p=0.5$.
Fig. 1.
Graphs of the functions (from top to bottom): $E(p,200)$, $E(p,800)$, $\mathcal{E}(p)$, ${T_{50}}(p)$
Recall that ${N_{0}}=500000$.
Corollary A.
For $p\in [0.1689,0.5]$, and $n\ge {N_{0}}$,
\[ E(p,n)\le E(p,{N_{0}})<0.409954.\]
Proof.
Since $E(p,n)$ decreases in n, we obtain the statement of Corollary A by finding the maximal value of $E(p,{N_{0}})$ directly using a computer.  □
In order to verify the plausibility of the preceding numerical result, we estimate the function $E(p,{N_{0}})$ after first bounding some of the terms entering into it. This leads to the following, somewhat coarser, inequality.
Corollary A′.
For $p\in [0.1689,0.5]$, and $n\ge {N_{0}}$,
(49)
\[ E(p,n)<0.409954153.\]
Proof.
We separate the proof of (49) into four steps. First we rewrite ${R_{0}}(p,n)$ in the following form:
\[ {R_{0}}(p,n)=\frac{{K_{1}}(p,n)\sigma }{\omega (p)}+\frac{{K_{2}}(p,n)\sigma }{\omega (p)}+\frac{{K_{3}}(p,n)\sigma }{\omega (p)}.\]
In each function $\frac{{K_{i}}(p,n)\sigma }{\omega (p)}$, $i=1,2,3$, we will select the principal term, and estimate the remaining ones.
Step 1. Note that for $n\ge {N_{0}}$ and $0<a\le 3$,
\[\begin{aligned}{}& {\bigg(\frac{n}{n-1}\bigg)}^{a}\le {\bigg(\frac{n}{n-1}\bigg)}^{3}<{e_{1}}:=1.00000601,\\{} & 1+\frac{1}{4(n-1)}\hspace{-0.1667em}<{e_{2}}:=1.000000501.\end{aligned}\]
Then
\[ \frac{{K_{1}}(p,n)\hspace{0.1667em}\sigma }{\omega (p)}=\frac{{\omega _{4}}(p)}{12\pi \omega (p)\sigma }{\bigg(\frac{n}{n-1}\bigg)}^{2}+{r_{1}}(p,n),\]
where
\[ {r_{1}}(p,n)<{\widetilde{r}_{1}}(p,n):=\frac{{e_{1}}}{\omega (p)}\bigg(\frac{{e_{2}}\hspace{0.1667em}{\omega _{3}}(p)}{4\sqrt{2\pi }(n-1)}+\frac{{\omega _{5}}(p)}{40\sqrt{2\pi }{\sigma }^{2}}+\frac{{\omega _{6}}(p)}{90\pi {\sigma }^{3}}\bigg).\]
Using a computer, we get the estimate ${\widetilde{r}_{1}}(p,n)\le {\widetilde{r}_{1}}(0.1689,{N_{0}})<2.78\cdot {10}^{-7}$.
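The Step 1 check can be reproduced with a short Python sketch that evaluates ${\widetilde{r}_{1}}(p,n)$ at $p=0.1689$, $n={N_{0}}=500000$ directly from the formulas above.

```python
import math

def r1_tilde(p, n):
    # r1_tilde(p, n) from Step 1, with the constants e1, e2 defined above
    q = 1 - p
    sigma = math.sqrt(n * p * q)
    omega = p * p + q * q
    w3 = q - p                                  # omega_3(p)
    w5 = q**4 - p**4                            # omega_5(p)
    w6 = q**5 + p**5 + 15 * (p * q) ** 2        # omega_6(p)
    e1, e2 = 1.00000601, 1.000000501
    s2pi = math.sqrt(2 * math.pi)
    return (e1 / omega) * (e2 * w3 / (4 * s2pi * (n - 1))
                           + w5 / (40 * s2pi * sigma**2)
                           + w6 / (90 * math.pi * sigma**3))

val = r1_tilde(0.1689, 500000)
assert val < 2.78e-7          # the bound stated in the text
```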
Step 2. We have
\[ \frac{{K_{2}}(p,n)\sigma }{\omega (p)}=\frac{{\gamma _{6}}\hspace{0.1667em}{A_{6}}(n)\hspace{0.1667em}{V_{6}}(p)}{\pi \omega (p)\sigma }+{r_{2}}(p,n),\]
where
\[ {r_{2}}(p,n)={\sum \limits_{j=2}^{5}}\frac{{\gamma _{j+5}}{A_{j+5}}(n){V_{j+5}}(p)}{\pi \omega (p){\sigma }^{j}}\bigg[1+\frac{{\widetilde{\gamma }_{j+5}}e(n,p)n}{{\sigma }^{2}(n-2)}\bigg]+\frac{{\gamma _{6}}{\widetilde{\gamma }_{6}}{A_{6}}(n)e(n,p)n}{\pi \omega (p){\sigma }^{3}(n-2)}.\]
Note that for $n\ge {N_{0}}$, $1\le j\le 5$, and $p\in [0.1689,0.5]$,
\[\begin{aligned}{}& {A_{j+5}}(n)<{A_{10}}({N_{0}})<{e_{3}}:=1.00001801,\hspace{1em}e(n,p)\le e({N_{0}},0.5)<1.02316,\\{} & 1+\frac{{\widetilde{\gamma }_{j+5}}\hspace{0.1667em}e(n,p)\hspace{0.1667em}n}{{\sigma }^{2}\hspace{0.1667em}(n-2)}<1+\frac{(5/3)\cdot 1.02316}{pq({N_{0}}-2)}{\bigg|_{p=0.1689}}<{e_{4}}:=1.0000243.\end{aligned}\]
Then, taking also into account that ${A_{6}}({N_{0}})<1.0000101$, we get
\[ {r_{2}}(p,n)<{\widetilde{r}_{2}}(p,n):=\frac{{e_{3}}\cdot {e_{4}}}{\pi \omega (p)}{\sum \limits_{j=2}^{5}}\frac{{\gamma _{j+5}}{V_{j+5}}(p)}{{\sigma }^{j}}+\frac{(1/9)(2/3)1.0000101\cdot 1.02316}{\pi \omega (p){(pq)}^{3/2}\sqrt{n}(n-2)}.\]
We find with the help of a computer: ${\widetilde{r}_{2}}(p,n)\le {\widetilde{r}_{2}}(0.1689,{N_{0}})<8.852\cdot {10}^{-8}$.
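The auxiliary constants ${e_{1}},\dots ,{e_{4}}$ and the bound on $e({N_{0}},0.5)$ used in Steps 1 and 2 can be verified numerically; a Python sketch:

```python
import math

N0 = 500000

# Step 1 constants
assert (N0 / (N0 - 1)) ** 3 < 1.00000601             # e1
assert 1 + 1 / (4 * (N0 - 1)) < 1.000000501          # e2

# Step 2 constants
def A_k(k, n):
    # A_k(n) = (n/(n-2))^{k/2} (n-1)/n
    return (n / (n - 2)) ** (k / 2) * (n - 1) / n

assert A_k(10, N0) < 1.00001801                      # e3
assert A_k(6, N0) < 1.0000101

def e_np(n, p):
    # e(n, p) = exp{1/(24 sigma^{2/3} zeta^2(p))}
    q = 1 - p
    sigma = math.sqrt(n * p * q)
    zeta = ((p * p + q * q) / 6) ** (2 / 3)
    return math.exp(1 / (24 * sigma ** (2 / 3) * zeta ** 2))

assert e_np(N0, 0.5) < 1.02316

p = 0.1689
assert 1 + (5 / 3) * 1.02316 / (p * (1 - p) * (N0 - 2)) < 1.0000243   # e4
```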
Step 3. Let us write
\[ \frac{{K_{3}}(p,n)\sigma }{\omega (p)}=\frac{1}{12\pi \omega (p)\sigma }+{r_{3}}(p,n),\]
where
\[\begin{aligned}{}{r_{3}}(p,n)& =\frac{\sigma }{\pi \omega (p)}\hspace{0.1667em}\bigg\{\bigg(\frac{1}{36}+\frac{\mu }{8}\bigg)\hspace{0.1667em}\frac{1}{{\sigma }^{4}}+\bigg(\frac{1}{36}\hspace{0.1667em}{e}^{{A_{1}}/6}+\frac{\mu }{8}\bigg)\hspace{0.1667em}\frac{1}{{\sigma }^{6}}+\frac{5\mu }{24}\hspace{0.1667em}{e}^{{A_{2}}/6}\hspace{0.1667em}\frac{1}{{\sigma }^{8}}\\{} & \hspace{1em}+\hspace{0.1667em}\frac{1}{3}\hspace{0.1667em}\exp \bigg\{-\sigma \sqrt{{A_{1}}}+\frac{{A_{1}}}{6}\bigg\}+(\pi -2)\mu \exp \bigg\{-\sigma \sqrt{{A_{2}}}+\frac{{A_{2}}}{6}\bigg\}\\{} & \hspace{1em}+\hspace{0.1667em}\exp \bigg\{-\sigma \sqrt{{A_{3}}}+\frac{{A_{3}}}{6}\bigg\}\frac{1}{4}\hspace{0.1667em}\ln \bigg(\frac{{\pi }^{4}{\sigma }^{2}}{4{A_{3}}}\bigg)\\{} & \hspace{1em}+\hspace{0.1667em}\exp \bigg\{-\frac{{\sigma }^{2/3}}{2\zeta (p)}\bigg\}\bigg[\frac{2\zeta (p)}{{\sigma }^{2/3}}+{e}^{{A_{3}}/6}\hspace{0.1667em}\frac{1+\chi (p,n)}{24\hspace{0.1667em}\zeta (p)\hspace{0.1667em}{\sigma }^{4/3}}\bigg]\bigg\}.\end{aligned}\]
Using a computer, we get ${r_{3}}(p,n)\le {r_{3}}(0.1689,{N_{0}})<1.08\cdot {10}^{-9}$.
Thus, for $p\in [0.1689,0.5]$, $n\ge {N_{0}}$, we have
\[ {r_{1}}(p,n)+{r_{2}}(p,n)+{r_{3}}(p,n)<2.78\cdot {10}^{-7}+8.852\cdot {10}^{-8}+1.08\cdot {10}^{-9}<3.676\cdot {10}^{-7}.\]
Step 4. Now consider the function
\[ B(p,n)=\mathcal{E}(p)+\frac{1}{12\pi \omega (p)\sigma }\bigg({\omega _{4}}(p){\bigg(\frac{n}{n-1}\bigg)}^{2}+12{\gamma _{6}}\hspace{0.1667em}{A_{6}}(n)\hspace{0.1667em}{V_{6}}(p)+1\bigg).\]
We find with the help of a computer that for $p\in [0.1689,0.5]$, $n\ge {N_{0}}$,
\[\begin{array}{c}\displaystyle \underset{p\in [0.1689,0.5]}{\max }B(p,n)=\underset{p\in [0.1689,0.5]}{\max }B(p,{N_{0}})\\{} \displaystyle =B(0.418886928\dots \hspace{0.2778em},{N_{0}})=0.40995378459\dots \hspace{0.2778em}.\end{array}\]
Consequently,
\[\begin{array}{c}\displaystyle E(p,n)=B(p,n)+{\sum \limits_{j=1}^{3}}{r_{j}}(p,n)\\{} \displaystyle <0.4099537846+3.676\cdot {10}^{-7}<0.409954153.\end{array}\]
 □
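The Step 4 maximization can be replicated by a grid search. The sketch below recovers $\mathcal{E}(p)$ from the identity ${\mathcal{E}_{1}}(p)=({p}^{2}+{q}^{2})\mathcal{E}(p)=\frac{2-p}{3\sqrt{2\pi }}$ (introduced just below) and evaluates $B(p,{N_{0}})$ on a grid with step ${10}^{-5}$.

```python
import math

N0 = 500000

def B(p, n):
    # B(p, n) from Step 4, with gamma_6 = 1/9 and A_6(n), V_6(p) as above
    q = 1 - p
    omega = p * p + q * q                       # omega(p) = p^2 + q^2
    sigma = math.sqrt(n * p * q)
    omega4 = abs(q**3 + p**3 - 3 * p * q)       # omega_4(p)
    A6 = (n / (n - 2)) ** 3 * (n - 1) / n       # A_6(n)
    V6 = (q - p) ** 2                           # V_6(p) = omega_3^2(p)
    cal_E = (2 - p) / (3 * math.sqrt(2 * math.pi) * omega)
    rest = (omega4 * (n / (n - 1)) ** 2 + 12 * (1 / 9) * A6 * V6 + 1) \
           / (12 * math.pi * omega * sigma)
    return cal_E + rest

# grid search with step 1e-5 over [0.1689, 0.5]
ps = [0.1689 + i * 1e-5 for i in range(33111)]
p_star = max(ps, key=lambda p: B(p, N0))
B_max = B(p_star, N0)
assert abs(p_star - 0.418886928) < 1e-3
assert abs(B_max - 0.40995378459) < 1e-6
```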
Let us introduce the following notations:
\[ {\mathcal{E}_{1}}(p)=\big({p}^{2}+{q}^{2}\big)\mathcal{E}(p)=\frac{2-p}{3\sqrt{2\pi }},\]
${D_{2}}(p,n)$ is the coefficient of $\frac{1}{{\sigma }^{2}}$ in the expansion of $R(p,n)$ in powers of $\frac{1}{\sigma }$, and ${\overline{D}_{2}}(p,n)={\sigma }^{2}R(p,n)$, where the remainder $R(p,n)$ is defined by equality (46). One can rewrite bound (48) in the following form:
(50)
\[ {\Delta _{n}}(p)\le \frac{{\mathcal{E}_{1}}(p)}{\sigma }+\frac{{\overline{D}_{2}}(p,n)}{{\sigma }^{2}}.\]
Define ${D_{2}^{I}}(n)=\underset{p\in I}{\max }{D_{2}}(p,n)$, ${\overline{D}_{2}^{I}}(n)=\underset{p\in I}{\max }{\overline{D}_{2}}(p,n)$, where I is an interval.
Corollary B.
The quantities $\underset{n\ge N}{\max }{D_{2}^{I}}(n)$ and $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)$ take the following values depending on $N=200,\hspace{0.1667em}{N_{0}}$ and intervals $I=[0.02,0.5]$, $[0.1689,0.5]$:
Table 2.
Some values of $\underset{n\ge N}{\max }{D_{2}^{I}}(n)$ and $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)$
$I=[0.02,0.5]$, $N=200$: $\underset{n\ge N}{\max }{D_{2}^{I}}(n)=0.083592\dots $, $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)=0.1940\dots $
$I=[0.1689,0.5]$, $N=200$: $\underset{n\ge N}{\max }{D_{2}^{I}}(n)=0.046656\dots $, $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)=0.05986\dots $
$I=[0.1689,0.5]$, $N={N_{0}}$: $\underset{n\ge N}{\max }{D_{2}^{I}}(n)=0.0462198\dots $, $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)=0.05531\dots $
Proof.
Since
\[ \underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)=\underset{n\ge N}{\max }\underset{p\in I}{\max }{\sigma }^{2}R(p,n)=\underset{p\in I}{\max }{\sigma }^{2}R(p,N),\]
by using a computer we obtain the tabulated values of $\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)$.
We now derive the values of $\underset{n\ge N}{\max }{D_{2}^{I}}(n)$. It follows from the definitions of ${K_{1}}(p,n)$, ${K_{2}}(p,n)$, and ${K_{3}}(p,n)$ that the coefficient of $\frac{1}{{\sigma }^{2}}$ in $R(p,n)$ is
\[ {D_{2}}(p,n)=\frac{{\omega _{4}}(p)}{12\pi }\hspace{0.1667em}{\bigg(\frac{n}{n-1}\bigg)}^{2}+\frac{1}{\pi }\hspace{0.1667em}{\gamma _{6}}{A_{6}}(n){V_{6}}(p)+\frac{1}{12\pi }\]
or, in more detail,
\[\begin{array}{c}\displaystyle {D_{2}}(p,n)=\frac{1}{36\pi }\bigg(3|{q}^{3}+{p}^{3}-3pq|{\bigg(\frac{n}{n-1}\bigg)}^{2}+4{A_{6}}(n){(q-p)}^{2}+3\bigg)\\{} \displaystyle =:\frac{{G_{2}}(p,n)}{36\pi }.\end{array}\]
First we consider ${G_{2}}(p):=\underset{n\to \infty }{\lim }{G_{2}}(p,n)$. We have
\[ {G_{2}}(p)=3|{q}^{3}+{p}^{3}-3pq|+4{(q-p)}^{2}+3\equiv 3|6{p}^{2}-6p+1|+4{(1-2p)}^{2}+3.\]
Taking into account that
\[ |6{p}^{2}-6p+1|=\left\{\begin{array}{l@{\hskip10.0pt}l}6{p}^{2}-6p+1\hspace{1em}& \text{if}\hspace{0.2778em}p\le {p_{1}}:=\frac{3-\sqrt{3}}{6}=0.211324\dots \hspace{0.2778em},\\{} -6{p}^{2}+6p-1\hspace{1em}& \text{if}\hspace{0.2778em}p>{p_{1}},\end{array}\right.\]
we obtain
\[ {G_{2}}(p)=\left\{\begin{array}{l@{\hskip10.0pt}l}2(17{p}^{2}-17p+5)\hspace{1em}& \text{if}\hspace{0.2778em}p\le {p_{1}},\\{} -2({p}^{2}-p-2)\hspace{1em}& \text{if}\hspace{0.2778em}p>{p_{1}}.\end{array}\right.\]
Since ${G_{2}}(p)$ decreases for $p<{p_{1}}$ and increases for $p>{p_{1}}$, its maximum over an interval is attained at one of the endpoints. We have
\[ {G_{2}}(0.02)=9.3336,\hspace{1em}{G_{2}}(0.1689)=5.2273251\dots \hspace{0.2778em},\hspace{1em}{G_{2}}(0.5)=4.5.\]
Thus,
\[\begin{aligned}{}& \frac{1}{36\pi }\hspace{0.1667em}\underset{0.02\le p\le 0.5}{\max }{G_{2}}(p)=\frac{{G_{2}}(0.02)}{36\pi }=0.0825271\dots \hspace{0.2778em},\\{} & \frac{1}{36\pi }\hspace{0.1667em}\underset{0.1689\le p\le 0.5}{\max }{G_{2}}(p)=\frac{{G_{2}}(0.1689)}{36\pi }=0.04621970\dots \hspace{0.2778em}.\end{aligned}\]
Similarly, only with more effort, we get
\[\begin{array}{l}\displaystyle \underset{0.02\le p\le 0.5}{\max }{G_{2}}(p,200)={G_{2}}(0.02,200)=9.4541\dots \hspace{0.2778em},\\{} \displaystyle \underset{0.1689\le p\le 0.5}{\max }{G_{2}}(p,200)={G_{2}}(0.1689,200)=5.2767\dots \hspace{0.2778em},\\{} \displaystyle {G_{2}}(0.5,200)=4.515\dots \hspace{0.2778em},\\{} \displaystyle \underset{0.02\le p\le 0.5}{\max }{G_{2}}(p,{N_{0}})={G_{2}}(0.02,{N_{0}})=9.33364\dots \hspace{0.2778em},\\{} \displaystyle \underset{0.1689\le p\le 0.5}{\max }{G_{2}}(p,{N_{0}})={G_{2}}(0.1689,{N_{0}})=5.227344\dots \hspace{0.2778em},\\{} \displaystyle {G_{2}}(0.5,{N_{0}})=4.00006\dots \hspace{0.2778em}.\end{array}\]
Consequently,
\[\begin{array}{l}\displaystyle \frac{\underset{0.02\le p\le 0.5}{\max }{G_{2}}(p,200)}{36\pi }=0.083592\dots \hspace{0.2778em},\hspace{1em}\frac{\underset{0.1689\le p\le 0.5}{\max }{G_{2}}(p,200)}{36\pi }=0.046656\dots \hspace{0.2778em},\\{} \displaystyle \frac{\underset{0.1689\le p\le 0.5}{\max }{G_{2}}(p,{N_{0}})}{36\pi }=0.0462198\dots \hspace{0.2778em}.\end{array}\]
 □
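The ${G_{2}}$ computations in this proof are elementary to replicate; a Python sketch:

```python
import math

def G2(p):
    # G2(p) = 3|6p^2 - 6p + 1| + 4(1 - 2p)^2 + 3
    return 3 * abs(6 * p * p - 6 * p + 1) + 4 * (1 - 2 * p) ** 2 + 3

def G2n(p, n):
    # finite-n version, G2(p, n) = 36 pi D2(p, n)
    A6 = (n / (n - 2)) ** 3 * (n - 1) / n
    return (3 * abs(6 * p * p - 6 * p + 1) * (n / (n - 1)) ** 2
            + 4 * A6 * (1 - 2 * p) ** 2 + 3)

assert abs(G2(0.02) - 9.3336) < 1e-10
assert abs(G2(0.1689) - 5.2273251) < 1e-6
assert G2(0.5) == 4.5
assert abs(G2n(0.02, 200) - 9.4541) < 2e-4
assert abs(G2(0.02) / (36 * math.pi) - 0.0825271) < 1e-6
assert abs(G2(0.1689) / (36 * math.pi) - 0.0462197) < 1e-6
```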
Remark 2.
1. One can observe from the previous proof that ${G_{2}}(p,{N_{0}})\approx {G_{2}}(p)$, therefore, ${D_{2}}(p,{N_{0}})\approx \frac{{G_{2}}(p)}{36\pi }$.
2. With increasing N, the sequence ${a}^{I}(N):=\underset{n\ge N}{\max }{D_{2}^{I}}(n)$ approaches to ${a}^{I}:=\frac{1}{36\pi }\hspace{0.1667em}\underset{p\in I}{\max }{G_{2}}(p)$. For instance, by Table 2, we have for the interval $I=[0.1689,0.5]$ that ${a}^{I}(200)=0.046656\dots \hspace{0.2778em}$, ${a}^{I}({N_{0}})=0.0462198\dots \hspace{0.2778em}$ while ${a}^{I}=0.0462197\dots \hspace{0.2778em}$. The sequence ${\overline{a}}^{I}(N):=\underset{n\ge N}{\max }{\overline{D}_{2}^{I}}(n)$ tends to $0.0462197\dots \hspace{0.2778em}$ as well, but slowly, since the main term of the difference ${\overline{D}_{2}}(p,n)-\frac{{G_{2}}(p)}{36\pi }$ has the order $\frac{1}{\sqrt{n}}$.
The following bound for ${\Delta _{n}}(p)$, simpler than Theorem A, follows from (50) and Table 2.
Corollary C.
For all $p\in I=[0.1689,0.5]$ and $n\ge {N_{0}}$,
(51)
\[ {\Delta _{n}}(p)\le \frac{{\mathcal{E}_{1}}(p)}{\sigma }+\frac{0.05532}{{\sigma }^{2}}.\]
Remark 3.
Corollary C allows one to obtain the same estimate for ${C_{02}}$ as (4), but for larger n. Indeed, it is easy to verify with the help of a computer that
(52)
\[ \underset{p\in [0.1689,0.5]}{\sup }\bigg(\mathcal{E}(p)+\frac{0.05532}{\sqrt{npq}({p}^{2}+{q}^{2})}{\bigg|_{n=971000}}\bigg)<0.409954,\]
but
(53)
\[ \underset{p\in [0.1689,0.5]}{\sup }\bigg(\mathcal{E}(p)+\frac{0.05532}{\sqrt{npq}({p}^{2}+{q}^{2})}{\bigg|_{n=970000}}\bigg)>0.409954.\]

4.2 On the connection between Uspensky’s result and its refinements with the problem of estimating ${C_{02}}$

First we recall Uspensky's estimate, published in 1937 in [36]. To this end we introduce the following notation: ${S_{n}}$ is the number of occurrences of an event in a series of n Bernoulli trials with success probability p, $\mu =np$,
\[ G(x)=\varPhi (x)+\frac{q-p}{6\sqrt{2\pi }\hspace{0.1667em}\sigma }\big(1-{x}^{2}\big){e}^{-{x}^{2}/2}.\]
For every $x\in \mathbb{R}$, define
(54)
\[ {x_{n}^{\pm }}=\frac{x-\mu \pm \frac{1}{2}}{\sigma },\]
where $\sigma =\sqrt{npq}$, as before.
Uspensky’s result can be formulated in the following form.
Theorem B ([36, p. 129]).
Let ${\sigma }^{2}\ge 25$. Then for arbitrary integers $a<b$,
(55)
\[ \big|\mathbf{P}(a\le {S_{n}}\le b)-\big(G\big({b_{n}^{-}}\big)-G\big({a_{n}^{+}}\big)\big)\big|\le \frac{0.13+0.18|p-q|}{{\sigma }^{2}}+{e}^{-3\sigma /2}.\]
Many works are devoted to generalizations and refinements of (55); see, for example, [4, 14–16, 21, 24, 37].
In 2005 K. Neammanee [21] refined and generalized (55) to the case of non-identically distributed Bernoulli random variables. Let us formulate his result as applied to the case of Bernoulli trials: if ${\sigma }^{2}\ge 100$, then
(56)
\[ \big|\mathbf{P}(a\le {S_{n}}\le b)-\big(G\big({b_{n}^{-}}\big)-G\big({a_{n}^{+}}\big)\big)\big|<\frac{0.1618}{{\sigma }^{2}},\]
where ${a_{n}^{+}}$, ${b_{n}^{-}}$ are defined by the formula (54).
It follows from (56) that under the condition ${\sigma }^{2}\ge 100$,
(57)
\[ \big|\mathbf{P}({S_{n}}\le b)-G\big({b_{n}^{-}}\big)\big|\le \frac{0.1618}{{\sigma }^{2}}.\]
We may assume that $p\in (0,0.5]$. Denote for brevity $d=0.1618$. It follows from (57) and the definition of $G(\cdot )$ that
\[ \big|\mathbf{P}({S_{n}}\le b)-\varPhi \big({b_{n}^{-}}\big)\big|<\frac{|(1-{({b_{n}^{-}})}^{2})(q-p)|{e}^{-{({b_{n}^{-}})}^{2}/2}}{6\sqrt{2\pi }\sigma }+\frac{d}{{\sigma }^{2}}.\]
Taking into account that $\underset{t}{\max }|{t}^{2}-1|{e}^{-{t}^{2}/2}=1$, we get
(58)
\[ \big|\mathbf{P}({S_{n}}\le b)-\varPhi \big({b_{n}^{-}}\big)\big|\le \frac{|q-p|}{6\sqrt{2\pi }\sigma }+\frac{d}{{\sigma }^{2}}.\]
Denote ${x_{n}}=\frac{x-\mu }{\sigma }$. It is easily seen that
(59)
\[ \big|\varPhi ({b_{n}})-\varPhi \big({b_{n}^{-}}\big)\big|<\frac{{b_{n}}-{b_{n}^{-}}}{\sqrt{2\pi }}=\frac{1}{2\sqrt{2\pi }\sigma }.\]
It follows from (58), (59) that
\[ \big|\mathbf{P}({S_{n}}\le b)-\varPhi ({b_{n}})\big|<\bigg(\frac{|q-p|}{6}+\frac{1}{2}\bigg)\frac{1}{\sigma \sqrt{2\pi }}+\frac{d}{{\sigma }^{2}}=\frac{{\mathcal{E}_{1}}(p)}{\sigma }+\frac{d}{{\sigma }^{2}},\]
provided that $0<p\le 0.5$. Thus,
(60)
\[ {\Delta _{n}}(p)\le \frac{{\mathcal{E}_{1}}(p)}{\sigma }+\frac{0.1618}{{\sigma }^{2}}.\]
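Two ingredients of the derivation of (58)–(60) are easy to confirm numerically: the identity $\underset{t}{\max }|{t}^{2}-1|{e}^{-{t}^{2}/2}=1$ (attained at $t=0$) and the bound (59), which holds since ${\varPhi ^{\prime }}\le \frac{1}{\sqrt{2\pi }}$ and ${b_{n}}-{b_{n}^{-}}=\frac{1}{2\sigma }$. A Python sketch (the value $\sigma =25$ below is an arbitrary sample choice, not from the text):

```python
import math

# max over t of |t^2 - 1| e^{-t^2/2}: grid check that it equals 1, at t = 0
f = lambda t: abs(t * t - 1) * math.exp(-t * t / 2)
grid = [-6 + i * 0.001 for i in range(12001)]
assert max(f(t) for t in grid) <= 1.0 + 1e-12
assert f(0.0) == 1.0

# |Phi(b_n) - Phi(b_n^-)| < 1/(2 sqrt(2 pi) sigma), since b_n - b_n^- = 1/(2 sigma)
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
sigma = 25.0                      # sample value of sigma (assumption)
bound = 1 / (2 * math.sqrt(2 * math.pi) * sigma)
for bn in [-3.0, -1.0, 0.0, 0.7, 2.5]:
    assert abs(Phi(bn) - Phi(bn - 1 / (2 * sigma))) < bound
```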
Note that our bound (51) is more accurate than (60). To get the bound 0.409954 for ${C_{02}}$ from (60), one should take n almost five times larger than in (52). Indeed, with the help of a computer we obtain
\[ \underset{p\in [0.1689,0.5]}{\sup }\bigg(\mathcal{E}(p)+\frac{0.1618}{\sqrt{npq}\hspace{0.1667em}({p}^{2}+{q}^{2})}{\bigg|_{n=4.6\cdot {10}^{6}}}\bigg)<0.410031,\]
and
\[ \underset{p\in [0.1689,0.5]}{\sup }\bigg(\mathcal{E}(p)+\frac{0.1618}{\sqrt{npq}\hspace{0.1667em}({p}^{2}+{q}^{2})}{\bigg|_{n=4.2\cdot {10}^{6}}}\bigg)>0.410044\]
(cf. (52), (53)).
Remark 4.
In 2014 V. Senatov obtained non-uniform estimates of the approximation accuracy in the central limit theorem, and, in particular, generalized Uspensky’s result (55) to lattice distributions [24].

4.3 Proof of Theorem 1

Before proving Theorem 1, we prove Lemma 1.
Proof of Lemma 1.
By [10, Theorem 1],
(61)
\[ {\Delta _{n}}(p)\le \frac{0.33477}{\sqrt{n}}\hspace{0.1667em}\big(\varrho (p)+0.429\big).\]
Therefore, ${T_{n}}(p)\equiv \frac{\sqrt{n}\hspace{0.1667em}{\Delta _{n}}(p)}{\varrho (p)}\le 0.33477(1+\frac{0.429}{\varrho (p)})$. Since $\varrho (p)$ decreases on $(0,0.5]$, we have $\underset{p\in (0,0.1689]}{\max }\frac{1}{\varrho (p)}=\frac{1}{\varrho (0.1689)}=0.52090548\dots \hspace{0.2778em}\hspace{0.1667em}$. Consequently,
\[ \underset{p\in (0,0.1689]}{\max }{T_{n}}(p)\le 0.33477(1+0.429\cdot 0.52090549)<0.409581.\]
 □
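The numerical steps of this proof can be replicated directly; a Python sketch:

```python
import math

def rho(p):
    # rho(p) = omega(p)/sqrt(pq) = (p^2 + q^2)/sqrt(p(1-p))
    q = 1 - p
    return (p * p + q * q) / math.sqrt(p * q)

inv = 1 / rho(0.1689)
assert abs(inv - 0.52090548) < 1e-6
# the final bound of Lemma 1:
assert 0.33477 * (1 + 0.429 * 0.52090549) < 0.409581
```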
Remark 5.
If instead of [10, Theorem 1] we use other modifications of the Berry–Esseen inequality due to I. Shevtsova [25], the interval $(0,0.1689]$ on which Lemma 1 is true can be extended, i.e. one can find $b>0.1689$ such that the inequality $\underset{p\in (0,b]}{\max }{T_{n}}(p)<{C_{E}}$ holds. This narrows the interval I (see (12)), which in turn reduces the computation time on the supercomputer.
Let us indicate such b. The estimates found in [25] as applied to the particular case of Bernoulli trials can be written in the following form,
(62)
\[\begin{aligned}{}& {\Delta _{n}}(p)\le \frac{0.33554}{\sqrt{n}}\hspace{0.1667em}\big(\varrho (p)+0.415\big),\end{aligned}\]
(63)
\[\begin{aligned}{}& {\Delta _{n}}(p)\le \frac{0.3328}{\sqrt{n}}\hspace{0.1667em}\big(\varrho (p)+0.429\big).\end{aligned}\]
It is easy to verify that inequality (62) yields $b=0.174$, while (63) yields $b=0.177$.
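These checks reduce to evaluating the right-hand sides at $p=b$, since $1/\varrho (p)$ increases on $(0,0.5]$. In the Python sketch below, the closed form ${C_{E}}=\frac{\sqrt{10}+3}{6\sqrt{2\pi }}=0.409732\dots $ of the Esseen constant is supplied from outside this section (it is not restated here):

```python
import math

# Esseen constant, C_E = (sqrt(10) + 3)/(6 sqrt(2 pi)) = 0.409732...
C_E = (math.sqrt(10) + 3) / (6 * math.sqrt(2 * math.pi))

def rho(p):
    q = 1 - p
    return (p * p + q * q) / math.sqrt(p * q)

# since 1/rho(p) increases on (0, 0.5], it suffices to check p = b
assert 0.33554 * (1 + 0.415 / rho(0.174)) < C_E   # (62) with b = 0.174
assert 0.3328 * (1 + 0.429 / rho(0.177)) < C_E    # (63) with b = 0.177
```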
Proof of Theorem 1.
It follows from Corollary 1 and (12) that for all $p\in I$ the following bound holds,
(64)
\[ {T_{n}}(p)<0.40973213+4.6\cdot {10}^{-9}<0.4097321346,\hspace{1em}1\le n\le {N_{0}}.\]
Then by Lemma 1, this inequality holds for all $p\in (0,0.5]$ as well. It is not hard to see that bound (64) is also true for all $p\in (0.5,1)$. Hence, bound (5) implies Theorem 1.  □

Acknowledgments

We thank the following colleagues from Lomonosov Moscow State University for providing the opportunity to use supercomputer Blue Gene/P: V. Yu. Korolev, Head of the Department of Mathematical Statistics of the Faculty of Computational Mathematics and Cybernetics, Professor, I. G. Shevtsova, Assistant Professor of the same Department, A. V. Gulyaev, Deputy Dean of the same Faculty, and S. V. Korobkov, the Data Center administrator.
We also thank our colleagues from Computing Center FEB RAS for the opportunity to use the Center for the Collective Use “Data Center FEB RAS”.
We would also like to thank the reviewers for their useful comments.

References

[1] 
Bergström, H.: On the central limit theorem in the case of not equally distributed random variables. Skand. Aktuarietidskr. 1949, 37–62 (1949). MR0032113
[2] 
Berry, A.C.: The accuracy of the Gaussian approximation to the sum of independent variates. Trans. Am. Math. Soc. 49, 122–136 (1941). MR0003498. https://doi.org/10.2307/1990053
[3] 
Chebotarev, V.I., Kondrik, A.S., Mikhaylov, K.V.: On an extreme two-point distribution. http://arxiv.org/abs/0710.3456. Accessed 18 October 2007
[4] 
Deheuvels, K., Puri, M., Ralesku, S.: Asymptotic expansions for sums of nonidentically distributed Bernoulli random variables. J. Multivariate Anal. 28(2), 282–303 (1989). MR0991952. https://doi.org/10.1016/0047-259X(89)90111-5
[5] 
Esseen, C.-G.: On the Liapounoff limit of error in the theory of probability. Ark. Mat. Astron. Fys. 28(9), 1–19 (1942). MR0011909
[6] 
Esseen, C.-G.: A moment inequality with an application to the central limit theorem. Scand. Aktuarietidskr. J. 39, 160–170 (1956). MR0090166
[7] 
Herzog, F.: Upper bound for terms of the binomial expansion. Amer. Math. Monthly 54(8), 485–487 (1946)
[8] 
Hipp, C., Mattner, L.: On the normal approximation to symmetric binomial distributions. Teor. Veroyatn. Primen. 52, 610–617 (2008). (English, with Russian summary). – English version: Theory Probab. Appl. 52(3), 516–523. MR2743033. https://doi.org/10.1137/S0040585X97983213
[9] 
Kondrik, A., Mikhaylov, K., Nagaev, S., Chebotarev, V.: On the bound of closeness of the binomial distribution to the normal one for a limited number of observations. Preprint 2010/160, Khabarovsk: Computing Center FEB RAS (2010) (Russian). MR3136472. https://doi.org/10.1137/S0040585X97985364
[10] 
Korolev, V., Shevtsova, I.: An improvement of the Berry-Esseen inequality with applications to Poisson and mixed Poisson random sums. Obozrenie prikladnoi i promyshlennoi matematiki 17, 25–56 (2010) (Russian)
[11] 
Korolev, V., Shevtsova, I.: On the upper bound for the absolute constant in the Berry-Esseen inequality. Teor. Veroyatn. i Primen. 54, 671–695 (2009) (Russian). – English version: Theory Probab. Appl. 54(4), 638–658 (2010). MR2759643. https://doi.org/10.1137/S0040585X97984449
[12] 
Korolev, V., Shevtsova, I.: An improvement of the Berry-Esseen inequality with applications to Poisson and mixed Poisson random sums. Scand. Actuar. J. 2012(2), 81–105 (2012). MR2929524. https://doi.org/10.1080/03461238.2010.485370
[13] 
Korolev, V.Y., Shevtsova, I.G.: A new moment estimate of the convergence rate in the Lyapunov theorem. Teor. Veroyatnost. i Primenen. 55(3), 577–582 (2010) (Russian). – English version: Theory of Probability and its Applications. 55(3), 505–509 (2011). MR2768539. https://doi.org/10.1137/S0040585X97985017
[14] 
Makabe, H.: The approximations to the Poisson binomial distribution with their applications to the sampling inspection theory. II, Memo No. 19610602, Kawada Branch of Union of Japanese Scientists and Engineers, Kawada, Japan, 1961
[15] 
Makabe, H.: A normal approximation to binomial distribution. Rep. Statist. Appl. Res. Un. Japan. Sci. Engrs. 4, 47–53 (1955). MR0075490
[16] 
Mikhailov, V.G.: On refinement of the central limit theorem for sums of independent random indicators. Theory Probab. Appl. 38(3), 479–489 (1993). MR1404663. https://doi.org/10.1137/1138044
[17] 
Nagaev, S., Chebotarev, V.: On the bound of closeness of the binomial distribution to the normal one. Preprint 2009/142. Khabarovsk: Computing Center FEB RAS (2009) (Russian). MR2810156. https://doi.org/10.1134/S1064562411010030
[18] 
Nagaev, S.V., Chebotarev, V.I.: On the bound of proximity of the binomial distribution to the normal one. Dokl. Akad. Nauk 436, 26–28 (2011) (Russian). – English version: Dokl. Math. 83(1), 19–21 (2011). MR2810156. https://doi.org/10.1134/S1064562411010030
[19] 
Nagaev, S.V., Chebotarev, V.I.: On the bound of proximity of the binomial distribution to the normal one. Teor. Veroyatn. i Primen. 56, 248–278 (2011) (Russian). – English version: Theory Probab. Appl. 56(2), 213–239 (2012). MR3136472. https://doi.org/10.1137/S0040585X97985364
[20] 
Nagaev, S.V., Chebotarev, V.I., Zolotukhin, A.Y.: A non-uniform bound of the remainder term in the central limit theorem for Bernoulli random variables. J. Math. Sci., New York 214(1), 83–100 (2016). MR3476252. https://doi.org/10.1007/s10958-016-2759-4
[21] 
Neammanee, K.: A refinement of normal approximation to Poisson binomial. Int. J. Math. Math. Sci. 5, 717–728 (2005). MR2173687. https://doi.org/10.1155/IJMMS.2005.717
[22] 
Schmetterer, L.: Introduction to Mathematical Statistics. Springer, New York (1973). MR0359100
[23] 
Schulz, J.: The Optimal Berry – Esseen Constant in the Binomial Case. Dissertation. Universität Trier, Trier (2016). http://ubt.opus.hbz-nrw.de/volltexte/2016/1007/
[24] 
Senatov, V.V.: About non-uniform bounds of approximation accuracy in central limit theorem. Teor. Veroyatn. i Primen. 59(2), 276–312 (2014) (Russian). – English version: Theory Probab. Appl. 59(2), 279–310 (2015). https://doi.org/10.4213/tvp4566
[25] 
Shevtsova, I.: On the absolute constants in the Berry-Esseen type inequalities for identically distributed summands. arXiv:1111.6554 (2011). MR2848430
[26] 
Shevtsova, I.G.: On asymptotically exact constants in the Berry-Esseen-Katz inequality. Teor. Veroyatnost. i Primenen. 55(2), 271–304 (2010) (Russian). – English version: Theory of Probability and its Applications. 55(2), 225–252 (2011). MR2768905. https://doi.org/10.1137/S0040585X97984772
[27] 
Shevtsova, I.G.: On the absolute constants in the Berry-Esseen inequality and its structural and nonuniform improvements. Inform. Primen. 7(1), 124–125 (2013) (Russian)
[28] 
Shevtsova, I.G.: Optimization of the Structure of the Moment Bounds for Accuracy of Normal Approximation for the Distributions of Sums of Independent Random Variables. Dissertation on competition of a scientific degree of the doctor of physico-mathematical Sciences. Moscow State University, Moscow (2013). http://www.dissercat.com/content/optimizatsiya-struktury-momentnykh-otsenok-tochnosti-normalnoi-approksimatsii-dlya-raspredel (Russian). MR3196782. https://doi.org/10.1137/S0040585X97986096
[29] 
Shevtsova, I.G.: Sharpening of the upper-estimate of the absolute constant in the Berry-Esseen inequality. Teor. Veroyatnost. i Primenen. 51(3), 622–626 (2006) (Russian). – English version: Theory of Probability and its Applications. 51(3), 549–553 (2007). MR2325552. https://doi.org/10.1137/S0040585X97982591
[30] 
Takano, K.: A remark to a result of A.C. Berry. Res. Mem. Inst. Math. 9, 408–415 (1951)
[31] 
Tyurin, I.: New estimates of the convergence rate in the Lyapunov theorem. arXiv:0912.0726 (2009). MR2768904. https://doi.org/10.1137/S0040585X97984760
[32] 
Tyurin, I.S.: On the accuracy of the Gaussian approximation. Dokl. Akad. Nauk. 429(3), 312–316 (2009) (Russian). – English version: Doklady Mathematics. 80(3), 840–843 (2009). MR2640604. https://doi.org/10.1134/S1064562409060155
[33] 
Tyurin, I.S.: On the convergence rate in Lyapunov’s theorem. Teor. Veroyatnost. i Primenen. 55(2), 250–270 (2010) (Russian). – English version: Theory of Probability and its Applications. 55(2), 253–270 (2011). MR2768904. https://doi.org/10.1137/S0040585X97984760
[34] 
Tyurin, I.S.: Refinement of the upper bounds of the constants in Lyapunov’s theorem. Uspekhi Mat. Nauk. 65(3(393)), 201–202 (2009) (Russian). – English version: Russian Mathematical Surveys. 65, 586–588, (2010). MR2682728. https://doi.org/10.1070/RM2010v065n03ABEH004688
[35] 
Tyurin, I.S.: Some optimal bounds in CLT using zero biasing. Stat. Prob. Letters. 82(3), 514–518 (2012). MR2887466. https://doi.org/10.1016/j.spl.2011.11.010
[36] 
Uspensky, J.V.: Introduction to Mathematical Probability. McGraw Hill, New York (1937)
[37] 
Volkova, A.Y.: A refinement of the central limit theorem for sums of independent random indicators. Theory Probab. Appl. 40(4), 791–794 (1995). MR1405154. https://doi.org/10.1137/1140093
[38] 
Zolotarev, V.M.: An absolute estimate of the remainder term in the central limit theorem. Theory Probab. Appl. 11, 95–105 (1966). MR0198531. https://doi.org/10.1137/1111005
[39] 
Zolotarev, V.M.: A sharpening of the inequality of Berry – Esseen. Z. Wahrscheinlichkeitstheor. verw. Geb. 8, 332–342 (1967). MR0221570. https://doi.org/10.1007/BF00531598
[40] 
Zolotukhin, A.Y.: The program for calculating the estimate of the main term in Central Limit Theorem for the Bernoulli distribution for limited number of observations. Certificate of state registration of the program for computers No. 2015617151. 2015.06.01 (Russian)
[41] 
Shared Facility Center “Data Center of FEB RAS” (Khabarovsk). http://lits.ccfebras.ru