Asymptotic normality of the residual correlogram in the continuous-time nonlinear regression model
Volume 8, Issue 1 (2021), pp. 93–113
Alexander Ivanov and Kateryna Moskvychova

https://doi.org/10.15559/20-VMSTA170
Pub. online: 21 December 2020      Type: Research Article      Open Access

Received
28 July 2020
Revised
5 December 2020
Accepted
5 December 2020
Published
21 December 2020

Abstract

In a continuous-time nonlinear regression model, the residual correlogram is considered as an estimator of the covariance function of the stationary Gaussian random noise. For this estimator, a functional central limit theorem is proved in the space of continuous functions. The result obtained shows that the limiting sample-continuous Gaussian random process coincides with the limiting process in the central limit theorem for the standard correlogram of the random noise in the specified regression model.

1 Introduction

Estimation of the signal parameters in the “signal + noise” observation model is a classic problem of statistics of stochastic processes. If the signal (regression function) depends on the parameters nonlinearly, then this is a problem of nonlinear time-series regression analysis. Other problems arise when one needs to estimate functional characteristics of the correlated random noise in a given functional regression model. For stationary noise this can be estimation of the noise spectral density or of the covariance function. Asymptotic properties of the Whittle and Ibragimov estimators of spectral density parameters in the continuous-time nonlinear regression model were considered in Ivanov and Prykhod’ko [16, 15] and Ivanov et al. [17]. Exponential bounds for probabilities of large deviations of an estimator of the stationary Gaussian noise covariance function in a similar regression model were obtained in Ivanov et al. [11]. A stochastic asymptotic expansion and asymptotic expansions of the bias and variance of the residual correlogram in the same setting were derived in Ivanov and Moskvychova [21, 19]. In both cases it is first necessary to estimate the parameters of the regression function in order to neutralize its influence, and then to use the residual periodogram to estimate spectral density parameters and the residual correlogram to estimate the covariance function. The residual correlogram generalizes the notion of the averaged residual sum of squares in classical regression analysis.
However, unlike the residual sum of squares and the usual correlogram, results on the residual correlogram are not sufficiently represented in the statistical literature, except for a few theorems for discrete-time linear regression with stationary correlated observation errors (see Anderson [1], Hannan [8]). These statements were obtained using explicit expressions for the least squares estimator (LSE) of the unknown regression parameters. In numerous works dealing with correlograms of stationary stochastic processes, the values of the processes are centered by their sample means, which are the LSEs of their expectations. Some random-field generalizations of such centering can be found in Leonenko [22].
In this paper we prove a functional central limit theorem (CLT) in the space of continuous functions for the normalized residual correlogram as an estimator of the covariance function of the stationary Gaussian random noise in a continuous-time nonlinear regression model. The first result of this kind was obtained in Ivanov and Moskvychova [20]. In the current paper we significantly weaken the requirements on the regression function under which the indicated CLT holds, namely, we bring them closer to the conditions of the asymptotic normality of the LSE [18]. In addition, we replace the condition on the existence of a certain moment of the noise spectral density by a much weaker condition on the admissibility of a weighted spectral density with respect to the spectral measure of the regression function. In the last section of the paper we apply our result to trigonometric regression.

2 Setting

Suppose the observations are of the form
(1)
\[ X(t)=g(t,{\theta ^{0}})+\varepsilon (t),\hspace{2.5pt}\hspace{2.5pt}t\in [0,+\infty ),\]
where $g:(-\gamma ,+\infty )\times {\Theta _{\gamma }}\to \mathbb{R}$ is a continuous function depending on unknown parameter ${\theta ^{0}}=({\theta _{1}^{0}},\dots ,{\theta _{q}^{0}})\in \Theta \subset {\mathbb{R}^{q}}$, Θ is an open convex set, ${\Theta _{\gamma }}={\textstyle\bigcup _{\| e\| \le 1}}(\Theta +\gamma e)$, γ is some positive number, and ε is a random noise described below.
Remark 1.
The assumption that g is defined, in t, on the domain $(-\gamma ,+\infty )$ is of a technical nature and does not affect possible applications. This assumption makes it possible to formulate the condition RN1(i) which is used in the proof of Lemma 3.
N1. (i) $\varepsilon =\left\{\varepsilon (t),\hspace{2.5pt}t\in \mathbb{R}\right\}$ is a real sample continuous stationary Gaussian process defined on a complete probability space $\left(\Omega ,\mathfrak{F},P\right)$, $\textit{E}\varepsilon (0)=0$;
(ii) covariance function $\textit{B}=\left\{\textit{B}(t),\hspace{2.5pt}t\in \mathbb{R}\right\}$ of the process ε is absolutely integrable.
Obviously, if $\textit{B}\in {L_{1}}(\mathbb{R})$, then the process ε has a bounded and continuous spectral density $f=\left\{f(\lambda ),\hspace{2.5pt}\lambda \in \mathbb{R}\right\}$.
Definition 1.
The LSE of the unknown parameter ${\theta ^{0}}\in \Theta $ obtained from observations of the process $\left\{X(t),\hspace{2.5pt}t\in [0,T]\right\}$ is said to be any random vector ${\widehat{\theta }_{T}}=({\widehat{\theta }_{1T}},\dots ,{\widehat{\theta }_{qT}})\in {\Theta ^{c}}$ (${\Theta ^{c}}$ is the closure of Θ in ${\mathbb{R}^{q}}$) such that
(2)
\[ {Q_{T}}({\widehat{\theta }_{T}})=\underset{\tau \in {\Theta ^{c}}}{\min }{Q_{T}}(\tau ),\hspace{2.5pt}\hspace{2.5pt}{Q_{T}}(\tau )={\underset{0}{\overset{T}{\int }}}{\left[X(t)-g(t,\tau )\right]^{2}}dt,\]
provided that the minimum in (2) is attained a.s.
The existence of at least one such vector follows from the results of Pfanzagl [23].
As an estimator of $\textit{B}$ we take the residual correlogram built from the residuals
\[ \widehat{X}(t)=X(t)-g(t,{\widehat{\theta }_{T}}),\hspace{2.5pt}\hspace{2.5pt}t\in [0,T+H],\]
namely:
(3)
\[ {\textit{B}_{T}}(z,{\widehat{\theta }_{T}})={T^{-1}}{\underset{0}{\overset{T}{\int }}}\widehat{X}(t+z)\widehat{X}(t)dt,\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\]
$H>0$ is some fixed number. In particular, ${\textit{B}_{T}}(0,{\widehat{\theta }_{T}})={T^{-1}}{Q_{T}}({\widehat{\theta }_{T}})$ is the LSE of the variance $\textit{B}(0)$ of the stochastic process ε. On the other hand,
(4)
\[ {\textit{B}_{T}}(z,{\theta ^{0}})={\textit{B}_{T}}(z)={T^{-1}}{\underset{0}{\overset{T}{\int }}}\varepsilon (t+z)\varepsilon (t)dt,\hspace{2.5pt}z\in [0,H],\]
is the correlogram of the process ε.
From the condition N1 it follows that integrals (3) and (4) can be considered as Riemann integrals based on single paths of the corresponding processes and ${\textit{B}_{T}}(z,{\widehat{\theta }_{T}})$, ${\textit{B}_{T}}(z)$, $z\in [0,H]$, are sample continuous stochastic processes.
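In numerical work these Riemann integrals are approximated by sums over a uniform grid. The following minimal sketch (Python with NumPy is assumed; the grid step, the lag grid, and the function names are illustrative, not part of the model) computes the residual correlogram (3) on a lag grid.

import numpy as np

def residual_correlogram(X, g_hat, dt, H):
    # Riemann-sum approximation of (3): the residuals are formed on the grid
    # covering [0, T+H], and the product is averaged over [0, T] for each
    # lag z = j*dt, j = 0, ..., m, with m = H/dt.
    eps_hat = X - g_hat               # residuals \hat{X}(t) = X(t) - g(t, theta_hat)
    m = int(round(H / dt))
    nT = len(X) - m                   # grid points of the averaging interval [0, T]
    T = nT * dt
    B = np.empty(m + 1)
    for j in range(m + 1):
        B[j] = np.dot(eps_hat[j:j + nT], eps_hat[:nT]) * dt / T
    return B

Passing the true values $g(t,{\theta ^{0}})$ instead of $g(t,{\widehat{\theta }_{T}})$ yields in the same way the correlogram (4) of the noise itself; with the residuals, B[0] approximates ${T^{-1}}{Q_{T}}({\widehat{\theta }_{T}})$.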
Consider the normalized residual correlogram. Since $\widehat{X}(t)=\varepsilon (t)-\left(g(t,{\widehat{\theta }_{T}})-g(t,{\theta ^{0}})\right)$, expanding the product in (3) we obtain
(5)
\[\begin{array}{l}\displaystyle {X_{T}}(z)={T^{1/2}}\left({\textit{B}_{T}}(z,{\widehat{\theta }_{T}})-\textit{B}(z)\right)={Y_{T}}(z)+{R_{T}}(z),\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\\ {} \displaystyle {Y_{T}}(z)={T^{1/2}}\left({\textit{B}_{T}}(z)-\textit{B}(z)\right),\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\\ {} \displaystyle {R_{T}}(z)={T^{-1/2}}{I_{1T}}(z)-{T^{-1/2}}{I_{2T}}(z)-{T^{-1/2}}{I_{3T}}(z),\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\end{array}\]
with
(6)
\[ {I_{1T}}(z)={\underset{0}{\overset{T}{\int }}}(g(t+z,{\widehat{\theta }_{T}})-g(t+z,{\theta ^{0}}))(g(t,{\widehat{\theta }_{T}})-g(t,{\theta ^{0}}))dt,\]
(7)
\[ {I_{2T}}(z)={\underset{0}{\overset{T}{\int }}}\varepsilon (t+z)(g(t,{\widehat{\theta }_{T}})-g(t,{\theta ^{0}}))dt,\]
(8)
\[ {I_{3T}}(z)={\underset{0}{\overset{T}{\int }}}\varepsilon (t)(g(t+z,{\widehat{\theta }_{T}})-g(t+z,{\theta ^{0}}))dt.\]
We will consider the processes ${X_{T}}$, ${Y_{T}}$, and ${R_{T}}$ as random elements in the measurable space $\left(C([0,H]),\mathfrak{B}\right)$ of continuous functions on $[0,H]$ with Borel σ-algebra $\mathfrak{B}$.
Let Z be a random element in the indicated space. The distribution of Z is the probability measure $P{Z^{-1}}$ on $\left(C([0,H]),\mathfrak{B}\right)$.
Definition 2.
A family $\left\{{U_{T}}\right\}$ of random elements converges in distribution, as $T\to \infty $, to a random element U in the space $C([0,H])$ (we write ${U_{T}}\stackrel{\mathcal{D}}{\longrightarrow }U$), if the distributions $P{U_{T}^{-1}}$ of elements ${U_{T}}$ converge weakly, as $T\to \infty $, to the distribution $P{U^{-1}}$ of the element U.
Since $f\in {L_{2}}(\mathbb{R})$ under assumption N1(ii), it is well known that for any ${z_{1}},{z_{2}}\in [0,H]$, as $T\to \infty $,
(9)
\[ \textit{E}{Y_{T}}({z_{1}}){Y_{T}}({z_{2}})\longrightarrow b({z_{1}},{z_{2}})=4\pi \underset{\mathbb{R}}{\int }{f^{2}}(\lambda )\cos \lambda {z_{1}}\cos \lambda {z_{2}}\hspace{2.5pt}d\lambda ,\]
and (see, e.g., Buldygin [3]) all the finite-dimensional distributions of the processes ${Y_{T}}$ weakly converge, as $T\to \infty $, to the Gaussian process Y with zero mean and covariance function (9).
We assume that the process Y is separable.
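The limit covariance (9) is explicit once f is given and can be evaluated by quadrature. A minimal sketch follows, assuming, purely for illustration, Ornstein–Uhlenbeck-type noise with covariance function $\textit{B}(t)={\sigma ^{2}}{e^{-\alpha |t|}}$ and spectral density $f(\lambda )={\sigma ^{2}}\alpha /(\pi ({\alpha ^{2}}+{\lambda ^{2}}))$; SciPy's quad performs the integration.

import numpy as np
from scipy.integrate import quad

sigma2, alpha = 1.0, 0.5                        # illustrative noise parameters

def f(lam):
    # spectral density of B(t) = sigma2 * exp(-alpha |t|)
    return sigma2 * alpha / (np.pi * (alpha**2 + lam**2))

def b(z1, z2):
    # the limit covariance (9): 4*pi * int f^2(lam) cos(lam z1) cos(lam z2) dlam
    integrand = lambda lam: f(lam)**2 * np.cos(lam * z1) * np.cos(lam * z2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return 4.0 * np.pi * val

print(b(0.0, 0.0))   # limit variance of Y_T(0); equals 2*sigma2**2/alpha = 4.0 here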
Introduce the function (see Section 6.4 of Chapter 6 in Buldygin and Kozachenko [4])
\[ q(z)={\left(\underset{\mathbb{R}}{\int }{f^{2}}(\lambda ){\sin ^{2}}\frac{\lambda z}{2}d\lambda \right)^{1/2}},\hspace{2.5pt}z\ge 0.\]
If $f\in {L_{2}}(\mathbb{R})$, the function q generates pseudometrics
\[ \rho ({z_{1}},{z_{2}})=q(|{z_{1}}-{z_{2}}|),\hspace{2.5pt}\sqrt{\rho }({z_{1}},{z_{2}})=\sqrt{\rho ({z_{1}},{z_{2}})},\hspace{2.5pt}{z_{1}},{z_{2}}\in \mathbb{R}.\]
Denote by ${H_{\sqrt{\rho }}}(\varepsilon )={H_{\sqrt{\rho }}}([0,1],\varepsilon )$, $\varepsilon >0$, the metric entropy of the interval $[0,1]$ generated by the pseudometric $\sqrt{\rho }$, and by ${\textstyle\int _{0+}}$ the integral over an arbitrary neighborhood of zero $(0,\delta )$, $\delta >0$.
Below we formulate a theorem obtained in Buldygin and Kozachenko [4] (Theorem 6.4.1) under milder conditions than ours. In the absence of the assumption of sample continuity of the process ε, it follows from the condition $f\in {L_{2}}(\mathbb{R})$ that the correlograms can be understood as Riemann mean-square integrals that are continuous in probability with respect to the parameter z. Due to Lemma 6.4.1 in [4] we can conclude that the processes ${Y_{T}},\hspace{2.5pt}T>0$, are likewise continuous in probability. Thus, it can be assumed that the processes ${Y_{T}},\hspace{2.5pt}T>0$, are separable.
Theorem 1.
Let $f\in {L_{2}}(\mathbb{R})$ and
\[\begin{aligned}{}\mathbf{N}\mathbf{2}.\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}& \underset{0+}{\int }{H_{\sqrt{\rho }}}(\varepsilon )d\varepsilon <\infty .\end{aligned}\]
Then for any $H>0$ $\mathbf{I})$ $Y\in C([0,H])$ a.s.; $\mathbf{II})$ ${Y_{T}}\in C([0,H])$ a.s., $T>0$; $\mathbf{III})$ ${Y_{T}}\stackrel{\mathcal{D}}{\longrightarrow }Y$, as $T\to \infty $, in the space $C([0,H])$.
In particular, for any $x>0$
\[ \underset{T\to \infty }{\lim }\mathsf{P}\left\{\underset{z\in [0,H]}{\sup }|{Y_{T}}(z)|>x\right\}=\mathsf{P}\left\{\underset{z\in [0,H]}{\sup }|Y(z)|>x\right\}.\]
Corollary 1.
The conclusion of Theorem 1 is true under conditions N1 and N2 (see Theorem 6.4.1 in [4]).
As shown in Remark 6.4.1 in [4], condition N2 is satisfied if for some $\delta >0$
(10)
\[ {\underset{0}{\overset{\infty }{\int }}}{f^{2}}(\lambda ){\ln ^{4+\delta }}(1+\lambda )d\lambda <\infty .\]
In turn, (10) follows from the condition $f\in {L_{2}}(\mathbb{R})$ under natural restrictions on the decrease of the spectral density f at infinity (see Theorem 6.4.2 in [4]).
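As a sanity check, the finiteness of the integral in (10) is easy to confirm numerically for a concrete square-integrable density; the sketch below again assumes the Ornstein–Uhlenbeck-type density from the previous sketch, with illustrative values of the parameters and of δ.

import numpy as np
from scipy.integrate import quad

sigma2, alpha, delta = 1.0, 0.5, 0.5           # illustrative parameters

def f(lam):
    # assumed Ornstein-Uhlenbeck-type spectral density, as in the sketch above
    return sigma2 * alpha / (np.pi * (alpha**2 + lam**2))

# condition (10): int_0^inf f^2(lam) ln^{4+delta}(1 + lam) dlam < infinity
integrand = lambda lam: f(lam)**2 * np.log1p(lam)**(4.0 + delta)
val, err = quad(integrand, 0.0, np.inf)
print(val, err)                                 # a finite value confirms (10)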
Taking into account Theorem 1, we state a simple but important fact that is a rephrasing for $C([0,H])$ of Theorem 3.1 in Billingsley [2, p. 27]. For functions $a(z),\hspace{2.5pt}z\in [0,H]$, we will write $\| a\| ={\sup _{z\in [0,H]}}|a(z)|$.
Lemma 1.
If ${Y_{T}}\stackrel{\mathcal{D}}{\longrightarrow }Y$ and
(11)
\[ \left\| {R_{T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty ,\]
then ${X_{T}}\stackrel{\mathcal{D}}{\longrightarrow }Y$, as $T\to \infty $.
Thus, to obtain a functional theorem in $C([0,H])$ on asymptotic normality of the normalized residual correlogram ${X_{T}}$ it is required to prove (11).

3 Conditions

To prove (11) we need some regularity conditions imposed on the regression function g, the spectral density f, and the LSE ${\widehat{\theta }_{T}}$.
Assume that for any $t>-\gamma $ the function $g(t,\theta )$ is twice continuously differentiable with respect to $\theta \in {\Theta _{\gamma }}$ and, moreover, the derivatives ${g_{i}}(t,\theta )=(\partial /\partial {\theta _{i}})g(t,\theta )$, ${g_{ij}}(t,\theta )=\left({\partial ^{2}}/\partial {\theta _{i}}\partial {\theta _{j}}\right)g(t,\theta )$, $i,j=\overline{1,q}$, are jointly continuous in $(t,\theta )$. Denote
\[ {d_{T}}(\theta )=\mathrm{diag}\left({d_{iT}}(\theta ),\hspace{2.5pt}i=\overline{1,q}\right),\hspace{2.5pt}{d_{iT}^{2}}(\theta )={\underset{0}{\overset{T}{\int }}}{g_{i}^{2}}(t,\theta )dt,\hspace{2.5pt}\theta \in \Theta ,\hspace{2.5pt}i=\overline{1,q},\]
and suppose that
\[ \underset{T\to \infty }{\liminf }{T^{-1}}{d_{iT}^{2}}(\theta )>0,\hspace{2.5pt}\theta \in \Theta ,\hspace{2.5pt}i=\overline{1,q},\]
in particular, these limits can be infinite. Let also
\[ {d_{ij,T}}(\theta )={\underset{0}{\overset{T}{\int }}}{g_{ij}^{2}}(t,\theta )dt,\hspace{2.5pt}\theta \in \Theta ,\hspace{2.5pt}i,j=\overline{1,q}.\]
Introduce now the normalized LSE ${\widehat{u}_{T}}={d_{T}}({\theta ^{0}})\left({\widehat{\theta }_{T}}-{\theta ^{0}}\right)$, ${\theta ^{0}}\in \Theta $, and the notation $h(t,u)=g(t,{\theta ^{0}}+{d_{T}^{-1}}({\theta ^{0}})u)$, ${h_{i}}(t,u)={g_{i}}(t,{\theta ^{0}}+{d_{T}^{-1}}({\theta ^{0}})u)$, ${h_{ij}}(t,u)={g_{ij}}(t,{\theta ^{0}}+{d_{T}^{-1}}({\theta ^{0}})u)$, $u\in {U_{T}}({\theta ^{0}})={d_{T}}({\theta ^{0}})\left({\Theta ^{c}}-{\theta ^{0}}\right)$, $i,j=\overline{1,q}$; $V(r)=\left\{u\in {\mathbb{R}^{q}}:\| u\| <r\right\}$;
(12)
\[\begin{aligned}{}{\Phi _{T}}\left({\theta _{1}},{\theta _{2}}\right)& ={\underset{0}{\overset{T}{\int }}}{\left(g(t,{\theta _{1}})-g(t,{\theta _{2}})\right)^{2}}dt,\hspace{2.5pt}{\theta _{1}},{\theta _{2}}\in {\Theta ^{c}};\\ {} {\Psi _{iT}}\left({z_{1}},{z_{2}};\theta \right)& ={\underset{0}{\overset{T}{\int }}}{\left({g_{i}}(t+{z_{1}},\theta )-{g_{i}}(t+{z_{2}},\theta )\right)^{2}}dt,\\ {} & \hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}{z_{1}},{z_{2}}\ge 0,\hspace{2.5pt}\theta \in {\Theta ^{c}},\hspace{2.5pt}i=\overline{1,q}.\end{aligned}\]
Instead of the words “for all sufficiently large T” we will write below “for $T>{T_{0}}$”. Assume that the following conditions are satisfied.
R1. There exists a constant ${k_{0}}<\infty $ such that for any ${\theta ^{0}}\in \Theta $ and $T>{T_{0}}$, where ${k_{0}}$ and ${T_{0}}$ may depend on ${\theta ^{0}}$,
(13)
\[ {\Phi _{T}}(\theta ,{\theta ^{0}})\le {k_{0}}\| {d_{T}}({\theta ^{0}})(\theta -{\theta ^{0}}){\| ^{2}},\hspace{2.5pt}\theta \in {\Theta ^{c}}.\]
R2. For any $r\ge 0$, ${\theta ^{0}}\in \Theta $, and $T>{T_{0}}(r)$
(i) ${d_{iT}^{-1}}({\theta ^{0}}){\sup _{t\in [0,T],u\in {V^{c}}(r)\cap {U_{T}}({\theta ^{0}})}}|{h_{i}}(t,u)|\le {k^{i}}(r){T^{-1/2}}$, $i=\overline{1,q}$;
(ii) ${d_{ij,T}^{-1}}({\theta ^{0}}){\sup _{t\in [0,T],u\in {V^{c}}(r)\cap {U_{T}}({\theta ^{0}})}}|{h_{ij}}(t,u)|\le {k^{ij}}(r){T^{-1/2}}$, $i,j=\overline{1,q}$;
(iii) ${d_{iT}^{-1}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}}){d_{ij,T}}({\theta ^{0}})\le {\widetilde{k}^{ij}}{T^{-1/2}}$, $i,j=\overline{1,q}$, with constants ${k^{i}}$, ${k^{ij}}$, ${\widetilde{k}^{ij}}$, possibly, depending on ${\theta ^{0}}$.
R3. There exist constants ${\overline{k}_{i}}<\infty $, $i=\overline{1,q}$, such that for any ${\theta ^{0}}\in \Theta $ and $T>{T_{0}}$, where ${\overline{k}_{i}}$ and ${T_{0}}$ may depend on ${\theta ^{0}}$,
(14)
\[ {d_{iT}^{-2}}({\theta ^{0}}){\Psi _{iT}}({z_{1}},{z_{2}};{\theta ^{0}})\le {\overline{k}_{i}}|{z_{1}}-{z_{2}}{|^{2}},\hspace{2.5pt}{z_{1}},{z_{2}}\in [0,H].\]
Lemma 2.
If condition R2(i) is fulfilled for $r=0$, then for any fixed $H>0$ and ${\theta ^{0}}\in \Theta $
\[ {d_{i,T+H}}({\theta ^{0}}){d_{iT}^{-1}}({\theta ^{0}})\longrightarrow 1,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty ,\hspace{2.5pt}i=\overline{1,q}.\]
Proof.
We have
\[\begin{array}{l}\displaystyle {q_{i}}={d_{i,T+H}^{2}}({\theta ^{0}}){d_{iT}^{-2}}({\theta ^{0}})=1+\left({\underset{T}{\overset{T+H}{\int }}}{g_{i}^{2}}(t,{\theta ^{0}})dt\right){d_{iT}^{-2}}({\theta ^{0}})\le \\ {} \displaystyle \le 1+H\underset{0\le t\le T+H}{\sup }|{g_{i}}(t,{\theta ^{0}}){|^{2}}{d_{i,T+H}^{-2}}({\theta ^{0}}){q_{i}}\le 1+{\left({k^{i}}(0)\right)^{2}}\frac{H}{T+H}{q_{i}}.\end{array}\]
Then for $T>{T_{0}}$
\[ 1\le {q_{i}}\le \frac{1}{1-{\left({k^{i}}(0)\right)^{2}}H{(T+H)^{-1}}},\hspace{2.5pt}\mathrm{and}\hspace{2.5pt}{q_{i}}\to 1,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{2.5pt}i=\overline{1,q}.\]
 □
For the basic observation model (1), we introduce a family of matrix-valued measures ${\mu _{T}}(d\lambda ;\theta )$, $\theta \in \Theta $, $T>0$, on $\left(\mathbb{R},\mathfrak{L}(\mathbb{R})\right)$, where $\mathfrak{L}(\mathbb{R})$ is the σ-algebra of Lebesgue measurable subsets of $\mathbb{R}$, with matrix densities ${\big({\mu _{T}^{jl}}(\lambda ,\theta )\big)_{j,l=1}^{q}}$, $\theta \in \Theta $, with respect to the Lebesgue measure,
(15)
\[\begin{array}{l}\displaystyle {\mu _{T}^{jl}}(\lambda ,\theta )={g_{T}^{j}}(\lambda ,\theta )\overline{{g_{T}^{l}}(\lambda ,\theta )}{\left(\underset{\mathbb{R}}{\int }{\left|{g_{T}^{j}}(\lambda ,\theta )\right|^{2}}d\lambda \underset{\mathbb{R}}{\int }{\left|{g_{T}^{l}}(\lambda ,\theta )\right|^{2}}d\lambda \right)^{-1/2}},\\ {} \displaystyle {g_{T}^{j}}(\lambda ,\theta )={\underset{0}{\overset{T}{\int }}}{e^{i\lambda t}}{g_{j}}(t,\theta )dt,\hspace{2.5pt}\hspace{2.5pt}j,l=\overline{1,q}.\end{array}\]
By the Plancherel identity,
(16)
\[ \underset{\mathbb{R}}{\int }{g_{T}^{j}}(\lambda ,\theta )\overline{{g_{T}^{l}}(\lambda ,\theta )}d\lambda =2\pi {\underset{0}{\overset{T}{\int }}}{g_{j}}(t,\theta ){g_{l}}(t,\theta )dt,\]
in particular,
(17)
\[ {d_{jT}^{2}}(\theta )={(2\pi )^{-1}}\underset{\mathbb{R}}{\int }{\left|{g_{T}^{j}}(\lambda ,\theta )\right|^{2}}d\lambda ,\hspace{2.5pt}j=\overline{1,q}.\]
$\mathbf{R}\mathbf{4}.$ The family of measures ${\mu _{T}}(d\lambda ;\theta )$ converges weakly to a positive definite matrix measure $\mu (d\lambda ;\theta )={\left({\mu ^{jl}}(d\lambda ;\theta )\right)_{j,l=1}^{q}}$, as $T\to \infty $, $\theta \in \Theta $.
Condition $\mathbf{R}\mathbf{4}$ means that the elements ${\mu ^{jl}}(d\lambda ;\theta )$ of the matrix measure $\mu (d\lambda ;\theta )$ are complex-valued signed measures of bounded variation, the matrix $\mu (A;\theta )$ is positive semi-definite for any $A\in \mathfrak{L}(\mathbb{R})$, and $\mu (\mathbb{R};\theta )$ is a positive definite matrix, $\theta \in \Theta $.
Definition 3.
The measure $\mu (d\lambda ;\theta ),\hspace{2.5pt}\theta \in \Theta $, is called the spectral measure of the regression function $g(t,\theta )$, or, more precisely, the spectral measure of the gradient $\nabla g(t,\theta ),\hspace{2.5pt}\theta \in \Theta $, see Grenander [6], Holevo [9], Ibragimov and Rozanov [10], and Ivanov and Leonenko [14].
Taking into account (16), (17) and condition $\mathbf{R}\mathbf{4}$ we get
\[\begin{array}{l}\displaystyle {\mu _{T}}(\mathbb{R};\theta )=\underset{\mathbb{R}}{\int }{\mu _{T}}(d\lambda ;\theta )={\left({d_{jT}^{-1}}(\theta ){d_{lT}^{-1}}(\theta ){\underset{0}{\overset{T}{\int }}}{g_{j}}(t,\theta ){g_{l}}(t,\theta )dt\right)_{j,l=1}^{q}}=\\ {} \displaystyle ={J_{T}}(\theta )\longrightarrow J(\theta )=\underset{\mathbb{R}}{\int }\mu (d\lambda ;\theta )>0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{2.5pt}\theta \in \Theta .\end{array}\]
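For a concrete regression function, the convergence ${J_{T}}(\theta )\to J(\theta )$ can be observed numerically by approximating the densities (15) on a truncated frequency window. A minimal sketch follows (Python with NumPy is assumed; the toy gradient $({g_{1}},{g_{2}})=(1,t)$ of a linear trend and all grid sizes are illustrative, not part of the conditions):

import numpy as np

T, n = 20.0, 2000
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]
grad = np.vstack([np.ones_like(t), t])          # assumed toy gradient (g_1, g_2)

lam = np.linspace(-8.0, 8.0, 801)               # truncated frequency window
dlam = lam[1] - lam[0]
# g_T^j(lambda) = int_0^T e^{i lam t} g_j(t, theta) dt, cf. (15), by Riemann sums
gT = np.exp(1j * np.outer(lam, t)) @ grad.T * dt            # shape (801, 2)

norm = np.sqrt(np.sum(np.abs(gT)**2, axis=0) * dlam)        # window L2 norms
mu_dens = (gT[:, :, None] * gT[:, None, :].conj()) / np.outer(norm, norm)

J_T = (np.sum(mu_dens, axis=0) * dlam).real     # approximates mu_T(R; theta)
print(J_T)

The diagonal entries equal one by the window normalization, cf. (17), while the off-diagonal entry should stabilize near $\sqrt{3}/2$, its limit value for this particular gradient.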
AN. The random vector ${d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})$ is asymptotically, as $T\to \infty $, normal with zero mean and covariance matrix
\[ {\Sigma _{LSE}}=2\pi {\left(\underset{\mathbb{R}}{\int }\mu (d\lambda ;{\theta ^{0}})\right)^{-1}}\underset{\mathbb{R}}{\int }f(\lambda )\mu (d\lambda ;{\theta ^{0}}){\left(\underset{\mathbb{R}}{\int }\mu (d\lambda ;{\theta ^{0}})\right)^{-1}}.\]
Sufficient conditions for the fulfillment of assumption AN are bulky. These conditions are given in [14] and, for example, in Ivanov et al. [18]. In particular, conditions R2 and R4 form part of the conditions in [18].
Consider the diagonal elements (measures) ${\mu ^{jj}},\hspace{2.5pt}j=\overline{1,q}$, of the matrix spectral measure μ.
Definition 4 (Billingsley [2], Ibragimov and Rozanov [10]).
A function $b(\lambda )$, $\lambda \in \mathbb{R}$, is called ${\mu ^{jj}}$-admissible if it is integrable with respect to ${\mu ^{jj}}$ and
(18)
\[ \underset{\mathbb{R}}{\int }b(\lambda ){\mu _{T}^{jj}}(d\lambda ;\theta )\longrightarrow \underset{\mathbb{R}}{\int }b(\lambda ){\mu ^{jj}}(d\lambda ;\theta ),\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\hspace{2.5pt}\theta \in \Theta .\]
RN. For some $\delta \in (0,1]$ the function $b(\lambda )=|\lambda {|^{1+\delta }}f(\lambda )$, $\lambda \in \mathbb{R}$, is ${\mu ^{jj}}$-admissible, $j=\overline{1,q}$.
Consider some sufficient conditions for the ${\mu ^{jj}}$-admissibility of the function b from assumption RN under condition N1(ii).
\[\begin{aligned}{}\mathbf{N}\mathbf{3}.\hspace{2em}\hspace{2em}\hspace{2em}\hspace{2em}& \underset{\lambda \in \mathbb{R}}{\sup }{\left|\lambda \right|^{1+\delta }}f(\lambda )<\infty .\end{aligned}\]
Under condition N3, relation (18) follows from N1(ii), R4, and the definition of weak convergence. Denote
(19)
\[ (\partial /\partial t){g_{j}}(t,\theta )={g^{\prime }_{j}}(t,\theta ),\hspace{2.5pt}\hspace{2.5pt}{\tilde{g}_{T}^{j}}(\lambda ,\theta )={\underset{0}{\overset{T}{\int }}}{e^{i\lambda t}}{g^{\prime }_{j}}(t,\theta )dt\]
and introduce the following condition.
RN1. (i) The functions ${g_{j}}(t,\theta ),\hspace{2.5pt}\theta \in \Theta $, are continuously differentiable with respect to $t>-\gamma $, and there exists ${\lambda _{0}}={\lambda _{0}}(\theta )>0$ such that for $T>{T_{0}}(\theta )$
(20)
\[ \underset{|\lambda |>{\lambda _{0}}}{\sup }{d_{jT}^{-2}}(\theta )|{\tilde{g}_{T}^{j}}(\lambda ,\theta ){|^{2}}\le {h_{j}}(\theta )<\infty ,\hspace{2.5pt}j=\overline{1,q},\hspace{2.5pt}\theta \in \Theta .\]
(ii) There exists ${\lambda _{1}}>0$ such that for $\lambda >{\lambda _{1}}$ the function ${\lambda ^{1+\delta }}f(\lambda )$ strictly increases, and
\[ |\lambda {|^{1+\delta }}f(\lambda )\longrightarrow \infty ,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}\lambda \to \infty .\]
\[\begin{aligned}{}(\mathbf{iii})\hspace{2em}\hspace{2em}& \underset{\mathbb{R}}{\int }|\lambda {|^{1+\delta }}f(\lambda ){\mu ^{jj}}(d\lambda ;\theta )<\infty ,\hspace{2.5pt}j=\overline{1,q},\hspace{2.5pt}\theta \in \Theta .\end{aligned}\]
Lemma 3.
Conditions N1(ii), RN1, R2(i) fulfilled for $r=0$, and R4 imply condition RN.
Proof.
For $M>0$ consider the cut-off function
\[\begin{array}{l}\displaystyle {b^{M}}(\lambda )=b(\lambda )\chi \{\lambda :b(\lambda )\le M\}+M\chi \{\lambda :b(\lambda )>M\},\\ {} \displaystyle \left|\underset{\mathbb{R}}{\int }b(\lambda ){\mu _{T}^{jj}}(d\lambda ;\theta )-\underset{\mathbb{R}}{\int }b(\lambda ){\mu ^{jj}}(d\lambda ;\theta )\right|\le \\ {} \displaystyle \le \left|\underset{\mathbb{R}}{\int }b(\lambda ){\mu _{T}^{jj}}(d\lambda ;\theta )-\underset{\mathbb{R}}{\int }{b^{M}}(\lambda ){\mu _{T}^{jj}}(d\lambda ;\theta )\right|+\\ {} \displaystyle +\left|\underset{\mathbb{R}}{\int }{b^{M}}(\lambda ){\mu _{T}^{jj}}(d\lambda ;\theta )-\underset{\mathbb{R}}{\int }{b^{M}}(\lambda ){\mu ^{jj}}(d\lambda ;\theta )\right|+\\ {} \displaystyle +\left|\underset{\mathbb{R}}{\int }{b^{M}}(\lambda ){\mu ^{jj}}(d\lambda ;\theta )-\underset{\mathbb{R}}{\int }b(\lambda ){\mu ^{jj}}(d\lambda ;\theta )\right|=\\ {} \displaystyle ={K_{1j}}(T,M)+{K_{2j}}(T,M)+{K_{3j}}(M),\hspace{2.5pt}j=\overline{1,q}.\end{array}\]
By the Lebesgue monotone convergence theorem, from RN1(iii) we get
(21)
\[ {K_{3j}}(M)\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}M\to \infty .\]
Under conditions N1(ii) and R4 for any fixed $M>0$
(22)
\[ {K_{2j}}(T,M)\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
On the other hand,
\[\begin{array}{l}\displaystyle {K_{1j}}(T,M)={(2\pi )^{-1}}\underset{\{\lambda :b(\lambda )>M\}}{\int }(b(\lambda )-M){d_{jT}^{-2}}(\theta )|{g_{T}^{j}}(\lambda ,\theta ){|^{2}}d\lambda \le \\ {} \displaystyle \le {(2\pi )^{-1}}\underset{\{\lambda :b(\lambda )>M\}}{\int }b(\lambda ){d_{jT}^{-2}}(\theta )|{g_{T}^{j}}(\lambda ,\theta ){|^{2}}d\lambda .\end{array}\]
Integrating by parts we obtain (see (15) and RN1(i))
\[\begin{array}{l}\displaystyle |{g_{T}^{j}}(\lambda ,\theta )|=|\lambda {|^{-1}}\left|{e^{i\lambda T}}{g_{j}}(T,\theta )-{g_{j}}(0,\theta )-{\tilde{g}_{T}^{j}}(\lambda ,\theta )\right|\le \\ {} \displaystyle \le |\lambda {|^{-1}}\left(2\underset{t\in [0,T]}{\sup }|{g_{j}}(t,\theta )|+|{\tilde{g}_{T}^{j}}(\lambda ,\theta )|\right).\end{array}\]
Thus, under condition R2(i) with $r=0$,
\[\begin{array}{l}\displaystyle {d_{jT}^{-2}}(\theta ){\left|{g_{T}^{j}}(\lambda ,\theta )\right|^{2}}\le {\lambda ^{-2}}\left(8{d_{jT}^{-2}}(\theta )\underset{t\in [0,T]}{\sup }{\left|{g_{j}}(t,\theta )\right|^{2}}+2{d_{jT}^{-2}}(\theta ){\left|{\widetilde{g}_{T}^{j}}(\lambda ,\theta )\right|^{2}}\right),\\ {} \displaystyle {K_{1j}}(T,M)\le 4{\left({k^{j}}(0)\right)^{2}}{\pi ^{-1}}{T^{-1}}\underset{\{\lambda :b(\lambda )>M\}}{\int }|\lambda {|^{-1+\delta }}f(\lambda )d\lambda +\\ {} \displaystyle +{\pi ^{-1}}\underset{\{\lambda :b(\lambda )>M\}}{\int }|\lambda {|^{-1+\delta }}f(\lambda ){d_{jT}^{-2}}(\theta ){\left|{\widetilde{g}_{T}^{j}}(\lambda ,\theta )\right|^{2}}d\lambda =\\ {} \displaystyle ={K_{1j}^{(1)}}(T,M)+{K_{1j}^{(2)}}(T,M).\end{array}\]
Let $\varepsilon >0$ be an arbitrary fixed number. Since the integral in ${K_{1j}^{(1)}}(T,M)$ is majorized by the spectral moment ${\textstyle\int _{\mathbb{R}}}|\lambda {|^{-1+\delta }}f(\lambda )d\lambda <\infty $, for $T>{T_{0}}$ we have ${K_{1j}^{(1)}}(T,M)\le \varepsilon /4$.
Choose ${\bar{\lambda }_{1}}\ge {\lambda _{1}}$ such that $b({\bar{\lambda }_{1}})\ge {\max _{\lambda \in [0,{\lambda _{1}}]}}b(\lambda )$, where ${\lambda _{1}}$ is the number from condition RN1(ii). Let also $\Lambda >0$ be a number such that $M=b(\Lambda )>b({\bar{\lambda }_{1}})$. In this case $\{\lambda :b(\lambda )>M\}=\{\lambda :|\lambda |>\Lambda \}$, and if $\Lambda \ge {\lambda _{0}}$, then for $T>{T_{0}}$ from condition RN1(i) we get
\[\begin{array}{l}\displaystyle {K_{1j}^{(2)}}(T,M)={\pi ^{-1}}\underset{\{\lambda :|\lambda |>\Lambda \}}{\int }|\lambda {|^{-1+\delta }}f(\lambda ){d_{jT}^{-2}}(\theta )|{\tilde{g}_{T}^{j}}(\lambda ,\theta ){|^{2}}d\lambda \le \\ {} \displaystyle \le {\pi ^{-1}}{h_{j}}(\theta )\underset{\{\lambda :|\lambda |>\Lambda \}}{\int }|\lambda {|^{-1+\delta }}f(\lambda )d\lambda .\end{array}\]
Now, by the choice of Λ, that is, by the choice of the cut-off level $M=b(\Lambda )$, we get the inequality ${K_{1j}^{(2)}}(T,M)<\varepsilon /4$. Increasing Λ if necessary, from (21) we obtain ${K_{3j}}(M)<\varepsilon /4$. Increasing ${T_{0}}$ if necessary as well, we obtain from (22) ${K_{2j}}(T,M)<\varepsilon /4$.  □

4 Asymptotic normality of the residual correlogram

In this section, we formulate and prove the CLT for the normalized residual correlogram $\left\{{X_{T}}(z),\hspace{2.5pt}z\in [0,H]\right\}$ in the Banach space of continuous functions $C\left([0,H]\right)$ with uniform norm.
Theorem 2.
If the conditions N1, N2, R1–R4, AN, and RN are satisfied, then
(23)
\[ {X_{T}}(\cdot )={T^{1/2}}\left({\text{B}_{T}}\left(\cdot ,{\widehat{\theta }_{T}}\right)-\text{B}(\cdot )\right)\stackrel{\mathcal{D}}{\longrightarrow }Y,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty .\]
In view of Theorem 1 and Lemma 1 of Section 2, to obtain (23) it is sufficient to prove (11). So, taking into account the expressions (5)–(8), the proof of Theorem 2 consists of three lemmas.
Lemma 4.
If conditions R1, R2(i) for $r=0$, and AN are fulfilled, then
\[ {T^{-1/2}}\left\| {I_{1T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty .\]
Proof.
Obviously, by the Cauchy–Schwarz inequality, condition R1, and Lemma 2,
\[\begin{array}{l}\displaystyle {T^{-1/2}}\left\| {I_{1T}}\right\| \le {T^{-1/2}}{\Phi _{T}^{1/2}}({\widehat{\theta }_{T}},{\theta ^{0}}){\Phi _{T+H}^{1/2}}({\widehat{\theta }_{T}},{\theta ^{0}})\le \\ {} \displaystyle \le {k_{0}}\left(\underset{1\le i\le q}{\max }{d_{i,T+H}}({\theta ^{0}}){d_{iT}^{-1}}({\theta ^{0}})\right)\left\| {T^{-1/2}}{d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})\right\| \cdot \left\| {d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})\right\| ,\end{array}\]
and ${T^{-1/2}}\left\| {I_{1T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty $, due to the condition AN.  □
We will use the notation ${\alpha _{iT}}={d_{iT}}({\theta ^{0}})({\widehat{\theta }_{iT}}-{\theta _{i}^{0}})$, $i=\overline{1,q}$.
Lemma 5.
Under conditions N1, R2(ii), R2(iii), R4, RN, and AN
\[ {T^{-1/2}}\left\| {I_{2T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty .\]
Proof.
Apply the Taylor formula to the integral ${T^{-1/2}}{I_{2T}}$ and write
(24)
\[\begin{array}{l}\displaystyle {T^{-1/2}}{I_{2T}}={\sum \limits_{j=1}^{q}}{d_{jT}^{-1}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\varepsilon (t+z){g_{j}}(t,{\theta ^{0}})dt\left({T^{-1/2}}{\alpha _{jT}}\right)+\\ {} \displaystyle +\frac{1}{2}{\sum \limits_{i,j=1}^{q}}{d_{iT}^{-1}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}})\left({\underset{0}{\overset{T}{\int }}}\varepsilon (t+z){g_{ij}}(t,{\theta _{T}^{\ast }})dt\right){\alpha _{iT}}\left({T^{-1/2}}{\alpha _{jT}}\right)=\\ {} \displaystyle ={\sum \nolimits_{1,T}}(z)+\frac{1}{2}{\sum \nolimits_{2,T}}(z),\hspace{2.5pt}\hspace{2.5pt}{\theta _{T}^{\ast }}={\theta ^{0}}+\eta \left({\widehat{\theta }_{T}}-{\theta ^{0}}\right),\hspace{2.5pt}\eta \in (0,1)\hspace{2.5pt}\mathrm{a}.\mathrm{s}.\end{array}\]
Consider sample continuous Gaussian stochastic processes
\[ {\xi _{jT}}(z)={d_{jT}^{-1}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\varepsilon (t+z){g_{j}}(t,{\theta ^{0}})dt,\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\hspace{2.5pt}\hspace{2.5pt}T>0,\hspace{2.5pt}\hspace{2.5pt}j=\overline{1,q}.\]
Under condition R4, as $T\to \infty $,
(25)
\[\begin{array}{l}\displaystyle {\textit{B}_{jT}}({z_{1}}-{z_{2}})=\textit{E}{\xi _{jT}}({z_{1}}){\xi _{jT}}({z_{2}})=\\ {} \displaystyle ={d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T}{\int }}}\textit{B}(t-s+{z_{1}}-{z_{2}}){g_{j}}(t,{\theta ^{0}}){g_{j}}(s,{\theta ^{0}})dtds=\\ {} \displaystyle =2\pi \underset{\mathbb{R}}{\int }{e^{i\lambda ({z_{1}}-{z_{2}})}}f(\lambda ){\mu _{T}^{jj}}(d\lambda ,{\theta ^{0}})\longrightarrow 2\pi \underset{\mathbb{R}}{\int }\cos \lambda ({z_{1}}-{z_{2}})f(\lambda ){\mu ^{jj}}(d\lambda ,{\theta ^{0}})=\\ {} \displaystyle ={\textit{B}_{j}}({z_{1}}-{z_{2}}),\hspace{2.5pt}\hspace{2.5pt}{z_{1}},{z_{2}}\in [0,H].\end{array}\]
Thus all finite-dimensional distributions of the stationary Gaussian processes $\left\{{\xi _{jT}}(z),\hspace{2.5pt}z\in [0,H]\right\}$ converge to the corresponding finite-dimensional distributions of the stationary Gaussian processes ${\xi _{j}}=\left\{{\xi _{j}}(z),\hspace{2.5pt}z\in [0,H]\right\}$ with covariance functions ${\textit{B}_{j}}(z)$, $z\in [0,H]$, $j=\overline{1,q}$. We assume that the processes ${\xi _{j}},\hspace{2.5pt}j=\overline{1,q}$, are separable.
Since by condition RN for some $\delta \in (0,1]$
\[ {k_{j}}(\delta ,{\theta ^{0}})=\underset{\mathbb{R}}{\int }|\lambda {|^{1+\delta }}f(\lambda ){\mu ^{jj}}(d\lambda ;{\theta ^{0}})<\infty ,\hspace{2.5pt}j=\overline{1,q},\]
then, since $1-\cos x\le {2^{-\delta }}|x{|^{1+\delta }}$, $x\in \mathbb{R}$,
\[\begin{array}{l}\displaystyle \textit{E}{\left({\xi _{j}}({z_{1}})-{\xi _{j}}({z_{2}})\right)^{2}}=2\left({\textit{B}_{j}}(0)-{\textit{B}_{j}}({z_{1}}-{z_{2}})\right)\le \\ {} \displaystyle \le {2^{2-\delta }}\pi {k_{j}}(\delta ,{\theta ^{0}})|{z_{1}}-{z_{2}}{|^{1+\delta }},\hspace{2.5pt}\hspace{2.5pt}{z_{1}},{z_{2}}\in [0,H].\end{array}\]
According to the Kolmogorov theorem (see, for example, Gikhman and Skorokhod [5]) the processes ${\xi _{j}}$ are sample continuous. Moreover, under condition RN for $T>{T_{0}}$
\[\begin{array}{l}\displaystyle \textit{E}{\left({\xi _{jT}}({z_{1}})-{\xi _{jT}}({z_{2}})\right)^{2}}=2\left({\textit{B}_{jT}}(0)-{\textit{B}_{jT}}({z_{1}}-{z_{2}})\right)\le \\ {} \displaystyle \le {2^{2-\delta }}\pi \left({k_{j}}(\delta ,{\theta ^{0}})+1\right)|{z_{1}}-{z_{2}}{|^{1+\delta }}.\end{array}\]
So, ${\xi _{jT}}\stackrel{\mathcal{D}}{\longrightarrow }{\xi _{j}}$, as $T\to \infty $, $j=\overline{1,q}$, in the space $C([0,H])$, and (see again [5]) for every functional ℓ continuous on $C([0,H])$ the distribution of $\ell ({\xi _{jT}})$ converges, as $T\to \infty $, to the distribution of $\ell ({\xi _{j}})$. Using the same notation for weak convergence of random variables, we obtain, in particular, $\left\| {\xi _{jT}}\right\| \stackrel{\mathcal{D}}{\longrightarrow }\left\| {\xi _{j}}\right\| $, $j=\overline{1,q}$, and (see (24))
\[ \| {\sum \nolimits_{1,T}}\| \le {\sum \limits_{j=1}^{q}}\left\| {\xi _{jT}}\right\| \left({T^{-1/2}}|{\alpha _{jT}}|\right)\stackrel{P}{\longrightarrow }0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Let ${s_{T}^{\ast }}={T^{-1}}{\textstyle\int _{0}^{T}}{\varepsilon ^{2}}(t)dt$. Then, since $|\varepsilon (t)|\le (1+{\varepsilon ^{2}}(t))/2$,
(26)
\[ {\underset{0}{\overset{2T}{\int }}}|\varepsilon (t)|dt\le T(1+{s_{2T}^{\ast }}).\]
Note that $\| {d_{T}}({\theta ^{0}})({\theta _{T}^{\ast }}-{\theta ^{0}})\| \le \| {d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})\| $, and if the events
\[ {A_{T}}(r)=\left\{\left\| {d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})\right\| \le r\right\},\hspace{2.5pt}{A_{T}^{\ast }}=\left\{{s_{2T}^{\ast }}\le 1+\textit{B}(0)\right\}\]
occur, then
\[ \underset{t\in [0,T]}{\sup }\left|{g_{ij}}(t,{\theta _{T}^{\ast }})\right|\le \underset{t\in [0,T],u\in {V^{c}}(r)\cap {U_{T}}({\theta ^{0}})}{\sup }\left|{h_{ij}}(t,u)\right|,\]
and by the conditions R2(ii), R2(iii) for the norm of any term ${\textstyle\sum _{2,T}^{ij}}$ of the sum ${\textstyle\sum _{2,T}}$ we get the upper bound
(27)
\[\begin{array}{l}\displaystyle \left\| {\sum \nolimits_{2,T}^{ij}}\right\| \le \left({T^{1/2}}{d_{iT}^{-1}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}}){d_{ij,T}}({\theta ^{0}})\right)(1+{s_{2T}^{\ast }})\times \\ {} \displaystyle \times \left({T^{1/2}}{d_{ij,T}^{-1}}({\theta ^{0}})\underset{t\in [0,T],u\in {V^{c}}(r)\cap {U_{T}}({\theta ^{0}})}{\sup }\left|{h_{ij}}(t,u)\right||{\alpha _{iT}}|({T^{-1/2}}|{\alpha _{jT}}|)\right)\le \\ {} \displaystyle \le {r^{2}}{k^{ij}}(r){\widetilde{k}^{ij}}(2+\textit{B}(0)){T^{-1/2}}.\end{array}\]
Under condition AN, for any $\delta >0$ it is possible to find $r>0$ such that for $T>{T_{0}}(\delta )$
(28)
\[ P\left\{\overline{{A_{T}}(r)}\right\}<\delta .\]
On the other hand, by Isserlis’ theorem (see, for example, [14])
(29)
\[\begin{array}{l}\displaystyle P\left\{\overline{{A_{T}^{\ast }}}\right\}\le \textit{E}{\left({s_{2T}^{\ast }}-\textit{B}(0)\right)^{2}}=2{(2T)^{-2}}{\underset{0}{\overset{2T}{\int }}}{\underset{0}{\overset{2T}{\int }}}{B^{2}}(t-s)dtds\le \\ {} \displaystyle \le \| B{\| _{2}^{2}}{T^{-1}},\hspace{2.5pt}\| B{\| _{2}}={\left(\underset{\mathbb{R}}{\int }{B^{2}}(t)dt\right)^{1/2}}<\infty .\end{array}\]
From the inequalities (27)–(29) it follows that
\[ \left\| {\sum \nolimits_{2,T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
 □
Lemma 6.
Under conditions N1, R2–R4, RN, and AN
\[ {T^{-1/2}}\left\| {I_{3T}}\right\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}as\hspace{2.5pt}T\to \infty .\]
Proof.
We write
\[\begin{array}{l}\displaystyle {T^{-1/2}}{I_{3T}}(z)={\sum \limits_{j=1}^{q}}{d_{jT}^{-1}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\varepsilon (t){g_{j}}(t+z,{\theta ^{0}})dt\left({T^{-1/2}}{\alpha _{jT}}\right)+\\ {} \displaystyle +\frac{1}{2}{\sum \limits_{i,j=1}^{q}}{d_{iT}^{-1}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}})\left({\underset{0}{\overset{T}{\int }}}\varepsilon (t){g_{ij}}(t+z,{\theta _{T}^{\ast }})dt\right){\alpha _{iT}}\left({T^{-1/2}}{\alpha _{jT}}\right)=\\ {} \displaystyle ={\sum \nolimits_{3,T}}(z)+{\sum \nolimits_{4,T}}(z),\end{array}\]
where the random vector ${\theta _{T}^{\ast }}$ is as in (24).
Consider sample continuous Gaussian processes
\[ {\eta _{jT}}(z)={d_{jT}^{-1}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\varepsilon (t){g_{j}}(t+z,{\theta ^{0}})dt,\hspace{2.5pt}\hspace{2.5pt}z\in [0,H],\hspace{2.5pt}\hspace{2.5pt}T>0,\hspace{2.5pt}\hspace{2.5pt}j=\overline{1,q}.\]
For ${z_{1}},{z_{2}}\in [0,H]$ we have
(30)
\[\begin{array}{l}\displaystyle \textit{E}{\eta _{jT}}({z_{1}}){\eta _{jT}}({z_{2}})={d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{T}{\int }}}\textit{B}(t-s){g_{j}}(t+{z_{1}},{\theta ^{0}}){g_{j}}(s+{z_{2}},{\theta ^{0}})dtds=\\ {} \displaystyle ={d_{jT}^{-2}}({\theta ^{0}}){\underset{{z_{1}}}{\overset{T+{z_{1}}}{\int }}}{\underset{{z_{2}}}{\overset{T+{z_{2}}}{\int }}}\textit{B}(t-s+{z_{2}}-{z_{1}}){g_{j}}(t,{\theta ^{0}}){g_{j}}(s,{\theta ^{0}})dtds,\end{array}\]
and the double integral in (30) can be symbolically written as
\[ {\underset{{z_{1}}}{\overset{T+{z_{1}}}{\int }}}{\underset{{z_{2}}}{\overset{T+{z_{2}}}{\int }}}=\left({\underset{0}{\overset{T}{\int }}}+{\underset{T}{\overset{T+{z_{1}}}{\int }}}-{\underset{0}{\overset{{z_{1}}}{\int }}}\right)\left({\underset{0}{\overset{T}{\int }}}+{\underset{T}{\overset{T+{z_{2}}}{\int }}}-{\underset{0}{\overset{{z_{2}}}{\int }}}\right).\]
Bound the integral
\[\begin{array}{l}\displaystyle \left|{d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}{\underset{T}{\overset{T+{z_{2}}}{\int }}}\right|\le {\left({\underset{T}{\overset{T+{z_{2}}}{\int }}}{\underset{0}{\overset{T}{\int }}}{\textit{B}^{2}}(t-s+{z_{2}}-{z_{1}})dtds\right)^{1/2}}\times \\ {} \displaystyle \times {\left({d_{jT}^{-2}}({\theta ^{0}}){\underset{T}{\overset{T+{z_{2}}}{\int }}}{g_{j}^{2}}(s,{\theta ^{0}})ds\right)^{1/2}}\le \\ {} \displaystyle \le {H^{1/2}}\| \textit{B}{\| _{2}}{\left({d_{jT}^{-2}}({\theta ^{0}})\left({d_{j,T+H}^{2}}({\theta ^{0}})-{d_{jT}^{2}}({\theta ^{0}})\right)\right)^{1/2}}\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\end{array}\]
due to Lemma 2. Similarly ${d_{jT}^{-2}}({\theta ^{0}}){\textstyle\int _{T}^{T+{z_{1}}}}{\textstyle\int _{0}^{T}}\longrightarrow 0$, as $T\to \infty $. Also it is easy to see that
\[\begin{array}{l}\displaystyle \left|{d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}{\underset{0}{\overset{{z_{2}}}{\int }}}\right|\le {H^{1/2}}\| \textit{B}{\| _{2}}\hspace{2.5pt}{d_{jH}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}})\longrightarrow 0,\hspace{2.5pt}\hspace{2.5pt}{d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{{z_{1}}}{\int }}}{\underset{0}{\overset{T}{\int }}}\longrightarrow 0,\\ {} \displaystyle \left|{d_{jT}^{-2}}({\theta ^{0}}){\underset{T}{\overset{T+{z_{1}}}{\int }}}{\underset{T}{\overset{T+{z_{2}}}{\int }}}\right|\le H\textit{B}(0){d_{jT}^{-2}}({\theta ^{0}}){\underset{T}{\overset{T+H}{\int }}}{g_{j}^{2}}(t,{\theta ^{0}})dt\longrightarrow 0,\\ {} \displaystyle \left|{d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{{z_{1}}}{\int }}}{\underset{T}{\overset{T+{z_{2}}}{\int }}}\right|\le H\textit{B}(0){d_{jH}}({\theta ^{0}}){d_{jT}^{-1}}({\theta ^{0}}){\left({d_{jT}^{-2}}({\theta ^{0}}){\underset{T}{\overset{T+H}{\int }}}\hspace{-0.1667em}{g_{j}^{2}}(s,{\theta ^{0}})ds\right)^{1/2}}\longrightarrow 0,\\ {} \displaystyle {d_{jT}^{-2}}({\theta ^{0}}){\underset{T}{\overset{T+{z_{1}}}{\int }}}{\underset{0}{\overset{{z_{2}}}{\int }}}\longrightarrow 0,\hspace{2.5pt}\hspace{2.5pt}{d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{{z_{1}}}{\int }}}{\underset{0}{\overset{{z_{2}}}{\int }}}\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\end{array}\]
Thus for any ${z_{1}},{z_{2}}\in [0,H]$ and $j=\overline{1,q}$,
\[ \textit{E}{\eta _{jT}}({z_{1}}){\eta _{jT}}({z_{2}})={\textit{B}_{jT}}({z_{1}}-{z_{2}})+{o_{jT}}(1),\hspace{2.5pt}\hspace{2.5pt}{o_{jT}}(1)\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty ,\]
and all finite-dimensional distributions of the Gaussian processes $\{{\eta _{jT}}(z),\hspace{2.5pt}z\hspace{0.1667em}\in \hspace{0.1667em}[0,H]\}$, $j=\overline{1,q}$, converge, as $T\to \infty $, to the corresponding finite-dimensional distributions of the stationary Gaussian processes ${\xi _{j}},\hspace{2.5pt}j=\overline{1,q}$, with covariance functions (25).
Besides, under conditions N1(ii) and R3
\[\begin{array}{l}\displaystyle \textit{E}{\big({\eta _{jT}}({z_{1}})-{\eta _{jT}}({z_{2}})\big)^{2}}={d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\hspace{-0.1667em}{\underset{0}{\overset{T}{\int }}}\textit{B}(t-s)\hspace{-0.1667em}\Big({g_{j}}(t+{z_{1}},{\theta ^{0}})-{g_{j}}(t+{z_{2}},{\theta ^{0}})\Big)\times \\ {} \displaystyle \times \left({g_{j}}(s+{z_{1}},{\theta ^{0}})-{g_{j}}(s+{z_{2}},{\theta ^{0}})\right)dtds\le \\ {} \displaystyle \le {d_{jT}^{-2}}({\theta ^{0}}){\underset{0}{\overset{T}{\int }}}\hspace{-0.1667em}{\underset{0}{\overset{T}{\int }}}|\textit{B}(t-s)|{\left({g_{j}}(t+{z_{1}},{\theta ^{0}})-{g_{j}}(t+{z_{2}},{\theta ^{0}})\right)^{2}}dtds\le \\ {} \displaystyle \le \| \textit{B}{\| _{1}}{d_{jT}^{-2}}({\theta ^{0}}){\Psi _{jT}}({z_{1}},{z_{2}};{\theta ^{0}})\le {\overline{k}_{j}}\| \textit{B}{\| _{1}}|{z_{1}}-{z_{2}}{|^{2}},\end{array}\]
${z_{1}},{z_{2}}\in [0,H]$, $j=\overline{1,q}$, $\| \textit{B}{\| _{1}}={\textstyle\int _{\mathbb{R}}}|\textit{B}(t)|dt<\infty $.
We have proved that $\| {\eta _{jT}}\| \stackrel{\mathcal{D}}{\longrightarrow }\| {\xi _{j}}\| $, $j=\overline{1,q}$, and from the condition AN it follows
\[ \| {\sum \nolimits_{3,T}}\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Note further that, similarly to (26), ${\textstyle\int _{0}^{T}}|\varepsilon (t)|dt\le \frac{T}{2}(1+{s_{T}^{\ast }})$, and if the events ${A_{T}}(r)$, ${\textit{B}_{T}^{\ast }}=\left\{{s_{T}^{\ast }}\le 1+\textit{B}(0)\right\}$ occur, then the norm of any term ${\textstyle\sum _{4,T}^{ij}}$ of the sum ${\textstyle\sum _{4,T}}$ can be dominated in the following way (compare with (27)):
(31)
\[\begin{array}{l}\displaystyle \left\| {\sum \nolimits_{4,T}^{ij}}\right\| \le \frac{1}{2}\left({(T+H)^{1/2}}{d_{i,T+H}^{-1}}({\theta ^{0}}){d_{j,T+H}^{-1}}({\theta ^{0}}){d_{ij,T+H}}({\theta ^{0}})\right)\times \\ {} \displaystyle \times \left({d_{iT}^{-1}}({\theta ^{0}}){d_{i,T+H}}({\theta ^{0}})\right)\left({d_{jT}^{-1}}({\theta ^{0}}){d_{j,T+H}}({\theta ^{0}})\right)\times \\ {} \displaystyle \times {(T+H)^{1/2}}{d_{ij,T+H}^{-1}}({\theta ^{0}})\underset{t\in [0,T+H],u\in {V^{c}}(r)\cap {U_{T}}({\theta ^{0}})}{\sup }\left|{h_{ij}}(t,u)\right|\times \\ {} \displaystyle \times T{(T+H)^{-1}}(1+{s_{T}^{\ast }})\left|{\alpha _{iT}}\right|\left({T^{-1/2}}\left|{\alpha _{jT}}\right|\right)\le \\ {} \displaystyle \le \frac{1}{2}{(1+\beta )^{2}}{r^{2}}{k^{ij}}(r){\widetilde{k}^{ij}}(2+\textit{B}(0)){T^{-1/2}}\end{array}\]
for any $\beta >0$ and $T>{T_{0}}$. In addition (compare with (29)),
(32)
\[ P\left\{\overline{{\textit{B}_{T}^{\ast }}}\right\}\le 2\| \textit{B}{\| _{2}^{2}}{T^{-1}},\]
and from (28), (31), and (32) we obtain
\[ \| {\sum \nolimits_{4,T}}\| \stackrel{P}{\longrightarrow }0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
 □
Together, Lemmas 4–6 establish (11), and thus Theorem 2 is proved.
Remark 2.
In the proofs of Sections 3 and 4, condition R2(i) has been used only for $r=0$. However, this condition is used for arbitrary $r\ge 0$ in the proof of the asymptotic normality of the LSE ${\widehat{\theta }_{T}}$; see the explanation in the example below.

5 Trigonometric regression function

In this section we consider the example of the trigonometric regression function
(33)
\[ g(t,{\theta ^{0}})={\sum \limits_{k=1}^{N}}\left({A_{k}^{0}}\cos {\varphi _{k}^{0}}t+{B_{k}^{0}}\sin {\varphi _{k}^{0}}t\right),\]
where
(34)
\[ {\theta ^{0}}=({\theta _{1}^{0}},{\theta _{2}^{0}},{\theta _{3}^{0}},\dots ,{\theta _{3N-2}^{0}},{\theta _{3N-1}^{0}},{\theta _{3N}^{0}})=\left({A_{1}^{0}},{B_{1}^{0}},{\varphi _{1}^{0}},\dots ,{A_{N}^{0}},{B_{N}^{0}},{\varphi _{N}^{0}}\right),\]
${\left({C_{k}^{0}}\right)^{2}}={\left({A_{k}^{0}}\right)^{2}}+{\left({B_{k}^{0}}\right)^{2}}>0$, $k=\overline{1,N}$, ${\varphi ^{0}}=\left({\varphi _{1}^{0}},\dots ,{\varphi _{N}^{0}}\right)\in \Phi (\underline{\varphi },\overline{\varphi })$,
$\Phi (\underline{\varphi },\overline{\varphi })=\left\{\varphi =({\varphi _{1}},\dots ,{\varphi _{N}})\in {\mathbb{R}^{N}}:\hspace{2.5pt}0\le \underline{\varphi }<{\varphi _{1}}<\cdots <{\varphi _{N}}<\overline{\varphi }<+\infty \right\}$.
To apply the results obtained in the paper to the function (33), we have to modify Definition 1 of the LSE slightly. We will use the following modification of the LSE proposed by Walker [24]; see also Ivanov [12, 13]. Consider a non-decreasing system of open convex sets ${S_{T}}\subset \Phi (\underline{\varphi },\overline{\varphi })$, $T>{T_{0}}>0$, such that the true value of the unknown parameter ${\varphi ^{0}}\in {S_{T}}$, ${\lim \nolimits_{T\to \infty }}{S_{T}}=\Phi (\underline{\varphi },\overline{\varphi })$, and
(35)
\[ \underset{T\to \infty }{\lim }\underset{1\le j<k\le N,\varphi \in {S_{T}}}{\inf }T({\varphi _{k}}-{\varphi _{j}})=+\infty ,\hspace{2.5pt}\underset{T\to \infty }{\lim }\underset{\varphi \in {S_{T}}}{\inf }T{\varphi _{1}}=+\infty .\]
Definition 5.
The LSE in the Walker sense of unknown parameter (34) in the model (1) with regression function (33) is said to be any random vector
(36)
\[ {\widehat{\theta }_{T}}=\left({\widehat{A}_{1T}},{\widehat{B}_{1T}},{\widehat{\varphi }_{1T}},\dots ,{\widehat{A}_{NT}},{\widehat{B}_{NT}},{\widehat{\varphi }_{NT}}\right)\in {\Theta _{T}^{c}}\]
having the property
\[ {Q_{T}}({\widehat{\theta }_{T}})=\underset{\tau \in {\Theta _{T}^{c}}}{\min }{Q_{T}}(\tau ),\]
where ${Q_{T}}(\tau )$ is defined in (2) and ${\Theta _{T}}\subset {\mathbb{R}^{3N}}$ is such that ${A_{k}}\in \mathbb{R}$, ${B_{k}}\in \mathbb{R}$, $k=\overline{1,N}$, and $\varphi \in {S_{T}}$.
The relations (35) allow one to distinguish the parameters ${\varphi _{k}}$, $k=\overline{1,N}$, and to prove the consistency of the LSE ${\widehat{\theta }_{T}}$ in the Walker sense, see [24, 12, 13], and [18].
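In computations, the minimization in Definition 5 is conveniently carried out by profiling: for fixed frequencies the amplitudes ${A_{k}},{B_{k}}$ enter (33) linearly and are eliminated by linear least squares, after which the frequencies are found by minimizing the profiled residual sum of squares. A minimal sketch for $N=1$ on discretized observations follows (Python with NumPy/SciPy is assumed; the grid resolution, the refinement step, and the function names are implementation choices, not part of Definition 5).

import numpy as np
from scipy.optimize import minimize_scalar

def walker_lse_one_harmonic(X, t, phi_lo, phi_hi):
    # sketch of the Walker-type LSE for g(t, theta) = A cos(phi t) + B sin(phi t)
    dt = t[1] - t[0]

    def profile(phi):
        # for fixed phi the amplitudes solve a linear least-squares problem
        D = np.column_stack([np.cos(phi * t), np.sin(phi * t)])
        coef, *_ = np.linalg.lstsq(D, X, rcond=None)
        return np.sum((X - D @ coef)**2) * dt, coef      # discretized Q_T(tau)

    # coarse grid search over the admissible frequencies, then local refinement
    grid = np.linspace(phi_lo, phi_hi, 2000)
    p0 = grid[np.argmin([profile(p)[0] for p in grid])]
    res = minimize_scalar(lambda p: profile(p)[0],
                          bounds=(max(phi_lo, p0 - 0.05), min(phi_hi, p0 + 0.05)),
                          method="bounded")
    _, coef = profile(res.x)
    return coef[0], coef[1], res.x                        # A_hat, B_hat, phi_hat

For well-separated frequencies, as guaranteed by (35), the same profiling extends to $N>1$ with a 2N-column design matrix.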
Corollary 2.
Suppose the assumption (35) is satisfied for the LSE in the Walker sense of the parameters (34). Then under conditions N1 and N2 the relation (23) of Theorem 2 holds true.
Proof.
Due to the smoothness of the function (33) in all its variables, there is no need to introduce conditions on the differentiability of the function g with respect to θ on the set ${\Theta _{\gamma }}$ and with respect to t on the set $(-\gamma ,+\infty )$, as was done, out of technical necessity, in the main part of the paper.
To check the fulfillment of condition R1 for the regression function (33), we note that
\[\begin{array}{l}\displaystyle \left|{A_{k}}\cos {\varphi _{k}}t+{B_{k}}\sin {\varphi _{k}}t-{A_{k}^{0}}\cos {\varphi _{k}^{0}}t-{B_{k}^{0}}\sin {\varphi _{k}^{0}}t\right|\le \\ {} \displaystyle \le \left|{A_{k}}-{A_{k}^{0}}\right|+\left|{B_{k}}-{B_{k}^{0}}\right|+\left(\left|{A_{k}^{0}}\right|+\left|{B_{k}^{0}}\right|\right)t\left|{\varphi _{k}}-{\varphi _{k}^{0}}\right|,\hspace{2.5pt}k=\overline{1,N},\end{array}\]
and therefore
(37)
\[\begin{array}{l}\displaystyle {\Phi _{T}}(\theta ,{\theta ^{0}})\le 3N{\sum \limits_{k=1}^{N}}\left(T{\left({A_{k}}-{A_{k}^{0}}\right)^{2}}+T{\left({B_{k}}-{B_{k}^{0}}\right)^{2}}+\right.\\ {} \displaystyle \left.+\frac{1}{3}{\left(\left|{A_{k}^{0}}\right|+\left|{B_{k}^{0}}\right|\right)^{2}}{T^{3}}{\left({\varphi _{k}}-{\varphi _{k}^{0}}\right)^{2}}\right).\end{array}\]
Note that for $k=\overline{1,N}$
(38)
\[ {T^{-1}}{d_{3k-2,T}^{2}}({\theta ^{0}}),{T^{-1}}{d_{3k-1,T}^{2}}({\theta ^{0}})\longrightarrow \frac{1}{2},\hspace{2.5pt}{T^{-3}}{d_{3k,T}^{2}}({\theta ^{0}})\longrightarrow \frac{1}{6}{{C_{k}^{0}}^{2}},\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Thus for any $\varepsilon >0$ and $T>{T_{0}}={T_{0}}(\varepsilon )$ from (38) it follows
(39)
\[ T{d_{3k-2,T}^{-2}}({\theta ^{0}})<2+\varepsilon ,\hspace{2.5pt}T{d_{3k-1,T}^{-2}}({\theta ^{0}})<2+\varepsilon ,\hspace{2.5pt}{T^{3}}{d_{3k,T}^{-2}}({\theta ^{0}})<6{({C_{k}^{0}})^{-2}}+\varepsilon .\]
Increasing ${T_{0}}$, if necessary, we obtain from (37) and (39)
(40)
\[\begin{array}{l}\displaystyle {\Phi _{T}}(\theta ,{\theta ^{0}})\le 3N{\sum \limits_{k=1}^{N}}\left((2+\varepsilon ){d_{3k-2,T}^{2}}({\theta ^{0}}){\left({A_{k}}-{A_{k}^{0}}\right)^{2}}\right.+\\ {} \displaystyle +(2+\varepsilon ){d_{3k-1,T}^{2}}({\theta ^{0}}){\left({B_{k}}-{B_{k}^{0}}\right)^{2}}+\\ {} \displaystyle +\left.\left(\frac{2{\left(\left|{A_{k}^{0}}\right|+\left|{B_{k}^{0}}\right|\right)^{2}}}{{{C_{k}^{0}}^{2}}}+\varepsilon \right){d_{3k,T}^{2}}({\theta ^{0}}){\left({\varphi _{k}}-{\varphi _{k}^{0}}\right)^{2}}\right).\end{array}\]
So, as follows from (40), for any ${\theta ^{0}}\in \Theta $ and $\varepsilon >0$ there exists ${T_{0}}>0$ such that for $T>{T_{0}}$ the inequality (13) of condition R1 is satisfied with any constant ${k_{0}}\ge 12N+\varepsilon $.
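The limits (38) underlying these bounds are easy to confirm numerically; a short sketch follows (assumed discretization of one harmonic, with arbitrary test values of the parameters; NumPy assumed).

import numpy as np

A, B, phi = 1.0, 2.0, 3.0                      # arbitrary test values
T, n = 500.0, 500000                           # illustrative horizon and grid
t = np.linspace(0.0, T, n, endpoint=False)
dt = t[1] - t[0]

g_A   = np.cos(phi * t)                        # g_{3k-2}(t, theta)
g_B   = np.sin(phi * t)                        # g_{3k-1}(t, theta)
g_phi = -A * t * np.sin(phi * t) + B * t * np.cos(phi * t)   # g_{3k}(t, theta)

d2 = lambda g: np.sum(g**2) * dt               # d_{iT}^2(theta), cf. Section 3
print(d2(g_A) / T, d2(g_B) / T)                # both close to 1/2, cf. (38)
print(d2(g_phi) / T**3, (A**2 + B**2) / 6.0)   # close to (C_k^0)^2 / 6, cf. (38)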
In the conditions R2(i) and R2(ii), instead of the sets ${U_{T}}({\theta ^{0}})$, one should take the sets ${\widetilde{U}_{T}}({\theta ^{0}})={d_{T}}({\theta ^{0}})\left({\Theta _{T}^{c}}-{\theta ^{0}}\right)$; the verification of conditions R2 for the function (33) is routine.
Let us check condition R3 (see (12), (14)). Obviously, for $k=\overline{1,N}$
(41)
\[ \begin{array}{c}{g^{\prime }_{3k-2}}(t,\theta )=-{\varphi _{k}}\sin {\varphi _{k}}t,\hspace{2.5pt}{g^{\prime }_{3k-1}}(t,\theta )={\varphi _{k}}\cos {\varphi _{k}}t,\\ {} {g^{\prime }_{3k}}(t,\theta )=-{A_{k}}\sin {\varphi _{k}}t-{A_{k}}t{\varphi _{k}}\cos {\varphi _{k}}t+{B_{k}}\cos {\varphi _{k}}t-{B_{k}}t{\varphi _{k}}\sin {\varphi _{k}}t.\end{array}\]
Let ${z_{1}}<{z_{2}}$. Then, by the mean value theorem,
\[ \left|{g_{3k}}(t+{z_{1}},\theta )-{g_{3k}}(t+{z_{2}},\theta )\right|=\left|{g^{\prime }_{3k}}({t^{\ast }},\theta )\right||{z_{1}}-{z_{2}}|,\]
where ${t^{\ast }}=t+{z_{1}}+\nu ({z_{2}}-{z_{1}})\le t+H$ for some $\nu \in (0,1)$.
Using formula (41), we obtain
(42)
\[\begin{array}{l}\displaystyle {\left|{g^{\prime }_{3k}}({t^{\ast }},{\theta ^{0}})\right|^{2}}\le 2{{A_{k}^{0}}^{2}}{\left|\sin {\varphi _{k}^{0}}{t^{\ast }}+{\varphi _{k}^{0}}{t^{\ast }}\cos {\varphi _{k}^{0}}{t^{\ast }}\right|^{2}}+\\ {} \displaystyle +2{{B_{k}^{0}}^{2}}{\left|\cos {\varphi _{k}^{0}}{t^{\ast }}-{\varphi _{k}^{0}}{t^{\ast }}\sin {\varphi _{k}^{0}}{t^{\ast }}\right|^{2}}\le 2{{C_{k}^{0}}^{2}}{\left(1+\overline{\varphi }(H+t)\right)^{2}}.\end{array}\]
From (39) and (42) it follows that for any $\varepsilon >0$, ${\theta ^{0}}\in \Theta $, and $T>{T_{0}}$ the inequalities (14) hold with constants ${\overline{k}_{i}}\ge 4{\overline{\varphi }^{2}}+\varepsilon $ if $i=3k$, $k=\overline{1,N}$. Similarly we obtain the inequalities (14) with constants ${\overline{k}_{i}}\ge 2{\overline{\varphi }^{2}}+\varepsilon $ if $i=3k-2,3k-1$, $k=\overline{1,N}$.
Passing to condition R4, we note that the spectral measures of trigonometric regression were studied by Whittle [25], Walker [24], Hannan [7], Ivanov [12], and Ivanov et al. [18]. For the regression function (33) the spectral measure $\mu (d\lambda ;{\theta ^{0}})$, ${\theta ^{0}}\in \Theta $, is a block-diagonal matrix $\mathrm{diag}\left({M_{k}}({\theta ^{0}}),\hspace{2.5pt}k=\overline{1,N}\right)$, where
(43)
\[\begin{array}{l}\displaystyle {M_{k}}({\theta ^{0}})=\left[\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{\delta _{k}}& i{\rho _{k}}& {\overline{\beta }_{k}}\\ {} -i{\rho _{k}}& {\delta _{k}}& {\overline{\gamma }_{k}}\\ {} {\beta _{k}}& {\gamma _{k}}& {\delta _{k}}\end{array}\right],\hspace{2.5pt}\\ {} \displaystyle {\beta _{k}}=\frac{\sqrt{3}}{2{C_{k}^{0}}}({B_{k}^{0}}{\delta _{k}}+i{A_{k}^{0}}{\rho _{k}}),\hspace{2.5pt}{\gamma _{k}}=\frac{\sqrt{3}}{2{C_{k}^{0}}}(-{A_{k}^{0}}{\delta _{k}}+i{B_{k}^{0}}{\rho _{k}}),\end{array}\]
with the measure ${\delta _{k}}={\delta _{k}}(d\lambda )$ and the signed measure ${\rho _{k}}={\rho _{k}}(d\lambda )$ concentrated at the points $\pm {\varphi _{k}^{0}}$, $k=\overline{1,N}$. Moreover, ${\delta _{k}}\left(\left\{\pm {\varphi _{k}^{0}}\right\}\right)=\frac{1}{2}$, ${\rho _{k}}\left(\left\{\pm {\varphi _{k}^{0}}\right\}\right)=\pm \frac{1}{2}$, $k=\overline{1,N}$. On the other hand,
(44)
\[\begin{array}{l}\displaystyle \mu (\mathbb{R};{\theta ^{0}})={\underset{-\infty }{\overset{\infty }{\int }}}\mu (d\lambda ;{\theta ^{0}})=J({\theta ^{0}})=\mathrm{diag}\left({J_{k}}({\theta ^{0}}),\hspace{2.5pt}k=\overline{1,N}\right),\\ {} \displaystyle {J_{k}}({\theta ^{0}})=\left[\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}1& 0& \frac{\sqrt{3}}{2}{B_{k}^{0}}{({C_{k}^{0}})^{-1}}\\ {} 0& 1& -\frac{\sqrt{3}}{2}{A_{k}^{0}}{({C_{k}^{0}})^{-1}}\\ {} \frac{\sqrt{3}}{2}{B_{k}^{0}}{({C_{k}^{0}})^{-1}}& -\frac{\sqrt{3}}{2}{A_{k}^{0}}{({C_{k}^{0}})^{-1}}& 1\end{array}\right].\end{array}\]
Since $\det {J_{k}}=\frac{1}{4}$, the matrix (44) is positive definite. In practice, the components of the matrix-valued measure $\mu (d\lambda ;\theta )={\left({\mu ^{jl}}(d\lambda ;\theta )\right)_{j,l=1}^{q}}$, $q=3N$ in our example, are determined from the relations
\[\begin{array}{l}\displaystyle {R_{jl}}(h,\theta )=\underset{T\to \infty }{\lim }{d_{jT}^{-1}}(\theta ){d_{lT}^{-1}}(\theta ){\underset{0}{\overset{T}{\int }}}{g_{j}}(t+h,\theta ){g_{l}}(t,\theta )dt=\\ {} \displaystyle =\underset{\mathbb{R}}{\int }{e^{i\lambda h}}{\mu ^{jl}}(d\lambda ;\theta ),\hspace{2.5pt}h\in \mathbb{R},\end{array}\]
where it is supposed that the matrix function ${\left({R_{jl}}(h;\theta )\right)_{j,l=1}^{q}}$ is continuous at $h=0$.
As for the fulfillment of condition AN for the trigonometric regression function (33), in the paper Ivanov et al. [18] it is shown, using the relations (38), that the normalized LSE in the Walker sense
\[\begin{array}{c}\left({T^{1/2}}\left({\widehat{A}_{1T}}-{A_{1}^{0}}\right),{T^{1/2}}\left({\widehat{B}_{1T}}-{B_{1}^{0}}\right),{T^{3/2}}\left({\widehat{\varphi }_{1T}}-{\varphi _{1}^{0}}\right),\dots ,\right.\\ {} \left.{T^{1/2}}\left({\widehat{A}_{NT}}-{A_{N}^{0}}\right),{T^{1/2}}\left({\widehat{B}_{NT}}-{B_{N}^{0}}\right),{T^{3/2}}\left({\widehat{\varphi }_{NT}}-{\varphi _{N}^{0}}\right)\right)\end{array}\]
is asymptotically, as $T\to \infty $, normal $N\left(0,{\textstyle\sum _{TRIG}}\right)$, where ${\textstyle\sum _{TRIG}}$ is a block-diagonal matrix with blocks
\[ \frac{4\pi f({\varphi _{k}^{0}})}{{({C_{k}^{0}})^{2}}}\left[\begin{array}{c@{\hskip10.0pt}c@{\hskip10.0pt}c}{({A_{k}^{0}})^{2}}+4{({B_{k}^{0}})^{2}}& -3{A_{k}^{0}}{B_{k}^{0}}& -6{B_{k}^{0}}\\ {} -3{A_{k}^{0}}{B_{k}^{0}}& 4{({A_{k}^{0}})^{2}}+{({B_{k}^{0}})^{2}}& 6{A_{k}^{0}}\\ {} -6{B_{k}^{0}}& 6{A_{k}^{0}}& 12\end{array}\right],\hspace{2.5pt}k=\overline{1,N}.\]
To obtain such a result, it was first proved in [18] that the normalized estimator (36) is weakly consistent, that is, for any $r>0$
\[ P\left\{\left\| {T^{-1/2}}{d_{T}}({\theta ^{0}})({\widehat{\theta }_{T}}-{\theta ^{0}})\right\| \ge r\right\}\longrightarrow 0,\hspace{2.5pt}\mathrm{as}\hspace{2.5pt}T\to \infty .\]
Then, under a complex set of conditions on a general regression function, the asymptotic normality of the LSE of its parameters was proved. Finally, it was verified that the trigonometric regression function satisfies the specified set of conditions. It is important to note that the proofs of the asymptotic normality of the LSE in the sense of Definition 1 and in the sense of Definition 5 are the same.
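The displayed blocks can be cross-checked against the general formula of condition AN: since for the k-th block ${\textstyle\int }f\mu =f({\varphi _{k}^{0}}){J_{k}}$, one gets ${\Sigma _{LSE}}=2\pi f({\varphi _{k}^{0}}){J_{k}^{-1}}$, and rescaling by the limits (38), i.e. by $\mathrm{diag}(\sqrt{2},\sqrt{2},\sqrt{6}/{C_{k}^{0}})$, reproduces the block above. A short numerical verification sketch (arbitrary test values; NumPy assumed):

import numpy as np

A, B, fphi = 1.3, -0.7, 0.25                    # arbitrary test values
C = np.hypot(A, B)
s = np.sqrt(3.0) / 2.0

J = np.array([[1.0,        0.0,        s * B / C],
              [0.0,        1.0,       -s * A / C],
              [s * B / C, -s * A / C,  1.0]])    # J_k from (44)

D = np.diag([np.sqrt(2.0), np.sqrt(2.0), np.sqrt(6.0) / C])   # limits (38)
Sigma = D @ (2.0 * np.pi * fphi * np.linalg.inv(J)) @ D

block = (4.0 * np.pi * fphi / C**2) * np.array(
    [[A**2 + 4 * B**2, -3 * A * B,        -6 * B],
     [-3 * A * B,       4 * A**2 + B**2,   6 * A],
     [-6 * B,           6 * A,             12.0]])

print(np.allclose(Sigma, block))                # True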
It remains to check the last condition RN associated with the regression function (33). As mentioned above, under assumptions N1 and N3 condition RN follows from N1(ii) and R4. If the function $b(\lambda )$, $\lambda \in \mathbb{R}$, is not bounded, then we verify the convergence (18) using Lemma 3.
First of all, for ${\theta ^{0}}\in \Theta $, in view of (43),
\[ \underset{\mathbb{R}}{\int }|\lambda {|^{1+\delta }}f(\lambda ){\mu ^{3k-i,3k-i}}(d\lambda ;{\theta ^{0}})={({\varphi _{k}^{0}})^{1+\delta }}f({\varphi _{k}^{0}})<\infty ,\hspace{2.5pt}i=0,1,2,\hspace{2.5pt}k=\overline{1,N},\]
and RN1(iii) is true. Suppose that condition RN1(ii) is also satisfied. This can happen, for example, when outside some neighborhood of zero the spectral density $f(\lambda ),\hspace{2.5pt}\lambda \in \mathbb{R}$, behaves like the function $\frac{C}{|\lambda |{\ln ^{a}}(1+|\lambda |)},\hspace{2.5pt}a>1$.
Using formulas (41) and taking, for example, ${\lambda _{0}}=\overline{\varphi }+1$ in the calculation of the integrals (19), we see that for $|\lambda |>{\lambda _{0}}$ there are no non-integrable singularities of the form $\frac{1}{\lambda \pm {\varphi _{k}^{0}}}$, $k=\overline{1,N}$. Moreover, in the considered example a sharpened version of the inequalities (20) of condition RN1(i) holds:
\[ \underset{|\lambda |>{\lambda _{0}}}{\sup }{d_{jT}^{-2}}({\theta ^{0}}){\left|{\widetilde{g}_{T}^{j}}(\lambda ,{\theta ^{0}})\right|^{2}}\le {h_{j}}({\theta ^{0}}){T^{-1}},\hspace{2.5pt}j=\overline{1,3N},\hspace{2.5pt}{\theta ^{0}}\in \Theta .\]
 □

References

[1] 
Anderson, T.W.: The Statistical Analysis of Time Series. Wiley, New York (1971). MR0283939
[2] 
Billingsley, P.: Convergence of Probability Measures, 2nd Edition. Wiley, New York (2013). MR0233396
[3] 
Buldygin, V.V.: On the properties of an empirical correlogram of a Gaussian process with square integrable spectral density. Ukr. Math. J. 47, 1006–1021 (1995). MR1367943. https://doi.org/10.1007/BF01084897
[4] 
Buldygin, V.V., Kozachenko, Y.V.: Metric Characterization of Random Variables and Random Processes. American Mathematical Society, Providence (2000). MR1743716. https://doi.org/10.1090/mmono/188
[5] 
Gikhman, I.I., Skorokhod, A.V.: Introduction to the Theory of Random Processes. Dover Publications, Mineola, New York (1996). MR1435501
[6] 
Grenander, U.: On the estimation of regression coefficients in the case of an autocorrelated disturbance. Ann. Math. Stat. 25(2), 252–272 (1954). MR0062402. https://doi.org/10.1214/aoms/1177728784
[7] 
Hannan, E.: The estimation of frequency. J. Appl. Probab. 10, 510–519 (1973)
[8] 
Hannan, E.J.: Multiple Time Series. Wiley, New York (1970). MR0279952
[9] 
Holevo, A.S.: On the asymptotic efficient regression estimates in the case of degenerate spectrum. Theory Probab. Appl. 21, 324–333 (1976)
[10] 
Ibragimov, I.A., Rozanov, Y.A.: Gaussian Random Processes. Springer, New York (1980). MR0543837
[11] 
Ivanov, A., Kozachenko, Y., Moskvychova, K.: Large deviations of the correlogram estimator of the random noise covariance function in the nonlinear regression model. Commun. Stat. Theory Methods (in press). https://doi.org/10.1080/03610926.2020.1713369
[12] 
Ivanov, A.V.: A solution of the problem of detecting hidden periodicities. Theory Probab. Math. Stat. 20, 51–68 (1980). MR0529259
[13] 
Ivanov, A.V.: Consistency of the least squares estimator of the amplitudes and angular frequencies of a sum of harmonic oscillations in models with long-range dependence. Theory Probab. Math. Stat. 80, 61–69 (2010). MR2541952. https://doi.org/10.1090/S0094-9000-2010-00794-0
[14] 
Ivanov, A.V., Leonenko, N.N.: Statistical Analysis of Random Fields. Kluwer Academic Publishers, Dordrecht (1989). MR1009786. https://doi.org/10.1007/978-94-009-1183-3
[15] 
Ivanov, A.V., Pryhod’ko, V.V.: Asymptotic properties of Ibragimov’s estimator for a parameter of the spectral density of the random noise in a nonlinear regression model. Theory Probab. Math. Stat. 93, 51–70 (2016)
[16] 
Ivanov, A.V., Pryhod’ko, V.V.: On the Whittle estimator of the parameter of spectral density of random noise in the nonlinear regression model. Ukr. Math. J. 67, 1183–1203 (2016). MR3473712. https://doi.org/10.1007/s11253-016-1145-1
[17] 
Ivanov, A.V., Leonenko, N.N., Orlovskyi, I.V.: On the Whittle estimator for linear random noise spectral density parameter in continuous-time nonlinear regression models. Stat. Inference Stoch. Process. 23, 129–169 (2020). MR4072255. https://doi.org/10.1007/s11203-019-09206-z
[18] 
Ivanov, A.V., Leonenko, N.N., Ruiz-Medina, M.D., Zhurakovsky, B.M.: Estimation of harmonic component in regression with cyclically dependent errors. Statistics 49(1), 156–186 (2015). MR3304373. https://doi.org/10.1080/02331888.2013.864656
[19] 
Ivanov, O., Moskvychova, K.: Asymptotic expansion of the moments of correlogram estimator for the random-noise covariance function in the nonlinear regression model. Ukr. Math. J. 66(6), 884–904 (2014). MR3284595. https://doi.org/10.1007/s11253-014-0979-7
[20] 
Ivanov, O., Moskvychova, K.: Asymptotic normality of the correlogram estimator of the covariance function of a random noise in the nonlinear regression model. Theory Probab. Math. Stat. 91, 61–70 (2015). MR3364123. https://doi.org/10.1090/tpms/966
[21] 
Ivanov, O.V., Moskvychova, K.K.: Stochastic asymptotic expansion of correlogram estimator of the correlation function of random noise in nonlinear regression model. Theory Probab. Math. Stat. 90, 87–101 (2015). MR3241862. https://doi.org/10.1090/tpms/951
[22] 
Leonenko, N.N.: Limit Theorems for Random Fields with Singular Spectrum. Kluwer AP, Dordrecht (1999). MR1687092. https://doi.org/10.1007/978-94-011-4607-4
[23] 
Pfanzagl, J.: On the measurability and consistency of minimum contrast estimates. Metrika 14, 249–272 (1969). https://doi.org/10.1007/BF02613654
[24] 
Walker, A.M.: On the estimation of a harmonic component in a time series with stationary dependent residuals. Adv. Appl. Probab. 5, 217–241 (1973). MR0336943. https://doi.org/10.2307/1426034
[25] 
Whittle, P.: The simultaneous estimation of a time series harmonic components and covariance structure. Trab. Estad. Investig. Oper. 3, 43–57 (1952). MR0051487