Asymptotic normality of the residual correlogram in the continuous-time nonlinear regression model

In a continuous-time nonlinear regression model, the residual correlogram is considered as an estimator of the covariance function of the stationary Gaussian random noise. For this estimator a functional central limit theorem is proved in the space of continuous functions. The result obtained shows that the limiting sample continuous Gaussian random process coincides with the limiting process in the central limit theorem for the standard correlogram of the random noise in the specified regression model.


Introduction
Estimation of the signal parameters in the "signal + noise" observation model is a classic problem of the statistics of stochastic processes. If the signal (regression function) depends nonlinearly on the parameters, then this is a problem of nonlinear time-series regression analysis. Other problems arise when there is a need to estimate the functional characteristics of the correlated random noise in a given functional regression model. For stationary noise these can be estimation of the noise spectral density or of the covariance function. Asymptotic properties of the Whittle and Ibragimov estimators of spectral density parameters in the continuous-time nonlinear regression model were considered in Ivanov and Prykhod'ko [16,15] and Ivanov et al. [17]. Exponential bounds for the probabilities of large deviations of the estimator of the stationary Gaussian noise covariance function in a similar regression model were obtained in Ivanov et al. [11]. A stochastic asymptotic expansion and asymptotic expansions of the bias and variance of the residual correlogram in the same setting were derived in Ivanov and Moskvychova [21,19]. In both cases it is first necessary to estimate the parameters of the regression function in order to neutralize its influence, and then to use the residual periodogram to estimate spectral parameters and the residual correlogram to estimate the covariance function. The residual correlogram generalizes the notion of the averaged residual sum of squares in classical regression analysis.
However, unlike the residual sum of squares and the usual correlogram, results on the residual correlogram are not sufficiently represented in the statistical literature, except for a few theorems on discrete-time linear regression with stationary correlated observation errors (see Anderson [1], Hannan [8]). These statements were obtained using explicit expressions for the least squares estimator (LSE) of the unknown regression parameters. In the multitude of works dealing with stationary stochastic processes, the values of the processes in the correlograms are centered by their sample means, which are the LSEs of their expectations. Some random-field generalizations of such a centering can be found in Leonenko [22].
In this paper we prove a functional central limit theorem (CLT) in the space of continuous functions for the normed residual correlogram as an estimator of the covariance function of the stationary Gaussian random noise in a continuous-time nonlinear regression model. The first result of this kind was obtained in Ivanov and Moskvychova [20]. In the current paper we significantly weaken the requirements on the regression function under which the indicated CLT is true, namely, we bring them closer to the conditions of the asymptotic normality of the LSE [18]. In addition, we replace the condition of existence of a certain moment of the noise spectral density by a much weaker condition of admissibility of the weighted spectral density with respect to the spectral measure of the regression function. In the last section of the paper we apply our result to trigonometric regression.

Remark 1.
The assumption that the function g is defined in t on the domain (−γ, +∞) is of a technical nature and does not affect possible applications. This assumption makes it possible to formulate the condition RN1(i), which is used in the proof of Lemma 3.
Obviously, if B ∈ L_1(R), then the process ε has a bounded and continuous spectral density f = {f(λ), λ ∈ R}.

Definition 1. The LSE of the unknown parameter θ⁰ ∈ Θ obtained from observations of the process {X(t), t ∈ [0, T]} is said to be any random vector θ̂_T = (θ̂_{1T}, …, θ̂_{qT}) ∈ Θ^c (Θ^c is the closure of Θ in R^q) such that
\[
Q_T(\hat\theta_T)=\min_{\tau\in\Theta^c}Q_T(\tau),\qquad
Q_T(\tau)=\int_0^T\bigl(X(t)-g(t,\tau)\bigr)^2\,dt,
\tag{2}
\]
provided that the minimum in (2) is attained a.s.
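For orientation, here is a minimal numerical sketch of Definition 1 (the trend g(t, θ) = θ₁e^{−θ₂t}, the white-noise surrogate, and all tuning constants below are our illustrative assumptions, not objects from the paper): the integral Q_T(τ) of (2) is approximated by a Riemann sum and minimized over a box playing the role of Θ^c.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical regression function g(t, theta) = theta_1 * exp(-theta_2 * t);
# it is NOT the g of the paper, only a stand-in for this sketch.
def g(t, tau):
    return tau[0] * np.exp(-tau[1] * t)

def make_Q_T(t, X):
    """Riemann-sum approximation of Q_T(tau) = int_0^T (X(t) - g(t, tau))^2 dt."""
    dt = t[1] - t[0]
    def Q_T(tau):
        return np.sum((X - g(t, tau)) ** 2) * dt
    return Q_T

rng = np.random.default_rng(0)
T, n = 100.0, 10_000
t = np.linspace(0.0, T, n)
theta0 = np.array([2.0, 0.1])                  # "true" parameter theta^0
# White noise as a crude surrogate for the stationary Gaussian noise eps.
X = g(t, theta0) + rng.normal(scale=0.5, size=n)

Q_T = make_Q_T(t, X)
# Box constraints play the role of the compact closure Theta^c.
res = minimize(Q_T, x0=np.array([1.0, 0.5]),
               bounds=[(0.1, 10.0), (0.01, 1.0)])
theta_hat = res.x          # an approximate LSE in the sense of Definition 1
```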
The existence of at least one such vector follows from the results of Pfanzagl [23].
As an estimator of B we take the residual correlogram built from the residuals X(t) − g(t, θ̂_T), namely
\[
B_T(z,\hat\theta_T)=T^{-1}\int_0^T\bigl(X(t+z)-g(t+z,\hat\theta_T)\bigr)\bigl(X(t)-g(t,\hat\theta_T)\bigr)\,dt,\quad z\in[0,H],
\tag{3}
\]
where H > 0 is some fixed number. In particular, B_T(0, θ̂_T) = T^{-1} Q_T(θ̂_T) is the LSE of the variance B(0) of the stochastic process ε. On the other hand,
\[
B_T(z)=T^{-1}\int_0^T\varepsilon(t+z)\varepsilon(t)\,dt,\quad z\in[0,H],
\tag{4}
\]
is the correlogram of the process ε. From condition N1 it follows that the integrals (3) and (4) can be considered as Riemann integrals over the sample paths of the corresponding processes, and B_T(z, θ̂_T), B_T(z), z ∈ [0, H], are sample continuous stochastic processes.
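Continuing the same illustrative sketch (hypothetical names as above), the integrals (3) and (4) become Riemann sums over residuals and noise values; for z > 0 the sum is truncated at T − z, which is immaterial for fixed H and large T.

```python
def correlogram(t, u, z_grid):
    """Riemann-sum approximation of T^{-1} int_0^{T-z} u(t+z) u(t) dt:
    a proxy for (3) when u holds residuals, and for (4) when u holds eps."""
    dt = t[1] - t[0]
    T = t[-1]
    out = []
    for z in z_grid:
        k = int(round(z / dt))                 # lag z expressed in grid steps
        out.append(np.sum(u[k:] * u[:len(u) - k]) * dt / T)
    return np.array(out)

H = 5.0                                        # arbitrary fixed horizon H > 0
z_grid = np.linspace(0.0, H, 51)
resid = X - g(t, theta_hat)                    # residuals of the fitted trend
B_resid = correlogram(t, resid, z_grid)        # residual correlogram, cf. (3)
```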
Consider the normalized residual correlogram
\[
X_T(z)=T^{1/2}\bigl(B_T(z,\hat\theta_T)-B(z)\bigr),\quad z\in[0,H],
\tag{5}
\]
with
\[
Y_T(z)=T^{1/2}\bigl(B_T(z)-B(z)\bigr),\quad z\in[0,H],
\tag{6}
\]
\[
R_T(z)=X_T(z)-Y_T(z),\quad z\in[0,H],
\tag{7}
\]
\[
X_T(z)=Y_T(z)+R_T(z),\quad z\in[0,H].
\tag{8}
\]
We will consider the processes X_T, Y_T, and R_T as random elements in the measurable space (C([0, H]), B_C([0, H])). Since f ∈ L_2(R) under assumption N1(ii), as is well known, for any z_1, z_2 ∈ [0, H], as T → ∞,
\[
\operatorname{cov}\bigl(Y_T(z_1),Y_T(z_2)\bigr)\longrightarrow
4\pi\int_{\mathbb R}\cos(\lambda z_1)\cos(\lambda z_2)\,f^2(\lambda)\,d\lambda,
\tag{9}
\]
and (see, e.g., Buldygin [3]) all the finite-dimensional distributions of the processes Y_T weakly converge, as T → ∞, to the Gaussian process Y with zero mean and covariance function (9). We assume that the process Y is separable. Introduce the function q = q(z), z ≥ 0 (see Section 6.4 of Chapter 6 in Buldygin and Kozachenko [4]). If f ∈ L_2(R), the function q generates the pseudometrics
\[
\rho(z_1,z_2)=q(|z_1-z_2|),\qquad
\sqrt{\rho}\,(z_1,z_2)=\rho^{1/2}(z_1,z_2),\quad z_1,z_2\in[0,H].
\]
Denote by H_{√ρ}(ε) = H_{√ρ}([0, 1], ε), ε > 0, the metric entropy of the interval [0, 1] generated by the pseudometric √ρ, and by ∫_{0+} the integral over an arbitrary neighborhood of zero (0, δ), δ > 0.
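The form of the covariance function (9) comes from the standard correlogram computation; as a reminder (a sketch, assuming the convention B(z) = ∫_R e^{iλz} f(λ) dλ and using the Isserlis identity together with the Parseval relation):
\begin{align*}
T\operatorname{cov}\bigl(B_T(z_1),B_T(z_2)\bigr)
&\longrightarrow\int_{\mathbb R}\bigl[B(h+z_1-z_2)B(h)+B(h+z_1)B(h-z_2)\bigr]\,dh\\
&=2\pi\int_{\mathbb R}\bigl[e^{i\lambda(z_1-z_2)}+e^{i\lambda(z_1+z_2)}\bigr]f^2(\lambda)\,d\lambda
=4\pi\int_{\mathbb R}\cos(\lambda z_1)\cos(\lambda z_2)\,f^2(\lambda)\,d\lambda .
\end{align*}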
Below we formulate a theorem obtained in Buldygin and Kozachenko [4] (Theorem 6.4.1) under milder conditions than ours. In the absence of the assumption of sample continuity of the process ε, it follows from the condition f ∈ L_2(R) that the correlograms can be understood as Riemann mean-square integrals that are continuous in probability with respect to the parameter z. Due to Lemma 6.4.1 in [4] we can conclude that the processes Y_T, T > 0, are likewise continuous in probability. Thus, it can be assumed that the processes Y_T, T > 0, are separable.

Theorem 1. Let f ∈ L_2(R) and
\[
\int_{0+}H_{\sqrt{\rho}}^{1/2}(\varepsilon)\,d\varepsilon<\infty .
\]
Then for any functional φ continuous on the space C([0, H]) the distribution of φ(Y_T) converges, as T → ∞, to the distribution of φ(Y). In particular, for any x > 0,
\[
\lim_{T\to\infty}\mathsf P\Bigl\{\sup_{z\in[0,H]}|Y_T(z)|>x\Bigr\}
=\mathsf P\Bigl\{\sup_{z\in[0,H]}|Y(z)|>x\Bigr\}.
\]

Corollary 1. The conclusion of Theorem 1 is true under conditions N1 and N2 (see Theorem 6.4.1 in [4]).
As shown in Remark 6.4.1 in [4], the condition N2 is satisfied if for some δ ∈ (0, 1]
\[
\int_{\mathbb R}|\lambda|^{\delta}f^2(\lambda)\,d\lambda<\infty .
\tag{10}
\]
Thus, to obtain a functional theorem in C([0, H]) on the asymptotic normality of the normalized residual correlogram X_T, it is required to prove
\[
\sup_{z\in[0,H]}|R_T(z)|\xrightarrow{\ \mathsf P\ }0,\quad T\to\infty .
\tag{11}
\]
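As a simple worked check of (10) (our example, not taken from the paper): for noise of the Ornstein–Uhlenbeck type with covariance function B(z) = σ²e^{−α|z|}, α > 0, one has
\[
f(\lambda)=\frac{\sigma^2\alpha}{\pi(\alpha^2+\lambda^2)},\qquad
\int_{\mathbb R}|\lambda|^{\delta}f^2(\lambda)\,d\lambda
=\frac{\sigma^4\alpha^2}{\pi^2}\int_{\mathbb R}\frac{|\lambda|^{\delta}}{(\alpha^2+\lambda^2)^2}\,d\lambda<\infty
\quad\text{for any }\delta\in(0,1],
\]
since the integrand decays as |λ|^{δ−4} at infinity; hence condition N2 holds for such noise.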

Conditions
To prove (11) we need some regularity conditions imposed on the regression function g, the spectral density f, and the LSE θ̂_T.
Assume that for any t > −γ the function g(t, θ) is twice continuously differentiable with respect to θ ∈ Θ^γ and, moreover, that the derivatives g_i(t, θ) = ∂g(t, θ)/∂θ_i, g_{ij}(t, θ) = ∂²g(t, θ)/∂θ_i ∂θ_j, i, j = 1, …, q, are continuous in the totality of variables. Denote
\[
d_T(\theta)=\operatorname{diag}\bigl(d_{iT}(\theta),\ i=1,\dots,q\bigr),\qquad
d_{iT}^2(\theta)=\int_0^T g_i^2(t,\theta)\,dt,
\]
and suppose that the limits lim_{T→∞} d_{iT}²(θ), i = 1, …, q, exist; in particular, these limits can be infinite. Let also d_{iT}(θ⁰) → ∞, as T → ∞, i = 1, …, q. Instead of the words "for all sufficiently large T" we will write below "for T > T_0". Assume that the following conditions are satisfied.
Taking into account (16), (17) and condition R4, we arrive at the following assumption.

AN. The random vector d_T(θ⁰)(θ̂_T − θ⁰) is asymptotically, as T → ∞, normal with zero mean and a certain covariance matrix Σ(θ⁰).

Sufficient conditions for the fulfillment of assumption AN are bulky. These conditions are given in [14] and, for example, in Ivanov et al. [18]. At least, conditions R2 and R4 form a part of these conditions in [18].
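For orientation (our illustration, not an example from the paper): for the linear trend g(t, θ) = θ₁ + θ₂t the normalization in AN is
\[
d_{1T}^2(\theta)=\int_0^T dt=T,\qquad
d_{2T}^2(\theta)=\int_0^T t^2\,dt=\frac{T^3}{3},
\]
so that d_T(θ) = diag(T^{1/2}, T^{3/2}/√3), and both diagonal entries tend to infinity.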
Consider the diagonal elements (measures) μ_jj, j = 1, …, q, of the matrix spectral measure μ.
Consider some sufficient conditions for the μ_jj-admissibility of the function b from assumption RN under condition N1(ii) (Lemma 3).
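In this terminology (our paraphrase of the notion going back to [14]; the normalization of the prelimit spectral measures is chosen here for definiteness), μ_jj-admissibility of b means that b can be integrated against the prelimit spectral measures of the regression function with passage to the limit:
\[
\mu_{jj}^{T}(A)=d_{jT}^{-2}(\theta^0)\int_A\Bigl|\int_0^T g_j(t,\theta^0)e^{-i\lambda t}\,dt\Bigr|^{2}\frac{d\lambda}{2\pi},\qquad
\int_{\mathbb R}b(\lambda)\,\mu_{jj}^{T}(d\lambda)\xrightarrow[T\to\infty]{}\int_{\mathbb R}b(\lambda)\,\mu_{jj}(d\lambda;\theta^0).
\]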
Proof of Lemma 3. For M > 0 consider the cut-off function b_M(λ) = min{b(λ), M}, λ ∈ R. By the Lebesgue monotone convergence theorem, from RN1(iii) we get
\[
\int_{\mathbb R}b_M(\lambda)\,\mu_{jj}(d\lambda;\theta^0)\longrightarrow
\int_{\mathbb R}b(\lambda)\,\mu_{jj}(d\lambda;\theta^0),\quad M\to\infty .
\]
Under conditions N1(ii) and R4, for any fixed M > 0 the integrals of the bounded function b_M against the prelimit measures converge to the integral against μ_jj. On the other hand, integrating by parts, we obtain (see (15) and RN1(i)) an estimate of the remainder term K^{(1)}_{1j}(T, M). Thus, under condition R2(i) with r = 0, it remains to control this term. Let ε > 0 be an arbitrary fixed number. Since the integral in K^{(1)}_{1j}(T, M) is majorized by the spectral moment ∫_R |λ|^{−1+δ} f(λ) dλ < ∞, for T > T_0 we have K^{(1)}_{1j}(T, M) < ε, and the lemma follows.

Theorem 2. If the conditions N1, N2, R1–R4, AN, and RN are satisfied, then
\[
X_T\xrightarrow{\ \mathcal D\ }Y\ \text{in}\ C([0,H]),\quad T\to\infty .
\tag{23}
\]
In view of Theorem 1 and Lemma 1 of Section 2, to obtain (23) it is sufficient to prove (11). So, taking into account the expressions (5)–(8), the proof of Theorem 2 consists of three lemmas.
We will use the notation I_{1T}, I_{2T}, I_{3T} for the three terms obtained by substituting X(t) − g(t, θ̂_T) = ε(t) − (g(t, θ̂_T) − g(t, θ⁰)) into (3), so that R_T(z) = T^{−1/2}(I_{1T}(z) + I_{2T}(z) + I_{3T}(z)), z ∈ [0, H].

Lemma 4. Under conditions N1, R2(ii), R2(iii), R4, RN, and AN,
\[
\sup_{z\in[0,H]}T^{-1/2}\,|I_{2T}(z)|\xrightarrow{\ \mathsf P\ }0,\quad T\to\infty .
\]

Proof. Apply the Taylor formula to the integral T^{−1/2} I_{2T} and write it in terms of the first- and second-order derivatives of g. Consider the sample continuous Gaussian stochastic processes ξ_{jT}(z), z ∈ [0, H], j = 1, …, q, arising in this expansion. Subject to R4, as T → ∞, their covariance functions converge. Thus all finite-dimensional distributions of the stationary Gaussian processes ξ_{jT}(z), z ∈ [0, H], converge to the corresponding finite-dimensional distributions of the stationary Gaussian processes ξ_j = ξ_j(z), z ∈ [0, H], with covariance functions B_j(z), z ∈ [0, H], j = 1, …, q. We assume that the processes ξ_j, j = 1, …, q, are separable.
Since by condition RN, for some δ ∈ (0, 1], the second moments of the increments of these processes admit a bound of order |z_1 − z_2|^δ, according to the Kolmogorov theorem (see, for example, Gikhman and Skorokhod [5]) the processes ξ_j are sample continuous. Moreover, under condition RN (see again [5]), for every functional φ continuous on C([0, H]) the distribution of φ(ξ_{jT}) converges, as T → ∞, to the distribution of φ(ξ_j). Using the same notation for weak convergence of random variables, in particular, we obtain ξ_{jT} →^D ξ_j, j = 1, …, q, and, in view of the representation (24), the corresponding suprema converge in distribution as well:
\[
\sup_{z\in[0,H]}|\xi_{jT}(z)|\xrightarrow{\ \mathcal D\ }\sup_{z\in[0,H]}|\xi_j(z)|,\quad j=1,\dots,q .
\]
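The passage from a second-moment bound on the increments to sample continuity is the standard Gaussian argument (a sketch with generic constants): if E(ξ_j(z_1) − ξ_j(z_2))² ≤ c|z_1 − z_2|^δ, then for any integer p ≥ 1
\[
\mathsf E\bigl|\xi_j(z_1)-\xi_j(z_2)\bigr|^{2p}
=(2p-1)!!\;\bigl[\mathsf E(\xi_j(z_1)-\xi_j(z_2))^2\bigr]^{p}
\le(2p-1)!!\,c^{p}\,|z_1-z_2|^{p\delta},
\]
and any p with pδ > 1 makes the Kolmogorov criterion applicable.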

Further, if the events {|d_T(θ⁰)(θ̂_T − θ⁰)| ≤ r} occur, then the remainder terms in (24) admit bounds that are uniform in z ∈ [0, H].
Under condition AN, for any δ > 0 it is possible to find r > 0 such that for T > T_0(δ) the probability of the complementary event is less than δ. On the other hand, by the Isserlis theorem (see, for example, [14]), the fourth moments of the Gaussian integrals involved are expressed through products of their covariances. From the inequalities (27) the statement of the lemma follows.

Lemma 5. Under the conditions of Theorem 2,
\[
\sup_{z\in[0,H]}T^{-1/2}\,|I_{3T}(z)|\xrightarrow{\ \mathsf P\ }0,\quad T\to\infty .
\]

Proof. We write T^{−1/2} I_{3T} in a form in which the random vector θ*_T is of the form (24). Consider the sample continuous Gaussian processes η_{jT}(z), z ∈ [0, H], j = 1, …, q, arising in this representation; all finite-dimensional distributions of the Gaussian processes {η_{jT}(z), z ∈ [0, H]}, j = 1, …, q, converge, as T → ∞, to the corresponding finite-dimensional distributions of the stationary Gaussian processes ξ_j, j = 1, …, q, with covariance functions (25). The rest of the proof repeats the argument of Lemma 4.
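For reference, the Isserlis theorem invoked in the proofs above expresses fourth-order moments of jointly Gaussian zero-mean variables ξ_1, …, ξ_4 through covariances:
\[
\mathsf E\,\xi_1\xi_2\xi_3\xi_4=\mathsf E\xi_1\xi_2\,\mathsf E\xi_3\xi_4+\mathsf E\xi_1\xi_3\,\mathsf E\xi_2\xi_4+\mathsf E\xi_1\xi_4\,\mathsf E\xi_2\xi_3 .
\]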
Theorem 2 is proved as well.

Remark 2.
In the proofs of Sections 3 and 4 the condition R2(i) has been used only for r = 0. However, this condition is used for any r ≥ 0 in the proof of the asymptotic normality of the LSE θ̂_T; see the explanation in the example below.

Trigonometric regression function
In this section we consider the example of the trigonometric regression function
\[
g(t,\theta)=\sum_{k=1}^{N}\bigl(A_k\cos(\varphi_k t)+B_k\sin(\varphi_k t)\bigr),
\tag{33}
\]
where θ = (A_1, B_1, φ_1, …, A_N, B_N, φ_N), 0 < φ_1 < ⋯ < φ_N. To apply the results obtained in the paper to the function (33), we have to change a bit Definition 1 of the LSE. We will use the following modification of the LSE proposed by Walker [24]; see also Ivanov [12,13]. Consider nonrandom sets S_T ⊂ R^N having the property
\[
\varphi\in S_T:\quad 0\le\varphi_1<\dots<\varphi_N,\qquad
T(\varphi_k-\varphi_{k-1})\to\infty,\ k=1,\dots,N,\quad\varphi_0=0 .
\tag{35}
\]

Definition 5. The LSE of the parameter θ⁰ in the Walker sense is any random vector θ̂_T ∈ Θ_T having the property
\[
Q_T(\hat\theta_T)=\min_{\tau\in\Theta_T}Q_T(\tau),
\]
where Q_T(τ) is defined in (2) and Θ_T ⊂ R^{3N} is such that A_k ∈ R, B_k ∈ R, k = 1, …, N, and φ ∈ S_T.
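A minimal numerical sketch of an estimator of the Walker type for (33) with N = 1 (the grid, constants, and noise model below are our assumptions): the amplitudes enter (33) linearly, so for each trial frequency they can be profiled out by linear least squares, and the frequency estimate minimizes the profiled residual sum of squares.

```python
import numpy as np

def walker_lse(t, X, phi_grid):
    """Profile least squares for g(t) = A cos(phi t) + B sin(phi t):
    the amplitudes enter linearly, so they are found by linear LS at each
    candidate frequency, and phi_hat minimizes the residual sum of squares."""
    best = None
    for phi in phi_grid:
        D = np.column_stack([np.cos(phi * t), np.sin(phi * t)])
        coef, *_ = np.linalg.lstsq(D, X, rcond=None)
        rss = np.sum((X - D @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], phi)
    _, A_hat, B_hat, phi_hat = best
    return A_hat, B_hat, phi_hat

rng = np.random.default_rng(1)
T, n = 200.0, 20_000
t = np.linspace(0.0, T, n)
A0, B0, phi0 = 1.0, -0.5, 2.0                  # "true" parameters
X = A0 * np.cos(phi0 * t) + B0 * np.sin(phi0 * t) + rng.normal(size=n)
# Frequency grid with mesh of order 1/T, mimicking the role of the sets S_T.
phi_grid = np.arange(0.5, 3.5, 0.5 / T)
A_hat, B_hat, phi_hat = walker_lse(t, X, phi_grid)
```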
The relations (35) allow one to distinguish the parameters φ_k, k = 1, …, N, and to prove the consistency of the LSE θ̂_T in the Walker sense; see [24,12,13] and [18]. The rest of this section verifies that the conditions of Theorem 2 are satisfied for the function (33).

Proof. Due to the smoothness of the function (33) with respect to the totality of variables, there is no need to introduce conditions on the differentiability of the function g with respect to the variables θ on the set Θ^γ and with respect to the variable t on the set (−γ, +∞), as was done in the main part of the paper out of technical necessity.
To check the fulfillment of condition R1 for the regression function (33), we compute the quantities entering the inequality (13) and obtain (37); note that for k = 1, …, N the relations (38) hold. Thus, for any ε > 0 and T > T_0 = T_0(ε), the bound (39) follows from (38). Increasing T_0, if necessary, we obtain (40) from (37) and (39). So, as follows from (40), for any θ⁰ ∈ Θ and ε > 0 there exists T_0 > 0 such that for T > T_0 the inequality (13) of condition R1 is satisfied with constant k_0 ≥ 12N + ε.
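Relations of the type (38) rest on elementary trigonometric integrals; for instance,
\[
\int_0^T \cos^2(\varphi t)\,dt=\frac{T}{2}+\frac{\sin(2\varphi T)}{4\varphi},\qquad
\int_0^T \cos(\varphi t)\sin(\varphi t)\,dt=\frac{\sin^2(\varphi T)}{2\varphi},
\]
so that T^{-1}∫_0^T cos²(φt) dt → 1/2 and T^{-1}∫_0^T cos(φt) sin(φt) dt → 0, as T → ∞, uniformly in φ ≥ φ* > 0.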
Note that
\[
R_{jl}(h;\theta)=\int_{\mathbb R}e^{i\lambda h}\,\mu_{jl}(d\lambda;\theta),\quad h\in\mathbb R,
\]
where it is supposed that the matrix function (R_{jl}(h; θ))_{j,l=1}^q is continuous at h = 0. As to the fulfillment of condition AN for the trigonometric regression function (33), in the paper Ivanov et al. [18] it is shown, using relations (38), that the normalized LSE in the Walker sense is asymptotically, as T → ∞, normal N(0, Σ_TRIG), where Σ_TRIG is a block-diagonal matrix whose blocks are determined by the values f(φ_k⁰) and the amplitudes A_k⁰, B_k⁰, k = 1, …, N. To obtain such a result, it was first proved in [18] that the normalized estimator (36) is weakly consistent, that is, for any r > 0 the probability that its deviation exceeds r tends to zero, as T → ∞. Then, under a complex set of conditions on a general regression function, the asymptotic normality of the LSE of its parameters was proved. And finally, it was verified that the trigonometric regression function satisfies the specified set of conditions. It is important to note that the proofs of the asymptotic normality of the LSE complying with Definition 1 and with Definition 5 are the same.
It remains to check the last condition RN associated with the regression function (33). As mentioned above, under assumptions N1 and N3 the condition RN follows from N1 and R4. If the function b(λ), λ ∈ R, is not bounded, then we verify the convergence (18) using Lemma 3.
Using formulas (41) and the fact that in calculating the integrals (19) one can take, for example, |λ| > φ_N⁰ + 1 = λ_0, we see that the integrands have no non-integrable singularities, and therefore condition RN is satisfied for the trigonometric regression function (33).