The area under the receiver operating characteristic curve (AUC) is a suitable measure for the quality of classification algorithms. Here we use the theory of U-statistics to derive new confidence intervals for it. The new confidence intervals take into account that only the total sample size used to calculate the AUC can be controlled, while the numbers of members of the case group and of the control group are random. We show that the new confidence intervals can be used not only to evaluate the quality of the fitted model, but also to judge the quality of the classification algorithm itself. We also take this opportunity to show that two popular confidence intervals for the AUC, namely DeLong's interval and the Mann–Whitney intervals due to Sen, coincide.
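To make the U-statistic view concrete, the AUC is the Mann–Whitney two-sample U-statistic of the case and control scores, and DeLong's variance estimate is built from its structural components ("placements"). The sketch below treats the two group sizes as fixed (whereas the abstract's point is that they are random); the function name and interface are ours, not from the paper.

```python
import numpy as np

def delong_auc_ci(control_scores, case_scores):
    """Mann-Whitney AUC with DeLong's asymptotic 95% confidence interval.

    Illustrative sketch only: the group sizes m and n are treated as
    fixed here, unlike in the randomized-group-size setting of the paper.
    """
    x = np.asarray(control_scores, float)  # m control scores
    y = np.asarray(case_scores, float)     # n case scores
    m, n = len(x), len(y)
    # U-statistic kernel: psi(X_i, Y_j) = 1{X_i < Y_j} + 0.5 * 1{X_i = Y_j}
    psi = (x[:, None] < y[None, :]) + 0.5 * (x[:, None] == y[None, :])
    auc = psi.mean()
    # DeLong's placements: row/column means of the kernel matrix
    v10 = psi.mean(axis=1)  # one placement per control subject
    v01 = psi.mean(axis=0)  # one placement per case subject
    var = v10.var(ddof=1) / m + v01.var(ddof=1) / n
    half = 1.959963984540054 * np.sqrt(var)  # standard-normal 97.5% quantile
    return auc, (auc - half, auc + half)
```

For perfectly separated samples the placements are constant, the variance estimate collapses to zero, and the interval degenerates to the point 1, which is one practical reason to study refined intervals.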
Principal Component Analysis (PCA) is a classical technique of dimension reduction for multivariate data. When the data are a mixture of subjects from different subpopulations, one may be interested in a PCA of some (or each) subpopulation separately. In this paper, estimators are considered for the principal component directions and the corresponding eigenvalues of subpopulations in the nonparametric model of mixture with varying concentrations. Consistency and asymptotic normality of the obtained estimators are proved. These results allow one to construct confidence sets for the PC model parameters. The performance of such confidence intervals for the leading eigenvalues is investigated via simulations.
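The shape of such a subpopulation PCA estimator can be sketched as a weighted covariance estimate followed by an eigendecomposition. In the sketch below the per-observation weights `a` for the target subpopulation are simply taken as given; constructing them from the varying concentrations is the substance of the paper, and the function name and interface are our own.

```python
import numpy as np

def weighted_pca(X, a, n_components=2):
    """PC directions and eigenvalues from a weighted covariance estimate.

    Sketch only: the weights `a` would come from the varying-concentration
    scheme of the paper; here they are assumed to be supplied.
    """
    X = np.asarray(X, float)               # n observations x d variables
    w = np.asarray(a, float)
    w = w / w.sum()                        # normalize weights
    mu = w @ X                             # weighted mean vector
    Xc = X - mu
    S = (Xc * w[:, None]).T @ Xc           # weighted covariance estimate
    vals, vecs = np.linalg.eigh(S)         # eigh returns ascending order
    order = np.argsort(vals)[::-1]         # reorder to leading-first
    return vals[order][:n_components], vecs[:, order][:, :n_components]
```

With uniform weights this reduces to ordinary sample PCA, so the weights are exactly where the mixture structure enters.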
A multivariate errors-in-variables (EIV) model with an intercept term, and a polynomial EIV model, are considered. The focus is on the structural homoskedastic case, where the vectors of covariates are i.i.d. and the measurement errors are i.i.d. as well. The latent covariates are normally distributed, and the corresponding classical errors are also assumed normal. In both models, it is shown that the (inconsistent) ordinary least squares estimators of the regression parameters yield an a.s. approximation to the best prediction of the response given the values of the observable covariates. Thus, not only in the linear EIV model but in the polynomial EIV models as well, consistent estimators of the regression parameters are useless in the prediction problem, provided the size and covariance structure of the observation errors for the predicted subject do not differ from those in the data used for the model fitting.
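The prediction claim is easy to illustrate by simulation in the simplest linear case: the naive OLS fit of the response on the error-contaminated covariate tracks the best predictor given the observed covariate, whereas plugging the consistently estimated (error-corrected) slope into the observed covariate predicts worse. All variable names and the variance values below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b0, b1 = 50_000, 1.0, 2.0
sigma_x, sigma_u, sigma_e = 1.0, 0.7, 0.5   # illustrative std deviations

xi = rng.normal(0.0, sigma_x, n)            # latent covariate
w = xi + rng.normal(0.0, sigma_u, n)        # observed covariate, classical error
y = b0 + b1 * xi + rng.normal(0.0, sigma_e, n)

# Naive OLS of y on w: inconsistent for b1 (attenuated by the reliability
# ratio), but it estimates the best linear predictor of y given w.
b1_ols = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
b0_ols = y.mean() - b1_ols * w.mean()

# Consistent estimator: correct the denominator for the known error variance.
b1_cons = np.cov(w, y)[0, 1] / (np.var(w, ddof=1) - sigma_u**2)
b0_cons = y.mean() - b1_cons * w.mean()

# Fresh subjects with the same error structure: predict y from w alone.
xi2 = rng.normal(0.0, sigma_x, n)
w2 = xi2 + rng.normal(0.0, sigma_u, n)
y2 = b0 + b1 * xi2 + rng.normal(0.0, sigma_e, n)

mse_ols = np.mean((y2 - (b0_ols + b1_ols * w2)) ** 2)
mse_cons = np.mean((y2 - (b0_cons + b1_cons * w2)) ** 2)
# The naive fit yields the smaller prediction error, as the abstract asserts.
```

The corrected slope recovers the true parameter, but the best predictor of the response given the noisy covariate is the attenuated regression, which is exactly what naive OLS estimates.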