[1] Hampel F R. A general qualitative definition of robustness[J]. The Annals of Mathematical Statistics, 1971, 42:1887-1896.
[2] Donoho D, Huber P J. The notion of breakdown point[C]//Bickel P J, Doksum K A, Hodges J L. A Festschrift for Erich L. Lehmann. Belmont:Wadsworth, 1983:157-184.
[3] Yohai V J. High breakdown-point and high efficiency robust estimates for regression[J]. The Annals of Statistics, 1987, 15:642-656.
[4] Yohai V J, Zamar R H. High breakdown-point estimates of regression by means of the minimization of an efficient scale[J]. Journal of the American Statistical Association, 1988, 83:406-413.
[5] Rousseeuw P J. Least median of squares regression[J]. Journal of the American Statistical Association, 1984, 79:871-880.
[6] Sakata S, White H. An alternative definition of finite-sample breakdown point with application to regression model estimators[J]. Journal of the American Statistical Association, 1995, 90:1099-1106.
[7] Cox D R, Snell E J. The analysis of binary data[M]. London:Methuen, 1970.
[8] Christmann A. Least median of weighted squares in logistic regression with large strata[J]. Biometrika, 1994, 81:413-417.
[9] Künsch H R, Stefanski L A, Carroll R J. Conditionally unbiased bounded-influence estimation in general regression models, with applications to generalized linear models[J]. Journal of the American Statistical Association, 1989, 84:460-473.
[10] Neykov N M, Müller C H. Breakdown point and computation of trimmed likelihood estimators in generalized linear models[C]//Dutter R, Filzmoser P, Gather U, et al. Developments in Robust Statistics. Berlin:Springer-Verlag, 2003:277-286.
[11] Khan D M, Ihtisham S, Ali A, et al. An efficient and high breakdown estimation procedure for nonlinear regression models[J]. Pakistan Journal of Statistics, 2017, 33:223-236.
[12] Croux C, Flandre C, Haesbroeck G. The breakdown behavior of the maximum likelihood estimator in the logistic regression model[J]. Statistics & Probability Letters, 2002, 60:377-386.
[13] Breiman L. Better subset regression using the nonnegative garrote[J]. Technometrics, 1995, 37:373-384.
[14] Tibshirani R. Regression shrinkage and selection via the lasso[J]. Journal of the Royal Statistical Society: Series B, 1996, 58:267-288.
[15] Wang H, Li G, Jiang G. Robust regression shrinkage and consistent variable selection through the LAD-lasso[J]. Journal of Business and Economic Statistics, 2007, 25:347-355.
[16] Xie H, Huang J. SCAD-penalized regression in high-dimensional partially linear models[J]. The Annals of Statistics, 2009, 37:673-696.
[17] Alfons A, Croux C, Gelper S. Sparse least trimmed squares regression for analyzing high-dimensional large data sets[J]. The Annals of Applied Statistics, 2013, 7:226-248.
[18] Albert A, Anderson J A. On the existence of maximum likelihood estimates in logistic regression models[J]. Biometrika, 1984, 71:1-10.
[19] Silvapulle M J. On the existence of maximum likelihood estimators for the binomial response models[J]. Journal of the Royal Statistical Society: Series B, 1981, 43:310-313.
[20] Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties[J]. Journal of the American Statistical Association, 2001, 96:1348-1360.
[21] Rousseeuw P J, Leroy A M. Robust regression and outlier detection[M]. New York:Wiley-Interscience, 1987.
[22] Zuo Y. Some quantitative relationships between two types of finite-sample breakdown point[J]. Statistics & Probability Letters, 2001, 51:369-375.
[23] Efron B, Hastie T, Johnstone I, et al. Least angle regression[J]. The Annals of Statistics, 2004, 32:407-451.
[24] Balakrishnan S, Madigan D. Algorithms for sparse linear classifiers in the massive data setting[J]. Journal of Machine Learning Research, 2008, 9:313-337.
[25] Wu T T, Lange K. Coordinate descent algorithms for lasso penalized regression[J]. The Annals of Applied Statistics, 2008, 2:224-244.
[26] Friedman J, Hastie T, Hofling H, et al. Pathwise coordinate optimization[J]. The Annals of Applied Statistics, 2007, 1:302-332.
[27] Zhang C, Zhang Z, Chai Y. Penalized Bregman divergence estimation via coordinate descent[J]. Journal of the Iranian Statistical Society, 2011, 10(2):125-140.
[28] Luo Z Q, Tseng P. Error bounds and convergence analysis of feasible descent methods:a general approach[J]. Annals of Operations Research, 1993, 46:157-178.
[29] Saha A, Tewari A. On the nonasymptotic convergence of cyclic coordinate descent methods[J]. SIAM Journal on Optimization, 2013, 23:576-601.
[30] Donoho D L, Johnstone I M. Ideal spatial adaptation by wavelet shrinkage[J]. Biometrika, 1994, 81:425-455.
[31] Fahrmeir L, Kaufmann H. Consistency and asymptotic normality of the maximum likelihood estimator in generalized linear models[J]. The Annals of Statistics, 1985, 13:342-368.