Robust-to-outliers square-root LASSO, simultaneous inference with a MOM approach
We consider the least-squares regression problem with unknown noise variance, where the observed data points are allowed to be corrupted by outliers. Building on the median-of-means (MOM) method introduced by Lecue and Lerasle [Ann. Statist. 48(2):906–931 (April 2020)] in the case of known noise variance, we propose a general MOM approach for simultaneous inference of both the regression function and the noise variance, requiring only an upper bound on the noise level. Interestingly, this generalization requires care due to regularity issues that are intrinsic to the underlying convex-concave optimization problem. In the general case where the regression function belongs to a convex class, we show that our simultaneous estimator achieves with high probability the same convergence rates and a similar risk bound as if the noise level were known, as well as convergence rates for the estimated noise standard deviation. In the high-dimensional sparse linear setting, our estimator yields a robust analog of the square-root LASSO. Under weak moment conditions, it jointly achieves with high probability the minimax rates of estimation s^{1/p}√((1/n) log(p/s)) for the ℓ_p-norm of the coefficient vector, and the rate √((s/n) log(p/s)) for the estimation of the noise standard deviation. Here n denotes the sample size, p the dimension and s the sparsity level. We finally propose an extension to the case of unknown sparsity level s, providing a jointly adaptive estimator (β̂, σ̂, ŝ). It simultaneously estimates the coefficient vector, the noise level and the sparsity level, with proven bounds on each of these three components that hold with high probability.
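To illustrate the median-of-means principle the abstract builds on (this is only a minimal sketch of the generic MOM mean estimator, not the authors' simultaneous regression/noise estimator): the sample is split into equal-size blocks, each block is averaged, and the median of the block means is returned. A few arbitrarily large outliers can corrupt at most a few blocks, so the median of the block means stays close to the true mean.

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Median-of-means estimate of the mean of a 1-D sample.

    Splits the sample into n_blocks equal-size blocks, averages each
    block, and returns the median of the block means.  The median step
    makes the estimate robust to a small number of outlying blocks.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % n_blocks  # drop the remainder so blocks are equal
    blocks = x[:n].reshape(n_blocks, -1)
    return np.median(blocks.mean(axis=1))

# Example: a Gaussian sample with true mean 1, contaminated by 5 outliers.
rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=1.0, size=1000)
sample[:5] = 1e6  # corrupt 5 observations

print(np.mean(sample))                        # dragged far away by the outliers
print(median_of_means(sample, n_blocks=20))   # stays close to the true mean 1
```

With 20 blocks of 50 points each, the 5 corrupted points fall into at most 5 blocks, so the median over the 20 block means ignores them; the plain mean, by contrast, is pulled to roughly 5000.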