Online First
Oracle Inequalities for Corrupted Compressed Sensing
Liping Yin and Peng Li

J. Comp. Math. DOI: 10.4208/jcm.2310-m2022-0282

Publication Date: 2024-03-07

  • Abstract

In this paper, we establish oracle inequalities for highly corrupted linear observations $b = Ax_0 + f_0 + e \in \mathbb{R}^m$. Here $x_0 \in \mathbb{R}^n$ with $n \gg m$ is an (approximately) sparse signal, $f_0 \in \mathbb{R}^m$ is a sparse error vector whose nonzero entries may be arbitrarily large, and $e \sim \mathcal{N}(0, \sigma^2 I_m)$ is a Gaussian noise vector. We extend the oracle inequality $\|\hat{x}-x_0\|_2^2 \lesssim \sum_i \min\{|x_0(i)|^2, \sigma^2\}$ for the Dantzig selector and Lasso models in [E.J. Candès and T. Tao, Ann. Statist., 35 (2007), 2313–2351] and [T.T. Cai, L. Wang, and G. Xu, IEEE Trans. Inf. Theory, 56 (2010), 3516–3522] to $\|\hat{x}-x_0\|_2^2 + \|\hat{f}-f_0\|_2^2 \lesssim \sum_i \min\{|x_0(i)|^2, \sigma^2\} + \sum_j \min\{|\lambda f_0(j)|^2, \sigma^2\}$ for the extended Dantzig selector and Lasso models. Here $(\hat{x}, \hat{f})$ is the solution of the extended model, and $\lambda > 0$ is the parameter balancing $\|x\|_1$ and $\|f\|_1$ in the objective $\|x\|_1 + \lambda\|f\|_1$.
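The abstract does not spell out the exact constraint sets of the extended Dantzig selector and Lasso models, so the following is only a minimal numerical sketch of one common penalized (Lasso-type) formulation consistent with the stated objective $\|x\|_1 + \lambda\|f\|_1$, namely $\min_{x,f} \tfrac12\|Ax + f - b\|_2^2 + \mu(\|x\|_1 + \lambda\|f\|_1)$. The parameter choices `lam` and `mu` below are illustrative assumptions, not the values analyzed in the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, k, s = 80, 400, 5, 8          # measurements, ambient dimension, signal sparsity, corruption sparsity
sigma = 0.05                         # Gaussian noise level

# Synthetic corrupted compressed sensing instance: b = A x0 + f0 + e
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
f0 = np.zeros(m)
f0[rng.choice(m, s, replace=False)] = 10 * rng.standard_normal(s)   # sparse gross corruptions
b = A @ x0 + f0 + sigma * rng.standard_normal(m)

# Hypothetical parameter choices (illustrative only)
lam = 1.0                                  # balance between ||x||_1 and ||f||_1
mu = 2 * sigma * np.sqrt(2 * np.log(n))    # overall regularization level

# Penalized extended Lasso: jointly recover the signal x and the sparse error f
x = cp.Variable(n)
f = cp.Variable(m)
objective = cp.Minimize(0.5 * cp.sum_squares(A @ x + f - b)
                        + mu * (cp.norm1(x) + lam * cp.norm1(f)))
cp.Problem(objective).solve()

print("||x_hat - x0||_2 =", np.linalg.norm(x.value - x0))
print("||f_hat - f0||_2 =", np.linalg.norm(f.value - f0))
```

On such a synthetic instance, the recovered pair $(\hat{x}, \hat{f})$ can be compared with $(x_0, f_0)$ to observe the joint estimation error $\|\hat{x}-x_0\|_2^2 + \|\hat{f}-f_0\|_2^2$ that the oracle inequality bounds.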

  • Copyright

COPYRIGHT: © Global Science Press