TY - JOUR

T1 - Penalized Least Squares methods for solving the EEG Inverse Problem

AU - Vega-Hernández, Mayrim

AU - Martínez-Montes, Eduardo

AU - Sánchez-Bornot, José M.

AU - Lage-Castellanos, Agustín

AU - Valdés-Sosa, Pedro A.

PY - 2008/10/1

Y1 - 2008/10/1

N2 - Most of the known solutions (linear and nonlinear) of the ill-posed EEG Inverse Problem can be interpreted as the estimated coefficients in a penalized regression framework. In this work we present a general formulation of this problem as a Multiple Penalized Least Squares model, which encompasses many of the previously known methods as particular cases (e.g., Minimum Norm, LORETA). New types of inverse solutions arise since recent advances in the field of penalized regression have made it possible to deal with non-convex penalty functions, which provide sparse solutions (Fan and Li (2001)). Moreover, a generalization of this approach allows the use of any combination of penalties based on ℓ1- or ℓ2-norms, leading to solutions with combined properties such as smoothness and sparsity. Synthetic data is used to explore the benefits of non-convex penalty functions (e.g., LASSO, SCAD and LASSO Fusion) and mixtures (e.g., Elastic Net and LASSO Fused) by comparing them with known solutions in terms of localization error, blurring and visibility. Real data is used to show that a mixture model (Elastic Net) allows for tuning the spatial resolution of the solution to range from very concentrated to very blurred sources.

AB - Most of the known solutions (linear and nonlinear) of the ill-posed EEG Inverse Problem can be interpreted as the estimated coefficients in a penalized regression framework. In this work we present a general formulation of this problem as a Multiple Penalized Least Squares model, which encompasses many of the previously known methods as particular cases (e.g., Minimum Norm, LORETA). New types of inverse solutions arise since recent advances in the field of penalized regression have made it possible to deal with non-convex penalty functions, which provide sparse solutions (Fan and Li (2001)). Moreover, a generalization of this approach allows the use of any combination of penalties based on ℓ1- or ℓ2-norms, leading to solutions with combined properties such as smoothness and sparsity. Synthetic data is used to explore the benefits of non-convex penalty functions (e.g., LASSO, SCAD and LASSO Fusion) and mixtures (e.g., Elastic Net and LASSO Fused) by comparing them with known solutions in terms of localization error, blurring and visibility. Real data is used to show that a mixture model (Elastic Net) allows for tuning the spatial resolution of the solution to range from very concentrated to very blurred sources.

KW - EEG

KW - Inverse Problem

KW - Least Squares

KW - Penalized regression

UR - http://www.scopus.com/inward/record.url?scp=60149096965&partnerID=8YFLogxK

M3 - Article

AN - SCOPUS:60149096965

SN - 1017-0405

VL - 18

SP - 1535

EP - 1551

JO - Statistica Sinica

JF - Statistica Sinica

IS - 4

ER -