
Signal Decomposition Using Masked Proximal Operators

By Bennet E. Meyers, Stanford University, USA, bennetm@stanford.edu | Stephen P. Boyd, Stanford University, USA, boyd@stanford.edu

 
Suggested Citation
Bennet E. Meyers and Stephen P. Boyd (2023), "Signal Decomposition Using Masked Proximal Operators", Foundations and Trends® in Signal Processing: Vol. 17: No. 1, pp 1-78. http://dx.doi.org/10.1561/2000000122

Publication Date: 16 Jan 2023
© 2023 B. E. Meyers and S. P. Boyd
 
Subjects
Statistical signal processing, Statistical/machine learning, Signal decompositions, Nonlinear signal processing, Multidimensional signal processing, Optimization, Data mining
 
Keywords
Renewable Energy Technologies: Photovoltaic
 


Abstract

We consider the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. We describe a simple and general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases. Summarizing and clarifying prior results, we give two distributed optimization methods for computing the decomposition, which find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., handle each component separately. We derive tractable methods for evaluating the masked proximal operators of some loss functions that, to our knowledge, have not appeared in the literature.
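
To make the framework concrete, the problem can be written out as follows. The notation is illustrative, reconstructed from the description in the abstract rather than quoted from the monograph:

    % Observed signal y \in \mathbf{R}^T, modeled as a sum of K components
    % x^1, \ldots, x^K, each specified by a component class loss \phi_k.
    \begin{array}{ll}
    \text{minimize}   & \phi_1(x^1) + \cdots + \phi_K(x^K) \\
    \text{subject to} & y_t = x^1_t + \cdots + x^K_t \quad \text{for every known entry } t.
    \end{array}

    % Masked proximal operator of a loss \phi, where the mask operator M
    % zeroes out missing entries (with M = I it is the usual prox):
    \mathrm{prox}_{\phi,M}(v) = \operatorname*{argmin}_x
        \left( \phi(x) + \frac{\rho}{2} \, \| M(x - v) \|_2^2 \right).

Taking each \phi_k to be the negative log-likelihood of a density recovers the MAP interpretation mentioned above.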

DOI: 10.1561/2000000122
ISBN: 978-1-63828-102-3
92 pp. $70.00
ISBN: 978-1-63828-103-0
92 pp. $145.00
Table of contents:
1. Introduction
2. Signal Decomposition
3. Background and Related Work
4. Solution Methods
5. Component Class Attributes
6. Component Class Examples
7. Examples
Acknowledgements
Appendix
References

Signal Decomposition Using Masked Proximal Operators

The decomposition of a time series signal into components is an age-old problem, and many approaches have been proposed, including traditional filtering and smoothing, seasonal-trend decomposition, Fourier and other decompositions, PCA and newer variants such as nonnegative matrix factorization, various statistical methods, and many heuristic methods.

This monograph covers the well-studied problem of decomposing a vector time series signal into components with different characteristics, such as smooth, periodic, nonnegative, or sparse. It presents a general framework in which the components are defined by loss functions (which include constraints), and the signal decomposition is carried out by minimizing the sum of losses of the components (subject to the constraints). When each loss function is the negative log-likelihood of a density for the signal component, this framework coincides with maximum a posteriori probability (MAP) estimation; but it also includes many other interesting cases.
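
As a toy illustration of this framework (and not the monograph's own solution method, which is described next), a small convex instance with missing data can be posed and solved directly in CVXPY. The component classes, weights, and period below are arbitrary choices made for this sketch:

    # Toy convex signal decomposition with missing data, solved with CVXPY.
    import cvxpy as cp
    import numpy as np

    # Synthetic signal: slow trend + 50-periodic term + noise, ~20% missing.
    T = 200
    t = np.arange(T)
    rng = np.random.default_rng(0)
    y = 0.5 * (t / T) ** 2 + np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(T)
    known = np.flatnonzero(rng.random(T) > 0.2)  # indices of observed entries

    x_smooth = cp.Variable(T)    # component class 1: smooth trend
    x_seasonal = cp.Variable(T)  # component class 2: roughly 50-periodic
    x_resid = cp.Variable(T)     # component class 3: small residual

    # One loss per component class: squared second differences (smoothness),
    # squared deviation under a 50-sample shift (periodicity), sum of squares.
    loss = (cp.sum_squares(cp.diff(x_smooth, 2))
            + cp.sum_squares(x_seasonal[50:] - x_seasonal[:-50])
            + 10 * cp.sum_squares(x_resid))

    # The components must sum to y on the known entries only.
    total = x_smooth + x_seasonal + x_resid
    prob = cp.Problem(cp.Minimize(loss), [total[known] == y[known]])
    prob.solve()
    print("optimal objective:", prob.value)

This monolithic solve works only because every loss here is convex; the point of the monograph's methods is to handle each component separately through its (masked) proximal operator, including nonconvex component classes.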

Summarizing and clarifying prior results, the monograph presents two distributed optimization methods for computing the decomposition. These methods find the optimal decomposition when the component class loss functions are convex, and are good heuristics when they are not. Both methods require only the masked proximal operator of each of the component loss functions, a generalization of the well-known proximal operator that handles missing entries in its argument. Both methods are distributed, i.e., they handle each component separately. Also included are tractable methods for evaluating the masked proximal operators of some loss functions that have not previously appeared in the literature.
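
To make the central primitive concrete: for the simple sum-of-squares loss, the masked proximal operator has an elementary closed form. The helper below is a hypothetical sketch (the function name, scaling convention, and boolean-mask representation are assumptions, not the monograph's reference implementation):

    import numpy as np

    def masked_prox_sum_squares(v, mask, rho=1.0, w=1.0):
        """Masked proximal operator of phi(x) = w * ||x||_2^2.

        Solves argmin_x  w * ||x||^2 + (rho / 2) * ||mask * (x - v)||^2.
        On missing entries the second term vanishes, so the minimizer is 0;
        on known entries, setting the gradient 2*w*x + rho*(x - v) to zero
        gives x = rho * v / (2*w + rho).
        """
        x = np.zeros_like(v, dtype=float)
        x[mask] = rho * v[mask] / (2 * w + rho)
        return x

    v = np.array([1.0, -2.0, 3.0, 0.5])
    mask = np.array([True, False, True, True])   # False marks a missing entry
    print(masked_prox_sum_squares(v, mask))      # [0.3333, 0.0, 1.0, 0.1667]

For most losses of interest no such closed form exists, which is why tractable evaluation of masked proximal operators receives dedicated attention in the monograph.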

 