Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees
In: IEEE Transactions on Signal Processing, Vol. 63 (2015-05-01), pp. 2389-2404
Access: Online
Many applications in signal processing benefit from the sparsity of signals in a certain transform domain or dictionary. Synthesis sparsifying dictionaries that are directly adapted to data have been popular in applications such as image denoising, inpainting, and medical image reconstruction. In this work, we focus instead on the sparsifying transform model, and study the learning of well-conditioned square sparsifying transforms. The proposed algorithms alternate between an ℓ0 "norm"-based sparse coding step and a non-convex transform update step. We derive the exact analytical solution for each of these steps. The proposed solution for the transform update step achieves the global minimum in that step, and also provides speedups over iterative solutions involving conjugate gradients. We establish that our alternating algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, the algorithms are insensitive to initialization. We present results illustrating the promising performance and significant speedups of transform learning over synthesis K-SVD in image denoising.

The transform model is not only more general in its modeling capabilities than the analysis models; it is also much more efficient and scalable than both the synthesis and noisy signal analysis models. We briefly review the main distinctions between these sparse models (cf. (5) for a more detailed review, and for the relevant references) in this and the following paragraphs. One key difference is in the process of finding a sparse representation for data given the model, or dictionary. For the transform model, given the signal y and transform W, the transform sparse coding problem (5) minimizes ‖Wy − x‖₂² subject to ‖x‖₀ ≤ s, where s is a given sparsity level. The solution is obtained exactly and cheaply by zeroing out all but the s coefficients of largest magnitude in Wy. In contrast, for the synthesis or noisy analysis models, the process of sparse coding is NP-hard (non-deterministic polynomial-time hard). While some of the approximate algorithms that have been proposed for synthesis or analysis sparse coding are guaranteed to provide the correct solution under certain conditions, in applications, especially those involving learning the models from data, these conditions are often violated. Moreover, the various synthesis and analysis sparse coding algorithms tend to be computationally expensive for large-scale problems.

Recently, the data-driven adaptation of sparse models has received much interest. The adaptation of synthesis dictionaries based on training signals (6)-(12) has been shown to be useful in various applications (13)-(15). The learning of analysis dictionaries, employing either the analysis model or its noisy signal extension, has also received some recent attention (3), (16)-(18).
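The sparse coding step described above reduces to simple hard thresholding: keep the s largest-magnitude entries of Wy and zero out the rest. Below is a minimal NumPy sketch of that step; the function name `transform_sparse_code` and the test setup are illustrative, not taken from the paper.

```python
import numpy as np

def transform_sparse_code(W, y, s):
    """Solve: minimize ||W y - x||_2^2 subject to ||x||_0 <= s.
    The exact minimizer keeps the s largest-magnitude coefficients
    of the transformed signal W y and zeroes out the rest."""
    z = W @ y                                # transform coefficients
    x = np.zeros_like(z)
    if s > 0:
        keep = np.argsort(np.abs(z))[-s:]    # indices of the s largest magnitudes
        x[keep] = z[keep]
    return x

# Illustrative usage with a well-conditioned (here, orthonormal) transform.
rng = np.random.default_rng(0)
W = np.linalg.qr(rng.standard_normal((16, 16)))[0]
y = rng.standard_normal(16)
x = transform_sparse_code(W, y, s=4)
print(np.count_nonzero(x))  # 4
```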
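For the transform update step, the text above only states that an exact analytical solution exists and attains the global minimum in that step. As an illustration, the sketch below computes the closed-form minimizer of one standard well-conditioned transform learning objective, ‖WY − X‖_F² + λ‖W‖_F² − λ log|det W|; the equal weighting of the two regularizer terms, the parameter names, and the alternation loop are assumptions for this sketch, not details given in this record.

```python
import numpy as np

def transform_update(Y, X, lam):
    """Closed-form minimizer of the ASSUMED objective
        ||W Y - X||_F^2 + lam ||W||_F^2 - lam log|det W|.
    Setting the gradient to zero and factoring Y Y^T + lam I = L L^T,
    with the full SVD  L^{-1} Y X^T = Q Sigma R^T,  yields
        W = 0.5 R (Sigma + (Sigma^2 + 2 lam I)^{1/2}) Q^T L^{-1}.
    """
    n = Y.shape[0]
    L = np.linalg.cholesky(Y @ Y.T + lam * np.eye(n))
    Linv = np.linalg.inv(L)
    Q, sig, Rt = np.linalg.svd(Linv @ Y @ X.T)          # Rt is R^T
    D = np.diag(sig + np.sqrt(sig ** 2 + 2.0 * lam))
    return 0.5 * Rt.T @ D @ Q.T @ Linv

# One illustrative alternation between the two steps
# (uses transform_sparse_code from the sketch above).
rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 200))   # training signals as columns
W = np.eye(8)
for _ in range(10):
    X = np.column_stack([transform_sparse_code(W, y, s=2) for y in Y.T])
    W = transform_update(Y, X, lam=1.0)
```

The positive root Sigma + (Sigma² + 2λI)^{1/2} keeps all singular values of W strictly positive, so the updated transform stays nonsingular at every iteration.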
Title: | Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees
---|---
Author(s) / Contributor(s): | Bresler, Yoram; Ravishankar, Saiprasad
Journal: | IEEE Transactions on Signal Processing, Vol. 63 (2015-05-01), pp. 2389-2404
Publication: | Institute of Electrical and Electronics Engineers (IEEE), 2015
Media type: | unknown
ISSN: | 1053-587X (print); 1941-0476 (electronic)
DOI: | 10.1109/tsp.2015.2405503