Course Details

Lecturer: Raja Giryes

Time: Mondays 18:00-20:00, winter semester 2016/2017

Location: Room 102, Kitot Handasa.

Literature: Recently published papers and the book “Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing” by Prof. Michael Elad (available at the library).

Exam (mandatory): 23.2.2017.

Project Presentation Day (mandatory): 27.2.2017, 15:00-20:00, Room 011, Kitot Bldg.

 

Grading

  • 21% – homework
  • 30% – project seminar
  • 20% – project report
  • 29% – exam

Course Requirements

  • The course has a regular lecture format (the lecturer gives all the talks).
  • There will be three homework assignments, to be submitted in pairs (or individually). These assignments will concentrate on Matlab implementations of algorithms discussed in class.
  • The course requirements include a final project, also carried out in pairs, based on 1-3 recently published papers. See the instructions on the Moodle. The project will include:
    • A final report (10-20 pages) summarizing the papers, their contributions, and your own findings (open questions, simulation results, etc.).
    • A PowerPoint presentation of the project in a mini-workshop that we will organize at the end of the semester.
  • The course includes a final exam with ~20 brief questions to assess your general knowledge of the course material.

Course Structure

Topics covered, by lecture:

Lecture 1
  • General introduction to the course. Slides
  • Chapter 1: Proving the tendency of L1 to sparsify.

Lecture 2
  • Chapter 1: Lp and L0, and the definition of P_0 as our ideal goal.
  • Chapter 2: Uniqueness of sparse solutions; definition of the mutual coherence.
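
A minimal Matlab sketch of the mutual coherence (the largest absolute normalized inner product between two distinct atoms); the function name mutual_coherence is ours, and the dictionary A is assumed to hold its atoms as columns:

    % Mutual coherence of a dictionary A (atoms = columns).
    function mu = mutual_coherence(A)
        An = A * diag(1 ./ sqrt(sum(A.^2, 1)));  % normalize each atom
        G  = abs(An' * An);                      % absolute Gram matrix
        G(1:size(G,1)+1:end) = 0;                % zero the diagonal (self-products)
        mu = max(G(:));
    end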

Lecture 3
  • Chapter 2: Mutual coherence and spark and their relation; uniqueness of the sparsest solution of a linear system.
  • Chapter 3: The thresholding algorithm.
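
As a sketch (not the book's code), the thresholding algorithm in Matlab, assuming unit-norm atoms and a known sparsity level k; the name thr is ours:

    % Thresholding: estimate the support from the k largest correlations
    % |A'*b|, then least-squares fit the signal on that support.
    function x = thr(A, b, k)
        [~, idx] = sort(abs(A' * b), 'descend');
        S = idx(1:k);                   % support estimate
        x = zeros(size(A, 2), 1);
        x(S) = A(:, S) \ b;             % projection onto the chosen atoms
    end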

Lecture 4
  • Chapter 2: Constructing Grassmannian frames [code]
  • Chapter 3: Introducing pursuit algorithms – greedy and relaxation methods; exponential convergence of the residual for MP.
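
For concreteness, a minimal OMP sketch in Matlab (assuming unit-norm atoms and a fixed number of iterations k; the function name omp is ours):

    % Orthogonal Matching Pursuit: greedily add the atom most correlated
    % with the residual, then re-fit on all atoms chosen so far.
    function x = omp(A, b, k)
        x = zeros(size(A, 2), 1);
        r = b;                          % residual
        S = [];                         % current support
        for t = 1:k
            [~, j] = max(abs(A' * r));  % best-matching atom
            S = union(S, j);
            xS = A(:, S) \ b;           % least-squares fit on the support
            r  = b - A(:, S) * xS;      % update the residual
        end
        x(S) = xS;
    end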

Lecture 5
  • Chapter 4: Pursuit performance – equivalence theorems for OMP and THR.
  • Proving the success of BP in the noiseless case; the role of the sign pattern.
  • [HW1: Inpainting by Greedy Algorithms]
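
Basis pursuit (min ||x||_1 s.t. Ax = b) can be recast as a linear program by splitting x into nonnegative parts, x = u - v. A sketch using Matlab's linprog (requires the Optimization Toolbox); the function name basis_pursuit is ours:

    % Basis pursuit as a linear program over z = [u; v] >= 0.
    function x = basis_pursuit(A, b)
        n   = size(A, 2);
        f   = ones(2*n, 1);        % objective: sum(u) + sum(v) = ||x||_1
        Aeq = [A, -A];             % equality constraint: A*(u - v) = b
        lb  = zeros(2*n, 1);       % u, v >= 0
        z   = linprog(f, [], [], Aeq, b, lb, []);
        x   = z(1:n) - z(n+1:end);
    end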

Lecture 6
  • Chapter 5: Introducing P_0^eps, which accounts for noise – uniqueness versus equivalence; the RIP and its relation to the spark and the mutual coherence; stability of P_0^eps; pursuit algorithms for the noisy case (greedy, IRLS, LARS); stability of BP.
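
One of these solvers can be sketched compactly: FOCUSS-style IRLS for min ||x||_1 s.t. Ax = b replaces the L1 problem with a sequence of weighted least-squares problems. A Matlab sketch, assuming A has full row rank (the name irls_l1 and the fixed iteration count are ours):

    % IRLS: each iterate solves min x'*inv(W)*x s.t. A*x = b, with
    % weights W taken from the previous iterate.
    function x = irls_l1(A, b, iters)
        x = A' * ((A * A') \ b);        % start from the minimum-norm solution
        for t = 1:iters
            W = diag(abs(x) + 1e-8);    % small constant avoids division by zero
            x = W * A' * ((A * W * A') \ b);
        end
    end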

Lecture 7
  • Chapter 6: LARS; pursuit in the unitary case; iterative shrinkage algorithms.
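
A minimal iterative-shrinkage (ISTA) sketch for min 0.5*||Ax - b||^2 + lambda*||x||_1; the names ista and soft are ours, and the step size uses the crude bound L = ||A||^2:

    % ISTA: a gradient step on the quadratic term followed by
    % soft thresholding.
    function x = ista(A, b, lambda, iters)
        L = norm(A)^2;                  % Lipschitz constant of the gradient
        x = zeros(size(A, 2), 1);
        for t = 1:iters
            g = A' * (A * x - b);       % gradient of the data-fidelity term
            x = soft(x - g / L, lambda / L);
        end
    end

    function y = soft(x, T)             % soft-thresholding operator
        y = sign(x) .* max(abs(x) - T, 0);
    end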

Lecture 8
  • Chapter 9: Priors and their role in signal and image processing; the Sparse-Land model and its use; the role of P_0^eps in various signal processing tasks. Slides by Michael Elad
  • [HW2: Iterative-Shrinkage]

Lecture 9
  • The low-rank model [1]
  • Chapter 9: The co-sparse analysis model. Slides

Lecture 10
  • Chapter 12: Dictionary learning – using existing transforms, learning a dictionary, the MOD and K-SVD algorithms and their modifications; double sparsity; learning unitary transforms. Slides by Michael Elad
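
A sketch of a single MOD iteration (sparse coding with the omp sketch from Lecture 4, followed by a least-squares dictionary update); the function name mod_step is ours:

    % MOD: sparse-code all training signals Y with the current dictionary
    % D, then update the whole dictionary in one least-squares step.
    function D = mod_step(D, Y, k)
        X = zeros(size(D, 2), size(Y, 2));
        for i = 1:size(Y, 2)
            X(:, i) = omp(D, Y(:, i), k);          % per-signal sparse coding
        end
        D = Y * pinv(X);                           % least-squares update
        D = D * diag(1 ./ sqrt(sum(D.^2, 1)));     % re-normalize the atoms
    end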

Lecture 11
  • Task-driven dictionary learning [2]

Lecture 12
  • Chapter 14: Image denoising – global thresholding, Guleryuz's patch-based processing, the Shaked-Hel-Or scheme, K-SVD denoising.

Lecture 13
  • Poisson denoising. Slides
  • Recent advances in sparse representations.

Course Description

In the field of signal and image processing there is a fascinating new arena of research that has drawn a lot of interest in the past ~15 years, dealing with sparse and redundant representations. One can regard this branch of activity as a natural continuation of the vast activity on wavelet theory, which thrived in the 1990s. Another perspective – the one we shall adopt in this course – is to consider this developing field as the emergence of a highly effective model for data that extends and generalizes previous models. As models play a central role in practically every task in signal and image processing, the effect of the new model is far reaching. The core idea in sparse representation theory is the use of a novel redundant transform, in which the number of representation coefficients is larger than the signal’s original dimension. Alongside this “waste” in the representation comes a fascinating new opportunity – seeking the sparsest possible representation, i.e., the one with the fewest non-zeros. This idea leads to a long series of beautiful (and, surprisingly, solvable) theoretical and numerical problems, and to many applications that can benefit directly from the newly developed theory. In this course we will survey this field, starting with the theoretical foundations and systematically covering the knowledge that has been gathered in recent years. The course will touch on theory, numerical algorithms, and applications in image processing.
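
As a toy illustration of this idea (all sizes and names below are ours, and omp is the sketch from the course-structure table above): draw a random redundant dictionary, synthesize a signal with a k-sparse representation, and recover that representation by pursuit. For small k and a random A, OMP typically succeeds.

    m = 30; n = 60; k = 4;                 % signal dim, #atoms, sparsity
    A = randn(m, n);
    A = A * diag(1 ./ sqrt(sum(A.^2)));    % unit-norm atoms
    x0 = zeros(n, 1);
    x0(randperm(n, k)) = randn(k, 1);      % a k-sparse representation
    b = A * x0;                            % the observed signal
    x = omp(A, b, k);                      % seek the sparsest representation
    fprintf('recovery error: %g\n', norm(x - x0));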