Course Schedule

Class number  Topic  Speaker(s)  Presentation

#1 Introduction to deep learning Raja Giryes  Slides
#2 Neural network components Raja Giryes  Slides
#3 Deep learning optimization Raja Giryes
#4 Applications Raja Giryes
#5 Applications Raja Giryes
#6-1  Semantic Segmentation [1] Itai Horev and Tom Tirer  Slides
#6-2  Semantic Segmentation [2]  Jenny Zuckerman and Sivan Doveh  Slides
#6-3  Semantic Segmentation [4][5]  Iris Tal and Aviram Bar Haim  Slides
#7-1  Object Detection [6]  Shir Chorev and Idan Geller  Slides
#7-2  Object Detection [7][8]  Eyal Bar-Shalom and Avi Resler  Slides
#7-3  Object Detection [9][10]  Nezah Kalamaro and Gilad Uziel  Slides
#8-1  Object Detection [11]  Hadar Shreiber and Lital Eligon  Slides
#8-2  CRF for segmentation [12]  Roy Eisener and Tamar Weiser  Slides
#8-3  CRF for segmentation [13]  Guy Tevet and Hen Punchek  Slides
#9-1  Speech recognition [14][15]  Amnon Drory and Matan Karo  Slides
#9-2  Speech identification [16]  Sivan Niv and Omer Shmueli  Slides
#9-3  Wavenet [17]  Ido Guy and Daniel Brodesky  Slides
#10-1  Machine translation [18][19]  Michael Zolotov and Maxim Fumin  Slides
#10-2  Machine translation [20][21]  Oshrat Bar and Hadar Cohen Dvik  Slides
#10-3  Diagram description [22][23]  Alexander Breverman and Lior Carmi
#11-1  VAE [24]  Yotam Nahmias and Elad Reichman
#11-2  GAN [25][26][27]  Moran Rubin and Omer Shtein  Slides
#11-3  GAN [28][29]  Nimrod Gilboa and Uriah Paso  Slides
#12-1  Capsule networks [30]  Elran Gamzo and Daniel Kigli
#12-2  Style transfer [31]  Ben Fishman and Nir Dgani
#12-3  Geometric Deep Learning [32]  Almog Luz and Boaz Menis
#13-1  3D-shapes processing [33][34]  Maxim Roshior and Ilya Nelkenbaum
#13-2  Object annotation [35][36]  Roman Bodilovsky and Amit Bekerman
#13-3  Reinforcement learning [37]  Sapir Ben Yakar and Ofir Navati

Course Details

Lecturer: Raja Giryes

Time: Tuesdays 18:00-20:00, winter semester 2017/2018

Location: Room 134, Wolfson Bldg.

Project Presentation Day (mandatory): 30.1.2018, 9:00-13:30 (morning session) and 15:00-19:30 (evening session), Room 011, Kitot Bldg.

Literature: Recently published papers about deep learning.

The following material may be helpful:

Course Description

The past five years have seen a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for feature learning and classification. This course will survey recent advances in deep learning. It will be given in seminar fashion: the first five lectures will be given by the course lecturer, and the rest will be given by the students. This course is participating in the ICLR reproducibility challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html).

Course Requirements

  • The first five lectures are given by the course lecturer.
  • For the other lectures: each week, three pairs of students will present papers in the field of deep learning. Each pair will be given 30 minutes for presentation, with an additional 5 minutes for discussion with the class.
  • Each student is required to complete the Udacity deep learning course (https://www.udacity.com/course/deep-learning--ud730), which teaches how to use TensorFlow. Self-learning ability is expected from the students.
  • The course requirements also include a final project, performed in pairs, based on 1-3 recently published papers or a paper under ICLR review (https://openreview.net/group?id=ICLR.cc/2018/Conference). The project will include:
    • A final report (10-20 pages) summarizing these papers, their contributions, and your own findings (open questions, simulation results, etc.).
    • A PowerPoint presentation of the project in a mini-workshop that we will organize at the end of the semester.

Grading

  • 35% – project seminar
  • 35% – project report
  • 30% – paper presentation in class

Previous Semester

Course Description

The past five years have seen a dramatic increase in the performance of recognition systems due to the introduction of deep architectures for feature learning and classification. This course will survey recent advances in deep learning. It will be given in seminar fashion: the first lecture will be given by the course lecturers, and the rest will be given by the students.

Seminar Schedule

Class number  Topic  Speaker(s)  Presentation

#1-1 Introduction and recent advances in deep learning Shai Avidan  Slides
#1-2 History and Theory of deep learning Raja Giryes  Slides
#2-1 Training neural networks-Backpropagation [1] Ofir Liba, Michael Kotlyar  Slides
#2-2 Training techniques for deep neural networks [2], [3] Tal Shapira, Yuval Yaakovi  Slides
#3-1 Stochastic gradient descent (SGD) + path-SGD [4],[5] Tommy Mordo, Adi Kadoche  Slides
#3-2 The dropout technique and other regularizations for deep neural networks [6],[7]  Lior Fritz, Itzik Avital  Slides
#4-1  Residual networks and batch normalization [8],[9]  Amir Azulai, Ittay Madar  Slides
#4-2  The triplet loss and face recognition [10],[11] Tal Perl, Lihi Shiloh Slides
#5-1 Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) machines [12] Michael Green, Shaked Rose Slides
#5-2  LSTM for translation and image captioning [13],[14] Oran Gafni, Noa Yedidia Slides
#6-1  Deep Reinforcement Learning [15], [16] Dor Bank, Eyal Weiss Slides
#6-2  Transfer learning and style transfer in deep learning [17],[18],[19] Gal Barzilai, Ram Mhalb Slides
#7-1  Representation learning and autoencoders [20]  Roee Levi, Aviv Zahroni Slides
#7-2  Unsupervised deep learning [21],[22] Erez Aharonov, Eilon Noam Slides
#8-1  Mimicking deep neural networks with shallow and narrow networks [23],[24],[25] Ishay Be’eri, Elad Knoll Slides
#8-2  Deep Compression – compressing neural networks [26],[27],[28] Shlomi Bugdary, Shimon Akrish Slides
#9-1  Region-based convolutional neural networks (R-CNN) [29],[30],[31],[32] Eyal Gilad, Eran Ashkenazi Slides
#9-2  Object detection and dense captioning [33],[34] Dana Berman, Guy Leibovitz Slides
#10-1  CRF for semantic segmentation [35],[36],[37]  Adi Hayat, Asaf Bar Zvi Slides
#10-2  Multitask and multi scale in deep learning for semantic segmentation [41],[42] Amit Nativ, Yotam Gil Slides
#11-1  Deep learning for video classification and action recognition [38],[39],[40] Gil Sharon, Natalie Carlebach Slides
#11-2  Generative adversarial nets [43],[44],[45]  Ziv Freund, Shai Elmalem Slides
#12-1  Deep learning on graphs [46],[47] Asher Kabakovitch, Itay Boneh Slides1, Slides2
#12-2  Binary Deep Learning [48],[49] Roey Nagar, Kostya Berestyshevsky Slides
#13-1  Deep learning and random forests [52],[53] Or Gordisky, Meir Dalal Slides
#13-2  Deep Learning for 3D shapes and point sets [50],[51] Robbie Galfrin, Alon Wander  N/A

Course Requirements

  • The first lecture is given by the lecturers.
  • For the other lectures: each week, two pairs of students will present papers in the field of deep learning. Each pair will be given 40 minutes for presentation, with an additional 10 minutes for discussion with the class.
  • Each student is required to complete the Udacity deep learning course (https://www.udacity.com/course/deep-learning--ud730), which teaches how to use TensorFlow. Self-learning ability is expected from the students.
  • The course requirements also include a final project, performed in pairs, based on 1-3 recently published papers. The project will include:
    • A final report (10-20 pages) summarizing these papers, their contributions, and your own findings (open questions, simulation results, etc.).
    • A PowerPoint presentation of the project in a mini-workshop that we will organize at the end of the semester.

Grading

  • 35% – project seminar
  • 35% – project report
  • 30% – paper presentation in class