Keynotes

Dr. Ian Goodfellow

Staff Research Scientist

Google Brain, USA

Generative Adversarial Networks

Bio: Ian Goodfellow is a staff research scientist at Google Brain. He leads a group of researchers studying adversarial techniques in AI. He developed the first defenses against adversarial examples, was among the first to study the security and privacy of neural networks, and helped to popularize the field of machine learning security and privacy. He is the lead author of the MIT Press textbook Deep Learning (www.deeplearningbook.org). Previously, Ian worked at OpenAI and Willow Garage, and studied with Andrew Ng and Gary Bradski at Stanford University, and with Yoshua Bengio and Aaron Courville at Université de Montréal. In 2017, Ian was listed among MIT Technology Review's 35 Innovators under 35, recognizing his invention of generative adversarial networks.

Abstract: This presentation reviews key theoretical principles of GAN learning, along with applications, experimentation tips, lessons learned, and potential future improvements to GANs. The presentation will not focus directly on perception beyond the visible spectrum, but instead lays the groundwork for talks later in the workshop that apply GANs to solving problems in PBVS.

Dr. Dimitris G. Manolakis

Senior Staff Member

MIT Lincoln Laboratory, USA

Hyperspectral Imaging Remote Sensing: Progress and Challenges

Bio: Dimitris G. Manolakis is currently a senior staff member at MIT Lincoln Laboratory, Lexington, MA, USA. Dr. Manolakis has taught at the University of Athens, Northeastern University, Boston College, and Worcester Polytechnic Institute. He is a coauthor of the textbooks Digital Signal Processing: Principles, Algorithms, and Applications (Prentice-Hall, 2006, 4th ed.), Statistical and Adaptive Signal Processing (Artech House, 2005), Applied Digital Signal Processing (Cambridge University Press, 2011), and Hyperspectral Imaging Remote Sensing (Cambridge University Press, 2016). He is an IEEE Fellow and in 2013 he received the IEEE Signal Processing Society Education Award.

Abstract: Hyperspectral imaging applications are many and span civil, environmental, and military needs. Typical examples include the detection of specific terrain features and vegetation, mineral, or soil types for resource management; detecting and characterizing materials, surfaces, or paints; the detection of man-made materials in natural backgrounds for the purpose of search and rescue; the detection of specific plant species for the purposes of counter-narcotics; and the detection of military vehicles for the purpose of defense and intelligence. The objective of this presentation is to discuss the state of hyperspectral imaging systems that operate in the reflective and emissive regions of the spectrum and the algorithms used for data exploitation. The algorithms used in the visible-to-near-infrared and short-wave infrared (VNIR-SWIR) might be used in the long-wave infrared (LWIR) spectrum for the detection of solid-phase and gas-phase targets; however, the phenomenology and the corresponding signal models are quite different. Affordable technology and advanced algorithms have led to mature hyperspectral systems for automated, real-time target detection in the VNIR-SWIR; similar efforts have started in the LWIR. However, evolving applications put more demands on sensors and algorithms. To address these challenges, we must understand and exploit the strong couplings among the underlying phenomenology, the theoretical framework for algorithm development and analysis, and the requirements of practical applications. Some of these key issues will be addressed in the presentation in the context of real-world applications.

Prof. Sabine Süsstrunk

Professor

EPFL, Switzerland

RGB+: Using Near-Infrared (NIR) to improve Computational Photography Applications

Bio: Sabine Süsstrunk is a full professor in the School of Information and Communication Sciences (IC) at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where she has led the Images and Visual Representation Lab since 1999. Her research areas are computational photography, color computer vision and color image processing, image quality, and computational aesthetics. She has published over 150 scientific papers, of which 7 have received best paper/demo awards (ACM Multimedia 2010, IS&T CIC 2012, IEEE ICIP 2013, etc.), and holds 10 patents. In 2013, she received the IS&T/SPIE Electronic Imaging Scientist of the Year Award. She is a Fellow of IEEE and IS&T.

Abstract: Conventional digital cameras exhibit a number of limitations that computational photography systems try to overcome. For example, disambiguating how much the illuminant(s) and the object reflectance each contribute to a pixel value is mathematically ill-posed. Given how most modern cameras capture images, blur and limited depth of field may also introduce unwanted artifacts. To address these limitations, experts have proposed modified hardware, smart algorithms using priors, and (deep) machine learning approaches. In our research, we use "extra information" in the form of near-infrared (NIR), the wavelength range adjacent to the visible spectrum that is easily captured by conventional silicon sensors. Capturing NIR can improve computational photography tasks such as dehazing, white balancing, shadow detection, deblurring, and depth-of-field extension, as well as computer vision applications such as detection and classification.