Plenary Talks


Signal analysis and machine learning in semiconductor manufacturing - Current applications and future directions

Prasanna Mulgaonkar and Peter Raulefs
AI Lab, Intel Corporation, Santa Clara.
http://www.intel.com/research/machine_learning.htm

 Download handout 1: Prasanna Mulgaonkar
 Download handout 2: Peter Raulefs

Abstract: One day in the not-too-distant future, computers might thwart a network attack by spreading news of suspicious activity more rapidly than a worm or virus could propagate. Digital cameras could learn to automatically recognize your family, and interact with your PC to properly catalogue the photographs when you download the images. An elderly parent may be able to continue living in her own home rather than a nursing home, thanks to a sensor network that monitors her daily activities, provides help when needed, and alerts a close relative when it notices a significant change in her level of functioning.

Research into machine learning, which could enable these scenarios and more, is now underway within Intel. Machine learning refers to the ability of a computer to process a range of data, from numbers and text to audio and visual data, and extract and analyse underlying patterns, using statistical algorithms. The goal is to structure volumes of data in order to make useful decisions, and in some cases, act on those decisions.

Semiconductor manufacturing faces increasing challenges as device scaling and circuit complexity drive high-volume manufacturing of nanoscale structures, creating unprecedented needs in fault detection and diagnosis and in process and production control. As manufacturing processes extend over more than a thousand complex operations, machine-learning techniques are critical for constructing and dynamically updating models for diagnostic and predictive inference from sensor, metrology, and test data. Signal analysis and statistical machine learning enable breakthroughs in yield improvement, variability reduction, and process control optimised for product performance. Machine learning also plays a crucial role in dynamically planning and scheduling production flow in the face of fluctuating process performance and market demand. Using examples from our experience at Intel, we discuss the current state of these applications and future directions.
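
One common building block behind fault detection from tool sensor data is multivariate statistical monitoring. The following sketch is purely illustrative and is not the system described in the talk: it fits a baseline to in-control runs and flags a new run whose Hotelling T^2 distance from that baseline exceeds a control limit. The variable names, toy data, and threshold are all assumptions made for the example.

    # Illustrative only: Hotelling T^2 monitoring of per-run sensor summaries.
    import numpy as np

    def fit_baseline(X):
        """X: (n_runs, n_sensors) array of in-control sensor summaries."""
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        return mu, cov_inv

    def t_squared(x, mu, cov_inv):
        """Hotelling T^2 distance of one run from the baseline."""
        d = x - mu
        return float(d @ cov_inv @ d)

    # Toy usage: 200 in-control runs, 5 sensor summaries each.
    rng = np.random.default_rng(0)
    baseline = rng.normal(size=(200, 5))
    mu, cov_inv = fit_baseline(baseline)

    new_run = rng.normal(size=5) + np.array([0.0, 0.0, 3.0, 0.0, 0.0])  # drift on sensor 3
    limit = 20.0  # in practice, set from an F-distribution quantile or held-out runs
    if t_squared(new_run, mu, cov_inv) > limit:
        print("flag run for fault diagnosis")

In a production setting a statistic of this kind would typically be tracked per tool and per operation, with the baseline re-estimated as the process drifts.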

Supervised and Unsupervised Learning with Energy-Based Models

Yann LeCun
Courant Institute of Mathematical Sciences, New York University
http://yann.lecun.com

Abstract: Energy-Based Models (EBMs) capture dependencies between variables by associating a scalar energy to each configuration of those variables. Given a set of observed variables X (e.g. an image) and a set of variables to be predicted Y (e.g. the label of the object in the image), making a decision consists in finding a value of Y that minimizes the energy function E(Y,X). Training an EBM consists in finding, by minimizing a loss functional, an energy function that assigns low energies to configurations of X and Y that are compatible (e.g. an object image and the corresponding object label), and high energies to incompatible configurations. We discuss conditions that the loss functional must satisfy so that its minimization will cause the machine to approach the desired behaviour.

The main advantage of EBMs over traditional probabilistic approaches is that there is no need to estimate normalization terms that may be intractable. Moreover, the absence of normalization gives us considerable freedom in the parameterisation of the energy.
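
As a concrete illustration of the recipe above, the sketch below implements a tiny EBM with energy E(Y,X) = -w_Y . X over a small discrete label set, inference by exhaustive minimisation over Y, and training with a simple margin (hinge-type) loss that pushes the energy of the correct label below that of the best competing label. The architecture and loss are assumptions chosen for brevity; they are not the specific models described in the talk.

    # Illustrative only: a minimal energy-based model trained with a margin loss.
    import numpy as np

    class TinyEBM:
        def __init__(self, n_features, n_labels, lr=0.1):
            self.W = np.zeros((n_labels, n_features))
            self.lr = lr

        def energy(self, y, x):
            return -self.W[y] @ x               # low energy = compatible (Y, X)

        def predict(self, x):
            return int(np.argmin(-self.W @ x))  # decision: argmin over Y of E(Y, X)

        def update(self, x, y_true, margin=1.0):
            # Margin loss: require E(y_true, x) + margin <= E(y_bad, x),
            # where y_bad is the lowest-energy incorrect label.
            energies = -self.W @ x
            energies[y_true] = np.inf
            y_bad = int(np.argmin(energies))
            if self.energy(y_true, x) + margin > self.energy(y_bad, x):
                self.W[y_true] += self.lr * x   # push down E(y_true, x)
                self.W[y_bad] -= self.lr * x    # pull up E(y_bad, x)

    # Toy usage on two linearly separable classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    model = TinyEBM(n_features=2, n_labels=2)
    for _ in range(10):
        for xi, yi in zip(X, y):
            model.update(xi, yi)
    print(sum(model.predict(xi) == yi for xi, yi in zip(X, y)), "/ 100 correct")

Because prediction only requires comparing energies, no normalisation over Y is ever computed, which is the freedom the preceding paragraph refers to.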

Several applications of this framework will be described, including: a real-time system for simultaneously detecting human faces in images and estimating their pose; a face verification system based on a trainable similarity metric; a learning method for mapping high-dimensional data to low-dimensional manifolds with invariance properties; a method for segmenting biological images; and an unsupervised method for learning sparse-overcomplete feature representations for object recognition.

 Download handout

Sparse Representations in Biology and Signal Processing

Barak Pearlmutter
Hamilton Institute, NUI Maynooth.
http://hamilton.may.ie/barak_pearlmutter.htm

Abstract: A striking feature of many sensory processing problems is that there appear to be many more neurons engaged in the internal representations of the signal than in its transduction. For example, humans have about 30,000 cochlear neurons, but at least a thousand times as many neurons in the auditory cortex. Such apparently redundant internal representations have sometimes been proposed as necessary to overcome neuronal noise. We instead posit that they directly subserve computations of interest. We first review how sparse overcomplete linear representations can be used for source separation, taking as an example a particularly difficult case: monaural separation using the HRTF (the differential filtering imposed on a source by the path from its origin to the cochlea) as the sole cue. We then (a) show how the approach naturally generalises to a wide variety of cues, (b) explore some robust and generic predictions about neuronal representations that follow from taking sparse linear representations as a model of neuronal sensory processing, and (c) discuss a novel approach to estimating the required overcomplete signal dictionaries from data.
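
As a small illustration of the kind of computation involved, the sketch below infers a sparse code s for a signal x in an overcomplete dictionary D (more atoms than signal dimensions) by minimising (1/2)||x - D s||^2 + lambda ||s||_1 with ISTA (iterative shrinkage-thresholding). The dictionary here is random and fixed; learning the dictionary from data, as discussed in the talk, is a separate problem. All names and parameters are assumptions made for the example.

    # Illustrative only: sparse coding in an overcomplete dictionary via ISTA.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(x, D, lam=0.1, n_iter=200):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        s = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ s - x)
            s = soft_threshold(s - grad / L, lam / L)
        return s

    # Toy usage: 20-dimensional signal, 4x overcomplete dictionary, 3 active atoms.
    rng = np.random.default_rng(1)
    D = rng.normal(size=(20, 80))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    s_true = np.zeros(80)
    s_true[rng.choice(80, 3, replace=False)] = rng.normal(size=3)
    x = D @ s_true
    s_hat = ista(x, D, lam=0.05)
    print("active atoms recovered:", np.count_nonzero(np.abs(s_hat) > 1e-3))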

 Download handout


Speaker biographies

Peter Raulefs, Senior Principal Scientist, Intel Corporation, Santa Clara, CA.

Peter Raulefs is a Senior Principal Scientist and Manager in the Analysis & Control Technology department of Intel's Technology and Manufacturing Group. After starting out as a physicist doing elementary particle physics research, he obtained a PhD in Computer Science from the University of Karlsruhe (Germany), and was an associate and full professor of Computer Science at Bonn, Kaiserslautern, and Dresden universities. After joining FMC Corporation (Santa Clara, California), he led the development of process control systems for chemical factories, and the development of a real-time architecture for a computer-assisted fighter pilot system.

At Intel Corporation in Santa Clara, California, he founded the Analysis & Control Technology department, and leads research and development of engineering systems, statistical computing, and machine learning applications deployed across Intel's factories.

 Download handout 2: Peter Raulefs


Prasanna Mulgaonkar, Research Sector Director, Intel Corporation, Santa Clara, CA.

Dr. Mulgaonkar currently directs Intel's exploratory research activities in the areas of Machine Learning, Machine Vision, and Architectures. He also has an active interest in, and joint responsibility for, exploratory research in advanced wireless information systems. In the area of Machine Learning, the key focus of the research agenda is the development and demonstration of techniques for embedded and real-time learning and inference. In the machine vision space, he is focusing on embedded, statistically robust machine vision techniques. In the area of architectures, he is involved in explorations of log-based architectures and advanced programming paradigms. The wireless agenda looks at systems that can scale to extremely high node densities, and at the management of information flows in mobile environments.

Prior to joining Intel, Dr. Mulgaonkar spent 19 years at SRI International, during which he developed, supervised, and ran a number of research projects including Small Unit Operation Situational Awareness Systems, Flapping-Wing Propulsion Using Electrostrictive Polymer Actuators, and Heel-Strike Generators Using Electrostrictive Polymers. Dr. Mulgaonkar has authored or co-authored more than 50 technical reports and book chapters, and holds three patents. He holds a Bachelor's degree in Electrical Engineering from the Institute of Technology in Kanpur, India, and a Master's degree and Ph.D. in Computer Science and Applications from Virginia Tech. Dr. Mulgaonkar is currently a member of the Army Science Board (ASB) of the Department of the Army. The ASB is the US Army's senior scientific advisory body, advising the Chief of Staff of the Army on technology issues.

Dr. Mulgaonkar's research interests are in wireless information systems, high-level computer vision, robotics, spatial reasoning, and model-based matching.

 Download handout 1: Prasanna Mulgaonkar


Yann LeCun, Courant Institute of Mathematical Sciences, New York University

Yann LeCun received an Electrical Engineer Diploma from Ecole Supérieure d'Ingénieurs en Electrotechnique et Electronique (ESIEE), Paris in 1983, and a PhD in CS from Université Pierre et Marie Curie (Paris) in 1987. After a postdoc at the University of Toronto, he joined AT&T Bell Laboratories in Holmdel, NJ, in 1988, and became head of the Image Processing Research Department at AT&T Labs-Research in 1996. In 2002 he became a Fellow at the NEC Research Institute in Princeton. He has been a professor of computer science at NYU's Courant Institute of Mathematical Sciences since 2003. Yann's research interests include computational and biological models of learning and perception, computer vision, mobile robotics, data compression, digital libraries, and the physical basis of computation. His image compression technology, called DjVu, is used by numerous digital libraries and publishers to distribute scanned documents on-line, and his handwriting recognition technology is used to process a large percentage of bank checks in the US. He has been general chair of the annual Learning at Snowbird workshop since 1997, and program chair of CVPR 2006.

 Download handout


Barak Pearlmutter, Hamilton Institute, NUI Maynooth

Prof. Barak A. Pearlmutter (PhD Computer Science, Carnegie Mellon University) has done both foundational and applied work in machine learning and theoretical neuroscience, and has also worked on programming language design and implementation. His current work focusses on a number of areas: enabling technologies for brain-computer interfaces; sparse representations for sensory processing; and high-performance scientific programming languages that incorporate nestable derivative-taking operators.

 Download handout