2021 Keynote Speakers


Jeff Dean, SVP, Google Research & Google Health

The Potential of Machine Learning for Hardware Design

Monday, December 6, 2021 | 8:45 AM - 9:45 AM

In this talk I'll describe the tremendous progress in machine learning over the last decade, how it has changed the hardware we want to build for such computations, and some areas where machine learning shows potential to help with difficult problems in computer hardware design. I'll also briefly touch on future directions for machine learning and their likely impact.

About

Jeff Dean (ai.google/research/people/jeff) joined Google in 1999 and is currently a Google Senior Fellow and SVP for Google Research and Google Health. His teams are working on systems for speech recognition, computer vision, language understanding, and various other machine learning tasks. He has co-designed/implemented many generations of Google's crawling, indexing, and query serving systems, and co-designed/implemented major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementor of Google's distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems, protocol buffers, the open-source TensorFlow system for machine learning, and a variety of internal and external libraries and developer tools.

Jeff received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on whole-program optimization techniques for object-oriented languages. He received a B.S. in computer science & economics from the University of Minnesota in 1990. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the Association for Computing Machinery (ACM), a Fellow of the American Association for the Advancement of Science (AAAS), and a winner of the 2012 ACM Prize in Computing.


Bill Dally, Chief Scientist, NVIDIA

GPUs, Machine Learning, and EDA

Tuesday, December 7, 2021 | 8:45 AM - 9:45 AM

GPU-accelerated computing and machine learning (ML) have revolutionized computer graphics, computer vision, speech recognition, and natural language processing. We expect ML and GPU-accelerated computing will also transform EDA software and, as a result, chip design workflows. Recent research shows that orders-of-magnitude speedups are possible with accelerated computing platforms and that the combination of GPUs and ML can enable automation of tasks previously seen as intractable or too difficult to automate. This talk will cover near-term applications of GPUs and ML to EDA tools and chip design, as well as a long-term vision of what is possible. The talk will also cover advances in GPUs and ML hardware that are enabling this revolution.

About

Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing, and synchronization technology found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low-overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes Award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor's degree in Electrical Engineering from Virginia Tech, a master's in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He was a cofounder of Velio Communications and Stream Processors.

Mary 'Missy' Cummings, Professor of Computer Engineering/Director of the Humans and Autonomy Laboratory, Duke University

Man vs. Machine or Man + Machine?

Wednesday, December 8, 2021 | 9:00 AM - 9:45 AM

This talk will focus on how to allocate roles and functions between humans and computers in order to design systems that leverage the symbiotic strengths of both. Such collaborative systems should allow humans to harness the raw computational and search power of artificial intelligence, while also allowing them to combat uncertainty with creative, out-of-the-box thinking. Successful systems of the future will be those that combine the human and computer as a team instead of simply replacing humans.

About

Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval officer and military pilot from 1988 to 1999, she was one of the U.S. Navy's first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department, and the Director of the Humans and Autonomy Laboratory. She is an American Institute of Aeronautics and Astronautics (AIAA) Fellow, and a member of several national technology and policy committees. Her research interests include the human role in cybersecurity, human-autonomous system collaboration, explainable artificial intelligence, human-systems engineering, and the ethical and social impact of technology.

Kwabena Boahen, Professor of Bioengineering and Electrical Engineering, Stanford University

The Future of AI Hardware: A 3D Silicon Brain

Thursday, December 9, 2021 | 8:45 AM - 9:45 AM

AI's commercial success in the last decade culminates a shift, made half a century ago, from developing newfangled transistors to miniaturizing integrated circuits in 2D. With billions of richly interacting, mathematically abstracted neurons, deep nets benefited enormously from this paradigm shift. But now communicating a neuron's output uses 1,000× the energy it took to compute it, diminishing the returns of miniaturization and spurring a paradigm shift from shrinking transistors and wires in 2D to stacking them in 3D. While 3D's compactness minimizes data movement, surface area drops drastically, severely constraining heat dissipation. I will show how to satisfy this constraint by exploiting recent advances in neuroscience.

Over the past six decades, conceptions of how a brain computes have evolved from synaptocentric to axocentric to dendrocentric. A synaptocentric network aggregates weighted signals spatially and distributes the nonnegative part. Deep nets realize this concept. An axocentric network filters signals temporally, aggregates the results spatially, integrates over time, and then, when a threshold is reached, distributes a spike. Neuromorphic computing realizes this concept. Finally, a dendrocentric network aggregates signals in a spatiotemporally inseparable fashion and distributes a sequence of spikes produced by an ensemble of neurons. This concept could lead to a silicon brain that, scaling linearly with the number of neurons in energy and heat, like a biological brain, would be thermally viable in three dimensions.
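For readers unfamiliar with these terms, a minimal Python sketch of the first two conceptions may help; it is purely illustrative and not from the talk, and the array sizes and parameter values are arbitrary assumptions. The synaptocentric view corresponds to a ReLU unit, while the axocentric view corresponds to a leaky integrate-and-fire neuron; the dendrocentric case, which treats space and time inseparably, has no comparably simple form.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16))   # synaptic weights: 4 units, 16 inputs (arbitrary sizes)
x = rng.normal(size=16)        # instantaneous input activities

# Synaptocentric (deep-net) view: aggregate weighted signals spatially,
# then distribute only the nonnegative part (a ReLU unit).
y = np.maximum(w @ x, 0.0)

# Axocentric (neuromorphic) view: integrate weighted input over time and
# distribute a spike when a threshold is reached (leaky integrate-and-fire).
def lif(spike_trains, weights, tau=20.0, threshold=1.0, dt=1.0):
    v, out = 0.0, []
    for s in spike_trains:                  # s: input spikes at one time step
        v += dt * (-v / tau + weights @ s)  # leaky temporal integration
        if v >= threshold:                  # threshold reached:
            out.append(1)                   #   distribute a spike
            v = 0.0                         #   and reset
        else:
            out.append(0)
    return out

spikes_in = (rng.random((100, 16)) < 0.05).astype(float)  # sparse random input spikes
spikes_out = lif(spikes_in, rng.random(16) * 0.3)
print(y, sum(spikes_out))
```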

About

Kwabena Boahen (M’89, SM’13, F’16) received the B.S. and M.S.E. degrees in electrical and computer engineering from Johns Hopkins University, Baltimore, MD, both in 1989, and the Ph.D. degree in computation and neural systems from the California Institute of Technology, Pasadena, in 1997. He was on the bioengineering faculty of the University of Pennsylvania from 1997 to 2005, where he held the first Skirkanich Term Junior Chair. He is presently Professor of Bioengineering and Electrical Engineering at Stanford University, with a courtesy appointment in Computer Science. He is also a member of Stanford’s Bio-X and Wu Tsai Neurosciences Institutes. He founded Stanford’s Brains in Silicon lab, which develops silicon integrated circuits that emulate the way neurons compute and computational models that link biophysical neuronal mechanisms to cognitive behavior. His research is interdisciplinary, bringing together the seemingly disparate fields of neurobiology and medicine with electronics and computer science. His scholarship is widely recognized, with over ninety publications to his name, including a Scientific American cover story (May 2005) featuring his group’s work on a silicon retina and a silicon tectum that “wire together” automatically. He has been invited to give over 80 seminar, plenary, and keynote talks, including a 2007 TED talk, “A computer that works like the brain”, which has been viewed half a million times. He has received several distinguished honors, including a Packard Fellowship for Science and Engineering (1999) and a National Institutes of Health Director’s Pioneer Award (2006). He was elected a fellow of the American Institute for Medical and Biological Engineering (2016) and of the Institute of Electrical and Electronics Engineers (2016) in recognition of his group’s work on Neurogrid (2006-12), an iPad-size platform that emulates the cortex in biophysical detail and at functional scale. As this combination hitherto required a supercomputer, Neurogrid revived interest in neuromorphic computing; students from his lab went on to lead the design of IBM’s TrueNorth chip. In his most recent research effort, the Brainstorm Project, he led a multi-university, multi-investigator team to co-design hardware and software that makes neuromorphic computing much easier to apply. A spin-out from his Stanford group, Femtosense Inc. (2018), is commercializing this breakthrough.