The Design Automation Conference (DAC) is the premier event devoted to the design and design automation of electronic chips and systems. DAC focuses on the latest methodologies and technology advancements in electronic design. The 58th DAC will bring together researchers, designers, practitioners, tool developers, students and vendors.

2021 Keynote Speakers

Jeff Dean

The Potential of Machine Learning for Hardware Design

In this talk I'll describe the tremendous progress in machine learning over the last decade, how this progress has changed the hardware we want to build for such computations, and some of the areas where machine learning shows potential for helping with difficult problems in computer hardware design. I'll also briefly touch on some future directions for machine learning and how these might affect hardware design.

Bill Dally

GPUs, Machine Learning, and EDA

GPU-accelerated computing and machine learning (ML) have revolutionized computer graphics, computer vision, speech recognition, and natural language processing. We expect ML and GPU-accelerated computing will also transform EDA software and, as a result, chip design workflows. Recent research shows that orders-of-magnitude speedups are possible with accelerated computing platforms and that the combination of GPUs and ML can enable automation of tasks previously seen as intractable or too difficult to automate. This talk will cover near-term applications of GPUs and ML to EDA tools and chip design, as well as a long-term vision of what is possible. The talk will also cover advances in GPUs and ML hardware that are enabling this revolution.

Mary Missy Cummings

Man vs. Machine or Man + Machine

This talk will focus on how to allocate roles and functions between humans and computers in order to design systems that leverage the symbiotic strengths of both. Such collaborative systems should allow humans to harness the raw computational and search power of artificial intelligence while also combating uncertainty with creative, out-of-the-box thinking. Successful systems of the future will be those that combine human and computer as a team rather than simply replacing humans.

Kwabena Boahen

The Future of AI Hardware: A 3D Silicon Brain

AI's commercial success in the last decade culminates a shift made half a century ago from developing newfangled transistors to miniaturizing integrated circuits in 2D. With billions of richly interacting, mathematically abstracted neurons, deep nets benefited enormously from this paradigm shift. But now communicating a neuron's output uses 1,000× the energy it took to compute it, diminishing the returns of miniaturization and spurring a paradigm shift from shrinking transistors and wires in 2D to stacking them in 3D. While 3D's compactness minimizes data movement, surface area drops drastically, severely constraining heat dissipation. I will show how to satisfy this constraint by exploiting recent advances in neuroscience.

Over the past six decades, conceptions of how a brain computes have evolved from synaptocentric to axocentric to dendrocentric. A synaptocentric network aggregates weighted signals spatially and distributes the nonnegative part. Deep nets realize this concept. An axocentric network filters signals temporally, aggregates the results spatially, integrates over time, and then, when threshold is reached, distributes a spike. Neuromorphic computing realizes this concept. Finally, a dendrocentric network aggregates signals in a spatiotemporally inseparable fashion and distributes a sequence of spikes produced by an ensemble of neurons. This concept could lead to a silicon brain that, scaling linearly with the number of neurons in energy and heat, like a biological brain, would be thermally viable in three dimensions.
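
The contrast between the first two of these models is easier to see in a small sketch. Below is a minimal, illustrative example (not taken from the talk; the weights, time constant, and threshold are arbitrary assumptions) of a synaptocentric unit as a weighted sum passed through a nonnegative (ReLU) output, alongside an axocentric unit as a leaky integrate-and-fire neuron that emits a spike each time its threshold is reached.

    # Illustrative sketch only (not from the talk); parameter values are arbitrary.
    import numpy as np

    def synaptocentric(x, w):
        # Deep-net style unit: aggregate weighted inputs spatially and
        # distribute only the nonnegative part (a ReLU).
        return max(0.0, float(np.dot(w, x)))

    def axocentric(inputs_over_time, w, tau=20.0, threshold=1.0, dt=1.0):
        # Spiking unit: aggregate weighted inputs, integrate over time with a
        # leak, and distribute a spike whenever the threshold is reached.
        v, spikes = 0.0, []
        for x in inputs_over_time:
            v += dt * (-v / tau + float(np.dot(w, x)))   # leaky integration
            if v >= threshold:
                spikes.append(1)
                v = 0.0                                   # reset after spiking
            else:
                spikes.append(0)
        return spikes

    w = np.array([0.4, -0.2, 0.7])
    print(synaptocentric(np.array([1.0, 2.0, 0.5]), w))      # 0.35
    print(axocentric([np.array([1.0, 1.0, 1.0])] * 10, w))   # spike train over 10 steps

The dendrocentric model described above has no equally simple one-line analogue, which is part of the talk's point: its spatiotemporally inseparable aggregation is what the proposed 3D silicon brain would exploit.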

2021 SkyTalk Speakers

William Chappell

Cloud & AI Technologies for Faster, Secure Semiconductor Supply Chains

Semiconductors are deeply embedded in every aspect of our lives, and recent security threats and global supply chain challenges have put a spotlight on the industry. Significant investments are being made by both nation states and commercial industry to manage supply chain dependencies, ensure integrity, and build secure, collaborative environments that foster growth. These shifts provide unique opportunities for our industry. This talk blends insights and experiences from government initiatives and Azure's Special Capabilities & Infrastructure programs to outline how Cloud + AI technologies, along with tool vendors, fabless semiconductor companies, IP providers, foundries, equipment manufacturers, and other ecosystem stakeholders, can contribute to building a robust, end-to-end, secure silicon supply chain for both commercial and government applications, while generating value for their businesses.

Kailash Gopalakrishnan

Abstract will be available shortly.

Sam Naffziger

Cross-Disciplinary Innovations Required for the Future of Computing

With the traditional drivers of compute performance a thing of the past, innovative engineers are tapping into new vectors of improvement to meet the world's demand for computation. Like never before, the future of computing will be owned by those who can optimize across the previously siloed domains of silicon design, processor architecture, package technology, and software algorithms to deliver performance gains with new capabilities. These approaches will derive performance and power efficiency by tailoring the architecture to particular workloads and market segments, leveraging the much greater performance/watt and performance/area of accelerated solutions. Designing and verifying multiple tailored solutions for markets where a less efficient general-purpose design formerly sufficed can be accomplished through modular architectures using 2.5D and 3D packaging approaches. Delivering modular solutions for high-volume markets requires simultaneously optimizing across packaging, silicon, and interconnect technologies, where in the past silicon design alone was sufficient. This talk will cover these trends along with the vectors of innovation required to deliver these next-generation compute platforms.

2021 TechTalk Speakers

Serge Leef

Reimagining Digital Simulation

In the last few decades, digital event-driven simulation has largely relied on underlying hardware for performance gains; core algorithms have not undergone truly transformative changes. Past efforts to accelerate simulation with special-purpose hardware have repeatedly fallen behind the ever-improving performance of general-purpose computers enabled by Moore's Law. Emulation-based strategies have also reached a performance ceiling. We are now at the end of the road with Moore's Law, and the time is right to fundamentally rethink simulation algorithms, methodologies, and computational strategies, considering hyperscaling facilitated by the cloud and advances in domain-specific computing. This talk will examine the past and a possible future of simulation, a key technology enabler for advanced chip designs.
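
To make concrete what "event-driven" means here, the following is a minimal, illustrative sketch (not from the talk; the gate model, delays, and function names are arbitrary assumptions) of the classic event-driven simulation loop: pending value changes sit in a time-ordered queue, and evaluating each change may schedule new changes on downstream signals.

    # Illustrative sketch only (not from the talk): a toy event-driven simulator.
    import heapq

    def simulate(initial_events, evaluate, until=1000):
        # initial_events: list of (time, signal, value) tuples.
        # evaluate: callback that returns newly scheduled (time, signal, value) events.
        queue = list(initial_events)
        heapq.heapify(queue)                      # process events in time order
        state = {}
        while queue:
            time, signal, value = heapq.heappop(queue)
            if time > until:
                break
            if state.get(signal) != value:        # only actual changes create activity
                state[signal] = value
                for event in evaluate(time, signal, value, state):
                    heapq.heappush(queue, event)  # schedule fan-out updates
        return state

    # Example netlist: a single inverter "b = not a" with a 1-time-unit delay.
    def evaluate(time, signal, value, state):
        if signal == "a":
            return [(time + 1, "b", 0 if value else 1)]
        return []

    print(simulate([(0, "a", 1), (5, "a", 0)], evaluate))   # {'a': 0, 'b': 1}

Because only signals that actually change cause work, performance hinges on how fast the queue and fan-out evaluation can run, which is exactly where cloud-scale parallelism and domain-specific hardware could change the picture.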

Steve Roddy

The AI Hype Cycle is Over. Now What?

The expectations around AI and ML have been enormous, which fueled investment and innovation as companies scrambled for scalable approaches to building and deploying AI and ML solutions. Experimentation, in both hardware and software, has been the order of the day:

  • Ramping up the core technology to improve accuracy and take on more use cases.
  • Experimenting with the technology (models and processors) to understand what was possible, what worked, what didn't and why.

The exuberance of the moment, however, created some unintended consequences. Take, for example, a fully parameterized, complex Transformer network. In an analysis by Northeastern University, the 300-million-parameter model generated 300 tons of carbon emissions during training. Since then, accuracy and efficiency have improved gradually.

Today, as the shouting dies down, the biggest trend, one that is having profound effects in helping teams innovate, is in hardware. The days of general-purpose hardware anchoring AI and ML are quickly giving way to specialized compute that allows engineers not only to tune their solutions for accuracy and efficiency but also to deploy them more effectively across the compute spectrum. Industry veteran Steve Roddy, head of AI and ML product for Arm, will describe how a new era of democratized design is accelerating innovation in AI, and how design teams that embrace it are speeding ahead of the pack.

Michael Jackson

More than Moore and Charting the Path Beyond 3nm

For more than fifty years, the trend known as Moore’s Law has accurately predicted a doubling of transistor count every twenty-four months. As 3nm technology moves into production, process engineers are feverishly working to uphold Moore’s Law by further miniaturizing the next generation of semiconductor technology. Meanwhile, the term “More than Moore” was coined in 2010 to describe a second trend: the integration of diverse functions and subsystems in 2D SoCs and 2.5D and 3D packages. Today, the trends of Moore’s Law and “More than Moore” synergize to produce ever higher-value systems.

Working together, advances in both process technology and electronic design automation (EDA) have driven the fundamental evolution behind these two important semiconductor trends. This talk will examine the amazing and innovative developments in EDA over the years, culminating in the era of 3DIC and machine learning-based EDA, to chart the path to 3nm and More than Moore.
