The Design Automation Conference (DAC) is the premier event devoted to the design and design automation of electronic chips and systems. DAC focuses on the latest methodologies and technology advancements in electronic design. The 58th DAC will bring together researchers, designers, practitioners, tool developers, students and vendors.
Mr. Costello is considered to have founded the EDA industry when, in the late 1980s, he became President of Cadence Design Systems and drove annual revenues to over $1B, making Cadence the first EDA company to achieve that milestone. In 2004, he was awarded the Phil Kaufman Award by the Electronic System Design Alliance in recognition of the business contributions that helped grow the EDA industry. After leaving Cadence, he led numerous startups to successful exits, including Enlighted, Orb Networks, think3, and Altius. He received his BS in Physics from Harvey Mudd College and holds master's degrees in Physics from both Yale University and UC Berkeley.
The Potential of Machine Learning for Hardware Design
In this talk, I'll describe the tremendous progress in machine learning over the last decade, explain how it has changed the hardware we want to build for performing such computations, and survey some areas where machine learning has the potential to help with difficult problems in computer hardware design. I'll also briefly touch on some future directions for machine learning and their potential impact.
GPUs, Machine Learning, and EDA
GPU-accelerated computing and machine learning (ML) have revolutionized computer graphics, computer vision, speech recognition, and natural language processing. We expect ML and GPU-accelerated computing will also transform EDA software and, as a result, chip design workflows. Recent research shows that orders-of-magnitude speedups are possible on accelerated computing platforms, and that the combination of GPUs and ML can enable automation of tasks previously seen as intractable or too difficult to automate. This talk will cover near-term applications of GPUs and ML to EDA tools and chip design, as well as a long-term vision of what is possible. The talk will also cover the advances in GPUs and ML hardware that are enabling this revolution.
Man vs. Machine or Man + Machine
This talk will focus on how to allocate roles and functions between humans and computers in order to design systems that leverage the symbiotic strengths of both. Such collaborative systems should let humans harness the raw computational and search power of artificial intelligence while still combating uncertainty with creative, out-of-the-box thinking. The successful systems of the future will be those that combine the human and the computer as a team instead of simply replacing the human.
Cloud & AI Technologies for Faster, Secure Semiconductor Supply Chains
Semiconductors are deeply embedded in every aspect of our lives, and recent security threats and global supply chain challenges have put a spotlight on the industry. Significant investments are being made by both nation states and commercial industry to manage supply chain dependencies, ensure integrity, and build secure, collaborative environments that foster growth. These shifts present unique opportunities for our industry. This talk blends insights and experiences from government initiatives and Azure's Special Capabilities & Infrastructure programs to outline how Cloud + AI technologies, along with tool vendors, fabless semiconductor companies, IP providers, foundries, equipment manufacturers, and other ecosystem stakeholders, can contribute to building a robust, end-to-end, secure silicon supply chain for both commercial and government applications, while generating value for their businesses.
The Precision-Scaling-Powered Performance Roadmap for AI Inference and Training Systems
Over the past decade, Deep Neural Network (DNN) workloads have dramatically increased the computational requirements of AI training and inference systems, significantly outpacing the performance gains traditionally obtained through Moore's Law silicon scaling. New computer architectures, powered by low-precision arithmetic engines (FP16 for training and INT8 for inference), have laid the foundation for high-performance AI systems; however, there remains an insatiable demand for AI compute with much higher power efficiency and performance. In this talk, I'll outline some of the exciting innovations, as well as the key technical challenges, that can enable systems with aggressively scaled precision for inference and training while fully preserving model fidelity. I'll also highlight some key complementary trends, including 3D stacking, sparsity, and analog computing, that can enable dramatic growth in AI system capabilities over the next decade.
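As a concrete illustration of the low-precision arithmetic the abstract refers to, the sketch below (a minimal example, not code from the talk) applies symmetric per-tensor INT8 quantization, a scheme commonly used for inference, and checks that the round-trip error is bounded by half a quantization step:

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover a float approximation of the original values."""
    return q.astype(np.float32) * scale

# Toy "weights"; real inference engines quantize whole tensors,
# typically per layer or per output channel.
weights = np.array([0.02, -0.51, 0.77, -1.27], dtype=np.float32)
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each value's absolute error is at most half a quantization step.
assert float(np.max(np.abs(weights - approx))) <= scale / 2 + 1e-6
```

FP16 training works differently in kind: it keeps a floating-point exponent, trading precision for the wider dynamic range that gradients require, which is why the two formats split between training and inference.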
Cross-Disciplinary Innovations Required for the Future of Computing
With the traditional drivers of compute performance a thing of the past, innovative engineers are tapping into new vectors of improvement to meet the world's demand for computation. Like never before, the future of computing will be owned by those who can optimize across the previously siloed domains of silicon design, processor architecture, package technology, and software algorithms to deliver performance gains alongside new capabilities. These approaches will derive performance and power efficiency by tailoring the architecture to particular workloads and market segments, leveraging the much greater performance/Watt and performance/area of accelerated solutions. Designing and verifying multiple tailored solutions for markets where a less efficient general-purpose design formerly sufficed can be accomplished through modular architectures using 2.5D and 3D packaging approaches. Delivering modular solutions for high-volume markets requires simultaneously optimizing across packaging, silicon, and interconnect technologies, where in the past silicon design alone was sufficient. This talk will cover these trends, along with the vectors of innovation required to deliver these next-generation compute platforms.
Reimagining Digital Simulation
In the last few decades, digital event-driven simulation has relied largely on underlying hardware for performance gains; the core algorithms have not undergone truly transformative changes. Past efforts to accelerate simulation with special-purpose hardware have repeatedly fallen behind the ever-improving performance of general-purpose computers, enabled by Moore's Law. Emulation-based strategies have also reached a performance ceiling. We are now at the end of the road for Moore's Law, and the time is right to fundamentally rethink simulation algorithms, methodologies, and computational strategies, considering hyperscaling, facilitated by the cloud, and advances in domain-specific computing. This talk will examine the past and a possible future of simulation, a key technology enabler for advanced chip designs.
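The core algorithm being rethought here is simple to state: pop the earliest event from a time-ordered queue, execute it, and let it schedule future events. A minimal sketch of that kernel (illustrative only, with a hypothetical inverter as the design under test):

```python
import heapq

class EventSimulator:
    """Minimal event-driven simulation kernel: a time-ordered event queue.

    Each event is (time, seq, callback); callbacks may schedule new events.
    """
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so simultaneous events keep FIFO order

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._queue:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()

# Toy design: an inverter whose output toggles 1 time unit after its input.
trace = []
sim = EventSimulator()

def drive(value):
    def event():
        trace.append((sim.now, "in", value))
        sim.schedule(1.0, lambda: trace.append((sim.now, "out", 1 - value)))
    return event

sim.schedule(0.0, drive(0))
sim.schedule(5.0, drive(1))
sim.run()
# trace now holds events in global time order: inputs at t=0 and t=5,
# inverted outputs at t=1 and t=6.
```

Because each popped event may read or write shared signal state, this loop is inherently sequential, which is one reason the algorithm has leaned so heavily on single-core hardware improvements.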
Delivering Systemic Innovation to Power in the Era of SysMoore
The SysMoore era is characterized by the widening gap between what classic Moore's Law scaling can deliver and massively increasing system complexity. The days of traditional system-on-a-chip complexity are giving way to systems-of-chips complexity, with the continued need for smaller, faster, and lower-power process nodes coupled with large-scale multi-die integration methodologies to coalesce new breeds of intelligence and compute, at scale. To enable such systems, we need to look beyond targeted but piecemeal innovation to something much broader, able to deliver holistically and on a grander scale.
Systemic thinking coupled with systemic innovation is key to addressing both prevailing and future industry challenges and approaching them comprehensively is necessary to deliver the technological and productivity gains demanded to drive the next wave of transformative products.
This presentation will outline some of the myriad challenges facing designers in this era of SysMoore, and the systemic innovations across the broad silicon-to-software spectrum that address them. Join us to learn how a combination of intelligent, autonomous, and analytics-driven design is paving the way to reliable, autonomous, always-connected vehicles, how this hyper-integrated approach to innovation is being deployed to deliver the secure, AI-enabled, multi-die HPC compute systems of tomorrow, and much more.
More than Moore and Charting the Path Beyond 3nm
For more than fifty years, the trend known as Moore’s Law has astutely predicted a doubling of transistor count every twenty-four months. As 3nm technology moves into production, process engineers are feverishly working to uphold Moore’s Law by further miniaturizing the next generation of semiconductor technology. Meanwhile, a second trend referred to as “More than Moore” was coined in 2010 to reflect the integration of diverse functions and subsystems in 2D SoCs and 2.5D and 3D packages. Today, the trends of Moore’s Law and “More than Moore” synergize to produce ever higher value systems.
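The compounding behind that twenty-four-month cadence is easy to make concrete; this short sketch (an illustration, not material from the talk) projects transistor counts under the stated doubling assumption:

```python
def transistor_count(start_count: float, years: float,
                     doubling_months: float = 24.0) -> float:
    """Project a transistor count assuming one doubling per `doubling_months`."""
    doublings = (years * 12.0) / doubling_months
    return start_count * 2.0 ** doublings

# Fifty years at one doubling every two years is 25 doublings:
# a growth factor of 2**25, roughly 33.5 million.
growth_factor = transistor_count(1.0, 50.0)
assert growth_factor == 2.0 ** 25
```

That multi-million-fold compounding is what makes each further node, such as 3nm, so hard-won for process engineers.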
Working together, advances in both process technology and electronic design automation (EDA) have driven fundamental evolutions behind these two important semiconductor trends. This talk will examine the amazing and innovative developments in EDA over the years, culminating in the era of 3DIC and Machine Learning-based EDA to chart the path to 3nm and More than Moore.
The AI Hype Cycle is Over. Now What?
The expectations around AI and ML have been enormous, which fueled investment and innovation as companies scrambled for scalable approaches to building and deploying AI and ML solutions. Experimentation, in both hardware and software, has been the order of the day:
- Ramping up the core technology to improve accuracy and take on more use cases.
- Experimenting with the technology (models and processors) to understand what was possible, what worked, what didn't and why.
The exuberance of the moment, however, created some unintended consequences. Take, for example, a fully parameterized, complex Transformer network: in an analysis by Northeastern University, training the 300-million-parameter model emitted an estimated 300 tons of carbon. Since then, accuracy and efficiency have improved gradually.
Today, as the shouting dies down, the biggest trend – one that is having profound effects in helping teams innovate – is in hardware. The days of general-purpose hardware anchoring AI and ML are quickly giving way to specialized compute that allows engineers not only to tune their solutions for accuracy and efficiency but also to deploy them more effectively across the compute spectrum. Industry veteran Steve Roddy, head of AI and ML product for Arm, will describe how a new era of democratized design is accelerating innovation in AI, and how the design teams that embrace it are speeding ahead of the pack.