The exponential rise of AI is disrupting the EDA industry. There's no better place to learn how to position your company to take advantage of this trend than at #60DAC: ideas and research around AI have been a growing part of the DAC program for several years.
Since 2018, submissions of AI/ML papers to the DAC program have increased by 86%. In 2022, 30% of DAC program sessions focused on AI hardware and software design, development challenges, and the latest research findings.
4th ROAD4NN Workshop: Research Open Automatic Design for Neural Networks
Organizer: Zhenman Fang, Simon Fraser University
Presenters: Yanzhi Wang, Northeastern University; Linghao Song, University of California, Los Angeles
Description: In the past decade, machine learning, especially neural-network-based deep learning, has achieved remarkable success. Various neural networks, such as CNNs, RNNs, LSTMs, Transformers, Vision Transformers, GNNs, and SNNs, have been deployed for industrial applications like image classification, speech recognition, and automated control. On one hand, neural network algorithms are evolving rapidly: almost every week a new model appears from a major academic or industry institution. On the other hand, all major industry players have been developing and/or deploying specialized hardware platforms to accelerate the performance and energy efficiency of neural networks across cloud and edge devices, including NVIDIA GPUs, Intel Nervana/Habana/Loihi ASICs, Xilinx FPGAs, Google TPUs, Microsoft Brainwave, and Amazon Inferentia, to name just a few. However, there is a significant gap between fast-moving algorithms and much slower hardware development, which calls for broader participation in software-hardware co-design from both academia and industry. In this workshop, we focus on research open automatic design for neural networks: a holistic, open-source approach to general-purpose computer systems broadly inspired by neural networks. More specifically, we discuss full-stack open-source infrastructure support to develop and deploy novel neural networks, including novel algorithms and applications, hardware architectures and emerging devices, as well as programming, system, and tool support. We plan to bring together academic and industry experts to share their experience and discuss the challenges they face as well as potential focus areas for the community.
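As one minimal sketch of the kind of open-source develop-and-deploy flow this workshop discusses (the model architecture, input shape, and output file name below are illustrative assumptions, not from the workshop), the snippet defines a small CNN in PyTorch and exports it to the ONNX exchange format, from which hardware-specific toolchains can consume the graph:

```python
import torch
import torch.nn as nn

# A deliberately small CNN standing in for the neural networks discussed
# above; the architecture is illustrative only.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 -> 32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN().eval()
dummy_input = torch.randn(1, 3, 32, 32)  # assumed 32x32 RGB input

# Export to ONNX so FPGA/ASIC/accelerator toolchains can pick up the model.
torch.onnx.export(model, dummy_input, "tiny_cnn.onnx")
```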
Organizer: Yongan Zhang, Georgia Institute of Technology
Presenters: Yingyan (Celine) Lin, Georgia Institute of Technology; Yonggan Fu, Georgia Institute of Technology; Yang Zhao, Rice University; Chaojian Li, Georgia Institute of Technology; Amir Yazdanbakhsh, Google; Bichen Wu, Meta
Description: Deep neural networks (DNNs) have experienced significant breakthroughs in various artificial intelligence (AI) applications, leading to a growing interest in designing efficient DNNs that can be used on resource-constrained edge devices. One of the most promising techniques for achieving this is hardware-aware Neural Architecture Search (HW-NAS), which automates the design of efficient DNN structures for different applications and devices based on specified hardware-cost metrics, such as latency or energy. However, developing optimal HW-NAS solutions is challenging due to the computationally expensive training process and the need for cross-disciplinary knowledge spanning algorithms, micro-architecture, and device-specific compilation. Additionally, designing DNN accelerators is non-trivial and requires a large team of experts. Research has also shown that optimal DNN accelerators require joint consideration of network structure, accelerator design, and algorithm-to-hardware mapping, but the direction of jointly designing or searching for all three aspects has been only lightly explored. This tutorial aims to educate the research community on the challenges of co-searching DNNs and hardware, and to bring together researchers and practitioners interested in automating co-design for deployment on resource-constrained devices. Specifically, we will present several of our works addressing the above challenges.
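As a toy illustration of the hardware-aware search objective described above (the candidate architectures, their metrics, and the trade-off weight are hypothetical placeholders, not from the tutorial), the sketch below scores each candidate by accuracy minus a latency penalty and keeps the best one:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float    # validation accuracy estimate, in [0, 1]
    latency_ms: float  # measured or predicted latency on the target device

# Hypothetical candidates with pre-estimated metrics; a real HW-NAS flow
# would obtain these from training and on-device profiling.
candidates = [
    Candidate("mobile_small", accuracy=0.91, latency_ms=4.0),
    Candidate("mobile_large", accuracy=0.94, latency_ms=11.0),
    Candidate("server_wide",  accuracy=0.96, latency_ms=35.0),
]

LAMBDA = 0.005  # trade-off weight between accuracy and hardware cost

def hw_aware_score(c: Candidate) -> float:
    """Reward accuracy, penalize latency on the target hardware."""
    return c.accuracy - LAMBDA * c.latency_ms

best = max(candidates, key=hw_aware_score)
print(f"selected: {best.name} (score={hw_aware_score(best):.3f})")
```

A real HW-NAS system replaces the exhaustive scan with differentiable or evolutionary search over an enormous architecture space, but the objective has this same accuracy-versus-hardware-cost shape.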
Organizer: Jana Doppa, Washington State University
Presenters: Umit Ogras, University of Wisconsin, Madison; Priyadarshini Panda, Yale University
Description: Many real-world edge applications, including self-driving cars, mobile health, robotics, and augmented reality (AR) / virtual reality (VR), are enabled by complex AI algorithms (e.g., deep neural networks). Currently, much of the computation for these applications happens in the cloud, but there are several good reasons to perform the processing on local edge platforms (e.g., smartphones). First, it increases the accessibility of AI applications to different parts of the world by reducing dependence on the communication fabric and the cloud infrastructure. Second, many of these applications, such as robotics and AR/VR, require low latency; we may not be able to meet their critical real-time requirements without performing the computation directly on the edge platform. Third, for many important applications (e.g., mobile health), privacy and security are major concerns (e.g., for medical data). Since edge platforms are constrained in resources (power, computation, and memory), there is a great need for innovative solutions to realize the vision of practically useful Edge AI. This tutorial will address inference and training tasks using different neural networks, including CNNs and GNNs, for diverse edge applications. The presenters will describe the most compelling research advances in the design of hardware accelerators using emerging technologies (e.g., ReRAM, monolithic 3D, heterogeneous cores, chiplets); software approaches (e.g., model pruning, quantization of weights, adaptive computation); and various approaches for synergistic co-design to push the Pareto front of energy, performance, accuracy, and privacy.
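As a minimal, self-contained sketch of two of the software approaches named above, magnitude pruning and weight quantization (the layer size, pruning ratio, and 8-bit width are illustrative assumptions, not from the tutorial):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)  # illustrative layer size

# Magnitude pruning: zero out the 30% of weights with the smallest |w|.
prune.l1_unstructured(layer, name="weight", amount=0.3)
sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity after pruning: {sparsity:.2%}")

# Simple symmetric int8 post-training quantization of the weights.
w = layer.weight.detach()
scale = w.abs().max() / 127.0           # map the largest magnitude to 127
w_q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
w_deq = w_q.float() * scale             # dequantize to inspect the error
print(f"max quantization error: {(w - w_deq).abs().max().item():.6f}")
```

Both techniques shrink the memory and compute footprint of a model, which is precisely what makes deployment on power- and memory-constrained edge platforms feasible.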
Organizer: Andrew Kahng, University of California, San Diego
Moderator: Leigh Anne Clevenger, Silicon Integration Initiative, Inc.
Panelists: Scot Weber, AMD; Haoxing (Mark) Ren, NVIDIA; Thomas Andersen, Synopsys; Charles Alpert, Cadence Design Systems, Inc.; Paul Villarrubia, IBM Corporation; Youngsoo Shin, Korea Advanced Institute of Science & Technology
Description: Academic researchers, as well as the EDA industry and its major customers, have invested enormous effort and resources in the quest to apply AI in EDA and IC design. There is now a substantial body of production experience with AI-boosted EDA tools and design methodologies. Clearly, EDA can be helped by AI. For the DAC community, this motivates questions that seek a next level of understanding:
- What benefits (from infusion of AI) have been seen by design organizations?
- What areas/directions in EDA are now seen as “dead ends” for AI application?
- What areas/directions are likely to bring the greatest future benefits?
- Should we look for future benefits of AI “inside” vendor tools, or “outside” in customer deployment and methodology built around vendor tools?
- Looking back from the year 2030, how will AI have fundamentally changed EDA and IC design?
- Where do EDA suppliers and design organizations diverge the most in their envisioning of AI's future in EDA and IC design?
This panel brings together experts from across tool development, business development, production deployment, and the leading edge of research for AI-boosted EDA. All panelists have deep experience with IC physical implementation and signoff, which has been a focus area for AI application in EDA.