DAC 58 will bring together the most forward-thinking leaders in design automation, representing both industry and academia, to drive innovation and research in the field of design and automation of electronic systems.
Architecture-Aware Precision Tuning with Multiple Number Representation Systems*
- Daniele Cattaneo, Politecnico di Milano, Milan, Italy
- Michele Chiari, Politecnico di Milano, Milan, Italy
- Nicola Fossati, Politecnico di Milano, Milan, Italy
- Giovanni Agosta, Politecnico di Milano, Milan, Italy
- Stefano Cherubin, Codeplay Software Ltd., Edinburgh, United Kingdom
Precision tuning trades accuracy for speed and energy savings, usually by reducing the data width or by switching from floating-point to fixed-point representations. However, comparing precision across different representations is difficult. We present a metric that enables this comparison, and employ it to build an Integer Linear Programming-based methodology for tuning data type selection. We apply the proposed metric and methodology to a range of processors, demonstrating an improvement in performance (up to 9x) with very limited precision loss (<2.8% for 90% of the benchmarks) on the PolyBench benchmark suite.
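To make the representation trade-off concrete, the sketch below (illustrative only, not the authors' metric or ILP formulation) measures the worst-case error introduced by storing values on a fixed-point grid with a given number of fractional bits:

```python
# Hypothetical sketch: quantify the error of a fixed-point representation.
# The function names and the metric are illustrative assumptions.
def to_fixed(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def max_fixed_error(values, frac_bits):
    """Worst-case absolute error over a value set for this representation."""
    return max(abs(v - to_fixed(v, frac_bits)) for v in values)

vals = [0.1, 0.375, 2.71828, -1.5]
# More fractional bits -> a finer grid -> smaller worst-case error.
assert max_fixed_error(vals, 16) <= max_fixed_error(vals, 8)
```

A precision tuner can compare such per-representation error bounds against a budget while an ILP picks the cheapest data type per variable.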
Distilling Arbitration Logic from Traces using Machine Learning: A Case Study on NoC*
- Yuan Zhou, Cornell University, Ithaca, NY
- Zhiru Zhang, Cornell University, Ithaca, NY
- Hanyu Wang, Shanghai Jiao Tong University, Shanghai, China
- Jieming Yin, Lehigh University, Bethlehem, PA
Deep learning techniques have been shown to achieve superior performance on several arbitration tasks in computer hardware. However, these techniques cannot be directly implemented in hardware because of the prohibitive area and latency overhead. In this work, we propose a novel methodology to automatically "distill" the arbitration logic from simulation traces. We leverage tree-based models as a bridge to convert deep learning models to logic, and present a case study on a network-on-chip port arbitration task. The generated arbitration logic achieves significant reduction in average packet latency compared with the baselines.
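As a toy illustration of the distillation idea (not the paper's tree-based pipeline), the sketch below fits a one-feature decision stump to binary arbitration traces and emits it as combinational logic; the trace format and signal names are assumptions:

```python
# Hypothetical sketch: distill arbitration logic from traces.
def fit_stump(traces):
    """traces: list of (features_dict, grant_bit). Pick the single binary
    feature whose value best predicts the arbitration decision."""
    best = None
    for f in traces[0][0]:
        agree = sum(1 for x, y in traces if x[f] == y)
        acc = max(agree, len(traces) - agree) / len(traces)
        if best is None or acc > best[1]:
            invert = agree < len(traces) - agree
            best = (f, acc, invert)
    return best

def stump_to_verilog(stump, out="grant"):
    """Emit the stump as a single continuous assignment."""
    f, _, invert = stump
    return f"assign {out} = {'~' if invert else ''}{f};"

traces = [({"req_valid": 1, "buf_full": 0}, 1),
          ({"req_valid": 0, "buf_full": 0}, 0),
          ({"req_valid": 1, "buf_full": 1}, 1),
          ({"req_valid": 0, "buf_full": 1}, 0)]
print(stump_to_verilog(fit_stump(traces)))  # assign grant = req_valid;
```

The paper's approach uses deeper tree ensembles as the bridge from a DNN to logic; a stump is the depth-one special case.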
DNN-Opt: An RL Inspired Optimization for Analog Circuit Sizing Using Deep Neural Networks
- Ahmet F. Budak, The University of Texas at Austin, Austin, TX
- David Pan, The University of Texas at Austin, Austin, TX
- Nan Sun, The University of Texas at Austin, Austin, TX
- Prateek Bhansali, Intel Corporation, Hillsboro, OR
- Chandramouli V. Kashyap, Intel Corporation, Hillsboro, OR
- Bo Liu, University of Glasgow, Glasgow, United Kingdom
In this paper, we present DNN-Opt, a novel Deep Neural Network (DNN) based black-box optimization framework for analog sizing. Our method outperforms other black-box optimization methods on small building blocks and large industrial circuits, requiring significantly fewer simulations while achieving better performance. The paper's key contributions are a novel sample-efficient two-stage deep learning optimization framework inspired by actor-critic algorithms from the Reinforcement Learning (RL) community, and its extension to industrial-scale circuits. To the best of our knowledge, this is the first application of DNN-based circuit sizing to industrial-scale circuits.
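The sample-efficiency idea can be sketched as follows. In this toy loop (not the authors' implementation), a cheap nearest-neighbor surrogate stands in for DNN-Opt's deep networks, and a quadratic cost function stands in for an expensive circuit simulator; only the most promising candidate per round pays for a "simulation":

```python
import random

# Toy two-stage black-box loop in the spirit of DNN-Opt. All names and the
# cost function are illustrative assumptions, not the paper's method.
def simulate(x):
    """Stand-in for an expensive circuit simulation; minimum at x = 0.7."""
    return (x - 0.7) ** 2

def surrogate(x, history):
    """Cheap model fit to past (x, cost) samples: nearest-neighbor predictor."""
    return min(history, key=lambda h: abs(h[0] - x))[1]

random.seed(0)
# Stage 1: a few expensive simulations to seed the model.
history = [(x, simulate(x)) for x in (random.random() for _ in range(5))]
# Stage 2: propose many candidates, but simulate only the most promising one.
for _ in range(20):
    cands = [random.random() for _ in range(50)]
    best = min(cands, key=lambda c: surrogate(c, history))
    history.append((best, simulate(best)))
print(min(history, key=lambda h: h[1]))  # best (x, cost) sample found
```

Replacing the nearest-neighbor predictor with trained actor and critic networks, and the scalar with a vector of device sizes, recovers the shape of the framework described above.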
Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration
- Hasan N. Genc, University of California, Berkeley, Berkeley, CA
- Seah Kim, University of California, Berkeley, Berkeley, CA
- Alon Amid, University of California, Berkeley, Berkeley, CA
- Ameer Haj-Ali, University of California, Berkeley, Berkeley, CA
- Vighnesh Iyer, University of California, Berkeley, Berkeley, CA
- Pranav Prakash, University of California, Berkeley, Berkeley, CA
- Jerry Zhao, University of California, Berkeley, Berkeley, CA
- Daniel Grubb, University of California, Berkeley, Berkeley, CA
- Harrison Liew, University of California, Berkeley, Berkeley, CA
- Howard Mao, University of California, Berkeley, Berkeley, CA
- Albert Ou, University of California, Berkeley, Berkeley, CA
- Colin Schmidt, University of California, Berkeley, Berkeley, CA
- Samuel Steffl, University of California, Berkeley, Berkeley, CA
- John Wright, University of California, Berkeley, Berkeley, CA
- Ion Stoica, University of California, Berkeley, Berkeley, CA
- Krste Asanovic, University of California, Berkeley, Berkeley, CA
- Borivoje Nikolic, University of California, Berkeley, Berkeley, CA
- Yakun Sophia Shao, University of California, Berkeley, Berkeley, CA
- Jonathan Ragan-Kelley, Massachusetts Institute of Technology, Cambridge, MA
DNN accelerators are often developed and evaluated in isolation without considering the cross-stack, system-level effects in real-world environments. This makes it difficult to appreciate the impact of System-on-Chip (SoC) resource contention, OS overheads, and programming-stack inefficiencies on overall performance/energy-efficiency. To address this challenge, we present Gemmini, an open-source, full-stack DNN accelerator generator. Gemmini generates a wide design-space of efficient ASIC accelerators from a flexible architectural template, together with flexible programming stacks and full SoCs with shared resources that capture system-level effects. Gemmini-generated accelerators have also been fabricated, delivering up to three orders-of-magnitude speedups over high-performance CPUs on various DNN benchmarks.
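A behavioral model of the kind of dataflow a Gemmini-like generator targets can be written in a few lines. The sketch below (an illustration, not Gemmini's RTL) models an output-stationary systolic array: each processing element (i, j) owns one output element and accumulates a multiply-accumulate per streamed operand pair:

```python
# Illustrative output-stationary systolic matmul model (not Gemmini's RTL).
def systolic_matmul(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    for t in range(k):                       # one "cycle" per operand pair streamed in
        for i in range(n):
            for j in range(m):
                C[i][j] += A[i][t] * B[t][j]  # MAC inside PE (i, j)
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert systolic_matmul(A, B) == [[19, 22], [43, 50]]
```

The full-stack point of the work above is that such a kernel model alone misses what Gemmini captures: SoC resource contention, OS overheads, and the programming stack around the array.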
A Resource Binding Approach to Logic Obfuscation
- Michael Zuzak, University of Maryland, College Park, College Park, MD
- Yuntao Liu, University of Maryland, College Park, College Park, MD
- Ankur Srivastava, University of Maryland, College Park, College Park, MD
Logic locking counters security threats during IC fabrication. Research has identified a trade-off between two goals of locking: error injection and SAT-attack resilience. As a result, locking often cannot inject sufficient error to impact an IC while maintaining SAT resilience. We propose using the architectural context available during resource binding to co-design architectures and locking configurations with both high corruption and SAT resilience. We propose two security-focused binding/locking algorithms and apply them to bind and lock 11 MediaBench benchmarks. These circuits showed a 26x and 99x increase in the application errors of a fixed locking configuration while maintaining SAT resilience and incurring minimal design overhead.
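For readers new to locking, the generic mechanism (XOR key gates, not the binding-aware algorithms proposed above) can be illustrated in a few lines; the gate and key values here are assumptions for the example:

```python
# Toy illustration of generic XOR-based logic locking.
def locked_circuit(a, b, key_bit, secret=1):
    """A locked AND gate: correct output only when key_bit matches secret."""
    core = a & b                      # original logic
    return core ^ key_bit ^ secret    # XOR key gate inserted at the output

# The correct key recovers a & b on every input pattern;
# a wrong key flips every output, injecting error.
for a in (0, 1):
    for b in (0, 1):
        assert locked_circuit(a, b, key_bit=1) == (a & b)
        assert locked_circuit(a, b, key_bit=0) == (a & b) ^ 1
```

The trade-off named in the abstract arises because key gates that corrupt many outputs tend to be easier for SAT-based attacks to resolve; the binding-time co-design aims at both goals at once.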
Accelerating EDA Algorithms with GPUs and Machine Learning
Topic Area(s): EDA, Machine Learning/AI
Session Organizers: Brucek Khailany, NVIDIA, Austin, TX; David Pan, The University of Texas at Austin, Austin, TX
Recent advancements in GPU-accelerated computing platforms and machine learning (ML) based optimization techniques have led to exciting research progress, with large speedups on many EDA algorithms fundamental to semiconductor design flows. In this session, we highlight ongoing research deploying GPUs and ML for mask synthesis, IC design automation, and PCB design at commercial EDA vendors and semiconductor design and manufacturing companies. Research into mask synthesis shows the potential of GPUs to accelerate inverse lithography and to run training and inference of ML models for process modeling. In PCB layout editing, GPU-accelerated path rendering techniques can scale to millions of rendered objects with interactive responsiveness. In IC physical design, GPU-accelerated reinforcement learning for DRC fixing, combined with traditional EDA optimization techniques, can automate standard-cell layout generation. Together, GPUs and ML can deliver large speedups and automate key EDA tasks previously seen as intractable.
Democratizing Design Automation: Next-Generation Open-Source Tools for Hardware Specialization
Topic Area(s): EDA, Machine Learning/AI
Session Organizer: Antonino Tumeo, Pacific Northwest National Laboratory, Richland, WA
The growth of autonomous systems, coupled with the design effort and cost challenges brought by new technology nodes, is driving the need for generators that can quickly translate high-level algorithmic specifications into specialized hardware implementations. The need to explore additional dimensions of the design space (e.g., accuracy, security, system size, and cooling) further emphasizes the need for interoperable tools. This special session focuses on efforts toward interoperable, modularized, and open-source tools that provide a no-human-in-the-loop design cycle from high-level specifications to ASICs, and that further promote novel research. The first talk introduces the status quo and CIRCT, an initiative aiming to apply MLIR and the LLVM development methodology to design automation. The second and third talks describe state-of-the-art tools for high-level synthesis and for logic synthesis, respectively, and discuss explorations to bridge the two. The session overviews how interoperability is achieved today, along with the opportunities, challenges, and new perspectives enabled by community efforts.
Design Automation of Autonomous Systems: State-of-the-Art and Future Directions
Topic Area(s): Autonomous Systems, Design
Session Organizers: Qi Zhu, Northwestern University, Evanston, IL; Shaoshan Liu, PerceptIn, Mountain View, CA
Design processes leverage various automated tools to support requirement engineering, design, implementation, verification, validation, testing, and evaluation. In the automotive and aerospace domains, design automation processes and tools have been architected and developed over the years and used to design products with an established level of confidence. The recent success of Artificial Intelligence (AI) has shown great promise in improving system intelligence and autonomy for these applications. However, the adoption of those techniques also presents significant challenges for design processes that must ensure system safety, performance, reliability, security, etc. This special session will discuss essential design automation processes/tools and industrial efforts to support the development and deployment of future autonomous systems, particularly in the automotive and aerospace domains.
Hardware-Aware Learning for Medical Image Computing and Computer-Assisted Intervention
Topic Area(s): Design, Machine Learning/AI
Session Organizer: Lei Yang, University of New Mexico, Albuquerque, NM
Deep learning has recently demonstrated performance comparable with, and in some cases superior to, that of human experts in medical image computing. However, deep neural networks are typically very large, which, combined with large medical image sizes, creates various hurdles to their clinical application. In medical image computing, not only accuracy but also latency and security are of primary concern, and the hardware platforms are sometimes resource-constrained. The first two talks in this session propose novel solutions for the data acquisition and data processing stages of medical image computing, respectively, using hardware-oriented schemes for lower latency, smaller memory footprint, and higher performance on embedded platforms. Addressing the privacy requirement, the third talk further demonstrates a software/hardware co-exploration framework for a hybrid trusted execution environment in medical image computing, preserving privacy while achieving higher efficiency than human experts.
Machine Learning Meets Computing Systems Design: The Bidirectional Highway
Topic Area(s): Design, Machine Learning/AI
Session Organizer: Partha P. Pande, Washington State University, Pullman, WA
With the rising need for advanced algorithms for large-scale data analysis and data-driven discovery, and with significant growth in emerging applications from the edge to the cloud, we need low-cost, high-performance, energy-efficient, and reliable computing systems targeted at these applications. Developing these application-specific hardware elements must become easy, inexpensive, and seamless to keep up with the extremely rapid evolution of AI/ML algorithms and applications. It is therefore a high priority to create innovative design frameworks, enabled by data analytics and machine learning, that reduce the engineering cost and design time of application-specific hardware. There is also a need to continually advance software algorithms and frameworks to better cope with data available to platforms at multiple scales of complexity. To the best of our knowledge, this is the first special session at any EDA conference that explores both directions of cross-fertilization between computing system design and ML.
A Quantum Leap in Machine Learning: From Applications to Implementations
Topic Area(s): Design
Session Organizer: Robert Wille, Johannes Kepler University, Linz, Austria
Classical machine learning techniques that have been extensively studied for discriminative and generative tasks are cumbersome and, in many applications, inefficient. They require millions of parameters and remain inadequate for modeling a target probability distribution. For example, computational approaches to accelerating drug discovery with machine learning face the curse of dimensionality due to the exploding number of constraints that must be imposed using reinforcement learning. Quantum machine learning (QML) techniques, with strong expressive power, can learn richer representations of data with fewer parameters, less training data, and less training time. However, the methodologies for designing and training these QML workloads are still emerging. Furthermore, the usage model of small, noisy quantum hardware for QML tasks that solve practically relevant problems is an active area of research. This special session will provide insights on building, training, and exploiting scalable QML circuits to solve socially relevant combinatorial optimization applications, including drug discovery.
Smart Robots with Sensing, Understanding, and Acting
Topic Area(s): Autonomous Systems, Machine Learning/AI
Session Organizer: Janardhan Rao (Jana) Doppa, Washington State University, Pullman, WA; Yu Wang, Tsinghua University, Beijing, China
The robotics industry holds enormous promise, but development rates are bogged down by increasingly complex software that must meet performance and safety requirements in the face of long-tail events. Moreover, intelligent robots should adapt in the field to unexpected conditions that may never have been observed at design time. Design automation for autonomy has the potential to accelerate the rate at which we overcome these challenges (particularly outside of the autonomous driving sector, which throws massive resources at the problem). This talk discusses how the key tools of machine learning, AutoML, simulation, and design optimization have made an impact on systems development for two medical robotics projects, ocular microsurgery and tele-nursing, and how they will continue to make an impact in other sectors such as automated warehouses, service robots, and agriculture.