AI-Assisted EDA: Auto-Placement and Routing with RL
DOI: https://doi.org/10.15662/IJRAI.2022.0502002

Keywords: Reinforcement Learning, Electronic Design Automation, Placement, Routing, Deep Learning, Graph Neural Networks, Auto-Placement, DeepPlace, DeepPR, MaskPlace, Parameter Tuning, EDA Automation

Abstract
As the demand for faster and more efficient chip design escalates, traditional placement and routing techniques within Electronic Design Automation (EDA) struggle to keep pace with increasing complexity and scale. Reinforcement Learning (RL), particularly when combined with Graph Neural Networks (GNNs) or deep learning architectures, has emerged as a promising approach to automating and optimizing these processes. This paper explores RL-enabled auto-placement and routing methodologies in chip design, focusing on their ability to learn from experience, generalize across unseen netlists, and streamline physical design workflows. We present an integrated review of RL-based systems such as DeepPlace and DeepPR, which jointly handle macro placement and routing, as well as Google's pioneering deep RL framework that significantly reduces human effort and design time. These agents leverage multi-view embeddings, attention mechanisms, and reward shaping to optimize layout quality, wirelength, congestion, and manufacturability. Performance evaluations on benchmark sets demonstrate that RL-driven approaches can produce layouts of comparable or superior quality within hours, in contrast with the weeks required by manual design cycles. Additional advancements include MaskPlace, which reframes placement as visual representation learning and further improves metrics such as wirelength and density. Moreover, agent-based methods for parameter tuning within commercial EDA tools show considerable improvements in solution quality with dramatically fewer iterations. Overall, RL-based EDA automation offers strong potential to enhance productivity and design quality. However, challenges such as reward sparsity, scalability, training cost, and reproducibility remain open research areas. This paper consolidates recent successes and outlines future directions to advance AI-assisted EDA solutions.
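To make the reward-shaping idea above concrete: RL placement agents are typically rewarded for jointly reducing estimated wirelength and congestion. The sketch below is illustrative only and is not taken from any of the cited systems; it assumes the standard half-perimeter wirelength (HPWL) estimate, a scalar congestion score, and hypothetical weights `w_wl` and `w_cong`.

```python
# Minimal sketch of reward shaping for RL-based placement (illustrative,
# not the implementation of DeepPlace, DeepPR, or Google's framework).
# reward = -(w_wl * wirelength + w_cong * congestion), so the agent is
# rewarded for reducing both objectives.

def hpwl(net, positions):
    """Half-perimeter wirelength of one net: bounding-box width + height."""
    xs = [positions[cell][0] for cell in net]
    ys = [positions[cell][1] for cell in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def placement_reward(nets, positions, congestion, w_wl=1.0, w_cong=0.5):
    """Negative weighted cost; the weights are hypothetical, not published values."""
    total_wl = sum(hpwl(net, positions) for net in nets)
    return -(w_wl * total_wl + w_cong * congestion)

# Usage: two nets connecting three macros placed on a grid.
positions = {"m0": (0, 0), "m1": (4, 3), "m2": (2, 1)}
nets = [["m0", "m1"], ["m1", "m2"]]
reward = placement_reward(nets, positions, congestion=2.0)
```

In practice such a dense per-step or per-episode signal is one way to mitigate the reward-sparsity problem noted later in the abstract, since raw routability feedback only arrives after routing completes.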
References
1. Mirhoseini, A., et al. (2020). Chip Placement with Deep Reinforcement Learning. arXiv.
2. Cheng, R., & Yan, J. (2021). On Joint Learning for Solving Placement and Routing in Chip Design (DeepPlace, DeepPR). arXiv.
3. Google's RL-based placement and MaskPlace developments.
4. RL-based parameter tuning frameworks for EDA tools.
5. Challenges in reward design and RL applicability in EDA.