AI-Assisted EDA: Auto-Placement and Routing with RL

Authors

  • Mohit Rajesh Malhotra, St. Mary's Group of Institutions, Guntur, AP, India

DOI:

https://doi.org/10.15662/IJRAI.2023.0602002

Keywords:

Electronic Design Automation (EDA), Auto-Placement, Routing, Reinforcement Learning (RL), Deep Q-Network (DQN), Policy Gradient, Actor-Critic, Chip Design Optimization, Congestion Control, Timing Closure

Abstract

Electronic Design Automation (EDA) plays a critical role in modern integrated circuit (IC) design, particularly in placement and routing, which significantly impact the performance, power, and area of chips. Traditionally, placement and routing have relied on heuristic algorithms and manual tuning, which are time-consuming and struggle with the increasing complexity of modern designs. Recently, Artificial Intelligence (AI), especially Reinforcement Learning (RL), has emerged as a promising approach to automate and optimize these tasks more effectively. This paper explores AI-assisted EDA with a focus on auto-placement and routing using RL. Reinforcement learning, by enabling an agent to learn optimal policies through interaction with its environment, offers a dynamic and adaptive solution to the combinatorial optimization problems inherent in placement and routing. The paper discusses key RL algorithms applied to placement and routing tasks, including Deep Q-Networks (DQN), Policy Gradient methods, and Actor-Critic architectures. We review recent advances in which RL agents learn to place standard cells and macros while accounting for congestion, timing, and power constraints. Similarly, RL-based routing algorithms adaptively find wire paths that minimize delay and crosstalk. The integration of RL with traditional EDA tools, together with constraint handling, is also analyzed. Experimental results indicate that RL-assisted approaches can outperform classical heuristics by reducing wirelength, timing violations, and congestion while decreasing manual intervention. However, challenges such as state-space explosion, reward shaping, and long training times remain. The paper concludes by discussing future directions, including multi-agent RL, transfer learning across chip designs, and hybrid approaches that combine RL with conventional EDA methods to improve scalability and robustness. This review demonstrates the transformative potential of RL in enhancing auto-placement and routing, paving the way for more efficient and intelligent chip design automation.
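As a toy illustration of the placement-as-RL formulation summarized above, the sketch below frames cell placement as a sequential decision problem: an agent places cells one at a time on a small grid and learns slot choices with a tabular value update. Every detail here (grid size, net list, learning rate, and all names) is an illustrative assumption, not the paper's method; a full DQN or actor-critic agent would replace the lookup table with a neural network over the partial-placement state.

    import random

    # Toy RL placement sketch (illustrative assumptions throughout):
    # an agent places CELLS cells onto a GRID x GRID grid, one per step,
    # and learns via a simple Monte-Carlo value update which slots
    # reduce total half-perimeter wirelength (HPWL).
    GRID = 4
    CELLS = 5
    NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]  # 2-pin nets (cell pairs)
    SLOTS = [(x, y) for x in range(GRID) for y in range(GRID)]

    def hpwl(placement):
        """Half-perimeter wirelength over all nets for a full placement."""
        total = 0
        for a, b in NETS:
            (xa, ya), (xb, yb) = placement[a], placement[b]
            total += abs(xa - xb) + abs(ya - yb)
        return total

    # Q[cell][slot]: estimated return of putting `cell` at `slot`.
    # The state is simplified to the index of the cell being placed,
    # which makes this a per-cell bandit rather than a full MDP solver.
    Q = [[0.0] * len(SLOTS) for _ in range(CELLS)]
    ALPHA, EPSILON = 0.1, 0.2

    for episode in range(5000):
        placement, used, actions = {}, set(), []
        for cell in range(CELLS):
            free = [i for i in range(len(SLOTS)) if i not in used]
            if random.random() < EPSILON:       # explore
                a = random.choice(free)
            else:                               # exploit learned values
                a = max(free, key=lambda i: Q[cell][i])
            used.add(a)
            placement[cell] = SLOTS[a]
            actions.append(a)
        # Terminal reward: shorter total wirelength -> higher reward.
        reward = -hpwl(placement)
        for cell, a in enumerate(actions):
            Q[cell][a] += ALPHA * (reward - Q[cell][a])

    # Greedy rollout with the learned values.
    placement, used = {}, set()
    for cell in range(CELLS):
        a = max((i for i in range(len(SLOTS)) if i not in used),
                key=lambda i: Q[cell][i])
        used.add(a)
        placement[cell] = SLOTS[a]
    print("learned placement:", placement, "HPWL:", hpwl(placement))

The terminal reward here is negative HPWL, the same proxy objective classical placers minimize; congestion or timing penalties of the kind discussed in the abstract would enter as additional weighted reward terms.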

Published

2023-03-01

How to Cite

Malhotra, M. R. (2023). AI-Assisted EDA: Auto-Placement and Routing with RL. International Journal of Research and Applied Innovations, 6(2), 8578-8581. https://doi.org/10.15662/IJRAI.2023.0602002