“LLM for SoC Security: A Paradigm Shift” (2023)

AUTHORS:

D. Saha, S. Tarek, K. Yahyaei, S. Kumar, J. Zhuo, M. Tehranipoor, and F. Farahmandi

ABSTRACT:

As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, incorporating security into the SoC design flow poses significant challenges. Existing security solutions are inadequate for effectively verifying modern SoC designs because of their limitations in scalability, comprehensiveness, and adaptability. Meanwhile, large language models (LLMs) have achieved remarkable success in language understanding, advanced reasoning, and program synthesis tasks. Recognizing this opportunity, our research explores leveraging the emergent capabilities of generative pre-trained transformers (GPTs) to close the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. By integrating LLMs into the SoC security verification paradigm, we open a new frontier of possibilities and challenges for ensuring the security of increasingly complex SoCs. This paper offers an in-depth analysis of existing work, presents practical case studies, and reports comprehensive experiments. We also discuss the achievements, prospects, and challenges of employing LLMs in different SoC security verification tasks.
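
The abstract does not spell out a concrete workflow, but as a rough illustration of the kind of task the paper targets, the sketch below asks a chat LLM to review a short Verilog module for a security weakness. This is not the authors' methodology: the OpenAI client usage, model name, prompt wording, and RTL example are all assumptions made here for illustration only.

```python
# Illustrative sketch only: prompting an LLM to flag a security weakness in RTL.
# Uses the OpenAI Python SDK (>= 1.0); the model name and prompt are assumptions,
# not the setup described in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rtl_snippet = """
module lock_ctrl(input clk, input rst, input [7:0] key, output reg unlocked);
  // Hard-coded unlock key: use of hard-coded credentials (CWE-798-style weakness).
  always @(posedge clk or posedge rst) begin
    if (rst) unlocked <= 1'b0;
    else if (key == 8'hA5) unlocked <= 1'b1;
  end
endmodule
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You are a hardware security reviewer. Identify security "
                    "weaknesses in the given Verilog module and suggest fixes."},
        {"role": "user", "content": rtl_snippet},
    ],
)

# Print the model's free-form security review of the snippet.
print(response.choices[0].message.content)
```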