Gupta, Pritha (ORCID: https://orcid.org/0000-0002-7277-4633); Drees, Jan Peter (ORCID: https://orcid.org/0000-0002-7982-9908) and Hüllermeier, Eyke (ORCID: https://orcid.org/0000-0002-9944-4108) (2023): Automated Side-Channel Attacks using Black-Box Neural Architecture Search. 18th International Conference on Availability, Reliability and Security, Benevento, Italy, 29 August - 1 September 2023. In: Proceedings of the 18th International Conference on Availability, Reliability and Security, New York, NY, USA: Association for Computing Machinery, pp. 1-11. [PDF, 969kB]

Abstract

The application of convolutional neural networks (CNNs) to hardware side-channel analysis has enabled rapid and adaptable attacks on cryptographic devices such as smart cards and Trusted Platform Modules (TPMs). However, current approaches rely on CNN architectures designed manually by domain experts, which is time-consuming and impractical when attacking new systems.

To overcome this, recent research has explored the use of neural architecture search (NAS) to discover suitable CNN architectures automatically. This approach aims to alleviate the burden on human experts and to enable more efficient exploration of new attack targets. However, these works optimize the architecture using the secret key information from the attack dataset and explore only a limited set of search strategies with one-dimensional CNNs. In this work, we propose a fully black-box NAS approach that uses only the profiling dataset for optimization. Through an extensive experimental parameter study, we investigate which choices for NAS, such as 1-D or 2-D CNN inputs and various search strategies, produce the best results on 10 state-of-the-art datasets for the identity leakage model.
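
To make the black-box setting concrete, the following is a minimal sketch of random-search NAS over 1-D CNN hyperparameters, scored only on a held-out split of the profiling traces (no attack-phase key information). It is an illustrative assumption of how such a loop can look, not the authors' exact pipeline; the toy data, hyperparameter ranges and training budget are placeholders.

    # Hypothetical sketch: random-search NAS over 1-D CNNs for profiling
    # side-channel analysis. All shapes and ranges are illustrative.
    import random
    import torch
    import torch.nn as nn

    def build_cnn(cfg, trace_len, n_classes=256):
        # Stack of Conv1d + ReLU + AvgPool1d blocks, then a small MLP head.
        layers, in_ch, length = [], 1, trace_len
        for filters, kernel in zip(cfg["filters"], cfg["kernels"]):
            layers += [nn.Conv1d(in_ch, filters, kernel, padding=kernel // 2),
                       nn.ReLU(), nn.AvgPool1d(2)]
            in_ch, length = filters, length // 2
        layers += [nn.Flatten(), nn.Linear(in_ch * length, cfg["dense"]),
                   nn.ReLU(), nn.Linear(cfg["dense"], n_classes)]
        return nn.Sequential(*layers)

    def sample_config(max_blocks=4):
        # Draw a random architecture from a small, fixed search space.
        n = random.randint(1, max_blocks)
        return {"filters": [random.choice([8, 16, 32, 64]) for _ in range(n)],
                "kernels": [random.choice([3, 7, 11]) for _ in range(n)],
                "dense": random.choice([64, 128, 256])}

    def evaluate(cfg, x_tr, y_tr, x_va, y_va, epochs=5):
        # Short training run; the score is validation accuracy on the
        # profiling split only (black-box with respect to the attack set).
        model = build_cnn(cfg, x_tr.shape[-1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(x_tr), y_tr).backward()
            opt.step()
        with torch.no_grad():
            return (model(x_va).argmax(1) == y_va).float().mean().item()

    # Toy profiling data: 1000 traces of 700 samples, byte-valued labels.
    x = torch.randn(1000, 1, 700)
    y = torch.randint(0, 256, (1000,))
    x_tr, y_tr, x_va, y_va = x[:800], y[:800], x[800:], y[800:]

    best = max((sample_config() for _ in range(20)),
               key=lambda c: evaluate(c, x_tr, y_tr, x_va, y_va))
    print("selected architecture:", best)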

Our results demonstrate that applying the random search strategy to 1-D inputs achieves a high success rate, enabling retrieval of the correct secret key from a single attack trace on two datasets. This combination matches the attack efficiency of fixed CNN architectures and outperforms them on 4 out of 10 datasets. Our experiments also emphasize the importance of repeated attack evaluations for ML-based solutions to avoid biased performance estimates.
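
As an illustration of why repeated evaluations matter, the sketch below averages the rank of the correct key over many random orderings of the attack traces (guessing entropy), since a single ordering can give a misleadingly optimistic or pessimistic estimate. The probability matrix, hypothesis labels and true key are synthetic placeholders, not data from the paper.

    # Hypothetical sketch: averaged key rank (guessing entropy) over
    # repeated random subsets of attack traces. Inputs are synthetic.
    import numpy as np

    def key_rank(scores, true_key):
        # scores: accumulated log-likelihood per key hypothesis;
        # rank 0 means the true key scores highest.
        order = np.argsort(scores)[::-1]
        return int(np.where(order == true_key)[0][0])

    def guessing_entropy(pred_probs, hyp_labels, true_key,
                         n_traces, n_repeats=100, seed=0):
        # pred_probs: (N, 256) model output probabilities per attack trace.
        # hyp_labels: (N, 256) label each trace would carry under each key
        # hypothesis (target-specific; assumed precomputed here).
        rng = np.random.default_rng(seed)
        log_p = np.log(pred_probs + 1e-36)
        ranks = []
        for _ in range(n_repeats):
            idx = rng.choice(len(pred_probs), size=n_traces, replace=False)
            scores = np.zeros(256)
            for i in idx:
                scores += log_p[i, hyp_labels[i]]
            ranks.append(key_rank(scores, true_key))
        return float(np.mean(ranks))

    # Synthetic example: 500 attack traces, identity leakage model.
    rng = np.random.default_rng(1)
    probs = rng.dirichlet(np.ones(256), size=500)
    hyp_labels = rng.integers(0, 256, size=(500, 256))
    print(guessing_entropy(probs, hyp_labels, true_key=42, n_traces=50))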
