Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models
Recent studies show that deployed on-device deep learning (DL) models, such as those of TensorFlow Lite (TFLite), can be easily extracted from real-world applications and devices by attackers and used to mount adversarial and other attacks. Although securing deployed on-device DL models has gained increasing attention, no existing method can fully prevent these attacks. In contrast, traditional software protection techniques for conventional code are mature and widely deployed. If on-device models could be implemented as pure code, such as C++, it would become possible to reuse these existing, robust software protection techniques. However, due to the complexity of DL models, no automatic method exists to translate DL models into pure code. To fill this gap, we propose a novel method, CustomDLCoder, which automatically extracts on-device DL model information and synthesizes a customized executable program for a wide range of DL models. CustomDLCoder first parses the DL model, extracts its backend computing code, configures the extracted code, and then generates a customized program that implements and deploys the DL model without any explicit model representation. The synthesized program hides model information in the deployment environment since it retains no explicit model representation, preventing many attacks on the DL model. In addition, it improves ML performance because the customized code removes the model parsing and preprocessing steps and retains only the data computation. Our experimental results show that CustomDLCoder improves model security by disabling on-device model sniffing. Compared with the original on-device platform (i.e., TFLite), our method accelerates model inference by 21.0% and 24.3% on x86-64 and ARM64 platforms, respectively. Most importantly, it significantly reduces memory consumption, by 68.8% on x86-64 and 36.0% on ARM64.
Thu 19 Sep
15:30 - 17:10 | Analyzing Neural Models (Technical Papers) at EI 9 Hlawka. Chair(s): Saeid Tizpaz-Niari (University of Texas at El Paso)
15:30 (20m, Talk) | Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models. Technical Papers. Mingyi Zhou (Monash University), Xiang Gao (Beihang University), Pei Liu (CSIRO's Data61), John Grundy (Monash University), Chunyang Chen (TU Munich), Xiao Chen (University of Newcastle), Li Li (Beihang University). DOI
15:50 (20m, Talk) | Decomposition of Deep Neural Networks into Modules via Mutation Analysis. Technical Papers. Ali Ghanbari (Auburn University). DOI
16:10 (20m, Talk) | Large Language Models Can Connect the Dots: Exploring Model Optimization Bugs with Domain Knowledge-Aware Prompts. Technical Papers. Hao Guan (University of Queensland; Southern University of Science and Technology), Guangdong Bai (University of Queensland), Yepang Liu (Southern University of Science and Technology). DOI
16:30 (20m, Talk) | DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation. Technical Papers. Mingke Yang (ShanghaiTech University), Yuqi Chen (ShanghaiTech University), Yi Liu (Nanyang Technological University), Ling Shi (Nanyang Technological University). DOI
16:50 (20m, Talk) | Evaluating Deep Neural Networks in Deployment: A Comparative Study (Replicability Study). Technical Papers. Eduard Pinconschi (Carnegie Mellon University), Divya Gopinath (KBR; NASA Ames), Rui Abreu (INESC-ID; University of Porto), Corina S. Păsăreanu (Carnegie Mellon University; NASA Ames). DOI