TY - JOUR
T1 - RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things
AU - Qiu, Huming
AU - Ma, Hua
AU - Zhang, Zhi
AU - Gao, Yansong
AU - Zheng, Yifeng
AU - Fu, Anmin
AU - Zhou, Pan
AU - Abbott, Derek
AU - Al-Sarawi, Said F.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - Currently, a high demand for on-device deep neural network (DNN) model deployment is limited by the large model size, computing-intensive floating-point operations (FLOPS), and intellectual property (IP) infringements (i.e., easy access to model duplication for the avoidance of license payments). One appealing solution to addressing the first two concerns is model quantization, which reduces the model size and uses integer operations commonly supported by microcontrollers (MCUs usually do not support FLOPS). To this end, a 1-bit quantized DNN model or deep binary neural network (BNN) significantly improves the memory efficiency, where each parameter in a BNN model has only 1 bit. However, BNN cannot directly provide IP protection (in particular, the functionality of the model is locked unless there is a license payment). In this article, we propose a reconfigurable BNN (RBNN) to further amplify the memory efficiency for resource-constrained Internet of Things (IoT) devices while naturally protecting the model IP. Generally, RBNN can be reconfigured on demand to achieve any one of M (M > 1) distinct tasks with the same parameter set, thus only a single task determines the memory requirements. In other words, the memory utilization is improved by a factor of M. Our extensive experiments corroborate that up to seven commonly used tasks (M = 7, six of these tasks are image related and the last one is audio) can co-exist (the value of M can be larger). These tasks with a varying number of classes have no or negligible accuracy drop-off (i.e., within 1%) on three binarized popular DNN architectures, including VGG, ResNet, and ReActNet. The tasks span across different domains, e.g., computer vision and audio domains validated herein, with the prerequisite that the model architecture can serve those cross-domain tasks. To fulfill the IP protection of an RBNN model, the reconfiguration can be controlled by both a user key and a device-unique root key generated by the intrinsic hardware fingerprint (e.g., SRAM memory power-up pattern). By doing so, an RBNN model can only be used per paid user per authorized device, thus benefiting both the user and the model provider. The source code is released at https://github.com/LearningMaker/RBNN.
KW - Deep neural network (DNN)
KW - Internet of Things (IoT)
KW - intellectual property (IP) protection
KW - Quantization
KW - reconfigurable binary neural network (RBNN)
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:001006891800013
UR - https://www.scopus.com/pages/publications/85136058726
DO - 10.1109/TCAD.2022.3197499
M3 - Article
SN - 0278-0070
VL - 42
SP - 1185
EP - 1198
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 4
ER -