MCUNet: Tiny Deep Learning on IoT Devices

By Ji Lin et al.
Published on Nov. 19, 2020

Table of Contents

1 Introduction
2 Background
3 MCUNet: System-Algorithm Co-Design

Summary

Machine learning on tiny IoT devices based on microcontroller units (MCUs) is appealing but challenging: microcontrollers have far less memory than mobile phones, which makes deep learning deployment difficult. MCUNet is a framework that jointly designs an efficient neural architecture (TinyNAS) and a lightweight inference engine (TinyEngine) to enable ImageNet-scale inference on microcontrollers. By running deep learning models on these tiny devices, the scope of AI applications can be greatly expanded. MCUNet achieves >70% ImageNet top-1 accuracy on an off-the-shelf commercial microcontroller while using significantly less SRAM and Flash than quantized MobileNetV2 and ResNet-18. The study suggests that the era of always-on tiny machine learning on IoT devices has arrived. The paper discusses the challenges of deep learning deployment on microcontrollers, the importance of memory-efficient inference libraries, and MCUNet's system-algorithm co-design approach.
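To make the SRAM/Flash constraint concrete, here is a minimal sketch (not the paper's TinyNAS or TinyEngine code) of how one might check whether a candidate int8 model fits a microcontroller's memory budget; the layer shapes and the 320 kB SRAM / 1 MB Flash budget are hypothetical example values.

```python
def activation_bytes(h, w, c):
    """Size of one int8 activation tensor (1 byte per element)."""
    return h * w * c

def fits_mcu(layers, sram_budget=320 * 1024, flash_budget=1024 * 1024):
    """Rough feasibility check for an int8 model on an MCU.

    layers: list of (in_shape, out_shape, n_weights) tuples.
    Peak SRAM is approximated by the largest input+output activation pair
    (only one layer's buffers are live at a time); Flash holds all weights
    (1 byte per int8 weight). Returns (fits, peak_sram, total_flash).
    """
    peak_sram = max(activation_bytes(*i) + activation_bytes(*o)
                    for i, o, _ in layers)
    total_flash = sum(w for _, _, w in layers)
    fits = peak_sram <= sram_budget and total_flash <= flash_budget
    return fits, peak_sram, total_flash

# Hypothetical example: first few layers of a small image model at 96x96 input.
example_layers = [
    ((96, 96, 3),  (48, 48, 16), 3 * 3 * 3 * 16),
    ((48, 48, 16), (24, 24, 32), 3 * 3 * 16 * 32),
    ((24, 24, 32), (12, 12, 64), 3 * 3 * 32 * 64),
]
print(fits_mcu(example_layers))
```

A joint design like MCUNet's goes further than such a static check: the architecture search and the inference engine are co-optimized so that candidate models are evaluated against what the runtime can actually fit and execute efficiently.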