Technical Name Neural Network Design, Acceleration and Deployment based on HarDNet - A Low Memory Traffic Network
Project Operator National Tsing Hua University
Project Host 林永隆 (Youn-Long Lin)
Summary
Based on HarDNet, a neural network backbone architecture published at the International Conference on Computer Vision (ICCV) 2019, our team will present several technical demonstrations, including:
1. Deployment of HarDNet on GPU (power consumption: above 200 W) [Note: all items other than this one are new technologies added this year]
2. Deployment of HarDNet on FPGA (power consumption: tens of watts)
3. Deployment of HarDNet on lightweight edge devices such as the Google Coral TPU and Intel Movidius VPU (power consumption: below 10 W)

Our demonstrations will show that HarDNet has a corresponding variant for each of these three platforms, which differ greatly in computing power and power consumption, and that each variant is highly competitive in overall performance (speed and accuracy). In particular, for real-time semantic segmentation, HarDNet is ranked first worldwide by the Papers with Code website and has been recognized as the state of the art (SOTA).
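The backbone's efficiency comes from its sparsified shortcut pattern: per the ICCV 2019 paper, layer k in a harmonic dense block takes input from layer k − 2^n whenever 2^n divides k (and the index stays non-negative). A minimal sketch of that connection rule (illustrative only; the function name is ours, not from the paper's code):

```python
def harmonic_links(k):
    """Indices of the layers feeding layer k in a harmonic dense block.

    Per the HarDNet paper, layer k reads layer k - 2**n whenever
    2**n divides k and k - 2**n >= 0.
    """
    links = []
    n = 0
    while (1 << n) <= k:
        if k % (1 << n) == 0:
            links.append(k - (1 << n))
        n += 1
    return links

# Layer 4 reads layers 3, 2, and 0; layer 6 reads only layers 5 and 4.
print(harmonic_links(4))  # [3, 2, 0]
print(harmonic_links(6))  # [5, 4]
```

This power-of-two pattern is what keeps the block's concatenations, and hence its memory traffic, much lower than a fully dense DenseNet block.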

In addition, we will introduce technical extensions of HarDNet along three different dimensions:
1. Exploration of neural network (NN) architectures based on HarDNet and its variants (compressed, secure, etc.)
2. Design of energy-efficient, high-performance NN accelerators
3. Deployment on various platforms and for various applications

For the first dimension, we will introduce compressed HarDNet and secure HarDNet, which respectively re-optimize (compress) HarDNet and harden it so that it is less susceptible to adversarial attacks. For the second, we will propose several efficient, energy-saving neural network accelerator designs that push HarDNet further in performance and even energy consumption. For the third, beyond the computing platforms and applications already considered, we have experimented with additional platforms and applications, extending HarDNet's adaptability and diversity in both deployment and application.
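As context for the compression dimension, one widely used technique is magnitude pruning, which zeroes out the smallest weights before fine-tuning. The sketch below is purely illustrative and is not the team's actual compression method:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Illustrative magnitude pruning on a flat weight list; real pipelines
    prune whole tensors per layer or per channel and then fine-tune.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold at the n_prune-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# The two smallest-magnitude weights are zeroed at 50% sparsity.
print(prune_by_magnitude([0.5, -0.1, 0.3, -0.9], 0.5))  # [0.5, 0.0, 0.0, -0.9]
```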
Scientific Breakthrough
Running on computing platforms as diverse as GPUs, FPGAs, and AI edge devices, HarDNet consistently achieves highly competitive performance in terms of speed and accuracy. In particular, for real-time semantic segmentation, HarDNet is ranked first worldwide and has been recognized as the state of the art (SOTA). Not only have we deployed HarDNet on platforms with very different power budgets, but we have also been applying HarDNet and its variants to a variety of additional computer vision tasks.
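Edge accelerators such as the Coral Edge TPU execute integer-only models, so deployment on the lowest power budget typically involves 8-bit quantization. A minimal sketch of affine (scale/zero-point) int8 quantization, illustrative of the arithmetic rather than of the team's deployment toolchain:

```python
def quantize_int8(x, scale, zero_point=0):
    """Affine quantization: q = clamp(round(x / scale) + zero_point, -128, 127)."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize_int8(q, scale, zero_point=0):
    """Inverse mapping back to a (lossy) real value."""
    return (q - zero_point) * scale

print(quantize_int8(3.2, 0.5))    # 6
print(dequantize_int8(6, 0.5))    # 3.0, i.e. 3.2 with quantization error
print(quantize_int8(100.0, 0.5))  # saturates at 127
```

The scale and zero point are normally calibrated per tensor (or per channel) from activation statistics; the quantization error above is the accuracy cost that quantization-aware tuning tries to recover.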
Industrial Applicability
1. A major Silicon Valley smart-voice chip company has adopted the RNN acceleration solution developed by our team; its high-end AI voice chip taped out in 2020, with an estimated production value of up to USD 100 million.
2. A major Taiwanese memory chip manufacturer is collaborating with our team on AI computing-in-memory technology, opening a new blue ocean for next-generation chips.
3. We have founded a startup, providing momentum for industry and cultivating AI talent for the nation.
Keyword HarDNet (Harmonic DenseNet), Network Architecture, Hardware Accelerator, Edge AI Deployment, Model Compression, Security of Neural Networks, Approximate Computing