

CNPU: Mobile Deep RL Accelerator


Overview

Recently, deep neural networks (DNNs) have been actively used not only for object recognition but also for action control, so that autonomous systems, such as robots, can perform human-like behaviors and operations. Unlike recognition tasks, action control requires real-time operation, and off-loading learning to a remote server over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. Fig. 7.4.1(a) shows an example of a robot agent that uses a pre-trained DNN without RL, and Fig. 7.4.1(b) depicts an autonomous robot agent that learns continuously in its environment using RL. The agent without RL falls down when the land slope changes, whereas the RL-based agent iteratively collects walking experiences and learns to walk even as the slope changes.
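
As a point of reference for readers unfamiliar with RL, the sketch below shows the basic loop the paragraph describes: an agent repeatedly acts in its environment, observes a reward, and updates its policy locally from that experience. The toy SlopeWalk environment, its state/action encoding, and the tabular Q-learning update are illustrative assumptions only; the CNPU targets deep RL with neural-network policies, not a lookup table.

    import random

    # Minimal sketch of the RL loop described above: an agent repeatedly acts,
    # observes a reward, and updates its policy from that experience.
    # "SlopeWalk" is a hypothetical toy stand-in for the robot-walking task,
    # not the benchmark used in the paper.

    class SlopeWalk:
        """States are slope levels 0..4; action 0 = short step, 1 = long step.
        The correct step length depends on the slope."""
        def __init__(self):
            self.n_states, self.n_actions = 5, 2

        def reset(self):
            self.state = random.randrange(self.n_states)
            return self.state

        def step(self, action):
            # Reward +1 when the step length matches the slope, -1 otherwise.
            correct = 1 if self.state >= 3 else 0
            reward = 1.0 if action == correct else -1.0
            self.state = random.randrange(self.n_states)   # the slope changes
            return self.state, reward

    def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
        env = SlopeWalk()
        q = [[0.0] * env.n_actions for _ in range(env.n_states)]
        for _ in range(episodes):
            s = env.reset()
            for _ in range(10):                     # short rollout per episode
                if random.random() < eps:           # explore
                    a = random.randrange(env.n_actions)
                else:                               # exploit current knowledge
                    a = max(range(env.n_actions), key=lambda i: q[s][i])
                s2, r = env.step(a)
                # Q-learning update: learn locally from the collected experience.
                q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
                s = s2
        return q

    if __name__ == "__main__":
        print("Learned Q-table:", train())

With this toy reward structure, the learned table favors the long-step action on steep slopes, mirroring how the RL-based agent above adapts as the land slope changes.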

Implementation results

[Figure 7: Implementation results]

Performance comparison

[Figure 6: Performance comparison]

Architecture

[Figure 2: Architecture]

Features

  - Energy-efficient reconfigurable DRL processor

  - Experience compressor for reducing external memory access (EMA)

  - Adaptive data reuse with a transposable PE array (see the sketch after this list)
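
To make the last feature concrete, here is a rough software analogy for a transposable PE array: the forward pass reads the weight buffer row by row, while the back-propagation pass reads the same buffer column by column, so on-chip training does not need a second, physically transposed copy of the weights. The matrix sizes and values are made up, and this is only a conceptual sketch, not the accelerator's actual PE-array dataflow.

    # Conceptual analogy for a transposable PE array: one weight buffer serves
    # the forward pass (row-major traversal) and the backward pass
    # (column-major, i.e. transposed, traversal). Hypothetical sizes and values.

    W = [[0.1, 0.2, 0.3],
         [0.4, 0.5, 0.6]]        # 2x3 weight buffer, stored once

    def forward(W, x):
        # y = W @ x : read W row by row
        return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

    def backward(W, dy):
        # dx = W^T @ dy : read the same buffer column by column
        rows, cols = len(W), len(W[0])
        return [sum(W[i][j] * dy[i] for i in range(rows)) for j in range(cols)]

    x = [1.0, 2.0, 3.0]
    y = forward(W, x)            # forward activation
    dy = [0.5, -1.0]             # made-up upstream gradient
    dx = backward(W, dy)         # input gradient without an explicit transpose
    print(y, dx)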


Related Papers

  - ISSCC 2019 [pdf] 
