
RESEARCH

Semiconductor System Lab



K-GLASS II


Overview

Wearable head-mounted display (HMD) smart devices are emerging as a smartphone substitute due to their ease of use and suitability for advanced applications such as gaming and augmented reality (AR). Most current HMD systems suffer from: 1) a lack of rich user interfaces, 2) short battery life, and 3) heavy weight. Although current HMDs (e.g., Google Glass) use a touch panel and voice commands as the interface, such interfaces are merely smartphone extensions and are not optimized for HMDs. Recently, gaze was proposed as an HMD user interface, but gaze alone cannot realize a natural user interface and experience (UI/UX) due to its limited interactivity and lengthy gaze-calibration time (several minutes). In this work, gesture and speech recognition are proposed as a natural UI/UX, based on: 1) speech pre-processing, consisting of 2-channel independent component analysis (ICA), speech selection, and noise cancellation, and 2) gesture pre-processing, consisting of depth/color-map generation, hand detection, hand segmentation, and noise cancellation. This paper presents a low-power natural UI/UX processor with an embedded deep-learning core (NINEX) that provides wearable AR for HMD users without calibration, and it achieves higher recognition accuracy than previous work.
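The speech pre-processing chain above starts with 2-channel ICA, which unmixes the two microphone signals into the user's voice and background noise. As a rough software illustration only, here is a textbook FastICA with a tanh contrast and deflation, written from scratch in NumPy; the function name and all parameters are illustrative and do not reflect the chip's actual datapath:

```python
import numpy as np

def separate_two_channels(x, n_iter=200, seed=0):
    """Unmix a 2 x T two-microphone recording into 2 independent
    sources with a minimal deflation FastICA (tanh contrast)."""
    # Center and whiten the mixture so components are uncorrelated
    # with unit variance.
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    z = (e @ np.diag(d ** -0.5) @ e.T) @ x  # whitened signals

    rng = np.random.default_rng(seed)
    w_all = np.zeros((2, 2))
    for i in range(2):
        w = rng.standard_normal(2)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ z
            g, g_prime = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
            # FastICA fixed-point update: w <- E{z g(w'z)} - E{g'(w'z)} w
            w_new = (z * g).mean(axis=1) - g_prime.mean() * w
            # Deflation: stay orthogonal to components already found.
            w_new -= w_all[:i].T @ (w_all[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-8
            w = w_new
            if converged:
                break
        w_all[i] = w
    return w_all @ z  # estimated sources, 2 x T (order/sign arbitrary)
```

Note that ICA recovers the sources only up to permutation and scale, which is why the overview lists a separate "speech selection" step to pick out which recovered component is the user's voice.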

Implementation results


 
Performance comparison

Architecture
Features

  - 5-stage Pipelined Hand Segmentation Core (PHSC)

  - User’s Voice Activated Speech Separation Core (USSC)

  - Dropout Deep Learning Engine (DDLE)

  - True Random Number Generator (TRNG)
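Among these features, the Dropout Deep Learning Engine pairs naturally with the True Random Number Generator: random bits decide which activations to zero during dropout training. As a software sketch only (inverted dropout in NumPy, with NumPy's PRNG standing in for a hardware TRNG; the function name is hypothetical, not the chip's interface):

```python
import numpy as np

def dropout_forward(activations, drop_prob, rng):
    """Inverted dropout: zero each activation with probability
    drop_prob and rescale the survivors by 1/(1 - drop_prob) so the
    expected activation is unchanged (so inference needs no rescale)."""
    mask = rng.random(activations.shape) >= drop_prob  # keep-bits
    return activations * mask / (1.0 - drop_prob), mask
```

In hardware, the mask bits would come from the TRNG instead of a software PRNG; dropout regularizes the embedded deep-learning core's training by preventing co-adaptation of hidden units.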

Related Papers

  - ISSCC 2016 [pdf] 

Address: #1233, School of Electrical Engineering, KAIST, 291 Daehak-ro (373-1 Guseong-dong), Yuseong-gu, Daejeon 34141, Republic of Korea
Tel: +82-42-350-8068 · Fax: +82-42-350-3410 · E-mail: sslmaster@kaist.ac.kr
© SSL. All Rights Reserved. Design by NSTAR