Sleep apnea detection via depth video and audio feature learning

Cheng Yang, Gene Cheung, Vladimir Stankovic, Kevin Chan, Nobutaka Ono

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)
144 Downloads (Pure)


Obstructive sleep apnea, characterized by repetitive obstruction of the upper airway during sleep, is a common sleep disorder that can significantly compromise sleep quality and quality of life in general. Obstructive respiratory events can be detected by attended in-laboratory or unattended ambulatory sleep studies, but such studies require many attachments to a patient's body to track respiratory and physiological changes, which can be uncomfortable and themselves compromise the patient's sleep quality. In this paper, we propose to record depth video and audio of a patient during sleep using a Microsoft Kinect camera, and to extract relevant features that correlate with obstructive respiratory events scored manually by a scientific officer from data collected by the Philips Alice6 LDxS system commonly used in sleep clinics. Specifically, we first propose an alternating-frame video recording scheme, in which a different 8 of the 11 available bits in the captured depth images are extracted at different instants for H.264 video encoding; at the decoder, the 3 uncoded bits in each frame are recovered via block-based search. Next, we perform temporal denoising on the decoded depth video using a motion vector graph smoothness prior, so that undesirable flickering is removed without blurring sharp edges. Given the denoised depth video, we track the patient's chest and abdominal movements with a dual-ellipse model. Finally, we extract ellipse model features via a wavelet packet transform (WPT), extract audio features via non-negative matrix factorization (NMF), and feed them to a classifier to detect respiratory events. Experimental results show, first, that our depth video compression scheme outperforms a competitor that records only the 8 most significant bits. Second, we show that our graph-based temporal denoising scheme reduces the flickering effect without over-smoothing.
Third, we show that, using our extracted depth video and audio features, our trained classifiers can deduce with high accuracy the respiratory events scored manually from the Alice6 LDxS data.
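The alternating-frame recording scheme described above can be illustrated with a minimal sketch. The abstract does not specify which 8-bit subsets are taken at which instants, so this example assumes a simple alternation between the 8 most significant bits (even frames) and the 8 least significant bits (odd frames) of each 11-bit Kinect depth value; the function name `pack_8_of_11` and the block-free recovery step are hypothetical illustrations, not the authors' exact implementation (the paper recovers the 3 missing bits via block-based search).

```python
import numpy as np

def pack_8_of_11(depth, frame_idx):
    """Select 8 of the 11 depth bits for 8-bit H.264 input.

    Assumed alternation: even frames keep the 8 MSBs (bits 10..3),
    odd frames keep the 8 LSBs (bits 7..0). This is an illustrative
    choice, not necessarily the partition used in the paper.
    """
    depth = depth.astype(np.uint16) & 0x7FF        # clamp to 11 bits
    if frame_idx % 2 == 0:
        return (depth >> 3).astype(np.uint8)       # MSBs, drop low 3 bits
    return (depth & 0xFF).astype(np.uint8)         # LSBs, drop high 3 bits

# Toy 2x2 depth frame with 11-bit values (Kinect depth in mm).
d = np.array([[1234, 2047],
              [   0,  987]])

even = pack_8_of_11(d, 0)   # e.g. 1234 >> 3  = 154
odd  = pack_8_of_11(d, 1)   # e.g. 1234 & 255 = 210
```

Encoding only 8 bits per frame keeps the stream compatible with standard 8-bit H.264, while alternating the bit subsets lets the decoder reconstruct the full 11-bit depth by combining temporally adjacent frames, which is why it can outperform simply discarding the 3 LSBs in every frame.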
Original language: English
Pages (from-to): 822-835
Number of pages: 15
Journal: IEEE Transactions on Multimedia
Issue number: 4
Early online date: 9 Nov 2016
Publication status: Published - 1 Apr 2017


  • obstructive sleep apnea
  • sleep quality
  • obstructive respiratory event
  • sleep study
  • sleep monitoring


