Deep normalization for speaker vectors

Yunqi Cai, Lantian Li, Andrew Abel, Xiaoyan Zhu, Dong Wang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Deep speaker embedding has demonstrated state-of-the-art performance in speaker recognition tasks. However, one potential issue with this approach is that the speaker vectors derived from deep embedding models tend to be non-Gaussian for each individual speaker, and non-homogeneous across the distributions of different speakers. These irregular distributions can seriously impact speaker recognition performance, especially with the popular PLDA scoring method, which assumes a homogeneous Gaussian distribution. In this article, we argue that deep speaker vectors require deep normalization, and propose a deep normalization approach based on a novel discriminative normalization flow (DNF) model. We demonstrate the effectiveness of the proposed approach with experiments using the widely used SITW and CNCeleb corpora. In these experiments, the DNF-based normalization delivered substantial performance gains and also showed strong generalization capability in out-of-domain tests.
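To make the idea concrete, the sketch below illustrates the change-of-variables principle that normalization flows rest on: an invertible map sends irregular vectors to a latent space where they follow a standard Gaussian, and the log-density in the original space is recovered via the Jacobian determinant. This is a minimal, illustrative example using a single affine layer fitted by whitening; it is not the paper's DNF model, and all names (`flow_forward`, the toy data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "speaker vectors": 2-D embeddings with correlation and a shift,
# standing in for the non-Gaussian vectors a deep embedding model emits.
x = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.8], [0.0, 0.5]]) + 3.0

# The simplest possible flow step: an invertible affine map z = A (x - mu),
# fitted here from whitening statistics (a real DNF stacks learned,
# non-linear invertible layers instead).
mu = x.mean(axis=0)
L = np.linalg.cholesky(np.cov(x, rowvar=False))
A = np.linalg.inv(L)  # z = A (x - mu) has (near-)identity covariance

def flow_forward(x):
    """Map x to latent z; return z and the per-sample log|det Jacobian|."""
    z = (x - mu) @ A.T
    log_det = np.log(np.abs(np.linalg.det(A)))  # constant for an affine map
    return z, log_det

z, log_det = flow_forward(x)

# Change of variables: log p(x) = log N(z; 0, I) + log|det J|.
d = x.shape[1]
log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * d * np.log(2.0 * np.pi)
log_px = log_pz + log_det

# After the flow, the latent vectors are zero-mean with identity covariance,
# matching the homogeneous Gaussian assumption PLDA scoring relies on.
```

The design point carried over from the paper is only this: rather than hoping the embedding space is Gaussian, one trains an invertible transform so that the latent space is Gaussian by construction, and scores there.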

Original language: English
Article number: 9296778
Pages (from-to): 733-744
Number of pages: 12
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 29
Early online date: 17 Dec 2020
DOIs
Publication status: Published - 1 Feb 2021

Keywords

  • normalization flow
  • speaker embedding
  • speaker recognition
  • training
  • transforms
  • task analysis
  • covariance matrices
  • probabilistic logic
  • dimensionality reduction

