Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification

Authors

Michelsanti D., Tan Z.-H.

Conference

Interspeech 2017

Abstract

Improving speech system performance in noisy environments remains a challenging task, and speech enhancement (SE) is an effective technique for addressing it. Motivated by the promising results of generative adversarial networks (GANs) in a variety of image processing tasks, we explore the potential of conditional GANs (cGANs) for SE, and in particular, we make use of the image processing framework proposed by Isola et al. [1] to learn a mapping from the spectrogram of noisy speech to an enhanced counterpart. The SE cGAN consists of two networks, trained in an adversarial manner: a generator that tries to enhance the input noisy spectrogram, and a discriminator that tries to distinguish between enhanced spectrograms provided by the generator and clean ones from the database, using the noisy spectrogram as a condition. We evaluate the performance of the cGAN method in terms of perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and equal error rate (EER) of speaker verification (an example application). Experimental results show that the cGAN method overall outperforms the classical short-time spectral amplitude minimum mean square error (STSA-MMSE) SE algorithm, and is comparable to a deep neural network-based SE approach (DNN-SE).
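
The abstract describes a pix2pix-style conditional GAN: a generator maps noisy spectrograms to enhanced ones, while a discriminator, conditioned on the noisy spectrogram, tries to tell enhanced spectrograms apart from clean ones. The following is a minimal PyTorch sketch of such an adversarial training step; the network architectures, hyper-parameters, and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a pix2pix-style conditional GAN for spectrogram enhancement.
# Shapes, architectures, and hyper-parameters are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noisy spectrogram patch to an enhanced one (stand-in for a U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )
    def forward(self, noisy):
        return self.net(noisy)

class Discriminator(nn.Module):
    """Scores (spectrogram, noisy condition) pairs as clean vs. enhanced."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # PatchGAN-style logits
        )
    def forward(self, spec, noisy):
        return self.net(torch.cat([spec, noisy], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(noisy, clean, lambda_l1=100.0):
    # Discriminator: real pairs (clean, noisy) vs. fake pairs (enhanced, noisy).
    enhanced = G(noisy)
    d_real = D(clean, noisy)
    d_fake = D(enhanced.detach(), noisy)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the clean target (L1 term).
    d_fake = D(enhanced, noisy)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(enhanced, clean)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Random tensors standing in for batches of log-magnitude spectrogram patches.
noisy = torch.randn(4, 1, 256, 256)
clean = torch.randn(4, 1, 256, 256)
print(train_step(noisy, clean))
```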

Links

Full text · Poster

Tags

generative adversarial networks · speech enhancement · speaker verification