
# CTF-MNMF demo

Author: Taihui Wang
Email: wangtaihui@mail.ioa.ac.cn

Convolutive Transfer Function-Based Multichannel Nonnegative Matrix Factorization for Overdetermined Blind Source Separation

Taihui Wang 1,2, Feiran Yang 3, Member, IEEE, and Jun Yang 1,2, Senior Member, IEEE

1 Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China

2 University of Chinese Academy of Sciences, Beijing, China

3 State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China

## Abstract

Most multichannel blind source separation (BSS) approaches rely on a spatial model to encode the transfer functions from sources to microphones and a source model to encode the source power spectral density. The rank-1 spatial model has been widely exploited in independent component analysis (ICA), independent vector analysis (IVA), and independent low-rank matrix analysis (ILRMA). The full-rank spatial model is also considered in many BSS approaches, such as full-rank spatial covariance matrix analysis (FCA), multichannel nonnegative matrix factorization (MNMF), and FastMNMF, which can improve the separation performance in the case of long reverberation times. This paper proposes a new MNMF framework based on the convolutive transfer function (CTF) for overdetermined BSS. The time-domain convolutive mixture model is approximated by a frequency-wise convolutive mixture model instead of the widely adopted frequency-wise instantaneous mixture model. The iterative projection algorithm is adopted to estimate the demixing matrix, and the multiplicative update rule is employed to estimate nonnegative matrix factorization (NMF) parameters. Finally, the source image is reconstructed using a multichannel Wiener filter. The advantages of the proposed method are twofold. First, the CTF approximation enables us to use a short window to represent long impulse responses. Second, the full-rank spatial model can be derived based on the CTF approximation and slowly time-variant source variances, and close relationships between the proposed method and ILRMA, FCA, MNMF and FastMNMF are revealed. Extensive experiments show that the proposed algorithm achieves a higher separation performance than ILRMA and FastMNMF in reverberant environments.
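
To make the estimation procedure more concrete, below is a minimal NumPy sketch of one joint update cycle in the spirit of the paper: iterative projection (IP) for the demixing matrices and multiplicative update (MU) rules for the NMF source model. It is deliberately simplified to the frequency-wise instantaneous (rank-1, ILRMA-style) case with as many sources as channels and uses random stand-in data, so it only illustrates the form of the updates; it is not the authors' CTF-MNMF implementation, the variable names are chosen for illustration, and the CTF convolutive mixture model, the overdetermined setting, and the final multichannel Wiener filter are omitted.

```python
# Minimal sketch of one IP + MU update cycle (ILRMA-style simplification).
# NOT the authors' CTF-MNMF code: the CTF convolutive model, the overdetermined
# case, and the multichannel Wiener filter post-processing are omitted.
import numpy as np

rng = np.random.default_rng(0)
F, N_FRAMES, M, K = 257, 128, 2, 8   # frequency bins, time frames, channels (= sources here), NMF bases
# Stand-in for the mixture STFT, shape (F, N_FRAMES, M); a real run would use the STFT of the recording.
X = rng.standard_normal((F, N_FRAMES, M)) + 1j * rng.standard_normal((F, N_FRAMES, M))

W = np.tile(np.eye(M, dtype=complex), (F, 1, 1))          # demixing matrix per frequency bin
B = np.abs(rng.standard_normal((M, F, K))) + 1e-2         # NMF bases per source
A = np.abs(rng.standard_normal((M, K, N_FRAMES))) + 1e-2  # NMF activations per source
eps = 1e-10

for _ in range(20):
    Y = np.einsum('fnm,ftm->fnt', W, X)                   # current source estimates, shape (F, M, N_FRAMES)
    P = np.abs(Y) ** 2                                    # their power spectrograms

    for n in range(M):
        # --- MU updates of the NMF source model (Itakura-Saito divergence) ---
        R = B[n] @ A[n] + eps                             # modelled source variance, shape (F, N_FRAMES)
        B[n] *= np.sqrt(((P[:, n] / R**2) @ A[n].T) / ((1.0 / R) @ A[n].T + eps))
        R = B[n] @ A[n] + eps
        A[n] *= np.sqrt((B[n].T @ (P[:, n] / R**2)) / (B[n].T @ (1.0 / R) + eps))
        R = B[n] @ A[n] + eps

        # --- IP update of the n-th demixing filter at every frequency ---
        for f in range(F):
            U = (X[f].T @ (X[f].conj() / R[f, :, None])) / N_FRAMES  # weighted spatial covariance, (M, M)
            w = np.linalg.solve(W[f] @ U, np.eye(M)[:, n])           # w = (W U)^{-1} e_n
            w /= np.sqrt(np.real(w.conj() @ U @ w)) + eps            # scale normalisation
            W[f, n, :] = w.conj()                                    # demixing row is w^H

Y = np.einsum('fnm,ftm->fnt', W, X)                       # separated spectra after the updates
```

In the full method, the frequency-wise instantaneous model used above is replaced by the CTF-based convolutive model, and the separated spectra are then used to build a multichannel Wiener filter that reconstructs the source images.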

## Separated audio samples

The following table shows an example of the two-source music separation task, where the public NMF model is used and the number of bases is set to 32. The reverberation time is 470 ms, and the number of channels is 6.

|  | Source 1 | Source 2 |
| --- | --- | --- |
| Clean reverberant sources | (audio) | (audio) |
| Mixture at the first microphone | (audio) | (audio) |
| Separated sources using CTF-MNMF | (audio) | (audio) |

The following table shows an example of the two-source speech separation task, where the private NMF model is used and the number of bases is set to 2. The reverberation time is 470 ms, and the number of channels is 6.

|  | Source 1 | Source 2 |
| --- | --- | --- |
| Clean reverberant sources | (audio) | (audio) |
| Mixture at the first microphone | (audio) | (audio) |
| Separated sources using CTF-MNMF | (audio) | (audio) |

The following table shows an example of the two-source speech separation task, where the private NMF model is used and the number of bases is set to 2. The reverberation time is 1300 ms, and the number of channels is 6.

|  | Source 1 | Source 2 |
| --- | --- | --- |
| Clean reverberant sources | (audio) | (audio) |
| Mixture at the first microphone | (audio) | (audio) |
| Separated sources using CTF-MNMF | (audio) | (audio) |

The following table shows an example of the four-source speech separation task, where the private NMF model is used and the number of bases is set to 2. The reverberation time is 470 ms, and the number of channels is 8.

|  | Source 1 | Source 2 | Source 3 | Source 4 |
| --- | --- | --- | --- | --- |
| Clean sources | (audio) | (audio) | (audio) | (audio) |
| Mixture at the first microphone | (audio) | (audio) | (audio) | (audio) |
| Separated sources using CTF-MNMF | (audio) | (audio) | (audio) | (audio) |

## Other samples

For more audio samples, the reader is referred to [TaihuiWang/CTF-MNMF](https://github.com/TaihuiWang/CTF-MNMF) on GitHub.