Bregman reported that the human auditory system exploits four psychoacoustic heuristic regularities related to acoustic events to solve the problem of Auditory Scene Analysis (ASA) [1]. If a segregation model were constructed using constraints related to these heuristic regularities, it would be applicable not only as a preprocessor for robust speech recognition systems but also to various other types of signal processing.
Several ASA-based segregation models already exist. They fall into two main types, based on either bottom-up [2] or top-down [3,6] processing. All of these models use some of the four regularities and employ the amplitude (or power) spectrum as the acoustic feature; thus they cannot completely extract the desired signal from a noisy signal when the signal and noise occupy the same frequency region.
In contrast, we have discussed the need to use not only the amplitude spectrum but also the phase spectrum in order to completely extract the desired signal from a noisy signal, addressing the problem of segregating two acoustic sources [8].
This problem is defined as follows [8].
First, only the mixed signal f(t), where
f(t)=f1(t)+f2(t), can be observed.
Next, f(t) is decomposed into its frequency components by a filterbank (the number of channels is K).
The output of the k-th channel, Xk(t), is represented by

Xk(t) = Sk(t)exp(jφk(t)) = Ak(t)exp(jθ1k(t)) + Bk(t)exp(jθ2k(t)),

where Sk(t) and φk(t) are the observed amplitude envelope and phase of the channel output, and Ak(t)exp(jθ1k(t)) and Bk(t)exp(jθ2k(t)) are the components originating from f1(t) and f2(t), respectively.
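As an illustrative sketch (not the paper's implementation), the decomposition of a mixed signal into channel outputs, each with an amplitude envelope and an instantaneous phase, can be carried out as follows; the filterbank design, channel count, and center frequencies here are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Hypothetical example: only the mixture f(t) = f1(t) + f2(t) is observed;
# decompose it with a K-channel bandpass filterbank and express each
# channel output as an analytic signal Sk(t) * exp(j * phi_k(t)).
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
f1 = np.sin(2 * np.pi * 440 * t)             # desired source (example)
f2 = 0.5 * np.sin(2 * np.pi * 1000 * t)      # interfering source (example)
f = f1 + f2                                  # only f(t) is observable

centers = [250, 500, 1000, 2000]             # assumed center frequencies (Hz); K = 4
envelopes = {}
for fc in centers:
    band = [fc / np.sqrt(2), fc * np.sqrt(2)]    # one-octave band around fc
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    xk = sosfiltfilt(sos, f)                 # k-th channel output Xk(t)
    analytic = hilbert(xk)
    Sk = np.abs(analytic)                    # amplitude envelope Sk(t)
    phik = np.unwrap(np.angle(analytic))     # instantaneous phase phi_k(t)
    envelopes[fc] = Sk.mean()
```

Note that the channel centered near the 440 Hz component carries most of that component's energy, while channels far from both sources carry little.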
This problem is an ill-posed inverse problem because there are no equations for determining the two instantaneous phases θ1k(t) and θ2k(t). We have therefore proposed a method of solving it using constraints related to the four regularities [8]. Although this method could extract a synthesized vowel from a noisy synthesized vowel with high accuracy, it assumed that the fundamental frequency was constant and known, and that dθ1k(t)/dt = ωk, where ωk is the center frequency of the k-th channel. This constraint means that the frequency of the signal component passing through each channel coincides with that channel's center frequency. It is therefore difficult to extract real speech from noisy speech using this method, because the fundamental frequency of speech fluctuates, so multiples of the fundamental frequency cannot always coincide with the center frequencies of the channels.
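The ill-posedness can be checked numerically. The following toy example (our own construction, not taken from [8]) builds two different amplitude/phase pairs whose sums give exactly the same channel output, so the observation alone cannot determine the decomposition.

```python
import numpy as np

# Toy demonstration that splitting one channel output into two
# amplitude/phase components is not unique without extra constraints.
fs = 8000
t = np.arange(0, 0.01, 1 / fs)
wk = 2 * np.pi * 500                          # hypothetical center frequency (rad/s)

# Decomposition 1: Xk(t) = A1 * exp(j*theta1k) + B1 * exp(j*theta2k)
A1, B1 = 1.0, 0.5
Xk = A1 * np.exp(1j * wk * t) + B1 * np.exp(1j * (wk * t + np.pi / 3))

# Decomposition 2: pick a different first component and absorb the
# remainder into another valid amplitude/phase pair for the second.
A2 = 0.7
resid = Xk - A2 * np.exp(1j * wk * t)
B2, theta2 = np.abs(resid), np.angle(resid)
Xk2 = A2 * np.exp(1j * wk * t) + B2 * np.exp(1j * theta2)

same = np.allclose(Xk, Xk2)                   # identical observation, different split
```

Any additional constraint, such as the heuristic regularities, is needed to single out one decomposition.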
This paper proposes a new method for extracting real speech from noisy speech by (1) incorporating a method of estimating the fundamental frequency and (2) reconsidering the constraint dθ1k(t)/dt = ωk.
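For step (1), a generic autocorrelation-based F0 estimator can serve as a starting point; the sketch below is our own illustration under simple assumptions, not necessarily the estimation method adopted in this paper.

```python
import numpy as np

# Minimal autocorrelation-based F0 estimator (illustrative sketch).
def estimate_f0(x, fs, fmin=60.0, fmax=400.0):
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)            # search plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
# Harmonic test signal with a 200 Hz fundamental
x = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
f0 = estimate_f0(x, fs)                                # close to 200 Hz
```

A frame-by-frame version of such an estimator would track the fluctuating fundamental frequency of real speech.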