TY - JOUR
T1 - A state-space model with neural-network prediction for recovering vocal tract resonances in fluent speech from Mel-cepstral coefficients
AU - Togneri, Roberto
AU - Deng, L.
PY - 2006
AB - In this paper, we present a state-space formulation of a neural-network-based hidden dynamic model of speech whose parameters are trained using an approximate EM algorithm. This efficient and effective training makes use of the output of an off-the-shelf formant tracker (for the vowel segments of the speech signal), in addition to the Mel-cepstral observations, to simplify the complex sufficient statistics that would be required in the exact EM algorithm. The trained model, consisting of the state equation for the target-directed vocal tract resonance (VTR) dynamics on all classes of speech sounds (including consonant closure and constriction) and the observation equation for mapping from the VTR to Mel-cepstral acoustic measurement, is then used to recover the unobserved VTR based on the extended Kalman filter. The results demonstrate accurate estimation of the VTR, especially during rapid consonant-vowel or vowel-consonant transitions and during consonant closure when the acoustic measurement alone provides weak or no information to infer the VTR values. The practical significance of correctly identifying the VTRs during consonantal closure or constriction is that they provide target frequency values for the VTR or formant transitions from adjacent sounds. Without such target values, the VTR transitions from vowel to consonant or from consonant to vowel are often very difficult to extract accurately by the previous formant tracking techniques. With the use of the new technique reported in this paper, the consonantal VTRs and the related transitions become more reliably identified from the speech signal. (C) 2006 Elsevier B.V. All rights reserved.
DO - 10.1016/j.specom.2006.01.001
M3 - Article
SN - 0167-6393
VL - 48
SP - 971
EP - 988
JF - Speech Communication
IS - 8
ER -