Computer Speech & Language, Volume 36, March 2016
- Kian Ebrahim Kafoori, Seyed Mohammad Ahadi:
Bounded cepstral marginalization of missing data for robust speech recognition. 1-23
- Marc Delcroix, Atsunori Ogawa, Seong-Jun Hahm, Tomohiro Nakatani, Atsushi Nakamura:
Differenced maximum mutual information criterion for robust unsupervised acoustic model adaptation. 24-41
- Yingying Gao, Weibin Zhu:
Detecting affective states from text based on a multi-component emotion model. 42-57
- Sarang Chehrehsa, Tom James Moir:
Speech enhancement using Maximum A-Posteriori and Gaussian Mixture Models for speech and noise Periodogram estimation. 58-71
- Rahul Gupta, Kartik Audhkhasi, Sungbok Lee, Shrikanth S. Narayanan:
Detecting paralinguistic events in audio stream using context in features and probabilistic decisions. 72-92
- Scott Novotney, Richard M. Schwartz, Sanjeev Khudanpur:
Getting more from automatic transcripts for semi-supervised language modeling. 93-109
- Soonil Kwon, Sung-Jae Kim, Joon Yeon Choeh:
Preprocessing for elderly speech recognition of smart devices. 110-121
- Ruzica Bilibajkic, Zoran Saric, Slobodan T. Jovicic, Silvana Punisic, Misko Subotic:
Automatic detection of stridence in speech using the auditory model. 122-135
- Jesús Vilares Ferro, Manuel Vilares Ferro, Miguel A. Alonso, Michael P. Oakes:
On the feasibility of character n-grams pseudo-translation for Cross-Language Information Retrieval tasks. 136-164
- Karen Livescu, Frank Rudzicz, Eric Fosler-Lussier, Mark Hasegawa-Johnson, Jeff A. Bilmes:
Speech Production in Speech Technologies: Introduction to the CSL Special Issue. 165-172
- Leonardo Badino, Claudia Canevari, Luciano Fadiga, Giorgio Metta:
Integrating articulatory data in deep neural network-based acoustic modeling. 173-195
- Ming Li, Jangwon Kim, Adam C. Lammert, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan:
Speaker verification based on the fusion of speech acoustics and inverted articulatory signals. 196-211
- Karen Livescu, Preethi Jyothi, Eric Fosler-Lussier:
Articulatory feature-based pronunciation modeling. 212-232
- Ramya Rasipuram, Mathew Magimai-Doss:
Articulatory feature based continuous speech recognition using probabilistic lexical modeling. 233-259
- Sandesh Aryal, Ricardo Gutierrez-Osuna:
Data driven articulatory synthesis with deep neural networks. 260-273
- Thomas Hueber, Gérard Bailly:
Statistical conversion of silent articulation into audible speech using full-covariance HMM. 274-293
- Farook Sattar, Frank Rudzicz:
Principal differential analysis for detection of bilabial closure gestures from articulatory data. 294-306
- Samuel S. Silva, António J. S. Teixeira:
Quantitative systematic analysis of vocal tract data. 307-329
- Vikram Ramanarayanan, Maarten Van Segbroeck, Shrikanth S. Narayanan:
Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories. 330-346
- Colin J. Champion, S. M. Houghton:
Application of continuous state Hidden Markov Models to a classical problem in speech recognition. 347-364
- Turgay Koç, Tolga Çiloglu:
Nonlinear interactive source-filter models for speech. 365-394