The impact of language models and loss functions on repair disfluency detection

Simon Zwarts and Mark Johnson
Macquarie University


Abstract

Unrehearsed spoken language often contains disfluencies. In order to correctly interpret a spoken utterance, any such disfluencies must be identified and removed or otherwise dealt with. Operating on speech transcripts that contain disfluencies, we study the effect of the language model and the loss function on the performance of a linear reranker that rescores the 25-best output of a noisy-channel model. We show that language models trained on large amounts of non-speech data improve performance more than a language model trained on a more modest amount of speech data, and that optimising f-score rather than log loss improves disfluency detection performance.
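
To make the contrast between the two training objectives concrete, here is a minimal Python sketch (ours, not the authors' code) of both quantities, assuming binary per-word labels in which 1 marks a word inside a disfluency; the function names are hypothetical. F-score trades precision against recall on the disfluent words directly, whereas log loss penalises the model's probability estimate at every word.

    import math

    def disfluency_fscore(gold, pred):
        # gold, pred: parallel 0/1 label sequences; 1 = word is disfluent
        tp = sum(g & p for g, p in zip(gold, pred))
        precision = tp / sum(pred) if sum(pred) else 0.0
        recall = tp / sum(gold) if sum(gold) else 0.0
        return (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)

    def log_loss(gold, probs):
        # probs: the model's probability that each word is disfluent
        # (assumes 0 < p < 1 so the logarithm is defined)
        return -sum(math.log(p if g else 1.0 - p)
                    for g, p in zip(gold, probs))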

Our approach is driven by a log-linear reranker, operating on the top $n$ analyses of a noisy-channel model. We use large language models, introduce new features into this reranker, and examine different optimisation strategies. We obtain a disfluency detection f-score of $0.838$, which improves upon the current state of the art.
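
As a rough illustration of the reranking step (a sketch under assumed representations, not the paper's implementation), each candidate analysis in the $n$-best list can be described by a feature vector; a log-linear model scores each candidate with a weighted sum of its features and normalises the exponentiated scores into a distribution over the list. The names rerank and nbest below are hypothetical.

    import math

    def rerank(nbest, weights):
        # nbest: candidate analyses, each a feature dict {name: value}
        # weights: learned feature weights {name: float}
        scores = [sum(weights.get(f, 0.0) * v for f, v in feats.items())
                  for feats in nbest]
        # Softmax normalisation turns the linear scores into log-probabilities.
        z = math.log(sum(math.exp(s) for s in scores))
        log_probs = [s - z for s in scores]
        best = max(range(len(nbest)), key=scores.__getitem__)
        return best, log_probs

    # Example: two analyses scored by (made-up) language-model and
    # channel-model features; the reranker picks the higher-scoring one.
    best, log_probs = rerank(
        [{"lm": -42.1, "channel": -3.5}, {"lm": -40.8, "channel": -5.0}],
        {"lm": 1.0, "channel": 0.9})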




Full paper: http://www.aclweb.org/anthology/P/P11/P11-1071.pdf