Lagrangian relaxation for natural language decoding
[Abstract] The major success story of natural language processing over the last decade has been the development of high-accuracy statistical methods for a wide range of language applications. The availability of large textual data sets has made it possible to employ increasingly sophisticated statistical models to improve performance on language tasks. However, these more complex models often come at the cost of expanding the search space of the underlying decoding problem. In this dissertation, we focus on how to handle this challenge. In particular, we study the question of decoding in large-scale, statistical natural language systems. We aim to develop a formal understanding of the decoding problems behind these tasks and to present algorithms that extend beyond common heuristic approaches to yield optimality guarantees. The main tool we utilize, Lagrangian relaxation, is a classical idea from the field of combinatorial optimization. We begin the dissertation with a general background introduction to the method and a description of common models in natural language processing. The body of the dissertation consists of six chapters. The first three chapters discuss relaxation methods for core natural language tasks: (1) examines the classical problem of parsing and part-of-speech tagging; (2) addresses the problem of language model composition in syntactic machine translation; (3) develops efficient algorithms for non-projective dependency parsing. The second set of chapters discusses methods that combine relaxation with other combinatorial techniques: (1) develops an exact beam-search algorithm for machine translation; (2) uses a parsing relaxation in a coarse-to-fine cascade. At the core of each chapter is a relaxation of a difficult combinatorial problem and the implementation of the resulting algorithm in a large-scale system.
[Institution] Massachusetts Institute of Technology
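To illustrate the kind of technique the abstract refers to, the following is a minimal sketch of dual decomposition, a form of Lagrangian relaxation commonly used for NLP decoding: two toy scoring models are driven toward agreement on a single labeling via subgradient updates on Lagrange multipliers. All sizes, score tables, and variable names here are hypothetical illustrations, not material from the dissertation itself.

```python
import numpy as np

# Hypothetical toy instance: two models each score T positions over K labels,
# and we want one labeling that is jointly optimal for both.
rng = np.random.default_rng(0)
T, K = 5, 3
theta_f = rng.normal(size=(T, K))  # scores from model 1 (e.g., a tagger)
theta_g = rng.normal(size=(T, K))  # scores from model 2 (e.g., a parser's tag preferences)

u = np.zeros((T, K))  # Lagrange multipliers enforcing the agreement constraint y == z
for it in range(100):
    # Each relaxed subproblem decomposes into independent per-position argmaxes.
    y = np.argmax(theta_f + u, axis=1)   # maximizes f(y) + u . y
    z = np.argmax(theta_g - u, axis=1)   # maximizes g(z) - u . z
    if np.array_equal(y, z):
        break  # agreement: y carries a certificate of optimality for the joint problem
    # Subgradient step on the dual objective, penalizing disagreements.
    alpha = 1.0 / (it + 1)
    u -= alpha * (np.eye(K)[y] - np.eye(K)[z])
```

If the two argmax solutions ever coincide, weak duality guarantees that the agreed labeling is exactly optimal for the combined model, which is the optimality certificate these methods provide; when they do not coincide, the dual value still upper-bounds the optimum.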