Practical natural language processing systems, such as those that translate between human languages, encode very complex transformations. Such a complex transformation often breaks down into a cascade of simpler transformations, each of which can be encoded as a formal transducer.
What kinds of formal transducers are a good fit for these practical tasks?
Practitioners have a lot of experience with finite-state string transducers, which work reasonably well for left-to-right tasks like speech recognition, but less well for language translation.
Tree automata have recently been taken up as an alternative, and the empirical results are good. However, the formal situation is not as nice: there are many variants of finite-state tree transducers, each with different formal properties and a different fit to natural language transformation problems. This talk will cover formal properties of tree transducers related to their expressiveness for natural language, their closure under composition, their teachability, and their backward compatibility with string transducers.