The dream of a universal translation device goes back many decades, long before Douglas Adams's fictional Babel fish provided this service in The Hitchhiker's Guide to the Galaxy. Since the advent of computers, research has focused on the design of digital machine translation tools: computer programs capable of automatically translating a text from a source language to a target language. This has become one of the most fundamental tasks of artificial intelligence. This volume in the MIT Press Essential Knowledge series offers a concise, nontechnical overview of the development of machine translation, including the different approaches, evaluation issues, and market potential. The main approaches are presented from a largely historical perspective and in an intuitive manner, allowing the reader to understand the main principles without knowing the mathematical details.
The book begins by discussing problems that must be solved during the development of a machine translation system and offering a brief overview of the evolution of the field. It then takes up the history of machine translation in more detail, describing its pre-digital beginnings, rule-based approaches, the 1966 ALPAC (Automatic Language Processing Advisory Committee) report and its consequences, the advent of parallel corpora, the example-based paradigm, the statistical paradigm, the segment-based approach, the introduction of more linguistic knowledge into the systems, and the latest approaches based on deep learning. Finally, it considers evaluation challenges and the commercial status of the field, including activities by such major players as Google and Systran.
Thierry Poibeau is Director of Research at the Centre National de la Recherche Scientifique in Paris, Head of the LATTICE (Langues, Textes, Traitements Informatiques et Cognition) Laboratory, and Affiliated Lecturer in the Department of Theoretical and Applied Linguistics at the University of Cambridge.
Quick overview: The book argues that interpreting natural language is inherently subjective, and computers cannot handle subjectivity. The challenge, then, is to arrive at a "good" interpretation through objective analysis of the languages involved.
The book details historical approaches to machine translation, the current state of the art, and the likely future.
I thought it was a fascinating introduction to machine translation. I had no idea how complex translating is, and how difficult it is for computers. Mr. Poibeau does a great job explaining the field, research, and details without becoming too technical. It is a dense read, but I came away with a sense of awe. It makes me more grateful for Google Translate and all the other translation systems out there.
Already somewhat outdated in 2019, as deep learning has changed this area significantly in the last two years, but still very helpful for understanding the basic methods and the historical development of the field.
Interesting and informative description of a field that develops so quickly that the book already feels a bit outdated. Nevertheless, it gives a good historical overview, which is interesting for anyone involved or interested in translation.
I would say that this is still a bit too technical to be "nontechnical" :) Some chapters felt too specialised for a general audience, and this is coming from someone with NLP training...
I knew pretty much nothing about machine translation before I read this (for a class). I was impressed by how well the history and aspects of machine translation were explained. It's a great introduction for anyone studying translation (or interested in the subject)!