Mike Schuster: The Move to Neural Machine Translation at Google
Abstract: Machine learning, and in particular neural networks, has made great advances in the last few years for products used by millions of people, most notably in speech recognition, image recognition, and most recently in neural machine translation. Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which addresses many of these issues. The model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To accelerate final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units for both input and output. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves results competitive with the state of the art. In human side-by-side evaluations, it reduces translation errors by more than 60% compared to Google's phrase-based production system. The new Google Translate was launched in late 2016 and has improved translation quality significantly for all Google users.
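The sub-word units mentioned in the abstract can be illustrated with a greedy longest-match segmenter, which splits an input word into the longest pieces found in a fixed sub-word vocabulary. This is a minimal sketch of the general idea only; the vocabulary, the "##" continuation-piece convention, and the fallback behavior here are illustrative assumptions, not the actual GNMT wordpiece model.

```python
def segment(word, vocab):
    """Split a word into the longest sub-word pieces found in vocab,
    scanning left to right. Non-initial pieces carry a '##' prefix
    (an assumed convention for this sketch)."""
    pieces = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        # Try the longest remaining substring first, then shrink.
        while end > start:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            # No piece matched: map the whole word to an unknown marker.
            return ["<unk>"]
        pieces.append(piece)
        start = end
    return pieces

vocab = {"trans", "##lat", "##ion", "un", "##seen"}
print(segment("translation", vocab))  # ['trans', '##lat', '##ion']
```

Because every word is built from a small shared inventory of pieces, rare or unseen words still receive a usable representation instead of a single out-of-vocabulary token, which is the property the abstract credits for improved rare-word handling.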
Bio: Dr. Mike Schuster graduated in Electrical Engineering from the Gerhard-Mercator University in Duisburg, Germany in 1993. After receiving a scholarship, he spent a year in Japan studying Japanese in Kyoto and fiber optics in the Kikuchi laboratory at Tokyo University. After earning his PhD at the Nara Institute of Science and Technology, his professional career in machine learning and speech took him to the Advanced Telecommunications Research Laboratories in Kyoto, Nuance in the US, and NTT in Japan, where he worked on general machine learning and speech recognition research and development. Dr. Schuster joined the Google speech group at the beginning of 2006; over the next eight years he saw speech products grow from scratch, to toy demos, to serving millions of users in many languages, and he was the main developer of the original Japanese and Korean speech recognition models. He is now part of the Google Brain group, which focuses on building large-scale neural network and machine learning infrastructure for Google, where he has worked on infrastructure with the TensorFlow toolkit as well as on research, mostly in the field of speech and translation with various types of recurrent neural networks. In 2016 he led the development of the new Google Neural Machine Translation system, which reduced translation errors by more than 60% compared to the previous system.
Akira Mizuno: Simultaneous Interpreting, Cognitive Constraints, and Information Structure
Abstract: Simultaneous interpreting involves a heavy cognitive load, which becomes heavier when interpreters work simultaneously between structurally different languages such as Japanese and English. The cognitive load can be measured by the number of chunks held in the focus of attention in Cowan's model of working memory. An analysis of a small corpus of simultaneous interpreting between English and Japanese indicated that simultaneous interpreters frequently made use of translation strategies so as not to exceed the capacity of working memory. These strategies, unlike the traditional translation method, which frequently involves word order reversal, appear intended to perform "a minimum reverse integration". In this talk, I will show that these are not ad hoc strategies but a more appropriate translation method than the traditional one, a claim that can be supported by theories of information structure and that can contribute to machine translation research.
Bio: Akira Mizuno is a former professor of Aoyama Gakuin University and the President of the Japan Association for Interpreting and Translation Studies (JAITS). He has been involved in conference interpreting and broadcast interpreting since 1988. His main interests are Interpreting and Translation Studies, Functional Linguistics, and Cognitive Science. In 2010, he co-edited and co-authored Translation Theories in Japan, and in 2015 he published Theories of Simultaneous Interpreting: Cognitive Constraints and Translation Strategies.