ISSN: 2582-9793

Towards Faster k-Nearest-Neighbor Machine Translation

Original Research (Published on: 16-Feb-2024)

Xiangyu Shi and Yunlong Liang

Adv. Artif. Intell. Mach. Learn., 4(1):1943-1958

Xiangyu Shi: Beijing Jiaotong University

Yunlong Liang: Beijing Jiaotong University


DOI: https://dx.doi.org/10.54364/AAIML.2024.41111

Article History: Received on: 21-Dec-2023, Accepted on: 09-Feb-2024, Published on: 16-Feb-2024

Corresponding Author: Xiangyu Shi

Email: 22120416@gmail.com

Citation: Xiangyu Shi, Yunlong Liang, Jinan Xu, Yufeng Chen (2024). Towards Faster k-Nearest-Neighbor Machine Translation. Adv. Artif. Intell. Mach. Learn., 4(1):1943-1958

          

Abstract

    

Recent works have proven the effectiveness of k-nearest-neighbor machine translation (a.k.a. kNN-MT) approaches at producing remarkable improvements in cross-domain translation. However, these models suffer from heavy retrieval overhead on the entire datastore when decoding each token. We observe that during the decoding phase, about 67% to 84% of tokens are unvaried after searching over the corpus datastore, which means most tokens trigger futile retrievals and introduce unnecessary computational costs by initiating k-nearest-neighbor searches. We consider this phenomenon explainable in linguistic terms and propose a simple yet effective multi-layer perceptron (MLP) network to predict whether a token should be translated jointly by the neural machine translation model and the probabilities produced by the kNN, or by the neural model alone. The results show that our method succeeds in eliminating redundant retrieval operations and significantly reduces the overhead of kNN retrievals, by up to 53%, at the expense of a slight decline in translation quality. Moreover, our method can work together with all existing kNN-MT systems.
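The gating idea described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the names RetrievalGate, decode_token, the datastore.search lookup, the interpolation weight lam, and the gate threshold are all hypothetical, and the interpolation step follows the standard kNN-MT recipe of mixing the NMT and retrieval distributions.

```python
import torch
import torch.nn as nn

class RetrievalGate(nn.Module):
    """Small MLP that predicts, from the decoder hidden state, whether a
    kNN datastore lookup is likely to change the model's prediction.
    (Illustrative sketch; not the paper's released code.)"""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 2),
            nn.ReLU(),
            nn.Linear(hidden_dim // 2, 1),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Scalar probability that retrieval is worthwhile for this token.
        return torch.sigmoid(self.mlp(hidden)).squeeze(-1)


def decode_token(hidden, nmt_logits, gate, datastore, lam=0.5, threshold=0.5):
    """One decoding step for a single token. When the gate is confident
    the NMT model alone suffices, the expensive kNN search is skipped."""
    p_nmt = torch.softmax(nmt_logits, dim=-1)
    if gate(hidden).item() < threshold:
        return p_nmt.argmax(dim=-1)        # futile retrieval avoided
    p_knn = datastore.search(hidden)       # hypothetical kNN distribution lookup
    p = lam * p_knn + (1.0 - lam) * p_nmt  # usual kNN-MT interpolation
    return p.argmax(dim=-1)
```

Because the gate is a small MLP over the decoder's existing hidden state, its cost is negligible next to a nearest-neighbor search over a large datastore, which is where the reported reduction of up to 53% in retrieval overhead would come from.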
