We extend the study of back-translation from the bilingual to the multilingual setting and report findings that differ from the bilingual case.
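To make the technique concrete, here is a minimal sketch of the core back-translation loop; the `translate` helper and the toy data are hypothetical stand-ins, not the paper's actual pipeline.

```python
def back_translate(target_monolingual, reverse_model, translate):
    """Pair real target sentences with synthetic sources produced by a
    reverse (target-to-source) translation model."""
    synthetic_sources = translate(reverse_model, target_monolingual)
    return list(zip(synthetic_sources, target_monolingual))

# Toy usage with a stand-in for a real translation model.
pairs = back_translate(
    ["guten Tag", "vielen Dank"],
    reverse_model=None,
    translate=lambda model, sents: ["<bt> " + s for s in sents],
)
# In the multilingual setting this step is repeated per language pair, and
# the synthetic corpora are concatenated before training one shared model.
```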
We introduce a BERT variant and new training objectives to learn multimodal representations for the eProduct dataset.
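As one illustration of what a multimodal objective looks like, the PyTorch sketch below implements a generic image-text matching head; the architecture, dimensions, and objective here are assumptions for illustration and do not reproduce the paper's BERT variant or its eProduct-specific objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageTextMatcher(nn.Module):
    """Two-stream encoder with an image-text matching head (illustrative)."""
    def __init__(self, text_dim=768, image_dim=2048, hidden=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.match_head = nn.Linear(2 * hidden, 2)  # binary: match / mismatch

    def forward(self, text_feats, image_feats):
        t = F.relu(self.text_proj(text_feats))
        v = F.relu(self.image_proj(image_feats))
        return self.match_head(torch.cat([t, v], dim=-1))

# Each product title is paired with its own image (label 1) or a randomly
# drawn image (label 0), and the head is trained with cross-entropy.
model = ImageTextMatcher()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
loss = F.cross_entropy(logits, torch.tensor([1, 1, 0, 0]))
```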
We study how best to combine label smoothing and soft contextualized data augmentation so that their improvements stack for neural machine translation.
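A minimal sketch of the two ingredients, assuming PyTorch and a language model that supplies per-position logits: soft contextualized augmentation mixes embeddings on the input side, while label smoothing regularizes the output side. The function name and hyperparameters are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def soft_augment(embedding, token_ids, lm_logits, replace_prob=0.15):
    """Soft contextualized augmentation (sketch): randomly replace a token's
    embedding with the expectation of all embeddings under a language
    model's predictive distribution at that position."""
    hard = embedding(token_ids)                              # (B, T, D)
    soft = F.softmax(lm_logits, dim=-1) @ embedding.weight   # (B, T, D)
    mask = (torch.rand(token_ids.shape) < replace_prob).unsqueeze(-1)
    return torch.where(mask, soft, hard)

emb = torch.nn.Embedding(100, 16)
ids = torch.randint(0, 100, (2, 5))
mixed = soft_augment(emb, ids, torch.randn(2, 5, 100))

# Label smoothing acts on the output side and composes with the input-side
# augmentation, e.g.:
# loss = F.cross_entropy(logits.view(-1, vocab), targets.view(-1),
#                        label_smoothing=0.1)
```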
As an alternative to standard teacher-student learning, our models distill knowledge with equal weight at the sentence level and the token level for neural machine translation.
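The sketch below shows one plausible equal-weight combination, assuming token-level distillation via a KL term and sentence-level distillation via cross-entropy against the teacher's decoded tokens; the exact losses and weighting in the paper may differ.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, teacher_tokens, temperature=1.0):
    """Equal-weight token- and sentence-level distillation (sketch).
    Token level: KL between teacher and student per-token distributions.
    Sentence level: cross-entropy against the teacher's decoded output,
    standing in for training on its beam-search output."""
    token_kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    )
    sentence_kd = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        teacher_tokens.view(-1),
    )
    return 0.5 * token_kd + 0.5 * sentence_kd

loss = distill_loss(torch.randn(2, 7, 100), torch.randn(2, 7, 100),
                    torch.randint(0, 100, (2, 7)))
```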
We apply different GAN techniques to generate 3D turbulent flows and analyze the physical properties of the generated fields.
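For intuition, here is a toy DCGAN-style 3D generator and one simple physical check (discrete divergence); the grid size, channel counts, and diagnostics are assumptions for illustration, not the variants or analyses used in the paper.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Toy 3D generator producing a three-component velocity field
    (u, v, w) on a 16^3 grid. Illustrative only."""
    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 128, 4, 1, 0), nn.ReLU(),  # -> 4^3
            nn.ConvTranspose3d(128, 64, 4, 2, 1), nn.ReLU(),     # -> 8^3
            nn.ConvTranspose3d(64, 3, 4, 2, 1),                  # -> 16^3
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

def divergence(vel):
    """Discrete divergence of a (B, 3, X, Y, Z) velocity field; an
    incompressible flow should be near zero everywhere."""
    du = torch.gradient(vel[:, 0], dim=1)[0]
    dv = torch.gradient(vel[:, 1], dim=2)[0]
    dw = torch.gradient(vel[:, 2], dim=3)[0]
    return du + dv + dw

fields = Generator3D()(torch.randn(2, 64))
print(fields.shape, divergence(fields).abs().mean())
```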