Inconsistency of a Recurrent Language Model: A Question I Forgot to Ask in 2014 - Kyunghyun Cho, Associate Professor, New York University


DATE: Mon, July 20, 2020 - 2:00 pm

LOCATION: Please register to receive the Zoom link


Please register for this event here



In this talk, I will go back to the basics of neural sequence modeling and ask the glaringly obvious question I forgot to ask in 2014: "is density estimation a good strategy for sequence generation?" I will give my initial stab at belatedly answering this question by empirically investigating the discrepancy between density estimation and sequence generation, by studying the effectiveness of two distinct approaches to neural sequence modeling (Lee, Tran, Firat & Cho, 2020), and by theoretically studying the inconsistency of incomplete decoding in a recurrent language model (Welleck, Kulikov, Kim, Pang & Cho, 2020). I will conclude the talk by discussing some lessons I have learned over the past half-year of trying to answer this question.
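The inconsistency result mentioned above can be illustrated with a toy model (a hypothetical sketch, not the construction from Welleck et al., 2020, and all token probabilities below are invented for illustration): if an incomplete decoding procedure such as top-k filtering drops the end-of-sequence token at every step, the decoder assigns probability zero to every finite sequence, even though the underlying model itself always terminates with probability 1.

```python
import random

# Hypothetical per-step distribution over a 3-token vocabulary.
# The model is consistent: P(length > t) = 0.8**t -> 0 as t grows.
VOCAB = {"A": 0.4, "B": 0.4, "<eos>": 0.2}

def top_k_filter(dist, k):
    """Keep only the k most probable tokens and renormalize.
    This is the 'incomplete' step: it may drop <eos> entirely."""
    kept = sorted(dist.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

def decode(k, max_steps=1000, seed=0):
    """Sample from the top-k-truncated distribution until <eos> or a step cap."""
    rng = random.Random(seed)
    out = []
    for _ in range(max_steps):
        dist = top_k_filter(VOCAB, k)
        tok = rng.choices(list(dist), weights=list(dist.values()))[0]
        if tok == "<eos>":
            return out, True    # terminated normally
        out.append(tok)
    return out, False           # <eos> was never emitted

# With k=2, <eos> is filtered out at every step, so decoding never
# terminates: the decoder is inconsistent even though the model is not.
seq, terminated = decode(k=2)
print(terminated)   # False: decoding ran all the way to the step cap

# With k=3 (no truncation), termination occurs with probability 1.
seq3, terminated3 = decode(k=3)
```

The point of the sketch is that inconsistency here is a property of the decoding algorithm, not of the learned distribution: the same model terminates almost surely under full sampling but never under top-2 decoding.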



Kyunghyun Cho is an associate professor of computer science and data science at New York University and a CIFAR Associate Fellow. He was a research scientist at Facebook AI Research from June 2017 to May 2020 and a postdoctoral fellow at the University of Montreal until the summer of 2015 under the supervision of Prof. Yoshua Bengio, after receiving his MSc and PhD degrees from Aalto University in April 2011 and April 2014, respectively, under the supervision of Prof. Juha Karhunen, Dr. Tapani Raiko and Dr. Alexander Ilin. He tries his best to find a balance among machine learning, natural language processing, and life, but almost always fails to do so.


