Ethan Perez

I'm a second-year Ph.D. student in Natural Language Processing at New York University. I am grateful to be advised by Kyunghyun Cho and Douwe Kiela and funded by NSF and Open Philanthropy.

My research focuses on developing learning algorithms that have the long-term potential to answer questions that people don't know the answers to. Supervised learning cannot answer such questions, even in principle, so I am investigating other learning paradigms for generalizing beyond the available supervision.

I earned a Bachelor's degree from Rice University, graduating as the Engineering department's Outstanding Senior. Previously, I spent time at Facebook AI Research and Google, and I had the great pleasure of working at the Montreal Institute for Learning Algorithms with Aaron Courville and Hugo Larochelle.

Email  /  CV  /  Google Scholar  /  Twitter

Research
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Mike Lewis, Scott Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela
arXiv 2020.

We present a single retrieval-based architecture that can learn a variety of knowledge-intensive tasks, both extractive and generative.

Unsupervised Question Decomposition for Question Answering
Ethan Perez, Patrick Lewis, Scott Wen-tau Yih, Kyunghyun Cho, Douwe Kiela
Reasoning for Complex Question Answering Workshop, AAAI 2020   (Oral Presentation)
[Code] [Blog Post]  

We decompose a hard question into several easier questions with unsupervised learning, improving multi-hop question answering without extra supervision.

Finding Generalizable Evidence by Learning to Convince Q&A Models
Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, Kyunghyun Cho
EMNLP 2019. [Code] [Blog Post] [Press]  

We find text evidence for an answer to a question by finding text that convinces Q&A models to pick that answer.

ELI5: Long Form Question Answering
Angela Fan, Yacine Jernite*, Ethan Perez*, David Grangier, Jason Weston, Michael Auli
ACL 2019. [Code] [Blog Post] [Website]  

We introduce a dataset for abstractive question answering where answers are 100+ words long (many "how" and "why" questions).

FiLM: Visual Reasoning with a General Conditioning Layer
Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, Aaron Courville
AAAI 2018. [Code] [Presentation]  

A general-purpose neural network layer can be used to integrate multimodal input to answer reasoning questions about images.
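At its core, FiLM conditions a network by applying a per-channel affine transformation to intermediate feature maps, with the scale and shift predicted from another input (e.g. a question encoding). A minimal NumPy sketch of this operation (function and variable names are illustrative, not from the paper's codebase):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation (FiLM).

    features: (batch, channels, height, width) activations from a CNN.
    gamma, beta: (batch, channels) conditioning parameters, in practice
                 predicted by a network from e.g. a question embedding.
    Each channel is scaled by gamma and shifted by beta.
    """
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

# Toy example: modulate a (batch=2, channels=4, 3x3) activation map.
x = np.ones((2, 4, 3, 3))
gamma = np.full((2, 4), 2.0)  # per-channel scale
beta = np.full((2, 4), 1.0)   # per-channel shift
out = film(x, gamma, beta)    # every activation becomes 2*1 + 1 = 3
```

Because the modulation is feature-wise rather than element-wise, the number of conditioning parameters stays small, which is part of why the mechanism is cheap yet effective.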

Feature-wise transformations
Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, Yoshua Bengio
Distill 2018.  

A review of a simple and surprisingly effective class of neural conditioning mechanisms.

Visual Reasoning with Multi-hop Feature Modulation
Florian Strub, Mathieu Seurin, Ethan Perez, Harm de Vries, Jeremie Mary, Aaron Courville, Olivier Pietquin
ECCV 2018. [Code]  

Decoding FiLM conditioning parameters in multiple hops helps for more advanced vision-and-language tasks such as visual dialogue.

HoME: a Household Multimodal Environment
Simon Brodeur, Ethan Perez*, Ankesh Anand*, Florian Golemo*, Luca Celotti, Florian Strub, Hugo Larochelle, Aaron Courville
ICLR 2018 Workshop. [Code]  

We introduce a simulated environment for agents to learn from vision, audio, semantics, physics, and object-interaction within a realistic, household context.

Semi-Supervised Learning with the Deep Rendering Mixture Model
Tan Nguyen, Wanjia Liu, Ethan Perez, Richard G. Baraniuk, Ankit B. Patel
arXiv 2018.  

A probabilistic graphical model underlying CNNs achieves state-of-the-art semi-supervised image classification.


Design courtesy of Jon Barron