Paraphrases do not explain word analogies
Louis Fournier, Ewan Dunbar
Interpretability and Analysis of Models for NLP (Short Paper)
Abstract:
Many types of distributional word embeddings (weakly) encode linguistic regularities as directions (the difference between jump and jumped will be in a similar direction to that of walk and walked, and so on). Several attempts have been made to explain this fact. We respond to Allen and Hospedales' recent theoretical explanation (ICML, 2019), which claims that word2vec and GloVe will encode linguistic regularities whenever a specific relation of paraphrase holds between the four words involved in the regularity. We demonstrate that the explanation does not go through: the paraphrase relations needed under this explanation do not hold empirically.
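The "regularities as directions" idea in the abstract can be illustrated with a minimal sketch: the offset between an inflected form and its base should point in roughly the same direction across word pairs. The vectors below are toy values invented for illustration only; a real experiment would use trained word2vec or GloVe embeddings.

```python
import numpy as np

# Toy embedding table; values are made up for illustration,
# not taken from any trained model.
emb = {
    "jump":   np.array([0.9, 0.1, 0.2]),
    "jumped": np.array([0.8, 0.6, 0.3]),
    "walk":   np.array([0.2, 0.1, 0.9]),
    "walked": np.array([0.1, 0.6, 1.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Offset vectors for the two past-tense pairs.
d1 = emb["jumped"] - emb["jump"]
d2 = emb["walked"] - emb["walk"]

# A regularity is "encoded as a direction" when these offsets
# are close in direction, i.e. cosine similarity near 1.
print(round(cosine(d1, d2), 3))
```

With these toy values the two offsets are parallel, so the cosine similarity is at its maximum; with real embeddings the similarity is only approximate, which is the "weakly" in the abstract.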