Tuesday, May 19, 2015

NLI - Natural Language Inference


Can recursive neural tensor networks learn logical reasoning?

Samuel R. Bowman, NLP Group, Dept. of Linguistics, Stanford University, Stanford, CA 94305-2150, sbowman@stanford.edu
Abstract

Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of 'some animal walks' from 'some dog walks or some cat walks', given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning pattern in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.
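To make the setup concrete, here is a minimal sketch (in numpy, not the paper's code) of the two pieces the abstract describes: a recursive neural tensor network (RNTN) composition step that builds phrase vectors bottom-up, and a softmax classifier over the seven natural-logic entailment relations the paper uses. The dimensionality, the right-branching trees, and the toy embeddings are all illustrative assumptions, and the paper's full model inserts an extra comparison layer between the two sentence vectors, omitted here for brevity.

import numpy as np

d = 16  # word/phrase vector dimensionality (assumed, not from the paper)
rng = np.random.default_rng(0)

# RNTN composition parameters: a tensor V, a matrix W, and a bias b.
V = rng.normal(scale=0.01, size=(d, 2 * d, 2 * d))
W = rng.normal(scale=0.01, size=(d, 2 * d))
b = np.zeros(d)

def compose(left, right):
    """Combine two child vectors into one parent phrase vector:
    p = tanh(c^T V[k] c + W c + b), where c = [left; right]."""
    c = np.concatenate([left, right])
    tensor_term = np.einsum('i,kij,j->k', c, V, c)  # c^T V[k] c for each output unit k
    return np.tanh(tensor_term + W @ c + b)

# Toy embeddings; in the paper these are learned jointly with V and W.
vocab = {w: rng.normal(scale=0.1, size=d)
         for w in ('some', 'dog', 'animal', 'walks')}

# Compose each sentence bottom-up (right-branching here for simplicity).
premise = compose(vocab['some'], compose(vocab['dog'], vocab['walks']))
hypothesis = compose(vocab['some'], compose(vocab['animal'], vocab['walks']))

# Classify the sentence pair into one of the seven natural-logic relations
# (entailment, reverse entailment, equivalence, negation, ...).
n_rel = 7
Wc = rng.normal(scale=0.01, size=(n_rel, 2 * d))
logits = Wc @ np.concatenate([premise, hypothesis])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # untrained, so roughly uniform over the 7 relations

Trained end to end on the constructed corpus, a model like this would be asked to label a pair such as ('some dog walks', 'some animal walks') as forward entailment.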


http://web.stanford.edu/~sbowman/arxiv_submission.pdf

Applying this kind of inference to real text also depends on coreference resolution; see the Stanford coref project:
http://nlp.stanford.edu/projects/coref.shtml
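A hedged sketch of querying a locally running Stanford CoreNLP server for its coreference chains follows; the port, the example sentence, and the printing logic are assumptions, based on the server's documented JSON output.

import json
import requests

text = "Stanford is in California. It has a strong NLP group."

# Ask a CoreNLP server (assumed to be running on localhost:9000)
# to run the coreference annotator and return JSON.
props = {"annotators": "tokenize,ssplit,pos,lemma,ner,parse,coref",
         "outputFormat": "json"}
resp = requests.post("http://localhost:9000/",
                     params={"properties": json.dumps(props)},
                     data=text.encode("utf-8"))
ann = resp.json()

# Each coreference chain is a list of mentions; print the chain's
# representative mention followed by the mentions that corefer with it.
for chain in ann["corefs"].values():
    rep = next(m for m in chain if m["isRepresentativeMention"])
    others = [m["text"] for m in chain if not m["isRepresentativeMention"]]
    print(rep["text"], "<-", others)  # e.g. Stanford <- ['It']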

