Can recursive neural tensor networks learn logical reasoning?
Samuel R. Bowman
NLP Group, Dept. of Linguistics, Stanford University
Stanford, CA 94305-2150
sbowman@stanford.edu

Abstract
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of "some animal walks" from "some dog walks or some cat walks", given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning pattern in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.
http://web.stanford.edu/~sbowman/arxiv_submission.pdf
Related link: Stanford coreference resolution
http://nlp.stanford.edu/projects/coref.shtml
OpeNER: Natural Language Processing for the rest of us. Opinions, Entities and Sentiments in 6 languages and across domains.
http://www.opener-project.eu/
http://www.opener-project.org
https://github.com/opener-project
ICLR 2016 best paper awards
http://www.iclr.cc/doku.php?id=iclr2016%3Amain#best_paper_awards