Articles in category: Summarization
http://nlp.hivefire.com/category/5/summarization/
Automatic summarization
There are two general approaches to automatic summarization: extraction and abstraction.
http://en.wikipedia.org/wiki/Automatic_summarization
SocialNLP 2015
The 3rd International Workshop on
Natural Language Processing for Social Media
In conjunction with WWW 2015 @ May 19, 2015, Florence, Italy.
In conjunction with NAACL 2015 @ Jun 05, 2015, Denver, Colorado, USA.
https://sites.google.com/site/socialnlp2015/
Auto summarizing news articles using Natural Language Processing (NLP)
http://knackforge.com/blog/selvam/auto-summarizing-news-articles-using-natural-language-processing-nlp
Document Summarization
Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network
http://memkite.com/blog/2015/01/29/deep-learning-for-natural-language-processing/
The Stanford NLP (Natural Language Processing) Group
Stanford Parser FAQ
http://nlp.stanford.edu/software/parser-faq.shtml
Stanford Deterministic Coreference Resolution System
http://nlp.stanford.edu/software/dcoref.shtml
BIST Parsers
(Yoav Goldberg)
Graph- and transition-based dependency parsers using BiLSTM feature extractors; a toy sketch of the idea appears below.
The techniques behind the parser are described in the paper Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations.
Required software
Python 2.7 interpreter
PyCNN library
https://github.com/elikip/bist-parser
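To make the BiLSTM-feature idea concrete, here is a toy numpy sketch: run a BiLSTM over the token embeddings, then score every (head, modifier) arc with a small MLP over the concatenated states. All dimensions, weights, and the greedy head selection are made up for illustration; this is not the actual bist-parser code.

```python
# Toy sketch of BIST-style arc scoring: BiLSTM token states + MLP scorer.
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step; W packs the four gate weight matrices."""
    z = W @ np.concatenate([x, h, [1.0]])          # input, hidden, bias
    i, f, o, g = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return o * np.tanh(c), c

def bilstm(embeddings, d):
    """Concatenate forward and backward LSTM states for each token."""
    n, e = embeddings.shape
    Wf = rng.normal(0, 0.1, (4*d, e + d + 1))
    Wb = rng.normal(0, 0.1, (4*d, e + d + 1))
    fwd, bwd = [], []
    h = c = np.zeros(d)
    for x in embeddings:                            # left-to-right pass
        h, c = lstm_step(x, h, c, Wf); fwd.append(h)
    h = c = np.zeros(d)
    for x in embeddings[::-1]:                      # right-to-left pass
        h, c = lstm_step(x, h, c, Wb); bwd.append(h)
    return np.concatenate([np.stack(fwd), np.stack(bwd[::-1])], axis=1)

def arc_scores(states, hidden=32):
    """Score every (head, modifier) pair with a one-layer MLP."""
    n, d2 = states.shape
    W1 = rng.normal(0, 0.1, (hidden, 2*d2)); w2 = rng.normal(0, 0.1, hidden)
    scores = np.zeros((n, n))
    for h in range(n):
        for m in range(n):
            pair = np.concatenate([states[h], states[m]])
            scores[h, m] = w2 @ np.tanh(W1 @ pair)
    return scores   # a graph-based parser would feed this to an MST decoder

# Toy usage: 5 random "word embeddings", pick each token's best head greedily.
emb = rng.normal(size=(5, 16))
S = arc_scores(bilstm(emb, d=24))
print(S.argmax(axis=0))   # predicted head index for each of the 5 tokens
```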
Noah’s ARK
Noah’s ARK is Noah Smith’s informal research group at the Language Technologies Institute, School of Computer Science, Carnegie Mellon University.
NLTK Book Ch. 2 – Natural Language Toolkit
1.1 Gutenberg Corpus. NLTK includes a small selection of texts from the Project Gutenberg electronic text archive, which contains some 25,000 free electronic books, hosted at http://www.gutenberg.org/.
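A minimal NLTK session showing the corpus access described above (the file name is one of the corpus's actual Gutenberg selections):

```python
# Access the Project Gutenberg selection bundled with NLTK.
import nltk
nltk.download('gutenberg', quiet=True)   # fetch the corpus on first use
from nltk.corpus import gutenberg

print(gutenberg.fileids()[:3])           # e.g. ['austen-emma.txt', ...]
emma = gutenberg.words('austen-emma.txt')
print(len(emma))                         # word-token count for Emma
```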
Philip Resnik’s Home Page – University of Maryland …
Oh, and by the way, my name is not spelled Philip Resnick, Phillip Resnik, or Phillip Resnick, though this explicit disclaimer may help people who don’t know that …
natural language processing blog
my biased thoughts on the fields of natural language processing (NLP), computational linguistics (CL) and related topics (machine learning, math, …)
Natural Language Processing (NLP): An Introduction
Introduction. This tutorial provides an overview of natural language processing (NLP) and lays a foundation for the JAMIA reader to better appreciate the articles in this issue.
1. Language Processing and Python – Natural Language Toolkit
1. Language Processing and Python. It is easy to get our hands on millions of words of text. What can we do with it, assuming we can write some simple programs?
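As a small answer to "what can we do with it?", here is a sketch of two things Chapter 1 of the NLTK book actually computes over raw text; the corpus choice is arbitrary:

```python
# Vocabulary size and lexical diversity (tokens per distinct word type).
import nltk
nltk.download('gutenberg', quiet=True)
from nltk.corpus import gutenberg

tokens = gutenberg.words('austen-emma.txt')   # corpus choice is arbitrary
vocab = set(w.lower() for w in tokens)
print(len(tokens), len(vocab))                # token and type counts
print(len(tokens) / len(vocab))               # lexical diversity
```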
Automatic summarization – Wikipedia, the free encyclopedia
Automatic summarization is the process of reducing a text document with a computer program in order to create a summary that retains the most important points of the original document.
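As a toy illustration of the extractive flavor of this definition, the sketch below scores each sentence by the frequency of its content words and keeps the top-scoring ones. Everything here, including the stopword list and the sample document, is a made-up minimum, not any production summarizer.

```python
# Minimal frequency-based extractive summarizer.
import re
from collections import Counter

STOPWORDS = {'the', 'a', 'an', 'of', 'to', 'in', 'and', 'is', 'it', 'that'}

def summarize(text, n_sentences=2):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = [w for w in re.findall(r'[a-z]+', text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)
    def score(s):
        toks = [w for w in re.findall(r'[a-z]+', s.lower()) if w in freq]
        return sum(freq[w] for w in toks) / max(len(toks), 1)
    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ' '.join(s for s in sentences if s in ranked)  # keep original order

doc = ("Automatic summarization reduces a document to its key points. "
       "Extractive methods select sentences from the document itself. "
       "Abstractive methods generate new sentences. "
       "Most deployed systems are extractive.")
print(summarize(doc))
```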
CICLing 2015 Conference: Computational Linguistics and …
16th International Conference on Intelligent Text Processing and Computational Linguistics. April 14–20, 2015, Cairo, Egypt. Co-located: 1st International …
News summary app Clipped gets automated infographics as it readies API
http://thenextweb.com/apps/2014/07/25/news-summary-app-clipped-updated-automated-infographics-readies-api/
Similar to the Summly app acquired by Yahoo last year, Clipped uses machine learning to scan an article and then summarize the most important parts.
Quickie: NLP Article Summarization (01 January 2015)
http://rarmknecht.com/quickie-nlp-article-summarization/
Very interesting gist posted here on computationally writing a summary of a news article. Discussion is here.
This is something I'd like to take a deeper look at later, especially considering my brief NLP script that computed a Flesch–Kincaid score for some ebooks I had on hand.
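For reference, here is a small sketch of the kind of Flesch–Kincaid computation such a script might do. The grade-level formula is the standard one; the vowel-group syllable counter is a crude assumption, not how any particular tool does it.

```python
# Rough Flesch-Kincaid grade-level calculator.
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of vowels, minimum one.
    groups = re.findall(r'[aeiouy]+', word.lower())
    return max(len(groups), 1)

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard formula: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

print(round(flesch_kincaid_grade(
    "The cat sat on the mat. It was a sunny day."), 2))
```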
where to get News summarization corpus?
http://stackoverflow.com/questions/18502361/where-to-get-news-summarization-corpus
The SummBank 1.0 corpus is available for a fee here: ldc.upenn.edu/Catalog/catalogEntry.jsp?catalogId=LDC2003T16
Natural Language Processing for Informal Text
(NLPIT 2015)
In conjunction with The International Conference on Web Engineering
(ICWE 2015), June 23, 2015, Rotterdam, The Netherlands
http://wwwhome.cs.utwente.nl/~badiehm/nlpit2015/
Toward Abstractive Summarization Using Semantic Representations
Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh and Noah A. Smith
Accepted by the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2015)
http://www.cs.cmu.edu/~feiliu/
Natural Language Processing at Google
http://research.google.com/pubs/NaturalLanguageProcessing.html
Tutorial: The Logic of AMR: Practical, Unified, Graph-Based Sentence Semantics for NLP
http://naacl.org/naacl-hlt-2015/tutorial-amr-semantics.html
QUESTION ANSWERING
IQ TEST
http://arxiv.org/pdf/1509.03390v1.pdf
Measuring an Artificial Intelligence System’s Performance on a Verbal IQ Test for Young Children
Stellan Ohlsson, Robert H. Sloan, György Turán, Aaron Urasky
Affiliations: Department of Psychology, University of Illinois at Chicago (Ohlsson, stellan@uic.edu); Department of Computer Science, University of Illinois at Chicago (Sloan, sloan@uic.edu); Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago (Turán, gyt@uic.edu; Urasky, aaron.urasky@gmail.com); MTA-SZTE Research Group on Artificial Intelligence, Szeged, Hungary (Turán).
Abstract
We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The test questions (e.g., “Why do we shake hands?”) were translated into ConceptNet 4 inputs using a combination of the simple natural language processing tools that come with ConceptNet together with short Python programs that we wrote. The question answering used a version of ConceptNet based on spectral methods. The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for 5- to 7-year-olds. Large variations among subtests indicate potential areas of improvement. In particular, results were strongest for the Vocabulary and Similarities subtests, intermediate for the Information subtest, and lowest for the Comprehension and Word Reasoning subtests. Comprehension is the subtest most strongly associated with common sense. The large variations among subtests and ordinary common sense strongly suggest that the WPPSI-III VIQ results do not show that “ConceptNet has the verbal abilities of a four-year-old.” Rather, children’s IQ tests offer one objective metric for the evaluation and comparison of AI systems. Also, this work continues previous research on Psychometric AI.
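The paper fed questions to ConceptNet 4 via its bundled NLP tools and spectral methods; as a loose, hypothetical illustration of asking ConceptNet about one of the test questions ("Why do we shake hands?"), here is a lookup against the public ConceptNet 5 REST API, which is a different and newer system than the one evaluated above.

```python
# Query the public ConceptNet 5 API for edges about "shake hands".
import json
import urllib.request

url = 'http://api.conceptnet.io/c/en/shake_hands'
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for edge in data['edges'][:5]:
    # Each edge relates two concepts, e.g. shake hands -MotivatedByGoal-> greet
    print(edge['rel']['label'], '->', edge['end']['label'])
```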
Gated-Attention Readers for Text Comprehension - June 5, 2016
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, Ruslan Salakhutdinov
School of Computer Science
Carnegie Mellon University
{bdhingra, hanxiaol, wcohen, rsalakhu}@cs.cmu.edu
Abstract
In this paper we study the problem of answering cloze-style questions over short documents. We introduce a new attention mechanism which uses multiplicative interactions between the query embedding and intermediate states of a recurrent neural network reader. This enables the reader to build query-specific representations of tokens in the document which are further used for answer selection. Our model, the Gated-Attention Reader, outperforms all state-of-the-art models on several large-scale benchmark datasets for this task: the CNN & Daily Mail news stories and the Children's Book Test. We also provide a detailed analysis of the performance of our model and several baselines over a subset of questions manually annotated with certain linguistic features. The analysis sheds light on the strengths and weaknesses of several existing models.
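A minimal numpy sketch of the multiplicative interaction the abstract describes: each document token state attends over the query token states, and the resulting per-token query vector gates (element-wise multiplies) the token. All inputs and dimensions below are invented for illustration; the real model stacks this inside a multi-layer recurrent reader.

```python
# Gated-attention sketch: query-specific token representations via
# attention followed by element-wise multiplication.
import numpy as np

rng = np.random.default_rng(1)
d = 8
doc = rng.normal(size=(20, d))      # intermediate reader states, 20 tokens
query = rng.normal(size=(5, d))     # query token states, 5 tokens

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

alpha = softmax(doc @ query.T, axis=1)   # (20, 5) attention over the query
q_tok = alpha @ query                    # (20, d) per-token query summaries
gated = doc * q_tok                      # multiplicative gating
print(gated.shape)                       # (20, 8) query-specific token states
```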