Monday, April 28, 2014

 2014 will see commercial neural network deep learning chips and commercial neuromorphic chips. Deep learning chips can outperform graphics processors by 150 times for some tasks, and new neuromorphic chips can tolerate, adapt to, and learn from mistakes.
 http://nextbigfuture.com/2014/01/deep-learning-chips-can-outperform.html

Thursday, April 24, 2014

Design Patterns for Large-Scale Real-Time Learning

http://www.cloudera.com/content/cloudera/en/resources/library/recordedwebinar/design-patterns-for-large-scale-real-time-learning.html


  1. cloudera/ml · GitHub

    https://github.com/cloudera/ml

    ml - The Cloudera Data Science Team's Tools for Data Preparation, Machine Learning, and Model Evaluation.

  2. https://github.com/cloudera/oryx

    oryx - Simple real-time large-scale machine learning infrastructure.

ML Meetups



SF Machine Learning Meetup
Thursday, April 24, 2014
Large-Scale Machine Learning with Apache Spark

We'll have a series of events talking about machine learning in Spark.
It's our pleasure to have Xiangrui Meng from Databricks as our first speaker in this series, introducing Spark to data scientists.
For the next meetup on May 1, we will have a joint event with Cloudera covering part 2 of Spark, MLlib, and a large-scale multinomial logistic regression implementation in Spark.
In the future, we'll talk about Random Forest implementation in Spark.

Spark is a new cluster computing engine that is rapidly gaining popularity — with over 150 contributors in the past year, it is one of the most active open source projects in big data, surpassing even Hadoop MapReduce. Spark was designed to both make traditional MapReduce programming easier and to support new types of applications, with one of the earliest focus areas being machine learning. In this talk, we’ll introduce Spark and show how to use it to build fast, end-to-end machine learning workflows. Using Spark’s high-level API, we can process raw data with familiar libraries in Java, Scala or Python (e.g. NumPy) to extract the features for machine learning. Then, using MLlib, its built-in machine learning library, we can run scalable versions of popular algorithms. We’ll also cover upcoming development work including new built-in algorithms and R bindings.

Bio:
Xiangrui Meng is a software engineer at Databricks. He has been actively involved in the development of Spark MLlib since he joined. Before Databricks, he worked as an applied research engineer at LinkedIn, where he was the main developer of an offline machine learning framework in Hadoop MapReduce. His thesis work at Stanford is on randomized algorithms for large-scale linear regression.

Wednesday, April 23, 2014

google search for shape recognition algorithm
https://www.google.com/search?q=shape+recognition+algorithm
https://github.com/yusugomori/DeepLearning.git
Deep Learning (Python, C/C++, Java, Scala)
An Introduction to Deep Learning: From Perceptrons to Deep Networks
BY IVAN VASILEV - JAVA DEVELOPER @ TOPTAL

Deep Learning for Natural Language Processing

CS224D: Deep Learning for Natural Language Processing
Richard Socher and James Hong and Sameep Bagadia and David Dindi and B. Ramsundar and N. Arivazhagan and Qiaojing Yan

ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic)
http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial
Richard Socher, Chris Manning and Yoshua Bengio

Slides


Updated Version of Tutorial at NAACL 2013


Videos

Reasoning With Neural Tensor Networks
for Knowledge Base Completion
http://nlp.stanford.edu/~socherr/SocherChenManningNg_NIPS2013.pdf
http://wordnet.princeton.edu/
http://en.wikipedia.org/wiki/Markov_random_field
http://nlp.stanford.edu/software/CRF-NER.shtml

softmax in NLP
RECURSIVE DEEP LEARNING
FOR NATURAL LANGUAGE PROCESSING
AND COMPUTER VISION
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Richard Socher
August 2014
http://nlp.stanford.edu/~socherr/thesis.pdf

www.socher.org

GloVe: Global Vectors for Word Representation

Stanford NLP

GloVe

http://stanford.edu/~jpennin/papers/glove.pdf
Best word vectors so far? 11% more accurate than word2vec, fast to train, statistically efficient, good task accuracy
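Word-vector comparisons like the GloVe-vs-word2vec claim above typically score nearest neighbors with cosine similarity between vectors. A minimal stdlib-Python sketch; the 3-d vectors are made up for illustration (real GloVe vectors have 50-300 dimensions):

```python
import math

def cosine(u, v):
    # Cosine similarity between two word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d "word vectors" for illustration only.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]
print(cosine(king, queen) > cosine(king, banana))  # True: similar words score higher
```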

Related Tutorials

• See “Neural Net Language Models” Scholarpedia entry

• Deep Learning tutorials:

http://deeplearning.net/tutorials
• Stanford deep learning tutorials with simple programming
assignments and reading list
http://deeplearning.stanford.edu/wiki/
• Recursive Autoencoder class project
http://cseweb.ucsd.edu/~elkan/250B/learningmeaning.pdf
• Graduate Summer School: Deep Learning, Feature Learning
http://www.ipam.ucla.edu/programs/gss2012/
• ICML 2012 Representation Learning tutorial

http://www.iro.umontreal.ca/~bengioy/talks/deep-learning-tutorial-2012.html
• More reading (including tutorial references):
http://nlp.stanford.edu/courses/NAACL2013/


Papers

Parsing Natural Scenes and Natural Language
with Recursive Neural Networks
http://www-nlp.stanford.edu/pubs/SocherLinNgManning_ICML2011.pdf

Recursive Deep Models for Semantic Compositionality
Over a Sentiment Treebank
http://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf


http://www.scalanlp.org/api/breeze/index.html#breeze.linalg.softmax$
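The softmax used as the output layer in these NLP models normalizes a score vector into a probability distribution. A minimal sketch with the standard max-subtraction trick for numerical stability; note that the linked Breeze `softmax` object is, as I understand it, the closely related scalar log-sum-exp reduction rather than this distribution:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating so large scores don't overflow.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

scores = [2.0, 1.0, 0.1]
probs = softmax(scores)
print(probs)  # probabilities sum to 1; the largest score gets the largest mass
```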


http://en.wikipedia.org/wiki/Natural_language_processing
http://en.wikipedia.org/wiki/Natural_language_understanding
http://en.wikipedia.org/wiki/Mathematica
http://en.wikipedia.org/wiki/CUDA
http://blog.wolfram.com/2010/11/15/the-free-form-linguistics-revolution-in-mathematica/
http://www.wolfram.com/language/?source=nav





Deeply Moving: Deep Learning for Sentiment Analysis
http://nlp.stanford.edu/sentiment/
Stanford Named Entity Recognizer (NER)
http://nlp.stanford.edu/software/CRF-NER.shtml
Stanford NER is also known as CRFClassifier. The software provides a general implementation of (arbitrary order) linear chain Conditional Random Field (CRF) sequence models.
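Decoding in a linear-chain CRF (and in simpler HMM-style taggers) uses the Viterbi dynamic program to pick the highest-scoring tag sequence. A toy sketch with hypothetical two-tag log-scores, not Stanford NER's actual features or API:

```python
def viterbi(obs, states, start, trans, emit):
    # best[t][s] is the score of the best tag path ending in state s at step t.
    best = [{s: start[s] + emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: best[-1][p] + trans[p][s])
            col[s] = best[-1][prev] + trans[prev][s] + emit[s][o]
            ptr[s] = prev
        best.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Hypothetical log-scores for a toy 2-tag (O / PER) model.
states = ["O", "PER"]
start = {"O": 0.0, "PER": -1.0}
trans = {"O": {"O": 0.0, "PER": -1.0}, "PER": {"O": -1.0, "PER": 0.0}}
emit = {"O": {"the": 0.0, "smith": -2.0}, "PER": {"the": -3.0, "smith": 0.0}}
print(viterbi(["the", "smith"], states, start, trans, emit))  # ['O', 'PER']
```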



Google Word2Vec

https://code.google.com/p/word2vec/

http://www.i-programmer.info/news/105-artificial-intelligence/6264-machine-learning-applied-to-natural-language.html
Representing words as high dimensional vectors
https://plus.google.com/+ResearchatGoogle/posts/VwBUvQ7PvnZ
Efficient Estimation of Word Representations in Vector Space(http://goo.gl/ZvBp8F)
http://arxiv.org/pdf/1301.3781.pdf
http://radimrehurek.com/2014/02/word2vec-tutorial/
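word2vec's skip-gram model trains on (center word, context word) pairs drawn from a sliding window over the corpus. A sketch of just the pair-extraction step; the training itself (with negative sampling) is omitted:

```python
def skipgram_pairs(tokens, window=2):
    # For each center word, pair it with every word within `window` positions.
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"], window=1))
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```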



all things numenta and cortical

The Path to Machine Intelligence

PROPERTIES OF SPARSE DISTRIBUTED

REPRESENTATIONS

And Their Application To HTM

(DRAFT)

SUBUTAI AHMAD AND JEFF HAWKINS

NUMENTA TECHNICAL REPORT

NTA-2014-01

OCTOBER 28, 2014

©Numenta, Inc. 2014
http://numenta.com/assets/pdf/whitepapers/SDR_Properties%20draft%2010-28-14.pdf

Numenta open source project Nupic

http://numenta.org/

Numenta · GitHub

https://github.com/numenta
https://github.com/numenta/nupic/wiki/Using-NuPIC

NuPIC NLP
https://github.com/numenta/nupic/wiki/Natural-Language-Processing

NuPIC is not currently tuned for NLP, but should be capable of some basic NLP functions. If letters are used as categories, it should be able to recognize common word and sentence structures. However, without a hierarchy, it will not be able to formulate a deep understanding of input text, because it is limited to one small region of the brain within its model.
However, there could still be some interesting experiments performed even with this limitation. For example, words could be encoded into SDRs externally, through the cortical.io API, and fed directly into the CLA using a "pass-through" encoder.
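The key operation on SDRs in experiments like this is bitwise overlap: two representations match when enough of their active bits coincide, which is what gives noise tolerance. A sketch using Python sets of active-bit indices (toy sizes; the whitepaper's SDRs are on the order of 40 active bits out of 2048):

```python
def overlap(a, b):
    # Number of active bits the two SDRs share.
    return len(a & b)

def match(a, b, threshold):
    # A match needs only `threshold` shared bits, not all of them,
    # so a few flipped bits (noise) still match.
    return overlap(a, b) >= threshold

cat = {3, 17, 22, 41, 58}       # toy SDR: set of active-bit indices
feline = {3, 17, 22, 41, 77}    # shares 4 of 5 active bits with `cat`
dog = {5, 9, 30, 60, 90}        # shares none
print(match(cat, feline, 3), match(cat, dog, 3))  # True False
```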


https://github.com/numenta/nupic/wiki/Encoders
https://www.youtube.com/watch?v=3gjVVNPnPYA&feature=youtu.be&t=2m40s



Arbitrary names converted into an SDR
http://comments.gmane.org/gmane.comp.ai.nupic/757

[nupic-discuss] How are SDRs created in higher layers?


http://comments.gmane.org/gmane.comp.ai.nupic/3969

Online Prediction Framework OPF

Online Prediction Framework (OPF) is a framework for working with and deriving predictions from online learning algorithms, including Numenta’s Cortical Learning Algorithm (CLA). OPF is designed to work in conjunction with a larger architecture, as well as in a standalone mode (i.e. directly from the command line). It is also designed such that new model algorithms and functionalities can be added with minimal code changes.

all things cortical



http://www.cortical.io/contexts.html


Retina can be found in the Information Retrieval literature under the name of Word Space. This was first described by Hinrich Schütze; see also his paper on distributional semantics:

Word Space (1993)
by Hinrich Schütze
Advances in Neural Information Processing Systems 5

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.8856
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=162AD9E06D0E3F582827B36498DF356D?doi=10.1.1.41.8856&rep=rep1&type=pdf

Hinrich Schütze
http://www.web-geek.com/Computers/Artificial_Intelligence/People.html
collaborated with Stanford NLP - Christopher Manning
Manning, Christopher Stanford University. Probabilistic parsing, grammar induction, text categorization and clustering, electronic dictionaries, information extraction and presentation, and linguistic typology.
Schütze, Hinrich Stanford University. Statistical NLP, text mining, Co-author of "Foundations of Statistical Natural Language Processing" with Christopher Manning.

Magnus Sahlgren's dissertation, The Word-Space Model
http://su.diva-portal.org/smash/get/diva2:189276/FULLTEXT01


http://web.stanford.edu/~jpennin/papers/glove.pdf
GloVe: Global Vectors for Word Representation
Jeffrey Pennington, Richard Socher, Christopher D. Manning
Computer Science Department, Stanford University, Stanford, CA 94305

jpennin@stanford.edu, richard@socher.org, manning@stanford.edu

test metric, tests GloVe vs Word2Vec
On the importance of comparing apples to apples: a case study using the GloVe model
Yoav Goldberg, 10 August 2014
https://docs.google.com/document/d/1ydIujJ7ETSZ688RGfU5IMJJsbxAi-kRl8czSwpti15s/mobilebasic?pli=1


all things R


Split the Elements of a Character Vector
https://stat.ethz.ch/R-manual/R-devel/library/base/html/strsplit.html

Implications of the NuPIC Geospatial Encoder
http://inbits.com/2014/08/implications-of-the-geospatial-encoder/
clortex
Clojure Library for Jeff Hawkins' Hierarchical Temporal Memory



SUN, OCT 07, 2012
Wait, The Brain Is A Bloom Filter? - @Petrillic
Ian Danforth, Engineering
http://numenta.com/blog/wait-the-brain-is-a-bloom-filter.html

numenta open source SDR

IS OUR NEOCORTEX A GIANT SEMANTIC BLOOM FILTER ? OF NATURAL INTELLIGENCE, MACHINE LEARNING & JEFF HAWKINS

http://doubleclix.wordpress.com/2013/04/14/is-our-neocortex-a-giant-semantic-bloom-filter-of-natural-intelligence-machine-learning-jeff-hawkins/

http://doubleclix.wordpress.com/category/machine-learning/
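The Bloom-filter analogy in these posts rests on the same property SDRs exploit: set membership with no false negatives and a small, controllable false-positive rate. A minimal stdlib sketch (the parameters m and k are illustrative):

```python
import hashlib

class BloomFilter:
    # Minimal Bloom filter: k hashed positions per item in an m-bit array.
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # Never a false negative; false positives possible if all k bits
        # happen to be set by other items.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("neocortex")
print("neocortex" in bf, "cerebellum" in bf)
```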


Felix Andrews

Hackathon demo: cortical.io encoder
27 OCTOBER 2014
Last weekend I joined Numenta’s Fall 2014 Hackathon. A fantastic event. It underscores Numenta’s approach of being totally open with their work and supportive of the community.
http://www.neurofractal.org/felix/

  1. Towards exhaustive pairwise matching in large image ...

    dl.acm.org/citation.cfm?id=2403335
    Association for Computing Machinery
    K. Srijan, 2012
    Cites Michael Mitzenmacher, "Compressed Bloom Filters" (IEEE/ACM); develops bio-inspired models for form and motion processing, motivated by the large portion of the cerebral cortex devoted to analyzing retinal signals.
  2. Bloom Filter

http://numenta.com/assets/pdf/whitepapers/SDR_Properties%20draft%2010-28-14.pdf
References [15]-[18] from the Numenta 2014 whitepaper, page 24:

[15] Olshausen, Bruno A., and David J. Field. "Sparse coding with an overcomplete basis set: A strategy employed by V1." Vision Research 37.23 (1997): 3311-3325.
[16] Olshausen, Bruno A., and David J. Field. "Sparse coding of sensory inputs." Current Opinion in Neurobiology 14.4 (2004): 481-487.
[17] Tibshirani, Robert. "Regression shrinkage and selection via the lasso." Journal of the Royal Statistical Society, Series B (Methodological) (1996): 267-288.
[18] Vinje, William E., and Jack L. Gallant. "Sparse coding and decorrelation in primary visual cortex during natural vision." Science 287.5456 (2000): 1273-1276.

Cortical concepts, theory and technology

Monday, April 21, 2014

Saturday, April 19, 2014

IMAGE RECOGNITION. GENERATING CAPTION FROM IMAGE. GENERATING IMAGE FROM CAPTION. SEMANTIC LABELING.



Deep Visual-Semantic Alignments for Generating Image Descriptions
https://cs.stanford.edu/people/karpathy/cvpr2015.pdf
Andrej Karpathy Li Fei-Fei Department of Computer Science, Stanford University {karpathy,feifeili}@cs.stanford.edu

Abstract We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.

ICLR2016
Generating Images from Captions with Attention
http://arxiv.org/abs/1511.02793
Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov
(Submitted on 9 Nov 2015 (v1), last revised 29 Feb 2016 (this version, v2))
Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset.

3 MODEL Our proposed model defines a generative process of images conditioned on captions. In particular, captions are represented as a sequence of consecutive words and images are represented as a sequence of patches drawn on a canvas ct over time t = 1, ..., T. The model can be viewed as a part of the sequence-to-sequence framework (Sutskever et al., 2014; Cho et al., 2014; Srivastava et al., 2015).
3.1 LANGUAGE MODEL: THE BIDIRECTIONAL ATTENTION RNN


Show and Tell: A Neural Image Caption Generator 
http://arxiv.org/pdf/1411.4555v2.pdf
Oriol Vinyals Google vinyals@google.com Alexander Toshev Google toshev@google.com Samy Bengio Google bengio@google.com Dumitru Erhan Google dumitru@google.com 
Abstract 
Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art
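The BLEU-1 scores quoted in the abstract are built on modified (clipped) unigram precision: each candidate word counts only up to the number of times it appears in the reference. A sketch of that clipping step for a single sentence pair (full BLEU also applies a brevity penalty and higher-order n-grams, omitted here):

```python
from collections import Counter

def bleu1(candidate, reference):
    # Modified unigram precision: candidate counts clipped by reference counts.
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    clipped = sum(min(c, ref[w]) for w, c in cand.items())
    return clipped / max(1, sum(cand.values()))

# "the" appears twice in the candidate but once in the reference,
# so it is clipped to one: score is 2/3, not 3/3.
print(bleu1("the the cat", "the cat sat"))
```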


AlexNet

http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
ImageNet Classification with Deep Convolutional Neural Networks
 Alex Krizhevsky University of Toronto kriz@cs.utoronto.ca Ilya Sutskever University of Toronto ilya@cs.utoronto.ca Geoffrey E. Hinton University of Toronto hinton@cs.utoronto.ca
 Abstract
 We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called “dropout” that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
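The "dropout" regularizer mentioned in the abstract zeroes each unit at random during training. A sketch of the inverted-dropout variant common in modern frameworks, which rescales survivors at training time; the original AlexNet paper instead halved activations at test time:

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    # Zero each unit with probability p during training, and rescale
    # survivors by 1/(1-p) so the expected activation is unchanged.
    if not training or p == 0.0:
        return list(activations)
    return [0.0 if rng.random() < p else a / (1.0 - p) for a in activations]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.5))  # survivors doubled, the rest zeroed
```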

AlexNet - GitHub
This model is a replication of the model described in the AlexNet publication. 

SegNet

Alex Kendall, Vijay Badrinarayanan, Roberto Cipolla
http://mi.eng.cam.ac.uk/projects/segnet/

Caffe
http://caffe.berkeleyvision.org/

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!
Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine then deploy to commodity clusters or mobile devices.

Extensible code fosters active development. In Caffe’s first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That's 1 ms/image for inference and 4 ms/image for learning. We believe that Caffe is the fastest convnet implementation available.

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and Github.

* With the ILSVRC2012-winning SuperVision model and caching IO. Consult performance details.

Computer Vision Open Source Code
OpenCV
http://docs.opencv.org/

Java API
http://docs.opencv.org/java/
http://opencv.org/opencv-java-api.html


API
http://docs.opencv.org/modules/refman.html

Downloads
http://opencv.org/downloads.html

Installation
http://docs.opencv.org/doc/tutorials/introduction/table_of_content_introduction/table_of_content_introduction.html

Tutorials
http://docs.opencv.org/doc/tutorials/tutorials.html
http://docs.opencv.org/2.4.4-beta/doc/tutorials/introduction/desktop_java/java_dev_intro.html

JavaCV
https://code.google.com/p/javacv/


Silicon Valley Computer Vision Meetup
http://www.meetup.com/Silicon-Valley-Computer-Vision/events/176686442/

John Brewer on github
https://github.com/jeradesign
https://github.com/jeradesign/spot-it-challenge

Signal & Image Processing : An International Journal (SIPIJ) Vol.3, No.5, October 2012
DOI : 10.5121/sipij.2012.3503 29

AN AUTOMATIC ALGORITHM FOR OBJECT
RECOGNITION AND DETECTION BASED ON ASIFT
KEYPOINTS

Reza Oji
Department of Computer Engineering and IT, Shiraz University
Shiraz, Iran
oji.reza@gmail.com

http://arxiv.org/ftp/arxiv/papers/1211/1211.5829.pdf


http://arxiv.org/abs/1510.00149

Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, William J. Dally
(Submitted on 1 Oct 2015 (v1), last revised 15 Feb 2016 (this version, v5))

Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.
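The first two stages of the pipeline can be caricatured in a few lines: magnitude pruning zeroes small weights, and quantization snaps the survivors onto a small set of shared values. A toy sketch; the paper learns the shared values with k-means and retrains between stages, and the threshold and levels here are made up:

```python
def prune(weights, threshold):
    # Stage 1: magnitude pruning - drop connections with small weights.
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, levels):
    # Stage 2 (simplified): snap each surviving weight to the nearest of a
    # few shared values; storing an index into `levels` needs far fewer bits.
    def nearest(w):
        return min(levels, key=lambda c: abs(c - w))
    return [0.0 if w == 0.0 else nearest(w) for w in weights]

w = [0.02, -0.5, 0.48, -0.03, 0.9]
pruned = prune(w, threshold=0.1)             # [0.0, -0.5, 0.48, 0.0, 0.9]
print(quantize(pruned, levels=[-0.5, 0.5, 1.0]))  # [0.0, -0.5, 0.5, 0.0, 1.0]
```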

FractalNet: Ultra-Deep Neural Networks without Residuals
http://arxiv.org/abs/1605.07648
Gustav Larsson, Michael Maire, Gregory Shakhnarovich
(Submitted on 24 May 2016)
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a single expansion rule generates an extremely deep network whose structural layout is precisely a truncated fractal. Such a network contains interacting subpaths of different lengths, but does not include any pass-through connections: every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. This property stands in stark contrast to the current approach of explicitly structuring very deep networks so that training is a residual learning problem. Our experiments demonstrate that residual representation is not fundamental to the success of extremely deep convolutional neural networks. A fractal design achieves an error rate of 22.85% on CIFAR-100, matching the state-of-the-art held by residual networks.
Fractal networks exhibit intriguing properties beyond their high performance. They can be regarded as a computationally efficient implicit union of subnetworks of every depth. We explore consequences for training, touching upon connection with student-teacher behavior, and, most importantly, demonstrating the ability to extract high-performance fixed-depth subnetworks. To facilitate this latter task, we develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. With such regularization, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.


faception
https://docs.com/flavio-bernardotti/9946/faception
"Our personality is determined by our DNA and reflected in our face. It's kind of a signal."
According to social and life science research, personalities change or stay the same depending on one's genes. Thus, experts believe that people's faces are reflections of their DNA.
As of the moment, Faception has revealed 15 personality types, as reported by Rt.com. These include extrovert, genius, academic researcher, professional poker player, bingo player, brand promoter, white collar offender, paedophile and terrorist.
Faception has allegedly been able to successfully identify the nine terrorists who were the culprits of the November terror attacks in Paris, as reported by The Daily Mail UK.


Friday, April 18, 2014

BOOKS ONLINE

SAFARI BOOKS ONLINE

http://techbus.safaribooksonline.com/?uicode=oracle

MIT BOOKS ONLINE

Structure and Interpretation of Computer Programs (SICP), the basis for MIT's venerable 6.001:
http://mitpress.mit.edu/sicp/full-text/book/book.html

Arshak Navruzyan

Organizer


VP Product at Argyle Data.

Do you currently work with machine learning in your work or studies?

Using Oryx, MLlib & Oxdata

Which area of machine learning most interests you?

SVM, RF, ANN, Deep Learning

Demis Hassabis (born 27 July 1976) is a British computer game designer, artificial intelligence programmer, neuroscientist and world-class games player.

http://en.wikipedia.org/wiki/Demis_Hassabis

Neural Networks Books & Papers


Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition [Hardcover]
by Sandhya Samarasinghe
http://www.amazon.com/dp/084933375X




http://www.cs.cmu.edu/~guestrin/Class/10701-S05/slides/NNet-CrossValidation-2-2-2005.pdf

Neural Network Learning

[BOOK] Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the" echo state network" approach

H Jaeger - 2002 - pdx.edu

[PDF] A general method for multi-agent reinforcement learning in unrestricted environments

J Schmidhuber - Adaptation, Coevolution and Learning in Multiagent …, 1996 - aaai.org

From perception-action loops to imitation processes: A bottom-up approach of learning by imitation

P Gaussier, S Moga, M Quoy… - Applied Artificial …, 1998 - Taylor & Francis

Neural Networks and the Backpropagation Algorithm
http://jeremykun.com/2012/12/09/neural-networks-and-backpropagation/
Posted on December 9, 2012 by j2kun
Neurons, as an Extension of the Perceptron Model
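The chain-rule computation the post builds up to can be shown on a single sigmoid neuron: the backpropagated error is the loss gradient times the sigmoid derivative y(1-y), scaled by each input to update the weights. A minimal sketch for one training example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One sigmoid neuron trained by gradient descent on a single example,
# with squared-error loss L = 0.5 * (y - t)^2, so dL/dw = (y - t) * y * (1 - y) * x.
w, b, lr = 0.0, 0.0, 1.0
x, t = 1.0, 1.0  # input and target
for _ in range(200):
    y = sigmoid(w * x + b)
    delta = (y - t) * y * (1.0 - y)  # backpropagated error signal
    w -= lr * delta * x
    b -= lr * delta
print(round(sigmoid(w * x + b), 2))  # output has moved toward the target 1.0
```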

Google DeepMind Code
How to Code and Understand DeepMind's Neural Stack Machine
Learning to Transduce with Unbounded Memory

https://iamtrask.github.io/2016/02/25/deepminds-neural-stack-machine/
Posted by iamtrask on February 25, 2016

Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) Hardcover

http://www.amazon.com/dp/0262018020

Introduction to the Math of Neural Networks [Kindle Edition]

Jeff Heaton 
http://www.amazon.com/dp/B00845UQL6/
Learning Deep Architectures for AI (Foundations and Trends(r) in Machine Learning) Paperback
by Yoshua Bengio
http://www.amazon.com/Learning-Architectures-Foundations-Trends-Machine/dp/1601982941/

Thursday, April 17, 2014