Monday, November 7, 2016

ML Text Generation Problems and Solutions



lstm generating text


TensorFlow using LSTMs for generating text
http://stackoverflow.com/questions/36609920/tensorflow-using-lstms-for-generating-text

text generation with RNN

A: Transforming text with a neural network
Implementing seq2seq with sampled decoder outputs
http://stackoverflow.com/questions/36228723/implementing-seq2seq-with-sampled-decoder-outputs/36246038#36246038


Q: RNN for End-End Speech Recognition using TensorFlow
http://stackoverflow.com/questions/38385292/rnn-for-end-end-speech-recognition-using-tensorflow


Q: Tensorflow Android demo: load a custom graph in?
http://stackoverflow.com/questions/39318586/tensorflow-android-demo-load-a-custom-graph-in


building up a stacked LSTM model for text classification in TensorFlow
http://stackoverflow.com/questions/34790159/stacked-rnn-model-setup-in-tensorflow

A Practical Guide for Debugging Tensorflow Codes
Jongwook Choi
June 18th, 2016
Latest Update: Dec 9th, 2016
https://github.com/wookayin/TensorflowKR-2016-talk-debugging

Generative Adversarial Networks


NIPS 2016 Tutorial: Generative Adversarial Networks

https://arxiv.org/abs/1701.00160
Ian Goodfellow
(Submitted on 31 Dec 2016 (v1), last revised 5 Jan 2017 (this version, v2))
This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.

StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks
https://arxiv.org/abs/1612.03242
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, Dimitris Metaxas
(Submitted on 10 Dec 2016)
Synthesizing photo-realistic images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose stacked Generative Adversarial Networks (StackGAN) to generate photo-realistic images conditioned on text descriptions. The Stage-I GAN sketches the primitive shape and basic colors of the object based on the given text description, yielding Stage-I low resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high resolution images with photo-realistic details. The Stage-II GAN is able to rectify defects and add compelling details with the refinement process. Samples generated by StackGAN are more plausible than those generated by existing approaches. Importantly, our StackGAN for the first time generates realistic 256 x 256 images conditioned on only text descriptions, while state-of-the-art methods can generate at most 128 x 128 images. To demonstrate the effectiveness of the proposed StackGAN, extensive experiments are conducted on CUB and Oxford-102 datasets, which contain enough object appearance variations and are widely-used for text-to-image generation analysis.

Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning.
(arXiv:1702.07464v1 [cs.CR]) 
In recent years, a branch of machine learning called Deep Learning has become incredibly popular thanks to the ability of a new class of algorithms to model and interpret a large quantity of data in a similar way to humans. Properly training deep learning models involves collecting a vast amount of users' private data, including habits, geographical positions, interests, and much more. Another major issue is that it is possible to extract from trained models useful information about the training set and this hinders collaboration among distrustful participants or parties that deal with sensitive information.

To tackle this problem, collaborative deep learning models have recently been proposed where parties share only a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy to make information extraction even more challenging, as shown by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates valid samples of the targeted training set that was meant to be private. Interestingly, we show that differential privacy applied to shared parameters of the model as suggested at CCS'15 and CCS'16 is utterly futile. In our generative model attack, all techniques adopted to scramble or obfuscate shared parameters in collaborative deep learning are rendered ineffective with no possibility of a remedy under the threat model considered.


Sequence Modeling via Segmentations
https://arxiv.org/abs/1702.07463
Chong Wang, Yining Wang, Po-Sen Huang, Abdelrahman Mohamed, Dengyong Zhou, Li Deng
(Submitted on 24 Feb 2017)
Segmental structure is a common pattern in many types of sequences such as phrases in human languages. In this paper, we present a probabilistic model for sequences via their segmentations. The probability of a segmented sequence is calculated as the product of the probabilities of all its segments, where each segment is modeled using existing tools such as recurrent neural networks. Since the segmentation of a sequence is usually unknown in advance, we sum over all valid segmentations to obtain the final probability for the sequence. An efficient dynamic programming algorithm is developed for forward and backward computations without resorting to any approximation. We demonstrate our approach on text segmentation and speech recognition tasks. In addition to quantitative results, we also show that our approach can discover meaningful segments in their respective application contexts.
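The forward computation the abstract describes (summing over all valid segmentations with dynamic programming) can be sketched in a few lines. This is a toy illustration, not the paper's implementation; `segment_prob` is a hypothetical stand-in for a learned segment model such as an RNN:

```python
def sequence_prob(tokens, segment_prob, max_len=3):
    """Sum the probability of a sequence over all of its segmentations:
    alpha[t] = sum over lengths j of alpha[t-j] * segment_prob(segment)."""
    alpha = [0.0] * (len(tokens) + 1)
    alpha[0] = 1.0                      # empty prefix has probability 1
    for t in range(1, len(tokens) + 1):
        for j in range(1, min(max_len, t) + 1):
            alpha[t] += alpha[t - j] * segment_prob(tokens[t - j:t])
    return alpha[-1]

# Toy segment model: probability halves with each extra token.
p = sequence_prob(list("abcd"), lambda seg: 0.5 ** len(seg))
```

The inner loop enumerates the last segment's length, so every valid segmentation is counted exactly once without explicit enumeration.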


Hidden Community Detection in Social Networks

We introduce a new paradigm that is important for community detection in the realm of network analysis. Networks contain a set of strong, dominant communities, which interfere with the detection of weak, natural community structure. When most of the members of the weak communities also belong to stronger communities, they are extremely hard to uncover. We call the weak communities the hidden community structure.
We present a novel approach called HICODE (HIdden COmmunity DEtection) that identifies the hidden community structure as well as the dominant community structure. By weakening the strength of the dominant structure, one can uncover the hidden structure beneath. Likewise, by reducing the strength of the hidden structure, one can more accurately identify the dominant structure. In this way, HICODE tackles both tasks simultaneously.
Extensive experiments on real-world networks demonstrate that HICODE outperforms several state-of-the-art community detection methods in uncovering both the dominant and the hidden structure. In the Facebook university social networks, we find multiple non-redundant sets of communities that are strongly associated with residential hall, year of registration, or career position of the faculty or students, while the state-of-the-art algorithms mainly locate the dominant ground truth category. Due to the difficulty of labeling all ground truth communities in real-world datasets, HICODE provides a promising approach to pinpoint the existing latent communities and uncover communities for which there is no ground truth. Finding this unknown structure is an extremely important community detection problem.


important for NLP - a larger role for rare words, a smaller role for frequent words
implemented in ADAGRAD
ADAGRAD - adaptive learning rates for each parameter
Related paper:
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, Duchi et al 2010
Learning rate is adapting differently for each parameter and rare parameters get larger updates than frequently occurring parameters. Word vectors!
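The note above can be made concrete with a minimal sketch of the Adagrad update (scalar parameters only; the paper also covers the full proximal and matrix settings):

```python
import math

def adagrad_update(params, grads, cache, lr=0.1, eps=1e-8):
    """One Adagrad step: each parameter's effective learning rate is
    lr / sqrt(sum of its past squared gradients), so rarely-updated
    parameters (e.g. rare-word vectors) take larger steps."""
    for i in range(len(params)):
        cache[i] += grads[i] ** 2                       # accumulate squared gradients
        params[i] -= lr * grads[i] / (math.sqrt(cache[i]) + eps)
    return params, cache

# Same current gradient, but parameter 1 has a long update history,
# so its step is much smaller than parameter 0's.
params, cache = adagrad_update([0.0, 0.0], [1.0, 1.0], cache=[0.0, 100.0])
```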

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

http://www.jmlr.org/papers/v12/duchi11a.html
John Duchi, Elad Hazan, Yoram Singer; 12(Jul):2121−2159, 2011.
Abstract
We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms.
Keywords: subgradient methods, adaptivity, online learning, stochastic convex optimization

http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
John Duchi JDUCHI@CS.BERKELEY.EDU Computer Science Division University of California, Berkeley Berkeley, CA 94720 USA
Elad Hazan EHAZAN@IE.TECHNION.AC.IL Technion - Israel Institute of Technology Technion City Haifa, 32000, Israel
Yoram Singer SINGER@GOOGLE.COM Google 1600 Amphitheatre Parkway Mountain View, CA 94043 USA

Use the rectified linear unit (ReLU) instead of tanh and sigmoid. ReLU is zero when x is negative and the identity (slope 1) when x is positive.
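A one-line comparison of the two activations (illustrative sketch):

```python
import math

def relu(x):
    return max(0.0, x)   # zero for x < 0, identity for x > 0

# ReLU's gradient is exactly 1 for any positive input, while tanh
# saturates: its gradient 1 - tanh(x)**2 vanishes for large |x|.
outs = [relu(x) for x in (-2.0, -0.5, 0.5, 2.0)]
tanh_slope_at_3 = 1.0 - math.tanh(3.0) ** 2   # nearly zero: saturated
```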


Deep Learning Tricks of the Trade

Prevent Feature Co-adaptation by Dropout (Geoffrey Hinton et al. 2012) -
randomly set 50% of the inputs to each neuron to 0
paper -
Improving neural networks by preventing co-adaptation of feature detectors
https://arxiv.org/abs/1207.0580
Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov
(Submitted on 3 Jul 2012)
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.
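The idea can be sketched as follows. This uses "inverted" dropout, the common modern variant that scales activations at training time, rather than scaling weights at test time as in the original paper:

```python
import random

def dropout(inputs, p=0.5, rng=None):
    """Zero each input with probability p; scale survivors by 1/(1-p)
    so the expected activation matches the no-dropout test-time pass."""
    rng = rng or random.Random(0)   # fixed seed for a reproducible demo
    return [x / (1.0 - p) if rng.random() >= p else 0.0 for x in inputs]

out = dropout([1.0] * 10, p=0.5)   # roughly half the entries become 0.0
```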



Random hyperparameter search!

in a paper - Y. Bengio (2012), Practical Recommendations for Gradient-Based Training of Deep Architectures

1. Unsupervised pre-training
2. Stochastic gradient descent and setting learning rates
3. Main hyper-parameters
learning rate schedule & early stopping,
mini-batches,
parameter initialization,
number of hidden units, regularization (= weight decay)
4. How to efficiently search for hyper-parameter configurations
short answer: Random hyperparameter search!
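A minimal sketch of random hyper-parameter search; the objective here is a hypothetical stand-in for a real train-and-validate run:

```python
import math
import random

rng = random.Random(0)

def sample_config():
    """Sample scale-type hyper-parameters log-uniformly, as Bengio recommends."""
    return {
        "lr": 10 ** rng.uniform(-5, -1),            # learning rate, log scale
        "hidden_units": rng.choice([64, 128, 256, 512]),
        "weight_decay": 10 ** rng.uniform(-6, -2),
    }

def validation_error(cfg):
    # Hypothetical stand-in: pretend lr near 1e-3 with tiny weight decay is best.
    return abs(math.log10(cfg["lr"]) + 3) + 10 * cfg["weight_decay"]

best = min((sample_config() for _ in range(30)), key=validation_error)
```

Unlike grid search, each trial explores a fresh value of every hyper-parameter, which is why random search covers the important dimensions more efficiently.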

Practical Recommendations for Gradient-Based Training of Deep Architectures
https://arxiv.org/pdf/1206.5533.pdf

Yoshua Bengio Version 2, Sept. 16th, 2012
 Abstract
Learning algorithms related to artificial neural networks and in particular for Deep Learning may seem to involve many bells and whistles, called hyperparameters. This chapter is meant as a practical guide with recommendations for some of the most commonly used hyper-parameters, in particular in the context of learning algorithms based on back-propagated gradient and gradient-based optimization. It also discusses how to deal with the fact that more interesting results can be obtained when allowing one to adjust many hyper-parameters. Overall, it describes elements of the practice used to successfully and efficiently train and debug large-scale and often deep multi-layer neural networks. It closes with open questions about the training difficulties observed with …

 Some more advanced and recent tricks in later lectures.

Language Models:

A language model computes a probability for a sequence of words
Probability is usually conditioned on window of n previous words
Very useful for a lot of tasks:
Can be used to determine whether a sequence is a good grammatical translation or speech utterance.
Example: going home vs going house
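The "going home vs going house" example can be made concrete with the simplest possible count-based model; a smoothed bigram model over a made-up toy corpus stands in here for the neural language models the lecture builds toward:

```python
from collections import Counter

corpus = "i am going home . she is going home . he built a house .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2, alpha=0.1):
    """P(w2 | w1) with add-alpha smoothing over the corpus vocabulary."""
    return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * len(unigrams))

p_home = bigram_prob("going", "home")    # seen in the corpus
p_house = bigram_prob("going", "house")  # unseen: only smoothing mass
```

The model assigns the grammatical continuation a much higher conditional probability, which is exactly the signal used to rank candidate translations or speech hypotheses.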

Recurrent Neural Networks

Solution: Condition the neural network on all previous words and tie the weight at each time step
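A scalar toy version of that idea: one shared set of weights applied at every time step, with the hidden state carrying information from all previous words.

```python
import math

def rnn_step(h_prev, x, W=0.5, U=1.0, b=0.0):
    """One recurrent step: h_t = tanh(W * h_{t-1} + U * x_t + b).
    The same W, U, b are reused (tied) at every time step."""
    return math.tanh(W * h_prev + U * x + b)

h = 0.0                        # initial hidden state
for x in [1.0, 0.0, 1.0]:      # the hidden state conditions on all
    h = rnn_step(h, x)         # previous inputs through the recurrence
```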

















































Sunday, October 9, 2016

MIRI - Machine Intelligence Research Institute, Berkeley, CA

MIRI is releasing a paper introducing a new model of deductively limited reasoning: “Logical induction,” authored by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, myself, and Jessica Taylor. Readers may wish to start with the abridged version.

https://intelligence.org/2016/09/12/new-paper-logical-induction/
New paper: “Logical induction”

September 12, 2016 | Nate Soares | Papers

Sunday, July 31, 2016

immigration

SWEDEN

Immigration Meltdown Sweden 2025 - immigration destroying safety, freedom and equality
https://www.youtube.com/watch?v=9Lc-gDHgjrQ

For the love of Odin- dig deep, find your inner Viking spirit & rise...

Sweden is Kicking Everyone Out Now

https://www.youtube.com/watch?v=1L8FDHGelJo

Sweden's Migrant Rape Epidemic
https://www.youtube.com/watch?v=sdGPPLmR5Bc

Muslims go crazy during anti islamic movie in sweden
https://www.youtube.com/watch?v=gMtJzV3O4NI
This European experiment of bringing in all these so-called refugees is a failure. Why aren't all these men back home fighting for their country instead of running away? They are bringing their evil 6th-century culture into a modern society and trying to impose their system on the majority. Merkel in Germany has certainly failed her citizens, with all kinds of rapes and sexual violence toward women, whom these idiots consider below them. All of them need to be deported and thrown out of the host country. Leave it up to a liberal to fk things up

Brawl at Swedish city hall
https://www.youtube.com/watch?v=e7l8IuHeuC0

CANADA

Robert Spencer in Ottawa April 13: The real motive for Islamic migration to the West
https://www.youtube.com/watch?v=7lOGHgPK0ug

Islam is an obnoxious weed. It is a medieval, evil, barbaric, regressive ideology. It is sick. Please Canada do not take any more migrants and return the ones already taken or convert them to Christianity/Judaism in a mass conversion program. Amen to you people in the West who do not understand the evil ideology of Islam which we in the East have had to come to terms with.







Saturday, July 16, 2016

political investigations


public intelligence
SAUDI ARABIA, UNITED STATES
Declassified 28 Pages From Congressional 9/11 Investigation
July 15, 2016
https://info.publicintelligence.net/US-911-28-Pages.pdf

transcript of Obama's speech at DNC 2016
http://www.latimes.com/politics/la-na-pol-obama-2016-convention-speech-transcript-20160727-snap-story.html


Trojanized Propaganda App Uses Twitter to Infect, Spy on Terrorist Sympathizers
By McAfee Labs on Jul 26, 2016
https://blogs.mcafee.com/mcafee-labs/trojanized-propaganda-app-uses-twitter-to-infect-spy-on-terrorist-sympathizers/

Cybercrime Exposed
Cybercrime-as-a-Service
 By Raj Samani, Vice President and CTO, EMEA, McAfee François Paget, Senior Threat Research Engineer, McAfee® Labs
http://www.mcafee.com/us/resources/white-papers/wp-cybercrime-exposed.pdf

Eurabia - The Islamization of Europe
https://www.youtube.com/watch?v=2H3DTygAhjQ

american history
Gavin McInnes and Jim Goad the greatest writer of our generation
https://www.youtube.com/watch?v=lClAzbXsnBQ

Sidney Blumenthal
For: Hillary..From: Sid Re – Syria, Turkey, Israel, Iran
https://wikileaks.org/clinton-emails/emailid/12171
https://seeker401.wordpress.com/2016/06/06/for-hillary-from-sid-re-syria-turkey-israel-iran/
SOURCE: Sources with access to the highest levels of the Governments and institutions discussed below. This includes political parties and regional intelligence and security services.

Was a Trump Server Communicating With Russia?
http://www.slate.com/articles/news_and_politics/cover_story/2016/10/was_a_server_registered_to_the_trump_organization_communicating_with_russia.html
This spring, a group of computer scientists set out to determine whether hackers were interfering with the Trump campaign. They found something they weren’t expecting.
By Franklin Foer

Trump’s Server, Revisited
http://www.slate.com/articles/news_and_politics/politics/2016/11/the_trump_server_evaluating_new_evidence_and_countertheories.html
Sorting through the new evidence, and competing theories, about the Trump server that appeared to be communicating with a Russian bank.
By Franklin Foer

Rubio questions David Friedman at ambassador to Israel hearing
https://youtu.be/zSX7aBlTXXs

on the issues
http://www.ontheissues.org/Hillary_Clinton.htm

http://www.ontheissues.org/Donald_Trump.htm









Sunday, June 19, 2016

DeepMind Research


Demis Hassabis
Research Homepage
http://demishassabis.com/



DEEP REINFORCEMENT LEARNING
FRIDAY, 17TH JUNE, 2016
by David Silver, Google DeepMind

https://deepmind.com/blog


DECOUPLED NEURAL INTERFACES USING SYNTHETIC GRADIENTS
MONDAY, 29TH AUGUST, 2016
https://deepmind.com/blog#decoupled-neural-interfaces-using-synthetic-gradients
by Max Jaderberg, DeepMind
Neural networks are the workhorse of many of the algorithms developed at DeepMind. For example, AlphaGo uses convolutional neural networks to evaluate board positions in the game of Go and DQN and Deep Reinforcement Learning algorithms use neural networks to choose actions to play at super-human level on video games.

This post introduces some of our latest research in progressing the capabilities and training procedures of neural networks called Decoupled Neural Interfaces using Synthetic Gradients. This work gives us a way to allow neural networks to communicate, to learn to send messages between themselves, in a decoupled, scalable manner paving the way for multiple neural networks to communicate with each other or improving the long term temporal dependency of recurrent networks. This is achieved by using a model to approximate error gradients, rather than by computing error gradients explicitly with backpropagation. The rest of this post assumes some familiarity with neural networks and how to train them. If you’re new to this area we highly recommend Nando de Freitas lecture series on Youtube on deep learning and neural networks.

https://scholar.google.com/citations
What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated
D Kumaran, D Hassabis, JL McClelland
Trends in Cognitive Sciences 20 (7), 512-534
2016
Model-Free Episodic Control
C Blundell, B Uria, A Pritzel, Y Li, A Ruderman, JZ Leibo, J Rae, ...
arXiv preprint arXiv:1606.04460
2016
Neural Mechanisms of Hierarchical Planning in a Virtual Subway Network
J Balaguer, H Spiers, D Hassabis, C Summerfield
Neuron 90 (4), 893-903
Cited by 1, 2016
Mastering the game of Go with deep neural networks and tree search
D Silver, A Huang, CJ Maddison, A Guez, L Sifre, G Van Den Driessche, ...
Nature 529 (7587), 484-489
Cited by 120, 2016
Approximate Hubel-Wiesel Modules and the Data Structures of Neural Computation
JZ Leibo, J Cornebise, S Gómez, D Hassabis
arXiv preprint arXiv:1512.08457
Cited by 1, 2015
Hippocampal place cells construct reward related sequences through unexplored space
HF Olafsdottir, C Barry, AB Saleem, D Hassabis, HJ Spiers
Elife 4, e06063
Cited by 18, 2015
Human-level control through deep reinforcement learning
V Mnih, K Kavukcuoglu, D Silver, AA Rusu, J Veness, MG Bellemare, ...
Nature 518 (7540), 529-533
Cited by 358, 2015
A goal direction signal in the human entorhinal/subicular region
MJ Chadwick, AEJ Jolly, DP Amos, D Hassabis, HJ Spiers
Current Biology 25 (1), 87-92
Cited by 18, 2015
Foraging under competition: the neural basis of input-matching in humans
D Mobbs, D Hassabis, R Yu, C Chu, M Rushworth, E Boorman, ...
The Journal of Neuroscience 33 (23), 9866-9872
Cited by 10, 2013
Imagine all the people: how the brain creates and uses personality models to predict behavior
D Hassabis, RN Spreng, AA Rusu, CA Robbins, RA Mar, DL Schacter
Cerebral Cortex, bht042
Cited by 49, 2013
Detecting representations of recent and remote autobiographical memories in vmPFC and hippocampus
HM Bonnici, MJ Chadwick, A Lutti, D Hassabis, N Weiskopf, EA Maguire
The journal of neuroscience 32 (47), 16982-16991
Cited by 50, 2012
The future of memory: remembering, imagining, and the brain
DL Schacter, DR Addis, D Hassabis, VC Martin, RN Spreng, KK Szpunar
Neuron 76 (4), 677-694
Cited by 300, 2012
Multi-voxel pattern analysis in human hippocampal subfields
HB Bonnici, M Chadwick, D Kumaran, D Hassabis, N Weiskopf, ...
Frontiers in human neuroscience 6, 290
Cited by 47, 2012
Decoding representations of scenes in the medial temporal lobes
HM Bonnici, D Kumaran, MJ Chadwick, N Weiskopf, D Hassabis, ...
Hippocampus 22 (5), 1143-1153
Cited by 48, 2012
Scene construction in amnesia: An fMRI study
SL Mullally, D Hassabis, EA Maguire
The Journal of Neuroscience 32 (16), 5646-5653
Cited by 53, 2012
Is the brain a good model for machine intelligence?
R Brooks, D Hassabis, D Bray, A Shashua
Nature 482 (7386), 462-463
Cited by 2, 2012
Decoding overlapping memories in the medial temporal lobes using high-resolution fMRI
MJ Chadwick, D Hassabis, EA Maguire
Learning & Memory 18 (12), 742-746
Cited by 31, 2011
Role of the hippocampus in imagination and future thinking
EA Maguire, D Hassabis
Proceedings of the National Academy of Sciences 108 (11), E39-E39
Cited by 51, 2011
Imagining fictitious and future experiences: Evidence from developmental amnesia
EA Maguire, F Vargha-Khadem, D Hassabis
Neuropsychologia 48 (11), 3187-3192
Cited by 75, 2010
Differential engagement of brain regions within a ‘core’ network during scene construction
JJ Summerfield, D Hassabis, EA Maguire
Neuropsychologia 48 (5), 1501-1509
Cited by 66, 2010




Saturday, June 4, 2016

Voice First Dynamic Architecture and Context Retention

Voice First Dynamic Architecture and Context Retention

1. Viv
 http://viv.ai/
offers from Facebook and Google
Voice Personal Assistant, Voice Commerce, Voice First, Voice Conversation

(65 Labs) patent 2015
Marcello Bastea-Forte, et al
Dynamically evolving cognitive architecture system based on third-party developers US 20140380263 A1
http://www.google.com/patents/US20140380263
second generation of Siri
VIV.ai website: "Viv is an artificial intelligence platform that enables developers to distribute their products through an intelligent, conversational interface. It’s the simplest way for the world to interact with devices, services and things everywhere. Viv is taught by the world, knows more than it is taught, and learns every day."


2. VocalIQ
bought by Apple

VocalIQ, which was spun out of the University of Cambridge’s Dialogue Systems Group, uses deep learning to improve language recognition, with a focus on trying to understand the context in which commands are given.

The company is led by chief executive Blaise Thomson, a South Africa-born mathematician, and chairman Steve Young, a professor of Information Engineering at Cambridge. It raised £750,000 in seed funding last year, led by Amadeus Capital Partners, the venture capital firm.

VocalIQ was formed in March 2011 to exploit technology developed by the Spoken Dialogue Systems Group at University of Cambridge, UK. Still based in Cambridge, the company builds a platform for voice interfaces, making it easy for everybody to voice enable their devices and apps. Example application areas include smartphones, robots, cars, call-centres, and games.

company website - http://vocaliq.com

investor - http://parkwalkadvisors.com/newsletter/newsletter-2014-06-20/

VocalIQ was formed in March 2011 to exploit technology developed by the Spoken Dialogue Systems Group at University of Cambridge, UK. Still based in Cambridge, the company has a B2B focus, helping other companies and developers build spoken language interfaces. Example application areas include smartphones, robots, cars, call-centres, and games.

More than a billion smart devices were shipped in 2013, with input interfaces that are difficult to use and voice interaction often available but seldom used. VocalIQ’s proprietary technology dramatically improves the performance of voice-based systems and simplifies the authoring process for new applications.

The company provides a layer of middleware that sits between the speech recogniser and the application. This middleware implements machine learning algorithms which interpret and track the user’s intentions, and automatically determine the most appropriate response back to the user.

More detail can be found on the company's website https://www.crunchbase.com/organization/vocaliq#/entity

Based on award-winning research from the University of Cambridge, VocalIQ uses state-of-the-art techniques for all its components. These technologies have been tested in various settings, showing significant increases in performance compared to traditional approaches typically used in industry. Specific benefits include increased success rates, shorter dialogs, and reduced development costs.
Semantic decoding: Before deciding how the system should respond, it is important to work out what the user meant by what they said. There are always many ways to express the same thing in a conversation. Deciphering this meaning is the task of the semantic decoder. VocalIQ has developed various machine learning approaches to learning the meaning of a sequence of words automatically, and it provides this technology as part of its products.

Dialog management: Deciding how to respond to each user input is the task of the dialog manager. By integrating everything that might have been said in the dialog, including possible errors, we have been able to show significant improvements in the decision making performance.

Language generation: System prompts and responses to questions are designed by the application developer using simple template rules. These are then conveyed to the user via a text-to-speech engine.

3. Amazon - 1000 people working on the next generation of Alexa

Brian Roemmele




Sunday, April 10, 2016

The Psychology of Persuasion

The Psychology of Persuasion

Source: Boundless. “The Psychology of Persuasion.” Boundless Communications. Boundless, 21 Jul. 2015. 
https://www.boundless.com/communications/textbooks/boundless-communications-textbook/persuasive-speaking-14/introduction-to-persuasive-speaking-72/the-psychology-of-persuasion-285-4176/

two psychological theories of persuasion









  • each person is unique, so there is no single psychological key to persuasion.
  • Cialdini proposed six psychological persuasive techniques: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity.
  • The Relationship Based Persuasion technique has four steps: survey the situation, confront the five barriers to a successful influence encounter, make the pitch, and secure the commitments.
  • social proof
    People tend to do things that they see others are doing.
  • reciprocity
    the responses of individuals to the actions of others
  • There is no single key to a successful persuasive speech. Some people take longer than others to be persuaded, and some respond to different persuasion techniques. Therefore, persuasive speakers should be cognizant of audience characteristics to customize the pitch.
    The psychology of persuasion is best exemplified by two theories that try to explain how people are influenced.
    Robert Cialdini, in his book on persuasion, defined six "weapons of influence:"
    1. Reciprocity: People tend to return a favor. In Cialdini's conferences, he often uses the example of Ethiopia providing thousands of dollars in humanitarian aid to Mexico just after the 1985 earthquake, despite Ethiopia suffering from a crippling famine and civil war at the time. Ethiopia had been reciprocating for the diplomatic support Mexico provided when Italy invaded Ethiopia in 1937.
    2. Commitment and Consistency: Once people commit to what they think is right, they are more likely to honor that commitment even if the original motivation is subsequently removed. For example, in car sales, suddenly raising the price at the last moment works because buyers have already decided to buy.
    3. Social Proof: People will do things they see other people are doing. In one experiment, if one or more people looked up into the sky, bystanders would then look up to see what they could see. This experiment was aborted, as so many people looked up that they stopped traffic.
    4. Authority: People will tend to obey authority figures, even if they are asked to perform objectionable acts. Cialdini cites incidents like the Milgram experiments in the early 1960s and the My Lai massacre in 1968.
    5. Liking: People are easily persuaded by other people whom they like. Cialdini cites the marketing of Tupperware, wherein people were more likely to buy from others they liked. Some of the biases favoring more attractive people are discussed, but generally more aesthetically pleasing people tend to use this influence over others.
    6. Scarcity: Perceived scarcity will generate demand. For example, saying that offers are available for a "limited time only" encourages sales.
    The second theory is called Relationship Based Persuasion. It was developed by Richard Shell and Mario Moussa. The overall theory is that persuasion is the art of winning over others. Their four step approach is:
    1. Survey your situation: This step includes an analysis of the persuader's situation, goals and challenges.
    2. Confront the five barriers: Five obstacles pose the greatest risks to a successful influence encounter - relationships, credibility, communication mismatches, belief systems, and interest and needs.
    3. Make your pitch: People need a solid reason to justify a decision, yet at the same time many decisions are made on basis of intuition. This step also deals with presentation skills.
    4. Secure your commitments: In order to safeguard the longtime success of a persuasive decision, it is vital to deal with politics at both the individual and organizational level.
  • HBR
    Harnessing the Science of Persuasion

    Robert B. Cialdini
    FROM THE OCTOBER 2001 ISSUE
    https://hbr.org/2001/10/harnessing-the-science-of-persuasion
    Executive Summary

    If leadership, at its most basic, consists of getting things done through others, then persuasion is one of the leader’s essential tools. Many executives have assumed that this tool is beyond their grasp, available only to the charismatic and the eloquent. Over the past several decades, though, experimental psychologists have learned which methods reliably lead people to concede, comply, or change. Their research shows that persuasion is governed by several principles that can be taught and applied. The first principle is that people are more likely to follow someone who is similar to them than someone who is not. Wise managers, then, enlist peers to help make their cases. Second, people are more willing to cooperate with those who are not only like them but who like them, as well. So it’s worth the time to uncover real similarities and offer genuine praise. Third, experiments confirm the intuitive truth that people tend to treat you the way you treat them. It’s sound policy to do a favor before seeking one. Fourth, individuals are more likely to keep promises they make voluntarily and explicitly. The message for managers here is to get commitments in writing. Fifth, studies show that people really do defer to experts. So before they attempt to exert influence, executives should take pains to establish their own expertise and not assume that it’s self-evident. Finally, people want more of a commodity when it’s scarce; it follows, then, that exclusive information is more persuasive than widely available data. By mastering these principles–and, the author stresses, using them judiciously and ethically–executives can learn the elusive art of capturing an audience, swaying the undecided, and converting the opposition.

    Robert B. Cialdini is the Regents’ Professor of Psychology at Arizona State University and the author of Influence: Science and Practice (Allyn & Bacon, 2001), now in its fourth edition. Further regularly updated information about the influence process can be found at www.influenceatwork.com.

    Robert Cialdini - Harnessing The Science Of Persuasion.pdf
    http://content.yudu.com/Library/A17ln5/RobertCialdiniHarnes/resources/1.htm
    saved to disk

    Scott Adams
    @ScottAdamsSays
    Creator of Dilbert. You might like my book about success: http://amzn.to/1oTGu8x
    Periscope session mentioning Hillary Clinton's soaring campaign since she started working with persuasion guru Robert B. Cialdini, or somebody like him.
    https://twitter.com/ScottAdamsSays/status/762366963184078848

    Charisma on Command - YouTube
    https://www.youtube.com/user/charismaoncommand
    Why Trump Will SMASH Hillary
    https://www.youtube.com/watch?v=LibRNYJmZ-I

    Make the audience feel that they should be a part of your success: @SirineFad on startup pitches | on @EntMagazineME
    https://www.entrepreneur.com/article/278843
    Pitch Perfect: Four Tips To Tell Your Startup Story Better

    What do investors want?

    That is the “golden question” we often hear in the startup community, a topic that headlines several pitching workshops. However, that is not the question entrepreneurs should be asking. Regardless of what the investor wants, there are a few fundamental techniques entrepreneurs must master before going out and pitching their startups. Here are a few tips to help you do just that:
    1. Master your one-liner

    If you can’t describe the entire concept of your business in one line, you’re not ready. This is one of the toughest exercises entrepreneurs have to endure, and for some, one of the longest. A one-liner is not a summary, though; it’s an opener for interesting questions. The best one-liners are the intriguing ones; they contain one or more attention-grabbing keywords that make people want to learn more about the startup.

    A one-liner is the essence of your startup: what your product is, how users use it, and how big the opportunity is for this product. Throw in the keywords that are dealmakers: these can relate to your USP (how is your product or your model different), or to the cutting-edge technology that you are introducing.

    The difficulty in doing this lies in squeezing in and structuring a great deal of information in a logical, impactful, and audience-agnostic manner. Anyone, regardless of how tech or business savvy they are, should be able to understand what your core concept is.


    For instance, a design software startup can be described as follows: a cross-platform plug-in that allows any user to perform a given action in less time and at lower cost. You can choose to add the size of the market you are addressing or how innovative this solution is.
    2. The dominator effect: show off what you’ve got

    Tactfully and modestly, guide and accompany your audience at your pitch to their own conclusion, which should be one that contains statements and assumptions such as “this startup is going to dominate the market,” or “this team is on to something big.”

    But remember that showcasing ambition can sometimes be irritating to your audience; you need to learn how to package it. You need to show them that you have what it takes to dominate the market. You want to make them feel that they should be part of the success you will become, thanks to their support.

    To dominate your market, you need to understand it. Design thinking is one of the best methodologies that help you understand your users and develop solutions they will want to use. In this regard, talking to your users is beyond crucial; it should be mandatory.

    To dominate is to win over your competitors. One of the best strategists out there, Sun Tzu, once said: “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained, you will also suffer a defeat.” But knowing your competitors doesn’t mean a mere listing of who they are and what they offer; it’s a thorough understanding of why users select them, and what you need to do to attract those users and retain them.

    3. The lean wax-on, wax-off: understand your users’ journey

    We often meet entrepreneurs coming up with solutions to problems they have never faced. Before you launch your MVP, take the time to experience and understand your users’ journey. Become your user, and invest time and effort in demystifying the smallest and most boring of tasks or users’ constraints. By doing so, you will discover new ways of innovating both your product and your model in a manner that ensures growth and scalability.

    The founders of a tech startup I mentor developed a product they assumed all developers would die for, until reality hit and no sales deal was closed. The tech team then revisited the complete user journey and engaged with many of their customers to identify the gaps in both their offering and their marketing techniques. As soon as new iterations were rolled out, sales started coming in.

    As an entrepreneur, remember to avoid isolating yourself. Surround yourself with key mentors and advisors who truly add value to your work and your entrepreneurial journey. Select them thoughtfully and carefully, but be open to accept their feedback, even if harsh, and implement their recommendations. Coachability is a trait investors appreciate and look for.
    4. Narrate it like a boss

    Pitching is basically telling a story, one that keeps your audience hooked. The flow of your story is paramount, no flashbacks or long descriptions allowed. It should be as easy as 1-2-3, even if your product is heavily technical.

    The story needs to be about you (and your team) and what inspired you to come up with a solution to a problem a group of people face. It is about how you will reach those people, convince them to purchase your solution, and get them to drop the competitors, for a price that will, hopefully, make you rich! The end of the story is the purpose of telling it.

    The best pitches are the ones that make your audience the heroes of the story. If your product is an app, then tell it from a user’s perspective, and invite your audience to be the lead user in your story. If your product is B2B software, you can trace the story from a before-and-after angle.

    It takes time to master the above pitching techniques, and if you need help in this, make it a point to network more. Attend events and join a community of like-minded people, such as an incubator or a co-working space. Entrepreneurs are often some of the most supportive people you can come across; those who went through the same challenges and mastered the above techniques will surely help you. Good luck!

    Related:
    Developing A Good Pitch: A How-To Guide For Entrepreneurs Sharing Ideas


    7 triggers to yes
    http://the7triggers.com/
    Want more business? Stop selling. Influence buying instead.
    hack your customer's brain



    Obama's Persuasion Professors

    The Making of Barack Obama: The Politics of Persuasion
    https://www.amazon.com/Making-Barack-Obama-Politics-Persuasion/dp/1602354677
    THE MAKING OF BARACK OBAMA: THE POLITICS OF PERSUASION provides the first comprehensive treatment of why Obama's rhetorical strategies were so effective during the 2008 presidential campaign, during the first four years of his presidency, and once again during the 2012 presidential campaign. From his "Yes We Can" speech, to his "More Perfect Union Speech," to his Cairo "New Beginnings" speech, candidate-Obama-turned-President-Obama represents what a skilled rhetorician can accomplish within the public sphere.

    Contributors to the collection closely analyze several of Obama's most important speeches, attempting to explain why they were so rhetorically effective, while also examining the large discursive structures Obama was engaging: a worldwide financial crisis, political apathy, domestic racism, Islamophobia, the Middle East peace process, Zionism, and more.

    THE MAKING OF BARACK OBAMA will appeal to politically engaged, intelligent readers, scholars of rhetoric, and anyone interested in understanding how the strategic use of language in highly charged contexts-how the art of rhetoric-shapes our world, unites and divides people, and creates conditions that make social change possible.

     For those new to the formal study of rhetoric, editors Matthew Abraham and Erec Smith include a glossary of key terms and concepts. Contributors include Matthew Abraham, René Agustin De los Santos, David A. Frank, John Jasso, Michael Kleine, Richard Marback, Robert Rowland, Steven Salaita, Courtney Jue, Erec Smith, and Anthony Wachs.

     "From the inspiring slogans and speeches of his campaign to the eloquent successes and failures of his presidency, Barack Obama has been extravagantly praised and sarcastically criticized for the distinctive power of his rhetoric. The essays in this collection persuasively analyze that rhetoric in all its specific tactics and general strategies, in its idealist yearnings and its pragmatic compromises, in its ambitious strivings and its political obstacles."
    Dr. Craig Fox, a behavioral economist at the University of California, Los Angeles.
    “Most decisions — from which investment to choose to how to treat a disease to whether or not to go to war — must be made without knowing in advance how they will turn out. Successful decisions under uncertainty depend on our minimizing our ignorance, accepting inherent randomness and knowing the difference between the two.”

    Behavioral Science & Policy: Volume 1 (1st Edition, Kindle Edition)
    by Craig Fox (Editor), Sim B. Sitkin (Editor)
    https://www.amazon.com/Behavioral-Science-Policy-Craig-Fox-ebook/dp/B01F1G6HWW/ref=sr_1_5

    How Behavioral Science Propelled Obama's Win

    Roger Dooley
    https://www.forbes.com/sites/rogerdooley/2012/11/19/obama-behavioral/#290077f27496

    The team was organized by Craig Fox, a behavioral economist at UCLA. It included experts like Robert Cialdini, professor emeritus at Arizona State University and author of the social science classic, Influence: The Psychology of Persuasion, and the University of Chicago's Richard Thaler, coauthor of Nudge.

    Richard H. Thaler is the Charles R. Walgreen Distinguished Service Professor of Economics and Behavioral Science at the University of Chicago's Graduate School of Business, where he is director of the Center for Decision Research. He is also a Research Associate at the National Bureau of Economic Research, where he co-directs the behavioral economics project.

     Professor Thaler's research lies in the gap between psychology and economics. He is considered a pioneer in the fields of behavioral economics and finance. He is the author of numerous articles and the books
     Misbehaving: The Making of Behavioral Economics;
     Nudge: Improving Decisions about Health, Wealth and Happiness (with Cass Sunstein),
     The Winner's Curse,
     and Quasi Rational Economics
     and was the editor of the collections Advances in Behavioral Finance, Volumes 1 and 2. He also wrote a series of articles in the Journal of Economic Perspectives called "Anomalies." He is one of the rotating team of economists who write the Economic View column in the Sunday New York Times.