Saturday, June 4, 2016

Voice First Dynamic Architecture and Context Retention


1. Viv
 http://viv.ai/
reportedly the subject of acquisition offers from Facebook and Google
Voice Personal Assistant, Voice Commerce, Voice First, Voice Conversation

65 Labs patent, 2015:
Marcello Bastea-Forte, et al.
"Dynamically evolving cognitive architecture system based on third-party developers," US 20140380263 A1
http://www.google.com/patents/US20140380263
Viv is often described as the second generation of Siri, built by members of the original Siri team.
VIV.ai website: "Viv is an artificial intelligence platform that enables developers to distribute their products through an intelligent, conversational interface. It’s the simplest way for the world to interact with devices, services and things everywhere. Viv is taught by the world, knows more than it is taught, and learns every day."
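The patent title above points at the core idea: third-party developers extend the assistant by registering new capabilities, and the platform composes them dynamically at run time to satisfy a user's request. The sketch below is a hypothetical illustration of that pattern; none of the names or interfaces come from Viv.

```python
# Hypothetical sketch of a "dynamically evolving" assistant: developers
# register typed capabilities, and the platform chains them at run time.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Capability:
    name: str
    provides: str               # concept this capability produces, e.g. "weather"
    requires: List[str]         # concepts it needs as input, e.g. ["city"]
    run: Callable[..., object]

REGISTRY: Dict[str, Capability] = {}

def register(cap: Capability) -> None:
    """Third-party developers add capabilities without changing the core platform."""
    REGISTRY[cap.provides] = cap

def resolve(goal: str, known: Dict[str, object]) -> object:
    """Recursively satisfy the goal by chaining whatever capabilities are registered."""
    if goal in known:
        return known[goal]
    cap = REGISTRY[goal]
    inputs = {r: resolve(r, known) for r in cap.requires}
    return cap.run(**inputs)

# Example: a weather capability composed with a location capability.
register(Capability("locator", provides="city", requires=[], run=lambda: "Cambridge"))
register(Capability("forecast", provides="weather", requires=["city"],
                    run=lambda city: f"Light rain in {city}"))

print(resolve("weather", known={}))   # -> Light rain in Cambridge
```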


2. VocalIQ
acquired by Apple in 2015

VocalIQ, which was spun out of the University of Cambridge’s Dialogue Systems Group, uses deep learning to improve language recognition, with a focus on trying to understand the context in which commands are given.

The company is led by chief executive Blaise Thomson, a South Africa-born mathematician, and chairman Steve Young, a professor of Information Engineering at Cambridge. It raised £750,000 in seed funding in 2014, led by Amadeus Capital Partners, the venture capital firm.

VocalIQ was formed in March 2011 to exploit technology developed by the Spoken Dialogue Systems Group at University of Cambridge, UK. Still based in Cambridge, the company builds a platform for voice interfaces, making it easy for everybody to voice enable their devices and apps. Example application areas include smartphones, robots, cars, call-centres, and games.

company website - http://vocaliq.com

investor - http://parkwalkadvisors.com/newsletter/newsletter-2014-06-20/

The company has a B2B focus, helping other companies and developers build spoken language interfaces.

More than a billion smart devices were shipped in 2013, with input interfaces that are difficult to use and voice interaction often available but seldom used. VocalIQ’s proprietary technology dramatically improves the performance of voice-based systems and simplifies the authoring process for new applications.

The company provides a layer of middleware that sits between the speech recogniser and the application. This middleware implements machine learning algorithms which interpret and track the user’s intentions, and automatically determine the most appropriate response back to the user.
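As a rough sketch of where such middleware sits (the class and method names here are hypothetical, not VocalIQ's API), the layer consumes n-best recogniser hypotheses with confidences, keeps per-session state about the user's intent, and returns the next action for the application:

```python
from typing import Dict, List, Tuple

class DialogueMiddleware:
    """Hypothetical layer between a speech recogniser and an application."""

    def __init__(self) -> None:
        self.belief: Dict[str, float] = {}         # intent -> accumulated evidence

    def on_recogniser_output(self, nbest: List[Tuple[str, float]]) -> str:
        """nbest: (transcript, confidence) pairs coming from the ASR engine."""
        for text, confidence in nbest:
            intent = self._interpret(text)          # semantic decoding (see below)
            self.belief[intent] = self.belief.get(intent, 0.0) + confidence
        return self._select_action()                # dialogue management (see below)

    def _interpret(self, text: str) -> str:
        # Stand-in for a learned semantic decoder.
        return "set_temperature" if "warmer" in text else "unknown"

    def _select_action(self) -> str:
        intent = max(self.belief, key=self.belief.get)
        if self.belief[intent] > 0.7:
            return f"execute:{intent}"              # confident: tell the app to act
        return "ask_clarify"                        # uncertain: ask the user again

middleware = DialogueMiddleware()
print(middleware.on_recogniser_output([("make it warmer", 0.6),
                                        ("may get warmer", 0.3)]))
# -> execute:set_temperature
```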

More detail can be found on the company's Crunchbase profile: https://www.crunchbase.com/organization/vocaliq#/entity

Based on award-winning research from the University of Cambridge, VocalIQ uses state-of-the-art techniques for all its components. These technologies have been tested in various settings, showing significant increases in performance compared to traditional approaches typically used in industry. Specific benefits include increased success rates, shorter dialogs, and reduced development costs.
Semantic decoding: Before deciding how the system should respond, it is important to work out what the user meant by what they said. There are always many ways to express the same thing in a conversation. Deciphering this meaning is the task of the semantic decoder. VocalIQ has developed various machine learning approaches to learning the meaning of a sequence of words automatically, and it provides this technology as part of its products.
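To make the input/output contract of a semantic decoder concrete, here is a toy illustration that maps different surface forms onto the same dialogue act; a simple bag-of-words overlap score stands in for VocalIQ's learned models:

```python
from collections import Counter

# Illustrative training examples: several phrasings per dialogue act.
TRAINING = {
    "inform(food=italian)": ["i want italian food",
                             "an italian restaurant please",
                             "somewhere that serves italian"],
    "request(phone)":       ["what is the phone number",
                             "can i get their number"],
}

def bow(text: str) -> Counter:
    return Counter(text.lower().split())

def decode(utterance: str) -> str:
    """Return the dialogue act whose examples overlap most with the input words."""
    words = bow(utterance)
    def score(examples):
        return max(sum((bow(e) & words).values()) for e in examples)
    return max(TRAINING, key=lambda act: score(TRAINING[act]))

print(decode("could i have the phone number"))   # -> request(phone)
print(decode("italian food would be great"))     # -> inform(food=italian)
```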

Dialog management: Deciding how to respond to each user input is the task of the dialog manager. By integrating everything that might have been said in the dialog, including possible errors, we have been able to show significant improvements in the decision making performance.
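A hedged sketch of that idea, assuming a simple additive update over recogniser confidences rather than any particular VocalIQ model: the belief over slot values is accumulated across turns (so low-confidence or erroneous hypotheses do not dominate), and the policy commits, confirms, or re-asks depending on how certain the belief is.

```python
from typing import Dict, List, Tuple

def update_belief(belief: Dict[str, float],
                  hypotheses: List[Tuple[str, float]]) -> Dict[str, float]:
    """Blend recogniser hypotheses (value, confidence) into the prior belief."""
    for value, confidence in hypotheses:
        belief[value] = belief.get(value, 0.0) + confidence
    total = sum(belief.values())
    return {value: score / total for value, score in belief.items()}

def choose_action(belief: Dict[str, float]) -> str:
    value, prob = max(belief.items(), key=lambda kv: kv[1])
    if prob > 0.7:
        return f"commit({value})"        # confident enough to act on it
    if prob > 0.4:
        return f"confirm({value})"       # ask the user to confirm first
    return "request(repeat)"             # too uncertain, ask again

belief: Dict[str, float] = {}
belief = update_belief(belief, [("italian", 0.5), ("indian", 0.4)])
print(choose_action(belief))             # -> confirm(italian)
belief = update_belief(belief, [("italian", 0.7)])
print(choose_action(belief))             # -> commit(italian)
```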

Language generation: System prompts and responses to questions are designed by the application developer using simple template rules. These are then conveyed to the user via a text-to-speech engine.
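A minimal example of the template-based generation described above, with a placeholder speak() function standing in for whatever text-to-speech engine the platform uses:

```python
# Developer-authored template rules keyed by dialogue act (illustrative only).
TEMPLATES = {
    "confirm": "Did you say you want {food} food?",
    "inform":  "{name} is a nice {food} restaurant in the {area}.",
    "request": "What kind of food would you like?",
}

def generate(act: str, **slots: str) -> str:
    """Fill the developer-authored template for a dialogue act."""
    return TEMPLATES[act].format(**slots)

def speak(text: str) -> None:
    # Placeholder for the text-to-speech call; here we just print the prompt.
    print(f"[TTS] {text}")

speak(generate("inform", name="Clowns", food="Italian", area="centre"))
# [TTS] Clowns is a nice Italian restaurant in the centre.
```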

3. Amazon - reportedly more than 1,000 people working on the next generation of Alexa

Brian Roemmele



