
OPINION

 

VOICE FIRST Revolution...almost

There's a lot of enthusiasm going around these days, as the big players in voice, Amazon and Google, have made big announcements in technologies and services. Amazon is locking in customers with @AmazonPay by facilitating utility bill payments through Alexa. According to MarketWatch and Amazon Pay VP Patrick Gauthier, the company saw an opportunity in bill payments because about 70% of consumers don't enroll in automatic payments.

You: “Alexa, pay the electricity company $125.34”
Alexa: “The electricity company just received $125.34”

That easy.

A quick search online, and a look at Amazon Pay's Twitter account, shows it has been adding hundreds of new merchants, fundamentally changing billing, payments, and banking. We doubt anything will be faster, easier, or more frictionless. So why would anyone leave Alexa for any other voice assistant, or any other service for that matter, when they can pay each individual bill with a single command? Food for thought for those trying to catch up...

Google introduced the BERT update (a pre-trained NLP model) to its Search engine last week, helping it understand "longer, more conversational queries". The addition of this new algorithm, designed to better understand what's important in natural-language queries, is a significant change: Google says it affects 1 in 10 queries. For example, the query "2019 brazil traveler to usa need a visa" will now surface the actual website to apply, rather than generic information on the topic. While most users online are focused on what this will mean for search engine optimization (SEO), rankings, and featured snippets, we see it as a step closer to using voice search successfully for targeted information, and to communicating with our computers and devices as naturally as we do with humans.

Meanwhile, we're seeing many industries get their own native voice assistants, built with the right domain knowledge and tools to compete in voice search. Real estate is a domain where we had seen nothing until now, but the Ask DOSS voice assistant is here to change that. The startup behind it, based in Houston, Texas, is working on an intelligent personal assistant said to be similar to Siri but exclusive to the real estate market. The company is developing the app for the residential real estate market, but said on its website that the next phase of development will be answering questions and displaying listings for commercial properties. Being able to search real estate listings and get useful information via natural-language commands will be a real test of these new technologies.

We're looking forward to more and more #voicefirst announcements. This will revolutionize how we communicate with our devices, and reset our expectations. The next step is using this voice data for deeper insights.

IN THE NEWS

Hackernoon.com

Voice user interfaces can save us...from ourselves

Ches Sweeting, a product designer and #voicefirst advocate, breaks down the steps we take online to get news updates, consume content, and order products, comparing the duration and number of actions with how it would be if we used only voice. He also dwells on what is feasible today with the available voice UXs and where they still need work. He sees the benefit of "...voice-first interfaces: to return us to a world of clarity, and escape the clutter of promoted ads & irrelevance all vying for our attention", but predicts this will probably not stay that way for long. Read more >

DisruptorDaily.com

Voice AI and Healthcare

DisruptorDaily looks at the state of AI in Healthcare and asks 28 experts to share their insights. Rana Gujral, CEO at Behavioral Signals, says "... We can see applications of AI through virtual assistants, both for patient care and as medical staff assistants as well as smart wearables and social robots that are helping patients monitor their health and their emotions while collecting a lot of data" adding "...There are many advantages to replacing tedious repetitive tasks that medical staff and carers have to do every day; they can focus on more important aspects of patient healthcare, like finding new cures and treatments or just finding time to empathize with the patient. Each tedious task that replaces or assists in a medical procedure will advance healthcare by a small step. It will open up windows to health innovation".  Read more >

Medium.com

Deep Hierarchical Fusion & Sentiment Analysis

An outline of one of three research papers presented by the Behavioral Signals ML Team at Interspeech 2019. This one is written by Efthymis Georgiou, Machine Learning Engineer, who explains in layman's terms Deep Hierarchical Fusion with Application in Sentiment Analysis. His first aim is to introduce the reader to the multimodal machine learning research area and underline the need to exploit different modalities in order to capture all the semantic and affective information of a message; his second is to give insight into the company's work, specifically describing how the proposed algorithm combines (fuses) different modalities. Read more >
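To make the idea of "fusing" modalities concrete, here is a toy NumPy sketch of simple late fusion, where each modality gets its own encoder and the encoded vectors are concatenated before a sentiment score is computed. This is a hypothetical illustration with made-up dimensions and random weights, far simpler than the hierarchical approach the paper describes, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """A stand-in unimodal encoder: one linear layer plus tanh."""
    return np.tanh(W @ x)

# Toy feature vectors for one utterance (dimensions are arbitrary).
text_feats  = rng.normal(size=8)   # e.g., a word-embedding summary
audio_feats = rng.normal(size=6)   # e.g., prosodic/acoustic features

# Separate encoder weights per modality, plus a fusion classifier.
W_text  = rng.normal(size=(4, 8))
W_audio = rng.normal(size=(4, 6))
w_fuse  = rng.normal(size=8)       # acts on the concatenated 4+4 vector

# "Fusion" here is concatenating the two encoded representations.
fused = np.concatenate([encode(text_feats, W_text),
                        encode(audio_feats, W_audio)])

# Sentiment score in (0, 1) via a sigmoid over the fused vector.
score = 1.0 / (1.0 + np.exp(-w_fuse @ fused))
```

In a hierarchical scheme, fusion like this would happen at several levels (e.g., per word and per sentence) rather than once at the end; the paper's contribution is precisely how those levels are combined.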

FROM OUR BLOG

Conversational Analytics: What Is It?

Conversations happen everywhere. The phone calls we make to discuss bills or obtain support. The online meeting conference software we use to coordinate with clients. The apps like WhatsApp and Snapchat that we use with friends and family. AI technology is able to capture and analyze the data from those conversations.

When implemented safely and securely, conversational analytics allows customers to leverage the hugely valuable data already at their fingertips in millions of annual calls and conversations to better understand employees, customers, and potential target audiences. It can lead to better customer service, higher sales numbers, and key insights that drive the future of the company.
Read more >

EVENTS

WIRED25

This is a repeat event, spread across several venues in SF. It's divided into a summit section with major speakers, including the likes of Ben Horowitz, cofounder of the Andreessen Horowitz venture capital firm, and a cultural and family section with speakers like Adam Savage, Editor-in-Chief of Tested.com and EP & Host of Savage Builds, but also readings, screenings, tastings, performances, robots, VR, and other activities for both parents and kids.

Date: November 8-9, 2019
Venue: San Francisco, US

xp.wired.com

AI & BIG DATA 2019

The AI & Big Data Expo promises to showcase the most cutting-edge technologies from more than 350 exhibitors and provide insight from over 500 speakers sharing their unparalleled industry knowledge and real-life experiences. With over 12K visitors, and AI front and center, it's an event worth visiting. See you there.

Date: November 13-14, 2019
Venue: Santa Clara, Silicon Valley, US

www.ai-expo.net/northamerica/
See all future planned events >
Website
Medium
LinkedIn
Twitter
Facebook
Copyright © 2019 Behavioral Signals, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.
