
OPINION

VOICE & DIVERSITY

A Washington Post study found that voice assistants, like Alexa, misunderstand people with non-native accents about 30% more often. The best-understood voices belong to highly educated, upper-middle-class white Americans, mainly from the West Coast. This makes sense: the AI was trained on data from a homogeneous group of people, the same group that designed the technology, who all sounded alike. Meanwhile, voice assistants not only have trouble understanding accents or speech impairments, they have also been designed in ways that reinforce gender stereotypes. The voice assistants in our homes carry female names and female voices, often with an obedient or flirtatious style, which, as a UNESCO report points out, perpetuates sexist attitudes towards women.

Why It Matters: AI, Voice Assistants, and the Workplace
While some may argue that all this talk about 'voice diversity' is an insignificant problem for AI development, compared with model training and finding the right data, or with the more media-friendly concern of whether AI will take over our jobs or someday kill us, it's worth looking into a recent Oracle study on how AI is changing human relations: 64% of people trust AI more than their manager. The increasing adoption of AI at work is having a significant impact on the way employees interact with their managers, and as a result, the traditional roles of HR teams and managers are shifting. This is happening all over the world: workers in India (89%) and China (88%) are the most trusting of robots over their managers, followed by Singapore (83%), Brazil (78%), Japan (76%), the UAE (74%), Australia/New Zealand (58%), the U.S. (57%), the UK (54%), and France (56%). So, if people are trusting AI more, then how well an AI can understand them matters. Unbiased, inclusive design and usability matter.

One solution could be to make voice assistants sound gender-neutral, and it's something that is entirely possible, as demonstrated by the makers of Q, the world's first gender-neutral voice assistant. Second, immensely popular voice assistants developed outside the US are being designed around local languages like Chinese or Russian (Baidu's DuerOS or Yandex's Alice) and, in some cases, local accents, as with the BBC's 'Beeb', which understands British accents. Voice assistants could become more inclusive by diversifying the accents in their training data sets, and by diversifying the origins and genders of the machine learning engineers who build them.
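To make the first of those ideas concrete, here is a minimal Python sketch of what re-balancing a speech training set by accent could look like. The corpus structure, file names, and accent labels are hypothetical illustrations, not any vendor's actual pipeline.

    import random
    from collections import defaultdict

    # Hypothetical corpus: (audio_path, transcript, accent_label) tuples.
    corpus = [
        ("clip_001.wav", "turn on the lights", "us_west_coast"),
        ("clip_002.wav", "play some music", "indian_english"),
        ("clip_003.wav", "what is the weather", "scottish_english"),
        ("clip_004.wav", "set a timer", "us_west_coast"),
    ]

    def balance_by_accent(samples, seed=42):
        """Oversample under-represented accents so every accent group
        contributes equally many utterances to the training set."""
        by_accent = defaultdict(list)
        for sample in samples:
            by_accent[sample[2]].append(sample)

        target = max(len(group) for group in by_accent.values())
        rng = random.Random(seed)

        balanced = []
        for group in by_accent.values():
            balanced.extend(group)
            # Duplicate random utterances until the group reaches the target size.
            balanced.extend(rng.choices(group, k=target - len(group)))
        rng.shuffle(balanced)
        return balanced

    training_set = balance_by_accent(corpus)
    print(len(training_set))  # 2 utterances per accent group -> 6 in this toy example

Oversampling is only a stopgap, of course; collecting more speech from under-represented speakers, as the project below does, addresses the imbalance at its source.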

Google's Project Euphonia, which aims to make voice technology accessible to people with disabilities, is a step in the right direction. The project collaborates with nonprofits and volunteers to collect more voice data from people with impaired speech. For people with disabilities, this kind of technology can make everyday life easier and more inclusive.

It's time we start thinking about inclusive design, user experience, non-English languages, and speech disorders, and find the right people to help design better conversations between humans and machines.

IN THE NEWS

VoiceFirstHealth.com

Emotion Recognition by Voice [Podcast]

Teri Fisher, MD, interviewed Rana Gujral, CEO of Behavioral Signals, on emotion recognition and healthcare. They discussed the AI technology and research behind emotion recognition, how it can improve human-to-human and human-to-machine interactions, how intent can be predicted, and what sort of KPIs businesses can target with this technology. They also discussed ethics and how the technology can be misused by bad actors.  Listen here >

HealthcareITNews.com

Doctors can now say 'Hey Epic'...

Clinicians are getting a new voice assistant from Epic to help them retrieve information about their patients. Epic is a private company and one of the largest electronic health record (EHR) providers in the US. According to the company, hospitals that use its software hold the medical records of 54% of patients in the United States, while Becker's Hospital Review says 20 of the best US hospitals, including the Mayo Clinic, Johns Hopkins, and the UCLA Medical Center, use Epic's EHR system. While one can expect all the typical voice functions, like retrieving or storing data, the article points out a few interesting use cases within the hospital, like the use of voice in the operating room, where surgeons cannot type after having scrubbed in, or patients messaging their nurses to request services.  Read more >

Callcentrehelper.com

How Are New Voice Technologies Impacting the Contact Centre?

Mike Palmer of Spearline, a company that monitors call quality globally, discusses voice and a wide range of technologies that are changing how a contact centre works. From AI-powered assistants, voice analytics, and voice search to biometric authentication, he looks at their potential for business, focuses on enterprise network issues, like latency, echo, and background noise, that can make it difficult for voice technologies to work well, and explains why network teams need to take a proactive approach to voice services management.  Read more >

FROM OUR BLOG

What Emotion AI Has in Store for 2020


As machine learning algorithms continue to improve, systems will develop increasingly sophisticated abilities to evaluate and measure human emotion, and AI will become better at showing empathy. Biases in programming will be confronted and regulations put in place. AI will accelerate other sciences by leveraging its speed and the huge amounts of data being collected, leading to breakthroughs from medicine to new materials. Meanwhile, according to a Gartner report, AI and other technologies will lead to a tripling of disability employment by 2023. Because AI tools make it possible to connect at a higher level with minimal user input, it will become easier to hire people who previously struggled with certain job tasks, and to improve the productivity and retention of those employees.

Read more >

EVENTS

Conversational Interaction Conference

Save the date for Conversational Interaction, on February 10-11, 2020. The conference focuses on intelligent insights into NLU and speech recognition technology available for commercial use, tools and services that help companies use conversational technology, case studies of deployments, and best practices for the successful use of NLU technology. Technology executives and experts, marketing execs, customer service experts, developers, designers, creative talent, and advertising executives all have good reasons to attend: to see the core technologies, join the conversational interface trend, and network.

We will be there with Rana Gujral, CEO of Behavioral Signals, in the TRACK 2 – Advances in Voice Technology session, speaking on 'Predicting the Future Through Our Voice', Monday, 3:35 – 4:15.



Date: February 10-11, 2020
Venue: DoubleTree Hotel, San Jose, CA, US

www.conversationalinteraction.com

Voice-Connected Business 2020

An event about voice technology and how it is changing the way brands interact with customers. Main topics include Conversational AI, Voice User Interfaces, Marketing & Digital Strategy, Choice of Platform, Third-Party Apps & Services, Audio Branding, and Speech Recognition & NLP.

Date: February 25-26, 2020
Venue: Crowne Plaza, Portland, Oregon, US

manetech.com

Top AI Conferences 2020

Check out a comprehensive list of the top AI conferences of 2020 from around the world, some of which we will be attending ourselves, either as speakers or as exhibitors.
See all future planned events >
Missed a newsletter? Get caught up here:
Website
Medium
LinkedIn
Twitter
Facebook
Copyright © 2020 Behavioral Signals, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

Email Marketing Powered by Mailchimp