
OPINION

 

DIAGNOSING DISEASES VIA SPEECH

You walk into your office, and your colleague at the next desk greets you... but something is not quite right. You ask what's wrong, and they tell you they're suffering from a terrible migraine brought on by a horrible family fight. Aha, you proclaim! You knew it from the moment you walked in. But did you, or did their greeting signal a problem? Can we use voice to detect illness? We're not talking about capturing words like 'I feel sick' or 'I have a fever', but the actual vocal qualities of our voice. Producing sounds and forming words takes a great deal of energy and coordination. Our lungs and vocal cords, our overall energy levels, our mood, and our mental and emotional state all shape how our voice sounds, and all of these factors are affected by illness. We speak differently depending on our mood, so imagine what happens when we experience mental health issues.

Health professionals have used vocal analysis in diagnosis for years. The Diagnostic and Statistical Manual (DSM) of mental disorders has relied on speech and language to diagnose mental illness for at least 50 years (e.g., the second DSM, published in 1968, lists "talkativeness" and "accelerated speech" as two common symptoms of what was then called manic depression, now termed bipolar disorder). But what about diagnosing something like heart disease, migraines, or Parkinson's disease using only the sound of someone's voice? Amazon is taking personalized medicine to a whole different level: in 2017 it filed for a patent that would allow Alexa to determine whether you're sick and... sell you targeted products. The Canary Speech app has successfully completed FDA clinical trials and is ready to detect Alzheimer's, PTSD, and depression for suicide prevention, while researchers from the Polytechnic of Porto, School of Engineering, in Portugal have submitted a paper proposing a methodology for early detection of Parkinson's that combines signal and speech processing techniques with machine learning algorithms. Meanwhile, the pharmaceutical giant Boehringer Ingelheim is working on an app that uses speech recognition to detect warning signals of schizophrenia or Alzheimer's dementia.
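The Porto paper's exact pipeline isn't described here, but speech-based screening systems of this kind generally share the same shape: extract acoustic features from the waveform frame by frame, summarize them into a fixed-length vector, and hand that vector to a machine learning classifier. Below is a minimal, hypothetical Python sketch of the feature-extraction step using two classic speech features (short-time energy and zero-crossing rate); the function names, frame size, and toy signal are illustrative and not taken from any of the cited research.

```python
import math

def frame_features(samples, frame_len=256):
    """Split a mono waveform into non-overlapping frames and compute
    two simple acoustic features per frame: short-time energy and
    zero-crossing rate, both long-standing staples of speech analysis."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

def summarize(feats):
    """Collapse the framewise features into one fixed-length vector
    (per-feature means) suitable as input to any standard classifier."""
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(2)]

# Toy demo on a synthetic "voice": a 200 Hz sine sampled at 8 kHz.
sr = 8000
signal = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr)]
vec = summarize(frame_features(signal))
```

In a real system the summary vector would be far richer (jitter, shimmer, spectral and prosodic statistics) and would be fed to a trained classifier; this sketch only shows where such features come from.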

The technology still leaves a lot to be desired, but it is looking promising. The way we diagnose health issues is bound to change, and voice is going to play a significant role. At Behavioral Signals, through Joana Correia's research work on speech analysis for clinical applications, the machine learning team has been building deeper models for detecting depression from speech. Depression and anxiety have a significant economic impact: according to the World Health Organization, their estimated cost to the global economy is US$1 trillion per year in lost productivity.

IN THE NEWS

Hackernoon.com

Voice Is the Safest and Most Accurate
for Emotion AI Analysis

The conversation is on: face recognition vs. voice. Which is more accurate, and which violates people's privacy less? A lot of ink will be spilled over this topic in the future as the public raises concerns about surveillance and personal privacy. Facial recognition has been shown to be far from foolproof, and people are concerned about misidentification and its consequences. So how does voice recognition play into the mix? Is it less invasive? Can identity be obscured? How anonymized is the data? Read more >

AIthority.com

4 Things to Keep in Mind When Investing in Tech Startups

Rana Gujral, CEO at Behavioral Signals, talks about AI startups and why investing in them is a risky endeavor. He points out four factors that an investor, and a founder, should take into account: ethos & moral compass, longevity & risk, relevancy, and team. On the latter he notes: 'Investors prefer to see that the early-stage startup team (usually comprising the founders and perhaps a salesperson or engineer) works well with one another and has a similar motivation to solve problems. They should be able to answer the question "Why did you start this business together?" with clarity, passion, and detail'. Read more >

VoiceTechPodcast.com

The Emotion Machine [Podcast]

- Why is it important that machines can read, interpret, replicate and experience emotions? It's an essential element of intelligence, and users will demand and require increasingly greater intelligence from the machines they interact with.
- How does emotion analysis improve human-computer conversations? It helps to establish conversational context and intent in voice interaction.

Discover the answers to these questions and many others on this week's Voice Tech Podcast, with Carl Robinson and Rana Gujral.
Read more >

FROM OUR BLOG

Collecting Data Alone Is Not Enough

More than ever before, we are able to capture and analyze data about the customer journey. Chatbots, AI-agent assistants and other AI-powered tools provide actionable insights in real-time for the customer service and product development teams. While conversational AI is growing more sophisticated and can provide key insights, there is still very much a human element to understanding how something makes a customer feel – whether positive or negative – and acting on that knowledge subjectively. There isn’t always a clear yes or no answer hidden in the data. Artificial intelligence is attempting to fill this gap, however. 

Read more >

EVENTS

Speech, Music and Mind 2019 (SMM19)

SMM19 is a satellite workshop of Interspeech 2019, focused on detecting and influencing mental states, with an emphasis on multi-modal approaches and diverse applications across culture, languages and music. The program will have a forenoon session on the theme 'Detecting mental states' and an afternoon session on the theme 'Influencing mental states'.

Our Co-founder & Chief Scientist Prof. Shrikanth (Shri) S. Narayanan will be speaking during the forenoon session about ‘Understanding affective expressions and experiences through behavioral machine intelligence’ as a Keynote Speaker.

Date: Sept. 14, 2019
City: Vienna, Austria

SVIEF 2019

The 9th SVIEF will be held on Sept. 7-8, 2019, at the Santa Clara Convention Center. It is designed to be an intense, informative and interactive event focused on the theme Beyond the Next Revolution, featuring 100+ high-profile speakers and 150+ tech exhibitors. Our CEO, Rana Gujral, will be there as a keynote speaker, presenting on behavioral prediction.

Date: Sept. 7-8, 2019
City: Santa Clara, CA, US


Find All Events here > 
Copyright © 2019 Behavioral Signals, All rights reserved.

