Prepare to Protect Your Customers’ Voices
Digital assistants are always listening, creating a significant security risk as the threat of voice-based cybercrime grows.
The threat of voice-based cybercrime is growing along with the explosion of voice-directed digital assistants, billions of which are already embedded in our mobile phones, computers, cars, and homes. These assistants are always listening, creating a significant security risk, especially as millions of people work from home during the pandemic. It’s estimated that in the next two years, there will be more virtual digital assistants than people in the world. Nearly two-thirds of businesses plan to use voice assistants for their customer interactions, according to a 2018 survey conducted by Pindrop Security.
Already, the number of cyberattacks on voice-recognition systems is rising as people converse with bots to play music, pay their bills, book reservations at restaurants, and perform other everyday tasks. It now takes less than four seconds of original audio to create a deepfake of someone’s voice. Recently, hackers used machine learning software to impersonate a CEO and order subordinates to wire hundreds of thousands of dollars to a fraudulent account.
Much of today’s voice fraud, known as “vishing,” involves controlling voice assistants through methods such as embedding undetectable audio commands, replaying voice recordings, and modifying fraudsters’ voices to match the pitch of their victims’ voices. As hackers become better at impersonating people, they will be able to deploy voice deepfakes that are far harder to detect.
The damage could be catastrophic unless companies take appropriate cybersecurity precautions. Financial services companies send millions of customers new credit cards after criminals steal information, but they can’t send them new voices. Unless voice activation is made secure, the rapid growth of machines that recognize voice commands could grind to a halt, damaging customers’ trust in the privacy safeguards of the many companies that use voice systems.
Pindrop’s survey found that 80% of businesses worldwide considered the security of their voice-directed services to be a concern. So how can managers make their customers’ voices safe?
As a first principle, companies should roll out voice-directed services only when they are confident of their ability to mitigate the accompanying cybersecurity risks.