Your digital assistant of choice, be it Alexa, Siri, or Google Now, should only carry out the voice commands you issue. But it turns out these assistants are not as loyal as we thought, and all a hacker has to do is whisper to them.
As FastCompany reports, a research team at Zhejiang University in China figured out how to issue commands, inaudible to humans, to the digital assistants provided by Apple, Google, Amazon, Microsoft, Samsung, and Huawei. That includes Alexa, Cortana, Google Now, Huawei HiVoice, Samsung S Voice, and Siri. They named the technique DolphinAttack, and it's possible due to a security flaw in the way these assistants process audio.
Inaudible ultrasonic frequencies are legitimately used by gadget makers as a way of pairing devices. For example, Amazon Dash Buttons use them to pair with a phone. Analyzing voice input is also easier when these high ultrasonic frequencies are accepted by the microphone.
DolphinAttack takes advantage of frequencies of 20kHz and above, which humans can't hear. A voice command is recorded and then translated into an ultrasonic version. Microphones still pick up the ultrasound just as they would a normal voice command, and the assistant treats it as such. Commands to make a call, open a web address, or even unlock a door will all work as silent commands. Modifying a smartphone to issue them costs around $3.
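The core of the trick is shifting an audible command up to an ultrasonic carrier. As a rough illustration (not the researchers' exact method, which also relies on microphone non-linearity to demodulate the signal), here is a minimal amplitude-modulation sketch in Python; the sample rate, carrier frequency, and the 400 Hz test tone standing in for a voice recording are all assumptions for the demo:

```python
import numpy as np

FS = 192_000          # assumed sample rate, high enough to represent a 25 kHz carrier
CARRIER_HZ = 25_000   # ultrasonic carrier, above the ~20 kHz limit of human hearing

def modulate_ultrasonic(voice: np.ndarray, fs: int = FS,
                        carrier_hz: int = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    The output has no energy at audible frequencies, yet an imperfect
    microphone can demodulate it back into the original command.
    """
    t = np.arange(len(voice)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Classic AM: (1 + message) * carrier, then normalise to [-1, 1]
    modulated = (1.0 + voice) * carrier
    return modulated / np.max(np.abs(modulated))

# Demo: a 400 Hz tone standing in for a recorded voice command
t = np.arange(int(0.1 * FS)) / FS
voice = 0.5 * np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(voice)

# Check that the energy now sits around the carrier, not in the audible band
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / FS)
peak_hz = freqs[np.argmax(spectrum)]
print(f"spectral peak at {peak_hz:.0f} Hz")
```

The spectral peak lands near 25 kHz, well above human hearing, which is why a bystander notices nothing while the assistant's microphone still receives the command.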
DolphinAttack is quite limited, though, because of range. You need to be very close to a smartphone or smartwatch for it to work, and to command an Echo, for example, a hacker would first need to break into the home where it is located. Even so, this is a valid attack, and one you can't easily protect against unless you're willing to turn off your always-on assistant.
The obvious solution is for assistants to stop listening at such high frequencies, but then the quality of voice command detection could fall. A new way of pairing devices would also need to be found, or at least extra steps added to initiate the inaudible link. Both of these trade convenience for better security, so it seems unlikely they will be considered.
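Ignoring the high frequencies would amount to low-pass filtering the microphone input before it reaches the recognizer. A minimal sketch of that idea, assuming raw samples at 192 kHz and an 8 kHz voice-band cutoff (both illustrative values, not anything a vendor has announced), using a windowed-sinc FIR filter:

```python
import numpy as np

FS = 192_000        # assumed device sample rate
CUTOFF_HZ = 8_000   # assumed upper bound of the voice band; higher content is dropped
NUM_TAPS = 255      # filter length: longer means a sharper cutoff

def lowpass_fir(cutoff_hz: float, fs: int, num_taps: int) -> np.ndarray:
    """Windowed-sinc low-pass filter coefficients (Hamming window)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff_hz / fs * np.sinc(2 * cutoff_hz / fs * n)
    return h * np.hamming(num_taps)

def reject_ultrasonic(samples: np.ndarray) -> np.ndarray:
    """Remove ultrasonic content before the audio reaches the recognizer."""
    return np.convolve(samples, lowpass_fir(CUTOFF_HZ, FS, NUM_TAPS), mode="same")

# Demo: an audible 400 Hz tone mixed with a 25 kHz "DolphinAttack" carrier
t = np.arange(int(0.1 * FS)) / FS
audible = np.sin(2 * np.pi * 400 * t)
attack = np.sin(2 * np.pi * 25_000 * t)
cleaned = reject_ultrasonic(audible + attack)

# Compare remaining ultrasonic energy against the preserved voice tone
spectrum = np.abs(np.fft.rfft(cleaned))
freqs = np.fft.rfftfreq(len(cleaned), 1 / FS)
ultrasonic_power = spectrum[freqs > 20_000].max()
voice_power = spectrum[np.argmin(np.abs(freqs - 400))]
print(f"attenuation: {20 * np.log10(ultrasonic_power / voice_power):.1f} dB")
```

The 25 kHz carrier is attenuated by tens of decibels while the 400 Hz tone passes through untouched, which shows why this defense is plausible, and also why it could hurt vendors who rely on ultrasonic frequencies for pairing.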