What’s the real value—and danger—of smart assistants?

You’ve heard them called virtual assistants, digital personal assistants, voice assistants, or smart assistants. Powered by artificial intelligence, technologies such as Siri, Alexa, Google Assistant, and Cortana have become ubiquitous in our culture. But what exactly do they do? And how seriously should we take them? While all the tech giants want us to use their smart assistants all the time, what do they offer us in return? And how do we keep our information and conversations safe?

Each of these smart assistants is limited by the platform and devices it runs on. I shouldn’t expect Amazon’s Echo to give me step-by-step directions to the nearest pizza place…or should I?

Here’s what you need to know about smart assistants and the real value (and danger) they provide.

Getting started

If you’re looking to purchase a smart assistant, it’s best to take a beat and think about what it is you really need from it. Do you want to be able to control appliances or other devices in your home at the sound of your voice? Do you want to be able to look up valuable information without having to reach for your phone or boot up your computer? Are you looking for some virtual company for yourself or your kids?

While all these virtual assistants have a wealth of information at their disposal, it takes some getting used to before you can make the most of what they offer. In addition, they each have their specialties, so take a good hard look at which technology is the best fit for your needs. And while you are shopping around, do not ignore the security implications: both what’s possible today and what could come to pass.

Understanding voice commands

While smart assistants have come a long way since the early days of Siri, they all share a common flaw: a less-than-accurate reading of voice commands. When a smart assistant cannot make out a voice command with confidence, its AI faces a dilemma: act on its best guess, or risk annoying the owner by asking them to repeat the question or instructions yet again. That trade-off brings with it the risk of the assistant misunderstanding the instructions and taking unwanted actions as a result.
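
To make that dilemma concrete, here is a minimal, hypothetical Python sketch of the kind of confidence-threshold logic an assistant might use. The function name, the thresholds, and the example phrases are assumptions for illustration only, not any vendor’s actual implementation.

    # Hypothetical sketch: trading off recognition confidence against
    # pestering the owner. Thresholds and names are illustrative only.

    EXECUTE_THRESHOLD = 0.85   # confident enough to act without asking
    CONFIRM_THRESHOLD = 0.50   # confident enough to ask "Did you mean...?"

    def handle_utterance(transcription: str, confidence: float) -> str:
        """Decide what to do with a recognized voice command."""
        if confidence >= EXECUTE_THRESHOLD:
            return f"Executing: {transcription}"
        if confidence >= CONFIRM_THRESHOLD:
            return f"Did you mean: '{transcription}'? (yes/no)"
        # Too uncertain: asking again is safer than guessing, but ask too
        # often and the owner stops using the device.
        return "Sorry, I didn't catch that. Could you repeat it?"

    if __name__ == "__main__":
        print(handle_utterance("turn off the kitchen lights", 0.92))
        print(handle_utterance("send that recording to mom", 0.61))
        print(handle_utterance("order a pizza", 0.31))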

We have covered this subject before, focusing on some of the vulnerabilities that researchers were able to uncover in smart assistants’ voice commands. The possible consequences can range from slightly annoying mistakes to ridiculous behavior, such as sending a recorded conversation to all your contacts.

Improvements in voice command technology are being made each year, as more precise algorithms are created to better adapt to complex vocal signals. Machine learning is being credited with significantly improving voice recognition, but there’s a long way to go before smart assistants can hear and process requests with the accuracy of human beings.

Kids and smart assistants

Do you let your kids or grandchildren play with your phone? Notwithstanding the fact that at a certain age our grandchildren probably understand the phone better than we do, we must warn against unsupervised use of smart devices by young children. That warning absolutely extends to smart assistants, which any child in the house can access on their own with a simple voice command.

Parental controls are available for most of these devices, and some smart assistants have even been developed specifically for kids, allowing parents to easily review search history. Unlike phones and other screen devices, smart assistants are screen-less and encourage more human-like interactions. Experts are cautiously optimistic about the effects of smart assistants on children, but we hesitate to fully endorse the technology’s safety for kids, especially considering some of the security vulnerabilities inherent in the software, which we will cover below.


Read: Parenting in the Digital World: a review


Cybersecurity

Most of us, and I’m including myself here, love to show off what our latest gadgets can do. So we may be tempted, without thinking it through, to give control over our IoT devices to our smart assistants. Under normal circumstances, this shouldn’t be a problem—but circumstances are not always normal. Devices get lost, stolen, hacked into, and otherwise compromised by less-than-well-meaning individuals, which can be troublesome, to say the least, when they are in control of your domestic devices.

One such abnormal circumstance is an attack method that researchers have investigated called the “dolphin attack”: ultrasonic audio waves that humans can barely hear, but that the smart assistant interprets as a command. To protect yourself from these attacks, you would have to turn the smart assistant off until you need it, or introduce a confirmation protocol for certain commands, which alerts the human that the assistant has received a command of some sort. For convenience, the protocol could be set to work only for sensitive operations. You could compare it to 2FA for a certain subset of commands.
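
As an illustration of what such a confirmation protocol could look like, here is a minimal, hypothetical Python sketch: sensitive commands require an explicit, audible confirmation before they run, while routine commands pass straight through. The command names and helper functions are assumptions for the example, not any vendor’s API.

    # Hypothetical sketch of "2FA for sensitive voice commands".
    # Sensitive actions require an explicit, audible confirmation step,
    # so an inaudible "dolphin" command can't silently trigger them.

    SENSITIVE_COMMANDS = {"unlock_front_door", "send_message", "purchase_item"}

    def confirm_with_user(command: str) -> bool:
        """Announce the command and wait for an explicit 'yes'.

        A real device would use text-to-speech and the microphone; the
        console stands in for both here, for the sake of the sketch.
        """
        answer = input(f"Assistant: I heard '{command}'. Say 'yes' to proceed: ")
        return answer.strip().lower() == "yes"

    def handle_command(command: str) -> str:
        """Run routine commands directly; gate sensitive ones behind confirmation."""
        if command in SENSITIVE_COMMANDS and not confirm_with_user(command):
            return f"Cancelled: {command}"
        return f"Executing: {command}"

    if __name__ == "__main__":
        print(handle_command("play_music"))         # routine: runs immediately
        print(handle_command("unlock_front_door"))  # sensitive: asks out loud first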

By using virtual assistants to do our online shopping, we also run the risk of these technologies and their parent companies learning potentially sensitive facts about us, such as payment information and product ordering history. That information is typically stored in the cloud, where its security is in the hands of the operator of the smart assistant or their cloud provider.

And as smart assistant usage grows, you can imagine that the interest of malware authors grows with it, as they look for associated vulnerabilities and bugs they can abuse for personal gain. In fact, the weaponization of IoT is only just starting, but we expect it to grow quickly, as there is little security in place to stop it.

Other concerns

Paying attention

Another important thing to keep in mind is that we humans are not as good at multi-tasking as we like to think. Even as your virtual assistant reads out your email to you, your brain gets distracted enough to undermine tasks that require your full attention. You could end up either missing the point of the mail or spicing up your family dinner with something inedible.

Eavesdropping

Even though this study showed no conclusive results about apps listening in on our conversations, we should realize that by using voice-activated assistants, we have implicitly given them permission to eavesdrop on us. They are designed to pay attention and wait for an activation command. And how often do we realize this, and turn them off when we don’t want or need them to listen? (My guess would be close to never.) If we are honest with ourselves, we don’t think to do this—plus how inconvenient is it to constantly boot up a device every time you want to use it, especially one designed to interact seamlessly with your life? As a consequence, they are always on standby and therefore always listening.
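
To see why “always on standby” effectively means “always listening,” here is a minimal, hypothetical sketch of a wake-word loop: the microphone is sampled continuously, and audio is only treated as a command once the wake word is detected. The names are illustrative assumptions, not how any particular assistant is built.

    # Hypothetical sketch of an always-on wake-word loop. The microphone
    # is read continuously; audio only gets treated as a command once the
    # wake word is heard. Names are illustrative, not a real device API.

    import time

    WAKE_WORD = "hey assistant"

    def capture_audio_chunk() -> str:
        """Stand-in for reading a short buffer from the microphone."""
        return input("(mic) ")

    def process_command(utterance: str) -> None:
        """Stand-in for recording the command and shipping it off for processing."""
        print(f"Recording and processing: {utterance!r}")

    def listen_forever() -> None:
        # The device never stops sampling the microphone; it only starts
        # acting on audio once the wake word is detected.
        while True:
            chunk = capture_audio_chunk()
            if WAKE_WORD in chunk.lower():
                process_command(capture_audio_chunk())
            time.sleep(0.1)

    if __name__ == "__main__":
        listen_forever()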

At least it’s funny

On the bright side, we have been introduced to a whole new dimension of humor, thanks to the snarky writers behind smart assistants’ programmed responses.

  • We can let the virtual assistants talk to each other and see what develops.
  • We can introduce smart assistants as guest stars in comedy TV shows. Who remembers The Big Bang Theory’s Raj meeting Siri “in the flesh”?
  • We can even try to make them tell us jokes.

If the story develops to our liking, maybe in a few years we’ll only remember the fun parts—leaving the security woes behind us. But if it develops in much the same way as many new “smart” applications have over the last few years, it will be more like: We thought it was fun at the time.

Don’t become a victim of your smart assistant. Use the parts of it that give personal value to you and your quality of life, and tighten up security on parts you don’t need. Think about what information you trust your assistant with and who could be behind the scenes. And remember: just because it’s a new, fun technology doesn’t mean you have to have it.

ABOUT THE AUTHOR

Pieter Arntz

Malware Intelligence Researcher

Was a Microsoft MVP in consumer security for 12 years running. Can speak four languages. Smells of rich mahogany and leather-bound books.