Voice to interact with wearable devices

Voice is the sensible way to interact with constrained devices.

Over the years we’ve been conditioned to think that our fingers are the only way to interact with machines and devices. Since it was invented in the 1860s, the typewriter defined the width of two hands as the convenient size for inputting text. More recently, gestures like touch and swipe worked for smaller devices like smartphones. But how do you interact with something that is lodged in your ear, or elegantly displayed on your wrist?

The answer is voice. A microphone less than a millimetre across is the smallest, most sensible and most economical way to control wearable devices.

Vlatko Milosevski, Business Development Manager at NXP Software, gives his insights into the role of voice control and audio sensing for wearable and constrained devices. It’s a market that will go from near-zero to a billion units sold annually in the next few years.

And the wearables sector is going to train a whole generation of users to forget their fingers … and do just about everything by talking to their device, whether it’s a PC, smartphone, a vending machine, home heating system or their car.

Vlatko, what is the main challenge for NXP Software with wearable or constrained products?

As the name ‘constrained’ suggests, this product category is characterized by three major limitations: size, form factor, and the way users interact with the device.

These devices are made to be worn comfortably – in your ears, or on your hand, body or head. They are small, light and they fit ergonomically in or on the place where they are meant to be worn. They also need to look nice and fashionable. And, last but not least, people should know how to use them instantly and intuitively.

As a consequence, constrained devices often do not have a screen to touch or a button to press. They are focused on excelling at a limited set of functions: making a call, tracking your physical activity, listening to music or quickly checking your notifications. The fact that these devices are smaller, thinner, lighter and simpler imposes limitations on several factors: the size and capacity of the battery, the processing power and memory embedded in the device, and the way one can interact with it.

The challenge for NXP Software is this: how can LifeVibes technologies improve the core features of these devices when faced with these system and form-factor constraints?

Why should NXP Software invest in a solution?

The wearable smart devices category is exploding. This was evident at the latest Consumer Electronics Show in Las Vegas, USA, in January 2015. And of course the hype will accelerate even more with the launch of the announced Apple smartwatch. The PR machines at Apple, Samsung and other players will raise awareness of wearable devices.

Market intelligence companies are predicting shipments of 150 million smart wristband-worn devices alone in 2019. When you add wireless headsets and Internet of Things devices in the home, we can expect an annual market of a billion potential devices that can benefit from NXP Software’s voice-enhancement and audio-sensing technologies.

How is NXP Software’s solution beneficial … and for what kind of customers?

From the first day of its existence, NXP Software has been involved in making software products for wearable devices.

Millions of Sony and Samsung Bluetooth headsets have LifeVibes VoiceExperience software inside. Samsung’s smartwatches have LifeVibes inside.

We have provided state-of-the-art calling quality in a ‘tight’ package which neither drains the device battery nor affects the device’s design or ergonomics. That is why our customers have won numerous international awards. For example, the Plantronics BackBeat Go wireless headset won the best wearable device award at CES 2014.

Our mission and proposition to our customers is simple: innovation. Innovations that push the boundaries of what is possible within the constraints of form factor, ergonomics and economics that we are both faced with.

How do consumers benefit from our solution?

Where do I start? I suppose the most frequent scenario is that people can answer a phone call in any place and situation with their tiny wireless headset. No need to ‘go to a quiet place’ to have a comfortable conversation.

People can also give spoken commands to their devices in all sorts of environments and situations. Our technologies make speech-command recognition more robust and reliable.

With our acoustic technologies, wearable devices can detect for themselves where they are – for example in a car, a bus or train, or an office – and adapt themselves to that environment. You don’t want your messages to be read out loud when you are in the office or on public transportation, but it is very handy to hear them read aloud if you are driving, isn’t it?
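As a toy illustration of this kind of soundscape sensing (this is not NXP’s actual algorithm, and the thresholds and class names are invented for the sketch), a device could distinguish a quiet office from a vehicle using just two cheap features of an audio frame: overall loudness and the share of energy in the low frequencies, where engine rumble dominates.

```python
import math

def rms(frame):
    """Root-mean-square level of a list of samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def low_band_ratio(frame):
    """Fraction of spectral energy in the lowest ~1/16 of bins, via a naive DFT."""
    n = len(frame)
    energies = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        energies.append(re * re + im * im)
    total = sum(energies) or 1.0
    return sum(energies[: max(1, n // 16)]) / total

def classify_soundscape(frame, loud_threshold=0.05, rumble_threshold=0.6):
    """Very rough rule: loud and low-frequency-heavy looks like a vehicle."""
    if rms(frame) > loud_threshold and low_band_ratio(frame) > rumble_threshold:
        return "vehicle"
    return "quiet office"
```

A production classifier would of course use far richer features and learned models, but the principle is the same: the device listens to its surroundings and adapts its behaviour accordingly.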
We are also stepping into new territory, using our acoustic competence in pioneering applications like reliable heart-rate monitoring. In this scenario the ‘noise’ from the device, environment or body movement is removed from the heart-rate sensor readings. The end result is that the user gets a much more reliable measurement of their heart rate while exercising. In some situations, such reliable heart-rate measurement can save lives. I think that NXP Software is the world leader in soundscape management.
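One classic way to remove movement ‘noise’ from a sensor reading, when a correlated reference signal is available (for example from an accelerometer – a hypothetical pairing here, not a description of NXP’s implementation), is adaptive noise cancellation with an LMS filter. A minimal sketch:

```python
def lms_cancel(primary, reference, taps=4, mu=0.05):
    """Subtract an adaptively-estimated noise component from `primary`.

    `primary` is the noisy sensor signal (e.g. heart-rate sensor samples);
    `reference` is a signal correlated with the noise (e.g. motion data).
    Returns the cleaned signal.
    """
    w = [0.0] * taps          # adaptive filter weights
    buf = [0.0] * taps        # most recent reference samples
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))   # noise estimate
        e = d - y                                    # cleaned sample
        w = [wi + 2 * mu * e * bi for wi, bi in zip(w, buf)]  # LMS update
        out.append(e)
    return out
```

Because the pulse signal is uncorrelated with the motion reference, the filter converges on the motion-induced component and leaves the pulse largely intact.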

How are we strong against the competition?

I’m confident in saying that we have the best audio-processing team of experts anywhere. Our team consists of hand-picked acoustic experts from all around the world.

We combine this acoustic expertise with deep computer-architecture knowledge. This means we can build powerful software algorithms that run within very tight limits on microprocessor power and memory.

At the same time, we ensure that our processing is robust against that universal force called day-to-day life. It does not matter how the user wears the device, or what the design and form factor is: our algorithms just work and give the best results.

Last but not least, we are always close to our customers, no matter where they are. We have expert teams and labs in all the key markets where the ‘Smart Wearable Revolution’ is happening.

What has been the trend … and how will it evolve in the coming years?

As I mentioned before, this market is set to go from more-or-less zero to a billion units in the next few years. The smartwatch market alone is set to be 150 million units a year, and the sky is the limit with the Internet of Things.

But a really exciting part of this evolution for me is that it will increase awareness with consumers of using voice control. Siri and Cortana have made people aware of voice capabilities … but most consumers still fall back on typing or gestures as a way to interact with their device. I think that the voice-only interaction with constrained devices will then work back up the device chain so more people start using voice for their smartphone or PC.

Why and how are we working together … and with which partners?

The industry ecosystem is complex and big.

Some companies focus on making the best embedded microprocessor architectures: ARM, CEVA and Cadence, to name a few. We work closely with these partners to make sure that the design and execution of our algorithms consume the least possible power on their architectures.

Other companies are focused on making audio-processing chipsets that are integrated inside the end device. These are firms like Cirrus Logic or our ‘big brother’, NXP Semiconductors. With these partners we work to pre-integrate our software products onto their chipsets, so that in the end everything is ‘ready to go’ for the end-device manufacturers.

Of course, there are big software ecosystem players too, like Google and Microsoft. With them we are shaping the future of operating-system frameworks and making sure that application developers can use our technologies in their apps seamlessly.

Why do our partners believe in us … and choose NXP Software to integrate on their specific platform?

Partly because they want to work together with the best technology providers in the industry, and partly because their own customers are interested in the combination of our software products and their products.

It’s valuable for existing and potential partners to know we have a decade-long win-win track record with our partners. The world is moving even faster today and companies cannot do everything on their own. They need to rely on expert and friendly help from their partners.

So today’s fast pace of innovation only accelerates the need for deeper partnerships in the industry.

What’s next? How do customer and consumer expectations develop?

The technology is getting more human and more intuitive. People simply want the technology to work for them; to understand what they want, no matter where they are; and to help them in making their daily tasks faster, better and easier.

So a voice-controlled device can be like a personal assistant, butler, coach, mentor, perhaps a doctor, or even a friend … and all in a single device! And attached to your wrist … and it has to look good and be fashionable. Like a personal expression of yourself.

It’s not a lot to ask for!

We are already witnessing technological breakthrough patterns that support this vision. For example: spoken commands instead of menu deep-diving; Google Play reminders/recommendations at the right moment, instead of manual checking; and Post-It style reminders to do the things that you should do.

Music apps today quickly learn what your music taste is, and they might recommend music from an artist you have never heard of, but who turns out to sound exactly like what you want. That’s the type of advice a friend or colleague would give … and we will continue to develop this type of interaction further.

What is next from our perspective? What are we working on?

It’s interesting that although all the audio-enhancement tools we’ve created since our inception have been mainly targeted at mobile phones … it’s as if we’ve been preparing ourselves for the revolution of wearable devices, constrained devices and the Internet of Things.

Yes, our audio enhancement tools make mobile phones better. After all, if a smartphone takes an extra second to load a webpage then nobody will notice, yet poor call quality will quickly turn the user against their device and/or their mobile network.

But the ability to, say, filter out background noise from a voice-controlled, wearable device is crucial. It’s also incredibly valuable to be able to use the audio soundscape to sense the user’s environment: like with the in-the-office or in-the-car example I gave earlier.
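The textbook technique for this kind of background-noise filtering is spectral subtraction: estimate the noise’s magnitude spectrum from a noise-only stretch of audio, then subtract it from each frame of noisy speech while keeping the speech phase. A minimal sketch (illustrative only, not the LifeVibes algorithm):

```python
import cmath

def dft(frame):
    """Naive discrete Fourier transform of a list of real samples."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning real samples."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def spectral_subtract(frame, noise_frame, floor=0.01):
    """Subtract the noise magnitude spectrum, keeping the noisy-speech phase."""
    spec = dft(frame)
    noise_mag = [abs(c) for c in dft(noise_frame)]
    cleaned = []
    for c, nm in zip(spec, noise_mag):
        mag = max(abs(c) - nm, floor * abs(c))   # clamp to a small floor
        cleaned.append(cmath.rect(mag, cmath.phase(c)))
    return idft(cleaned)
```

Real products use much more sophisticated, low-latency variants (and multiple microphones), but the core idea – learn the noise, subtract it, keep the voice – is the same.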

And in both of those areas – voice-enhancement and environment-sensing – NXP Software is massively ahead of the curve in terms of research and expertise.

If applicable, how does this all link to smarter listening and better conversations?

Like I just said, the wearables market will be the pinnacle of implementing our technology for smarter listening and better conversations.

In fact I’d go so far as to say that the wearables market would simply not be viable without the algorithms that NXP Software has created, and will continue to create. The only sensible i/o for a wearable device is the human voice … and that’s an area where we are pre-eminent.
