Context Call AudioSense Demonstration at 2015 Mobile World Congress
AudioSense is a new family of software algorithms that enables device makers and app developers to create ‘contextually aware and sensing-enabled’ devices. Matthieu Vendeville explains what visitors to MWC 2015 can see on the NXP Software booth.
“We’ve created a unique Context Call demonstration so visitors could get real-life, hands-on experience of how our AudioSense algorithms work,” says Matthieu, who is the Technical Manager responsible for the demo.
“We’re showing two key use cases. One, how AudioSense can detect the user’s environment – for example, whether they’re in a car, an office or a theater. And, two, how AudioSense can improve the quality of speaker-mode conference calls.”
Sensing is a new topic, so perhaps you’re not sure about the applications and implications of being able to accurately measure a device user’s environment. Matthieu offers four topics below that help put the emerging science of contextual sensing into context:
Using contextual sensing to improve the mobile device experience
Imagine you’re at the cinema or the theater and your phone rings. It’s an embarrassing nightmare that Time Magazine covered in an article titled: “Show Stoppers: A Brief History of Rude and Disruptive Behavior in Theater.”
“It’s a problem that contextual awareness could solve because the AudioSense algorithms could work out that you’re in a theater, or even listening to a presentation, and put your phone into silent mode,” says Matthieu.
“Conversely, if AudioSense detects you are in an acoustically challenging area – like an underground railway platform – it could adapt the microphone characteristics so you could have a clear conversation without having to shout.”
Matthieu says being able to analyse the ambient soundscape has key advantages and applications. “People want to be able to have clear, natural conversations wherever they are. Regardless of their environment and what network they’re on, people judge their phone by call quality. It’s what phones are all about. AudioSense can give near-real-time responsiveness to your environment. So if it detects, say, an ambulance approaching with its siren going, it can incrementally adjust the phone characteristics to maintain optimal call quality.”
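To illustrate the kind of decision an environment classifier has to make, the toy sketch below derives two classic frame-level audio features – RMS energy and zero-crossing rate – and applies simple thresholds. The feature choice, threshold values and labels are illustrative assumptions only; they are not the actual AudioSense algorithms.

```python
import math

def frame_features(samples):
    """Two classic frame-level audio features:
    RMS energy and zero-crossing rate (ZCR)."""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / (n - 1)
    return rms, zcr

def classify_environment(samples, quiet_rms=0.02, broadband_zcr=0.15):
    """Toy rule-based classifier; thresholds are illustrative only."""
    rms, zcr = frame_features(samples)
    if rms < quiet_rms:
        return "quiet"            # e.g. theater -> suggest silent mode
    if zcr > broadband_zcr:
        return "noisy-broadband"  # e.g. metro platform -> adapt mic settings
    return "noisy-tonal"          # e.g. vehicle cabin hum

# A near-silent frame classifies as "quiet"
quiet_frame = [0.001 * math.sin(0.1 * i) for i in range(800)]
print(classify_environment(quiet_frame))  # -> quiet
```

A production system would of course run learned models over far richer spectral features, on short frames at audio rate, but the shape of the problem – map a sound frame to an environment label, then adapt device behaviour – is the same.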
The Context Call demonstration at Mobile World Congress will show how sensing works in real-life scenarios.
Using beamforming to improve speaker mode conference calls
Apart from sensing the ambient soundscape to determine the user’s environment, NXP Software can also use beamforming techniques to locate the people in a room.
The benefit is that the handset can focus on the people who are talking and, by a process of elimination, more accurately filter out any distracting background noise.
“A popular use of smartphones today,” Matthieu says, “is to put them on the desk in speaker mode and have an impromptu conference call. Our algorithm can enable devices with two or more microphones to accurately pinpoint where in the room the participants in a conversation are, and then beam in on their voices. It leads to significantly improved conference calls, even with lower-cost handsets.”
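The two-microphone approach Matthieu describes can be sketched in simplified textbook form: estimate the time difference of arrival (TDOA) between the microphones by cross-correlation, then delay-and-sum the channels so the talker’s voice adds coherently while uncorrelated noise averages down. This is a generic illustration of the technique, not NXP’s implementation; the function names and parameters are assumptions for the sketch.

```python
import math

def tdoa(mic_a, mic_b, max_lag):
    """Estimate how many samples mic_b lags mic_a by brute-force
    cross-correlation over lags in [-max_lag, +max_lag]."""
    n = len(mic_a)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(mic_a[i] * mic_b[i + lag]
                   for i in range(max(0, -lag), min(n, n - lag)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def delay_and_sum(mic_a, mic_b, lag):
    """Align mic_b by the estimated lag and average the channels:
    the talker's voice adds coherently, uncorrelated noise averages down."""
    n = len(mic_a)
    out = []
    for i in range(n):
        j = i + lag
        b = mic_b[j] if 0 <= j < n else 0.0
        out.append(0.5 * (mic_a[i] + b))
    return out

# Simulated talker: the second microphone hears the signal 3 samples later
voice = [math.sin(0.2 * i) for i in range(200)]
delayed = [0.0] * 3 + voice[:-3]
lag = tdoa(voice, delayed, max_lag=10)
print(lag)  # -> 3
focused = delay_and_sum(voice, delayed, lag)
```

On a real handset this runs on short frames at audio rate, and the estimated lag maps to an angle of arrival via the microphone spacing and the speed of sound – which is how the device “gets a fix” on each talker.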
The Context Call demonstration at Mobile World Congress will show how two-microphone beamforming and caller location enhance speaker-mode call quality.
Using contextual data to drive $7.5 billion in app and ad revenue
According to the UK analyst firm Juniper Research: “Ad-supported apps will account for 71% of the total location and context-based service revenue.” Juniper estimate that the global market for location and contextual apps will reach $7.5 billion by 2019.
Microsoft have written a paper titled: ‘Bringing Contextual Ads to Mobile Phones.’ In the abstract of the paper, the authors write: “A recent study showed that while US consumers spent 30% more time on mobile apps than on traditional web, advertisers spent 1600% less money on mobile ads.” The abstract continues, “Irrelevance results in low clickthrough rates, and hence advertisers shy away from the mobile platform.”
Matthieu says: “When people think of location and contextual services, they naturally assume that GPS is behind the equation. But GPS alone can’t be relied on for contextual services. For example, GPS can detect you’re moving through the city, but it doesn’t know if you’re in a quiet car or on a noisy bus. AudioSense can provide that vital missing dimension for handset makers and app developers.”
“GPS also has limitations when you are in a building,” Matthieu continues, “or in a location like an underground metro station. So audio sensing really is a vital part of making location and contextual apps that work all the time, not just when you’re visible to GPS satellites.”
Using contextual data for learning and adaptive apps
In a blog post titled ‘The Rise of Contextual Mobility’, Vija Shankar, Director of Product Marketing at mobile development and consulting specialists Kony, writes: “… These contextual services require companies to collect, store, and analyze information from various sources, including the device. This will require tight integration with analytics solutions where developers store and analyze the future torrent of incoming contextual data (e.g. location, motion, environmental conditions) from mobile devices and future Internet of Things devices. One of the best ways to get contextual information from mobile devices today is to build native apps that can tap into device data.”
Shankar continues: “Combining this sensor data with other web accessible data and systems of record data will allow companies to build a new set of business processes and services that are learning, adaptive, and predictive.”
Matthieu adds to the story about learning and adaptive software being driven by contextual data: “To pick up on the previous use case about differentiating between whether the user is on a bus or in a car, an adaptive app could then learn if the user is a frequent driver or bus user, and provide contextually relevant content. For example, bus users could see an ad about bus company promotions … but a car driver gets a text message – only once the car has stopped – about parking facilities in the zone she has entered.”
“NXP Software is already on some 3.4 billion mobile devices,” Matthieu explains, “so we have a wealth of experience in integrating our IP into the software stacks of manufacturers. AudioSense algorithms will be at home on smartphones, wearable devices and Internet of Things products.”
See the Context Call demonstration at MWC 2015
OEMs of smart devices and application developers are invited to visit NXP Software on Stand E30, Hall 7, at the Mobile World Congress. Technical specialists will be on hand to give an overview of all the VoiceExperience and AudioSense features, demonstrate the sensing capabilities in more detail, and show how multi-microphone handsets can improve speaker-mode conversations.