Why Qualcomm Wants To Bring Ultrasound Transmitters To Smartphones And Tablets

Mobile chipmaker Qualcomm has a track record of pushing new capabilities into its chips faster than its competitors in a bid to carve out a bigger chunk of the market. Last year, for instance, its LTE Snapdragon processor helped it take a 48 per cent revenue share in the first half of the year (Strategy Analytics’ figure), pushing more LTE handsets into the market and in turn accelerating the rate of 4G adoption.

The company made an interesting acquisition last November, buying some of the assets of an Israeli company called EPOS, which makes digital ultrasound technology. Ultrasound may seem an odd technology to push into consumer electronics, but Qualcomm clearly sees it as another differentiator for its chips, thanks to its potential to add some novel capabilities to the user interface space — both for stylus-based input and for touch-less interfaces such as gestures.

Discussing Qualcomm’s interest in ultrasound at the Mobile World Congress tradeshow in Barcelona, Raj Talluri, SVP of Product Management, explained that to put the technology to work in mobile devices, an ultrasound transmitter could be located in a stylus, with microphones on the mobile device then detecting the position of the pen.
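Talluri didn’t go into the positioning math, but a common way to turn per-microphone ultrasound arrival times into a pen position is trilateration: convert each time of flight into a range using the speed of sound, then solve the resulting system in a least-squares sense. Below is a minimal 2D sketch of that idea in Python; the microphone layout, sync assumption and numbers are illustrative, not details of Qualcomm’s or EPOS’s implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature; a real system would calibrate this

# Hypothetical microphone positions along the device edge (metres, in the table plane)
MICS = np.array([
    [0.00, 0.00],
    [0.07, 0.00],
    [0.00, 0.15],
])

def locate_pen(times_of_flight, mics=MICS, c=SPEED_OF_SOUND):
    """Estimate the 2D pen position from per-microphone ultrasound times of flight.

    Assumes the pen and the microphones share a time reference, so each time of
    flight converts directly into a range. Subtracting the first range equation
    from the others gives a linear system in the position, solved here in a
    least-squares sense.
    """
    d = c * np.asarray(times_of_flight)   # range from the pen to each microphone
    x0, d0 = mics[0], d[0]
    A = 2.0 * (mics[1:] - x0)
    b = (d0**2 - d[1:]**2) + np.sum(mics[1:]**2, axis=1) - np.sum(x0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: pen held 5 cm right of and 20 cm above the first microphone
true_pos = np.array([0.05, 0.20])
tof = np.linalg.norm(MICS - true_pos, axis=1) / SPEED_OF_SOUND
print(locate_pen(tof))   # ~ [0.05 0.20]
```

A real system would also need to synchronise the pen and the microphones (or work from time differences of arrival instead) and to calibrate for the actual speed of sound.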

Samsung has already included a capacitive stylus with its Galaxy Note phablet but Talluri said an ultrasound-based stylus would extend the capabilities — allowing a stylus to be used off-screen, say on the table top next to where your phone is resting, and still have its input detected.

“It’s better [than a capacitive stylus] in some key different ways which we’re working on getting to market – for example you could write here [on the table next to the phone] and it will still detect where it is. So let’s say you have a [paper] notepad… and you have a phone [nearby on the table] and you can start writing on your notepad and it will actually also be transcribed into text on the phone, because what happens is the ultrasound can be used to calibrate any reasonable distance,” he told TechCrunch.

The technology could also support gesture-based interactions by positioning an ultrasound transmitter on the mobile device. “There are many use cases of ultrasound,” said Talluri. “You could put a little ultrasound transmitter here [on the corner of the screen] and transmit stuff and then when you cut the ultrasound field [by swiping above the device’s screen] you can do gestures.

“There’s many different things you can do with it, once you have it. So we’re working on it and hopefully we’ll get it to commercial products.”
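Talluri didn’t describe how a “cut” in the field would actually be recognised; one plausible (assumed) scheme is to track the received ultrasound level per frame and flag a gesture when it dips below a baseline for a short burst of frames as the hand sweeps through. A toy sketch, with made-up thresholds:

```python
def detect_field_cuts(levels, baseline, dip_ratio=0.4, min_frames=3, max_frames=30):
    """Yield the frame index at which each 'field cut' gesture ends.

    levels:   per-frame received ultrasound level (arbitrary units)
    baseline: expected level with nothing blocking the transmitter
    A gesture is a dip below dip_ratio * baseline lasting between min_frames
    and max_frames consecutive frames: shorter dips are treated as noise,
    longer ones as something parked over the sensor rather than a swipe.
    """
    threshold = dip_ratio * baseline
    dip_len = 0
    for i, level in enumerate(levels):
        if level < threshold:
            dip_len += 1
        else:
            if min_frames <= dip_len <= max_frames:
                yield i
            dip_len = 0

# Example: a steady field, then a brief dip as a hand sweeps across the screen
frames = [10, 10, 9, 10, 2, 1, 2, 10, 10]
print(list(detect_field_cuts(frames, baseline=10)))  # -> [7]
```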

Talluri would not be drawn on the likely timeframe for bringing this technology to market in Qualcomm chips, or which device makers Qualcomm is working with. “We haven’t announced anything yet. There’s clearly a lot of work to be done on it. We’re working on it, we’re just not ready to announce,” he said. “We are very interested in it, that’s why we acquired the assets.”

He would say that Qualcomm is looking at both phone and tablet form factors for the ultrasound tech but added that it could work “anywhere” — including in wearable devices, such as Google Glass.

The system also doesn’t necessarily require new microphones to function — opening up the possibility of ultrasound-enabled accessories that can be retrofitted to existing devices to extend their capabilities.

“The other nice thing is that we find that the microphones [on existing mobile devices] that we put in to use for speech can also detect ultrasound waves — so you probably don’t need special microphones. There are lots of interesting ways to do it… You just need a transmitter somewhere,” said Talluri.
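That claim is plausible because standard phone audio paths typically sample at 44.1 or 48 kHz, so an ordinary voice microphone can in principle capture a near-ultrasound carrier just above the audible band and below the Nyquist limit. One cheap way to measure the energy at a single carrier frequency from raw microphone samples is the Goertzel algorithm; the sketch below is illustrative only, not EPOS’s or Qualcomm’s actual signal chain.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return the signal power at target_hz using the Goertzel algorithm.

    Cheaper than a full FFT when only one carrier frequency matters,
    which suits an always-on ultrasound detector.
    """
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

# Example: a 20.5 kHz tone sampled at 48 kHz stands out clearly against silence
RATE, TONE, N = 48_000, 20_500, 480
tone = [math.sin(2 * math.pi * TONE * i / RATE) for i in range(N)]
silence = [0.0] * N
print(goertzel_power(tone, RATE, TONE))     # large (~N^2 / 4)
print(goertzel_power(silence, RATE, TONE))  # ~0
```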

Discussing how mobile chipsets are generally going to evolve, Talluri said in his view the focus will be not so much on simply adding more and more cores, but rather on getting all the various chipset elements to work together better.

“We think the next generation of innovation is going to be more on heterogeneous compute. Right now if you look in the phone we’ve got CPUs, we’ve got GPUs, we’ve got video engines, we’ve got audio engines, we’ve got cameras, we’ve got security blocks, but they all do one thing at a time. Ideally you just want to say I want to do this and it should just go map itself to whatever its logical place is, and if that place is busy it should work on something else, maybe not optimally,” he said.

“That’s what I mean by heterogeneous compute. Every block should be able to do other things, so that’s kind of where I think SoCs in general will evolve to. How can you take advantage of the silicon that you put inside the die to do multiple things, not just one thing at a time? I think that’s a more interesting concept than just putting more cores.”
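Talluri is talking about hardware and runtime scheduling rather than a programmer-facing API, but the “map the work to whatever block is free, maybe not optimally” idea can be sketched as a toy dispatcher: each task lists the blocks that can run it in order of preference, and the scheduler falls back down that list when the preferred block is busy. Purely illustrative, not any Qualcomm software.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str         # e.g. "GPU", "DSP", "CPU"
    busy: bool = False

@dataclass
class Task:
    name: str
    candidates: list  # blocks that can run this task, best first (assumed preference order)

def dispatch(task, blocks):
    """Map a task onto its preferred free block, falling back to less optimal ones."""
    by_name = {b.name: b for b in blocks}
    for name in task.candidates:
        block = by_name.get(name)
        if block is not None and not block.busy:
            block.busy = True
            return f"{task.name} -> {block.name}"
    return f"{task.name} -> queued (all candidate blocks busy)"

blocks = [Block("GPU"), Block("DSP"), Block("CPU")]
print(dispatch(Task("image_filter", ["GPU", "DSP", "CPU"]), blocks))  # image_filter -> GPU
print(dispatch(Task("video_stabilise", ["GPU", "DSP"]), blocks))      # GPU now busy, so -> DSP
```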
