Google, Baidu and the race for an edge in the global speech recognition market

Speech recognition technology has been around for more than half a century, though the early uses of speech recognition — like voice dialing or desktop dictation — certainly don’t seem as sexy as today’s burgeoning virtual agents or smart home devices.

If you’ve been following the speech recognition technology market for any length of time, you know that a slew of significant players emerged on the scene about six years ago, including Google, Apple, Amazon and Microsoft (in a brief search, I counted 26 U.S.-based companies developing speech recognition technology).

Since that time, the biggest tech trendsetters in the world have been picking up speed and setting new benchmarks in a growing field, with Google recently providing open access to its new enterprise-level speech recognition API. While Google certainly seems to hold the current edge in the market after substantial investments in machine learning over the past couple of years, the tech giant has a potential Achilles’ heel when it comes to owning an important segment of the global market: lack of access to China.

The six-year ban on Google in China is well known, and aside from the very rare lapse in censorship, the block seems relatively immutable for the foreseeable future. As the world’s most populous country, China also has more mobile users than any other nation, and a majority of them use voice-to-text capabilities to initiate search queries and navigate their way through the digital landscape.

Google may be missing out on reams of Mandarin audio data, but Baidu hasn’t missed the opportunity to take advantage. As China’s largest search engine, Baidu has collected thousands of hours of voice-based data in Mandarin, which was fed to its latest speech recognition engine, Deep Speech 2. Using deep learning algorithms, the system learned to transcribe Mandarin speech entirely on its own.

The Baidu team that developed Deep Speech 2 was primarily based in its Sunnyvale AI Lab. Impressively, the research scientists involved were not fluent in Mandarin and knew very little of the language. Alibaba and Tencent are two other key players in the Chinese market developing speech recognition technology. Though both use deep learning platforms, neither company has gained the level of publicity and coverage of Baidu’s Deep Speech 2.

Despite its Mandarin prowess, Deep Speech 2 wasn’t originally trained to understand Chinese at all. “We developed the system in English, but because it’s all deep learning-based it mostly depends on data, so we were able to pretty quickly replace it with Mandarin data and train up a very strong Mandarin engine,” stated Dr. Adam Coates, director of Baidu USA’s AI Lab.

When Deep Speech 2 was first released in December 2015, Andrew Ng, the chief scientist at Baidu, described Deep Speech 2’s test run as surpassing Google Speech API, wit.ai, Microsoft’s Bing Speech, and Apple’s Dictation by more than 10 percent in word error rate.

According to Baidu, as of February of this year, Deep Speech 2’s most recently published error rate stands at 3.7 percent for short phrases, while Google stated an 8 percent word error rate about one year ago (to its credit, Google did cut its error rate by a relative 15 percent over the course of a year). Coates called Deep Speech 2’s ability to transcribe some speech “basically superhuman,” able to transcribe short queries more accurately than a native Mandarin Chinese speaker.
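The word error rate figures cited above have a standard definition: the word-level edit distance (substitutions, insertions and deletions) between the system’s transcript and a reference transcript, divided by the number of reference words. Neither company publishes its exact evaluation scripts, so the following is only a minimal sketch of the conventional metric, not either vendor’s benchmark code:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    hypothesis and reference transcripts, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of a five-word query gives a 20 percent WER:
print(word_error_rate("play music in the kitchen",
                      "play music in the chicken"))  # 0.2
```

Note that the “reduced by 15 percent” figure for Google is a relative improvement: measured this way, dropping from an 8.0 percent to a 6.8 percent word error rate would be a 15 percent reduction.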

In addition, the system is capable of “hybrid speech,” the mixing of English and Mandarin that many Mandarin speakers use in everyday conversation. “Because the system is entirely data-driven, it actually learns to do hybrid transcription on its own,” said Coates. This is a feature that could help Baidu’s system transition well when applied across languages.

Since Baidu’s initial breakthrough, Google has rebuilt its speech recognition system. The newly introduced Cloud Speech API gives developers the ability to build speech-to-text conversion into any app. Google describes the Cloud Speech API as working in a variety of noisy environments and recognizing more than 80 languages and dialects.

Image analysis is another touted advantage that Google is using to attract attention over similar services offered by Amazon and Microsoft. Back in January 2016, Baidu released the AI software that powers its Deep Speech 2 system via GitHub, but it has yet to release a similar API platform.

Baidu is a bit hush-hush about much of its technology in development, and it’s difficult to say what specific advancements it has made since introducing Deep Speech 2 in December 2015. However, its continued progress and potential impact in the speech recognition market may show itself through the partnerships formed in rolling out its technology through other products and services.

Baidu recently tapped into the smart home market with an announcement of integration with Peel’s smart home platform, which offers a popular voice-based, universal remote app for smartphones and tablets.

Google unveiled a number of new AI-driven products, including Google Home, a voice-activated device that lets users manage appliances and entertainment systems with voice commands and that draws on the speech recognition technology in its newly announced “Google Assistant” (the product is scheduled for release later this year).

In my recent interview with Coates, he also described Baidu’s intense interest in, and behind-the-scenes exploration of, developing all manner of AI assistants; perhaps the introduction of a “Baidu Assistant” is on the horizon.

Google has some of the best scientists worldwide and a massive technology budget, often putting it ahead of the curve. But Baidu’s achievements and talented team of researchers seem to have the potential needed to significantly impact the technology and gain a foothold in the lucrative Chinese voice market.

That being said, Google did take a minority stake last year in the China-based startup Mobvoi, which is focused on voice recognition technology for mobile devices. With its speech recognition technology well under way, perhaps Google will find inroads that allow it to bypass other U.S.- and China-based players and access the gigantic Chinese market after all.