Buying habits and the way we interact with technology are changing. Contact centers need to adapt, both by delivering services in the channels where their customers are and by embracing technology to provide even better service and improve the agent experience. In practice, this means going omni-channel and leveraging their data even further. And the data they have the most of? Speech.
Speech-to-text software (also referred to as voice or speech recognition software) transcribes audio files into text, usually into a word processor to enable editing and search. It is the same technology that drives the tech space’s voice-enabled assistants, such as Siri, Alexa, and Google Assistant.
Voice recognition works by breaking spoken audio down into small samples (e.g. at a rate of 16 kHz, or 16,000 samples per second) and then using complex algorithms to match those samples against local language pronunciation and build up words. If you want to dive deeper into the technical details, check out this post from Adam Geitgey.
As contact centers live and breathe voice, it is hardly surprising that they have tons of data available. The bigger the operation gets, the more recordings are stored, making them impossible to process without machine help.
As a simple illustration, consider a contact center with 25 seats working in 2 shifts. Assuming 8-hour shifts and an average talk time of 45 minutes per hour, the material collected averages 300 hours per day. For a single person, this one day’s worth of material would already take over a month to process, even at an accelerated 1.2x playback speed.
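The arithmetic behind that illustration is easy to verify with a few lines of Python (the seat counts and talk times are the example figures above, not real benchmarks):

```python
# Example figures from the illustration above
agents = 25
shifts = 2
shift_hours = 8
talk_min_per_hour = 45  # average talk time per hour worked

# Hours of audio recorded per day across the whole contact center
recorded_hours = agents * shifts * shift_hours * (talk_min_per_hour / 60)
print(recorded_hours)  # 300.0

# How long one person would need to listen through it all,
# at accelerated playback, working 8-hour days
playback_speed = 1.2
workday_hours = 8
listening_days = recorded_hours / playback_speed / workday_hours
print(listening_days)  # 31.25 workdays, i.e. over a month
```

Even at faster playback, a single day of recordings costs a full month of one person's working time, which is why manual review simply does not scale.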
The question isn’t about getting data, but rather how to leverage that into business opportunities. Consider the possible implications from the following perspectives: QA, business development, and analysis.
Of course, we are still some way from being able to utilize this technology fully. However, we are on the right track, and simpler solutions (which can tolerate higher error rates) can already be implemented in various languages. More on the different opportunities later in this blog post.
While the industry is, as discussed, still some way from getting the full benefits of the technology, big players such as Microsoft Azure, Google, IBM Watson, Amazon, and Baidu (in China) are investing billions in voice recognition software and deep learning. And while the bigger players mostly focus on perfecting the accuracy of larger languages, English at the forefront, several smaller local companies are crafting voice recognition for their own markets, as also seen at this year’s CCW in Berlin. In the Nordic region, companies like Inscripta (Finland), Gamgi (Sweden), and Capturi (Denmark) are working to enhance local language recognition.
At LeadDesk, we have followed the development of speech-to-text for a long time and created compatible endpoints for the most-used services. This means customers can already use software from several vendors and hook it up to LeadDesk quite easily. It also means we already have the capabilities to start working on larger speech-to-text projects with relatively fast deployment.
The single biggest challenge is still enhancing the accuracy of transcriptions. As speech is far from monotone and languages vary across regions, accents, dialects, and grammar rules, perfecting the algorithms takes time and data. And a lot of both.
While some use cases already work at current error rates, gaining deeper analysis capabilities requires voice recognition to become even more accurate. The best voice recognition software reportedly achieves accuracy as high as 97% in US English, while Google, Microsoft, and IBM Watson all disclosed numbers around 95% in 2018 (presumably higher today). Accuracy is still lower in less widely spoken languages. And while we are talking about small differences in percentages, the impact is rather large.
The higher the accuracy rate becomes, the more possibilities it opens. As Andrew Ng, former chief scientist at Baidu and one of the most recognized experts in speech recognition, put it already in 2016: “As speech-recognition accuracy goes from 95% to 99%, we’ll go from barely using it to using it all the time!”
(And if you are interested, the standard measure of accuracy works as follows:
Word Error Rate = (Substitutions + Deletions + Insertions) / Number of Words Spoken)
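The formula above is usually computed via the word-level edit distance between the reference transcript and the recognizer's output. A minimal sketch (the example sentences are made up for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j] (Levenshtein over words)
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[-1][-1] / len(ref)

# One substitution ("was" -> "is") out of four reference words
print(wer("the call was dropped", "the call is dropped"))  # 0.25
```

A 95%-accurate recognizer corresponds to a WER of roughly 0.05, i.e. one word in twenty is wrong, which is why the jump to 99% matters so much for downstream analysis.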
In a contact center context, accuracy is further complicated by background noise and the low audio quality of narrowband phone calls. However, things are improving here too, as mobile phone manufacturers and operators continue rolling out VoLTE or wideband audio (also known as “HD Voice”).
As discussed in the previous chapter, the long-term vision of how speech-to-text will affect contact centers is broad. Most of the implementations are already available in text-based applications. However, because transcribed speech is still error-prone, taking full advantage of all these solutions remains a thing of the near future.
To get a better grasp of what can be achieved, we asked our Solution Architect, Samuli Pihlaja, to tell us more about how he sees the use cases for speech-to-text. He presents a four-level framework to explain them.
At its most basic level, the application of speech-to-text is automatic transcription. The benefit is simple: on average, reading is faster than listening, which allows you to process more data.
On average, adults read somewhere between 250 and 300 words per minute. Compare that to conversational speech, which averages 120–150 words per minute. Reading through transcripts instead of listening to calls can thus save you time and let you get through roughly double the number of calls.
Transcriptions meant for human reading are also less affected by errors in the transcription process, as our brains can quite easily fill in the gaps.
Just look at this sentence; Yuo cna porbalby raed tihs esaliy desptie teh msispeillgns.
How can this be used in contact centers?
The second level is applying simple search to text transcriptions. Think of it as scanning through a text document, finding keywords, and, if needed, jumping directly to the right spot in the call.
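This level needs little more than timestamped transcript segments and substring matching. A minimal sketch, assuming a hypothetical transcript format of (start-time, text) pairs (the segment contents are invented for illustration):

```python
# Hypothetical transcript: list of (start_seconds, text) segments,
# as most speech-to-text APIs return timestamps alongside the text
transcript = [
    (0.0, "hi this is anna from support"),
    (4.2, "i would like to cancel my subscription"),
    (9.8, "let me check the cancellation policy for you"),
]

def find_keyword(segments, keyword):
    """Return start times of segments whose text contains the keyword."""
    kw = keyword.lower()
    return [start for start, text in segments if kw in text.lower()]

# Matches both "cancel" and "cancellation", so you can jump straight
# to 4.2 s or 9.8 s in the recording
print(find_keyword(transcript, "cancel"))  # [4.2, 9.8]
```

Even with a fairly high word error rate, this kind of keyword jump already beats scrubbing through audio by hand.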
How can this be used in contact centers?
The third level in our framework is the analysis component. This not only requires higher accuracy but also the help of machines to understand patterns. By leveraging AI and machine learning we can start analyzing e.g. share of talk, topic distribution, tonality, and do mass analysis.
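As one concrete illustration of this level, share of talk can be computed directly from diarized call segments, i.e. transcript chunks labeled by speaker. A minimal sketch with invented segment data:

```python
# Hypothetical diarized segments: (speaker_label, duration_seconds)
segments = [
    ("agent", 30.0),
    ("customer", 12.0),
    ("agent", 18.0),
    ("customer", 40.0),
]

def talk_share(segments):
    """Fraction of total talk time per speaker."""
    totals = {}
    for speaker, duration in segments:
        totals[speaker] = totals.get(speaker, 0.0) + duration
    total = sum(totals.values())
    return {speaker: dur / total for speaker, dur in totals.items()}

print(talk_share(segments))  # {'agent': 0.48, 'customer': 0.52}
```

Run over thousands of calls, even a simple metric like this reveals patterns, for example whether top-performing agents let customers talk more.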
Another example of this is interaction analytics, which uses AI to efficiently collect and analyze large volumes of data and interactions from the channels in use. The data can then be categorized into specific areas for further analysis.
If you are wondering more about QA in Contact Centers, be sure to check out our ultimate guide for contact center QA.
A final, and the most radical, example of how contact centers can utilize speech recognition and machine learning is the search for the “perfect call”: finding and combining patterns from good and bad calls to analyze what makes or breaks your pitches.
Notably, harnessing AI requires feeding the machines data so they can start “learning”, so starting sooner will yield better results later. For more insights on how AI can already help contact centers be more efficient, check out this blog post.
The fourth level is leveraging speech in real-time implementations. The opportunities here are immense. Some simple implementations, such as voice-enabled call routing, are already possible. But this is only the beginning.
Real-time implementations will help drive the trend of self-service (one of the megatrends in contact centers) further and allow customers to, for example, handle simple interactions with a voice bot over the phone whenever they need, letting agents focus on the harder cases.
Furthermore, we will see implementations where agents get real-time tips during a call based on factors such as keywords, topics, sentiment, and share of talk. Or why not get a popup with the right material from your help center directly on their screen when a specific problem is discussed? One company is even developing software that listens to the customer’s pitch and wording and, based on that, tells when they are ready to buy.
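The simplest version of such agent assistance is a keyword trigger over the streaming transcript. A minimal sketch, where the rule table and article names are purely hypothetical:

```python
# Hypothetical agent-assist rules: keyword -> help-center article to pop up
RULES = {
    "invoice": "billing-faq",
    "refund": "refund-policy",
    "cancel": "retention-playbook",
}

def suggest(partial_transcript: str) -> list[str]:
    """Return help-center articles to surface, in the order keywords occur
    in the real-time transcript text."""
    words = partial_transcript.lower().split()
    return [RULES[w] for w in words if w in RULES]

# As transcript text streams in during the call, matching articles pop up
print(suggest("i never got my refund for that invoice"))
# ['refund-policy', 'billing-faq']
```

Production systems would use topic and sentiment models rather than exact word matches, but the pattern, streaming transcript in, contextual suggestions out, is the same.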
In conclusion, speech recognition will bring a big change to the way contact centers are run. While the technology is still in its early stages, we will continue to see novel implementations in the coming years.
What harnessing speech recognition can bring your business can be divided into the following topics:
Remember: getting the most out of machines will require “learning”, meaning the sooner you start the better results you will get later on.
Our custom work team, Professional Services, is happy to discuss projects and possibilities for your business needs. Contact us.