Today we are really excited to announce the launch of Voicegain Whisper, an optimized version of OpenAI's Whisper speech recognition/ASR model that runs on Voicegain's managed cloud infrastructure and is accessible through Voicegain APIs. Developers can use the same well-documented, robust APIs and infrastructure that process over 60 million minutes of audio every month for leading enterprises like Samsung and Aetna and innovative startups like Level.AI, Onvisource and DataOrb.
The Voicegain Whisper API is a robust and affordable batch Speech-to-Text API for developers who are looking to integrate conversation transcripts with LLMs like GPT-3.5 and GPT-4 (from OpenAI), PaLM 2 (from Google), Claude (from Anthropic), LLaMA 2 (open source from Meta), or their own private LLMs to power generative AI apps. OpenAI has open-sourced several versions of the Whisper models. With today's release, Voicegain supports Whisper-medium, Whisper-small and Whisper-base. Voicegain now supports transcription in the many languages that Whisper supports.
Here is a link to our product page.
There are four main reasons for developers to use Voicegain Whisper over other offerings:
While developers can use Voicegain Whisper on our multi-tenant cloud offering, a big differentiator for Voicegain is our support for the Edge. The Voicegain platform has been architected and designed for single-tenant private cloud and datacenter deployment. In addition to the core deep-learning-based Speech-to-Text model, our platform includes our REST API services, logging and monitoring systems, auto-scaling, and offline task and queue management. Today the same APIs enable Voicegain to process over 60 million minutes a month. We bring this practical, real-world experience of running AI models at scale to our developer community.
Since the Voicegain platform is deployed on Kubernetes clusters, it is well suited for modern AI SaaS product companies and innovative enterprises that want to integrate with their private LLMs.
At Voicegain, we have optimized Whisper for higher throughput. As a result, we are able to offer access to the Whisper model at a price that is 40% lower than what OpenAI offers.
Voicegain also offers critical features for contact centers and meetings. Our APIs support two-channel stereo audio, which is common in contact center recording systems. Word-level timestamps are another important feature of our API, needed to map audio to text. Enhanced diarization models, a feature of the Voicegain models that is required for contact center and meeting use cases, will soon be made available on Whisper.
We also offer premium support and uptime SLAs for our multi-tenant cloud offering. These APIs today process over 60 million minutes of audio every month for our enterprise and startup customers.
OpenAI Whisper is an open-source automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. The model's architecture is based on an encoder-decoder transformer and has shown significant performance improvements compared to previous models because it has been trained on a variety of speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection.
Learn more about Voicegain Whisper by clicking here. Any developer, whether a one-person startup or a large enterprise, can access the Voicegain Whisper model by signing up for a free developer account. We offer 15,000 minutes of free credits when you sign up today.
There are two ways to test Voicegain Whisper. They are outlined here. If you would like more information or if you have any questions, please drop us an email at support@voicegain.ai.
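As a sketch of what submitting a batch transcription job might look like in code, here is a minimal Python example using only the standard library. The endpoint path, JSON field names, and model identifier below are illustrative assumptions, not the authoritative schema; consult the Voicegain API documentation for the exact request format.

```python
import json
import urllib.request

# Base URL is an assumption; verify against the Voicegain API docs.
VOICEGAIN_API = "https://api.voicegain.ai/v1"

def build_transcription_request(audio_url: str, model: str = "whisper:medium") -> dict:
    """Assemble a hypothetical batch (offline) transcription request body.
    All field names here are illustrative, not the documented schema."""
    return {
        "sessions": [{
            "asyncMode": "OFF-LINE",          # batch transcription
            "content": {"full": ["words"]},   # request word-level output
        }],
        "audio": {"source": {"fromUrl": {"url": audio_url}}},
        "settings": {"asr": {"model": model}},
    }

def submit(audio_url: str, jwt_token: str) -> bytes:
    """POST the request to the async transcribe endpoint (network call;
    requires a valid JWT from your developer account)."""
    body = json.dumps(build_transcription_request(audio_url)).encode()
    req = urllib.request.Request(
        f"{VOICEGAIN_API}/asr/transcribe/async",
        data=body,
        headers={
            "Authorization": f"Bearer {jwt_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The JWT token is obtained from the Voicegain Web Console after signing up for a developer account.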
Enterprises are increasingly looking to mine the treasure trove of insights from voice conversations using AI. These conversations take place daily on video meeting platforms like Zoom, Google Meet and Microsoft Teams and over telephony in the contact center (which take place on CCaaS or on-premise contact center telephony platforms).
Voice AI or Conversational AI refers to converting the audio from these conversations into text using speech recognition/ASR technology and mining the transcribed text for analytics and insights using NLU. In addition, AI can be used to detect sentiment, energy and emotion in both the audio and text. The insights from NLU include extraction of key items from meetings. These include semantically matching phrases associated with things like action items, issues, sales blockers, agenda items, etc.
Over the last few years, the conversational AI space has seen many players launch highly successful products and scale their businesses. However, most of the popular Voice AI options available in the market are multi-tenant SaaS offerings. They are deployed on a large public cloud provider like Amazon, Google or Microsoft. At first glance, this makes sense. Most enterprise software apps that automate workflows in functional areas like Sales and Marketing (CRM), HR, Finance/Accounting or Customer Service are architected as multi-tenant SaaS offerings. The move to Cloud has been a secular trend for business applications, and hence Voice AI has followed this path.
However, at Voicegain we firmly believe that a different approach is required for a large segment of the market: an Edge architecture using a single-tenant model is the way to go for Voice AI apps.
By Edge, we mean the following:
1) The AI models for Speech Recognition/Speech-to-Text and NLU run on the customer's single tenant infrastructure – whether it is bare-metal in a datacenter or on a dedicated VPC with a cloud provider.
2) The Conversational AI app, which is usually a browser-based application that uses these AI models, is also deployed completely behind the firewall.
We believe the advantages of an Edge/On-Prem architecture for Conversational/Voice AI are driven by the following four factors:
Very often, conversations in meetings and call centers are sensitive from a business perspective. Enterprise customers in many verticals (Financial Services, Health Care, Defense, etc.) are not comfortable storing the recordings and transcripts of these conversations on the SaaS vendor's cloud infrastructure. Think about highly proprietary information like product strategy, the status of key deals, bugs and vulnerabilities in software, or even a sensitive financial discussion prior to the release of earnings for a public company. Many countries also impose strict data residency requirements from a legal/compliance standpoint. This makes the Edge (On-Premises/VPC) architecture very compelling.
Unlike pure workflow-based SaaS applications, Voice AI apps include deep-learning-based AI models for Speech-to-Text and NLU. To extract the right analytics, it is critical that these AI models, especially the acoustic models in the speech recognition engine, are trained on client-specific audio data. This is because each customer use case has unique audio characteristics that limit the accuracy of an out-of-the-box multi-tenant model. These unique audio characteristics relate to:
1. Industry jargon – acronyms, technical terms
2. Unique accents
3. Names of brands, products, and people
4. Acoustic environment and other audio/recording characteristics
However, most AI SaaS vendors today use a single model to serve all their customers. This results in sub-optimal speech recognition/transcription, which in turn results in sub-optimal NLU.
For real-time Voice AI apps, e.g., in the call center, there is an architectural advantage to having the AI models on the same LAN as the audio sources.
For many enterprises, SaaS Conversational AI apps are inexpensive to get started but they get very expensive at scale.
Voicegain offers an Edge deployment where both the core platform and a web app like Voicegain Transcribe can operate completely on our client's infrastructure. Both can be placed "behind an enterprise firewall".
Most importantly Voicegain offers a training toolkit and pipeline for customers to build and train custom acoustic models that power these Voice AI apps.
If you have any questions or would like to discuss this in more detail, please contact our support team by email (support@voicegain.ai).
As we announced here, Voicegain Transcribe is an AI-based Meeting Assistant that you can take with you to all your work meetings. So irrespective of the meeting platform - Zoom, Microsoft Teams, Webex or Google Meet - Voicegain Transcribe has a way to support you.
We now have some exciting news for users who regularly host Zoom meetings. Voicegain Transcribe users on Windows now have a free, easy and convenient way to access all the transcripts and notes from their Zoom meetings. Transcribe users can now download a new client app that we have developed, Voicegain Zoom Meeting Assistant for Local Recordings, onto their device.
With this client app, any Local Recording of a Zoom meeting (explained below) is automatically submitted to Voicegain Transcribe. Voicegain's highly accurate AI models then process the recording to generate not only the transcript (Speech-to-Text) but also the minutes of the meeting and the topics discussed (NLU).
As always, there is a free plan that does not expire, so you can get started today without having to set up your payment information.
Zoom provides two options to record meetings on its platform - 1) Local Recording and 2) Cloud Recording.
Zoom Local Recording is a recording of the meeting that is saved on the hard disk of the user's device. There are two distinct benefits of using Zoom Local Recording:
Zoom Cloud Recording is when the recording of the meeting is stored on your Zoom Cloud account on Zoom's servers. Currently Voicegain does not directly integrate with Zoom Cloud Recording (however it is on our roadmap). In the interim, a user may download the Cloud Recording and upload it to Voicegain Transcribe in order to transcribe and analyze recordings saved in the cloud.
Zoom allows you to record individual speaker audio tracks separately as independent audio files. The screenshot above shows how to enable this feature on Zoom.
Voicegain Zoom Meeting Assistant for Local Recordings supports uploading these independent audio files to Voicegain Transcribe so that you can get accurate speaker transcripts.
The entire Voicegain platform including the Voicegain Transcribe App and the AI models can be deployed On-Premise (or in VPC) giving an enterprise a fully secure meeting transcription and analytics offering.
If you have any questions, please sign up today and contact our support team using the app.
Since June 2020, Voicegain has published benchmarks on the accuracy of its Speech-to-Text relative to big tech ASRs/Speech-to-Text engines like Amazon, Google, IBM and Microsoft.
The benchmark dataset for this comparison is a publicly available 3rd-party dataset that includes a wide variety of audio: audiobooks, YouTube videos, podcasts, phone conversations, Zoom meetings and more.
Here is a link to some of the benchmarks that we have published.
1. Link to June 2020 Accuracy Benchmark
2. Link to Sep 2020 Accuracy Benchmark
3. Link to June 2021 Accuracy Benchmark
4. Link to Oct 2021 Accuracy Benchmark
5. Link to June 2022 Accuracy Benchmark
Through this process, we have gained insights into what it takes to deliver high accuracy for a specific use case.
We are now introducing an industry-first relative Speech-to-Text accuracy benchmark for our clients. By "relative", we mean that Voicegain's accuracy (measured by Word Error Rate) is compared with that of the big tech player the client is evaluating us against. Voicegain will provide an SLA that its accuracy vis-à-vis this big tech player will be practically on par.
We follow a four-step process to calculate the relative accuracy SLA:
In partnership with the client, Voicegain selects a benchmark audio dataset that is representative of the actual data the client will process. Usually this is a randomized selection of client audio. We also recommend that clients retain their own independent benchmark dataset, not shared with Voicegain, to validate our results.
Voicegain partners with industry-leading manual AI labeling companies to generate a human transcript of this benchmark dataset that is at least 99% accurate. We refer to this as the golden reference.
On this benchmark dataset, Voicegain shall provide scripts that enable clients to run a Word Error Rate (WER) comparison between the Voicegain platform and any one of the industry leading ASR providers that the client is comparing us to.
Currently Voicegain calculates the following two KPIs:
a. Median Word Error Rate: the median WER across all audio files in the benchmark dataset, for both ASRs.
b. Fourth-Quartile Word Error Rate: after the audio files in the benchmark dataset are ordered by increasing WER with the Big Tech ASR, we compute and compare the average WER of the fourth quartile for both Voicegain and the Big Tech ASR.
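The two KPIs above can be illustrated with a short Python sketch. The WER implementation and quartile split below are a simplified illustration of the method described, not Voicegain's actual scoring scripts.

```python
from statistics import median

def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def fourth_quartile_avg(wers_vendor: list, wers_baseline: list) -> float:
    """KPI (b): average vendor WER over the files falling in the worst
    (fourth) quartile when files are ordered by the baseline ASR's WER."""
    order = sorted(range(len(wers_baseline)), key=lambda i: wers_baseline[i])
    q4 = order[3 * len(order) // 4:]
    return sum(wers_vendor[i] for i in q4) / len(q4)

# KPI (a) is simply statistics.median over the per-file WER values.
```

Computing KPI (b) for both engines over the same baseline ordering makes the two averages directly comparable on the hardest files.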
We contractually guarantee that Voicegain's accuracy on the above two KPIs, relative to the other ASR, will be within a threshold acceptable to the client.
Voicegain measures this accuracy SLA twice in the first year of the contract and once annually from the second year onwards.
If Voicegain does not meet the terms of the relative accuracy SLA, we will train the underlying acoustic model to close the gap, taking on the expenses associated with labeling and training. Voicegain guarantees that it will meet the accuracy SLA within 90 days of the date of measurement.
1. Click here for instructions to access our live demo site.
2. If you are building a cool voice app and you are looking to test our APIs, click here to sign up for a developer account and receive $50 in free credits
3. If you want to take Voicegain as your own AI Transcription Assistant to meetings, click here.
The Twilio platform supports encrypted call recordings. Here is Twilio documentation on how to set up encryption for recordings on their platform.
Voicegain platform supports direct intake of encrypted recordings from the Twilio platform.
The overall diagram of how all of the components work together is as follows:
Below we describe how to configure a setup that automatically submits encrypted recordings from Twilio to Voicegain transcription as soon as those recordings are completed.
Voicegain will require a Private Key in a PKCS#8 format to decrypt Twilio recordings. Twilio documentation describes how to generate a Private Key in that format.
Once you have the key, you need to upload it via Voicegain Web Console to the Context that you will be using for transcription. This can be done via Settings -> API Security -> Auth Configuration. You need to choose Type: Twilio Encrypted Recording.
We will be handling Twilio recording callbacks using an AWS Lambda function, but you can use an equivalent from a different Cloud platform or you can have your own service that handles https callbacks.
A sample AWS Lambda function in Python is available on Voicegain Github: platform/AWS-lambda-for-encrypted-recordings.py at master · voicegain/platform (github.com)
You will need to modify that function before it can be used.
First you need to enter the following parameters:
The Lambda function receives the callback from Twilio, parses the relevant info from it, and then submits a request to the Voicegain STT API for OFFLINE transcription. If you want, you can modify the body of the request submitted to Voicegain in the Lambda function code. For example, the GitHub sample submits the results of transcription to be viewable in the Web Console (Portal), but you will likely want to change that so the results are delivered via a callback to your HTTPS endpoint (there is a comment indicating where the change would need to be made).
You can also make other changes to the body of the request as needed. For the complete spec of the Voicegain Transcribe API see here.
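As a rough sketch of the parsing step such a function performs, here is a minimal example that extracts the recording fields from a Twilio callback body. The parameter names (RecordingSid, RecordingUrl) follow Twilio's documented recording status callback; treat the EncryptionDetails handling as an assumption to verify against your own callbacks, and see the GitHub sample for the full working function.

```python
import json
from urllib.parse import parse_qs

def parse_twilio_recording_callback(body: str) -> dict:
    """Extract the fields needed to submit a recording for transcription.

    Twilio posts recording status callbacks as form-encoded data. The
    EncryptionDetails parameter (a JSON string) is assumed to be present
    only for encrypted recordings; verify against your account's callbacks.
    """
    params = {k: v[0] for k, v in parse_qs(body).items()}
    return {
        "recording_sid": params.get("RecordingSid"),
        "recording_url": params.get("RecordingUrl"),
        "encryption_details": json.loads(params["EncryptionDetails"])
        if "EncryptionDetails" in params else None,
    }
```

The Lambda handler would pass `recording_url` (and the encryption details, for encrypted recordings) into the body of the Voicegain transcription request.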
Here is a simple python code that can be used to make an outbound Twilio call which will be recorded and then submitted for transcription.
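The original sample lives in our documentation; as an illustrative stand-in, here is a minimal stdlib-only sketch that places a recorded outbound call through Twilio's REST API. The credentials, phone numbers, TwiML URL, and callback URL are placeholders; `Record` and `RecordingStatusCallback` are documented Twilio Calls API parameters.

```python
import base64
import urllib.parse
import urllib.request

# Placeholders: substitute your own Twilio credentials.
ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AUTH_TOKEN = "your_auth_token"

def build_call_params(to: str, from_: str, twiml_url: str, status_callback: str) -> dict:
    """Form parameters for Twilio's Calls resource. status_callback would be
    the HTTPS endpoint of the Lambda function handling recording callbacks."""
    return {
        "To": to,
        "From": from_,
        "Url": twiml_url,  # TwiML instructions executed when the call connects
        "Record": "true",  # record the call
        "RecordingStatusCallback": status_callback,
    }

def place_call(params: dict) -> bytes:
    """POST to the Calls endpoint (network call; requires valid credentials)."""
    url = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Calls.json"
    req = urllib.request.Request(url, data=urllib.parse.urlencode(params).encode())
    # Twilio REST API uses HTTP basic auth with the account SID and auth token
    auth = base64.b64encode(f"{ACCOUNT_SID}:{AUTH_TOKEN}".encode()).decode()
    req.add_header("Authorization", f"Basic {auth}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Once the recording completes, Twilio invokes the `RecordingStatusCallback` URL, which in this setup is the Lambda function that submits the recording to Voicegain.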
Notice that:
It has been over 7 months since we published our last speech recognition accuracy benchmark. Back then the results were as follows (from most accurate to least): Microsoft and Amazon (close 2nd), then Voicegain and Google Enhanced, and then, far behind, IBM Watson and Google Standard.
Since then we have obtained more training data and added additional features to our training process. This resulted in a further increase in the accuracy of our model.
As far as the other recognizers are concerned:
We have decided to no longer report on Google Standard and IBM Watson, which were always far behind in accuracy.
We have repeated the test using a similar methodology as before: we used 44 files from the Jason Kincaid data set and 20 files published by rev.ai, and removed all files on which none of the recognizers could achieve a Word Error Rate (WER) lower than 25%.
This time only one file was that difficult. It was a bad quality phone interview (Byron Smith Interview 111416 - YouTube).
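The filtering step in this methodology can be sketched in a few lines of Python (a simplified illustration; the file and recognizer names in the test data are hypothetical):

```python
def filter_benchmark(files_wer: dict, cutoff: float = 0.25) -> list:
    """Keep only files where at least one recognizer achieves WER below the cutoff.

    files_wer maps file name -> {recognizer name -> WER as a fraction}.
    Files too hard for every engine are excluded from the benchmark.
    """
    return [
        name for name, wers in files_wer.items()
        if min(wers.values()) < cutoff
    ]
```

This keeps the comparison focused on audio that at least one engine can transcribe reasonably, so a handful of pathological files does not dominate the averages.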
You can see boxplots with the results above. The chart also reports the average and median Word Error Rate (WER).
All of the recognizers have improved (the Google Video Enhanced model stayed much the same, but Google now has a new recognizer that is better).
Google latest-long, Voicegain, and Amazon are now very close together, while Microsoft is better by about 1%.
Let's look at the number of files on which each recognizer was the best one.
Note, the numbers do not add to 63 because there were a few files where two recognizers had identical results (to two decimal places).
We now have done the same benchmark 4 times so we can draw charts showing how each of the recognizers has improved over the last 1 year and 9 months. (Note for Google the latest result is from latest-long model, other Google results are from video enhanced.)
You can clearly see that Voicegain and Amazon started quite a bit behind Google and Microsoft but have since caught up.
Google seems to have the longest development cycles, with very little improvement from Sept. 2021 until very recently. Microsoft, on the other hand, releases an improved recognizer every 6 months. Our improved releases are even more frequent than that.
As you can see, the field is very close and you get different results on different files (the average and median do not paint the whole picture). As always, we invite you to review our apps, sign up, and test our accuracy with your data.
When you have to select speech recognition/ASR software, there are other factors beyond out-of-the-box recognition accuracy. These factors are, for example:
Today, we are really excited to announce the launch of Voicegain Transcribe, an AI-based transcription assistant for both in-person and web meetings. With Transcribe, users can focus on their meetings and leave the note-taking to us.
Transcribe can also be used to convert streaming and recorded audio from video events, webinars, podcasts and lectures into text.
Voicegain Transcribe is an app accessible from Chrome or Edge Browser and is powered by Voicegain's highly accurate speech recognition platform. Our out-of-the-box accuracy of 89% is on par with the very best.
Currently there are 3 main ways you can use Voicegain Transcribe:
If you join meetings directly from your Chrome or Edge browser (without any downloads or plug-ins), then you can use this feature to send audio to Voicegain. Examples of meeting platforms include Google Meet, BlueJeans, Webex and Zoom.
On a Windows device, browser sharing also works with desktop client apps like Zoom and Microsoft Teams. On a Mac/Apple device, browser sharing supports desktop apps.
Voicegain offers a downloadable Windows client app that is installed on the user's computer. This app accesses Zoom Local Recordings and automatically uploads them for transcription to Voicegain Transcribe.
Zoom has two types of recordings - Local Recordings and Cloud Recordings. This app is for Local Recordings - where the recording is stored on the hard disk of the user's computer. To learn more about Zoom local recording click here.
Zoom also allows recording a separate audio file for each participant. The Voicegain app supports uploading these individual audio files so that speaker labels are accurately assigned in the transcript.
Users may also upload pre-recorded audio files of their meetings, podcasts and calls, and generate transcripts. We support over 40 different formats, including mp3, mp4, wav, aac and ogg. Voicegain supports speaker diarization, so we can separate speakers even on a single-channel audio recording.
Currently we support English and Spanish. More languages are on our roadmap: German, Portuguese and Hindi.
Users can organize their meeting recordings and audio files into different projects. A project is like a workspace or a folder.
Users can save the voice signatures of meeting participants so that speaker labels can be assigned accurately.
Voicegain can also extract meeting action items and detect positive and negative sentiment.
Users can also mask - in both text and audio - any personally identifiable information.
We are adding a feature where Voicegain Transcribe can join any meeting: the user just enters the meeting URL and invites Voicegain Transcribe.
We are also adding a Chrome extension that will make it much easier to record and transcribe web meetings.
By signing up today, you will be on our forever Free Plan, which makes you eligible for 120 minutes of free meeting transcription every month. Once you are satisfied with our accuracy and user experience, you can easily upgrade to a paid plan.
If you have any questions, please email us at support@voicegain.ai
Interested in customizing the ASR or deploying Voicegain on your infrastructure?