Our Blog

News, Insights, sample code & more!

ASR
Announcing the launch of Voicegain Whisper ASR/Speech Recognition API for Gen AI developers

Today we are really excited to announce the launch of Voicegain Whisper, an optimized version of OpenAI's Whisper speech recognition/ASR model that runs on Voicegain-managed cloud infrastructure and is accessible through Voicegain APIs. Developers can use the same well-documented, robust APIs and infrastructure that process over 60 million minutes of audio every month for leading enterprises like Samsung and Aetna as well as innovative startups like Level.AI, Onvisource and DataOrb.

The Voicegain Whisper API is a robust and affordable batch Speech-to-Text API for developers that are looking to integrate conversation transcripts with LLMs like GPT-3.5 and GPT-4 (from OpenAI), PaLM 2 (from Google), Claude (from Anthropic), LLaMA 2 (open source from Meta), and their own private LLMs to power generative AI apps. OpenAI has open-sourced several versions of the Whisper models; with today's release, Voicegain supports Whisper-medium, Whisper-small and Whisper-base. Voicegain now supports transcription in the many languages that Whisper supports.
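As a sketch of the kind of pipeline this enables - transcribe a recording, then hand the transcript to an LLM - consider the two calls below. The Voicegain request fields are illustrative placeholders; the second call uses OpenAI's public chat completions API:

    # 1) Transcribe a recording (Voicegain request fields are illustrative):
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe \
      -H "Authorization: Bearer $VOICEGAIN_JWT" \
      -H "Content-Type: application/json" \
      -d '{ "audio": { "fromUrl": "https://example.com/call.wav" } }'

    # 2) Feed the transcript text to an LLM for summarization:
    curl -s https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{ "model": "gpt-3.5-turbo",
            "messages": [ { "role": "user",
                "content": "Summarize this call: <transcript text>" } ] }'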

Here is a link to our product page


There are four main reasons for developers to use Voicegain Whisper over other offerings:

1. Support for Private Cloud/On-Premise deployment (integrate with Private LLMs)

While developers can use Voicegain Whisper on our multi-tenant cloud offering, a big differentiator for Voicegain is our support for the Edge. The Voicegain platform has been architected and designed for single-tenant private cloud and datacenter deployment. In addition to the core deep-learning-based Speech-to-Text model, our platform includes our REST API services, logging and monitoring systems, auto-scaling, and offline task and queue management. Today the same APIs enable Voicegain to process over 60 million minutes a month. We bring this practical, real-world experience of running AI models at scale to our developer community.

Since the Voicegain platform is deployed on Kubernetes clusters, it is well suited for modern AI SaaS product companies and innovative enterprises that want to integrate with their private LLMs.

2. Affordable pricing - 40% less expensive than OpenAI

At Voicegain, we have optimized Whisper for higher throughput. As a result, we are able to offer access to the Whisper model at a price that is 40% lower than what OpenAI charges.

3. Enhanced features for Contact Centers & Meetings.

Voicegain also offers critical features for contact centers and meetings. Our APIs support two-channel stereo audio, which is common in contact center recording systems. Word-level timestamps are another important feature our API offers; they are needed to map audio to text. Enhanced speaker diarization - a required feature for contact center and meeting use cases that is already available for the Voicegain models - will soon be made available on Whisper.
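As an illustration, a transcription request enabling those options might look roughly like this; the JSON field names are illustrative placeholders, not the documented schema:

    # Two-channel (stereo) audio with word-level timestamps (fields illustrative):
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe \
      -H "Authorization: Bearer $JWT" \
      -H "Content-Type: application/json" \
      -d '{
            "audio":    { "fromUrl": "https://example.com/stereo-call.wav" },
            "settings": { "channels": "stereo", "wordTimestamps": true }
          }'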

4. Premium Support and uptime SLAs.

We also offer premium support and uptime SLAs for our multi-tenant cloud offering. These APIs today process over 60 million minutes of audio every month for our enterprise and startup customers.

About the OpenAI Whisper Model

OpenAI Whisper is an open-source automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. The model architecture is an encoder-decoder transformer, and it has shown significant performance improvements compared to previous models because it has been trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection.

[Figure: OpenAI Whisper encoder-decoder transformer architecture (source: OpenAI)]

Getting Started with Voicegain Whisper

Learn more about Voicegain Whisper on our product page. Any developer - whether a one-person startup or a large enterprise - can access the Voicegain Whisper model by signing up for a free developer account. We offer 15,000 minutes of free credits when you sign up today.

There are two ways to test Voicegain Whisper; they are outlined here. If you would like more information or have any questions, please drop us an email at support@voicegain.ai.
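For example, a request selecting one of the supported Whisper models might look like the sketch below; the "model" field name is an illustrative assumption, so consult the API reference for the exact schema:

    # Transcribe with Whisper-medium (field name illustrative):
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe \
      -H "Authorization: Bearer $JWT" \
      -H "Content-Type: application/json" \
      -d '{
            "audio":    { "fromUrl": "https://example.com/meeting.wav" },
            "settings": { "model": "whisper-medium" }
          }'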

Insights
Key Differentiators

The current enterprise speech-to-text market can be divided into three distinct groups of players. Note that we are focusing here on speech-to-text platforms rather than complete end-user products (so we do not include consumer products like Dragon NaturallySpeaking, etc.).

  • The old ASRs - for example, Nuance (and every speech company that Nuance acquired over the years) and Lumenvox. These speech-to-text engines go back to the late 1990s and early 2000s. They were built using technology relying on Gaussian Mixture Models (GMM) and Hidden Markov Models (HMM), and they require an on-prem install.
  • Established Cloud Speech-to-Text services - like Google, AWS, Microsoft Azure, and IBM. Some of these also began with recognizers built using GMMs and HMMs, but around 2012 they started transitioning to Deep Neural Network (DNN) models for speech recognition.
  • New players - new companies going back to about 2015, when Nvidia made it possible for pretty much anyone to train DNNs on its new GPUs. Many small companies arose that built their own speech-to-text engines, either from scratch or on open-source foundations. Now, five years later, many of them are entering the speech-to-text market with mature products delivering high recognition accuracy.

Where does Voicegain fit here?

We consider ourselves one of the new players, as we started working on our own DNN-based speech-to-text engine at the end of 2016. However, we have been working with old-style ASRs since 2006 and as a result we knew their limitations very well. That is what motivated us to develop an ASR of our own.

We are also very familiar with employing ASRs in real-world, large-volume applications, so we know which features ASR users want - be it the developers who build the applications or the IT personnel who have to host and maintain them.

All of this guided us in the decisions we made when developing our speech-to-text platform.

So how is Voicegain product different?

Below we list what we think are the 4 key differentiators of our speech-to-text platform compared to the competition. Note that the competitive field is pretty broad, and we consider a particular feature a differentiator if it is not a common feature in the market.

1) Edge Deployment

By Edge Deployment we mean a deployment on customer premises (datacenter) or in a VPC. Moreover, the deployment is fully orchestrated and managed from the Cloud (for more information see our blog post about the Benefits of Edge Deployment). The built-in orchestration and management make it essentially different from the old ASRs, which were also deployed on-prem but required support contracts to deploy them successfully and to maintain them over time.

We think that Edge Deployment is critical for a speech-to-text platform that aims to replace many of the old ASRs in their applications.

2) Acoustic Model Customization

Over the years of working with ASRs we noticed that there were cases where the ASR would show consistently higher error rates. Usually this was related to IVR calls coming from customers in regions of the country with distinct accents.

In some of our use cases so far, the ability to customize models has allowed us to reduce WER very significantly (e.g., from 8% to 3%).

We are currently working on a rigorous experiment where we are customizing our model to support Irish English. We plan to report in detail on the results in April.

3) Targeted support for IVR

The Voicegain speech-to-text platform was developed specifically with IVR use cases in mind. Currently the platform supports the following three IVR use cases, and we are working on adding conversational NLU later this year.

a) ASR with support for legacy IVR Standards

In order to make our speech-to-text engine an attractive replacement for old ASRs, we implemented support for legacy standards like MRCP and GRXML. That support is not a mere add-on - simply tacking a Web API onto the back of an MRCP server - but is more integral: our core speech-to-text engine directly interprets a superset of MRCP protocol commands.

We also support GRXML and JSGF grammars - via MRCP, in IVR callbacks, and over Web API.
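As a quick illustration of the kind of grammar involved, here is a minimal JSGF grammar for a yes/no confirmation prompt (a generic sketch, not taken from Voicegain documentation):

    #JSGF V1.0;
    grammar confirm;
    // The {yes}/{no} tags carry the semantic result to the application.
    public <answer> = (yes | yeah | correct) {yes}
                    | (no | nope | incorrect) {no};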

When used with grammars, a big advantage of the Voicegain recognizer is that at its core it is a large-vocabulary recognizer. Grammars constrain the recognized utterances to facilitate semantic mapping, but the recognizer can also recognize out-of-grammar utterances, which opens new possibilities for IVR tuning.

b) Web-hook IVR Support (without VXML)

Flow-based IVR systems have traditionally been built using two approaches: (i) having the dialog interactions interpreted on a VXML platform (VXML browser), or (ii) using webhooks that invoke application logic running on standard web back-end platforms (examples of the latter are the offerings of Twilio, Plivo, or Tropo).

Our platform supports webhook-style IVRs. Incoming calls can be interfaced via standard telephony SIP/RTP, and the IVR dialog can be directed from any platform that implements webhooks (e.g., Node.js, Django).

c) Enabling IVRs that use chatbot back-end

Many companies have invested significant effort into building their own text-based chatbots rather than using products like Google Dialogflow. What the Voicegain platform provides is an easy way to deploy existing chatbot logic on a telephony speech channel. This takes advantage of our platform's webhook IVR support, which can feed real-time text (including multiple alternatives) to a chatbot platform. We also provide audio output, either via TTS or prerecorded clips.

4) End-to-end support for Real-Time Continuous Speech-to-Text

Because IVR has always been our focus, we built our Acoustic Models to support low-latency real-time speech-to-text (both continuous large-vocabulary and with context-free grammars). We also focused on convenient ways to stream audio into our speech-to-text platform and to consume the generated transcript.

One of our products is Live Transcribe, which provides real-time transcription (with just a few seconds of delay) that is then broadcast over websockets and can be consumed in the web clients we provide. This opens up the possibility of live speaker transcription for use cases such as conferences, lectures, etc., making these events easier for hearing-impaired audience members to participate in.

Developers
"Hello World" Example

In this post we show, in three steps, what is needed to run your first transcription using the Voicegain API.

We assume that you have already signed up for a Voicegain account and logged into the portal.


Step 1: Create new Context

The main reason to create a new Context is to establish a new authentication realm. Access to each Context can be controlled separately, so it is easy to disable access to a certain Context without affecting other Contexts.

Contexts are also used for specifying default ASR settings.

You can create a new Context from the Context Dash.



Step 2: Generate Authentication token

Voicegain APIs use JWTs (JSON Web Tokens) to identify and authenticate the account making the request. In order to make API requests you need to generate a JWT, which can easily be done from the portal.



Step 3: Run the curl command

Below is an example of a curl command that submits a Web API request to the Voicegain Synchronous Speech-to-Text API at https://api.voicegain.ai/v1/asr/transcribe.
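A minimal sketch of such a request follows; the JSON fields and the response shape are illustrative assumptions, not the exact documented schema:

    # Submit audio by URL for synchronous transcription (fields illustrative):
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe \
      -H "Authorization: Bearer $JWT" \
      -H "Content-Type: application/json" \
      -d '{ "audio": { "fromUrl": "https://example.com/hello-world.wav" } }'

    # Illustrative response shape:
    # { "result": { "transcript": "hello world" } }

The JWT generated in Step 2 is passed in the Authorization header.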


In this case, the audio to be transcribed was retrieved from a URL. Audio can alternatively be submitted inline (within the request).

Note that synchronous transcription has an audio length limit of 60 seconds. Longer audio requires use of the asynchronous transcription API.

For asynchronous transcription requests it is possible to stream the audio, e.g. via websocket. You can see some of the Voicegain API documentation at: https://www.voicegain.ai/api
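For completeness, here is a hypothetical sketch of the asynchronous flow; the endpoint path and fields are assumptions for illustration only:

    # Submit a longer recording for asynchronous transcription:
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe/async \
      -H "Authorization: Bearer $JWT" \
      -H "Content-Type: application/json" \
      -d '{ "audio": { "fromUrl": "https://example.com/long-recording.wav" } }'

    # The response carries a session id; poll (or subscribe via websocket)
    # until the final transcript is ready, e.g.:
    #   curl -s -H "Authorization: Bearer $JWT" \
    #     https://api.voicegain.ai/v1/asr/transcribe/async/<session-id>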

Edge
Benefits of Edge Deployment

There is no denying that services available in the Cloud have significant benefits, which makes them a popular choice. That is why the Voicegain Speech-to-Text Platform is available both in the Cloud and at the Edge. The key benefits of accessing Voicegain as a Cloud service are:

  • Ease of Use - All it takes to start accessing Voicegain on the Cloud is to create an account on the Voicegain Web Console and get the developer API keys/security tokens. You can immediately start accessing the APIs that have been extensively documented.
  • No Maintenance - Voicegain ensures availability of the infrastructure and is responsible for the software updates and patches, backups, resources, etc.
  • High Security - The provider spends a one-time effort on securing the Cloud services for all of the tenants. Although the Cloud is potentially more exposed, the provider can devote more resources to addressing security in a systematic way.
  • High Availability - Cloud provides redundancy of the virtual platform and often geographic distribution. Geographic distribution provides more resiliency to network wide outages, etc.
  • Scalability - Cloud provider takes care of the growing demand for resources.
  • Lower Sys Admin, DBA etc. costs - This is largely related to the No Maintenance point.


What is Edge Deployment?

Before we discuss the benefits of Edge Deployment let's define what we mean by it.

  • Edge Computing is defined broadly as all computing outside the cloud happening at the edge of the network, and more specifically in applications where real-time processing of data is required. Edge of the network, in turn, is usually understood as within the "last mile", that part of the network that physically reaches the end-user's premises.
  • What we call Edge Deployment is a deployment of Edge Computing (in our case, specifically Speech-to-Text services) either on customer premises (datacenter) or in a VPC of a cloud provider. Compute resources are either owned or rented by the customer; however, the deployed application and the services it provides are orchestrated and managed from the Cloud. These services run in a virtualized environment (in our case, Kubernetes).

Benefits of Edge Deployment

Edge Computing for Speech-to-Text services has many advantages:

  1. Low Network Latencies & High Network Reliability - With Edge Computing, processing of speech audio is brought close to where the audio originates. For example, all processing can be done in the same location where the Telco phone lines terminate for an IVR application. If the speech processing were to happen in the Cloud, the audio data would need to be sent over the Internet, which would introduce additional latency and jitter, and would make the service susceptible to occasional incidents on the wider internet, like trunks overloaded by DDoS attacks, fiber cuts, etc. One can avoid some of those issues by deploying more reliable network connectivity to the Cloud, e.g., Google Cloud Interconnect, but that comes at a cost and still does not change the basic reality of extra latency.
  2. Lower Bandwidth Cost - Some Speech-to-Text applications generate a lot of data, e.g., a Call Analytics application that processes 100% of the calls. Edge Deployment allows for putting processing resources right next to where the data is generated, e.g., right at the Call Center.
  3. Data Privacy and Control - With all the incoming and generated data confined to the Edge Computing environment and none of it going to the Voicegain Cloud, customers can apply their own security protocols to protect the data.


Does Edge provide some of the benefits of the Cloud?

You may ask - what about the benefits of the Cloud, mentioned upfront? Do I get some of these with the Edge Deployment?

The answer is (qualified) "yes", and specifically:

  • Ease of Use - Edge Deployment is fully managed from the Cloud. Deployment of the entire application stack takes a few mouse clicks.
  • No Maintenance - Voicegain takes care of managing the components of the application - all application components are automatically updated and/or patched. The customer still needs to take care of the hardware and the Kubernetes cluster.
  • High Security - The same core application is deployed for all our customers and we have made sure that it is secure. Any newly found vulnerabilities will be automatically patched. The network entry and exit points of the Edge environment are well defined, and customers can provide additional network security for them.
  • High Availability - Running on the Kubernetes platform, our application has been designed with high availability in mind - there are multiple instances of each service, and Kubernetes takes care of failover in case of a hardware node failure. Because of the ease of deployment, it is easy for our customers to deploy multiple Edge instances, for example to achieve geographic distribution.
  • Scalability - Again, thanks to the underlying Kubernetes platform, new processing resources can be added simply by adding new hardware nodes to the cluster; the Voicegain application will automatically take advantage of them (see the sketch below).
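As a rough illustration of what adding capacity looks like on a generic kubeadm-provisioned cluster (the exact procedure depends on how the cluster was set up):

    # On the control-plane node: print the join command for a new worker.
    kubeadm token create --print-join-command

    # Run the printed "kubeadm join ..." command on the new hardware node,
    # then verify it is Ready; Kubernetes can now schedule Voicegain
    # service pods onto the added capacity.
    kubectl get nodes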

Transcription
Real-Time Transcription for the Hearing Impaired

Countryside Bible Church has been using the Voicegain platform for real-time transcription since September 2018 (when our platform was still in alpha).

How it Started

In August 2018 one of our employees was approached by staff at CBC with a question about software that would allow a deaf person to follow sermons live via transcription. One of the members at CBC is both hearing and vision impaired and cannot easily follow sign language; however, she can read large font on a computer screen from close by.

In August 2018, Voicegain had just started alpha tests of the platform, so his response was that he indeed knew of such software - Voicegain. At that time our testing was focused on IVR use cases, so we still needed a few weeks to polish the transcription APIs and develop a web app that could consume the transcript stream (via websocket) and present it as scrolling text in a browser.

To improve recognition, we used about 200 hours of previously transcribed sermons from CBC to adapt our Acoustic DNN Model. Additionally, we created a specific CBC Language Model by adding a corpus of text from several Bible translations, various transcribed sermons, a list of CBC staff names, etc.

As far as the input audio is concerned, initially we were streaming audio using the standard RTP protocol from the ffmpeg tool. We had some issues with the reliability of raw RTP, so later we switched to a custom Java client that sends the audio using a proprietary protocol. The client runs as a daemon on a small Raspberry Pi device.
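For reference, the original RTP approach looked roughly like the following ffmpeg invocation (a sketch: the ALSA device name and the destination endpoint are placeholders):

    # Capture from an ALSA input and stream 8 kHz mono mu-law audio over RTP.
    ffmpeg -f alsa -i hw:1,0 -ac 1 -ar 8000 -acodec pcm_mulaw \
           -f rtp rtp://asr.example.com:5004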




Current State

The CBC audio-visual team has been running real-time transcription using our platform since September 2018, pretty much every Sunday. You can see an example of the transcription in action in the video below.


Plans

The current plan for the transcription service is to integrate it into the CBC website and make it available together with the streamed video. This will allow the hearing impaired to follow the services at home via streaming. For now, the transcription text will be presented as an embedded web page element under the embedded video.

Because the streamed video is delayed more than 30 seconds with respect to real time, we will be feeding the audio simultaneously to two ASR engines, one optimized for real-time response and one optimized for accuracy. This is easy because the Voicegain Web API provides methods that allow for attaching two ASR sessions to a single audio stream. Each session can, in turn, feed its own websocket stream. By accessing the appropriate websocket stream, the web UI can display either the real-time or the delayed transcript.
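A hypothetical sketch of such a request - one audio stream feeding two sessions, each with its own websocket output (all field names are illustrative, not the exact API schema):

    # One audio stream, two ASR sessions (fields illustrative):
    curl -s -X POST https://api.voicegain.ai/v1/asr/transcribe/async \
      -H "Authorization: Bearer $JWT" \
      -H "Content-Type: application/json" \
      -d '{
            "sessions": [
              { "asyncMode": "REAL-TIME", "output": "websocket" },
              { "asyncMode": "OFF-LINE",  "output": "websocket" }
            ],
            "audio": { "stream": "websocket" }
          }'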

Example transcribed sermons

Because of their Terms of Use, we cannot provide direct results for any of the major ASR engines, but you can download the audio linked below, as well as the corresponding exact Transcripts, and run comparison tests on a recognizer of your choice. Note that the Voicegain ASR ignores most of the duplicated words that occur in the audio, which is why the transcript has those duplicates removed.

The audio is Copyright of Countryside Bible Church and the transcripts are Copyright of Voicegain.

1.  God's Plan for Human History (Part 2)

Tom Pennington  |  Daniel 2  |  2018-11-04 PM

55 minutes 13 seconds, 7475 words

Audio | Transcript | Voicegain Output

Accuracy: 1.08% character error rate

Note: The Voicegain output is formatted to match the Transcript; normally it also includes timing information. This specific output was obtained on 4/30/19 from the real-time recognizer, which has slightly lower accuracy compared to the off-line recognizer.


Edge
Raspberry Pi as Audio Streaming Client

You can stream audio to the Voicegain transcription API from any computer, but sometimes it is handy to have a dedicated, inexpensive device just for this task. Below we relay the experiences of one of our customers in using a Raspberry Pi to stream audio for real-time transcription. It replaced a Mac Mini which was initially used for that purpose. Using the Pi had two benefits: a) obviously the cost, and b) it is less likely than a Mac Mini to be "hijacked" for other purposes.

Hardware

The Voicegain Audio Streaming Daemon requires very little in the way of computing resources, so even a Raspberry Pi Zero is sufficient; however, we recommend the Raspberry Pi 3 B+, mainly because it has an on-board 1Gbps wired Ethernet port. WiFi connections are more likely to have problems with streaming over the UDP protocol.

Here is a list of all the hardware used in the project (with Amazon prices as of July 2019):

  • Element14 Raspberry Pi 3 B+ Motherboard - $37.78
  • Miuzei Raspberry Pi 3 b+ Screen, 3.5 Inch - $23.99
  • Miuzei 3.5 Inch Screen Case for 3.5 LCD - $9.99
  • iPazzPort Wireless Mini Handheld Keyboard - $13.99
  • UGREEN USB Audio Adapter - $8.99
  • SanDisk Ultra 32GB microSDHC UHS-I card - $7.23
  • plus an existing USB 5V power supply that was reused.

All the components added up to a total of $101.97. The reason why a mini monitor and a mini keyboard were included is that they make it more convenient to control the device while it is in the audio rack. For example, the alsa audio mixer can be easily adjusted this way, while at the same time monitoring the level of the audio via headphones.



Raspberry PI running AudioDaemon

Software

The device runs standard Raspbian, which can easily be installed from an image using e.g. balenaEtcher. After the base install, the following steps were needed to get things running (a shell sketch follows the list):

  • enable ssh access
  • change the default audio device to the USB sound card (Raspbian comes with alsa and basic USB sound drivers by default)
  • install the driver for the display (otherwise the output font is too tiny to be readable)
  • install OpenJDK 9
  • use the link generated from the Voicegain Portal to download the Voicegain AudioDaemon jar file and the correct JSON config
  • set the correct audio source number in the AudioDaemon start script and launch the daemon
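In shell terms, those steps look roughly like this (package names, file names, and the start script are indicative; the actual download link comes from the Voicegain Portal):

    # Enable ssh access (can also be done via the raspi-config menu):
    sudo raspi-config nonint do_ssh 0

    # Install Java (the exact OpenJDK package depends on the Raspbian release):
    sudo apt-get update && sudo apt-get install -y openjdk-9-jre

    # Download the AudioDaemon jar and its JSON config using the link
    # generated in the Voicegain Portal (placeholder URL):
    wget -O AudioDaemon.jar "<link-from-portal>"

    # After setting the audio source number in the start script, launch it:
    ./audio-daemon-start.sh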

Observations

Here are some lessons learned from using this setup over the past 6 months:

  • While streaming, the CPU use stays under 10%
  • The Java heap is set to 128m, which seems to be more than enough, because GCs manage to reduce it to about 54m
  • The Raspberry Pi turned out to be very reliable - we have not had a single issue with the hardware nor with the Raspbian OS
  • The cheap USB audio card delivers very good sound quality (at least for speech recognition)
  • Very cheap USB power supplies should be avoided - sometimes they cause a hum in the audio (though that also depends on what audio device is being connected).

Announcement
Voicegain Story

The team behind Voicegain has more than 12 years of experience using Automated Speech Recognition in the real world - developing and hosting complete IVR systems for large enterprises.

We started out as Resolvity, Inc., back in 2005. We built our own IVR Dialog platform, utilizing AI to guide the dialog and to improve the recognition results from commercial ASR engines.

Resolvity Dialog Platform

The Resolvity Dialog Platform had some advanced AI modules. For example:


  • It had an ontology that could be used to model the Dialog Domain. This ontology could then be used to automatically drive the dialog, generating follow-up questions based on the information already acquired. We used this often in IVR applications that required recognition of product names.
  • It had an Incremental Case-Based Reasoning (CBR) troubleshooting engine which, together with the Ontology, could be used to diagnose technical problems based on the presented symptoms.
  • It had a module to correct systematic errors of the ASR engine to improve accuracy (we received a US Patent for this).
  • It had an NLCR module that could automatically handle "How may I help you?" types of interactions. It used a combination of Ontology, Bayesian, and Neural Network classifiers.


Hosted IVR

Starting in 2007 we built complete IVR applications for Customer Support and hosted them on our servers in data centers. We built a Customer Solutions team that interacted with our customers, ensuring that the IVR applications were always up to date, and an Operations team that ensured we ran the IVRs 24/7 with very high SLAs.

The Resolvity Dialog Platform had a set of tools that allowed us to analyze speech recognition accuracy in high detail and to tune various ASR parameters (thresholds, grammars).

Moreover, because the platform was ASR-engine agnostic, we were able to see how ASR engines from various vendors performed in real life.



VoiceGain 1.0 Cloud PBX

In 2012-2013 Resolvity built a complete low-cost Cloud PBX platform on top of open-source projects. We launched it for the Indian market under the brand name VoiceGain. The platform provided complete end-to-end PBX+IVR functionality.

The version that we used in production supported only DTMF, but we also had a functional ASR version. However, at that time it was built using conventional ASR technologies (GMM+HMM) and we found that training it for new languages presented quite a few challenges.

VoiceGain was growing quite fast. We had a presence in data centers in Bangalore and Mumbai, and we were able to provision both landline and mobile numbers for our PBX+IVR customers. Eventually, although our technology was performing quite well, we found it expensive to run a very hands-on business in India from the USA and sold our India operations.

Augmented Recognition

When the combination of hardware and AI developments made Deep Neural Networks practical, we decided to start working on our own DNN speech recognizer, initially with the goal of augmenting the results from the ASR engines that we used in our IVRs. Very quickly we noticed that with our new customized ASR used for IVR tasks we could achieve better results than with the commercial ASRs. We were able to confirm this by running comparison tests across data sets containing thousands of examples. The key to higher accuracy was the ability to customize the ASR Acoustic Models to the specific IVR domain and user population.

Our Own ASR Platform

The great results with augmented recognition led us to launch a full-scale effort to build a complete ASR platform, again under the Voicegain (.ai) brand name, that would allow for easy model customization and be easy to use in IVR applications.

From our IVR experience we knew that large enterprise IVR users are (a) very price sensitive and (b) required to maintain tight security compliance, which is why from day 1 we also worked on making the Voicegain platform deployable on the Edge.
