Do androids hum electric beats?
Waveshaper AI uses artificial intelligence to improve the quality of audio. Our technology is based on cutting-edge research from Queen Mary University of London on using deep neural networks for end-to-end audio processing.
Waveshaper AI has learned to repair audio, transform one microphone into another, recreate the sound of universally loved vintage analog studio hardware, remove noise, and more. And it does so in real time, in stereo, and at high quality.
Powered by Deep Learning
Our models capture both linear processing systems and systems with non-linearities, such as vintage vacuum-tube guitar amplifiers: electro-mechanical devices that are famously difficult to model and that produce a highly characteristic sound. Waveshaper AI learns to recreate these effects faithfully, and our inference engine can apply them to signals in real time on multiple platforms.
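To see why non-linear gear is the hard case, consider a toy stand-in for tube-style saturation: a memoryless tanh waveshaper. This is a hypothetical illustration, not Waveshaper AI's model; real amplifiers also have memory and dynamics, which is precisely what makes them hard to capture with simple formulas like this one.

```python
import numpy as np

def soft_clip(x, drive=4.0):
    """Toy stand-in for tube-style saturation: a memoryless tanh waveshaper.

    A linear system could never produce this curve: doubling the input does
    not double the output, and new harmonics appear in the spectrum.
    """
    return np.tanh(drive * x) / np.tanh(drive)

# One second of a 440 Hz sine at 48 kHz, driven into soft clipping.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = 0.8 * np.sin(2 * np.pi * 440.0 * t)
out = soft_clip(sine)
```

Even this single-line non-linearity already defeats linear modeling tools; a real vacuum-tube amplifier adds frequency-dependent, level-dependent behavior on top, which is where learned models earn their keep.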
Waveshaper AI can be integrated into existing products and services that use traditional software or hardware DSP: through an SDK, via our API, or embedded in IoT and hardware devices. The key advantage of using Waveshaper AI is a reduction in the footprint of the complex custom software or hardware associated with audio processing.
How does it work?
Waveshaper has a family of deep neural network architectures designed to learn acoustic signal processing. Our neural networks essentially listen to the way audio is processed traditionally by hardware and software systems, and then learn to approximate a function that is perceptually indistinguishable from the original.
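The idea of "listening and approximating" can be sketched in a few lines of NumPy. Here a hypothetical reference effect is a fixed 3-tap FIR filter, and a learnable model with the same structure but unknown taps is fit by gradient descent on paired input/output audio. This is a minimal sketch of the supervised-learning principle, not Waveshaper AI's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048
x = rng.standard_normal(N)  # training input audio (white noise)

# Hypothetical "reference" effect we want to clone: a fixed 3-tap FIR filter.
true_taps = np.array([0.5, 0.3, 0.2])
X = np.stack([np.roll(x, k) for k in range(3)], axis=1)  # delayed copies of x
y = X @ true_taps  # the processed audio the model "listens" to

# Learnable model: same structure, unknown taps, trained by gradient descent
# to minimize the mean squared error between its output and the reference.
taps = np.zeros(3)
for _ in range(500):
    err = X @ taps - y
    grad = 2.0 * X.T @ err / N
    taps -= 0.1 * grad
```

After training, `taps` matches `true_taps` almost exactly. A real effect is non-linear and has memory, so the learnable model must be a deep network rather than three numbers, but the training loop follows the same pattern: compare output to the reference, compute a gradient, adjust the parameters.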
Waveshaper has developed a new kind of modular audio processing platform, called the audio inference engine. It takes frames of raw audio as input and feeds them through a deep neural network. The network is composed of layers which are trained to decompose the signal, compress the constituent frequency information, and then apply transformations. The output is a new signal.
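The frame-in, frame-out flow can be sketched as a tiny three-layer stack in NumPy. The layer sizes, random weights, and stage names here are illustrative assumptions, not Waveshaper AI's architecture; the point is only the shape of the pipeline: a raw audio frame goes in, passes through layers that expand, compress, and transform it, and a new audio frame comes out.

```python
import numpy as np

rng = np.random.default_rng(1)
FRAME = 256  # hypothetical frame length in samples

def layer(x, W, b):
    """One dense layer with a tanh non-linearity."""
    return np.tanh(W @ x + b)

# Three illustrative stages with random (untrained) weights:
W1, b1 = 0.05 * rng.standard_normal((128, FRAME)), np.zeros(128)  # decompose
W2, b2 = 0.05 * rng.standard_normal((64, 128)), np.zeros(64)      # compress
W3, b3 = 0.05 * rng.standard_normal((FRAME, 64)), np.zeros(FRAME) # transform

def process_frame(frame):
    """Feed one raw audio frame through the stack; return a new frame."""
    h = layer(frame, W1, b1)
    h = layer(h, W2, b2)
    return W3 @ h + b3  # linear output layer back at audio frame size

frame_in = rng.standard_normal(FRAME)
frame_out = process_frame(frame_in)
```

In a trained system the weights encode the desired effect; streaming audio is just this forward pass repeated frame after frame.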
The audio inference engine is something like a virtual machine: the audio processing function is simply data loaded into the neural network. To change how audio is processed, you just load a different set of network parameters.
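The virtual-machine analogy can be made concrete: keep one fixed processing function and treat the parameter set as a loadable "program". The effect names, sizes, and file format below are hypothetical; the sketch only shows that swapping the parameter file swaps the effect, with no change to the engine code.

```python
import os
import tempfile

import numpy as np

def apply_effect(frame, params):
    """Fixed 'engine': behavior is entirely determined by the loaded params."""
    return np.tanh(params["W"] @ frame + params["b"])

rng = np.random.default_rng(2)
frame = rng.standard_normal(8)

# Two hypothetical effect "programs": same engine, different parameter sets.
effect_a = {"W": 0.5 * np.eye(8), "b": np.zeros(8)}
effect_b = {"W": 2.0 * np.eye(8), "b": np.zeros(8)}

out_a = apply_effect(frame, effect_a)  # gentle saturation
out_b = apply_effect(frame, effect_b)  # heavy saturation

# Parameters are plain data, so they can be saved and reloaded like any file.
path = os.path.join(tempfile.mkdtemp(), "effect_a.npz")
np.savez(path, **effect_a)
loaded = np.load(path)
out_loaded = apply_effect(frame, loaded)
```

Loading `effect_a.npz` reproduces effect A exactly; shipping a new effect means shipping a new parameter file, not new processing code.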
The Waveshaper AI inference engine is engineered to do this very fast, on multiple platforms.