CUI 2024 slides available

The slides from my keynote are now online under Courses > One-off events. I’ll try to add a recording and perhaps a bibliography later.

A super-simple speech recogniser

We make what is possibly the world’s simplest speech recognition system. It can recognise only two different words, but it will help you understand the basic idea of pattern recognition using template matching. The templates are just pre-recorded words, with known labels. The features extracted are just two formant frequencies in the middle of the word, […]
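
Here is a minimal sketch of that idea in Python, with hypothetical formant values (the real exercise uses measurements from your own recordings):

```python
import math

# Hypothetical templates: (F1, F2) in Hz, measured in the middle of each
# pre-recorded word. These values are illustrative only.
templates = {
    "heed": (280.0, 2250.0),
    "hod":  (700.0, 1100.0),
}

def classify(f1, f2):
    """Label an unknown word with the nearest template's label,
    using Euclidean distance in the two-dimensional feature space."""
    return min(templates,
               key=lambda w: math.hypot(f1 - templates[w][0],
                                        f2 - templates[w][1]))

print(classify(300.0, 2100.0))  # -> "heed"
```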

Continue reading...

Bitrate

The bitrate (or bit rate) of a signal is the number of bits required to store or transmit one second of that signal. A bit is a binary number: either 0 or 1. Let’s calculate the bitrate of a digital waveform. First, you should revise the concepts of sampling and quantisation from this module of the […]
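
For an uncompressed waveform, the calculation is a single multiplication; a sketch, with a typical sampling rate and bit depth for speech:

```python
# One second of signal contains sampling_rate samples, and each sample
# is stored using bit_depth bits.
sampling_rate = 16000   # samples per second
bit_depth = 16          # bits per sample

bitrate = sampling_rate * bit_depth   # bits per second
print(bitrate)   # 256000, i.e. 256 kbits per second
```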

Continue reading...

My inaugural lecture

I talk about how speech synthesis works, in what I hope is a non-technical and accessible way, and finish off with an application of speech synthesis that gives personalised voices to people who are losing the ability to speak. I also try to mention bicycles as many times as possible. For a more up-to-date, slightly more technical, […]

Continue reading...

Classification and regression trees (CART)

A quick introduction to a very simple but widely applicable model that can perform classification (predicting a discrete label) or regression (predicting a continuous value). The tree is learned from labelled data, using supervised learning. Before watching this video, you might want to check that you understand what Entropy is.
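
As a sketch of the core operation, here is the learning of a single tree node in Python: among all thresholds on one feature, choose the split that most reduces the entropy of the labels. The toy data are made up for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of discrete labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_split(xs, ys):
    """Choose the threshold on one feature that maximises information
    gain: this is what learning one node of a classification tree does."""
    best = None
    for threshold in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= threshold]
        right = [y for x, y in zip(xs, ys) if x > threshold]
        if not left or not right:
            continue
        remainder = (len(left) * entropy(left)
                     + len(right) * entropy(right)) / len(ys)
        gain = entropy(ys) - remainder
        if best is None or gain > best[1]:
            best = (threshold, gain)
    return best

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]   # one feature
ys = ["a", "a", "a", "b", "b", "b"]      # class labels
print(best_split(xs, ys))   # -> (3.0, 1.0): splitting at 3.0 gains one bit
```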

Continue reading...

Aliasing

In sampling and quantisation we saw that sampling a signal at a fixed rate means that there is an upper limit on the frequencies that can be represented. This limit is called the Nyquist frequency. Before sampling a signal, we must remove all energy above the Nyquist frequency, and here we will see what would […]
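
A quick numerical illustration of why: at an 8000 Hz sampling rate the Nyquist frequency is 4000 Hz, and a 5000 Hz cosine yields exactly the same samples as a 3000 Hz cosine, so after sampling the two cannot be told apart.

```python
import math

fs = 8000   # sampling rate in Hz; the Nyquist frequency is fs/2 = 4000 Hz

# A 5000 Hz cosine lies above the Nyquist frequency. Its samples are
# identical to those of its alias at 8000 - 5000 = 3000 Hz.
for n in range(5):
    t = n / fs
    print(f"{math.cos(2 * math.pi * 5000 * t):+.4f}"
          f"  {math.cos(2 * math.pi * 3000 * t):+.4f}")
```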

Continue reading...

The speed of sound

At the Parque de las Ciencias in Granada, Spain, there is this long tube, open at the end nearest you and closed at the far end. We can calculate the length of this tube just from the audio recording, because we know the speed of sound. Here’s the waveform of part of the recording, showing […]
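
The calculation itself is one line: the sound of a clap travels to the closed far end and is reflected back, so the delay between the clap and its echo corresponds to twice the length of the tube. A sketch with a made-up delay (the real value is read off the waveform):

```python
speed_of_sound = 343.0   # metres per second, in air at about 20 °C
delay = 0.05             # seconds; a hypothetical clap-to-echo delay

length = speed_of_sound * delay / 2   # the sound travels there and back
print(f"tube length is about {length:.1f} m")   # about 8.6 m
```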

Continue reading...

Token passing

Token passing is a really nice way to understand (and even to implement) Viterbi search for Hidden Markov Models. Here we see token passing in action, and you can look at the spreadsheet to see the calculations. To keep things simple, we are ignoring transition probabilities in this example. It would be simple to add them […]
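
Here is a sketch of the same idea in Python, also ignoring transition probabilities; the emission probabilities are made up, and the model is a simple left-to-right HMM.

```python
import math

# emit[state][frame]: hypothetical emission probabilities.
emit = [
    [0.7, 0.5, 0.1, 0.1],
    [0.2, 0.4, 0.6, 0.3],
    [0.1, 0.1, 0.3, 0.6],
]
num_states, num_frames = len(emit), len(emit[0])

tokens = [None] * num_states       # at most one token per state
tokens[0] = math.log(emit[0][0])   # a token enters the first state at frame 0

for t in range(1, num_frames):
    new_tokens = [None] * num_states
    for s, logp in enumerate(tokens):
        if logp is None:
            continue
        for s2 in (s, s + 1):      # self-loop, or move one state to the right
            if s2 >= num_states:
                continue
            candidate = logp + math.log(emit[s2][t])
            if new_tokens[s2] is None or candidate > new_tokens[s2]:
                new_tokens[s2] = candidate   # only the best token per state survives
    tokens = new_tokens

print(tokens[-1])   # log probability of the best path ending in the final state
```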

Continue reading...

Pipeline architecture for TTS

Most text-to-speech systems split the problem into two main stages. The first stage is called the front end and contains many separate processes which gradually build up a linguistic specification from the input text. The second stage typically uses language-independent techniques (although they still require a language-specific speech corpus) to generate a waveform. Here we see those two […]
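
In code, the architecture is nothing more than a composition of the two stages. A structural sketch in Python, in which every step is a trivial stand-in for a substantial real component:

```python
def normalise(text):
    """Stand-in for text normalisation (expanding numbers, abbreviations, ...)."""
    return text.lower().split()

def phonetic_analysis(words):
    """Stand-in for lexicon lookup and letter-to-sound rules."""
    return [(word, list(word)) for word in words]

def front_end(text):
    """Stage 1: gradually build up a linguistic specification from the text."""
    return phonetic_analysis(normalise(text))

def generate_waveform(spec):
    """Stage 2: largely language-independent waveform generation (placeholder)."""
    return [0.0] * 16000   # a real system would return audio samples

def tts(text):
    return generate_waveform(front_end(text))

print(front_end("Hello world"))   # the (very impoverished) linguistic specification
```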

Continue reading...

TD-PSOLA …the hard way

Time-Domain Pitch Synchronous Overlap and Add (TD-PSOLA) can modify the fundamental frequency and duration of speech signals, without affecting the segment identity – that is, without changing the formants. Normally, it’s an automatic algorithm, but here we do it the hard way – by hand! If you want to follow along, you will need Audacity and these materials (a […]
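
To get a feel for what the by-hand version involves, here is a rough Python sketch of the underlying operation, much simplified and applied to a synthetic signal: cut a two-period, Hann-windowed frame around each pitch mark, then overlap-add the frames at a new spacing. That changes the fundamental frequency while leaving the shape of each period, and hence the formants, untouched.

```python
import math

def hann(n, length):
    """One point of a Hann window of the given length."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (length - 1))

def psola(signal, pitch_marks, f0_scale):
    """Overlap-add two-period frames at a new spacing to scale F0."""
    period = pitch_marks[1] - pitch_marks[0]     # assume a steady F0
    new_period = int(round(period / f0_scale))   # shorter period = higher F0
    out = [0.0] * (new_period * (len(pitch_marks) + 1) + period)
    for i, mark in enumerate(pitch_marks):
        centre = new_period * (i + 1)            # the frame's new position
        for n in range(-period, period):         # a two-period frame
            j = centre + n
            if 0 <= mark + n < len(signal) and 0 <= j < len(out):
                out[j] += hann(n + period, 2 * period) * signal[mark + n]
    return out

# Synthetic "speech": a 100 Hz pulse train at an 8000 Hz sampling rate.
period = 80
signal = [1.0 if i % period == 0 else 0.0 for i in range(8 * period)]
marks = list(range(period, 7 * period, period))
higher = psola(signal, marks, 1.25)   # raise F0 by 25%
```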

Continue reading...

The Gaussian probability density function: understanding the equation

The equation for the Gaussian probability density function looks a little scary at first, but this video should help you understand what each of the terms is doing, and how they fit together. After watching the video, download the spreadsheet, which shows the calculations and plots from the video (tip: the Apple Numbers.app version includes images […]
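
For reference, the equation in question is

$$ \mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) $$

The exponential measures how far $x$ is from the mean $\mu$, scaled by the variance $\sigma^2$, and the factor $1/(\sigma\sqrt{2\pi})$ normalises the area under the curve to 1; the video unpacks each of these terms in turn.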

Continue reading...

Entropy: understanding the equation

The equation for entropy is very often presented in textbooks without much explanation, other than to say it has the desired properties. Here, I attempt an informal derivation of the equation starting from uniform probability distributions. A good way to think about information is in terms of sending messages. In the video, we send messages […]
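
For reference, the equation being derived is

$$ H = -\sum_{i=1}^{M} p_i \log_2 p_i $$

measured in bits. For a uniform distribution over $M$ equally probable messages, every $p_i = 1/M$ and the sum reduces to $H = \log_2 M$: the number of bits needed to say which message was sent, which is where the informal derivation starts.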

Continue reading...