
Trigger Word Detection

1.2 – From audio recordings to spectrograms

What really is an audio recording?

  • A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound.
  • You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone.
  • We will use audio sampled at 44100 Hz (or 44100 Hertz).
    • This means the microphone gives us 44,100 numbers per second.
    • Thus, a 10 second audio clip is represented by 441,000 numbers (= $10 \times 44,100$).
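For a quick check of these numbers, you can load a wav file with scipy and confirm the sample rate and number of samples. This is a minimal sketch, assuming the example file used later in this notebook is a 10 second clip sampled at 44100 Hz:

from scipy.io import wavfile

rate, data = wavfile.read("audio_examples/example_train.wav")
print(rate)                  # sampling rate, e.g. 44100
print(data.shape[0])         # number of samples, e.g. 441,000 for a 10 second clip
print(data.shape[0] / rate)  # duration in seconds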

Spectrogram

  • It is quite difficult to figure out from this “raw” representation of audio whether the word “activate” was said.
  • In order to help your sequence model more easily learn to detect trigger words, we will compute a spectrogram of the audio.
  • The spectrogram tells us how much different frequencies are present in an audio clip at any moment in time.
  • If you’ve ever taken an advanced class on signal processing or on Fourier transforms:
    • A spectrogram is computed by sliding a window over the raw audio signal, and calculating the most active frequencies in each window using a Fourier transform.
    • If you don’t understand the previous sentence, don’t worry about it.

Let’s look at an example.

import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile

# Calculate and plot spectrogram for a wav audio file
def graph_spectrogram(wav_file):
    rate, data = get_wav_info(wav_file)
    nfft = 200      # Length of each window segment
    fs = 8000       # Sampling frequency passed to specgram
    noverlap = 120  # Overlap between windows
    nchannels = data.ndim
    if nchannels == 1:
        pxx, freqs, bins, im = plt.specgram(data, nfft, fs, noverlap = noverlap)
    elif nchannels == 2:
        pxx, freqs, bins, im = plt.specgram(data[:,0], nfft, fs, noverlap = noverlap)
    return pxx

# Load a wav file
def get_wav_info(wav_file):
    rate, data = wavfile.read(wav_file)
    return rate, data

x = graph_spectrogram("audio_examples/example_train.wav")

The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis).

Figure 1: Spectrogram of an audio recording

  • The color in the spectrogram shows the degree to which different frequencies are present (loud) in the audio at different points in time.
  • Green means a certain frequency is more active or more present in the audio clip (louder).
  • Blue squares denote less active frequencies.
  • The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input.
  • In this notebook, we will be working with 10 second audio clips as the “standard length” for our training examples.
    • The number of timesteps of the spectrogram will be 5511.
    • You’ll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$. The short calculation below shows where 5511 comes from.
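Here is a rough sketch of where these numbers come from, using the nfft = 200 and noverlap = 120 settings from graph_spectrogram above (the 101 frequency values per time step will come up again when we build the model):

# Rough check of the spectrogram dimensions for a 10 sec clip sampled at 44100 Hz
n_samples = 10 * 44100                 # 441,000 raw audio samples
nfft, noverlap = 200, 120              # window length and overlap used in graph_spectrogram
step = nfft - noverlap                 # hop between consecutive windows = 80 samples
print((n_samples - nfft) // step + 1)  # 5511 spectrogram time steps
print(nfft // 2 + 1)                   # 101 frequency values per time step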

1.3 – Generating a single training example

Benefits of synthesizing data

Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds.

  • It is quite slow to record lots of 10 second audio clips with random “activates” in it.
  • Instead, it is easier to record lots of positive and negative words, and record background noise separately (or download background noise from free online sources).

Process for Synthesizing an audio clip

  • To synthesize a single training example, you will:
    • Pick a random 10 second background audio clip
    • Randomly insert 0-4 audio clips of “activate” into this 10sec clip
    • Randomly insert 0-2 audio clips of negative words into this 10sec clip
  • Because you synthesized the word “activate” into the background clip, you know exactly when in the 10 second clip “activate” makes its appearance.
    • You’ll see later that this makes it easier to generate the labels \(y^{\langle t \rangle}\) as well.

Pydub

  • You will use the pydub package to manipulate audio.
  • Pydub converts raw audio files into lists of Pydub data structures.
    • Don’t worry about the details of the data structures.
  • Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds).
    • This is why a 10 second clip is always represented using 10,000 steps.
# Load audio segments using pydub 
activates, negatives, backgrounds = load_raw_audio()

print("background len should be 10,000, since it is a 10 sec clip\n" + str(len(backgrounds[0])),"\n")
print("activate[0] len may be around 1000, since an `activate` audio clip is usually around 1 second (but varies a lot) \n" + str(len(activates[0])),"\n")
print("activate[1] len: different `activate` clips can have different lengths\n" + str(len(activates[1])),"\n")

Overlaying positive/negative ‘word’ audio clips on top of the background audio

  • Given a 10 second background clip and a short audio clip containing a positive or negative word, you need to be able to “add” the word audio clip on top of the background audio.
  • You will be inserting multiple clips of positive/negative words into the background, and you don’t want to insert an “activate” or a random word somewhere that overlaps with another clip you had previously added.
    • To ensure that the ‘word’ audio segments do not overlap when inserted, you will keep track of the times of previously inserted audio clips.
  • To be clear, when you insert a 1 second “activate” onto a 10 second clip of cafe noise, you do not end up with an 11 sec clip.
    • The resulting audio clip is still 10 seconds long.
    • You’ll see later how pydub allows you to do this.
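As a small illustration (not part of the graded code), pydub’s overlay method mixes a short clip on top of the background at a given position in milliseconds and returns a segment with the same length as the background. The file names here are hypothetical:

from pydub import AudioSegment

background = AudioSegment.from_wav("background.wav")    # hypothetical 10 sec background clip
activate = AudioSegment.from_wav("activate.wav")        # hypothetical ~1 sec "activate" clip

combined = background.overlay(activate, position=2000)  # mix it in, starting 2000 ms into the background
print(len(background), len(activate), len(combined))    # e.g. 10000, 1000, 10000 -- still a 10 sec clip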

Label the positive/negative words

  • Recall that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying “activate.”
    • $y^{\langle t \rangle} = 1$ when the speaker in the clip has just finished saying “activate”.
    • Given a background clip, we can initialize $y^{\langle t \rangle}=0$ for all $t$, since the clip doesn’t contain any “activates.”
  • When you insert or overlay an “activate” clip, you will also update labels for $y^{\langle t \rangle}$.
    • Rather than updating the label of a single time step, we will update 50 steps of the output to have target label 1.
    • Recall from the lecture on trigger word detection that updating several consecutive time steps can make the training data more balanced.
  • You will train a GRU (Gated Recurrent Unit) to detect when someone has finished saying “activate”.
Example
  • Suppose the synthesized “activate” clip ends at the 5 second mark in the 10 second audio – exactly halfway into the clip.
  • Recall that \(T_y = 1375\), so timestep \(687 = \) int(1375*0.5) corresponds to the moment 5 seconds into the audio clip.
  • Set \(y^{\langle 688 \rangle} = 1\)
  • We will allow the GRU to detect “activate” anywhere within a short time interval after this moment, so we actually set 50 consecutive values of the label \(y^{\langle t \rangle}\) to 1.
    • Specifically, we have \(y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1\).
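A quick numpy check of this arithmetic (not part of the graded code; Ty = 1375 as used throughout the notebook):

import numpy as np

Ty = 1375                                    # number of output time steps, as elsewhere in this notebook
segment_end_y = int(Ty * 0.5)                # 687: the output step 5 seconds into the clip
labels_to_set = np.arange(segment_end_y + 1, segment_end_y + 51)
print(labels_to_set[0], labels_to_set[-1])   # 688 737
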
Synthesized data is easier to label
  • This is another reason for synthesizing the training data: It’s relatively straightforward to generate these labels \(y^{\langle t \rangle}\) as described above.
  • In contrast, if you have 10sec of audio recorded on a microphone, it’s quite time consuming for a person to listen to it and mark manually exactly when “activate” finished.

Visualizing the labels

  • Here’s a figure illustrating the labels \(y^{\langle t \rangle}\) in a clip.
    • We have inserted “activate”, “innocent”, “activate”, “baby.”
    • Note that the positive labels “1” are associated only with the positive words.
Figure 2

Get a random time segment

  • The function get_random_time_segment(segment_ms) returns a random time segment onto which we can insert an audio clip of duration segment_ms.
  • Please read through the code to make sure you understand what it is doing.
def get_random_time_segment(segment_ms):
    """
    Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
    
    Arguments:
    segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
    
    Returns:
    segment_time -- a tuple of (segment_start, segment_end) in ms
    """
    
    segment_start = np.random.randint(low=0, high=10000-segment_ms)   # Make sure segment doesn't run past the 10sec background 
    segment_end = segment_start + segment_ms - 1
    
    return (segment_start, segment_end)

Check if audio clips are overlapping

  • Suppose you have inserted audio clips at segments (1000,1800) and (3400,4500).
    • The first segment starts at step 1000 and ends at step 1800.
    • The second segment starts at 3400 and ends at 4500.
  • If we are considering whether to insert a new audio clip at (3000,3600), does this overlap with one of the previously inserted segments?
    • In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here.
  • For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200.
  • (100,199) and (200,250) are non-overlapping.

Exercise:

  • Implement is_overlapping(segment_time, previous_segments) to check if a new time segment overlaps with any of the previous segments.
  • You will need to carry out 2 steps:
    1. Create a “False” flag that you will later set to “True” if you find that there is an overlap.
    2. Loop over the previous_segments’ start and end times. Compare these times to the segment’s start and end times. If there is an overlap, set the flag defined in step 1 to True.

You can use:

for ....:
        if ... <= ... and ... >= ...:
            ...

Hint: There is overlap if:

  • The new segment starts before the previous segment ends and
  • The new segment ends after the previous segment starts.
# GRADED FUNCTION: is_overlapping

def is_overlapping(segment_time, previous_segments):
    """
    Checks if the time of a segment overlaps with the times of existing segments.
    
    Arguments:
    segment_time -- a tuple of (segment_start, segment_end) for the new segment
    previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
    
    Returns:
    True if the time segment overlaps with any of the existing segments, False otherwise
    """
    
    segment_start, segment_end = segment_time
    
    ### START CODE HERE ### (≈ 4 lines)
    # Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
    overlap = False
    
    # Step 2: loop over the previous_segments start and end times.
    # Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True
    ### END CODE HERE ###

    return overlap
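You can check the function against the examples discussed above (expected output shown in the comments):

print(is_overlapping((3000, 3600), [(1000, 1800), (3400, 4500)]))  # True: overlaps (3400, 4500)
print(is_overlapping((100, 200), [(200, 250)]))                    # True: they share timestep 200
print(is_overlapping((100, 199), [(200, 250)]))                    # False: non-overlapping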

Insert audio clip

  • Let’s use the previous helper functions to insert a new audio clip onto the 10 second background at a random time.
  • We will ensure that any newly inserted segment doesn’t overlap with previously inserted segments.

Exercise:

  • Implement insert_audio_clip() to overlay an audio clip onto the background 10sec clip.
  • You will carry out 4 steps (a sketch that follows these steps is shown after this list):
    1. Get the length of the audio clip that is to be inserted, then get a random time segment whose duration equals that length.
    2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, go back to step 1 and pick a new time segment.
    3. Append the new time segment to the list of existing time segments, so that you keep track of all the segments you’ve inserted.
    4. Overlay the audio clip over the background using pydub. We have implemented this for you.
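Here is a minimal sketch of insert_audio_clip() that follows the four steps above, using get_random_time_segment, is_overlapping, and pydub’s overlay. Treat it as one possible implementation, not the official graded solution:

def insert_audio_clip(background, audio_clip, previous_segments):
    # Step 1: get the clip's duration in ms and pick a random segment of that duration
    segment_ms = len(audio_clip)
    segment_time = get_random_time_segment(segment_ms)

    # Step 2: if the chosen segment overlaps an existing one, pick again
    while is_overlapping(segment_time, previous_segments):
        segment_time = get_random_time_segment(segment_ms)

    # Step 3: remember this segment so later insertions avoid it
    previous_segments.append(segment_time)

    # Step 4: overlay the clip onto the background at the chosen start time (in ms)
    new_background = background.overlay(audio_clip, position=segment_time[0])

    return new_background, segment_time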

Insert ones for the labels of the positive target

  • Implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an “activate” audio clip.
  • In the code below, y is a (1, 1375) dimensional vector, since $T_y = 1375$.
  • If the “activate” audio clip ends at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ and also set the next 49 consecutive values to 1.
    • Notice that if the target word appears near the end of the entire audio clip, there may not be 50 additional time steps to set to 1.
    • Make sure you don’t run off the end of the array and try to update y[0][1375], since the valid indices are y[0][0] through y[0][1374] because $T_y = 1375$.
    • So if “activate” ends at step 1370, you would only set y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1.

Exercise: Implement insert_ones().

  • You can use a for loop.
  • If you want to use Python’s array slicing operations, you can do so as well.
  • If a segment ends at segment_end_ms (using a 10,000 step discretization),
    • to convert it to the index for the outputs $y$ (which use a 1375 step discretization), we will use this formula: segment_end_y = int(segment_end_ms * Ty / 10000.0)
# GRADED FUNCTION: insert_ones

def insert_ones(y, segment_end_ms):
    """
    Update the label vector y. The labels of the 50 output steps strictly after the end of the segment 
    should be set to 1. By strictly we mean that the label of segment_end_y should be 0, while the
    50 following labels should be ones.
    
    
    Arguments:
    y -- numpy array of shape (1, Ty), the labels of the training example
    segment_end_ms -- the end time of the segment in ms
    
    Returns:
    y -- updated labels
    """
    
    # Convert the end of the segment from ms to an index in the output labels (Ty = 1375 time steps)
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    
    # Add 1 to the correct index in the background label (y)
    ### START CODE HERE ### (≈ 3 lines)
    for i in range(segment_end_y+1, segment_end_y+51):
        if i < Ty:
            y[0, i] = 1.0
    ### END CODE HERE ###
    
    return y
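A quick check of insert_ones on an empty label vector, using hypothetical segment end times and assuming Ty = 1375 and numpy imported as np, as earlier in the notebook (expected indices in the comments):

arr1 = insert_ones(np.zeros((1, Ty)), 5000.0)   # "activate" ends 5 sec in: ones at steps 688..737
print(np.where(arr1[0] == 1.0)[0][[0, -1]])     # [688 737]

arr2 = insert_ones(np.zeros((1, Ty)), 9700.0)   # ends near the end of the clip: the ones are truncated at Ty
print(np.where(arr2[0] == 1.0)[0][[0, -1]])     # [1334 1374] -- only 41 ones fit before the clip ends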

Creating a training example

Finally, you can use insert_audio_clip and insert_ones to create a new training example.

Exercise: Implement create_training_example(). You will need to carry out the following steps:

  1. Initialize the label vector y as a numpy array of zeros and shape (1, T_y).
  2. Initialize the set of existing segments to an empty list.
  3. Randomly select 0 to 4 “activate” audio clips, and insert them onto the 10 second clip. Also insert labels at the correct position in the label vector y.
  4. Randomly select 0 to 2 negative audio clips, and insert them into the 10 second clip.
# GRADED FUNCTION: create_training_example

def create_training_example(background, activates, negatives):
    """
    Creates a training example with a given background, activates, and negatives.
    
    Arguments:
    background -- a 10 second background audio recording
    activates -- a list of audio segments of the word "activate"
    negatives -- a list of audio segments of random words that are not "activate"
    
    Returns:
    x -- the spectrogram of the training example
    y -- the label at each time step of the spectrogram
    """
    
    # Set the random seed
    np.random.seed(18)
    
    # Make background quieter
    background = background - 20

    ### START CODE HERE ###
    # Step 1: Initialize y (label vector) of zeros (≈ 1 line)
    y = np.zeros((1, Ty))

    # Step 2: Initialize segment times as an empty list (≈ 1 line)
    previous_segments = []
    ### END CODE HERE ###
    
    # Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
    number_of_activates = np.random.randint(0, 5)
    random_indices = np.random.randint(len(activates), size=number_of_activates)
    random_activates = [activates[i] for i in random_indices]
    
    ### START CODE HERE ### (≈ 3 lines)
    # Step 3: Loop over randomly selected "activate" clips and insert in background
    for random_activate in random_activates:
        # Insert the audio clip on the background
        background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
        # Retrieve segment_start and segment_end from segment_time
        segment_start, segment_end = segment_time
        # Insert labels in "y"
        y = insert_ones(y, segment_end)
    ### END CODE HERE ###

    # Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
    number_of_negatives = np.random.randint(0, 3)
    random_indices = np.random.randint(len(negatives), size=number_of_negatives)
    random_negatives = [negatives[i] for i in random_indices]

    ### START CODE HERE ### (≈ 2 lines)
    # Step 4: Loop over randomly selected negative clips and insert in background
    for random_negative in random_negatives:
        # Insert the audio clip on the background 
        background, _ = insert_audio_clip(background, random_negative, previous_segments)
    ### END CODE HERE ###
    
    # Standardize the volume of the audio clip 
    background = match_target_amplitude(background, -20.0)

    # Export new training example 
    file_handle = background.export("train" + ".wav", format="wav")
    print("File (train.wav) was saved in your directory.")
    
    # Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
    x = graph_spectrogram("train.wav")
    
    return x, y
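A usage sketch: with the backgrounds, activates, and negatives lists loaded earlier by load_raw_audio(), you can generate one example and inspect its shapes (the expected shapes assume the 5511 spectrogram time steps and Ty = 1375 discussed above):

x, y = create_training_example(backgrounds[0], activates, negatives)
print(x.shape)   # roughly (101, 5511): frequency values x spectrogram time steps
print(y.shape)   # (1, 1375): one label per output time step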

2.1 – Build the model

Our goal is to build a network that will ingest a spectrogram and output a signal when it detects the trigger word. This network will use 4 layers:

  • A convolutional layer
  • Two GRU layers
  • A dense layer

Here is the architecture we will use.

Figure 3

1D convolutional layer

One key layer of this model is the 1D convolutional step (near the bottom of Figure 3).

  • It inputs the 5511 step spectrogram. Each step is a vector of 101 units.
  • It outputs a 1375 step sequence (a quick check of this length calculation follows this list).
  • This output is further processed by multiple layers to get the final \(T_y = 1375\) step output.
  • This 1D convolutional layer plays a role similar to the 2D convolutions you saw in Course 4, of extracting low-level features and then possibly generating an output of a smaller dimension.
  • Computationally, the 1-D conv layer also helps speed up the model because now the GRU can process only 1375 timesteps rather than 5511 timesteps.
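A quick check of how 5511 input steps become 1375 output steps, using the filter size and stride given in Step 1 of the implementation section below (a 1-D convolution with no padding):

# Output length of an unpadded 1-D convolution: floor((n_in - kernel_size) / stride) + 1
n_in, kernel_size, stride = 5511, 15, 4
print((n_in - kernel_size) // stride + 1)   # 1375
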
GRU, dense and sigmoid
  • The two GRU layers read the sequence of inputs from left to right.
  • A dense plus sigmoid layer makes a prediction for \(y^{\langle t \rangle}\).
  • Because y is a binary value (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said “activate.”

Unidirectional RNN

  • Note that we use a unidirectional RNN rather than a bidirectional RNN.
  • This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said.
  • If we used a bidirectional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if “activate” was said in the first second of the audio clip.

Implement the model

Implementing the model can be done in four steps:

Step 1: CONV layer. Use Conv1D() to implement this, with 196 filters, a filter size of 15 (kernel_size=15), and stride of 4.

output_x = Conv1D(filters=...,kernel_size=...,strides=...)(input_x)

  • Follow this with a ReLU activation. Note that we can pass in the name of the desired activation as a string, all in lowercase letters.
output_x = Activation("...")(input_x)

  • Follow this with dropout, using a rate of 0.8 (in Keras, the Dropout rate is the fraction of units dropped).
output_x = Dropout(rate=...)(input_x)

Step 2: First GRU layer. To generate the GRU layer, use 128 units.

output_x = GRU(units=..., return_sequences = ...)(input_x)

  • Return sequences instead of just the last time step’s prediction to ensure that all of the GRU’s hidden states are fed to the next layer.
  • Follow this with dropout, using a rate of 0.8.
  • Follow this with batch normalization. No parameters need to be set.
output_x = BatchNormalization()(input_x)

Step 3: Second GRU layer. This has the same specifications as the first GRU layer.

  • Follow this with a dropout, batch normalization, and then another dropout.

Step 4: Create a time-distributed dense layer as follows:

X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)

This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step.

Exercise: Implement model(). The architecture is presented in Figure 3.

from keras.models import Model
from keras.layers import Input, Dense, Activation, Dropout, GRU, Conv1D, BatchNormalization, TimeDistributed

# GRADED FUNCTION: model

def model(input_shape):
    """
    Function creating the model's graph in Keras.
    
    Argument:
    input_shape -- shape of the model's input data (using Keras conventions)

    Returns:
    model -- Keras model instance
    """
    
    X_input = Input(shape = input_shape)
    
    ### START CODE HERE ###
    
    # Step 1: CONV layer (≈4 lines)
    X = Conv1D(196, 15, strides=4)(X_input)                    # CONV1D
    X = BatchNormalization()(X)                                # Batch normalization
    X = Activation('relu')(X)                                  # ReLU activation
    X = Dropout(0.8)(X)                                        # dropout (use 0.8)

    # Step 2: First GRU Layer (≈4 lines)
    X = GRU(units = 128, return_sequences=True)(X)             # GRU (use 128 units and return the sequences)
    X = Dropout(0.8)(X)                                        # dropout (use 0.8)
    X = BatchNormalization()(X)                                # Batch normalization
    
    # Step 3: Second GRU Layer (≈4 lines)
    X = GRU(units = 128, return_sequences=True)(X)             # GRU (use 128 units and return the sequences)
    X = Dropout(0.8)(X)                                        # dropout (use 0.8)
    X = BatchNormalization()(X)                                # Batch normalization
    X = Dropout(0.8)(X)                                        # dropout (use 0.8)
    
    # Step 4: Time-distributed dense layer (see given code in instructions) (≈1 line)
    X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed  (sigmoid)

    ### END CODE HERE ###

    model = Model(inputs = X_input, outputs = X)
    
    return model  
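A usage sketch, assuming Tx = 5511 spectrogram time steps and n_freq = 101 frequency values per step as described above (the variable names below are illustrative):

Tx = 5511      # number of spectrogram time steps fed into the network
n_freq = 101   # number of frequency values at each spectrogram time step

trigger_model = model(input_shape = (Tx, n_freq))
trigger_model.summary()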
