Sampling from and decoding an HMM#

This script shows how to sample points from a Hidden Markov Model (HMM): we use a 4-state model with specified means and covariances.

The plot shows the sequence of generated observations and the transitions between them. We can see that, as specified by our transition matrix, there are no transitions between components 1 and 3.

Then, we decode our model to recover the input parameters.

import numpy as np
import matplotlib.pyplot as plt

from hmmlearn import hmm

# Prepare parameters for a 4-component HMM
# Initial population probability
startprob = np.array([0.6, 0.3, 0.1, 0.0])
# The transition matrix; note that there are no transitions possible
# between components 1 and 3
transmat = np.array([[0.7, 0.2, 0.0, 0.1],
                     [0.3, 0.5, 0.2, 0.0],
                     [0.0, 0.3, 0.5, 0.2],
                     [0.2, 0.0, 0.2, 0.6]])
# The means of each component
means = np.array([[0.0, 0.0],
                  [0.0, 11.0],
                  [9.0, 10.0],
                  [11.0, -1.0]])
# The covariance of each component
covars = .5 * np.tile(np.identity(2), (4, 1, 1))

# Build an HMM instance and set parameters
gen_model = hmm.GaussianHMM(n_components=4, covariance_type="full")

# Instead of fitting the model to data, we directly set the estimated
# parameters: the means and covariances of the components
gen_model.startprob_ = startprob
gen_model.transmat_ = transmat
gen_model.means_ = means
gen_model.covars_ = covars

# Generate samples
X, Z = gen_model.sample(500)

# Plot the sampled data
fig, ax = plt.subplots()
ax.plot(X[:, 0], X[:, 1], ".-", label="observations", ms=6,
        mfc="orange", alpha=0.7)

# Indicate the component numbers
for i, m in enumerate(means):
    ax.text(m[0], m[1], 'Component %i' % (i + 1),
            size=17, horizontalalignment='center',
            bbox=dict(alpha=.7, facecolor='w'))
ax.legend(loc='best')
fig.show()
[Figure: sampled observations connected in sequence, annotated with component numbers]
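
As a quick sanity check (a minimal sketch, not part of the original script), we can confirm that the sampled state sequence Z never steps directly between states 0 and 2, i.e. components 1 and 3:

# Transitions between states 0 and 2 (components 1 and 3) have zero
# probability, so they should never appear in the sampled sequence.
observed_pairs = set(zip(Z[:-1], Z[1:]))
assert (0, 2) not in observed_pairs
assert (2, 0) not in observed_pairs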

Now, let’s ensure we can recover our parameters.

scores = list()
models = list()
for n_components in (3, 4, 5):
    for idx in range(10):
        # define our hidden Markov model
        model = hmm.GaussianHMM(n_components=n_components,
                                covariance_type='full',
                                random_state=idx)
        model.fit(X[:X.shape[0] // 2])  # 50/50 train/validate
        models.append(model)
        scores.append(model.score(X[X.shape[0] // 2:]))
        print(f'Converged: {model.monitor_.converged}'
              f'\tScore: {scores[-1]}')

# get the best model
model = models[np.argmax(scores)]
n_states = model.n_components
print(f'The best model had a score of {max(scores)} and {n_states} '
      'states')

# use the Viterbi algorithm to predict the most likely sequence of states
# given the model
states = model.predict(X)
Output:

Converged: True Score: -1210.9900881277529
Converged: True Score: -1161.1684715595038
Converged: True Score: -1161.4196555797307
Converged: True Score: -1025.6835598417024
Converged: True Score: -1025.821017779361
Converged: True Score: -1025.6835598416997
Converged: True Score: -1025.6835598417072
Converged: True Score: -1025.6835598417067
Converged: True Score: -956.8804374109906
Converged: True Score: -1131.0209480495707
Converged: True Score: -950.9354396757187
Converged: True Score: -887.2006054324655
Converged: True Score: -953.0082315464091
Converged: True Score: -1012.9297789779811
Converged: True Score: -945.6642829479538
Converged: True Score: -1075.3654750467247
Converged: True Score: -1049.4425031582875
Converged: True Score: -953.0081587554798
Converged: True Score: -953.0082315464031
Converged: True Score: -953.008231546405
Converged: True Score: -988.7827722182531
Converged: True Score: -955.5044409061904
Converged: True Score: -952.6982433875543
Converged: True Score: -958.7779907424907
Converged: True Score: -951.6890932988606
Converged: True Score: -969.2545191658811
Converged: True Score: -955.967270305164
Converged: True Score: -1139.0791466179148
Converged: True Score: -949.848973070894
Converged: True Score: -976.3618185301813
The best model had a score of -887.2006054324655 and 4 states
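
The script recovers the state sequence with predict() above; hmmlearn's decode() is an equivalent entry point to the Viterbi algorithm that additionally returns the log-likelihood of the most likely path. A minimal sketch:

# decode() runs the Viterbi algorithm (the default) and returns the
# log-likelihood of the best path together with the path itself; the
# returned sequence matches the output of predict().
log_prob, viterbi_states = model.decode(X, algorithm="viterbi")
print(f'Viterbi path log-likelihood: {log_prob}')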

Let’s plot our states compared to those generated, along with our transition matrix, to get a sense of our model. We can see that the recovered states follow the same path as the generated states, just with the identities of the states permuted (i.e. instead of following a square as in the first figure, the nodes are switched around, but this does not change the basic pattern). The same is true for the transition matrix.
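
Since the recovered labels are an arbitrary permutation of the generated ones, one way to compare the two sequences directly is to relabel each recovered state as the generated state with the nearest mean. A minimal sketch of that idea, assuming SciPy is available and the nearest-mean assignment happens to be one-to-one:

from scipy.spatial.distance import cdist

# For each recovered state, find the generated state with the closest mean.
mapping = cdist(model.means_, gen_model.means_).argmin(axis=1)
aligned_states = mapping[states]  # recovered states relabelled to match Z

With the relabelled sequence in hand, a direct agreement check such as np.mean(aligned_states == Z) becomes meaningful.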

# plot the recovered states against the generated states
fig, ax = plt.subplots()
ax.plot(Z, states)
ax.set_title('States compared to generated')
ax.set_xlabel('Generated State')
ax.set_ylabel('Recovered State')
fig.show()

# plot the transition matrix
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 5))
ax1.imshow(gen_model.transmat_, aspect='auto', cmap='spring')
ax1.set_title('Generated Transition Matrix')
ax2.imshow(model.transmat_, aspect='auto', cmap='spring')
ax2.set_title('Recovered Transition Matrix')
for ax in (ax1, ax2):
    ax.set_xlabel('State To')
    ax.set_ylabel('State From')

fig.tight_layout()
fig.show()
[Figures: States compared to generated; Generated Transition Matrix and Recovered Transition Matrix]
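
The same nearest-mean mapping from the sketch above can also permute the recovered transition matrix into the generated state order for a numeric side-by-side comparison (again only a sketch, and only valid if the mapping is a true permutation):

# Invert the mapping so that row/column i of the permuted matrix
# corresponds to generated state i, then compare with the true matrix.
inv = np.argsort(mapping)
print(np.round(model.transmat_[np.ix_(inv, inv)], 2))
print(np.round(gen_model.transmat_, 2))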
