Sampling from and decoding an HMM#

This script shows how to sample points from a Hidden Markov Model (HMM): we use a 4-state model with specified means and covariances.

The plot shows the sequence of generated observations, with lines indicating the transitions between them. We can see that, as specified by our transition matrix, there are no transitions between components 1 and 3.

Then, we fit a model to the sampled data and decode it, checking that we can recover the input parameters.

import numpy as np
import matplotlib.pyplot as plt

from hmmlearn import hmm

# Prepare parameters for a 4-component HMM
# Initial population probability
startprob = np.array([0.6, 0.3, 0.1, 0.0])
# The transition matrix: note that there are no transitions possible
# between components 1 and 3
transmat = np.array([[0.7, 0.2, 0.0, 0.1],
                     [0.3, 0.5, 0.2, 0.0],
                     [0.0, 0.3, 0.5, 0.2],
                     [0.2, 0.0, 0.2, 0.6]])
# The means of each component
means = np.array([[0.0, 0.0],
                  [0.0, 11.0],
                  [9.0, 10.0],
                  [11.0, -1.0]])
# The covariance of each component
covars = .5 * np.tile(np.identity(2), (4, 1, 1))

# Build an HMM instance and set parameters
gen_model = hmm.GaussianHMM(n_components=4, covariance_type="full")

# Instead of fitting it from the data, we directly set the estimated
# parameters, the means and covariance of the components
gen_model.startprob_ = startprob
gen_model.transmat_ = transmat
gen_model.means_ = means
gen_model.covars_ = covars
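
# Optional sanity checks (an illustrative addition, not part of the
# original script): the start probabilities and each row of the
# transition matrix must sum to 1, and the zero entries encode the
# forbidden transitions between components 1 and 3 (indices 0 and 2).
assert np.allclose(startprob.sum(), 1.0)
assert np.allclose(transmat.sum(axis=1), 1.0)
assert transmat[0, 2] == 0.0 and transmat[2, 0] == 0.0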

# Generate 500 samples: X holds the 2-D observations, Z the hidden-state
# index that produced each one
X, Z = gen_model.sample(500)

# Plot the sampled data
fig, ax = plt.subplots()
ax.plot(X[:, 0], X[:, 1], ".-", label="observations", ms=6,
        mfc="orange", alpha=0.7)

# Indicate the component numbers
for i, m in enumerate(means):
    ax.text(m[0], m[1], 'Component %i' % (i + 1),
            size=17, horizontalalignment='center',
            bbox=dict(alpha=.7, facecolor='w'))
ax.legend(loc='best')
fig.show()
(Figure: the sampled observations, with transitions between them and component labels)

Now, let’s ensure we can recover our parameters.

scores = list()
models = list()
for n_components in (3, 4, 5):
    for idx in range(10):
        # define our hidden Markov model
        model = hmm.GaussianHMM(n_components=n_components,
                                covariance_type='full',
                                random_state=idx)
        model.fit(X[:X.shape[0] // 2])  # 50/50 train/validate
        models.append(model)
        scores.append(model.score(X[X.shape[0] // 2:]))
        print(f'Converged: {model.monitor_.converged}'
              f'\tScore: {scores[-1]}')

# get the best model
model = models[np.argmax(scores)]
n_states = model.n_components
print(f'The best model had a score of {max(scores)} and {n_states} '
      'states')

# use the Viterbi algorithm to predict the most likely sequence of states
# given the model
states = model.predict(X)
Converged: True Score: -1502.477074092094
Converged: True Score: -1150.8226512358115
Converged: True Score: -1085.8667105838022
Converged: True Score: -1141.0283410641698
Converged: True Score: -1156.0014542845818
Converged: True Score: -1085.8667105838035
Converged: True Score: -1085.8667105838103
Converged: True Score: -1085.8667105838051
Converged: True Score: -1241.1981839323714
Converged: True Score: -1085.8667105838024
Converged: True Score: -925.9654564427842
Converged: True Score: -1148.949062457521
Converged: True Score: -925.9654564427809
Converged: True Score: -1097.8830524463997
Converged: True Score: -925.9654564427809
Converged: True Score: -1204.7958717784222
Converged: True Score: -1038.0025119881893
Converged: True Score: -925.9654564427836
Converged: True Score: -925.9654564427793
Converged: True Score: -1103.768777280518
Converged: True Score: -990.5814588344191
Converged: True Score: -1273.7986146299174
Converged: True Score: -1109.6699869002573
Converged: True Score: -929.727439480624
Converged: True Score: -935.9613388266011
Converged: True Score: -915.8976473425214
Converged: True Score: -948.4098040638588
Converged: True Score: -1272.45559205048
Converged: True Score: -1039.5373957740806
Converged: True Score: -1068.1457843167957
The best model had a score of -915.8976473425214 and 5 states
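
As a reference point (a small addition to the original script), we can score the same held-out data under the true generating model; a well-recovered model should achieve a comparable log-likelihood.

# score the held-out data under the model that actually generated it
print(f'Generating model score: {gen_model.score(X[X.shape[0] // 2:])}')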

Let’s plot our states compared to those generated and our transition matrix to get a sense of our model. We can see that the recovered states follow the same path as the generated states, just with the identities of the states permuted (i.e., instead of following a square as in the first figure, the nodes are switched around, but this does not change the basic pattern). The same is true for the transition matrix.
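
To make the relabeling explicit, here is a minimal sketch (not part of the original script) that counts how often each generated state co-occurs with each recovered state; the row-wise argmax gives the most likely recovered label for each generated state.

# count co-occurrences of generated and recovered state labels
mapping = np.zeros((gen_model.n_components, n_states), dtype=int)
for gen, rec in zip(Z, states):
    mapping[gen, rec] += 1
print('Recovered label for each generated state:', mapping.argmax(axis=1))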

# plot model states over time
fig, ax = plt.subplots()
ax.plot(Z, states)
ax.set_title('States compared to generated')
ax.set_xlabel('Generated State')
ax.set_ylabel('Recovered State')
fig.show()

# plot the transition matrix
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 5))
ax1.imshow(gen_model.transmat_, aspect='auto', cmap='spring')
ax1.set_title('Generated Transition Matrix')
ax2.imshow(model.transmat_, aspect='auto', cmap='spring')
ax2.set_title('Recovered Transition Matrix')
for ax in (ax1, ax2):
    ax.set_xlabel('State To')
    ax.set_ylabel('State From')

fig.tight_layout()
fig.show()
(Figures: States compared to generated; Generated Transition Matrix and Recovered Transition Matrix)
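
If the recovered model happens to have the same number of states as the generating model, the relabeling from the sketch above can be used to reorder the recovered transition matrix for an entry-by-entry comparison (again an illustrative addition, not part of the original script):

# reorder the recovered transition matrix to match the generated labels
if n_states == gen_model.n_components:
    perm = mapping.argmax(axis=1)
    print(np.round(model.transmat_[np.ix_(perm, perm)], 2))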

Total running time of the script: (0 minutes 2.125 seconds)
