Commit 1817db9e authored by Erik Strand

Update writeup

parent 9a174bad
@@ -139,8 +139,8 @@ def em_step():
        predicted = np.array([cluster_func(x[n], m) for n in range(N)])
        out_var[m] = cw_expectation((y - predicted)**2.0, m) + tiny
make_plot(plot_name(0))
print_everything()
make_plot(plot_name(0))
for i in range(50):
    em_step()
    make_plot(plot_name(i + 1))
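For context, the two changed lines in the hunk above update each cluster's output variance as a responsibility-weighted mean squared residual. Here is a minimal self-contained sketch of that fragment; `cluster_func`, `cw_expectation`, and the responsibilities below are my own placeholder definitions for illustration, not the repo's actual code:

```python
import numpy as np

N, M = 200, 3
tiny = 1e-6  # floor to keep variances strictly positive
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, N)
y = np.tanh(x) + 0.1 * rng.standard_normal(N)

# placeholder responsibilities p(m | x_n, y_n); a real E step would compute these
resp = rng.random((N, M))
resp /= resp.sum(axis=1, keepdims=True)

# local linear models: slope a[m], intercept b[m]
a = rng.standard_normal(M)
b = rng.standard_normal(M)

def cluster_func(xn, m):
    """Prediction of cluster m's local linear model at input xn."""
    return a[m] * xn + b[m]

def cw_expectation(values, m):
    """Responsibility-weighted average of values over all points, for cluster m."""
    return resp[:, m] @ values / resp[:, m].sum()

# the M-step fragment from the diff: per-cluster output variance
out_var = np.empty(M)
for m in range(M):
    predicted = np.array([cluster_func(x[n], m) for n in range(N)])
    out_var[m] = cw_expectation((y - predicted) ** 2.0, m) + tiny
```

The `tiny` floor matches the snippet's apparent purpose: preventing a cluster's output variance from collapsing to zero when it fits its points exactly.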
@@ -14,8 +14,12 @@ during the EM iterations.
My code lives [here](https://gitlab.cba.mit.edu/erik/nmm_2020_site/-/tree/master/_code/pset_11).
Here's a video of 50 EM iterations with three clusters. I'm displaying the local linear models over
a $$4 \sigma$$ range. (I don't indicate the output variance in any way; it would be nice to add that.)
<video width="480" height="320" controls="controls" muted playsinline>
<source type="video/mp4" src="../assets/mp4/11_cwm.mp4">
</video>
Not surprisingly, three clusters seem to work best for tanh. Any fewer and the fit degrades; any
more and some clusters end up with needlessly similar local models.
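Since the repository code isn't reproduced here, the following is a rough self-contained sketch of the setup described above: cluster-weighted modeling EM on noisy tanh data with three clusters, each cluster carrying a Gaussian input domain and a local linear model with its own output variance. All names and initializations are my own assumptions, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 300, 3
x = np.linspace(-3, 3, N)
y = np.tanh(x) + 0.05 * rng.standard_normal(N)

# cluster parameters: input mean/variance, output variance, local line, weight
mu = rng.choice(x, M, replace=False)
var = np.full(M, 1.0)
out_var = np.full(M, 1.0)
a = np.zeros(M)  # local slopes
b = np.zeros(M)  # local intercepts
pi = np.full(M, 1.0 / M)
tiny = 1e-9

def em_step():
    global mu, var, out_var, pi
    # E step: responsibilities p(m | x_n, y_n) from the joint density
    pred = a[None, :] * x[:, None] + b[None, :]  # (N, M) local predictions
    px = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    py = np.exp(-0.5 * (y[:, None] - pred) ** 2 / out_var) / np.sqrt(2 * np.pi * out_var)
    joint = pi * px * py + tiny
    resp = joint / joint.sum(axis=1, keepdims=True)
    w = resp.sum(axis=0)  # effective points per cluster

    # M step: weights, input domains, then weighted least squares per cluster
    pi = w / N
    mu = resp.T @ x / w
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / w + tiny
    for m in range(M):
        r = resp[:, m]
        X = np.stack([x, np.ones(N)], axis=1)
        A = X.T @ (r[:, None] * X)
        c = X.T @ (r * y)
        a[m], b[m] = np.linalg.solve(A, c)
    pred = a[None, :] * x[:, None] + b[None, :]
    out_var = (resp * (y[:, None] - pred) ** 2).sum(axis=0) / w + tiny

for _ in range(50):
    em_step()

# conditional prediction: y_hat(x) = sum_m p(m | x) * (a_m x + b_m)
px = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
wgt = pi * px + 1e-12
wgt /= wgt.sum(axis=1, keepdims=True)
y_hat = (wgt * (a[None, :] * x[:, None] + b[None, :])).sum(axis=1)
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

With three clusters the local lines typically settle onto the two saturated tails and the steep middle of tanh, which is consistent with the observation that fewer clusters underfit and more clusters produce redundant local models.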