Commit 1817db9e authored by Erik Strand's avatar Erik Strand

Update writeup

parent 9a174bad
```diff
@@ -139,8 +139,8 @@ def em_step():
         predicted = np.array([cluster_func(x[n], m) for n in range(N)])
         out_var[m] = cw_expectation((y - predicted)**2.0, m) + tiny
-make_plot(plot_name(0))
 print_everything()
+make_plot(plot_name(0))
 for i in range(50):
     em_step()
     make_plot(plot_name(i + 1))
```
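The diff above touches the driver loop of a cluster-weighted-modeling EM fit. For context, a minimal self-contained sketch of such an EM step is below, assuming 1D inputs, Gaussian input clusters, and local linear output models; all names here (`pi`, `mu`, `in_var`, `beta`, `out_var`) are illustrative and not taken from the repository.

```python
# Minimal cluster-weighted-modeling EM sketch (1D inputs, local linear models).
# Hypothetical names throughout; not the repository's implementation.
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 3                          # data points, clusters
x = np.linspace(-3, 3, N)
y = np.tanh(x) + 0.05 * rng.standard_normal(N)

pi = np.full(M, 1.0 / M)               # cluster weights
mu = rng.choice(x, M, replace=False)   # input means
in_var = np.full(M, 1.0)               # input variances
beta = np.zeros((M, 2))                # local model: [slope, intercept]
out_var = np.full(M, 1.0)              # output variances
tiny = 1e-9                            # guards against zero variances/weights

def em_step():
    global pi, mu, in_var, beta, out_var
    # E step: joint density p(y, x, m) for every point/cluster pair.
    pred = beta[:, 0][:, None] * x[None, :] + beta[:, 1][:, None]   # (M, N)
    p_x = np.exp(-0.5 * (x[None, :] - mu[:, None])**2 / in_var[:, None]) \
          / np.sqrt(2 * np.pi * in_var[:, None])
    p_y = np.exp(-0.5 * (y[None, :] - pred)**2 / out_var[:, None]) \
          / np.sqrt(2 * np.pi * out_var[:, None])
    joint = pi[:, None] * p_x * p_y
    resp = joint / (joint.sum(axis=0, keepdims=True) + tiny)        # posteriors
    # M step: cluster-weighted expectations update every parameter.
    w = resp.sum(axis=1) + tiny
    pi = w / N
    mu = (resp @ x) / w
    in_var = (resp * (x[None, :] - mu[:, None])**2).sum(axis=1) / w + tiny
    for m in range(M):
        # Responsibility-weighted least squares for the local linear model.
        beta[m] = np.polyfit(x, y, 1, w=np.sqrt(resp[m] + tiny))
        pred_m = np.polyval(beta[m], x)
        out_var[m] = (resp[m] * (y - pred_m)**2).sum() / w[m] + tiny

for _ in range(50):
    em_step()
```

This mirrors the structure the diff implies: `out_var[m]` is the responsibility-weighted mean squared residual of cluster `m`'s local model, with `tiny` added so no cluster's variance collapses to zero.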
```diff
@@ -14,8 +14,12 @@ during the EM iterations.
 My code lives [here](https://gitlab.cba.mit.edu/erik/nmm_2020_site/-/tree/master/_code/pset_11).
-Here's a video of the convergence progress with three clusters.
+Here's a video of 50 EM iterations with three clusters. I'm displaying the local linear models over
+a $$4 \sigma$$ range. (I don't indicate the output variance in any way; it would be nice to add this.)
 <video width="480" height="320" controls="controls" muted plays-inline>
 <source type="video/mp4" src="../assets/mp4/11_cwm.mp4">
 </video>
+Not surprisingly, three clusters seems to work best for tanh. Any fewer and the fit degrades; any
+more and some clusters end up with needlessly similar local models.
```
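The "$$4 \sigma$$ range" mentioned in the writeup means each local linear model is drawn over the interval spanning two standard deviations either side of its cluster's input mean. A small sketch of computing that plotting range, with hypothetical names:

```python
# Compute a 4-sigma plotting range for one cluster's local linear model.
# mu, in_var, and beta are hypothetical example values, not the repo's state.
import numpy as np

mu, in_var = 0.5, 0.09        # one cluster's input mean and variance
beta = (0.8, 0.1)             # local linear model: slope, intercept

sigma = np.sqrt(in_var)
xs = np.linspace(mu - 2.0 * sigma, mu + 2.0 * sigma, 100)  # spans 4 sigma total
ys = beta[0] * xs + beta[1]   # local model evaluated over that range
# matplotlib's plt.plot(xs, ys) would then draw one local model segment
```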