Commit 62a01ac5 authored by Erik Strand

Add CT results

parent d6d6661f
@@ -165,3 +165,57 @@ samples:
![recovered dct (from 50)](../assets/img/09_fig_g_3.png)
![recovered samples (from 50)](../assets/img/09_fig_g_4.png)
## CT Reconstruction
Code lives [here](https://gitlab.cba.mit.edu/erik/funky_ct).
Last year my [final project](http://fab.cba.mit.edu/classes/862.19/people/erik/project.html) for
[The Physics of Information Technology](http://fab.cba.mit.edu/classes/862.19/index.html) dealt with
reconstruction of CT images. I explored the classic techniques like filtered back projection, then
started but didn't finish a compressed-sensing-based approach. Recently I revisited this and it
finally works.
For basic context, a computed tomography machine takes x-ray images of an object from a number of
different angles. No one of these gives an internal slice of the object; they're all projections.
But using the right mathematical techniques, it's possible to infer the interior structure of the
object.
Compressed sensing reconstruction, in broad strokes, works just like the DCT problem above, except
instead of using a DCT, one simulates taking the x-ray images. Let's work through the reconstruction
of a horizontal 2D slice. Each xray projection gives a 1D image, i.e. a row of pixel values.
Traditionally these are all stacked into a single image like this.
![sinogram](../assets/img/09_original_sinogram.jpg)
This is called a sinogram. Each row of pixels is a projection. Moving from top to bottom corresponds
to the rotation of the object. This particular sinogram was generated by CBA's CT machine. It's a
scan of a piece of coral. This is the raw data that we want to match; it's the equivalent of the
DCT coefficients from the previous problem.
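As a concrete sketch of how this data is laid out (my own illustration, not code from the repo), the sinogram can be read as a grayscale image whose rows index projection angles and whose columns index detector pixels. The angular range below is an assumption, not a known property of the CBA scanner.

```python
# Illustrative only: load the sinogram as a 2D array.
# Rows = projection angles, columns = detector pixels.
import numpy as np
import imageio.v3 as iio

sinogram = iio.imread("../assets/img/09_original_sinogram.jpg").astype(float)
n_angles, n_detectors = sinogram.shape
# Assumes one projection per row over a half rotation (180 degrees).
angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
```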
The transform we need to implement, then, maps a 2D image to its projections. That is, given an
image where each pixel represents the transmissivity of that region of space to x-rays, we need to
generate the resulting sinogram. Essentially this boils down to an elementary ray tracing algorithm.
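To make the forward transform concrete, here's a minimal ray-marching sketch, assuming a square image and parallel-beam geometry. It is not the funky_ct implementation (which also computes gradients); the function and parameter names are mine, and the sampling is deliberately crude.

```python
# A minimal forward-projection sketch: for each angle and detector bin,
# march a ray across the image and accumulate sampled pixel values.
import numpy as np

def forward_project(image, angles, n_detectors=None, n_steps=None):
    """Return a sinogram: one row per angle, one column per detector bin."""
    n = image.shape[0]                    # assumes a square image
    if n_detectors is None:
        n_detectors = n
    if n_steps is None:
        n_steps = 2 * n
    center = (n - 1) / 2.0
    radius = n / np.sqrt(2.0)             # half the image diagonal
    det = np.linspace(-radius, radius, n_detectors)
    t = np.linspace(-radius, radius, n_steps)
    sinogram = np.zeros((len(angles), n_detectors))

    for i, theta in enumerate(angles):
        d = np.array([np.cos(theta), np.sin(theta)])    # ray direction
        p = np.array([-np.sin(theta), np.cos(theta)])   # detector axis
        for j, s in enumerate(det):
            # Sample points along this ray, in pixel coordinates.
            xs = center + s * p[0] + t * d[0]
            ys = center + s * p[1] + t * d[1]
            inside = (xs >= 0) & (xs <= n - 1) & (ys >= 0) & (ys <= n - 1)
            # Nearest-neighbor sampling keeps the sketch short; a real
            # implementation would interpolate.
            xi = np.clip(xs[inside].astype(int), 0, n - 1)
            yi = np.clip(ys[inside].astype(int), 0, n - 1)
            sinogram[i, j] = image[yi, xi].sum() * (t[1] - t[0])
    return sinogram
```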
Once we have this, we can run an optimization algorithm that tunes the pixel values to produce the
correct sinogram. My implementation computes the loss gradient as well as the loss, so I can use
conjugate gradient descent (as implemented for the problem set before this one). A total variation
(TV) regularization term is applied, essentially summing the absolute value of the gradient of the
transmissivity at each pixel. This damps out noise and encourages blocks of homogeneous density, as
are common in biological and mechanical samples.
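For illustration, here's a hedged sketch of this kind of objective: a squared-error data term plus TV regularization, returning both the loss and an analytic gradient that a conjugate gradient optimizer can consume. This is my own simplification rather than the funky_ct code; `forward_project` is the sketch above, and `back_project` (the adjoint of the projection) is assumed to exist. The TV term here uses a smoothed gradient magnitude, and the boundary handling is approximate.

```python
# Hedged sketch of the loss and its gradient, not the actual implementation.
import numpy as np

def loss_and_grad(image, measured_sinogram, angles, tv_weight=0.1, eps=1e-8):
    # Data fidelity: squared error between simulated and measured sinograms.
    residual = forward_project(image, angles) - measured_sinogram
    data_loss = 0.5 * np.sum(residual ** 2)

    # Total variation: sum of (smoothed) gradient magnitudes at each pixel.
    dx = np.diff(image, axis=1, append=image[:, -1:])
    dy = np.diff(image, axis=0, append=image[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    tv_loss = np.sum(mag)

    # Gradient of the data term uses the adjoint of the projection
    # (back projection); the TV gradient is a discrete divergence.
    data_grad = back_project(residual, angles, image.shape)
    div_x = np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
    div_y = np.diff(dy / mag, axis=0, prepend=(dy / mag)[:, :1])
    tv_grad = -(div_x + div_y)

    return data_loss + tv_weight * tv_loss, data_grad + tv_weight * tv_grad
```

With the loss and gradient in hand, any gradient-based optimizer can iterate on the pixel values until the simulated sinogram matches the measured one.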
On to the results. Using all the data, here's a reconstructed slice of coral.
![reconstruction_1](../assets/img/09_reconstruction_1.png){: #coral}
Surprisingly, so far I haven't found much of a difference with and without the TV term. Here I use
only 2% of the projections. With TV I get this:
![reconstruction_2](../assets/img/09_coral_50_tv.png){: #coral2}
And without:
![reconstruction_3](../assets/img/09_coral_50_no_tv.png){: #coral3}
Regardless, the reconstruction quality is much poorer with the limited data.
@@ -34,6 +34,12 @@ img {
width: 100%;
}
img#coral, img#coral2, img#coral3 {
width: 79%;
margin: auto;
}
.captioned {
font-style: italic;
font-size: 85%;