Thoughts on Demosaicing for X-Trans Sensors | Christian's Blog

A lot of new Fujifilm cameras use their own brand of X-Trans sensors, which have a non-Bayer color mosaic. Fujifilm claims that the less regular arrangement makes it less likely for edges in the scene to produce moiré patterns against the mosaic. They say this justifies omitting the optical low-pass filter, so their sensors retain more detail than those with Bayer mosaics.

The irregular pattern means Fuji's raw files have to be demosaiced differently, and a lot of the software available right now does a pretty bad job of it. All the open-source raw processors I've tried (Darktable, LightZone, RawTherapee, ufraw) produce an unacceptable amount of false color throughout the image (Figure $2$).

All the free software I have found takes its demosaicing method from Frank Markesteijn's code in dcraw, which seems to follow the same lines as adaptive homogeneity-directed (AHD) demosaicing on Bayer arrays. AHD proper is pretty good at not introducing noticeable color artifacts compared to other demosaicing methods, so at least the principles behind Markesteijn's algorithm are sound.

What follows is a deeper look into AHD and other relevant demosaicing techniques.

Fujifilm demosaicing

Fujifilm cameras themselves do a good job of demosaicing, but Fujifilm hasn't publicly revealed anything about the technique they use. Fujifilm works with vendors directly (Silkypix developers, Apple, Adobe, maybe others) to produce good RAF processors, and hasn't released anything close to technical about it other than the really high-level description here. Further, it looks like the camera, even with all the settings turned down, doctors the pictures beyond just demosaicing, so the two images in Figure 2 aren't precisely comparable.

Without any details from Fujifilm, we are left to our own devices.

AHD demosaicing

AHD for Bayer mosaics relies on several assumptions:

With these assumptions AHD does this:

  1. At each non-green pixel, interpolate green twice using filters from Assumption $\textit{A3}$: once using pixels to the left and right, and again using pixels above and below. Use this along with filters designed from Assumptions $\textit{A1,A3}$ to create two pairs of estimates for $R,B$.
  2. For each of the two interpolations, determine at each pixel position how big a region around that pixel is close in color to it (close with respect to some color-space metric).
  3. Form a final image by selecting, at each pixel location, the RGB value from the more homogeneous of the two interpolations.
  4. Perform color smoothing to reduce color variations caused by the wrong direction being chosen in the previous step.
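The selection in steps 2–3 can be sketched with NumPy. This is a minimal sketch, not AHD proper: it measures homogeneity as the count of 4-neighbors within a plain RGB distance `eps` of each pixel, whereas real AHD works in CIELAB with adaptively chosen thresholds.

```python
import numpy as np

def homogeneity(img, eps):
    """Per-pixel count of 4-neighbors whose color lies within eps of
    the pixel's own color (plain RGB distance; AHD proper uses CIELAB)."""
    h = np.zeros(img.shape[:2])
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        h += np.linalg.norm(img - shifted, axis=-1) < eps
    return h

def select_direction(interp_h, interp_v, eps=0.02):
    """Keep, per pixel, the RGB value from the more homogeneous of the
    horizontal and vertical interpolations (steps 2-3)."""
    mask = homogeneity(interp_h, eps) >= homogeneity(interp_v, eps)
    return np.where(mask[..., None], interp_h, interp_v)
```

A pixel surrounded by similar colors in one interpolation but not the other is evidence that interpolation ran along, rather than across, a local edge.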

Markesteijn demosaicing

Markesteijn's method is similar to AHD:

  1. Perform an interpolation horizontally, vertically, and along the two opposite diagonals to form 4 interpolated images. The interpolation here is linear, but I couldn't figure out how he designed the filter.
  2. Find the derivative of some color-space metric at each position in each of the 4 images.
  3. Form a final image by selecting, at each pixel, the RGB value of the interpolation whose color metric is slowest varying there.
  4. Perform color smoothing.
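Steps 2–3 can be sketched as follows. The finite-difference metric below is an assumption on my part, since it isn't clear which color-space derivative Markesteijn actually uses; the point is only the shape of the selection: score each candidate image along its own direction, then take the pixel from the slowest-varying candidate.

```python
import numpy as np

# Interpolation directions: horizontal, vertical, two diagonals.
DIRECTIONS = ((0, 1), (1, 0), (1, 1), (1, -1))

def directional_variation(img, shift):
    """Finite-difference derivative magnitude along one direction,
    summed over channels (a stand-in for the unspecified metric)."""
    return np.abs(img - np.roll(img, shift, axis=(0, 1))).sum(axis=-1)

def pick_slowest_varying(interps):
    """Step 3: per pixel, take the RGB value from the interpolated
    image whose metric varies slowest along its own direction."""
    scores = np.stack([directional_variation(im, d)
                       for im, d in zip(interps, DIRECTIONS)])
    best = np.argmin(scores, axis=0)    # (H, W) index map
    stacked = np.stack(interps)         # (4, H, W, 3)
    yy, xx = np.indices(best.shape)
    return stacked[best, yy, xx]
```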

General frequency-domain demosaicing

An alternative framework for demosaicing is described by Eric Dubois in Chapter 7 of Single-Sensor Imaging: Methods and Applications for Digital Cameras. It avoids working with specific color patterns by using the principle that, as long as the mosaic is periodic on some scale, lattice theory can be used to get a frequency-domain representation of the sensor data. At the time of writing, only this conference paper has been published on applying the idea to X-Trans, but the results look good. A method very similar to AHD can be derived from this framework.

Proposed method

I'm considering the following method. Take these assumptions:

Then do this:

  1. At each pixel, interpolate the missing colors, each time using only pixels from a mask in Figure $3$. Use the following procedure:
     1.1. Form a cubic interpolation of the local green channel $\hat{G}$.
     1.2. By Assumption $\textit{A1}$, the high-frequency components of $R,G$ are mostly similar, so letting $LF$ and $HF$ denote low- and high-pass filters and $\otimes$ be 2D convolution, \begin{align}R &\approx (LF \otimes R) + (HF \otimes G) \\ \Rightarrow (LF \otimes R) &\approx R - (HF \otimes G).\end{align} Then by Assumption $\textit{A4}$, cubic interpolation of the difference between $(HF \otimes \hat{G})$ and the sparse red samples we have forms an approximation of $(LF \otimes R)$.
     1.3. Form $(LF \otimes R) + (HF \otimes \hat{G})$ as a local approximation of $R$.
     1.4. Do the same for $B$.
  2. For all the interpolations, determine at each pixel position how big a region around that pixel is close in color to it (close with respect to some color-space metric).
  3. Form a final image by selecting, at each pixel location, the RGB value from the most homogeneous of the interpolations. If none of them are very good, give up and use the pixel from the circular mask.
  4. Perform color smoothing to reduce color variations caused by the wrong direction being chosen in the previous step.
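Steps 1.2–1.3 can be sketched numerically. This is a toy sketch under stated assumptions: the low-pass kernel is an arbitrary 3×3 binomial filter, the matching high-pass is `delta - LF` so the pair sums to the identity, and normalized box smoothing of the sparse residual stands in for the cubic interpolation described above.

```python
import numpy as np
from scipy import ndimage

# Arbitrary 3x3 binomial low-pass; the matching high-pass is
# delta - LF, so (LF ⊗ x) + (HF ⊗ x) == x for any image x.
LF = np.outer([1, 2, 1], [1, 2, 1]) / 16.0
HF = -LF.copy()
HF[1, 1] += 1.0

def estimate_red(red_sparse, red_mask, g_hat):
    """Approximate R as (LF ⊗ R) + (HF ⊗ Ĝ), per steps 1.2-1.3.
    (LF ⊗ R) is recovered by smoothing R - (HF ⊗ Ĝ) over the sparse
    red sample positions (normalized box smoothing stands in for
    cubic interpolation)."""
    hf_g = ndimage.convolve(g_hat, HF, mode='reflect')
    residual = np.where(red_mask, red_sparse - hf_g, 0.0)
    box = np.ones((5, 5))
    num = ndimage.convolve(residual, box, mode='reflect')
    den = ndimage.convolve(red_mask.astype(float), box, mode='reflect')
    lf_r = num / np.maximum(den, 1e-9)
    return lf_r + hf_g
```

When Assumption $\textit{A1}$ holds exactly (e.g. a smooth red base plus high-frequency detail shared with green), the sparse residual is nearly constant and the reconstruction recovers the full-resolution detail from $\hat{G}$.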

Here is what I get from this technique in a small problematic segment of the image:

Next steps

The next step is to see how well these methods work with better corrections added. In particular, I am doing white balance by hand (and very badly) before demosaicing. It's important that proper white balance be done before demosaicing happens, because the homogeneity metrics in steps 2 and 3 are meant to capture human visual sensitivities, and applying them to unrealistic colors will produce garbage.
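Pre-demosaic white balance itself is simple: each CFA sample is scaled by its channel's gain before any interpolation runs. The gains and channel map below are hypothetical placeholders; in practice the gains would come from the RAF metadata or an estimate such as gray-world.

```python
import numpy as np

def white_balance_cfa(raw, channel_map, gains):
    """Scale each CFA sample by its channel's gain before demosaicing.
    channel_map holds 0/1/2 for R/G/B at each photosite; gains is
    (r_gain, g_gain, b_gain), hypothetical values for illustration."""
    gains = np.asarray(gains, dtype=float)
    return raw.astype(float) * gains[channel_map]
```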

My end goal is to get nice pictures out of my camera's RAFs, so this project is subjective at its core, but it is still fun to think about technically. These results look promising enough that my next step is to implement the method in a local copy of Darktable.
