A lot of new Fujifilm cameras use the company's own X-Trans sensors, which have a non-Bayer color mosaic. Fujifilm claims that the less regular arrangement makes it less likely for edges in the scene to form moire patterns with the mosaic. They say this justifies omitting the optical lowpass filter, so their sensors can retain more detail than those with Bayer mosaics.

The irregular pattern means Fuji's raw files have to be demosaiced differently, and much of the software available right now does a poor job of it. All the open-source raw processors I've tried (Darktable, LightZone, RawTherapee, ufraw) produce an unacceptable amount of false color throughout the image (Figure $2$).

All the free software I have found takes its demosaicing method from Frank Markesteijn's code in dcraw, which seems to follow the same lines as adaptive homogeneity-directed demosaicing (AHD) on Bayer arrays. AHD proper is good at avoiding noticeable color artifacts compared to other demosaicing methods, so at least the principles behind Markesteijn's algorithm are sound.

What follows is a deeper look into AHD and other relevant demosaicing techniques.

Fujifilm cameras themselves do a good job of demosaicing, but Fujifilm hasn't publicly revealed anything about the technique they use. Fujifilm works with vendors directly (Silkypix developers, Apple, Adobe, maybe others) to produce good RAF processors, and hasn't released anything close to technical about it beyond the very high-level description here. Further, it looks like the camera, even with all the settings turned down, doctors the pictures beyond just demosaicing, so the two images in Figure 2 aren't precisely comparable.

Without any details from Fujifilm, we are left to our own devices.

AHD for Bayer mosaics relies on a few ideas:

- $\textit{A1}$: The difference images $(R-G),(B-G),(R-B)$ are slowly varying. (This is a very common assumption in demosaicing; the AHD paper cites several other papers that also use it.)
- $\textit{A2}$: Gradients in pictures tend to be locally linear, so interpolation using only a directional portion of the local region will work better than using all the nearby pixels at once. For instance, if you are looking at the edge of a vertical wall through the mosaic and want to know the color of the center pixel, using colors to the left and right of center might mix the wall's color with the colors of whatever is behind it. If you use only pixels from above and below, there is a better chance you will interpolate using colors from only the background or only the wall.
- $\textit{A3}$: Any horizontal or vertical strip of a Bayer mosaic is either $\dots RGRGRG \dots$ or $\dots BGBGBG \dots$. This periodicity makes it analytically simple to design linear filters to interpolate the unknown colors. Because X-Trans (seemingly by design) doesn't have such simple periodicity in any direction, we can't use the same trick as AHD.
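To make $\textit{A3}$ concrete, here is a rough NumPy sketch of directional green interpolation on a Bayer mosaic. The filter shown is the Hamilton-Adams one (average the two neighboring greens plus a Laplacian correction from same-color samples two pixels away), which is a common starting point for this kind of scheme; I'm not claiming it is the exact filter from the AHD paper.

```python
import numpy as np

def green_directional(cfa):
    """Horizontal and vertical green estimates over a Bayer mosaic
    (cfa: 2-D float array of raw values).

    By A3, every row/column reads ...GXGXG... (X = R or B), so one fixed
    1-D filter works at all non-green sites: average the two neighbouring
    greens, plus a Laplacian correction from the same-colour samples two
    pixels away (the Hamilton-Adams filter).
    """
    p = np.pad(cfa, 2, mode='reflect')
    g_h = (p[2:-2, 1:-3] + p[2:-2, 3:-1]) / 2 \
        + (2 * p[2:-2, 2:-2] - p[2:-2, :-4] - p[2:-2, 4:]) / 4
    g_v = (p[1:-3, 2:-2] + p[3:-1, 2:-2]) / 2 \
        + (2 * p[2:-2, 2:-2] - p[:-4, 2:-2] - p[4:, 2:-2]) / 4
    return g_h, g_v
```

The estimates are only meaningful at the red/blue sites, but computing them everywhere keeps the code vectorized; a real implementation would mask out the green sites afterwards.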

With these assumptions, AHD proceeds as follows:

- At each non-green pixel, interpolate green twice using filters from Assumption $\textit{A3}$: once using pixels from side to side, and again using pixels above and below. Use this, along with filters designed with Assumptions $\textit{A1},\textit{A3}$, to create two pairs of estimates for $R,B$.
- For each of the two interpolations, at each pixel position determine how big a region around that pixel is close in color to it (close with respect to some color-space metric).
- Form a final image by selecting, at each pixel location, the RGB value from the more homogeneous of the two interpolations.
- Perform color smoothing to reduce color variations from the wrong direction being chosen in the previous step.
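The homogeneity-selection step can be sketched like this. This is my own loose reading of it: the color distance below is plain Euclidean on whatever three-channel values you pass in, whereas the AHD paper measures closeness in CIELAB with separate luminance and chrominance tolerances.

```python
import numpy as np

def homogeneity(img, eps):
    """For each pixel of an (H, W, 3) image, count how many of its four
    neighbours are within colour distance eps (Euclidean here; a real
    implementation would work in a perceptual space such as CIELAB)."""
    h_img, w_img = img.shape[:2]
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    score = np.zeros((h_img, w_img), dtype=int)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nb = p[1 + dy:1 + dy + h_img, 1 + dx:1 + dx + w_img]
        score += np.linalg.norm(nb - img, axis=-1) <= eps
    return score

def select_by_homogeneity(img_h, img_v, eps=2.0):
    """Per pixel, keep the RGB value from whichever directional
    interpolation sits in the larger homogeneous region."""
    keep_h = homogeneity(img_h, eps) >= homogeneity(img_v, eps)
    return np.where(keep_h[..., None], img_h, img_v)
```

Ties go to the horizontal interpolation here; the choice of tie-break and of `eps` are placeholder details.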

Markesteijn's algorithm is similar to AHD:

- Perform an interpolation horizontally, vertically, and along each of the two diagonals to form 4 interpolated images. The interpolation here is linear, but I couldn't figure out how he designed the filter.
- Find the derivative of some color-space metric at each position in each of the 4 images.
- Form a final image by selecting, at each pixel, the RGB value of the interpolation whose color metric varies slowest there.
- Perform color smoothing.
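A toy version of the selection rule looks like this. The variation measure (box-averaged absolute finite differences, summed over channels) is my own stand-in, since I don't know the exact metric Markesteijn uses; the point is only the structure: score each candidate, then take each pixel from the slowest-varying one.

```python
import numpy as np

def box3(a):
    """3x3 box average of a 2-D array, edges clamped."""
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def pick_slowest_varying(candidates):
    """candidates: list of (H, W, 3) interpolations (e.g. horizontal,
    vertical and the two diagonals).  Score each by a local colour-variation
    measure (a placeholder choice), then take every output pixel from the
    candidate with the smallest score there."""
    h, w = candidates[0].shape[:2]
    scores = []
    for img in candidates:
        gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
        gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
        scores.append(box3((gy + gx).sum(axis=-1)))
    best = np.argmin(np.stack(scores), axis=0)   # (H, W) winner indices
    stack = np.stack(candidates)                  # (N, H, W, 3)
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```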

An alternative framework for demosaicing is described by Eric Dubois in Chapter 7 of *Single-Sensor Imaging: Methods and Applications for Digital Cameras*. It avoids working with specific color patterns by using the principle that as long as the mosaic is periodic on some scale, lattice theory can be used to get a frequency-domain representation of the sensor data. At the time of writing, only this conference paper has been published on applying the idea to X-Trans, but the results look good. A method very similar to AHD can be derived from this framework.

I'm considering the following method. Take these assumptions:

- $\textit{A1},\textit{A2}$ from AHD.
- $\textit{A4}$: Cubic interpolation of sparse data points only adds high-frequency noise, and $G$ is densely sampled enough that local cubic interpolation of it will not add much noise. (Reasonably sound; see this paper.)

Then do this:

- At each pixel, interpolate the missing colors, each time using only pixels from a mask in Figure $3$. Use the following procedure:
  1. Form a cubic interpolation $\hat{G}$ of the local green channel.
  2. By Assumption $\textit{A1}$, the high-frequency components of $R,G$ are mostly similar, so letting $LF$ and $HF$ denote low- and high-pass filters and $\otimes$ be 2D convolution,
     \begin{align}
     R &\approx (LF \otimes R) + (HF \otimes G) \\
     \Rightarrow (LF \otimes R) &\approx R - (HF \otimes G).
     \end{align}
     Then by Assumption $\textit{A4}$, cubic interpolation of the difference between $(HF \otimes \hat{G})$ and the sparse red samples we have forms an approximation of $(LF \otimes R)$.
  3. Form $(LF \otimes R) + (HF \otimes \hat{G})$ as a local approximation of $R$.
  4. Do the same for $B$.
- For each interpolation, at each pixel position determine how big a region around that pixel is close in color to it (close with respect to some color-space metric).
- Form a final image by selecting, at each pixel location, the RGB value from the most homogeneous of the interpolations. If none of them are very good, give up and use the pixel from the circular mask.
- Perform color smoothing to reduce color variations from the wrong direction being chosen in the previous step.
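Concretely, the per-channel interpolation might look like this in NumPy/SciPy. The Gaussian low-/high-pass split and SciPy's scattered cubic interpolator are placeholder choices on my part; nothing above pins down the exact filter pair or interpolator.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def cubic_scattered(mask, values, grid):
    """Cubic interpolation of the samples values[mask] over the whole grid.
    griddata's cubic mode leaves NaN outside the samples' convex hull, so
    patch those pixels with nearest-neighbour values."""
    pts = np.column_stack(np.nonzero(mask))
    est = griddata(pts, values[mask], grid, method='cubic')
    near = griddata(pts, values[mask], grid, method='nearest')
    return np.where(np.isnan(est), near, est)

def interp_red(cfa, green_mask, red_mask, sigma=1.5):
    """Estimate a full red channel.  cfa is the raw mosaic; the boolean
    masks say where green/red were sampled.  The Gaussian split into LF/HF
    is a placeholder choice of filter pair."""
    grid = tuple(np.mgrid[0:cfa.shape[0], 0:cfa.shape[1]])
    # dense cubic estimate of green (safe by A4, since green is dense)
    g_hat = cubic_scattered(green_mask, cfa, grid)
    # split G_hat into low and high frequencies
    hf_g = g_hat - gaussian_filter(g_hat, sigma)
    # by A1, interpolating the sparse samples of R - HF*G_hat gives LF*R
    lf_r = cubic_scattered(red_mask, cfa - hf_g, grid)
    # recombine LF*R with the shared high frequencies
    return lf_r + hf_g
```

The same function serves for blue with the blue mask; restricting everything to the masked local neighborhoods of Figure $3$ is omitted here for brevity.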

Here is what I get from this technique in a small problematic segment of the image:

The next step is to see how well these methods work with better corrections added. In particular, I am doing white balance by hand (and badly) before demosaicing. It's important that proper white balance be done before the demosaicing happens because the homogeneity metrics in steps 2 and 3 are meant to capture human visual sensitivities, and applying them to unrealistic colors will mean garbage comes out.
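For now, the by-hand white balance is nothing more than per-channel gains applied to the mosaic values before anything else. The multipliers below are made-up placeholders; real ones should come from the camera metadata or a grey-patch measurement.

```python
import numpy as np

def white_balance_cfa(cfa, masks, gains=(2.0, 1.0, 1.5)):
    """Scale each colour's raw samples on the mosaic, before demosaicing.
    masks = (red, green, blue) boolean arrays marking where each colour was
    sampled; gains are placeholder multipliers."""
    out = cfa.astype(np.float64, copy=True)
    for mask, gain in zip(masks, gains):
        out[mask] *= gain
    return out
```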

My end goal is to get nice pictures out of my camera's RAFs, so this project is subjective at its core, but it is still fun to think about technically. These results look promising enough that my next goal is to implement the method in a local copy of Darktable.