The Universe in Data Form

Articles

Constructing Jupiter - North Pole/South Pole

In the past few weeks I’ve been tinkering with the raw data captured by JUNO (a NASA space probe that is currently in orbit around Jupiter), to see if common Visual Effects tools would be effective in processing and aligning the imagery correctly.

I chose two specific raw images that were taken by Junocam (JUNO’s on-board camera) as the probe’s highly elliptical orbit brought it close to Jupiter for the first time — an event known as Perijove 1. The images captured Jupiter’s north and south poles, and, after several hours of manipulation, basic maths and a small amount of Python scripting, I ended up with the two images posted below.

Jupiter's south pole, constructed from raw images using common Visual Effects techniques and The Foundry's Nuke compositing software. Image Credit : NASA, SwRI, MSSS & Matt Brealey.

Jupiter's north pole, constructed from raw images using common Visual Effects techniques and The Foundry's Nuke compositing software. Visible compression artefacts are present in the raw data and not introduced through processing. The top section was not captured in the raw image and is therefore missing. Image Credit : NASA, SwRI, MSSS & Matt Brealey.

I was surprisingly impressed with the results, especially given that going in I had absolutely no idea how other people do this! All I knew was that the problems presented (cropping, aligning, warping, colour correction, batch processing and scripting) were all issues I had faced during my last 10 years in the VFX industry. Thankfully, the techniques I used worked almost entirely as expected, although the ‘A Flawless Technique…?’ section below covers the positives and negatives in a little more detail.

In this article I’ll break down the process of how I created the above images of Jupiter, step-by-step. But before I do, I want to briefly explain why it’s necessary to use image processing software at all.


Junocam & The Images That Almost Didn’t Exist [1]

The images returned from every stage of JUNO’s journey so far have been truly awe-inspiring. Yet their existence is made even more special by the fact that, had the original plan for JUNO gone ahead, those same images might never have been captured at all.

When initially designed, JUNO had seven instruments on board for measuring different aspects of Jupiter, but a camera to capture visible light was not amongst them. In the end, it was thanks to public engagement, outreach and education that a visible-light camera, Junocam (designed by Malin Space Science Systems), was added to the probe.

Actually capturing images from the newly-added camera would, however, prove to be a somewhat peculiar process, due to the way that JUNO moves through space. To explain the challenge, first imagine you’re driving your car along a very long, slightly curving road, on a clear night. Then imagine that as you drive forwards, your car steadily front-flips at a rate of about 2 flips per minute. Your task is now to capture clear photos of the moon, using a camera gaffer-taped to the roof.

This is essentially the situation presented to the team behind Junocam (minus the gaffer-tape, of course). Except JUNO is also attempting to achieve this task whilst being both 588 million kilometers from Earth, and travelling at approximately 250,000 kilometers per hour. And to add yet one more complication to the process, the JUNO team can’t be 100% sure of when in any specific front-flip Jupiter might appear in view of the camera. Space exploration is clearly nothing if not challenging.

To solve this problem, Junocam has been designed as a pushframe camera. Whilst a framing camera produces a single image with a single click of the shutter (for all intents and purposes the same as your own DSLR), the pushframe camera aboard JUNO actually takes a series of images (or frames) as it rotates. In our car example from above, this means that our camera would be capturing a frame once every second or so, as the car cartwheels its way down the road. And whilst some of those frames might contain things we’re not interested in (the road in our case, and empty space in Junocam’s case), we could be reasonably sure that at least some of the frames would contain our target.

One of the small caveats of this pushframe design is that the frames have to be reassembled before you can see the full image, and it’s this process that I’ll be describing below. We’ll start by investigating the raw images themselves.


Breaking Down the Raw Images

To process this raw imagery, I’m using a piece of industry-standard compositing software called Nuke, from a company called The Foundry. I’ll be posting another article in the next few days briefly outlining exactly where compositing fits into the Visual Effects process, but for now the main thing to understand is that all of the image processing techniques used here are extremely common in compositing work. The only thing that’s unusual in this case is the source of the images being manipulated.

Before jumping into Nuke, let’s start by taking a look at a raw image as returned from JUNO (I’ve rotated this image clockwise 90° to be a little more web-friendly, but this is otherwise unaltered) :

A raw image of Jupiter's south pole, taken on August 27th 2016. Image Credit : NASA, SwRI, MSSS.

The image consists of a number of individual frames that are captured consecutively as the probe spins, and which are then stacked together into a single image by the camera software before being returned to Earth. Calculating the number of individual frames in any raw image returned from JUNO is easy enough, as the pixel dimensions of a frame are always a constant 1648px wide by 384px high: simply divide the raw image’s height by 384. In this example, we end up with 24 individual frames.
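To make that arithmetic explicit, here is a minimal Python sketch. The 9216px height is inferred from the 24 frames in this particular raw image, so treat it as illustrative rather than a constant :

    # The frame count is just the raw image's pixel height divided by the
    # constant 384px frame height. The height used here is specific to this
    # raw image and will differ between captures.
    FRAME_HEIGHT = 384
    raw_image_height = 9216                          # height of this particular raw image, in pixels
    num_frames = raw_image_height // FRAME_HEIGHT
    print(num_frames)                                # -> 24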

The raw image is constructed of 24 consecutively captured frames, outlined here. Image data featuring Jupiter is contained within frames 6-21. Raw Image Credit : NASA, SwRI, MSSS & Matt Brealey

The eagle-eyed amongst you may notice that there is another feature of the image that I haven’t yet discussed. Each frame in the raw, visible-band images returned from JUNO is in fact split into three smaller, 128px-high ‘framelets’, one each for the red, green and blue filters attached to Junocam’s sensor [2].

Each individual frame consists of three 'framelets' - one red, one green and one blue, each capturing a different wavelength of light. Raw Image Credit : NASA, SwRI, MSSS & Matt Brealey

As each frame is designed to overlap slightly, simply isolating and then moving all of the red framelets so that they sit next to each other should give us a full (if not quite correctly aligned) red-filtered view of Jupiter. The same procedure can then be repeated for the green and blue framelets.

To achieve the above results in Nuke, I first use a Crop node — a tool which allows me to easily isolate a single specific part of the original raw image. I can then add a set of expressions to the Crop node to tie a specific frame number (for the image above this number would range from 1 to 24) to the position of that frame within the original image. The expressions used for this process, and the results as shown in the application, are displayed below.

A look at the expressions used to crop the raw image down to a single frame. As the frame numbers (on the left of the image) increase, the y-values on the crop node will also increase. 1648px is the width of the raw images returned from JUNO. 384px is the height of each frame. Image Credit : NASA, SwRI, MSSS & Matt Brealey

Animation showing each frame from the raw image being isolated in turn. As we move from frame to frame (the yellow playhead moving horizontally at the bottom of the animation), the crop region rises up across the original raw image (shown in red on the right side of the animation). The cropped result is shown in the center of the animation. Image Credit : Matt Brealey
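For anyone who wants to recreate this setup, here is a minimal sketch of how expressions like these could be applied from Nuke’s Script Editor. It uses Nuke’s standard Crop node and its box knob; the exact expressions in the screenshot above may differ slightly, but the idea is the same :

    # Run from Nuke's Script Editor : builds a Crop node whose region is
    # driven by the timeline frame number, isolating one 1648 x 384px frame
    # of the raw image at a time.
    import nuke

    crop = nuke.nodes.Crop()
    crop['box'].setExpression('0', 0)                   # x : left edge of the raw image
    crop['box'].setExpression('(frame - 1) * 384', 1)   # y : bottom edge of frame N
    crop['box'].setExpression('1648', 2)                # r : the full 1648px width
    crop['box'].setExpression('frame * 384', 3)         # t : top edge of frame N
    crop['reformat'].setValue(True)                     # output only the cropped region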

Cropping to the framelets uses the same technique. First the containing frame is isolated, and then either the top (blue), middle (green) or bottom (red) third of the frame is extracted. Finally, a Contact Sheet node is used to automatically position the corresponding framelets next to each other. Repeating this for each colour filter in turn, and then merging the three filtered images together, results in the slightly psychedelic, three-colour image below.

On the left, combining each of the red framelets results in a full but unaligned view of red-filtered Jupiter. On the right, combined but unaligned red, green and blue framelet images are layered on top of each other and coloured appropriately. Image Credit : NASA, SwRI, MSSS & Matt Brealey
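For those who would rather see the split-and-stack spelled out in code, the sketch below is a rough NumPy stand-in for the Crop/Contact Sheet setup described above, not the Nuke script itself. The top-to-bottom framelet ordering (blue, green, red) and the row-0-at-the-top orientation are assumptions to verify against your own raw files :

    # Split a raw Junocam image (a height x 1648 array, row 0 at the top)
    # into three stacked strips, one per colour filter, mimicking the
    # Crop + ContactSheet steps performed in Nuke.
    import numpy as np

    FRAME_H, FRAMELET_H = 384, 128

    def split_channels(raw):
        n_frames = raw.shape[0] // FRAME_H
        strips = {'blue': [], 'green': [], 'red': []}
        for i in range(n_frames):
            frame = raw[i * FRAME_H:(i + 1) * FRAME_H]
            strips['blue'].append(frame[0:FRAMELET_H])                  # top third
            strips['green'].append(frame[FRAMELET_H:2 * FRAMELET_H])    # middle third
            strips['red'].append(frame[2 * FRAMELET_H:3 * FRAMELET_H])  # bottom third
        # Stack each filter's framelets on top of one another to form a full,
        # unaligned view of Jupiter through that filter.
        return {name: np.vstack(framelets) for name, framelets in strips.items()}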


Aligning the Framelets

From the jagged edges and the repeated surface details in the image above, it’s clear that although the individual red, green and blue filtered images (or channels) have been created, the framelets that form them aren’t yet aligned correctly. There are two reasons for this :

  1. The frames taken by Junocam are designed to overlap slightly, so that no surface detail is lost.
  2. As the probe moves over the surface, parallax will make the features towards the edges of a framelet appear to move more than those in the center. This problem was in fact a lot more pronounced in the north pole images, compared to the south.

Accounting for the first issue above is relatively simple – I add a Transform node to each framelet, moving it both horizontally and vertically until the features towards its center (i.e. where the transition between day/night occurs) line up correctly with the framelet below.

Fixing the second issue was slightly trickier. I selectively scaled each framelet, moving the features on the outside of the frame horizontally into the correct position whilst leaving the center of the frame untouched. For the VFX artists amongst you, I did this by using a ColorLookup tool to grade an STMap, adding curve points to selectively scale/distort the image.
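The sketch below isn’t the ColorLookup/STMap setup itself, but a rough NumPy equivalent of the idea : each output pixel’s horizontal sample position is pushed through a curve that is identity at the center of the framelet and only shifts pixels towards the left edge. The curve shape and the 20px figure are purely illustrative :

    # Selectively scale a single-channel framelet : distort the left side of
    # the image while leaving everything from the center rightwards untouched.
    import numpy as np

    def selective_scale(framelet, edge_shift_px=20.0):
        h, w = framelet.shape
        x = np.arange(w, dtype=np.float32)
        u = x / (w - 1)                            # 0..1, like the red channel of an STMap
        # Weight is 1 at the left edge and fades to 0 by the center of the frame.
        weight = np.clip(1.0 - 2.0 * u, 0.0, 1.0)
        sample_x = x + edge_shift_px * weight      # where each output pixel samples from
        out = np.empty_like(framelet, dtype=np.float32)
        for row in range(h):
            out[row] = np.interp(sample_x, x, framelet[row])
        return out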

Both techniques, and the resultant images of Jupiter are shown below.

Animation showing the framelets first being translated horizontally and vertically (fixing the overlap issue), and then being selectively scaled (to account for parallax). Note that in this second technique, only the left side of the framelet is altered. Image Credit : Matt Brealey, NASA, SwRI & MSSS

On the left is the original red-filtered Jupiter channel. In the middle, each framelet has been transformed so that the surface features in the center of each frame line up. On the right, each framelet has also been selectively scaled, aligning those surface features on the edge of the planet. Image Credit : NASA, SwRI, MSSS & Matt Brealey

After repeating the process for the green and blue channels, I ended up with three separate, seemingly perfectly aligned images of Jupiter. But layering them on top of each other revealed a problem – the surface features in each channel simply didn’t line up. Whilst a lot closer to the final result, there is clearly still some work to be done.

Whilst errors are obvious at the edges of the image, note the red/green ghosting around features in the center of the frame, implying that the red/green (and indeed, blue) channels are misaligned. Image Credit : NASA, SwRI, MSSS & Matt Brealey


Splinewarping & Aligning the Channels

Knowing that each channel was misaligned in a different way, the first step of this next process was to choose a ‘target’ channel - a reference image that the other two channels would then be manipulated to match. In my case, the red channel was the sharpest, most well-exposed image, so that became my target. The green and blue channels would now need to be warped to match it.

The red, green and blue channels of the south pole image, after the framelet alignment. The red channel was chosen as the 'target' channel for the next stage of the aligning process. Image Credit : NASA, SwRI, MSSS & Matt Brealey

The process of warping one image to match another is used for a wide variety of reasons in VFX, and is really an art in and of itself. The Splinewarp tool that I used on the JUNO images is probably the most common way of warping an image in Nuke, and its ‘pin’ functionality was especially useful in this case, as there were many surface features of differing sizes that needed to be matched together.

The basic technique with the ‘pin’ feature of the Splinewarp tool is to first look at the image you want to warp (the ‘source’ image) and add one or more pins on the features you wish to move. You then switch to look at the target image, and move the pins to where they should actually be. The software will then warp the source image so that the two sets of points line up perfectly.
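Nuke’s Splinewarp handles this internally, but the pin idea itself can be sketched in a few lines of Python : interpolate the pin offsets into a smooth displacement field and resample the source through it. The version below is a stand-in built on SciPy’s thin-plate radial basis functions, not the algorithm the Splinewarp node actually uses :

    # Warp a single-channel 'source' image so that each source pin lands on
    # its corresponding target pin, smoothly interpolating in between.
    import numpy as np
    from scipy.interpolate import Rbf
    from scipy.ndimage import map_coordinates

    def pin_warp(source, src_pins, dst_pins):
        src = np.asarray(src_pins, dtype=float)    # (x, y) positions of features in the source
        dst = np.asarray(dst_pins, dtype=float)    # where those features should end up
        # Smooth displacement field : at each target position, how far back
        # into the source image we need to reach.
        dx = Rbf(dst[:, 0], dst[:, 1], src[:, 0] - dst[:, 0], function='thin_plate')
        dy = Rbf(dst[:, 0], dst[:, 1], src[:, 1] - dst[:, 1], function='thin_plate')
        h, w = source.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        # Resample the source at the displaced coordinates (backward mapping).
        return map_coordinates(source, [ys + dy(xs, ys), xs + dx(xs, ys)], order=1)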

An example of using the 'pin' feature of Nuke's Splinewarp tool to warp surface features into their correct position. In this case, features from the green channel of the south pole image are marked with blue pins, and then those pins are moved to the corresponding position in the red channel. The software then warps the green channel so that the pins align. Image Credit : Matt Brealey, NASA, SwRI & MSSS

This is obviously a very manual technique; luckily, in this case the old adage ‘less is more’ really does prove to be true – starting with as few pins as possible often gives the best results. One cup of tea later, and both the green and blue channels were aligned as best as I could manage given the resolution of the source images. I’ll talk more about the pros and cons of this method below.

The combined red, green and blue channels of the south pole image after the splinewarping process. All red/green ghosting has been minimised. Image Credit : NASA, SwRI, MSSS & Matt Brealey


Finishing the Image

After aligning the channels, there were two steps left : cleanup, and colour correction. The cleanup involved was actually minimal – removing the calibration marks[5], imperfections that appear as tiny dots in evenly-spaced straight lines across each channel. To do this, I used Nuke’s RotoPaint tool to cover up each dot in turn by copying (or ‘cloning’) nearby pixels to cover the marks. This ensures that the new pixel values remain consistent with the dot’s surrounding area.
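In code terms, each clone stroke boils down to something like the sketch below : copy a small patch of pixels from a nearby, clean offset over each dot. The dot position, offset and patch size here are placeholders rather than values from the actual images :

    # Cover a small dot by cloning a nearby patch of pixels over it, so the
    # repair stays consistent with the surrounding area.
    import numpy as np

    def clone_out(channel, dot_xy, offset_xy=(0, 6), radius=2):
        x, y = dot_xy                              # centre of the dot to remove
        ox, oy = offset_xy                         # where to clone from, relative to the dot
        src = channel[y + oy - radius:y + oy + radius + 1,
                      x + ox - radius:x + ox + radius + 1]
        channel[y - radius:y + radius + 1,
                x - radius:x + radius + 1] = src
        return channel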

An animation showing the process of painting-out imperfections on the filters that appear in different locations across all 3 channels. The pixels under the circle with the cross are cloned under the cursor as you paint each stroke. Image Credit : NASA, SwRI, MSSS & Matt Brealey

With the paint work complete, the final step of the process is to colour-correct the image, both to highlight the features captured by Junocam and to remove the yellow haze that is present in the raw images. Nuke itself has a large array of colour-correction tools, so after some relatively simple masking, grading and sharpening, the image was finished.
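My own grade was done entirely by eye in Nuke, so the sketch below is only an illustration of the general idea behind removing a colour cast : pick a region that ought to be neutral, scale the three channels so it averages to grey, then lift the result slightly. The patch location and gamma value are invented for the example :

    # Neutralise a colour cast by scaling each channel so that a chosen patch
    # averages to grey, then apply a simple gamma lift. Expects a float RGB
    # image with values in the 0-1 range.
    import numpy as np

    def neutralise(rgb, patch=(slice(100, 140), slice(200, 240)), gamma=0.8):
        rgb = rgb.astype(np.float32)
        patch_mean = rgb[patch].reshape(-1, 3).mean(axis=0)   # per-channel mean of the patch
        gains = patch_mean.mean() / patch_mean                # push the patch towards grey
        balanced = np.clip(rgb * gains, 0.0, 1.0)
        return balanced ** gamma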

An animation showing the colour correction applied to the image of Jupiter's south pole, in order to better expose the surface features, and remove the yellow haze. Image Credit : NASA, SwRI, MSSS & Matt Brealey

I was careful in these images not to individually alter the colour of the three channels (which would more significantly change the resultant merged colour), instead opting to colour-correct the merged, three-channel image as a whole. However, any colour correction not driven by exact real-world values is going to introduce a level of artistic interpretation into the final image. There is certainly a broader question as to where the line is drawn between that artistic interpretation of the results and ‘reality’, but perhaps that’s best left for a future article.


A Flawless Technique…?

Whilst I am really quite pleased with the results of this initial test, the current workflow does leave a lot to be desired when it comes to preserving the absolute accuracy of the data. In this last section, I want to briefly list some of the issues present in the process, and my thoughts on how they could potentially be resolved.

Lens Distortion

The first issue I’d like to fix is that of lens distortion in the original cropped frames. Although the distortion present in Junocam is in fact very low (approximately 3% at the corners of the frame), removing distortion from images is a standard VFX procedure, and it’s one I intend to tackle in future projects, providing that I can find suitable reference frames from Junocam.

Colour Correction/The Yellow Haze

The colour correction used to remove the yellow haze was applied entirely by eye, and is not based upon any known, real-world values. Again, this process is common in VFX work, and once I have the correct reference, it should be easy to apply the required colour offsets a little more accurately.

Aligning - 2D vs 3D

The biggest issue, however, is one of alignment. In the process described above I am attempting to use a purely 2D workflow to align and resolve imagery of a target that is, of course, 3-dimensional. As a result of the warping and transforming that I have to apply, I am also introducing a disparity in the lat/long positioning of the surface features present in the image, when compared to the raw data.

To get a rough estimate of the amount of warping present, I calculated a heat-map showing the difference between a pixel's unwarped position and its position in the final south pole image (I am including only the transforms applied to a single feature/section of a framelet, not the transforms applied to a framelet as a whole). The result is shown below.
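The calculation itself is simple once you have each pixel’s unwarped and final coordinates, however you choose to export them. The sketch below just takes the length of each pixel’s displacement vector and normalises it so that 20px of movement maps to full brightness :

    # Build a heat-map of warp distortion from per-pixel coordinates.
    # unwarped_xy and final_xy are (height, width, 2) arrays of pixel positions.
    import numpy as np

    def warp_heatmap(unwarped_xy, final_xy, full_scale_px=20.0):
        displacement = np.linalg.norm(final_xy - unwarped_xy, axis=-1)
        return np.clip(displacement / full_scale_px, 0.0, 1.0)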

A heatmap showing the amount of warping applied to the green and blue channels from the south pole image, in order to match the 'target' red channel. The brighter the pixel, the more distortion was added between the source/final images, with a value of full colour representing 20px of movement. From left to right, both channels are shown together, the green channel is isolated, and the blue channel is isolated. Image Credit : Matt Brealey

As you can see, both the blue and green channels underwent significant distortion (a maximum of 20px towards the left side of the blue channel) in order to match the positions of the red 'target' image.

Additionally, whilst it was reasonably simple to align the three channels on the south pole image, the north pole was a different story altogether. Compression artefacts and a lack of definition in the raw image (present due to JUNO's image quality/compression testing during Perijove 1) meant that finding the corresponding features across the separate channels became next to impossible. The result is an image that still contains a large amount of red/green ghosting, and a larger lat/long disparity in the surface features than in the south pole image.

It is, however, extremely common in the VFX process to use both 2D and 3D data in a workflow in order to achieve more accurate results. One of the more powerful techniques used is 'projection' — the method of projecting an image onto a piece of corresponding geometry, analogous to a film projector in a cinema — and it is this technique that I intend to utilise next. By coupling 3D vector data from NASA/JPL (via services such as HORIZONS and SPICE) with the existing 2D raw data, I hope to be able to project the framelets onto a 3D model of Jupiter. Building them up one on top of the other should recreate a much more accurate, 3D representation of the state of Jupiter as Junocam passed overhead. Even if the result is merely used as a guide for the 2D workflow above, this technique should help minimise the lat/long disparities currently present.

Future articles will of course document my progress.


Notes

  1. A lot of the Junocam-specific information in this article came from the excellent technical paper provided on the Junocam page of the MissionJuno site. Other fantastic sources included the Junocam page of the MissionJuno Media Gallery, and the article ‘A new Earthrise over the Moon from Lunar Reconnaissance Orbiter’s pushframe camera’ written by the awesome Emily Lakdawalla over at The Planetary Society.
  2. Although not covered here, Junocam actually has four filters attached directly to its sensor. Whilst three of the filters are for red, green and blue filtered visible light (as manipulated throughout the article), the fourth filter is present specifically to measure methane, in order to more clearly image the clouds in the Jovian atmosphere. More details are available in the fantastic technical paper mentioned above.
  3. My hope with this article is that the VFX techniques used are described in enough detail to make the workflow understandable; however, these descriptions obviously only scratch the surface. If you’re interested in learning more about VFX in general, there are few better ways to start getting acquainted than by heading over to FXGuide’s Podcasts, selecting ‘fxguidetv’ from the horizontal menu, scrolling through the videos until you see an episode on a film you really liked, and clicking play.
  4. For an excellent example of compositing work in a VFX-oriented short film, I absolutely recommend watching Arev Manoukian’s stunning Nuit Blanche and its accompanying Making Of video over on Vimeo.
  5. Thanks to Michael Caplinger over on the UnmannedSpaceflight forums for clarification that the calibration marks were in fact imperfections on the Junocam filters.