Today is going to be a little bit different. I’d like to focus on a special feature I touched on briefly in the episode above. The Hi-Res Mode of the Olympus OM-D cameras.
We’re going to get quite techie here, so please bear with me…
The Hi-Res Mode is a feature you will find in the Olympus OM-D E-M1 Mark II, E-M5 Mark II and the PEN-F. It allows you to shoot an image of up to 80MP* with a much lower-resolution sensor.
*different camera models have different resolutions in Hi-Res mode.
Short And Sweet
The camera captures multiple images by shifting the sensor and then combining these images into one large, high-resolution RAW (or JPG) file.
Long and Detailed
To understand how the high-resolution mode works, we first have to understand a little bit about how a camera sensor works.
The following is a simplified explanation. I’m not a camera engineer, nor do I develop or build cameras, so I’ll do my best to explain how things work, without getting into too much detail about the things even I don’t fully understand.
Before we go full 100% nerd, there are a few words and terms you should understand:
- Bayer Filter Mosaic – A Bayer filter mosaic is a colour filter array (CFA) for arranging RGB colour filters on a square grid of photosensors. Its particular arrangement of colour filters is used in most single-chip digital image sensors used in digital cameras, camcorders, and scanners to create a colour image — [Source: Wikipedia]
- Photosites = Pixels on a modern camera. This isn’t strictly true, but for the sake of simplicity we’ll assume it is. In fact, photosites are the actual, physically present entities on a sensor, while pixels are an abstract construct saved in memory. A sensor is made up of photosites, not pixels.
- RGB – Red, Green, Blue – the 3 values that make up the final colour in a pixel.
- Pixel – PICture + ELement = Pixel. The squares of RGB information that make up an image. In digital imaging, a pixel is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen. [Source: Wikipedia]
As light travels through the lens, it hits the sensor. A sensor is covered with tiny filters that break the light down into Reds, Greens and Blues.
Behind each tiny filter are photosites.
Photosites are the physical units that respond to the incoming, filtered light and convert one of these RGB values into an electronic signal.
In a Bayer matrix, the photosites are arranged in a repeating pattern of 2×2 squares. Each square includes one photosite for Red, one for Blue, and two for Green (the human eye is more sensitive to shades of green, hence the doubled green information on a sensor).
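To make the 2×2 pattern concrete, here’s a tiny sketch in Python (a toy model, not how any camera firmware actually does it) that maps photosite coordinates to their RGGB filter colour:

```python
def bayer_color(row, col):
    """Return which colour filter sits over the photosite at (row, col)
    in an RGGB Bayer mosaic: each 2x2 block holds one R, two G, one B."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the filter layout of a tiny 4x4 sensor patch:
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```

Running this prints the familiar `R G R G / G B G B` checkerboard, and every 2×2 block contains exactly one Red, two Greens and one Blue.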
Finally, each pixel is mapped to the location of one photosite on the sensor, resulting in:
1 Photosite = Area of 1 Pixel.
However, a photosite only gathers one Red, one Green OR one Blue value. To collect the other two values, the neighbouring R, G or B values are ‘borrowed’ (a process called demosaicing) to make up the information needed to complete a pixel.
So yes, in fact, a pixel only has ONE true colour value from its actual position, and gets the remaining 2 values from neighbouring photosites to complete the full RGB triplet it needs.
FYI: The number of neighbouring values used to fill in the missing data for each pixel depends on the camera and the interpolation software used. For a RAW file, your RAW processor (e.g. Adobe Lightroom) performs this process, called demosaicing; for a JPG, the camera does it.
Regardless, the process of ‘borrowing’ can cause various issues like random noise & false colour, artefacts, soft edges etc.
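The ‘borrowing’ can be sketched in a few lines of Python. This is a deliberately naive toy (real demosaicing algorithms are far more sophisticated): a 4×4 RGGB mosaic where each photosite holds one measured value, and the two missing channels are filled by averaging the nearest neighbours that measured them:

```python
# A toy 4x4 RGGB Bayer mosaic: each photosite holds ONE measured value,
# and COLORS records which filter sits over it.
COLORS = [list("RGRG"), list("GBGB"), list("RGRG"), list("GBGB")]
VALUES = [[100 if ch == "R" else 50 if ch == "G" else 10 for ch in row]
          for row in COLORS]

def demosaic_pixel(row, col):
    """Naive demosaicing: keep the photosite's own measured value and
    'borrow' the missing two channels by averaging the nearest neighbours
    that measured them (real cameras use fancier interpolation)."""
    samples = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if 0 <= r < 4 and 0 <= c < 4:
                samples.setdefault(COLORS[r][c], []).append(VALUES[r][c])
    return {ch: sum(v) / len(v) for ch, v in samples.items()}
```

For example, `demosaic_pixel(1, 1)` sits on a Blue photosite: its Blue value is measured, but its Red and Green values are averages of the surrounding sites. Whenever neighbouring photosites disagree (an edge, fine texture), those averages are guesses — which is exactly where the artefacts come from.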
We can summarise that, in a normal photo, each pixel’s colour information is only 33% accurate. In fact, your image is being ‘upscaled’: a pixel really only represents the one Red, Green or Blue value of its photosite, while the remaining two values are ‘borrowed’ and averaged.
So, in fact, we can’t be 100% certain that the colour information is exactly correct for a pixel.
Unless we use…
Olympus makes use of the motors that are normally dedicated to stabilising the sensor and turns their movement into ‘high frequency’ micro-movements.
This allows the motors to move the sensor by exactly the amount needed to shift the photosites by one whole photosite unit.
The camera then stores just the one measured colour value per photosite, without borrowing any neighbour information, and repeats this process until it has one Red, one Green and one Blue value for every photosite location.
The result: the colour information is now 100% accurate for each pixel. No ‘borrowing’ from neighbours needed. This makes the interpolation process redundant, and the total amount of information has been tripled*.
*to be accurate: it has been quadrupled, since the sensor has moved 3 times and the camera has read out 4 values (the greens are doubled up and subsequently averaged), making up: 1×Red, 2×Green/2, 1×Blue.
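This first pass can be simulated end to end. The sketch below is a toy model under simple assumptions (a 4×4 ‘scene’ of true RGB values, four one-photosite shifts); it shows that after combining the four exposures, every location recovers its exact scene colour with no neighbour borrowing at all:

```python
# Toy model of the first pixel-shift pass. Assumptions: a 4x4 'scene'
# where every location has a true RGB triplet, and four one-photosite
# sensor shifts that slide a different Bayer filter over each location.
SCENE = {(r, c): (10 * r + c, 100 + r, 200 + c)
         for r in range(4) for c in range(4)}
SHIFTS = [(0, 0), (0, 1), (1, 1), (1, 0)]  # one-photosite moves

def filter_channel(row, col):
    """Channel index (0=R, 1=G, 2=B) of the RGGB filter at (row, col)."""
    if row % 2 == 0:
        return 0 if col % 2 == 0 else 1
    return 1 if col % 2 == 0 else 2

def capture(shift):
    """One exposure: each location records ONLY the channel of the
    filter that the sensor shift has placed over it."""
    dr, dc = shift
    return {(r, c): SCENE[r, c][filter_channel(r + dr, c + dc)]
            for (r, c) in SCENE}

def combine(frames):
    """Per location: one R, two G (averaged) and one B reading -- a
    full RGB triplet with no neighbour borrowing."""
    out = {}
    for (r, c) in SCENE:
        samples = {}
        for (dr, dc), frame in frames.items():
            ch = filter_channel(r + dr, c + dc)
            samples.setdefault(ch, []).append(frame[r, c])
        out[r, c] = tuple(sum(v) / len(v)
                          for _, v in sorted(samples.items()))
    return out

frames = {s: capture(s) for s in SHIFTS}
result = combine(frames)
# Every location recovers its true scene colour exactly.
assert all(result[p] == tuple(map(float, SCENE[p])) for p in SCENE)
```

The four shifts walk each location through the whole 2×2 RGGB block, which is why every site ends up with one Red, two Greens and one Blue reading.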
BUT Olympus takes this feature one step further!
It now shifts the sensor diagonally by exactly half a photosite horizontally and half a photosite vertically. So we end up exactly at the intersection of 4 original photosites.
Then the camera repeats the 3 shifts (+1 to get back to the original spot) to create another image with 100% accurate RGB values per photosite. We end up with a total of 8 separate exposures: 4 in the first rotation + 4 in the second rotation (remember: the greens are doubled in each rotation).
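A quick sketch of where those samples land (again a toy model, not the camera’s actual processing): the first 4-shot pass samples the original photosite grid, and the second pass samples the grid offset by half a photosite diagonally, doubling the number of distinct, fully colour-sampled positions:

```python
# First pass samples the integer photosite grid; second pass samples the
# half-photosite-offset grid (the intersections of 4 original sites).
N = 4  # toy sensor edge length
first_pass = {(r, c) for r in range(N) for c in range(N)}
second_pass = {(r + 0.5, c + 0.5) for r in range(N) for c in range(N)}
positions = first_pass | second_pass

# Twice as many distinct sample positions as a single exposure grid:
assert len(positions) == 2 * N * N
```

That denser sampling grid is the raw material the camera then interpolates into the final high-resolution file.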
Finally, the camera merges the two 100% colour accurate images into one RAW file and the result is an 80MP ORF (RAW) file (or a 50MP interpolated JPG file). Additionally, the camera stores the first image of the 8 shot sequence as an ORI file alongside the Hi-Res ORF file. This is simply the ‘normal’ RAW of the 20.4MP sensor. So in total, you end up with:
1x ORI file
1x ORF file
1x JPG file (if RAW + JPG is activated)
So, not only do we have a higher resolution image, we also get much better colour accuracy.
Ultimately the result is a much cleaner shot. Less random noise and a lot more detail!
The obvious advantage is the increase in resolution of course. But not only do we get more pixels, we also get more accurate pixels.
This becomes clearly visible when you start to push the files in Lightroom. There is much less noise and a lot more colour information.
Additionally, there’s almost zero visible fringing (false colour) in the Hi-Resolution shots.
Hi-Res mode LOVES sharpening too. The images really come to life once you add a little sharpening to them.
There are a few physical limitations to the Hi-Res shots.
- You can’t go above ISO 1600
- The smallest usable aperture is f/8 (you can’t stop down any further)
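As a quick illustration of those two limits, here’s a hypothetical pre-shoot check (this helper is my own invention, not a real camera API): ISO capped at 1600, and no f-number higher than f/8:

```python
def hires_settings_ok(iso, f_number):
    """Hypothetical check against the two Hi-Res limits above:
    ISO capped at 1600, and no aperture smaller (higher f-number)
    than f/8."""
    return iso <= 1600 and f_number <= 8.0

# ISO 200 at f/5.6 is fine; ISO 3200 or f/11 would be rejected.
assert hires_settings_ok(200, 5.6)
assert not hires_settings_ok(3200, 5.6)
assert not hires_settings_ok(200, 11)
```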
You’ll need sharp glass for this mode. Since we’re effectively doubling the detail, we also need a lens that can resolve that kind of detail.
Luckily Olympus is amazing at making good glass, so you don’t have to look far. I’d recommend using the Hi-Res Mode with the M.Zuiko 12-100mm f/4 IS PRO at around f/5.6. That’s where you’ll get the sharpest results.
I’ve also read that the Zuiko 50mm f/2 Macro lens is one of the sharpest lenses and that the Hi-Res mode will most likely work well with that lens too.
There are of course other limitations as well:
Since the camera uses the motors that usually handle sensor stabilisation to create a Hi-Res shot, you don’t get any sensor stabilisation when shooting in Hi-Res mode. The camera can’t move during the process of taking a Hi-Res shot, so you have to shoot on a tripod. A solid one! If the camera moves as little as half a pixel, the shot won’t be sharp.
As with every high-resolution camera, the higher the pixel count, the easier it is to notice the difference between a slightly blurred shot and a sharp one. I always recommend taking a couple of shots to make sure you get at least one 100% sharp result.
In addition to not moving the camera, there’s always the chance you will have movement in the scene (like leaves moving in the wind). Luckily Olympus have found a smart solution to cleaning this up in camera:
The E-M1 Mark II captures a ‘regular’ 20MP shot and uses it to ‘patch up’ the areas where artefacts occur during the process of creating the Hi-Res shot. It’s a pretty neat fix and works well most of the time.
You should understand that this feature is most likely still in its early stages. It’s a proof of concept and the limitations are things I’m sure Olympus is working on and I can’t wait to see how far they can take this feature in future cameras.
It’s exciting to think that, in the future, we could possibly be shooting ultra-high-resolution shots with cameras as small and versatile as the OM-D series.
In a studio environment it can truly be revolutionary. In the outdoors it really needs the right conditions and I highly recommend shooting multiple shots.
I love the feature and it’s one of the many reasons why I think Olympus make amazing cameras. It shows innovation and ‘outside-the-box’ thinking. It seems like more and more camera manufacturers are copying this feature too, which, to me, shows that Olympus is doing something right here. Either way, it’s very exciting to see where they will take this feature in the future!
http://www.wrotniak.net/photo/m43/em1.2-hires.html – I couldn’t have put it better. His sequel to this article is just as interesting.
https://en.wikipedia.org/wiki/Pixel – for some general information about Pixels
https://en.wikipedia.org/wiki/Image_sensor – for some general information about modern Imaging Sensors
https://en.wikipedia.org/wiki/Demosaicing – for some general information about Demosaicing
https://www.dpreview.com/reviews/olympus-om-d-e-m5-ii/4 – great read about the Hi-Res Mode by Richard Butler
https://www.imaging-resource.com/PRODS/olympus-e-m5-ii/olympus-e-m5-iiTECH2.HTM – great read, full of comparison images with the Nikon D810 and PENTAX 645Z.
CCD: The heart of a digital camera (how a charge-coupled device works) – great explanation how sensors work.
Capturing Digital Images (The Bayer Filter) – Computerphile – how the Bayer Matrix works.
The Science of Camera Sensors – probably the most factual, correct and best explanation on the web of how sensors work. Worth watching.