The Canon C100 is an 8 bit camera, so its images have ‘texture’ – a sort of electronic grain reminiscent of film. Most of the time this is invisible, or a pleasant part of the picture. In some situations, it can be an absolute menace. Scenes that contain large areas of gently graduating tone pose a huge problem to an 8 bit system: areas of blue sky, still water, or in my case, a boring white wall of the interview room.
Whilst we set up, I shot some tests to help Alex tune his workflow for speed. It rapidly became obvious that we’d found the perfect shot to demonstrate the dangers of noise – and in particular, the C100’s occasional issue with a pattern of vertical stripes:
Click the images below to view them at 1:1 – this is important – and in some browsers (like Chrome) you may need to click the image again to zoom in.
Due to the balance of the lighting (we couldn’t black the room out, and couldn’t change rooms), we were working at 1250 ISO – roughly equivalent to adding 6dB of gain. So I was expecting a little noise, but not much.
Not that much. And remember, this is a still – in reality, it’s boiling away and drawing attention to itself.
It’s recommended to run an Auto Black Balance at the start of every shoot, or whenever the camera changes temperature (e.g. moving from indoors to outdoors). Officially, one should also Auto Black Balance after every ISO change. An Auto Black Balance routine identifies the ‘static’ noise to the camera’s image processor, which will then do a better job of hiding it.
So, we black balanced the camera, and Alex took over the role of lit object.
There was some improvement, but the vertical stripes could still be seen. It doesn’t help that the background is predominantly blue – we’re seeing noise mostly from the blue channel, and blue is notorious for being ‘the noisy weak one’ when it comes to video sensors. Remember that when you choose your chromakey background (see footnote).
The first thought is to use a denoiser – a plugin that analyses the noise pattern and removes it. The C100 uses some denoising in-camera for its AVCHD recordings, but in this case even the in-camera denoiser was swamped. Neat Video is a great noise reduction plug-in, available for many platforms and most editing software. I tried its quick and simple ‘Easy Setup’, which dramatically improved things.
But it’s not quite perfect – there’s still some mottling. In some respects, it’s done too good a job of removing the speckles of noise, leaving some colour errors behind. You can fettle the controls in advanced mode to fine-tune it, but perversely, adding a little artificial monochrome noise helped a lot:
We noticed that having a little more contrast in the tonal transition seemed to strongly alter the noise pattern – less subtlety to deal with. I hung up my jacket as a makeshift cucoloris to see how the noise was affected by sharper transitions of tone.
So, we needed more contrast in the background – which we eventually achieved by lowering the ambient light in the room (two translucent curtains didn’t help much). But in the meantime, we tried denoising this, and playing around with vignettes. That demonstrated the benefit of more contrast – although the colour balance was hideous.
However, there’s banding in this – and when encoded for web playback, those bands will be ‘enhanced’ thanks to the way lossy encoding works.
We finally got the balance right by using Magic Bullet Looks to create a vignette that raised the contrast of the background gradient, did a little colour correction to help the skin tones, and even some skin smoothing.
The Issue
We’re cleaning up a noisy camera image and generating a cleaner output. Almost all of my work goes up on the web, and as a rule, nice clean video makes for better viewing than drab noisy video. However, super-clean denoised video can do odd things once encoded to H.264 and uploaded to a service such as Vimeo.
Furthermore, not all encoders were created equal. I tried three different encoders: the quick and dirty Turbo264, the MainConcept H.264 encoder that works fast with OpenCL hardware, and the open source but well respected x264 encoder. The latter two were run in Episode Pro 6.4.1. The movies follow the above story; you can ignore the audio – we were just ‘mucking around’ checking stuff.
The best results came from Episode using x264:
Here’s the same master movie encoded via MainConcept – although optimised for OpenCL, it actually took 15% longer than x264 on my MacBook Pro, and to my eyes seems a little blotchier.
Next, Turbo264 – a single-pass encoder aimed at speed. It’s not bad, but not very good either.
Finally, a look at YouTube:
This shows that each service tunes its encoding to its target audience. YouTube seems to cater for noisy video, but doesn’t like strong action or dramatic tonal changes – as befits its more domestic uploads. Vimeo is trying very hard to achieve a good quality balance, but can be confused by subtle gradation. Download the uploaded masters and compare if you wish.
In Conclusion:
Ideally, one would do a little noise reduction, then add a touch of film grain to ‘wake up’ the encoder and give it something to chew on – flat areas of tone seem to make the encoding ‘lazy’. I ended up using Magic Bullet Looks yet again: pepping up the skin tones with Colorista, a little Cosmo to cater for any dramatic makeup we might come across (no time to alter the lighting between interviewees), a vignette to hide the worst of the background noise, and a subtle amount of film grain. For our uses, it looked great both on the ProRes projected version and the subsequent online videos.
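If you want to replicate the grain step outside Magic Bullet Looks, here’s a rough Python/numpy sketch of the idea – the strength value is an assumption to taste, and the important detail is that one noise plane is shared across all three channels, so it reads as monochrome grain rather than colour speckle:

```python
import numpy as np

rng = np.random.default_rng()

def add_grain(frame, strength=2.0):
    """Add subtle monochrome grain to an HxWx3 uint8 frame.

    strength is in 8 bit code values - small enough to be invisible,
    big enough to give the encoder something to chew on.
    """
    grain = rng.normal(0.0, strength, frame.shape[:2])    # one noise plane
    noisy = frame.astype(np.float32) + grain[..., None]   # same grain in R, G and B
    return np.clip(noisy, 0, 255).astype(np.uint8)
```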
Here’s the MBL setup:
What’s going on?
There are, broadly speaking, three classes of camera recording: 8 bits per channel, 10 bits per channel and 12 bits per channel (yes, there are exotic 16 bit systems and beyond). There are three channels – one each for Red, Green and Blue. In each channel, the tonal range from black to white is split into steps. A 2 bit system allows 4 ’steps’, as you can make 4 numbers by mixing up 2 ‘bits’ (00, 01, 10 and 11 in binary). So a 2 bit image would have black, dark grey, light grey and white. To make an image in colour, you’d have red, green and blue versions stacked on top of each other.
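To put numbers on that, here’s a quick Python sketch (purely illustrative – the sample count is arbitrary) that quantises a smooth ramp at various bit depths and counts what’s left:

```python
import numpy as np

# Quantise a smooth black-to-white ramp to a given bit depth and count
# the distinct levels that survive - fewer levels means coarser steps.
def quantise(ramp, bits):
    levels = 2 ** bits                       # 2 bits -> 4 levels, 8 bits -> 256
    return np.round(ramp * (levels - 1)) / (levels - 1)

ramp = np.linspace(0.0, 1.0, 100_000)        # a very smooth gradient
for bits in (2, 8, 10, 12):
    print(bits, "bits:", len(np.unique(quantise(ramp, bits))), "levels")
# 2 bits: 4 levels | 8 bits: 256 | 10 bits: 1024 | 12 bits: 4096
```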
8 bit video has, in theory, 256 steps each for red, green and blue. For various reasons, the first 16 steps are used for other things, and peak white happens at step 235, leaving the top 20 steps for engineering uses. So there are only about 220 steps between black and white. If that covers, say, 8 stops of brightness range, then a 0.5 stop transition spans only about 14 steps. That would create visible bands.
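The back-of-the-envelope sums, as a Python sketch (the 8 stop range is the same assumption as above):

```python
# Usable 8 bit video codes run from 16 (black) to 235 (white).
usable_steps = 235 - 16              # ~220 steps between black and white
stops = 8                            # assumed scene brightness range
per_half_stop = usable_steps / stops / 2
print(per_half_stop)                 # ~13.7, i.e. about 14 steps per half stop
```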
So, there’s a trick. Just like in printing, we can diffuse the edges of each band very carefully by ‘dithering’ the pixels like an airbrush. The Canon Cinema range performs its magic in just an 8 bit space by doing a lot of ‘diffusion dithering’, and that can look gosh-darn close to film grain.
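Here’s a rough Python sketch of the principle – this isn’t Canon’s actual processing, just generic noise-before-quantisation dithering on a subtle ramp, with a light blur standing in for the eye averaging neighbouring pixels:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantise(x, levels=220):                    # ~220 usable 8 bit steps
    return np.round(x * (levels - 1)) / (levels - 1)

ramp = np.linspace(0.40, 0.45, 1920)            # a subtle, sky-like gradient
banded = quantise(ramp)                         # hard stair-steps
dither = rng.uniform(-0.5, 0.5, ramp.shape) / 219
dithered = quantise(ramp + dither)              # steps diffused into 'grain'

blur = np.ones(16) / 16                         # crude stand-in for the eye
for name, signal in (("banded", banded), ("dithered", dithered)):
    smooth = np.convolve(signal, blur, "same")[32:-32]
    err = np.abs(smooth - ramp[32:-32]).mean()
    print(name, round(err, 5))                  # dithered sits closer to the true ramp
```

The point: once dithered, the quantisation error behaves like fine noise rather than hard edges, so averaging over neighbouring pixels recovers something much closer to the original smooth ramp.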
Cameras such as the F5 use 10 bits per channel – so there are 1024 steps rather than about 220 – and therefore handle subtlety well. Alexa, BMCC and Epic operate at 12 bits per channel: 4096 steps between black and white for each channel. This provides plenty of space – or ‘data wriggle room’ – to move your tonality around in post and deliver a super-clean master file.
But as we’ve seen from the uploaded video – if web is your delivery, you’re faced with 4:2:0 colour and encoders that are out of your control.
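For a sense of what 4:2:0 costs, a couple of lines of Python arithmetic (the frame size is just an example):

```python
# In 4:2:0, chroma is stored at half resolution in both axes, so one
# colour sample has to cover a 2x2 block of pixels.
h, w = 1080, 1920
luma_samples = h * w
chroma_samples = (h // 2) * (w // 2)     # per chroma plane (Cb or Cr)
print(chroma_samples / luma_samples)     # 0.25 - a quarter of the colour detail
```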
The C100’s 8 bit AVCHD codec does clever things, including some noise reduction, and this may have skewed the results here. I’ll need to repeat the test with a 4:2:2 ProRes-type recorder, where no in-camera noise reduction is applied – other tests I’ve done have demonstrated that Neat Video prefers noisy 10 bit ProRes over half-denoised AVCHD. But I think that will just lead to a cleaner image, and that doesn’t necessarily help.
As perverse as it may seem, my little seek-and-destroy noise hunt has led to finding the best way to ADD noise.
Footnote: Like most large sensor cameras, the Canon C100 has a Bayer pattern sensor – pixels are arranged in groups of four in a 2×2 grid. Each group contains a red pixel sensor, a blue pixel sensor and two green ones. Green has twice the effective data, making it the better choice for chromakey. But perhaps that’s a different post.
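A tiny Python sketch of the idea (an RGGB layout is assumed here – vendors vary the exact arrangement):

```python
import numpy as np

# Tile the 2x2 Bayer group across a small frame and count photosites per
# channel - green ends up with twice as many samples as red or blue.
rows, cols = np.indices((4, 6))
mosaic = np.where((rows % 2 == 0) & (cols % 2 == 0), "R",
          np.where((rows % 2 == 1) & (cols % 2 == 1), "B", "G"))
print(mosaic)
for channel in "RGB":
    print(channel, (mosaic == channel).sum())   # R: 6, G: 12, B: 6
```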