C100 Chroma Subsampling – the fix

The C100’s AVCHD is a little odd – you may see ‘ghost interlace’ around strong colours in PsF video. AVCHD is 4:2:0 – the colour is recorded at a quarter of the resolution of the base image. Normally, our eyes aren’t too bothered by this, and most of the time nobody’s going to notice. However, the strong colours found in scenes common to event videography, and the ‘amplifying’ of colours during grading, both draw attention to this artefact.
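
To put numbers on that, here’s a minimal Python/numpy sketch – nothing camera-specific, just the 4:2:0 arithmetic:

```python
import numpy as np

# A hypothetical 1920x1080 frame: a full-resolution luma (Y) plane, plus Cb/Cr
# planes subsampled 2x horizontally and 2x vertically (4:2:0).
height, width = 1080, 1920
y_plane  = np.zeros((height, width), dtype=np.uint8)             # 2,073,600 samples
cb_plane = np.zeros((height // 2, width // 2), dtype=np.uint8)   #   518,400 samples
cr_plane = np.zeros((height // 2, width // 2), dtype=np.uint8)   #   518,400 samples

# Each chroma plane carries a quarter of the luma plane's samples, which is why
# strong, saturated edges look soft or blocky once the colour is stretched back up.
print(cb_plane.size / y_plane.size)   # 0.25
```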

Note that this problem is completely separate from the ‘Malign PsF’ problem discussed in another post, but as the C100 is the only camera that generates this particular problem in its internal recordings, I suspect that this is where the issue lies. I’ve never seen this in Panasonic or Sony implementations of AVCHD.

This is a 200% crop of a frame of some strongly coloured (but natural) objects; note the peculiar pattern along the diagonals – not quite stair-stepped as you might imagine.

Please click the images to view them at the correct size:

There are stripes at the edge of the red peppers, and their length denotes interframe movement. These artefacts illustrate that there’s some interlace going on even though the image is progressive.

Like ‘true’ interlacing artefacts, these stripey areas add extra ‘junk information’ which must be encoded and compressed when delivering video in web-ready formats. They waste bitrate and rob the image of crispness and detail. Reds are worst affected, but these issues crop up in any area of strong chrominance, including fabrics, graphics and stage/theatrical lighting.

Some have pointed the finger of blame at edit software, specifically Final Cut Pro X. I wondered if it was the way FCPX imported the .MTS files, so I rewrapped them with ClipWrap from Divergent Media. Version 2.6.7 no longer shows the problems I ran into with earlier versions, but the results are identical to FCPX’s own import:

For the sake of completeness, I took the footage through ClipWrap’s transcode process – still no change:

So the only benefit would be to older computers that don’t like handling AVCHD in its natural state.
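
For completeness, the same rewrap-versus-transcode distinction can be sketched with ffmpeg, driven here from Python – an assumption on my part (I used ClipWrap above, not ffmpeg), with placeholder filenames, and depending on the clip’s audio track you may need to convert it rather than copy it:

```python
import subprocess

# Rewrap: copy the AVCHD (H.264) stream into a QuickTime container untouched.
# Fast and lossless - the same idea as ClipWrap's rewrap mode.
subprocess.run(["ffmpeg", "-i", "clip.MTS", "-c", "copy", "clip_rewrapped.mov"],
               check=True)

# Transcode: decode and re-encode to ProRes 422 HQ (prores_ks profile 3).
# Much slower, and it cannot restore chroma that was never recorded in the first place.
subprocess.run(["ffmpeg", "-i", "clip.MTS",
                "-c:v", "prores_ks", "-profile:v", "3",
                "-c:a", "pcm_s16le", "clip_transcoded.mov"],
               check=True)
```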

To isolate the problem to the recording format rather than the camera, I also shot this scene on an external recorder using the Canon’s 4:2:2 HDMI output, recorded in ProRes 422 HQ. The colour information is far better, but note the extra noise in the image (the C100 applies noise reduction to its AVCHD recordings to help the efficiency of its encoding).

This is the kind of image one might expect from the Canon C300 which records 4:2:2 in-camera at 50 Mbits per second. Adding an external recorder such as the Atomos Ninja matches the C300’s quality. But let’s say you don’t have the option to use an external recorder – can the internal recordings be fixed?

RareVision make 5DtoRGB – an application that post-processes footage recorded internally in the 4:2:0-based H.264 and AVCHD codecs, and goes one step further by ‘smoothing’ (not just blurring) the chroma to soften the blockiness. In doing so, it fixes the C100’s AVCHD chroma interlace problem:

The results are a very acceptable midway point between the blocky (stripey) AVCHD and the better colour resolution of the ProRes HQ. Here are the settings I use – I’ll cover 5DtoRGB fully in a separate post.
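
I don’t know exactly which filter 5DtoRGB uses, but the difference between ‘blocky’ and ‘smoothed’ chroma can be sketched in a few lines of Python (numpy and scipy, purely as an illustration of the idea):

```python
import numpy as np
from scipy.ndimage import zoom

# A tiny, exaggerated 4:2:0 chroma plane (one sample per 2x2 block of luma pixels).
chroma = np.array([[ 60, 200],
                   [200,  60]], dtype=np.uint8)

# Blocky: nearest-neighbour replication - each chroma sample just fills a 2x2 block.
blocky = np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)

# Smoothed: bilinear interpolation back up to luma resolution.
# (An illustration only - not necessarily 5DtoRGB's actual algorithm.)
smoothed = zoom(chroma.astype(float), 2, order=1)

print(blocky)
print(np.round(smoothed).astype(np.uint8))
```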

The only key change is a switch from BT.601 to BT.709 (the former is for Standard Definition, the latter for all HD material; a newer standard, BT.2020, is available for 4K).
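
For reference, the two standards weight the colour channels differently when forming luma – this is standard colourimetry, nothing specific to 5DtoRGB – so decoding HD material through the 601 matrix subtly shifts the colours:

```python
import numpy as np

# Luma (Y') is a weighted sum of R', G' and B'; the weights differ between standards.
BT601_WEIGHTS = np.array([0.299, 0.587, 0.114])      # Standard Definition
BT709_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])   # High Definition

def luma(rgb, weights):
    """Return Y' for a normalised R'G'B' triple under the given standard."""
    return float(np.dot(weights, rgb))

# A saturated red produces noticeably different luma under each matrix, which is
# one reason decoding footage with the wrong matrix shifts its colours.
red = np.array([1.0, 0.0, 0.0])
print(luma(red, BT601_WEIGHTS), luma(red, BT709_WEIGHTS))   # 0.299 0.2126
```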

So why should you NOT process all your C100 rushes through 5DtoRGB?

It takes time. Processing a 37-second clip took 159 seconds (2 minutes 39 seconds) on my 2.3 GHz i7 MacBook Pro. Compare that with 83 seconds for ClipWrap to transcode, and only 6 seconds to rewrap (similar to Final Cut Pro’s import).

You will have to judge whether the benefit of shooting internally, with its significant transcode time, outweighs the cost of an external recorder and the inconvenience of using it. You may wish to follow my pattern: for the majority of my non-chromakey, fast-turnaround work I shoot internally, and only when I encounter difficult material do I transcode those files via 5DtoRGB.

I’ve also been investigating the use of a ‘denoiser’. It’s early days in my tests, but I’ve noticed that it effectively hides the ‘interlaced chroma’ stripe pattern:

This is not a panacea. Denoising is even more processor intensive – taking a long time to render. My early testing shows that you can under- and over-do it with unpleasant results, and that the finished result – assuming that you’re not correcting a fault, but preparing for a grade – doesn’t compress quite as well. It’s too slick, and therefore perversely needs some film grain on top. But that’s another post.

Canon C100 PsF – the fix


The Canon C100 produces a very nice, very detailed image just like its bigger brother, the C300. However, the C100 uses AVCHD as its internal codec and Canon have chosen (yet again) a slightly odd version of this standard that creates problems in Non Linear Edit software such as Premiere Pro and Final Cut Pro X (excellent article by Allan Tépper, ProVideo Coalition).

Unless you perform a couple of extra steps, you may notice that the images have aliasing artefacts – stair-steps on edges and around areas of detail.

PP6 – Edges before:

Here’s an example of the problem from within Adobe Premiere Pro, set to view the C100’s AVCHD footage at 200%. Note the aliasing around the leaves in the centre of the picture (click it to see a 1:1 view). Premiere has interpreted the progressive video as interlaced, and is ‘deinterlacing’ it by removing alternate lines of pixels and then ‘papering over the cracks’. It’s not very pretty.

PP6 – Interpret footage:

To cure this, we must tell Premiere that each 25PsF clip from the C100 really is progressive scan, and that it should lay off trying to fix something that isn’t broken. Control-click your freshly imported C100 clips and select ‘Modify’ from the pop-up menu, then select ‘Interpret Footage…’

Alternatively, with your clips selected, choose ‘Interpret Footage…’ from the ‘Clip –> Modify’ menu.

Modify Clip

In the ‘Modify Clip’ dialog, the ‘Interpret Footage’ pane is automatically brought to the front. Click on the ‘Conform to:’ button and select ‘No Fields (Progressive Scan)’ from the pop-up:

PP Edges after

Now your clips will display correctly at their full resolution.

Final Cut Pro X – before:

The initial situation looks much worse in FCPX, which seems to have a bit of an issue with C100 footage, even after the recent update to version 10.1.

Select imported clips

The key to the FCPX fix is to let FCPX completely finish importing AVCHD before you try to correct the interlace problem. If you continue with these steps whilst the footage is still importing, changes will not ‘stick’ – clicking off the clips to select something else will show that nothing has really changed. Check that all background tasks have completed before progressing.

First, select all your freshly imported C100 clips. Eagle-eyed readers may wonder why the preview icon is so bright and vivid whilst the example clips are tonally calmer – the five clips use different Custom Picture profiles.

Switch to Settings in Info tab

Bring up the Inspector if hidden (Command-4), and select the Info tab. In the bottom left of the Inspector, there’s a pop-up to show different Metadata views. Select Settings.

Change Field Dominance Override to Progressive

In the Settings view of the Info pane, you’ll find the snappily titled ‘Field Dominance Override’, where you can force FCPX to interpret footage as Progressive – which is what we want. Setting it as Upper First will cater for almost all interlaced footage except DV, which is Lower First. Setting it back to ‘None’ lets FCPX decide. We want ‘Progressive’.

Final Cut Pro X – after:

Now the video displays correctly.

The before & after:


So Long SD

I’ve spent the best part of today trawling through the raw footage that I and my colleagues have shot of a particular event over the last five years, and found myself musing at the astonishing jump in quality we’ve witnessed.

So I’m looking at DSR570 footage – this is standard definition, 16:9 DVCAM done right. Putting Z1 footage next to DSR570 material was like putting a Vesta meal next to a proper curry. A DSR570 in good hands will capture great things, and whilst not quite DigiBeta, the cameraman will make sure the images pop.

Now let’s roll in an EX1 shooting at 720p. It’s a third of the cost of a DSR570 with a good SD lens. It’s a little cocktail sausage of a camera – a big pointy stick through the middle of it would be a pleasing image to many people. It has a fixed factory lens on the front, and if you want to go wider or longer, you have to screw bits of bottle-bottom glass onto it.

But the difference in picture quality – gawd, even though we have to scale it down to SD, it just kicks the DSR570 and 450’s butt. Okay, so most of the DSR footage is interlaced – because that’s what we did in the last century. Sigh. Interlace brings with it such a heavy payload to web video that I never wish to deal with it again. As for the Z1? How did we let this camera get away with its soft lens, oversharpened image, poor light sensitivity and dull image?

Yes, I did say web video. We’re now in a position where web video is at full SD quality (approximately 960×540, which is a quarter of 1080p) on YouTube, and full-on 1280×720 over at Vimeo. We can make beautiful and iridescent video at these resolutions that make the old school DVCAM 2/3″ cameras look like pin-hole cameras. Because most cameras were never set up to capture the full exposure range they might have been capable of, and stuck slavishly to broadcast video specifications from several decades ago, their images have to be pulled through post like an A&E victim.

Okay, so I am hamming things up a little. De-interlacing, scaling up and applying some colour correction to add a little zip to the images is hardly open-heart surgery. It’s more like tipping a spoonful of Calpol in its gob and sending it back to PE. But SD is so ‘done’, so ‘over’, so ‘finished’. An EX1 makes images that a DSR570 was never designed to do.

So now I need to cajole all my cameraman and DoP friends to pick up a Sausage-On-A-Stick EX1(R) and treat it like the decent tool it is, not that horrible little box-brownie called the Z1. But that’s unfair. I have spent too long in 720p to take SD seriously any more. I’m about to launch into a proper grown-up film-out 1080 project too. After a couple of years in these lofty formats, Standard Definition – even PAL – seems just so… lame?

The Progressive Society – Pt 2

Right. That’s it. I’ve had it with interlacing.

It was a cool trick back in the days of Ye Olde Cathode Ray Tube and valves, but interlacing is hanging around like a bad smell in these days of LCD and Plasma displays.

  • Is this web page interlaced? No.
  • Are any computer screens interlaced? No.
  • Is a video projector interlaced? No.
  • Is your TV at home interlaced? Well..

Maybe yes, if you haven’t gone flat panel yet (guilty m’lud), but you’d be hard pressed to pick up an interlaced TV set of any sort of quality at your local TV store.

And herein lies the rub: putting interlaced video onto a progressive scan display device LOWERS the resolution – either by quite a bit (25%) or by a lot (50%). A lot of the cheaper LCD TVs simply chuck out every other field and double up what’s left. That’s why it’s cheap – or ‘Good Value’ – and why TV looks all fuzzy and horrible. Higher-end sets do some magic and scaling in hardware, but it’s not quite that beautiful, astonishing look you get when you work out how to feed a true progressive source into a progressive screen.
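
To be clear about what that cheap approach is doing, here’s a minimal Python/numpy sketch of it – not any particular TV’s firmware, just the ‘throw a field away and double up’ idea:

```python
import numpy as np

def cheap_deinterlace(frame):
    """The 'good value' approach: keep one field, throw the other away,
    and line-double what's left - half the vertical detail is gone."""
    top_field = frame[0::2, :]               # keep even-numbered lines only
    return np.repeat(top_field, 2, axis=0)   # double them back up to full height

# A 1080-line interlaced frame ends up with only 540 distinct lines of picture.
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
out = cheap_deinterlace(frame)
print(out.shape)                        # (1080, 1920) - but...
print(np.array_equal(out[0], out[1]))   # True: each line is simply duplicated
```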

But that’s exactly where television is moving. People are consuming audiovisual entertainment in places other than in front of the family screen. Web video, downloaded movies and the like are becoming the norm.

Now this is why I’m all hot under the collar: I’ve been producing progressive scan video for ‘data delivery’, and recently had cause to shoot a job in interlaced DV. Of course, it had things like captions in it, some graphics. It looks great on a PAL CRT monitor. But it was destined for a life on an intranet, played from within PowerPoint. It needed to be deinterlaced (the horror of the Mouse Teeth is still in working memory). And behold… the zing, the sharpness, the ineffable vim of the whole thing has been diluted.

When web movies were 320×240 and MPEG-1 files for PowerPoint weren’t much larger, the loss of resolution through deinterlacing wasn’t noticeable, but mark this well: web video has supersized. Measuring anywhere from 512×288 right up to 1280×720, there isn’t enough scaling down to hide the deinterlacing softness under the carpet.

So that’s it. It’s official.

I’m not shooting another frame of interlaced video.

The Progressive Society – Pt 1

I’m wrapping up a little trio of edits which involve bringing together existing footage from a variety of not-exactly-optimal sources (odd MPEG-1 files, WMVs, some DV, all 4:3) with some professionally shot 720p footage. Thank goodness for FCP’s multi-format timeline (and MPEG Streamclip for converting virtually anything into anything else).

Of course, I’ve been editing 720p pretty much exclusively for months now, so when some DV footage needed to be inserted, I faced… Cue dramatic chord: The Curse of the Mouse Teeth From Hell.

I guess that if you’ve only ever edited DV footage, especially if you edit your own material, you’ll just shrug your shoulders and wonder what I’m on about. But if you’ve ever dropped in a bit of NTSC into a PAL timeline, or scaled up some DV, you’ll have seen those unsightly blocky edges that suddenly appear around anything that moves. As if some monster rat has been nibbling away at your video.

We’re not talking about the blocky pixels from overworked compression or a bad tape dropout, we’re talking about bobbly edges on things that move fast in frame. It’s caused by interlaced video being stretched in a non-standard way – for example, NTSC being stretched from 720×480 to 720×576, or PAL being stretched up from 720×576 to 1280×720. The on-off-on-off cadence goes to pot, the fields are chosen in a ‘knit 3, purl 7’ way, and you get… video that’s been attacked by monster-mice whenever anything moves.
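
If you want to see why the cadence falls apart, here’s a rough numpy sketch – not what any particular NLE does internally, just the naive nearest-neighbour scale described above:

```python
import numpy as np

# Two fields captured 1/50 s apart; label every line with the field it came from.
field_a = np.zeros((288, 720), dtype=np.uint8)   # even lines, time t
field_b = np.ones((288, 720), dtype=np.uint8)    # odd lines,  time t + 1/50 s

woven = np.empty((576, 720), dtype=np.uint8)     # a PAL DV frame with fields interleaved
woven[0::2] = field_a
woven[1::2] = field_b

# Naive nearest-neighbour scale from 576 to 720 lines, as if the frame were progressive.
src_rows = np.round(np.linspace(0, 575, 720)).astype(int)
scaled = woven[src_rows]

# The strict A,B,A,B cadence is gone: some lines repeat, others vanish, so lines from
# different moments in time sit next to each other unevenly - hence the nibbled edges.
print(scaled[:12, 0])   # something like [0 1 0 0 1 0 1 0 0 1 0 1] instead of 0 1 0 1 ...
```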

Well, all this rodentry is only mentioned because I found a quick and dirty fix. Whilst not exactly perfect, it doesn’t rely on a quarter of a million quid’s worth of Snell & Wilcox Alchemist either.

Get the properties of your upscaled and interlaced footage, and set the clip’s field dominance to ‘none’. It gets scaled progressively, gaining some softness and quasi motion-blur at the expense of that crisp, video-like motion.