C100 noise – the fix

The Canon C100 is an 8 bit camera, so its images have ‘texture’ – a sort of electronic grain reminiscent of film. Most of the time this is invisible, or a pleasant part of the picture. In some situations, it can be an absolute menace. Scenes that contain large areas of gently grading tone pose a huge problem to an 8 bit system: areas of blue sky, still water, or in my case, a boring white wall of the interview room.

Setup

Whilst we set up, I shot some tests to help Alex tune his workflow for speed. It rapidly became obvious that we’d found the perfect shot to demonstrate the dangers of noise – and in particular, the C100’s occasional issue with a pattern of vertical stripes:

Click the images below to view the image at 1:1 – this is important – and for some browsers (like Chrome) you may need to click the image again to zoom in.

Due to the balance of the lighting (we couldn’t black the room out, couldn’t change rooms), we were working at 1250 ISO – roughly equivalent to adding 6dB of gain. So, I’m expecting a little noise, but not much.

Not that much. And remember, this is a still – in reality, it’s boiling away and drawing attention to itself.

It’s recommended to run an Auto Black Balance on a camera at the start of every shoot, or if the camera changes temperature (e.g. moving from indoors to outdoors). Officially, one should Auto Black Balance after every ISO change. An Auto Black Balance routine identifies the ‘static’ noise to the camera’s image processor, which will then do a better job of hiding it.

So, we black balanced the camera, and Alex took over the role of lit object.

There was some improvement, but the vertical stripes could still be seen. It’s not helped by being a predominantly blue background – we’re seeing noise mostly from the blue channel, and blue is notorious for being ‘the noisy weak one’ when it comes to video sensors. Remember that when you choose your chromakey background (see footnote).

The first thought is to use a denoiser – a plugin that analyses the noise pattern and removes it. The C100 uses some denoising in-camera for its AVCHD recordings, but in this case even the in-camera denoiser was swamped. Neat Video is a great noise reduction plug-in, available for many platforms and most editing software. I tried its quick and simple ‘Easy Setup’, which dramatically improved things.

But it’s not quite perfect – there’s still some mottling. In some respects, it’s done too good a job at removing the speckles of noise, leaving some errors in colour behind. You can fettle with the controls in advanced mode to fine tune it, but perversely, adding a little artificial monochrome noise helped a lot:

We noticed that having a little more contrast in the tonal transition seemed to strongly alter the noise pattern – less subtlety to deal with. I hung up my jacket as a makeshift cucoloris to see how the noise was affected by sharper transitions of tone.

So, we needed more contrast in the background – which we eventually achieved by lowering the ambient light in the room (two translucent curtains didn’t help much). But in the meantime, we tried denoising this, and playing around with vignettes. That demonstrated the benefit of more contrast – although the colour balance was hideous.

However, there’s banding in this – and when encoded for web playback, those bands will be ‘enhanced’ thanks to the way lossy encoding works.

We finally got the balance right by using Magic Bullet Looks to create a vignette that raised the contrast of the background gradient, did a little colour correction to help the skin tones, and even some skin smoothing.

The Issue

We’re cleaning up a noisy camera image and generating a cleaner output. Almost all of my work goes up on the web, and as a rule, nice clean video makes for better video than drab noisy video. However, super-clean denoised video can do odd things once encoded to H.264 and uploaded to a service such as Vimeo.

Furthermore, not all encoders were created equal. I tried three different types of encoder: the quick and dirty Turbo264, the MainConcept H.264 encoder that works fast with OpenCL hardware, and the open source but well respected X264 encoder. The latter two were processed in Episode Pro 6.4.1. The movies follow the story above; you can ignore the audio – we were just ‘mucking around’ checking stuff.

The best results came from Episode using X264

Here’s the same master movie encoded via MainConcept – although optimised for OpenCL, it actually took 15% longer than X264 on my MacBook Pro, and to my eyes seems a little blotchier.

Finally Turbo264 – which is a single pass encoder aimed at speed. It’s not bad, but not very good either.

Finally, a look at YouTube:

This shows that each service tunes its encoding to its target audience. YouTube seems to cater for noisy video, but doesn’t like strong action or dramatic tonal changes – as befits its more domestic uploads. Vimeo is trying very hard to achieve a good quality balance, but can be confused by subtle gradation. Download the uploaded masters and compare if you wish.

In Conclusion:

Ideally, one would do a little noise reduction, then add a touch of film grain to ‘wake up’ the encoder and give it something to chew on – flat areas of tone seem to make the encoding ‘lazy’. I ended up using Magic Bullet Looks yet again, pepping up the skin tones with Colorista, a little bit of Cosmo to cater for any dramatic makeup we may come across (no time to alter the lighting between interviewees), a vignette to hide the worst of the background noise, and a subtle amount of film grain. For our uses, it looked great both on the ProRes projected version and the subsequent online videos.

Here’s the MBL setup:

What’s going on?

There are, broadly speaking, three classes of camera recording: 8 bits per channel, 10 bits per channel and 12 bits per channel (yes, there are exotic 16 bit systems and beyond). There are three channels – one each for Red, Green and Blue. In each channel, the tonal range from black to white is split into steps. A 2 bit system allows 4 ‘steps’, as you can make 4 numbers by mixing up 2 ‘bits’ (00, 01, 10 and 11 in binary). So a 2 bit image would have black, dark grey, light grey and white. To make an image in colour, you’d have red, green and blue versions stacked up on top of each other.

8 bit video has, in theory, 256 steps each for red, green and blue. For various reasons, the first 16 steps are used for other things, and peak white happens at step 235, leaving the top 20 steps for engineering uses. So there are only about 220 steps between black and white. If that covers, say, 8 stops of brightness range, then a 0.5 stop difference in brightness spans only about 14 steps. That would create bands.
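
The arithmetic behind that claim can be sketched in a few lines of Python (the 8-stop range is an illustrative assumption, as in the text):

```python
# Back-of-envelope maths for 8 bit banding, as described above.
# Broadcast-range 8 bit video keeps code values 16 (black) to 235 (white).
def steps_per_half_stop(black=16, white=235, stops=8.0):
    usable = white - black      # ~219 usable code values
    return usable / stops / 2   # code values available per half stop

print(round(steps_per_half_stop()))  # roughly 14 steps for a 0.5 stop change
```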

So, there’s a trick. Just like in printing, we can diffuse the edges of each band very carefully by ‘dithering’ the pixels like an airbrush. The Canon Cinema range performs its magic in just an 8 bit space by doing a lot of ‘diffusion dithering’, and that can look gosh-darn like film grain.
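
As a toy illustration of why dithering hides banding (not how Canon’s image processor actually works – that’s proprietary), here’s a sketch that quantises a gradient with and without added noise:

```python
import random

LEVELS = 4  # a deliberately coarse 2 bit tonal range

def quantize(v, levels=LEVELS):
    """Snap a 0.0-1.0 tone onto a small number of discrete levels."""
    return round(v * (levels - 1)) / (levels - 1)

def quantize_dithered(v, levels=LEVELS, amount=1.0):
    # Add a little noise *before* quantising: the hard band edges
    # break up into speckle, and the speckle averages back towards
    # the true gradient when viewed at a distance.
    noise = (random.random() - 0.5) * amount / (levels - 1)
    return quantize(min(max(v + noise, 0.0), 1.0), levels)

gradient = [i / 999 for i in range(1000)]
plain = [quantize(v) for v in gradient]              # four flat bands
dithered = [quantize_dithered(v) for v in gradient]  # grain-like speckle
```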

Cameras such as the F5 use 10 bits per channel – so there are 1024 steps rather than about 220, and therefore handle subtlety well. Alexa, BMCC and Epic operate at 12 bits per channel – 4096 steps between black and white for each channel. This provides plenty of space – or ‘data wriggle room’ to move your tonality around in post, and deliver a super-clean master file.

But as we’ve seen from the uploaded video – if web is your delivery, you’re faced with 4:2:0 colour and encoders that are out of your control.

The C100 with its 8 bit AVCHD codec does clever things, including some noise reduction, and this may have skewed the results here. I will need to repeat the test with a 4:2:2 ProRes type recorder, where no noise reduction is used – other tests I’ve done have demonstrated that Neat Video prefers noisy 10 bit ProRes over half-denoised AVCHD. But I think this will just lead to a cleaner image, and that doesn’t necessarily help.

As perverse as it may seem, my little seek-and-destroy noise hunt has led to finding the best way to ADD noise.

Footnote: Like most large sensor cameras, the Canon C100 has a Bayer pattern sensor – pixels are arranged in groups of four in a 2×2 grid. Each group contains a red pixel sensor, a blue pixel sensor and two green ones. Green has twice the effective data, making it the better choice for chromakey. But perhaps that’s a different post.

C100 Chroma Subsampling – the fix

The C100’s AVCHD is a little odd – you may see ‘ghost interlace’ around strong colours in PsF video. AVCHD is 4:2:0 – the resolution of the colour is a quarter of the resolution of the base image. Normally, our eyes aren’t so bothered by this, and most of the time nobody’s going to notice. However, the stronger colours found in scenes common to event videography, and ‘amplified’ colours during grading, all draw attention to this artifact.
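
That ‘quarter resolution’ figure is just sample-counting – a quick sketch, using a 1920×1080 frame as the example:

```python
# Sample-count arithmetic for the common chroma subsampling schemes.
def chroma_fraction(scheme):
    # Horizontal and vertical decimation factors for each J:a:b scheme:
    # 4:2:2 halves chroma horizontally; 4:2:0 halves it both ways.
    factors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    fx, fy = factors[scheme]
    return 1 / (fx * fy)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    samples = int(1920 * 1080 * chroma_fraction(scheme))
    print(f"{scheme}: {samples} chroma samples per channel")
# 4:2:0 carries one quarter of the colour samples of the base image.
```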

Note that this problem is completely separate from the ‘Malign PsF’ problem discussed in another post, but as the C100 is the only camera that generates this particular problem in its internal recordings, I suspect that this is where the issue lies. I’ve never seen this in Panasonic or Sony implementations of AVCHD.

This is a 200% frame of some strongly coloured (but natural) objects, note the peculiar pattern along the diagonals – not quite stair-stepped as you might imagine.

Please click the images to view them at the correct size:

There are stripes at the edge of the red peppers, and their length denotes interframe movement. These artefacts illustrate that there’s some interlace going on even though the image is progressive.

Like ‘true’ interlacing artefacts, these stripey areas add extra ‘junk information’ which must be encoded and compressed when delivering video in web ready formats. These are wasting bitrate and robbing the image of crispness and detail. Reds are most affected, but these issues crop up in areas of strong chrominance including fabrics, graphics and stage/theatrical lighting.

Some have pointed the finger of blame at edit software, specifically Final Cut Pro X. I wondered if it was the way FCPX imported the .MTS files, so I rewrapped them in ClipWrap from Divergent Media. In version 2.6.7, I’ve yet to experience the problems I experienced in earlier versions, but the actual results seem identical to FCPX:

For the sake of completeness, I took the footage through ClipWrap’s transcode process – still no change:

So the only benefit would be to older computers that don’t like handling AVCHD in its natural state.

To isolate the problem to the recording format rather than the camera, I also shot this scene on an external recorder using the Canon’s 4:2:2 HDMI output, recorded in ProRes 422 HQ. The colour information is far better, but note the extra noise in the image (the C100 includes noise reduction for its AVCHD recordings to help the efficiency of its encoding).

This is the kind of image one might expect from the Canon C300 which records 4:2:2 in-camera at 50 Mbits per second. Adding an external recorder such as the Atomos Ninja matches the C300’s quality. But let’s say you don’t have the option to use an external recorder – can the internal recordings be fixed?

RareVision make 5DtoRGB – an application that post-processes footage recorded internally in the 4:2:0 based H.264 and AVCHD codecs, and goes one further step by ‘smoothing’ (not just blurring) the chroma to soften the blockiness. In doing so, it fixes the C100’s AVCHD chroma interlace problem:

The results are a very acceptable midway point between the blocky (stripey) AVCHD and the better colour resolution of the ProRes HQ. Here are the settings I use – I’ll cover 5DtoRGB fully in a separate post.

The only key change is a switch from BT.601 to BT.709 (the former is for Standard Definition, the latter for all HD material; a newer standard covers 4K).

So why should you NOT process all your C100 rushes through 5DtoRGB?

It takes time. Processing a 37 second clip took 159 seconds (2 mins 39 seconds) on my i7 2.3 GHz MacBook Pro. Compare that with 83 seconds for ClipWrap to transcode, and only 6 seconds to rewrap (similar to Final Cut Pro’s import).
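
Those timings work out to the following real-time multipliers (my own arithmetic from the figures above):

```python
# Real-time multipliers for the 37 second test clip quoted above.
clip_seconds = 37
timings = {
    "5DtoRGB transcode": 159,
    "ClipWrap transcode": 83,
    "ClipWrap rewrap": 6,
}
for name, secs in timings.items():
    print(f"{name}: {secs / clip_seconds:.1f}x real time")
```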

You will have to judge whether the benefit of shooting internally, with the significant transcode time, outweighs the cost of an external recorder and the inconvenience of using it. You may wish to follow my pattern: for the majority of non-chromakey, fast turnaround work I shoot internally, and only when I encounter difficult situations do I transcode those files via 5DtoRGB.

I’ve also been investigating the use of a ‘denoiser’. It’s early days in my tests, but I’ve noticed that it effectively hides the ‘interlaced chroma’ stripe pattern:

This is not a panacea. Denoising is even more processor intensive – taking a long time to render. My early testing shows that you can under- and over-do it with unpleasant results, and that the finished result – assuming that you’re not correcting a fault, but preparing for a grade – doesn’t compress quite as well. It’s too slick, and therefore perversely needs some film grain on top. But that’s another post.

Canon C100 PsF – the fix


The Canon C100 produces a very nice, very detailed image just like its bigger brother, the C300. However, the C100 uses AVCHD as its internal codec and Canon have chosen (yet again) a slightly odd version of this standard that creates problems in Non Linear Edit software such as Premiere Pro and Final Cut Pro X (excellent article by Allan Tépper, ProVideo Coalition).

Unless you perform a couple of extra steps, you may notice that the images have aliasing artifacts – stair steps on edges and around areas of detail.

PP6 – Edges before:

Here’s an example of the problem from within Adobe Premiere Pro, set to view the C100’s AVCHD footage at 200%. Note the aliasing around the leaves in the centre of the picture (click it to see a 1:1 view). Premiere has interpreted the progressive video as interlaced, and is ‘deinterlacing it’ by removing alternate lines of pixels and then ‘papering over the cracks’. It’s not very pretty.

PP6 – Interpret footage:

To cure this, we must tell Premiere that each 25PsF clip from the C100 really is progressive scan, and it should lay off trying to fix something that isn’t broken. Control-click your freshly imported C100 clips and select ‘Modify’ from the pop-up menu, then select ‘Interpret Footage…’

Alternatively, with your clips selected, choose ‘Interpret Footage…’ from the ‘Clip –> Modify’ menu.

Modify Clip

In the ‘Modify Clip’ dialog, the ‘Interpret Footage’ pane is automatically brought to the front. Click on the ‘Conform to:’ button and select ‘No Fields (Progressive Scan)’ from the pop-up:

PP Edges after

Now your clips will display correctly at their full resolution.

Final Cut Pro X – before:

The initial situation looks much worse in FCPX, which seems to have a bit of an issue with C100 footage, even after the recent update to version 10.1.

Select imported clips

The key to the FCPX fix is to let FCPX completely finish importing AVCHD before you try to correct the interlace problem. If you continue with these steps whilst the footage is still importing, changes will not ‘stick’ – clicking off the clips to select something else will show that nothing has really changed. Check that all background tasks have completed before progressing.

First, select all your freshly imported C100 clips. Eagle-eyed readers may wonder why the preview icon is so bright and vivid whilst the example clips are tonally calmer. The five clips use different Custom Picture profiles.

Switch to Settings in Info tab

Bring up the Inspector if hidden (Command-4), and select the Info tab. In the bottom left of the Inspector, there’s a pop-up to show different Metadata views. Select Settings.

Change Field Dominance Override to Progressive

In the Settings view of the Info pane, you’ll find the snappily titled ‘Field Dominance Override’, where you can force FCPX to interpret footage as Progressive – which is what we want. Setting it as Upper First will cater for almost all interlaced footage except DV, which is Lower First. Setting it back to ‘None’ lets FCPX decide. We want ‘Progressive’.

Final Cut Pro X – after:

Now the video displays correctly.

The before & after:

 

Three Wise Men at Prokit

It’s the morning after the day before – Prokit’s amazing ‘Three Wise Men’ event. A great crowd, lots of interesting discussion.

The general idea was to bring together Andy Bell’s lighting workshop looking at 1-up and 2-up interviews, Chris Woolf’s in-depth investigation into wind and mechanical noise, and my own round-up: a quick look at lighting for chromakey, specifically matching the background plate.

Andy’s hands-on workshops are a lot of fun – ‘what can you do with two lamps and a given programme style?’ but with the constraints of time and space, Andy took us through lighting plots that moved a step or two beyond ‘3 point’ setups plus a few little tricks too.

I hope to see a recorded version of Chris’s presentation soon, as I only caught bits of it from the back. Chris is Rycote’s Technical Consultant, and therefore the depth of detail was great – definitely required taking notes, though.

There were two bits of information in my presentation that I promised to publish post-event and so here they are:

Firstly, GreenScreener – a very useful app for iOS and Android from Per Holmes and Hollywood Camerawork:

http://www.hollywoodcamerawork.us/gs_index.html

It’s so important to get an even light on the background whilst not pouring on too much, and this app makes it falling-off-a-log easy to adjust lamp position, intensity and flagging to make it work.

Secondly, a great video from Eve Hazelton and the team from Realm Pictures, plus a follow-up on keying in AfterEffects with its built-in Keylight plug-in (similar in scope to the FCPX keyer we demonstrated on the day).

https://vimeo.com/34365256

Richard Payne from Holdan had brought a very new toy in – still in beta. This was a ‘live’ chromakey solution, with HD-SDI, HDMI or DVI inputs for foreground and background, plus output. We’d got it working behind the scenes, but it developed ‘demo nerves’ just as we were setting up and so we skipped it. However, initial tests looked really good, so hopefully we’ll get to do a more in-depth chromakey piece which includes both live and post keying another time.

For what it’s worth – and probably the most valuable part of this post? How to pack away your Lastolite pop-up background:

POV’s 2013 Documentary Filmmaking Equipment Survey

Whilst the number of respondents is a bit too low to be truly representative, POV’s survey does paint an interesting picture of the Documentarist’s world. It’s still a ‘buy’ rather than ‘rent’ market, for the most part in love with Canon’s DSLRs and lenses.

However, there are a couple of splits I wanted to see that aren’t here. Firstly, the split by sensor size: what has happened to 2/3″, and what proportion are now S35? Secondly, and somewhat related, body design. There still seems to be plenty of room for ‘the little black sausage of joy’ – the fixed lens, all-in-one camera with a wide-ranging parfocal zoom.

Yes, the Mac dominates in Docco editing. I boggle slightly at the FCP7 market – twice that of all the Premiere Pro flavours. FCP7 used to bog down with over 35-40 mins in a timeline, and for larger projects I’d have expected a larger takeup of Premiere Pro.

Still, at least Gaffer Tape makes it into the top 5 ‘things we love’ list.

POV’s 2013 Documentary Filmmaking Equipment Survey

Creating the Dance of the Seven Veils

Unboxing videos are an interesting phenomenon.

They don’t really count as ‘television’ or ‘film’ – in fact they’re not much more than a moving photo or even diagram. But they are part of the mythos of the launch of a new technical product.

I’ve just finished my first one – and it was ‘official’ – no pressure, then.

I first watched quite a few unboxing videos. This was, mostly, a chore. It was rapidly apparent that you need to impart some useful information to the viewer to keep them watching. Then there was the strange pleasure in ‘unwrapping’ – you have to become six years old all over again, even though – after a couple of decades of doing this – you’re more worried about what you’re going to do with all the packaging and when you can get rid of it.

So… to build the scene. The box to be unpacked was quite big. Too big for my usual ‘white cyclorama’ setup. I considered commandeering the dining room, but it was quite obvious that unless I was willing to work from midnight until six, that wasn’t going to happen. I have other work going on.

So it meant the office. Do I go for a nice Depth of Field look and risk spending time emptying the office of the usual rubbish and kibble? Or do I create a quiet corner of solitude? Of course I do. Then we have to rehearse the unpacking sequence.

Nothing seems more inopportune than suddenly scrabbling at something that won’t unwrap, or unfold, or just doesn’t look gorgeous. So, I have to unwrap with the aim of putting it all back together again – more than perfectly. I quickly get to see how I should pack things so they unpack nicely. I note all the tricks of the packager’s origami.

So, we start shooting. One shot, live, no chance to refocus/zoom, just keep the motion going.

I practice and practice picking up bundles of boring cables and giving them a star turn. I work out the order in which to remove them. I remember every item in each tray. Over and over again.

Only two takes happened without something silly happening – and after the second ‘reasonable’ take, I was so done. But still, I had to do some closeups and some product shots. Ideally, everything’s one shot, but there are times when a cutaway is just so necessary, and I wish I’d got more.

Learning Point: Film every section as a cutaway after you do a few good all-in-one takes.

Second big thing, which I kinda worked out from the get-go. Don’t try and do voiceover and actions. We’re blokes, multitasking doesn’t really work. It’s a one taker and you just need to get the whole thing done.

Do you really need voiceover, anyway? I chickened out and used ‘callout’ boxes of text in the edit. This was because I had been asked to make this unboxing video and to stand by for making different language versions – dubbing is very expensive, transcription and translation for subtitles can be expensive and lead to lots and lots of sync issues (German subs are 50% more voluminous than English subtitles and take time to fit in).

So, a bunch of call-out captions could be translated and substituted pretty easily. Well, that’s the plan.

Finally, remember the ‘call to action’ – what do you want your viewers to do having watched the video? Just a little graphic to say ‘buy here’ or ‘use this affiliate coupon’ and so on. A nod to the viewer to thank them for their attention.

And so, with a couple of hundred views in its first few hours of life, it’s not a Fenton video, but it’s out there stirring the pot. I’d like to have got more jokes and winks in there, but the audience likes these things plain and clear. It was an interesting exercise, but I’m keen to learn the lessons from it. Feedback welcomed! What do you want from an Unboxing Video?

BVE2013 – Did the dead cat just bounce?

Accountants have a lovely phrase – even a dead cat will bounce if it’s dropped from high enough. The world of video has been feeling the pinch for a few years now, but today – wandering the halls of the Broadcast Video Expo now in its new home – maybe it bounced back. People were smiling, feeling a little more confident. A real tonic to the system.

On the negative side, there was talk of how video clients were acutely aware of the cheapening of tools and how budgets were so squeezed. On the other hand, there was a genuine feeling of ‘democratisation’ in the markets I’ve frequented. On one hand ‘clients don’t feel comfortable with work-at-home editors’ but big names will now admit to ‘colour correcting on an iMac’. Clients may raise an eyebrow to DaVinci – ‘that’s the free software, right?’ but Grading and Colorists are back in the game. Just need to get our audio back in the limelight too. Broadcast is making it all very tricky again.

The big 800lb gorillas of the broadcast industry are not quite so dominant (!!) – but then, maybe it’s more telling that the show is now far more indie/corporate-friendly. I remember the BVE show was almost hostile to the corporate market. The visitors I met seemed to be 90% indie/corporate. Maybe birds of a feather stick together, but I definitely felt ‘amongst friends’ here.

Maybe that’s the grass roots poking through. Now we’re hearing the parables of Netflix commissioning its own series and Google investing in content – a new round of broadcasters that the Web Generation of videographers and the avant garde of broadcast are taking to heart.

So are there new releases and excellent toys? Yes. The DJI Phantom stand – quadcopters with GoPros and NEX-7s on gimbal heads – was astonishingly busy. Queues to touch and feel the Sony F5 and Black Magic Cine Camera, Nikon out in force with a nod to ebullient Atomos, the Rode SmartLav (snark!) is in demo (tip – the Rode Lav is actually more interesting) and there’s a litany of distractions and shiny things…

Speaking of which, I got to see some lovely lamps. Dedo has a booth where you can play with the new line of LED fittings – the 20W ‘son of LEDzilla’ particularly caught my eye. Small, neat, flexible, and can chuck light long distances. The only trouble is, so Teutonic is this company that ‘they’re not quite ready yet’ – and have been for a while. LEDs can be odd beasties; the broadcast industry has said how LEDs ‘should’ work, but having worked with lesser LEDs and suffered challenges with skin tones, I’ll be looking forward to lamps with a true and fair rendition of skin tone.

Sad to find that there were a few companies I wanted to meet that weren’t here. But conversely, good to visit a show that can’t be swallowed in a day, let alone an afternoon.

3D isn’t here really. This year has a decidedly British take on 4K (jolly nice! Isn’t it doing well! Now, about HD…). If you need it, it’s here. If you think you need it, plenty of people to give you both sides. There’s a whole 4K pavilion, but it’s a separate side show. Another area which I felt sorry for was DVD duplication and its ilk. Vimeo and YouTube have their faults, they drive me nuts, but the concept of burning DVDs seems a little ‘Standard Def’ – and even BluRay seems a little difficult to justify.

If you can get along (this is a self-selecting audience, I know) do try the seminars. You’ll have to queue a bit, or suffer the standing, but unlike other years I’ve not been left out in the cold and there are some great presentations. Hopefully some will make it onto the web (a few are up already).

I have my take-aways from today, some I want to keep for myself, some I’m not sure make sense until I go again, but the biggest take home was the positive sleeves-rolled-up attitude of the people here. Just when many thought of upping stumps and retreating to the pavilion, there are clients out there who need video professionals who get great results because they’re good at what they do (whether on free software or high end systems).

So whilst I don’t feel we’re in recovery mode, maybe the bottom was scraped a while back and the bounce has happened. I’ll learn more on Thursday. If you can make the time to drop in on BVE, it should cheer you up if nothing else.

The perfect camera is now spaghetti sauce.

Howard Moskowitz proved (and Malcolm Gladwell presented – which is fun, watch it!) that there is no single perfect spaghetti sauce. If you want to ‘own’ a market, you must divide and rule. This is the current strategy of Sony’s camcorder division as yet another brace of cameras is launched.

Essentially, once upon a time, a cameraman made a decision to become an ‘owner-operator’ – to invest in their own kit to increase their profitability. Video equipment is notoriously expensive – an entire industry has been built up to make a profit hiring kit to production companies and Directors of Photography. A digibeta camcorder – the mainstay of the television industry – will cost the same as a high end executive saloon car, with much higher running costs. To make the jump from renting to owning signifies that you have the clients and the diary full of bookings that says ‘I have passed from Journeyman to Master’ as your monthly work shows that it’s more cost effective to lease your equipment and hire it back to your clients rather than put equipment hire as a ‘line item’ cost on your invoices. If you don’t recognise the subtlety of that statement, take your accountant out to lunch. If your accountant is good enough, then the cost of the lunch is a mere single digit percentage of what you’ll save over your first year. I married my accountant, you may choose a slightly less drastic path.

So, where were we? Oh yes: buying cameras.

We’d look at the market, the requirements, the costs, and end up buying a camera that would last at least 10 years. A few $k for repairs and realignment, a few $k for insurance, a few $k for depreciation, and you’re done. The camera will keep you going for 8-12 years. We’re happy with that. It sucks a bit, there’s the cost of feeding your accountant, but hey. It all works out.

AND THEN IT ALL CHANGED!

Suddenly we’re in a maelstrom of change: cameras are no longer big fat sausages of technology, there’s a bit at one end made of glass, there’s a bit at the other end that stores stuff, and the bit in the middle is just a computer that crunches pixels instead of spreadsheets.

Suddenly, we must think about the glass up front – that defines our look, glass is the new film stock. Waddaya mean, changing the lens changes the look?

Suddenly, we must think about the storage at the back end – that defines our post. We can take quick and dirty 4:2:0 8 bit AVCHD, or we can chew for hours on Red raw files. We can record to SDHC or SSD, for 7 hours or 10 minutes. Become an IT guru, or else hand over your rushes to computer geeks and hope – they’re doing more with less than your JobFit clapper loader who’s got a better understanding of the value in that there solid state device. You can now swallow your day’s rushes without a glass of water, and you’re handing it over to a spotty youth who has been raised in a world with Command-Z.

Suddenly, we think about our choice and have a major middle-aged moment: Who Moved My Camera? (if you don’t know the horror in that phrase, you have been freelance for too long).

Here’s the beef (and it’s a life lesson):

There is no perfect camera. You must own several different cameras. Just like you must do several different jobs, you must be several different people to your children and you cannot live on one meal for ever:

  • You will own a little GoPro-style camera – a disposable, go-anywhere, waterproof time-lapse machine.
  • You will own a DSLR to shoot where you shouldn’t shoot – and darn! It shoots great stills.
  • You will own a ‘handy cam’ with insane steady shot and an XLR bridge to shoot where you can’t shoot.
  • You will own a Big Sensor Camera With Interchangeable Lenses to shoot sexy looking stuff.
  • You will own a Black Sausage of Joy Camera to shoot when you really have no idea what’s going to happen.
  • You will own a 4K 12 bit Raw camera for chromakey, talking heads, beauty shots and stuff for 10 years ahead.

And yes, you may even own a big, bad-ass, gas-guzzling Alexa, Red or F55. Or you rent that one after all.

The point is… there is no ‘one’ camera. It’s a collection, a family, a grab-bag, a menagerie of cameras. Call yourselves what you want – videographers, one-man-bands, DoPs, Director/Editors, we’re all going to need more than one camera to make a business plan. To say a PMW500 would make us all happy is to not get it.

If you want as big a share of the spaghetti sauce market as possible, you must have solutions for those who like it runny, chunky, herby, hearty, gloomy and spicy – and whatever else the market wants. If you want to meet your clients’ expectations, you’ll need DSLR, raw, timelapse, underwater/crappy-weather, discreet, showy, small, large and so on.

We all have to own lots of cameras. Tell your accountants. Yay!

Preparing Setups with Shot Designer

Following on from their line of successful filmmaking tutorials for directors, Per Holmes and the Hollywood Camera Work team have launched their new app for iOS/Android and Mac/Windows – Shot Designer.

This is a ‘blocking’ tool – a visual way of mapping out ‘who or what goes where, does what and when’ in a scene, and where cameras should be to pick up the action. For a full intro to the craft of blocking scenes, from interviews to action scenes, check out the DVDs. Blocking diagrams can be – and often are – scribbled out on scraps of paper, but Shot Designer makes things neat, quick, shareable via Dropbox, and *animated*. A complex scene on paper can become a cryptic mashup of lines and circles; Shot Designer shows character and camera moves in real time or in steps.

You can set up lighting diagrams too – using common fittings including KinoFlos, 1x1s, large and small fresnels, and populate scenes with scenery, props, cranes, dollies, mic booms and so on – all in a basic visual language familiar to the industry and just the sort of heart-warming brief that crews like to see before they arrive on set.

Matt's 2-up setup

My quick example (taking less time than it would to describe over a phone) is a simple 2-up talking-head discussion. The locked-off wide is matched with two cameras which can either get a single close-up on each or, if shifted, a nice over-the-shoulder shot. A couple of 800W fresnels provide key and backlight, but need distance and throw to make this work (if too close to the talent, the ratio of backlight to key will be too extreme), so the DoP I send this to may recommend HMI spots – which will mean the 4-lamp Kino in front will need daylight bulbs. So we’ll probably set up width-wise in the as-yet un-recced room – but you get the idea: we have a plan.

Operationally, Shot Designer is quick to manipulate and is ruthlessly designed for tablet use, but even sausage fingers can bash together a lighting design on an iPhone. There’s a highlighter mode so you can temporarily scribble over your diagram whilst explaining it. The software is smart, too – you can link cameras so that you don’t ‘cross the line’, and cameras can ‘follow’ targets. It builds a shot list from your moves, so you can check your coverage before you wrap and move to the next scene.

Interestingly, there’s a ‘Director’s Viewfinder’ that’s really handy: Shot Designer knows the camera in your device (and if it doesn’t, you can work it out), so you can pinch and zoom to get your shot size and read off the focal length for anything from an AF101 or 5D Mk 3 to an Arri Alexa – other formats (e.g. EX1R or Black Magic Cinema Camera) will be added to the list over time. Again, this is an ideal recce tool, letting you know in advance about lens choice and even camera choice.

This really is not a storyboard application – Per Holmes goes to great lengths to stress that storyboarding can push you down a prescribed route in shooting and can be cumbersome when things change, whereas the ‘block and stage’ method of using multiple takes or multiple cameras gives you far more to work with in the ‘third writing stage’ of editing. You can incorporate your storyboard frames, or any images – even ones taken on your device – and associate them with cameras. That’s handy for everything from a recce to referencing previous shots to match a house style, communicating the oft-tricky idea of negative space, keeping continuity and so on. However, future iterations of Shot Designer are planned to include a 3D view – not in the ‘pre-viz’ style of something like iClone or FrameForge, but a clear and flexible tool for use whilst in production.

There is a free ‘single scene’ version, and a $20 licence covers unlimited scenes across all platforms – but check their notes on store policy: buy the mobile version first to get a cross-over licence to the desktop app, as the rules say that if you buy the desktop app first you’ll still be forced to buy the mobile version.

Shot Designer may appear to be for narrative filmmaking, but the block and stage method helps set up for multicam, and a minute spent on blocking and staging any scene, from wedding to corporate to indie production, is time well spent. The ability to move from Mac or PC app to iPad or Android phone via Dropbox to share diagrams and add notes is a huge step forward from the paper napkin or ‘knocked up in PowerPoint’ approach. It will even make a great ‘shot notebook’ to communicate what the director wants to achieve.

Just for its shareability and speed at knocking up lighting and setup diagrams, Shot Designer is well worth a look, even at $20 for the full-featured version. Combine that with its blocking-and-staging and planning capabilities, and it’s a great tool for the Director, DoP and even (especially) a videographer on a recce.

Edit: For those of us who haven’t bought an iPad yet – this might be the ‘killer app’ for the iPad mini…

Dealing with 109% whites – the footage that goes to 11

Super-whites are a quick way of getting extra latitude and avoiding the video tell-tale of burned-out highlights, by allowing brighter shades to be recorded above the ‘legal’ 100% of traditional video. However, it’s come to my attention that some folk may not be reaping the advantages of superwhites – or are even finding footage ‘blown out’ in the highlights where the cameraman is adamant that the zebras said it was fine.

So, we have a scale where 0% is black and 100% is white. 8 bit video assigns numbers to brightness levels, but contains wriggle room: given the Magic Numbers of Computing, 0–255, you’d assume black starts at 0 and white ends at 255. Alas not. Black starts at 16, and white ends at 235. Superwhites use the extra room from 235 to 255 to squeeze in a little more latitude, which is great.
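In fact, the much-quoted 109% figure falls straight out of those numbers. A minimal Python sketch (the function name is mine) of the 8 bit scale:

```python
# 8 bit 'video range' levels: black = code 16, white = code 235.
# Codes 235-255 are the superwhite headroom above legal white.

def code_to_percent(code: int) -> float:
    """Map an 8 bit luma code value onto the 0-100% video scale."""
    return (code - 16) / (235 - 16) * 100

print(round(code_to_percent(16)))   # black -> 0
print(round(code_to_percent(235)))  # legal white -> 100
print(round(code_to_percent(255)))  # top of the superwhite headroom -> 109
```

So a camera that records right up to code 255 is writing levels of roughly 109% – which is exactly what we’ll see peeping over the line in a waveform monitor.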

But that’s on the camera media. Once you get into the edit software, you need to make sure you obey the 100% white law. And that’s where things go a bit pear shaped.

If you can excuse my laundry, here’s a shot with 109% whites – note them peeping up above the 100% line in the waveform monitor:

(Click the images below to get a full view)

01-fs100_into_fcpx-2012-07-12-13-55.jpg

Note also that the fluffy white clouds are blown – there’s ugly detail snapping from pale blue to white around them. Although I exposed this so it only just reached 109%, the monitor shows us the 100% view, so it’s overexposed as far as the editor’s concerned.

So in my NLE – in this case, Final Cut Pro X – I drop the exposure down, and everything sits nicely on the chart. I could pull up the blacks if necessary…

02-fcpx_drops_luma-2012-07-12-13-55.jpg

But I’ve been told about an app called 5DtoRGB, which pre-processes your 109% superwhite footage to 100% as it converts to ProRes:

03-5dtorgb_into_fcpx-2012-07-12-13-55.jpg

Note that while it is indeed true that the whites are brought down to under 100%, the blacks are still quite high and, in my opinion, will require pulling down. 5DtoRGB takes a lot longer to process its ProRes files – I’ve had reports of 10x longer than FCP7 Log & Transfer, but I’ve not tested this myself.

I did some tests in Adobe Premiere CS6, which demonstrate the problem. We start with our NATIVE AVCHD clip, with whites happily brushing the 109% limit as before. These are just 1s and 0s, folks. It should look identical – and it does. Info in the Waveform Monitor, blown whites in the viewer.

Another technical note: the FCPX Waveform Monitor talks about 100% and 0%, but Adobe’s WFM uses the ‘voltage’ metaphor – analogue video signals were ‘one volt high’, but 0.3 volts were given over to timing signals, so the remaining 0.7 volts were used to go from black (0.3 volts) to white (1 volt). So 0.3 = black in Adobe’s WFM. And another thing – I’m from a PAL country, and never really got used to NTSC in analogue form. If I remember correctly, blacks weren’t exactly at 0.3 volts (IRE=0) – they were raised for some reason to IRE=7.5, thus proving that NTSC, with its drop frames, 29.97 fps, error-prone colour phase and the rest, should be buried in soft peat and recycled as firelighters. But I digress. Premiere:

06-premier_start-2012-07-12-13-55.jpg
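Before we start pulling levels about, it’s worth pinning down how those two WFM scales relate. A quick sketch (function name mine; it ignores NTSC’s 7.5 IRE pedestal) mapping video percent onto Adobe’s voltage metaphor:

```python
# Adobe's WFM voltage metaphor: 0.3 V = black, 1.0 V = legal white,
# with the bottom 0.3 V of the one-volt signal reserved for timing.

def percent_to_volts(percent: float) -> float:
    """Map the 0-100% video scale onto the 0.3-1.0 V analogue scale."""
    return 0.3 + (percent / 100) * 0.7

print(percent_to_volts(0))              # black -> 0.3
print(round(percent_to_volts(100), 3))  # legal white -> 1.0
print(round(percent_to_volts(109), 3))  # superwhites -> 1.063
```

So when FCPX says 100%, Adobe’s WFM says 1 volt – and our 109% superwhites sit just above the top of the scale in both.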

Let’s get our Brightness and Contrast control out to bring the 109s down to 100:

08-premier_bright-2012-07-12-13-55.jpg

Hold on a tick – we haven’t adjusted ANYTHING, and Premiere has run a chainsaw along the 100% line. That white detail has been removed until you remove the filter – you can’t get it back whilst the Brightness & Contrast filter is applied. Maybe this isn’t the right tool for the job, but you’d think it would do something – not just clip straight away?

I tried Curves:

09-premier-curve-2012-07-12-13-55.jpg

It’s tricky, but you can pull down the whites – and it’s not pretty. Look how the WFM has a pattern of horizontal lines – that’s nastiness being added to your image. The highlights are being squashed, but you can’t bring your blacks down.

So finally, I found ‘ProcAmp’ (an old-fashioned term for a Processing Amplifier – we had these in analogue video days). This simply shifts everything down to the correct position without trying to be clever:

10-premier-procamp-2012-07-12-13-55.jpg

At last. We have our full tonality back, and under our control.
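The difference between the two approaches is easy to demonstrate. Here’s an illustrative sketch (assumed behaviour, not Premiere’s actual processing): a hard clip at 100% destroys superwhite detail, while a simple linear rescale – one way of mapping 109% back down to 100% – keeps the tonal steps intact:

```python
# Three distinct levels of highlight detail sitting above legal white:
superwhites = [102.0, 105.0, 109.0]

# Brightness & Contrast-style hard clip at the 100% line:
clipped = [min(v, 100.0) for v in superwhites]

# ProcAmp-style linear rescale of 109% down to 100%:
scaled = [v * 100.0 / 109.0 for v in superwhites]

print(clipped)           # [100.0, 100.0, 100.0] - detail mashed flat
print(len(set(scaled)))  # 3 - all three tonal steps survive, under 100%
```

That surviving detail is the whole point of shooting superwhites in the first place.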

With all these issues, and probably some misunderstanding about 109%, I can see the appeal of something safe and quick: the new FS700 cinegammas include CineGamma 2, which only allows 100% whites, ditto Rec709 in the FS100. But forewarned is forearmed.

I donate the last 9% of my brightness range to specular highlights and the last shreds of detail in the sky, so I can have that ‘expensive film look’ of rolled-off highlights. But if I didn’t haul them back into the usable range of video, all that stuff would appear as ugly burned-out blobs of white. However, I also spent a bit of time testing this out when I switched from FCP7 to FCPX, as the former took liberties with gamma so you could get away with things. The bugs in FCPX and Magic Bullet made me check and check again.

It’s been worth it.