C100 Chroma Subsampling – the fix

The C100’s AVCHD is a little odd – you may see ‘ghost interlace’ around strong colours in PsF video. AVCHD is 4:2:0 – the colour is stored at a quarter of the resolution of the base image. Normally our eyes aren’t much bothered by this, and most of the time nobody will notice. However, the strong colours common to event videography, and any ‘amplifying’ of colours during grading, draw attention to this artifact.
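As a rough illustration of what 4:2:0 means in numbers – this is a generic sketch, not anything camera-specific – the chroma planes of a 1080p frame work out like this:

```python
# Sketch: chroma plane dimensions under common subsampling schemes,
# showing why 4:2:0 carries only a quarter of the colour samples.
def chroma_plane(width, height, scheme):
    """Return (chroma_width, chroma_height) for one chroma plane."""
    if scheme == "4:4:4":
        return width, height            # full colour resolution
    if scheme == "4:2:2":
        return width // 2, height       # half horizontal resolution
    if scheme == "4:2:0":
        return width // 2, height // 2  # halved in both directions
    raise ValueError(scheme)

w, h = 1920, 1080
cw, ch = chroma_plane(w, h, "4:2:0")
print((cw * ch) / (w * h))  # 0.25 – a quarter of the luma sample count
```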

Note that this problem is completely separate from the ‘Malign PsF’ problem discussed in another post, but as the C100 is the only camera that generates this particular problem in its internal recordings, I suspect that this is where the issue lies. I’ve never seen this in Panasonic or Sony implementations of AVCHD.

This is a 200% frame of some strongly coloured (but natural) objects; note the peculiar pattern along the diagonals – not quite the stair-stepping you might imagine.

Please click the images to view them at the correct size:

There are stripes at the edge of the red peppers, and their length denotes interframe movement. These artefacts illustrate that there’s some interlace going on even though the image is progressive.

Like ‘true’ interlacing artefacts, these stripey areas add extra ‘junk information’ which must be encoded and compressed when delivering video in web-ready formats, wasting bitrate and robbing the image of crispness and detail. Reds are most affected, but these issues crop up in any area of strong chrominance, including fabrics, graphics and stage/theatrical lighting.

Some have pointed the finger of blame at edit software, specifically Final Cut Pro X. I wondered if it was the way FCPX imported the .MTS files, so I rewrapped them in ClipWrap from Divergent Media. In version 2.6.7, I’ve yet to encounter the problems I had with earlier versions, but the actual results seem identical to FCPX:

For the sake of completeness, I took the footage through ClipWrap’s transcode process – still no change:

So the only benefit would be to older computers that don’t like handling AVCHD in its natural state.

To isolate the problem to the recording format rather than the camera, I also shot this scene on an external recorder using the Canon’s 4:2:2 HDMI output and recorded in ProRes 422 HQ. The colour information is far better, but note the extra noise in the image (the C100 applies noise reduction to its AVCHD recordings to help the efficiency of its encoding).

This is the kind of image one might expect from the Canon C300 which records 4:2:2 in-camera at 50 Mbits per second. Adding an external recorder such as the Atomos Ninja matches the C300’s quality. But let’s say you don’t have the option to use an external recorder – can the internal recordings be fixed?

RareVision make 5DtoRGB – an application that post-processes footage recorded internally in the 4:2:0-based H.264 and AVCHD codecs, and goes one step further by ‘smoothing’ (not just blurring) the chroma to soften the blockiness. In doing so, it fixes the C100’s AVCHD chroma interlace problem:

The results are a very acceptable midway point between the blocky (stripey) AVCHD and the better colour resolution of the ProRes HQ. Here are the settings I use – I’ll cover 5DtoRGB fully in a separate post.

The only key change is a switch from BT.601 to BT.709 (the former is for Standard Definition, the latter for all HD material; a newer standard, BT.2020, covers 4K).
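For the curious, the two standards differ in (among other things) their luma coefficients, which is why decoding with the wrong matrix subtly shifts colours – a minimal sketch:

```python
# BT.601 (SD) and BT.709 (HD) derive luma from RGB with different weights,
# so footage decoded with the wrong matrix shows subtle colour shifts.
BT601 = (0.299, 0.587, 0.114)     # Kr, Kg, Kb
BT709 = (0.2126, 0.7152, 0.0722)  # Kr, Kg, Kb

def luma(rgb, coeffs):
    """Luma (Y') of a normalised RGB triple under the given matrix."""
    return sum(c * k for c, k in zip(rgb, coeffs))

pure_red = (1.0, 0.0, 0.0)
# The same red carries noticeably different luma under each standard:
print(luma(pure_red, BT601), luma(pure_red, BT709))  # 0.299 0.2126
```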

So why should you NOT process all your C100 rushes through 5DtoRGB?

It takes time. Processing a 37 second clip took 159 seconds (2 mins 39 seconds) on my i7 2.3 GHz MacBook Pro. Compare that with 83 seconds for ClipWrap to transcode, and only 6 seconds to rewrap (similar to Final Cut Pro’s import).
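Put another way – my timings, one clip, one machine, so treat these as ballpark figures:

```python
# Ballpark processing speeds from the timings quoted above (37 s source clip).
def per_second_of_footage(clip_seconds, process_seconds):
    """Seconds of processing needed per second of footage."""
    return process_seconds / clip_seconds

CLIP = 37  # seconds of source footage
for name, secs in [("5DtoRGB", 159), ("ClipWrap transcode", 83), ("ClipWrap rewrap", 6)]:
    # 5DtoRGB works out around 4.3x real time; a rewrap is nearly free.
    print(f"{name}: {per_second_of_footage(CLIP, secs):.1f}x real time")
```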

You will have to judge whether the benefits of shooting internally, with the significant transcode time, outweigh the cost of an external recorder and the inconvenience of using it. You may wish to follow my pattern: for the majority of non-chromakey, fast-turnaround work I shoot internally, and only when I encounter difficult situations do I transcode those files via 5DtoRGB.

I’ve also been investigating the use of a ‘denoiser’. It’s early days in my tests, but I’ve noticed that it effectively hides the ‘interlaced chroma’ stripe pattern:

This is not a panacea. Denoising is even more processor intensive – taking a long time to render. My early testing shows that you can under- and over-do it with unpleasant results, and that the finished result – assuming that you’re not correcting a fault, but preparing for a grade – doesn’t compress quite as well. It’s too slick, and therefore perversely needs some film grain on top. But that’s another post.

Canon C100 PsF – the fix


The Canon C100 produces a very nice, very detailed image just like its bigger brother, the C300. However, the C100 uses AVCHD as its internal codec and Canon have chosen (yet again) a slightly odd version of this standard that creates problems in Non Linear Edit software such as Premiere Pro and Final Cut Pro X (excellent article by Allan Tépper, ProVideo Coalition).

Unless you perform a couple of extra steps, you may notice that the images have aliasing artifacts – stair steps on edges and around areas of detail.

PP6 – Edges before:

Here’s an example of the problem from within Adobe Premiere Pro, set to view the C100’s AVCHD footage at 200%. Note the aliasing around the leaves in the centre of the picture (click it to see a 1:1 view). Premiere has interpreted the progressive video as interlaced, and is ‘deinterlacing it’ by removing alternate lines of pixels and then ‘papering over the cracks’. It’s not very pretty.

PP6 – Interpret footage:

To cure this, we must tell Premiere that each 25PsF clip from the C100 really is progressive scan, and that it should lay off trying to fix something that isn’t broken. Control-click your freshly imported C100 clips and select ‘Modify’ from the pop-up menu, then select ‘Interpret Footage…’

Alternatively, with your clips selected, choose ‘Interpret Footage…’ from the ‘Clip –> Modify’ menu.

Modify Clip

In the ‘Modify Clip’ dialog, the ‘Interpret Footage’ pane is automatically brought to the front. Click on the ‘Conform to:’ button and select ‘No Fields (Progressive Scan)’ from the pop-up:

PP Edges after

Now your clips will display correctly at their full resolution.

Final Cut Pro X – before:

The initial situation looks much worse in FCPX, which seems to have a bit of an issue with C100 footage, even after the recent update to version 10.1.

Select imported clips

The key to the FCPX fix is to let FCPX completely finish importing AVCHD before you try to correct the interlace problem. If you continue with these steps whilst the footage is still importing, changes will not ‘stick’ – clicking off the clips to select something else will show that nothing has really changed. Check that all background tasks have completed before progressing.

First, select all your freshly imported C100 clips. Eagle-eyed readers may wonder why the preview icon is so bright and vivid whilst the example clips are tonally calmer. The five clips use different Custom Picture profiles.

Switch to Settings in Info tab

Bring up the Inspector if hidden (Command-4), and select the Info tab. In the bottom left of the Inspector, there’s a pop-up to show different Metadata views. Select Settings.

Change Field Dominance Override to Progressive

In the Settings view of the Info pane, you’ll find the snappily titled ‘Field Dominance Override’, where you can force FCPX to interpret footage as Progressive – which is what we want. Setting it as Upper First will cater for almost all interlaced footage except DV, which is Lower First. Setting it back to ‘None’ lets FCPX decide. We want ‘Progressive’.

Final Cut Pro X – after:

Now the video displays correctly.

The before & after:

 

FCPX upgraded to 10.1

Okay, I admit it. On the stroke of midnight, I was pressing the refresh button in the App Store. New FCPX! New Toys!

So – FCPX 10.1 is out. Do I need to upgrade? Yes – there are enough changes that address current issues. But it requires a major jump in operating system – when your computer is your major money-earning tool and it’s stable and reliable, you don’t touch it unless you have to. I have to switch from Lion 10.7.5 to Mavericks 10.9, and that’s a big leap.

TLDR?

  • Build a new bootable drive with FCPX 10.1 to experiment on – you may not want to update yet
  • Clone your drive to a bootable image (to return to in weeks and months to come) with SuperDuper or Carbon Copy Cloner and make a fresh Time Machine backup – make sure they all work before you proceed!
  • Copy older projects to work with, don’t use originals – Philip Hodgetts has made EventManager X free! Use it to manage your project updates.
  • Prepare for some ‘spilled milk’ with Mavericks (for me, Exchange email is broken)

At first, I thought it odd that Apple released FCPX 10.1 so close to a major worldwide holiday, but on reflection – it’s perfect. Rule 1 of upgrades: never upgrade during a job. Things can go wrong, backups and archives invariably take more time than you thought, and what if it’s all horrible and you need to backtrack? A minor ‘point-oh-one’ upgrade can be a welcome relief, but this is a ‘point-one’, and it needs an OS upgrade to boot (pun not intended).

The safest option for me is (having backed up your main machine of course) to unwrap a brand new hard drive, format it and install the latest OS on it, then boot from THAT. Install the new software on the fresh OS, and play with COPIES of older projects that you copied across. New versions of software often change the file format and rarely is it back-compatible. You want to play in a protected ‘sand-box’ (I preferred ‘sandpit’ but hey…) so you don’t accidentally convert your current projects to the new system and find yourself committed to the switch.

Really, that is the safest way – but it’s frustrating, as the performance of a system booted from an external drive isn’t quite what you’re used to, and it’s a bit clunky. Plus, it will take time to do the official switch – you’ll have to rebuild your apps, delete old versions that don’t work, sort out new workflows and new versions, reinstall, find licence agreements – it all takes time (and it’s not billable for freelancers). But until you’re sure that the new OS won’t kill your current must-use apps, you can simply shut down, unplug, and return to your current safe system.

Then of course there’s the impatient teenager in all of us who, after backing up, installs the new OS on top of the old OS, downloads the new app, finds what’s broken in the rest of the system and fixes it, finds out that a few tools don’t work, plug-ins need shuffling, projects don’t render as they used to, fonts have gone missing… All this takes longer, funnily enough. And then there’s the creeping rot of a brand new operating system ‘installed in place’ over the old one. I did this ages ago, and the problems didn’t show until 12 months on and we’d gone through some minor version changes and bug fixes. Serious, serious problems that impacted work (and backups, and archives). If you’re jumping from 10.X to 10.Y (especially to 10.Z) it’s worth the time it takes to do a proper clean install.

And of course once it’s done, you still may need to be able to go back to the ‘old’ system – so you’ll need to clone – not back up or archive but CLONE – your old system before you start, if only for the comfort factor of running back to it when the new system refuses to do something.

So, I’m spending the first day having to NOT download the update, but format drives, archive disks, install software whilst reading and watching the sudden deluge of 10.1 info. (Note to self – Matt: don’t touch that button! Don’t do it!)

Alex4D has a bunch of links to get you started, training from Ripple and Larry Jordan (hopefully IzzyVideo will have some new stuff soon too), FCP.co discussion forums already alight with debate… and a week or two of holiday season to enjoy it all in.

(And Apple’s official take)

POV’s 2013 Documentary Filmmaking Equipment Survey

Whilst the number of respondents is a bit too low to be truly representative, POV’s survey does paint an interesting picture of the documentarist’s world. It’s still a ‘buy’ rather than ‘rent’ market, for the best part in love with Canon’s DSLRs and lenses.

However, there are a couple of splits I wanted to see that aren’t here. Firstly, the split by sensor size: what has happened to 2/3″, and what proportion are now S35? Secondly, and somewhat related, body design. There still seems to be plenty of room for ‘the little black sausage of joy’ – the fixed-lens, all-in-one camera with a wide-ranging parfocal zoom.

Yes, the Mac dominates in docco editing. I boggle slightly at the FCP7 market – twice that of all the Premiere Pro flavours. FCP7 used to bog down with over 35–40 minutes in a timeline, and for larger projects I’d have expected a larger take-up of Premiere Pro.

Still, at least Gaffer Tape makes it into the top 5 ‘things we love’ list.

POV’s 2013 Documentary Filmmaking Equipment Survey

Turbo.264 HD – a quick and dirty guide for Mac based editors

Turbo.264 HD by Elgato is a Mac application sold as a consumer solution to help transform tricky formats like AVCHD into something more manageable. Rather than deal with professional formats like Apple ProRes, it uses H.264, a widely accepted format that efficiently stores high quality video in a small space. For given values of ‘quality’ and ‘small’, that is.

For the professional video editor, a common requirement is to create a version of their project for upload to services like Vimeo and YouTube. Whilst this can be achieved in-app with some edit software, not all do it at the quality required, and many tie up the computer until the process is complete – which can take a while.

So, enter Turbo.264 HD – a ‘quick and dirty’ compressor that can batch-process movies and gives you access to the H.264 controls that are key to making Vimeo/YouTube movies that stay in sync and perform well. It’s very simple in operation. The following guide will help you make your own presets for use with Vimeo and YouTube.

A quick and dirty guide for editors and videographers

First steps

Two Quicktime movies have been dropped onto the panel. Both are using custom presets created earlier. Click on the Format popup to select a preset, or add your own.

First steps

Vimeo/YouTube preset for Client Previews

Lots of presets have been built already in this copy of Turbo.264 HD – not just for the web but for iPad and iPhone use, even portrait (9:16) video. This guide will concentrate on two in particular.

Firstly, the Vimeo 720p version for client previews. This assumes that your master video will be in a high quality HD format such as 1080p ProRes, with 48 kHz audio and progressive scan.

Clicking the ‘+’ button bottom left makes a new profile you can name. There’s a base Profile to work from that you select from the Profile pop-up at the top on the right hand side. For the Vimeo preset, the ‘HD 720p’ profile is used.

Next, adjust the settings as indicated. We don’t want to use the Upload service (as privacy settings may need individual attention), and the Audio settings can stay at automatic. The Other tab has basic switches for subtitles, chapters and Dolby sound if they are part of the movie, and can be left alone.

Vimeo/YouTube preset for Client Previews

Sending HD video via the internet

The second preset is useful when you need to send high quality material via the internet in an emergency. File formats such as ProRes are ideal for editing, but use a large amount of space. H.264 can incorporate very high quality in a much smaller file size, but the files are difficult to edit or play back in this state. However, they can be transcoded back to ProRes for editing.

Sending HD video via the internet

The benefits and drawbacks of sending H.264 over ProRes

This preset does lower the quality by an almost imperceptible amount, and the original files should be sent via hard disk if possible. However when you need a quick turnaround under challenging circumstances (for example, a wifi internet connection in a hotel or coffee shop), this preset can help.

For example, a 2 minute 42 second ProRes clip uses 2.6 GB of disk space. The original clip, shot on AVCHD at 1080p25, was 462 MB. However, using the H.264 settings below, the result was 101 MB with virtually no visible loss of quality. Over a 2 Mbps internet connection, that’s nearly three hours for the ProRes file, around half an hour for the AVCHD file and under seven minutes for the H.264 file.
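The back-of-envelope sum, assuming the link actually sustains its rated 2 Mbps (real connections rarely do, so treat these as best-case figures):

```python
# Best-case transfer times at a sustained 2 Mbps for the file sizes above.
def transfer_minutes(size_mb, link_mbps):
    """Minutes to send size_mb megabytes over a link_mbps megabit/s link."""
    return size_mb * 8 / link_mbps / 60

for name, mb in [("ProRes", 2600), ("AVCHD original", 462), ("H.264 re-encode", 101)]:
    print(f"{name}: {transfer_minutes(mb, 2):.0f} min")
```

The ProRes figure comes out near three hours; the H.264 re-encode squeaks in under seven minutes.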

And finally…

Hitting the start button kicks off the batch, and processed movies retain the original file name with a .mp4 extension. You can see that this 25fps 1080p movie is encoding at almost 28 fps – a little faster than real time. The ‘minutes remaining’ estimate starts a little crazily, then settles down. You can leave it running while you edit, but it will slow a little. When there’s no resizing and little compression, it can run twice as fast as real time (depending on the speed of your Mac).
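That frame-rate readout translates directly into wall-clock time – a quick sketch (the 30-minute programme length is illustrative):

```python
# Encode time follows from the fps readout: duration * source_fps / encode_fps.
def encode_minutes(duration_min, source_fps, encode_fps):
    """Minutes to encode duration_min of source_fps material at encode_fps."""
    return duration_min * source_fps / encode_fps

# A hypothetical 30-minute 25fps programme encoding at 28 fps:
print(f"{encode_minutes(30, 25, 28):.1f} min")  # 26.8 min – faster than real time
```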

Remember, this is just a quick and dirty method of turning around client previews – I often have ‘batches’ to do: 6–12 movies of 3 minutes each, or a couple of 20–30 minute interview select reels with burned-in timecode. I pump them all through Turbo.264 rather than Episode Pro as – due to the high bitrate – you’re not going to see much difference.
When it comes to the final encode, a professional encoding solution such as Telestream Episode, with x264 as a replacement H.264 encoder, will generate the best results.

Creating the Dance of the Seven Veils

Unboxing videos are an interesting phenomenon.

They don’t really count as ‘television’ or ‘film’ – in fact they’re not much more than a moving photo or even diagram. But they are part of the mythos of the launch of a new technical product.

I’ve just finished my first one – and it was ‘official’ – no pressure, then.

I first watched quite a few unboxing videos. This was, mostly, a chore. It was rapidly apparent that you need to impart some useful information to the viewer to keep them watching. Then there was the strange pleasure in ‘unwrapping’ – you have to become six years old all over again, even though – after a couple of decades of doing this – you’re more worried about what you’re going to do with all the packaging and when you can get rid of it.

So… to build the scene. The box to be unpacked was quite big – too big for my usual ‘white cyclorama’ setup. I considered commandeering the dining room, but it was quite obvious that unless I was willing to work from midnight until six, that wasn’t going to happen. I have other work going on.

So it meant the office. Do I go for a nice Depth of Field look and risk spending time emptying the office of the usual rubbish and kibble? Or do I create a quiet corner of solitude? Of course I do. Then we have to rehearse the unpacking sequence.

Nothing seems more inopportune than suddenly scrabbling at something that won’t unwrap, won’t unfold, or doesn’t look gorgeous. So I have to unwrap with the aim of putting it all back together again – more than perfectly. I quickly get to see how I should pack things so they unpack nicely. I note all the tricks of the packager’s origami.

So, we start shooting. One shot, live, no chance to refocus/zoom, just keep the motion going.

I practice and practice picking up bundles of boring cables and giving them a star turn. I work out the order in which to remove them. I remember every item in each tray. Over and over again.

Only two takes happened without something silly happening – and after the second ‘reasonable’ take, I was so done. But still, I had to do some closeups, and some product shots. Ideally, everything’s one shot, but there are times when a cutaway is just so necessary, and I wish I got more.

Learning Point: Film every section as a cutaway after you do a few good all-in-one takes.

Second big thing, which I kinda worked out from the get-go. Don’t try and do voiceover and actions. We’re blokes, multitasking doesn’t really work. It’s a one taker and you just need to get the whole thing done.

Do you really need voiceover, anyway? I chickened out and used ‘callout’ boxes of text in the edit. This was because I had been asked to make this unboxing video and to stand by for making different language versions – dubbing is very expensive, transcription and translation for subtitles can be expensive and lead to lots and lots of sync issues (German subs are 50% more voluminous than English subtitles and take time to fit in).

So, a bunch of call-out captions could be translated and substituted pretty easily. Well, that’s the plan.

Finally, remember the ‘call to action’ – what do you want your viewers to do having watched the video? Just a little graphic to say ‘buy here’ or ‘use this affiliate coupon’ and so on. A nod to the viewer to thank them for their attention.

And so, with a couple of hundred views in its first few hours of life, it’s not a Fenton video, but it’s out there stirring the pot. I’d like to have got more jokes and winks in there, but the audience likes these things plain and clear. It was an interesting exercise, but I’m keen to learn the lessons from it. Feedback welcomed! What do you want from an Unboxing Video?

BVE2013 – Did the dead cat just bounce?

Accountants have a lovely phrase – even a dead cat will bounce if it’s dropped from high enough. The world of video has been feeling the pinch for a few years now, but today – wandering the halls of the Broadcast Video Expo now in its new home – maybe it bounced back. People were smiling, feeling a little more confident. A real tonic to the system.

On the negative side, there was talk of how video clients were acutely aware of the cheapening of tools and how budgets were so squeezed. On the other hand, there was a genuine feeling of ‘democratisation’ in the markets I’ve frequented. On one hand ‘clients don’t feel comfortable with work-at-home editors’ but big names will now admit to ‘colour correcting on an iMac’. Clients may raise an eyebrow to DaVinci – ‘that’s the free software, right?’ but Grading and Colorists are back in the game. Just need to get our audio back in the limelight too. Broadcast is making it all very tricky again.

The big 800lb gorillas of the broadcast industry are not quite so dominant (!!) – but then, maybe it’s more telling that the show is now far more indie/corporate-friendly. I remember when the BVE show was almost hostile to the corporate market. The visitors I met seemed to be 90% indie/corporate. Maybe birds of a feather flock together, but I definitely felt ‘amongst friends’ here.

Maybe that’s the grass roots poking through. Now that we’re hearing the parables of Netflix commissioning its own series and Google investing in content, there’s a new round of broadcasters that the Web Generation of videographers and the avant garde of broadcast are taking to heart.

So are there new releases and excellent toys? Yes. The DJI Phantom stand – quadcopters with GoPros and NEX-7s on gimbal heads – was astonishingly busy. Queues to touch and feel the Sony F5 and Black Magic Cine Camera, Nikon out in force with a nod to ebullient Atomos, the Rode SmartLav (snark!) is in demo (tip – the Rode Lav is actually more interesting) and there’s a litany of distractions and shiny things…

Speaking of which, I got to see some lovely lamps. Dedo has a booth where you can play with the new line of LED fittings – the 20W ‘son of LEDzilla’ particularly caught my eye. Small, neat, flexible, and it can chuck light long distances. The only trouble is, so Teutonic is this company that ‘they’re not quite ready yet’ – and have been for a while. LEDs can be odd beasties, and the broadcast industry has said how LEDs ‘should’ work, but having worked with lesser LEDs and suffered challenges with skin tones, I’ll be looking forward to lamps with true and fair rendition of skin tone.

Sad to find that there were a few companies I wanted to meet that weren’t here. But conversely, good to visit a show that can’t be swallowed in a day, let alone an afternoon.

3D isn’t here really. This year has a decidedly British take on 4K (jolly nice! Isn’t it doing well! Now, about HD…). If you need it, it’s here. If you think you need it, plenty of people to give you both sides. There’s a whole 4K pavilion, but it’s a separate side show. Another area which I felt sorry for was DVD duplication and its ilk. Vimeo and YouTube have their faults, they drive me nuts, but the concept of burning DVDs seems a little ‘Standard Def’ – and even BluRay seems a little difficult to justify.

If you can get along (this is a self-selecting audience, I know) do try the seminars. You’ll have to queue a bit, or suffer the standing, but unlike other years I’ve not been left out in the cold and there are some great presentations. Hopefully some will make it onto the web (a few are up already).

I have my take-aways from today, some I want to keep for myself, some I’m not sure make sense until I go again, but the biggest take-home was the positive, sleeves-rolled-up attitude of the people here. Just when many thought of upping stumps and retreating to the pavilion, there are clients out there who need video professionals who get great results because they’re good at what they do (whether on free software or high-end systems).

So whilst I don’t feel we’re in recovery mode, maybe the bottom was scraped a while back and the bounce has happened. I’ll learn more on Thursday. If you can make the time to drop in on BVE, it should cheer you up if nothing else.

HD-SDI Embedding

On a recent job, I had a chance to work with the Atomos Samurai – a recorder that creates either ProRes or DNxHD files from HD-SDI video, rather than the more consumerist (but just as good) HDMI signals I usually deal with. I have, for the last few years, eschewed the extra expense of HD-SDI kit in favour of ‘that will do nicely’ HDMI, but I think I’ve found a good business case for re-thinking that.*

The job was to record the vision-mixed feed from an Outside Broadcast truck filming an awards ceremony. We had, in fact, each of the 5 cameras recording to AJA KiPros, but there was a need for two copies of the finished programme to go to two separate editors (myself and Rick, as it happens, working on two entirely separate edits) as soon as the event finished – even the time spent copying from the KiPro drive to another disk would have taken too long. So we added Rick’s Samurai to the chain.

We learned a couple of interesting things in preparation for the job.

The first is ‘how to reliably power a Samurai’ – its neat little case doesn’t include a mains adaptor, although it will happily run for hours on Sony NP-F style batteries (you can A-B roll the batteries too, changing one whilst the unit runs off the second). However, I didn’t want to have to think about checking batteries – I wanted to switch it to record, then switch it off at the end of the gig, as I had other things to worry about (cutting 5 cameras, after shooting ‘Run & Gun’ style all day).


The Samurai (and Ninja) can be powered off a Sony ‘Dummy Battery’ supplied with Sony battery chargers and some camcorders. Plug the dummy battery in, connect it to the charger and switch to ‘Camera’ mode and behold – one mains powered Samurai.

The second point is thanks to Thomas Stäubli (OB truck owner) and Production Manager Arndt Hunschok who set up the audio in a very clever way which gave me a unique opportunity to fix the edit’s music tracks.

Unlike HDMI, HD-SDI has 8 audio tracks embedded in the signal. The sound engineer kindly split his mix into 4 stereo groups: a mixed feed, audio from the presenter microphones, audio from directional microphones pointing at the audience (but away from the PA speakers), and a clean music feed.

The practical upshot was that I was able to edit several versions of the 90 minute awards ceremony (30, 8 and 3 minute versions) without the music, then re-lay the music stings (from its clean feed, or replace with licensed alternatives for the DVD version) where appropriate, thus producing a very slick result and saving a lot of time and hair pulling (or sad compromises) in the edit suite.

Technically, the Samurai footage came straight in and ready to edit with its 8 audio tracks in frame accurate sync (of course). I was able to slice it up and do a pre-mix of the required tracks.

In the past, this has been a bit of a nightmare. This time, it was easy to take audio from the stage and play with the timings for music cues.

A short technical note: be it HDMI or HD-SDI, your picture is made up of 1s and 0s, and so there’s no technical difference in quality if fed from the same source**. However, the audio is interesting. Most of the time, shooting indie films or simple corporates, you’re not going to need lots of separated tracks. When it comes to live performances or panel debates, however, the 8 tracks of HD-SDI can significantly offset the extra cost of the technology by saving time in the edit suite. Well worth a conversation with your Technical Director or supplier to sort out the ‘sub-mixes’ (separating your audio feed into channels) and ‘embedding’ (entwining the audio channels into the HD-SDI feed).
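A sketch of the kind of submix layout described above – four stereo pairs in one HD-SDI feed. The channel numbering here is purely illustrative; agree the real layout with your sound engineer:

```python
# Illustrative 8-channel HD-SDI submix: four embedded stereo pairs.
submix = {
    "mix":        (1, 2),  # full programme mix
    "presenters": (3, 4),  # presenter microphones
    "audience":   (5, 6),  # directional mics aimed at the audience
    "music":      (7, 8),  # clean music feed
}

def channels_for(groups):
    """Embedded channels to enable in the edit for the chosen submix groups."""
    return sorted(ch for g in groups for ch in submix[g])

# Cut a speech-only premix first, then re-lay music from its clean pair:
print(channels_for(["presenters", "audience"]))  # [3, 4, 5, 6]
print(channels_for(["music"]))                   # [7, 8]
```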

It’s odd that this hasn’t occurred to me before – the facility has been there, but perhaps it’s that last bit of kit – the ‘HD-SDI Audio Embedder’ available from suppliers like Black Magic Design and AJA – that’s been hiding its light under a bushel. As such, it is probably the least sexy item on one’s shopping list. Not the sort of thing that crops up for the journeyman videographer, but just the sort of thing when specifying the larger jobs with rental kit.

So, note to self: when dealing with complex audio, remember HD-SDI Audio Embedders, HD-SDI recorders.

And again, my thanks to Thomas Stäubli and Arndt Hunschok for their assistance and patience.


* One of the main business cases for HD-SDI (and good old SDI before that) was that it uses the standard BNC connector that has been the main ‘video’ connector in the broadcast industry. The BNC connector has a rotating cuff around the plug that locks it into the socket so it doesn’t accidentally get pulled out (much as XLRs do). HDMI – and its horrible mutated midget bastard offspring ‘Mini-HDMI’ – can work its way loose and pop out of a socket with sickening ease, thus any critical HDMI-connected kit usually has a heavily guarded ‘exclusion zone’ round it where no mortals are allowed to tread, and sometimes bits of gaffer tape just to make sure – in fact, there is a portion of the ‘aftermarket video extras’ industry that makes brackets designed to hold such cables into cameras and recorders. And, at risk of turning a footnote into an article, SDI/HD-SDI travels over ordinary 75 Ohm coax over long distances, unlike the multicore short lengths of overpriced HDMI cables. So, yes, HD-SDI makes sense purely from a connector point of view.

** Notwithstanding the 4:4:4:4 recorders from Convergent Design and now Sound Devices. Basically, a 1.5G HD-SDI signal carrying a 10 bit 4:2:2 output will be indistinguishable from an HDMI signal carrying a 10 bit 4:2:2 signal, and many cameras with both HDMI and HD-SDI output 4:2:2 8 bit video signals anyway. But HDMI only does 2 channel audio whereas HD-SDI does 8. Back to the story…

Preparing Setups with Shot Designer

Following on from their line of successful film making tutorials for Directors, Per Holmes and the Hollywood Camera Work team have launched their new app for iOS/Android and Mac/Windows – Shot Designer.

This is a ‘blocking’ tool – a visual way of mapping out ‘who or what goes where, does what and when’ in a scene, and where cameras should be to pick up the action. For a full intro to the craft of blocking scenes from interviews to action scenes, check out the DVDs. Blocking diagrams can be – and often are – scribbled out on scraps of paper, but Shot Designer makes things neat, quick, shareable via Dropbox, and *animated*. A complex scene on paper can become a cryptic mashup of lines and circles, but Shot Designer shows character and camera moves in real time or in steps.

You can set up lighting diagrams too – using common fittings including KinoFlos, 1x1s, large and small fresnels, and populate scenes with scenery, props, cranes, dollies, mic booms and so on – all in a basic visual language familiar to the industry and just the sort of heart-warming brief that crews like to see before they arrive on set.

Matt's 2-up setup

My quick example (taking less time than it would to describe over a phone) is a simple 2-up talking head discussion. The locked-off wide is matched with two cameras which can either get a single closeup on each, or if shifted, a nice over-the-shoulder shot. A couple of 800W fresnels provide key and back-light but need distance and throw to make this work (if too close to the talent, the ratio of backlight to key will be too extreme), so the DoP I send this to may recommend HMI spots – which will mean the 4-lamp Kino in front will need daylight bulbs. So, we’ll probably set up width-wise in the as-yet-un-recced room – but you get the idea: we have a plan.

Operationally, Shot Designer is quick to manipulate and is ruthlessly designed for tablet use, but even sausage fingers can bash together a lighting design on an iPhone. There’s a highlighter mode so you can temporarily scribble over your diagram whilst explaining it. The software is smart too – you can link cameras so that you don’t ‘cross the line’, and cameras can ‘follow’ targets. It builds a shot list from your moves so you can check your coverage before you wrap and move to the next scene.

Interestingly, there’s a ‘Director’s Viewfinder’ that’s really handy: Shot Designer knows the camera in your device (and if it doesn’t, you can work it out), so you can use that to pinch and zoom to get your shot size and read off the focal length for anything from an AF101 or 5D Mk 3 to an Arri Alexa – other formats (e.g. EX1R or Blackmagic Cinema Camera) will be added to the list over time. Again, this is an ideal recce tool, knowing in advance about lens choice and even camera choice.
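Purely to illustrate the arithmetic such a viewfinder relies on (Shot Designer’s own method isn’t published, and the sensor widths below are approximate figures of my own choosing), matching a shot size across camera formats boils down to a simple ratio of sensor widths:

```python
# Sketch of the crop-factor arithmetic behind a 'Director's Viewfinder':
# given a framing on one sensor, find the focal length that gives the
# same horizontal field of view on another. Widths in mm, approximate.
SENSOR_WIDTH_MM = {
    "5D Mk 3": 36.0,   # full frame
    "AF101": 17.3,     # Micro Four Thirds
    "Alexa": 23.8,     # Arri Alexa 16:9 active area (approx.)
}

def equivalent_focal_length(focal_mm, from_camera, to_camera):
    """Focal length on `to_camera` matching the horizontal field of view
    of `focal_mm` on `from_camera` (simple ratio of sensor widths)."""
    ratio = SENSOR_WIDTH_MM[to_camera] / SENSOR_WIDTH_MM[from_camera]
    return focal_mm * ratio

# A 50mm on full frame frames roughly like a 24mm on Micro Four Thirds:
print(round(equivalent_focal_length(50, "5D Mk 3", "AF101"), 1))  # → 24.0
```

So a 50mm on a 5D Mk 3 frames roughly like a 24mm on an AF101 – the kind of conversion the app reads off for you on a recce.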

This really is not a storyboard application – Per Holmes goes to great lengths to stress that storyboarding can push you down a prescribed route in shooting and can be cumbersome when things change, whereas the ‘block and stage’ method of using multiple takes or multiple cameras gives you far more to work with in the ‘third writing stage’ of editing. You can incorporate your storyboard frames, or any images – even ones taken on your device – and associate them with cameras. Again, that’s handy from a recce point of view, right up to keeping a reference of previous shots to match a house style, communicating the oft-tricky negative space idea, keeping continuity and so on. However, future iterations of Shot Designer are planned to include a 3D view – not in the ‘pre-viz’ style of something like iClone or FrameForge, but a clear and flexible tool for use whilst in production.

There is a free ‘single scene’ version, and a $20 license covering unlimited scenes across all platforms – but check their notes on store policy: buyers should purchase the mobile version to get a cross-over license to the desktop app, as the store rules mean that if you buy the desktop app first, you’ll still be forced to buy the mobile version.

Shot Designer may appear to be for narrative filmmaking, but the block and stage method helps set up for multicam, and a minute spent on blocking and staging any scene from wedding to corporate to indie production is time well spent. The ability to move from Mac or PC app to iPad or Android phone via Dropbox to share diagrams and add notes is a huge step forward from the paper napkin or ‘knocked up in PowerPoint’ approach. It will even be a great ‘shot notebook’ to communicate what the director wants to achieve.

Just for its shareability and speed at knocking up lighting and setup diagrams, Shot Designer is well worth a look, even at $20 for the full-featured version. If you combine it with the blocking and staging aspect and its planning capabilities, it’s a great tool for the Director, DoP and even (especially) a videographer on a recce.

Edit: For those of us who haven’t bought an iPad yet – this might be the ‘killer app’ for the iPad mini…

Editing – reaching your destination. Limo or Taxi?

Update: since this was written, FCPX has moved on and repairs to events have got easier. I have, despite the vitriol in this blog, returned to FCPX. But let’s not spoil a good rant…

 

Editing is a journey from chaos to order. It defies the second law of thermodynamics, which says everything must turn to chaos. However, film making is a process whereby poo (albeit high quality poo – and in huge quantities) becomes haute cuisine in elegant portions. So how do we get there?

Although we shoot a lot of material knowing most of it hits the cutting room floor, the EDITOR knows to view everything. Amongst the detritus and the offcuts are little gems, little nuggets of gold. As editors, we need to find them and treasure them, and park them in a safe place. It must be a trustworthy safe place for we’ve cut them from their origin and put them there.

Once again, I’m trying to take an impossible amount of rushes and wrangle them into a reasonable selection for editing. Let’s assume that the main trip from rushes to edit is fine – any Non Linear Editor worth its salt can do that. But how do we manage the pre-edit? I’ve been using Final Cut Pro X for the last year, and it’s been quick and utilitarian. However, 3 projects have gone south for the winter because the database structure has become damaged. Yes, FCPX has procreated in a decidedly vertical direction, leaving me, the editor, up the sewer without a paddle. Not once, not twice, but three times.

All three projects shared the same characteristics: not just a single project, but each borrowing from several projects over different hard drives. Not through design, but as a consequence of client desires. I had been using the SparseImage trick espoused by many, to great success. However, when SparseImages from different volumes were combined, sublimity turned to slurry.

I had chosen, so I could back up and archive quickly and securely, to import footage as links, rather than suffer the arduous process of copying gigabytes of footage to a single drive. Those links weren’t ‘aliases’ in the MacOS sense; they were Unix ‘symbolic links’, which can only be edited one at a time via the command line. If 497 links go bad, how long would it take you to fix them that way? Find me a Unix guru who would agree to this in the 120 minutes between suffering the problem and showtime with the client. Yes, I re-edited the whole day’s edit in 2 hours, and FCPX was the only editor that could work as fast as we can.

Well, it’s happened again. I’m now editing in Adobe Premiere CS6 and will stay here. It feels like stepping out of a wonderful Mercedes Limo and getting into a TX1 London Cab. Basic, utilitarian, egalitarian. It sure as hell ain’t as quick as FCPX, doesn’t have the amazing third party support, but it just rattles on.

I don’t want to love it, but hey – it does things like subtitles, it knows about timecode, it respects links to external media, it has a save-vault that can save the editor from his own bad decisions, and whilst it can crash, it neither deletes your work nor forgets so much that you can’t help it recover.

I miss the air conditioned, hydro-suspensioned, gin-filled palace on four wheels that is FCPX, and I think I’ll still use it for simple little runabouts. It’s fun. It’s great. It’s actually really effective. But next time I’m hired for the long haul, I’m cutting in CS6. A limo can cross town, a TX1 will cross continents.

 

PostScript: The main issues were finally tracked down to something at the system level, requiring a full Clean Install. There are some hateful things about FCPX, but bear in mind the parable of the Old Woman who lived in a Vinegar Bottle.