Dealing with 109% whites – the footage that goes to 11

Super-whites are a quick way of getting extra latitude and avoiding that video tell-tale of burned-out highlights, by allowing brighter shades to be recorded over the ‘legal’ 100% of traditional video. However, it’s come to my attention that some folk may not be reaping the advantages of super-whites – or are even finding footage ‘blown out’ in the highlights where the cameraman is adamant that the zebras said it was fine.

So, we have a scale where 0% is black and 100% is white. 8-bit video assigns numbers to brightness levels, but contains wriggle room: given the Magic Numbers of Computing, 0–255, you’d assume black sits at 0 and white ends up at 255. Alas not. Black starts at 16, and white ends at 235. Super-whites use the extra room from 235 to 255 to squeeze in a little more latitude – which is great.
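
If you like to see the arithmetic, here it is as a little Python sketch (my own back-of-envelope illustration – the names are mine, not from any video SDK):

    # 8-bit 'studio range' video levels: black at code 16, white at 235,
    # with the super-whites squeezed into the 236-255 headroom.
    BLACK, WHITE = 16, 235

    def code_to_percent(code):
        """Map an 8-bit luma code value onto the waveform monitor's % scale."""
        return (code - BLACK) / (WHITE - BLACK) * 100

    print(code_to_percent(16))   # 0.0    - black
    print(code_to_percent(235))  # 100.0  - legal white
    print(code_to_percent(255))  # ~109.1 - and THAT's where '109%' comes from

Run that last line and the mystery number drops out: code 255 sits at roughly 109% on the scale.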

But that’s on the camera media. Once you get into the edit software, you need to make sure you obey the 100% white law. And that’s where things go a bit pear shaped.

If you can excuse my laundry, here’s a shot with 109% whites – note them peeping up above the 100% line in the waveform monitor:


01-fs100_into_fcpx-2012-07-12-13-55.jpg

Note also that the fluffy white clouds are blown – there’s ugly detail snapping from pale blue to white around them. Although I exposed this shot so the whites just reached 109%, the monitor shows us the 100% view, so it’s overexposed as far as the editor’s concerned.

So in my NLE – in this case, Final Cut Pro X – I drop the exposure down, and everything sits nicely on the chart. I could pull up the blacks if necessary…

02-fcpx_drops_luma-2012-07-12-13-55.jpg

But I’ve been told about an app called 5DtoRGB, which pre-processes your 109% superwhite footage to 100% as it converts to ProRes:

03-5dtorgb_into_fcpx-2012-07-12-13-55.jpg

Note that while the whites are indeed brought down to under 100%, the blacks are still quite high and, in my opinion, will require pulling down. 5DtoRGB also takes a lot longer to process its ProRes files – I’ve had reports of 10x longer than FCP7’s Log & Transfer, but I’ve not tested this myself.

I did some tests in Adobe Premiere CS6, which demonstrate the problem. We start with our NATIVE AVCHD clip, with whites happily brushing the 109% limit as before. These are just 1s and 0s, folks. It should look identical – and it does. Info in the Waveform Monitor, blown whites in the viewer.

Another technical note: the FCPX Waveform Monitor talks about 100% and 0%, but Adobe’s WFM uses the ‘voltage’ metaphor – analogue video signals were ‘one volt high’, but 0.3 volts were given over to timing signals, so the remaining 0.7 volts were used to go from black (0.3 volts) to white (1 volt). So… 0.3 = black in Adobe’s WFM. And another thing – I’m from a PAL country, and never really got used to NTSC in analogue form. If I remember correctly, blacks weren’t exactly at 0.3 volts (also known as IRE=0) – they were raised for some reason to IRE=7.5, thus proving that NTSC, with its drop frames, 29.97 fps, error-prone colour phase and the rest, should be buried in soft peat and recycled as firelighters. But I digress.
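
For fellow unit-jugglers, the mapping between the two scales is simple enough to sketch in a few lines of Python (a toy of my own, nothing to do with Adobe’s actual maths, and gleefully ignoring NTSC’s 7.5 IRE setup):

    def volts_to_percent(v):
        """Adobe's 0.3V-1.0V 'voltage' scale to the 0-100% scale FCPX shows."""
        return (v - 0.3) / 0.7 * 100

    print(volts_to_percent(0.3))   # 0.0   - black
    print(volts_to_percent(1.0))   # 100.0 - white
    print(0.3 + 1.09 * 0.7)        # ~1.063V - where 109% sat on the old scale

Right – back to Premiere: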

06-premier_start-2012-07-12-13-55.jpg

Let’s get our Brightness and Contrast control out to bring the 109s down to 100:

08-premier_bright-2012-07-12-13-55.jpg

Hold on a tick – we haven’t adjusted ANYTHING, and Premiere has run a chainsaw along the 100% line. That white detail is gone until you remove the filter – you can’t get it back whilst the Brightness & Contrast filter is applied. Maybe this isn’t the right tool to use, but you’d think it would do something? Not just clip straight away?

I tried Curves:

09-premier-curve-2012-07-12-13-55.jpg

It’s tricky, but you can pull down the whites – it’s not pretty. Look how the WFM has a pattern of horizontal lines – that’s nastiness being added to your image. The highlights are being squashed, but you can’t bring your blacks down.

So finally, I found ‘ProcAmp’ (an old-fashioned term for a Processing Amplifier – we had these in analogue video days). This simply shifts everything down to the correct position without trying to be clever:

10-premier-procamp-2012-07-12-13-55.jpg

At last. We have our full tonality back, and under our control.
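
If you want the difference between the two filters in numbers, here’s a toy sketch (values are waveform percentages; my own illustration of the behaviour, not Adobe’s actual processing):

    levels = [0, 25, 50, 75, 100, 103, 106, 109]   # toy pixel values, in %

    # Brightness & Contrast behaviour: clip to 100% before doing anything.
    clipped = [min(v, 100) for v in levels]
    # -> [0, 25, 50, 75, 100, 100, 100, 100] - the super-white detail is gone

    # ProcAmp-style behaviour: scale the whole range so 109 lands on 100.
    scaled = [round(v * 100 / 109, 1) for v in levels]
    # -> [0.0, 22.9, 45.9, 68.8, 91.7, 94.5, 97.2, 100.0] - compressed, intact

Clip first and there’s nothing left to grade; scale first and every tone survives, just a little lower down the chart.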

With all these issues, and probably some misunderstanding about 109%, I can see the appeal of something safe and quick: the new FS700 cinegammas in the form of CineGamma 2, which only allows 100% whites – ditto Rec 709 in the FS100. But forewarned is forearmed.

I donate the last 9% of my brightness range to specular highlights and the last shreds of detail in the sky, so I can have that ‘expensive film look’ of rolled off highlights. But if I didn’t haul them back into the usable range of video, all that stuff would appear as burned out blobs of white – ugly. However, I also spent a bit of time testing this out when I switched from FCP7 to FCPX, as the former took liberties with gamma so you could get away with things. The bugs in FCPX and Magic Bullet made me check and check again.

It’s been worth it.

Thunderbolt Strikes Back

UPDATE: Writing >2GB files on SSDs >240GB with the Seagate GoFlex Thunderbolt Adaptor can cause the drive to unexpectedly dismount from your computer with an error (-50). Read the full story and the solution by Wolfgang Bauer.

Following on from my USB3 testing, I’ve finally received an interesting box – the Seagate GoFlex Thunderbolt Adaptor, now in stock for about £100.

The cool trick is that you can connect any ‘bare’ Solid State Drive (SSD) to it, and the price of SSDs is coming down quickly. A 256GB SSD can be had for under £140. Of course you then have to add your £40 cable, but assuming we can use one adaptor and cable for all our SSDs, we finally have a solution for Thunderbolt SSD editing (and archiving/backing up to USB3).

The downside is that you’ll probably want to keep the bare drive in the adaptor with some elastic bands or something – very high tech.

So, why would you want to do this?

Because it’s freaking fast. That’s why. Editing with Final Cut Pro 10 on this drive is the sort of experience we assumed it would be. No spinny beachballs of death, no stuttering, just ‘slick demo’ performance.

Drive                                   Write (MB/s)   Read (MB/s)
Slow, cheap USB2 drive                  21.6           26.2
Western Digital Passport SE (USB2)      30.1           32.8
LaCie Quadra 7200rpm 2TB on FW800       46.6           44.5
Crucial 256GB SSD on FW800              75.0           81.6
Western Digital Passport SE (USB3)      96.3           108.4
Internal 512GB SSD in MacBook Pro 17”   88.8           167.0
Thunderbolt with SSD                    266.3          381.3

We’re talking 5x the speed of FW800 write, 8x the speed of FW800 read. And then there’s file access times.

With the same cable and adaptor, you can purchase additional SSDs for £140, and pretty soon we’ll see half a terabyte for under £200.

For my little industry niche, that means one SSD per edit for the duration of that edit, then archived off to hard drives so the SSD can be recycled, but happy to have maybe even a dozen of them – which I couldn’t afford with the current clutch of Thunderbolt/SSD combinations.

As I edit on site a lot, this can mean little gotchas like power suddenly dipping or going off entirely. Not talking third-world style – just leaving a render in your hotel room while Housekeeping thoughtfully removes the room key you taped into the slot to keep the power on. Or that moment after a big event where you’re backing up, and the 3-phase power you’ve been living on gets pulled for the de-rig. Which is why I’m so passionate about bus-powered drives that can work with a laptop editing computer.

And then there’s the scary ‘let’s edit this in the car’ or ‘let’s log rushes on the plane’ – with spinning disks? No. Ah – how about SSD? Fine. I’ll be archiving ‘media managed’ videos to thumb drives next.

It makes hard disks feel as old fashioned as tape.

USB3 for Macs – Thunderbolt killer or simple step up?

I’m a dyed-in-the-wool Mac user, so for me USB3 hardly came into view. I do remember watching a USB3 demo on a Mac where it displayed sub-FW800 performance – and decided to leave it at that. However, the continuing need to pass on my video work to PC users for archive, and the achingly slow performance of USB2, forced me to at least check it out for myself. At least it would give me something to do whilst waiting for sensibly priced Thunderbolt storage and cables.

After all, the Mac world is still full of USB2 devices: cheap, slow hard drives and computationally undemanding stuff like mice and keyboards. Little USB sticks for storage to perpetuate ‘sneakernet’ file sharing. There is, of course, that funny USB3 sticker on most PC drives that claims high performance, and that may be good enough for our PC-using brethren – but we’re Mac users, so why not sidestep the USB3 ‘upgrade’ for the super-fast world of Thunderbolt?

That’s the mindset that Apple appears to want us to hold.

But the new set of Ivy Bridge equipped Macs will get, courtesy of their new chipset, USB3 functionality. Will Apple connect this functionality into the OS? Or will they find a way to block it? Should we care?

The world of PCs has been using USB3 for quite a while, scratching their heads over the Mac user’s obsession with FireWire and the ‘Unicorn poop’ status of Thunderbolt. Why are Mac users so precious about FireWire? USB3 blows it out of the water! They want Thunderbolt speeds – for what? And connect them with $50 cables? If the devices are blessed with pass-thru ports – which so many aren’t? (see sidebar below) USB3 has hubs!

I purchased the CalDigit USB3 card, which fits into the ExpressCard slot of my MacBook Pro 17” – the choice of CalDigit is significant, as it’s the only one touted to work with pretty much any USB3 drive. Other manufacturers’ USB3 cards tend to work only with their own drives, thus missing the meaning of the U in USB.

I’m used to working on FW800 drives, and so intended to use cheaper USB3 bus powered disks for backup and for handing over to clients, who could use them on their PCs. However, when used as the main working drive with a disk intensive application like Final Cut Pro, it was obvious that the USB3 drive was a lot faster. FCPX was running far better on USB3 than on FW800.

Enthused by this, I did a quick series of tests of drive performance with the Blackmagic Disk Speed Test app, available for free from the App Store. And so, in reverse order, here are the results:

Drive                                   Write (MB/s)   Read (MB/s)
Slow, cheap USB2 drive                  21.6           26.2
Western Digital Passport SE (USB2)      30.1           32.8
LaCie Quadra 7200rpm 2TB on FW800       46.6           44.5
Crucial 256GB SSD on FW800              75.0           81.6
Western Digital Passport SE (USB3)      96.3           108.4
Internal 512GB SSD in MacBook Pro 17”   88.8           167.0

Of course, what’s missing from this test is the ultimate: SSD in Thunderbolt drive, but these are still eyewateringly expensive.

The £75 Western Digital drives represent the kind of value we’ve been used to with spinning disks, and I can affirm that they work very well with Final Cut Pro 10. The WDs on USB3 have been my drives of choice for onsite editing, with the added advantage that they’re cheap and compatible, and can be passed over to the client without too many caveats (having archived to other drives).

Is USB3 better than Thunderbolt? No – it’s a different beastie. Should we give up on Thunderbolt for USB3? Of course not.

Should Apple now accept USB3 as a non-competitive alternative for non-specialist media use? Of course.

In the days of Steve Jobs, I’d fear that USB3 would be disabled in the new Macs in a fit of pique. The adoption of USB3 in the new Macs would demonstrate a more universal approach from Tim Cook et al.

At least CalDigit offer the option to MacBook Pro 17” users – and for everyone else, there is, of course, the Thunderbolt adaptor for Express Cards. Oh the irony.

Side bar: Thunderbolt devices can be powered via the cable from the host, but only one device per port – so bus-powered Thunderbolt devices do not have a passthrough port. Self-powered devices can have passthrough ports, but these are a rarity, as most manufacturers seem to feel a single Thunderbolt port is enough.

However, when you’re paying top dollar for a new technology, the idea that a device will only work as the single device on the chain is frankly anathema. Video ingest device with no pass through to a storage device? Storage device with no pass through to a display device? No wonder Thunderbolt is taking its time to get accepted when device manufacturers assume their product will exist in its own lonesome prefecture.

4K is coming

It’s incredible – I’ve been looking at the iPad 3 retina display and thinking about video.

It’s very good to see 1080p, but looking at other demos, I want to emulate that ‘looking through a window’ effect. Not just HD video, but full-on 2048×1536 pixel-for-pixel video, filling the screen and providing an uncanny, hyper real look that will have people trying to reach through the glass and touch.

Well, of course, NAB is coming and the current ‘alphanumerique du jour’ is no longer 3D, but 4K. Of course the exotica of the camera world – Red’s Epic, Sony’s F65 and plenty of others – shoot in 4K on productions with sumptuous budgets. Big movies like the new The Girl with the Dragon Tattoo have been filmed in 4K, even though few cinemas can currently project it at its full resolution.

I don’t work in that world, but in my Corporate niche, we were able to jump into 720p and ‘Medium Format’ HD sooner than broadcast because our audience uses PowerPoint. They were doing HD before you could buy an HD TV, let alone an HD TV in a supermarket.

We’re still working through the roll-out of HD, and there’s an awful lot of Marketing ‘fluff’ out there: can we tell the difference between 720p and 1080p from the sofa? How big does the screen in your living room actually have to be before the difference shows? How big will your domestic screen get over time?

There’s a great (but very technical) article on Creative Cow that takes a careful look at some of the marketing messages about 4K.

http://magazine.creativecow.net/article/the-truth-about-2k-4k-the-future-of-pixels

I see a different trend with the iPad 3 – a much more intimate experience, where video becomes almost an analogue for high-quality print in terms of magazine photography. Android devices will inevitably sport similar resolutions, and we’ll see more and more tablet devices crop up in all sorts of situations – after all, they’re no longer ‘geek gadgets’ and have become widely accepted by a new computer-illiterate audience with little or no preconceptions.

So once again, it’s the corporate/educational/industrial market with its smaller user base and less legacy that can drive the demand for these new technologies, in their quest for novelty and impact.

Okay, so HD (1,920 dots across by 1,080 dots down) is fewer dots than the screen – and leaves black bars top and bottom. But surely 4K, which contains four times the pixels, is a little overkill?
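
A quick pixel count makes the mismatch plain (Python as my calculator; I’m taking ‘4K’ in its consumer 3840×2160 flavour here – the DCI kind is a touch wider):

    ipad = 2048 * 1536    # 3,145,728 - the retina panel
    hd   = 1920 * 1080    # 2,073,600 - full HD falls short of the panel
    uhd  = 3840 * 2160    # 8,294,400 - '4K', UHD flavour

    print(uhd / hd)          # 4.0 - the 'four times the pixels' claim checks out
    print(hd < ipad < uhd)   # True - HD undershoots the screen, 4K overshoots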

Firstly, shooting with plenty of spare pixels gives you scope to zoom into an image. Shoot at a wider framing, then crop in at the edit stage to help with framing, or perhaps add little zooms, or track motion. Purists may see this as a bit of an underhand trick, but ask a photographer about cropping – it’s an important part of the process. Furthermore, with careful thought, the old film practice of shooting for different aspect ratios can be readopted. Michael Cioni’s presentation at an LAFCPUG meeting makes the ‘shoot 5K for 4K’ message come alive.

Secondly, shooting a lot of pixels on cheaper or more compressed formats, then shrinking the final image down, can help apparent sharpness and detail. If you shoot in a 4:2:0 codec and shrink the image down in post production (often handled at 4:4:4), you can effectively get 4:2:2. This was a great trick in the early days of HDV: we were shooting HDV specifically to shrink it down to PAL in post, and the rather ropey colour information became clean enough to pull a chroma key.
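
Counting the chroma samples shows why the trick works – a back-of-envelope sketch (real scalers and codecs are messier than this, of course):

    def chroma_res(w, h, subsampling):
        """Size of the chroma planes for a given luma size and scheme."""
        if subsampling == "4:2:0":
            return w // 2, h // 2   # chroma halved in both directions
        if subsampling == "4:2:2":
            return w // 2, h        # chroma halved horizontally only
        return w, h                 # 4:4:4 - full-resolution chroma

    # A 4K-ish 4:2:0 source carries chroma at only 1920x1080...
    print(chroma_res(3840, 2160, "4:2:0"))   # (1920, 1080)
    # ...but shrink the luma to 1920x1080 and that chroma is now full
    # resolution - the delivered frame behaves like 4:2:2 or better.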

With the new crop of 4K cameras coming out, this trick may soon return as, unlike the F65 with its 4:4:4 recorder and £3,700 cards, I foresee this breed’s 4K being heavily compressed.

However, with a little love and tenderness in post, I hope to get that ‘window on reality’ look on ‘retina’ style devices. Either that, or this is the most convoluted excuse to buy an iPad 3.

Sharpening tools

I’m off on a little job next week where my dear FS100 must be left behind, along with all that lovely glass I’ve been collecting. I will revert to the Sony PMW-EX1R, which feels odd all of a sudden because it’s just a big black sausage, no extras needed. All in one. Sweet.

Now, I needed to give it a good checking over, ensure the media’s okay and that the lens is behaving itself (holding focus from zoomed in to zoomed out), and it needed a bit of a tweak. But as I looked at the pictures, I noticed how full of noise they were (compared to the FS100), and especially how the detail was too crunchy.

Video cameras have Detail circuits to enhance the look of areas of detail and edges – raising the contrast of the picture around edges. If overdone, it looks like someone’s clumsily traced your picture with a felt-tip pen and chalk. Take it away, though, and your pictures are soft and lacking ‘bite’.
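
For the curious, the principle is easy to show in one dimension – a toy unsharp mask in Python (my own illustration of the general idea, not Sony’s actual Detail circuit):

    def sharpen_1d(row, amount=1.0):
        """Boost local contrast: push each pixel away from its neighbours' average."""
        out = []
        for i in range(len(row)):
            left  = row[max(i - 1, 0)]
            right = row[min(i + 1, len(row) - 1)]
            blur  = (left + row[i] + right) / 3    # crude local average
            out.append(round(row[i] + amount * (row[i] - blur), 1))
        return out

    edge = [50, 50, 50, 200, 200, 200]   # a clean, hard edge
    print(sharpen_1d(edge, amount=1.5))
    # -> the values dip below black (-25.0) just before the edge and
    #    overshoot (275.0) just after. Overdo 'amount' and there's your
    #    felt-tip-pen halo.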

So the big question is, how much detail sharpening should one do in-camera? Too much is irreversibly ugly. Too little, and every single shot looks maddeningly soft. No worries – we’ll fix that in post, but then every single shot needs a sharpen filter and you’re into longer render times. For a Behind The Scenes or ENG shoot, you may not have the time to do this, so a bit of in-camera magic can be a good thing.

So, I did some tests. Because you need to shoot a bit, examine the footage on a good-quality monitor, shoot a bit more, rinse and repeat, one tends to do these tests in unglamorous locations – so you get to glimpse a grotty corner of my garden. What it does offer is a wide tonal range, a lot of detail of different types to handle, some natural and manufactured edges to show up aliasing, and of course a few pretty flowers that didn’t get zapped by the recent frosts.

http://www.mdma.tv/sharpening/

Now, firstly you’re looking at full-frame 1080p frame grabs, not a video. Secondly, they just cycle round and round. Look at the TV aerials, the branches in the background, the chair legs and the ivy, and see how various forms of sharpening affect them.

Looking at full-size frame grabs is best, but it may help to compare the same part of the image showing the different sharpening methods, firstly at 100%, then at 200%:


No in-camera sharpening. In the case of the EX-1, this means setting Detail to 0.


Camera sharpening. EX-1 Detail: +10


Less in-camera sharpening. Detail: -10


No in-camera sharpening, but sharpened in software: Final Cut Pro X Sharpen filter Sharpening: 2.5


No in-camera sharpening, but sharpened in software using Irudis Tonalizer|VFX PRO.

Here are the results at 200%:


No in-camera sharpening.


Camera sharpening.


Less in-camera sharpening.


Final Cut Pro X


Tonalizer

It is obvious (to me, at least) that the FCPX sharpening filter at 2.5 is far superior to the in-camera sharpening, even at Detail 0, and that ‘Detail Off’ on its own is too soft. Tonalizer’s detail is infinitely more subtle than the FCPX sharpener, but takes a good while longer to render (IIRC, the Sharpen filter works in real time, no rendering required).

So, the EX1R is set to Detail 0 for next week’s job, but will have Detail OFF on any shoots where I have full control of the edit and, of course, of who gets to see the rushes. I do like the Tonalizer sharpening, though – very subtle, and plenty of ‘wriggle room’.

It’s little tests like these that can feel obsessive (both the doing and the sharing) – debating how many pixels can fit on the head of a pin – but these are the ’20%’ details that normal humans may not immediately point out, yet see and feel nonetheless. Now they won’t be shocked at seeing every pimple and pore writ large in the interviews, nor will they be rubbing their eyes and thinking about opticians. They’ll just love the pictures.

FCPX – partying with your Flaky Friend


UPDATE: Compound Clips, specifically splitting Compound Clips, and worst of all, splitting a compounded clip that’s been compounded, increases project complexity exponentially. Thus, your FCPX project quickly becomes a nasty, sticky, crumbly mess.

Which is a shame, because Compound Clips are the way we glue audio and video together, how we manage complexity with a magnetic timeline, and butt disparate sections together to use transitions. Kind of vital, really.

Watch these excellent demonstration videos from T. Payton who hangs out at fcp.co:

These refer to version 10.0.1; at the time of writing we’re at 10.0.3, but I can assure you that we STILL have this problem (I don’t think it’s a bug – I think it’s simply the way FCPX does Compound Clips). We return you to your original programming…

Okay, report from the trenches: Final Cut Pro 10? Love it – with a long rider in the contract.

I’m a short-form editor – most of my gigs are 90 seconds to 10 minutes (my record is 10 seconds, and I’m proud of it). Turn up ‘Somewhere in Europe’, shoot interviews, General Views, B-Roll, get something good together either that night or very soon afterwards, publish to the web, or to the big screen, or push out to mobiles and iPads…

This is where FCPX excels. As an editorial ‘current affairs’ segment editor, it’s truly a delight. I bet you slightly overshot? Got a 45 minute take on an interview that needs to be 45 seconds? Range based favourites are awesome, and skimming lets you find needles in a haystack. Need to edit with the content specialist at your side? The magnetic timeline is an absolute joy, and don’t get me started about auditioning.

It’s true: in cutting down interviews, in throwing together segments, and especially when arguing the toss over telling a given story, I’m at least twice as fast and so much more comfortable throwing ideas around inside FCPX.

But my new Editing Friend is a ‘Flaky Friend’.

She really should be the life and soul of the party, but somehow there’s a passive aggressive diva streak in her.

There are three things she doesn’t do, and it’s infuriating:

  • She doesn’t recognise through-edits – they can’t be removed; they are, to her, like caesarean scars, tribal tattoos (or so she claims), cuts of honour. We tell her we’re cutting soup at this stage, but no. ‘Cuts are forever,’ she says, like the perfect NLE she thinks she is.
  • She doesn’t paste attributes selectively – it’s all or nothing. ‘We must be egalitarian,’ she croons. What is good for one is good for all, apparently. You can’t copy a perfect clip and apply only its colour correction to the pasted clip – you must paste EVERYTHING, destroying your sound mix and requiring extensive rework, and heaven help you if you change your mind.
  • She flatly refuses to accept that there is already a way we all do common things, and wants to do it her own kooky way. Making J and L cuts into a Tea Ceremony, blind assumption that a visual transition needs an audio transition, even if we’ve already done the groundwork on the audio… girl, the people who think you’re being cute by insisting this are rapidly diminishing to the point you can count them on your thumbs, and we do include you in that list.

So okay, she’s a good gal at heart. Meaning the best for you. But she needs to bail out and quit every so often, especially if you’re used to tabbing between email, browser, Photoshop, Motion et al. She’ll get all claustrophobic, and you’ll be waiting 20-40 seconds with the spinning beachball of death between application switches. It’s all a bit too much like hard work. ‘I can’t cope’, she sighs – and spins a beachball like she smokes a cigarette. We stand around, shuffling our feet as she determinedly smokes her tab down to the butt. ‘Right!’ she shouts at last. ‘Let’s get going!’

And yes, it’s great when things are going right.

But put her under pressure, with a couple of dozen projects at hand, some background rendering to do, it all gets very ‘I’m going to bed with a bottle of bolly’. I’m getting this an awful lot now, and I really resent being kept hanging around whilst she changes a 5 word caption in a compound clip that takes 5 FRICKIN’ MINUTES to change, I resent every minute of waiting for projects to open and close, and whilst it’s lovely to see her skip daintily through all that fun new footage, when it comes down to the hard work, she’s so not up to it…

I am twice as fast at editing in FCPX, but I am a quarter of the speed when doing the ‘maid of all work’ cleaning up and changes. It means that, actually, I am working twice as hard in X as I was in 7, just mopping up after this flaky friend who has a habit of throwing up in your bathtub and doing that shit-eating grin as she raids your fridge of RAM and CPU cycles.

Well, FCPX dear, my flaky friend, you’re… FIRED.

Tonalizer – a scalpel, not a bullet

Now we’re all shooting flat, how do we get our rushes looking their best? By grading. From the giddy high end of DaVinci down to the humble color board in FCPX, grading is the price we pay for creamy highlights, rich shadows and that ‘expensive’ cinematic look. And I’m in love with a new tool.

We shoot flat because the tonal range of traditional video is far smaller than what modern CMOS sensors can handle – by modifying the mapping of brightness in the camera’s curve, we can squeeze in a couple of extra exposure stops if we’re careful.

Of course that makes the pictures look a little different. Highlights are pushed down a bit, shadows are pulled up a bit, and we get ‘flat’-looking pictures. They need to be graded in post to ‘roll off’ the last stop or two of highlights – brightening the highlights again, but without the awful cut-off of a ‘blown’ highlight (the ‘Tipp-Ex’ effect on foreheads, for example). Similarly, the shadows can be tamped down; because they started life in a brighter realm, as they’re pushed down to more shadowy levels we retain the detail without the boiling mass of noise we used to associate with it.

Of course, because EVERY shot you’ve taken has this flat profile, EVERY shot needs work in post before mortal humans can enjoy it. You can apply a ‘Look Up Table’ (a long list of ‘if it’s this level of brightness, it should be that level of brightness’) if somebody has been kind or commercial enough to make one for your Non Linear Edit system (e.g. the Technicolor profiles), but if you’re hand-rolling, you’re on your own.
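
To demystify: a 1D LUT really is just that list, precomputed once and applied per pixel. A minimal sketch with a toy S-curve of my own invention (emphatically not Technicolor’s):

    def build_lut(curve):
        """Precompute 'if it's THIS brightness, make it THAT' for codes 0-255."""
        return [min(255, max(0, round(curve(i / 255) * 255))) for i in range(256)]

    # Toy 'unflatten' S-curve: tamp the shadows down, roll the highlights off.
    lut = build_lut(lambda t: 3 * t**2 - 2 * t**3)

    def apply_lut(pixels, lut):
        return [lut[p] for p in pixels]   # grading = one table lookup per pixel

    print(apply_lut([16, 64, 128, 235], lut))   # -> [3, 40, 128, 251]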

I’ve traditionally used Magic Bullet Looks and Colorista to do my conversion from flat to full, but with the transition to FCPX, we had to wait until fairly recently to get this functionality back. The Color Board did not provide the necessary delicacy in manipulating curves, so other plug-in manufacturers have stepped in.

Personally, I preferred Magic Bullet Looks because of familiarity and the general ‘one filter to control them all’ approach, but it feels heavy going for FCPX – and for some reason it feels slower and heavier than it did in FCP7.

Then along comes Tonalizer.

If you’re used to the interface glitz of MBLII, Tonalizer’s dour set of sliders seems a little limiting. No faux colour balls, no pretty graphs, some curt labels, and that’s it.

But what it actually does is wonderful – it’s as if there are thousands of subtle adjustments it can make, but they’ve all been tamed down to a few sliders. FCPX may have a brightness slider, and you can watch the whole waveform get shifted up and down the IRE scale, ensuring your image is only correct at one minute point in the slider’s travel. Watch the brightness control in Tonalizer affect your waveform, though, and see how it’s nipping and tucking things at the top and bottom of the scale, subtly redistributing the tonality over a very pleasant curve – whose shape you can then change with another slider.

Then there’s the ‘highlight rescue’ and ‘shadow boost’, which file down the sharp edges in highlights and shadows, with a form of contrast that subtly increases around areas of brightness transition and gives the merest hint of ‘phwoar’ (a UK idiom that I hope travels well). Of course, if you wind everything up to 11, your footage ends up like a dog’s dinner, but Tonalizer can handle subtlety.

It’s all very neat and handleable, it’s all very focused on footage that’s been shot on flat profiles, and tellingly, it’s got all the little things we need day to day:

  • Adaption will pull flat ‘log’ style rushes in to shape
  • Tint is good at removing the green pollution in Fluorescent lighting
  • Warmth simply nudges the colour temperature (won’t correct the WRONG colour temp, but handy in mixed conditions)
  • Protect Skin Tones will ring fence that little collection of tones so your lit interview is fine, but the green pool of background is improved

And then there’s the Detail Sharpener.

Sharpening is anathema to Narrative shooters, but in Corporates, sharp colourful pictures sell. Period. Not oversharpened ‘black ring around white objects’ horrible ‘in camera’ sharpening. Tonalizer just wafts some magic over the image and helps the camera’s inherent sharpness. You have turned the sharpening circuits off, haven’t you? Cameras don’t sharpen well as they have to do it in real time and it sharpens all the noise and crud. If you do it in post, the computer spends a little more time and care (with the appropriate software).

So Tonalizer lifts and separates, adds a bit of definition, respects skin tone, and even has little niceties like an ‘assist’ mode that flags clipped detail, plus a ‘safe range’ that gracefully protects your picture from harm when winding up the controls to higher values.

For FCPX users, there are two versions – one specifically set for Technicolor CineStyle favoured by many DSLR shooters. This shoots incredibly flat, and takes the DSLR brightness range to the very edge, producing clay-like skin tones and such milky images that it takes time and skill to bring back (but the results are worth it if you do have the time).

Shooting ultra flat does have some disadvantages – more noise in the shadows, though Tonalizer has a Noise Reduction function to help mitigate that. Another issue is that you are spreading a lot of info over an 8-bit image, and aggressive manipulation will degrade it: as your carefully spaced data gets pulled and pushed, bits fall between the ‘8-bit gaps’ and disappear for ever. Start yanking the sliders of any grading plug-in and watch the waveform monitor for fine horizontal lines (gaps in the info) appearing.
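
Here’s the effect in miniature – stretch a narrow range of 8-bit codes and count what’s left (Python as calculator again, my own toy numbers):

    flat = list(range(100, 156))        # a flat-profile clip: 56 distinct levels

    # An aggressive grade: stretch that range across the full 0-255 scale.
    graded = [round((v - 100) / 55 * 255) for v in flat]

    print(len(set(graded)))                  # still only 56 distinct levels...
    print(max(graded) - min(graded) + 1)     # ...spread across 256 codes

Two hundred of those 256 codes are now simply empty – and empty codes are the fine horizontal gaps you see on the waveform, i.e. banding.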

So there’s a ‘Comfy Camp’ of Picture Profile users who want enough brightness range in highlights to do the roll off thing, and enough tonality in the shadows, to create an ‘expensive’ if not completely ‘cinematic’ final image, and this is where Tonalizer is a great tool for getting the look you want. It certainly floats my boat for corporate/commercial work, and has already supplanted MBL2 for my ‘jobbing’ work – quicker to get right, quicker to render, and I have to say I rather like its singleminded approach.

Am I giving up on Magic Bullet? Absolutely not. It goes further, does more… but it’s slower and easy to ‘overbake’ (I habitually dial back my MBL grades to 66%). For well exposed stuff that just needs to be clean, clear and smartened up, MBL is overkill and, used indiscriminately, can do more damage than good. Tonalizer is perfect for just a little bit of zip and starch.

It’s been out a while, but I didn’t really bother much because i) I had Magic Bullet Looks, and was happy with it, and ii) I thought it was quite expensive for what it was offering. I then saw Tonalizer demonstrated at the February MacVideo Live event (where 10 lucky attendees walked away with a copy – not me, alas) and managed to have a quick play with it. You will have to try it out on your own footage to realise its worth. I do note, however, that there have been a number of promotions, and fcp.co currently has a 35% discount going.

This is a blog, not a review, and I’m not particularly keen to get involved with promoting anything, but have been much enthused by Tonalizer, and for FCPX users, it’s well worth checking out, even if you do already have a plethora of level/curve based plug-ins.

Blade and a J-cut, two bits!

Final Cut Pro X doesn’t do J-cuts. It doesn’t do it at all, and whilst I am not an aggressive or violent person, I feel the need to sit on a naughty step for thinking what I’d like to do to this bit of software if it were something tangible.

What am I talking about? Any editor will tell you that, in ‘How To Edit 102’, we learn about the J-cut. Very simply, it’s when a simple cut between two shots has the audio of the second clip start just a fraction before the picture does. Or, to put it another way, the second shot starts with new audio over the old shot, then the video cuts to the new shot.

Let’s imagine a string of 3 comments by 3 different people.

We edit the comments so that they flow. But the magic of the J-cut is that as we look at the first person, we hear the second person starting to talk – like we do in a discussion round a table in real life – and then (AND ONLY THEN) we look at them. There’s about half a second, or maybe a bit less, between hearing them and actually looking at them. So we start the audio where it should start, and the video follows between 7 and 12 frames afterwards (at 25fps, that’s a quarter to half a second – we’re being subtle here!)

When we see this in television and film, it mimics our every day experience, and it feels very natural. Comfortable.

It’s an editing ‘condiment’. Like adding a bit of salt to food, it’s not clean and pure, but it feels right.

So looking at the clips in the timeline, there’s an offset between when the audio cuts, and when the picture cuts.

It works both ways: if sound cuts before picture, it’s a J-cut (see how the tail of the J points to the left, indicating the lower – audio – track starts first in our left-to-right scan). If the pictures cut first and then the audio cuts, we get an L-cut – visually speaking.

So we cut our first take of a sequence, and we’re really trying to get what people are saying into a logical order. Let’s not worry about pictures and cutaways now; let’s get the ‘radio programme with pictures’ version done. Sometimes it gets messy, and we’re cutting little bits of words and half-words together so a parenthetical comment can stand alone. So long as it sounds right, we’ll cover the messy pictures, with their jump cuts, with a cutaway.

The reason for my ire is that this mainstay of professional editing, this 1-step operation in FCP7, this ‘thing you can sum up in a letter’, is performed thusly in FCPX:

http://help.apple.com/finalcutpro/mac/10.0/#ver1632d82c

It’s like trying to put hospital corners on a duvet: it can be done, but that’s an awful lot of effort for something that should be quick and simple.

After all, when firing off a bunch of edited interviews for a client, hands up those who, in FCP7, Avid or PPro, would perhaps slip a few edits to add a little polish? Then unslide them back again to continue editing? Exactly.

Well, now and again I find a really good reason to switch from PPro or FCP7 into FCPX, but then spend an afternoon bumping my shins and grazing my scalp whilst climbing through its ‘little ways’. Well, I lost my temper big-time over the whole J-cut thing and turned to good friend Rick Young for solace. He’s writing a book on FCP-X – he’ll know how to do it.

And he did.

“Basically, detach all your interview clips’ audio so they’re separate from the video, and use the T tool to slip the video. Simple!”

But that’s quite an odd thing for an app that’s touted never to suffer bad sync – dangle your audio off your video for ever? Deal with a double track for every clip that could be in a J-cut? In a modern bit of software destined for the next 10 years of editing? That’s madness! Actually, I think I put it a little stronger than that.

It sounded as if my solution for getting a pet dog through his dog door was to cut him in half and re-attach him with Velcro once he’s through – then live for ever more with a dog that has to be cared for in case his Velcro join comes apart. Yes, I do have funny feelings about my footage, but if you were to spend this much time with it, you’d go funny too.

And here’s the conclusion: Rick’s method works – it works fine. It works great, in fact. Give it a try, drop that beastly Apple method.

But here’s my finishing salvo: Apple’s FCPX team shouldn’t feel ‘oh that’s all right then’ and not implement an offset tool. It’s so simple: apply an Option key behaviour on the T trim tool. Thanks for XML and all that, I’m sure multicam will be great too. Just finish off your tool with a way to turn radio edits into J cuts *Just* *like* *you* *used* *to*. Put the Pro back into FCPX!

If Apple called it iMovie Pro…

I’m very impressed with iMovie Pro. It’s very quick to edit with, there are lots of powerful controls to do things that can be tiresome in Final Cut Pro, the interface is clean and uncluttered, and there are ways to bend the way the application works into a professional workflow. And by professional, I mean an environment where you’re earning money from editing footage according to the desires and ultimate direction of a client – specifically, where ‘I can’t do that’ doesn’t enter the equation unless budgets say otherwise.

The release of iMovie Pro has been somewhat mucked up by its publisher, Apple. They’ve decided to release it under the ‘Final Cut’ brand, and this has caused a backlash in their established user community. In doing so, they’ve elevated expectations, as the FCP brand is a ten-year-old product that, while creaking a bit in its old age, has a reliable and stable workflow with lots of workarounds to hide the issues of such an old product. To introduce this new package as its next generation is about as subtle and believable as a 1920s SFX shot of teleportation.

Let’s say I cut Apple some slack here: Final Cut Pro was born in the mid 1990s as a PC package, then ported over to Apple’s senescent OS9 and vintage QuickTime technologies that were approaching their own ‘End of Life’ or ‘Best Before’ dates. Nevertheless, Apple soldiered on and built a strong following in the Non Linear Editing market, excusing FCP’s little ‘ways’ like one ignores the excessive, erm, ‘venting of gas’ from a beloved Labrador.

As time goes on, Apple has to look at the painful truth that FCP is getting old. It’s just not able to easily evolve into 64 bit and new video technologies, and rewriting it from the ground up could be a long, frustrating process of ‘recreating’ things that shouldn’t be done in ‘modern’ software. After a few big efforts, it becomes painfully obvious that we can’t make a bionic Labrador.

So Apple were faced with a difficult choice: rebuild their dog, their faithful friend, warts and all, from the ground up, which will please a few but will never help the greater audience, or… and this is hard to emote: shoot it in the head, kill it quickly, and do a switcharoo with their young pup iMovie, fresh out of Space Cadet Camp, full of zeal and spunk for adventure but still a little green.

So here’s where the scriptwriter faces a dilemma. Do we do a Doctor Who regeneration sequence, or do we do a prequel reboot à la Abrams’ Star Trek? Or do we substitute an ageing star with a young Turk with his own ideas on the role, and hope the audience buys it?

Exactly.

Imagine if Apple said this: ‘hey guys, FCP can go no further. Enjoy it as is. From now on, we’re investing in iMovie’s technologies and will make it the best editor ever – our first version is for ‘The Next Generation’, but it’s going to grow and develop fast, it is tomorrow’s editor, it does stuff you’ll need in the future – welcome to iMovie Pro’.

Okay, so you’d have to invest $400 in this new platform, but it’s got potential. Imagine letting producers do selects on an iPad, emailing you their collections ready for you to edit. Imagine identifying interviewees (not in this release) and linking them to lower third and consent metadata, or (as would have been really useful) ‘find this person (based on this photo) in my rushes’ (again, not in this version but the hooks are there). Imagine not having to do all the grunt work of filing twiddly bits, or identifying stuff shot in Slough. This is clever. This is exciting. And skimming? Actually yes – I like that.

But if Apple tries to sell us all this sizzle as Final Cut Pro, I want my controls and my media management clarity. I want to know why I am paying $400 for an upgrade that gives me less features.

The new FCP-X has iMovie icons (see those little ‘stars’ on projects?), offers iMovie import, looks like iMovie, works like iMovie, has iMovie features and then some. It IS iMovie Pro, and I am happy with that. All the crap that Apple get for calling it Final Cut Pro – which it most certainly and definitely (nay, defiantly) is NOT – is fully deserved. May they be bruised and battered for their arrogance.

Apple: rename FCP-X to iMovie Pro. It’s the truth, and it’s good.

IP Videography

I’m shooting timelapse today – a build of an exhibition area. However, the brief posed some challenges that meant my usual kit would be not just inconvenient but almost impossible to use.

The exhibition area needed to be filmed from high up, but there were no vantage points a person could film from. It meant fixing a camera to a bit of building, then running a cable. There were no convenient power outlets nearby, either. Once rigged, the cameras would be inaccessible until the show was over. The footage would be required BEFORE the cameras were taken down. And there wasn’t a limitless budget.

So… we couldn’t strap a camcorder or DSLR up – how would you get the footage? How would you change battery? Webcams need USB or are of limited resolution. Finally, I settled on a pair of SNC-CH210 ‘IP’ Cameras from Sony (supplied by Charles Turner from Network Video Systems in Manchester). These are tiny, smaller than the ‘baby mixer’ tins of tonic or cola you’d find on a plane. They can be gaffer taped to things, slotted into little corners, flown overhead on lightweight stands or suspended on fishing line.

The idea is that these cameras are ‘Internet Protocol’ network devices. They have an IP address, they can see the internet, and if you have the right security credentials, you can see the cameras – control them – from anywhere else on the internet using a browser. The cameras drop their footage onto an FTP server (mine happens to be in the Docklands, but it could be anywhere). They have but one cable running to them – an Ethernet Cat5e cable – which also carries power from a box somewhere in between the router and the camera. Ideal for high end security applications, but pretty darn cool for timelapse too!

So I’m sitting here watching two JPEGs, one from each camera, land in my FTP folder every minute. I can pull them off and use QuickTime Player Pro’s ‘Open Image Sequence’ function to convert the list of JPEGs into a movie at 25fps, to see how the timelapse is going. So far, so good.
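
For the record, the fetching side is a few lines of Python with the standard library’s ftplib (the server name, login and folder below are invented stand-ins for mine):

    import os
    from ftplib import FTP

    ftp = FTP("ftp.example.com")          # hypothetical server
    ftp.login("timelapse", "secret")      # hypothetical credentials
    ftp.cwd("/camera1")

    for name in sorted(ftp.nlst()):       # one JPEG per minute lands here
        if name.endswith(".jpg") and not os.path.exists(name):
            with open(name, "wb") as f:
                ftp.retrbinary("RETR " + name, f.write)

    ftp.quit()

The downloaded folder then goes straight into ‘Open Image Sequence’ at 25fps for a progress check.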

The most difficult thing, which I had to turn to help for, was the ‘out of the box’ experience of assigning each camera an IP address. Being a Mac user with limited networking skills, the PC-only software with instructions written in Ancient Geek was of no help. A networking engineer soon had them pulling their identities off DHCP, and other than one mis-set DNS, it was a smooth process to show each camera where the FTP server was, and what to put there.

It was quite a surreal experience, sitting on the empty floor of the NEC with nothing but a wifi connection on my MacBook Pro, adjusting the cameras on a DIFFERENT network, and checking the results from my FTP server somewhere ‘in the cloud’.

The quality is okay, but not spectacular – I’d say it’s close to a cheap domestic HDV camcorder. But at a few hundred quid each, they’ll pay for themselves almost immediately, and they’ll get rolled out again and again. I doubt they would be of interest to the likes of Mr Philip Bloom et al. Notwithstanding that, I just need to sharpen my networking and cable-making skills!

Matt administering an IP camera from Wifi