HD-SDI Embedding

On a recent job, I had a chance to work with the Atomos Samurai – a recorder that creates either ProRes or DNxHD files from HD-SDI video, rather than the more consumerist (but just as good quality) HDMI signals I usually deal with. For the last few years I have eschewed the extra expense of HD-SDI kit in favour of ‘that will do nicely’ HDMI, but I think I’ve found a good business case for re-thinking that.*

The job was to record the vision-mixed feed from an Outside Broadcast truck filming an awards ceremony. We had, in fact, each of the 5 cameras recording to AJA KiPros, but two copies of the finished programme needed to go to two separate editors (myself and Rick, as it happens, working on two entirely separate edits) as soon as the event finished – even the time spent copying from the KiPro drive to another disk would have taken too long. So we added Rick’s Samurai to the chain.

We learned a couple of interesting things in preparation for the job.

The first is ‘how to reliably power a Samurai’ – its neat little case doesn’t have a mains adaptor in it, although it will happily run for hours on Sony NP-F style batteries (you can A-B roll the batteries too, swapping one while the unit runs off the other). However, I didn’t want to have to think about checking batteries – I wanted to switch it to record, then switch it off at the end of the gig, as I had other things to worry about (cutting 5 cameras, after shooting ‘Run & Gun’ style all day).


The Samurai (and Ninja) can be powered off a Sony ‘dummy battery’ supplied with Sony battery chargers and some camcorders. Plug the dummy battery into the Samurai, connect it to the charger, switch the charger to ‘Camera’ mode and behold – one mains-powered Samurai.

The second point is thanks to Thomas Stäubli (OB truck owner) and Production Manager Arndt Hunschok, who set up the audio in a very clever way that gave me a unique opportunity to fix the edit’s music tracks.

Unlike HDMI, HD-SDI has 8 audio tracks embedded in the signal. The sound engineer kindly split his mix into 4 stereo groups: a mixed feed, audio from the presenter microphones, audio from directional microphones pointing at the audience (but away from the PA speakers), and a clean music feed.
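
Purely for illustration, that works out to a channel map something like this – my guess at the pairing, not a record of how the truck was actually patched:

```python
# Hypothetical layout of the 8 embedded HD-SDI audio channels as four
# stereo groups - the real assignment on the OB truck may have differed.
EMBEDDED_AUDIO = {
    (1, 2): "programme mix",
    (3, 4): "presenter microphones",
    (5, 6): "audience microphones (pointed away from the PA)",
    (7, 8): "clean music feed",
}
```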

The practical upshot was that I was able to edit several versions of the 90 minute awards ceremony (30, 8 and 3 minute versions) without the music, then re-lay the music stings (from the clean feed, or replaced with licensed alternatives for the DVD version) where appropriate, thus producing a very slick result and saving a lot of time and hair-pulling (or sad compromises) in the edit suite.

Technically, the Samurai footage came straight in and ready to edit with its 8 audio tracks in frame accurate sync (of course). I was able to slice it up and do a pre-mix of the required tracks.

In the past, this has been a bit of a nightmare. This time, it was easy to take audio from the stage and play with the timings for music cues.

A short technical note: be it HDMI or HD-SDI, your picture is made up of 1s and 0s, so there’s no technical difference in quality if both are fed the same source**. The audio, however, is interesting. Most of the time, shooting indie films or simple corporates, you’re not going to need lots of separated tracks. When it comes to live performances or panel debates, however, the 8 tracks of HD-SDI can significantly offset the extra cost of the technology by saving time in the edit suite. Well worth a conversation with your Technical Director or supplier to sort out the ‘sub-mixes’ (separating your audio feed into channels) and ‘embedding’ (entwining the audio channels into the HD-SDI feed).

It’s odd that this hasn’t occurred to me before – the facility has been there, but perhaps it’s that last bit of kit – the ‘HD-SDI Audio Embedder’ available from suppliers like Blackmagic Design and AJA – that’s been hiding its light under a bushel. It is probably the least sexy item on one’s shopping list. Not the sort of thing that crops up for the journeyman videographer, but just the sort of thing to remember when specifying larger jobs with rental kit.

So, note to self: when dealing with complex audio, remember HD-SDI Audio Embedders, HD-SDI recorders.

And again, my thanks to Thomas Stäubli and Arndt Hunschok for their assistance and patience.


* One of the main business cases for HD-SDI (and good old SDI before that) is that it uses the standard BNC connector that has long been the main ‘video’ connector in the broadcast industry. The BNC connector has a rotating cuff around the plug that locks it into the socket so it doesn’t accidentally get pulled out (much as an XLR latches). HDMI – and its horrible mutated midget bastard offspring ‘Mini-HDMI’ – can work its way loose and pop out of a socket with sickening ease, thus any critical HDMI-connected kit usually has a heavily guarded ‘exclusion zone’ round it where no mortals are allowed to tread, and sometimes bits of gaffer tape just to make sure – in fact there is a portion of the ‘aftermarket video extras’ industry that makes brackets designed to hold such cables into cameras and recorders. And, at risk of turning a footnote into an article, SDI/HD-SDI travels over ordinary 75 Ohm coax over long distances, unlike the multicore short lengths of overpriced HDMI cables. So, yes, HD-SDI makes sense purely from a connector point of view.

** Notwithstanding the 4:4:4:4 recorders from Convergent Design and now Sound Devices. Basically, a 1.5G HD-SDI signal carrying a 10 bit 4:2:2 output will be indistinguishable from an HDMI signal carrying a 10 bit 4:2:2 signal, and many cameras with both HDMI and HD-SDI output 4:2:2 8 bit video signals anyway. But HDMI only does 2 channel audio whereas HD-SDI does 8. Back to the story…

Dealing with 109% whites – the footage that goes to 11

Super-whites are a quick way of getting extra latitude and preventing the video tell-tale of burned-out highlights, by allowing brighter shades to be recorded above the ‘legal’ 100% of traditional video. However, it’s come to my attention that some folk may not be reaping the advantages of super-whites – or are even finding footage ‘blown out’ in the highlights where the cameraman is adamant that the zebras said it was fine.

So, we have a scale where 0% is black and 100% is white. 8-bit video assigns numbers to brightness levels, but contains wriggle room: given the Magic Numbers of Computing, 0–255, you’d assume black sits at 0 and white ends up at 255. Alas not. Black starts at 16, and white ends at 235. Super-whites use the extra room from 235 to 255 to squeeze in a little more latitude, which is great.
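
To see where the ‘109%’ figure comes from, here’s a quick back-of-the-envelope sketch (my own arithmetic, not from any manual) mapping 8-bit code values onto the percentage scale:

```python
# Rough sketch: mapping 8-bit code values to the 0-100% video scale,
# where 16 = 0% (black) and 235 = 100% (white). Everything above 235
# is super-white headroom.
def code_to_percent(code):
    return (code - 16) / (235 - 16) * 100

print(round(code_to_percent(16)))   # 0   -> black
print(round(code_to_percent(235)))  # 100 -> legal white
print(round(code_to_percent(255)))  # 109 -> the very top of the super-whites
```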

But that’s on the camera media. Once you get into the edit software, you need to make sure you obey the 100% white law. And that’s where things go a bit pear shaped.

If you can excuse my laundry, here’s a shot with 109% whites – note them peeping up above the 100% line in the waveform monitor:


[Image: FS100 footage straight into FCPX – whites peeping above the 100% line on the waveform monitor]

Note also that the fluffy white clouds are blown – there’s ugly detail snapping from pale blue to white around them. Although I exposed this so the whites only just reached 109%, the monitor shows us the 100% view, so it looks overexposed as far as the editor is concerned.

So in my NLE – in this case, Final Cut Pro X – I drop the exposure down, and everything sits nicely on the chart. I could pull up the blacks if necessary…

[Image: FCPX with the exposure dropped – luma now sitting within 100%]

But I’ve been told about an app called 5DtoRGB, which pre-processes your 109% superwhite footage to 100% as it converts to ProRes:

[Image: 5DtoRGB conversion brought into FCPX – whites under 100%]

Note that while the whites are indeed brought down to under 100%, the blacks are still quite high and will, in my opinion, require pulling down. 5DtoRGB also takes a lot longer to process its ProRes files – I’ve heard reports of 10x longer than FCP7’s Log & Transfer, but I’ve not tested this myself.

I did some tests in Adobe Premiere CS6, which demonstrates the problem. We start with our NATIVE AVCHD clip, with whites happily brushing the 109% limit as before. These are just 1s and 0s, folks. It should look identical – and it does. Info in the Waveform Monitor, blown whites in the viewer.

Another technical note: the FCPX Waveform Monitor talks about 100% and 0%, but Adobe’s WFM uses the ‘voltage’ metaphor – analogue video signals were ‘one volt high’, but 0.3 volts were given over to timing signals, so the remaining 0.7 volts were used to go from black (0.3 volts) to white (1 volt). So 0.3 = black in Adobe’s WFM. And another thing – I’m from a PAL country, and never really got used to NTSC in analogue form – if I remember correctly, blacks weren’t exactly at 0.3 volts, also known as IRE=0 – they were raised for some reason to IRE=7.5, thus proving that NTSC, with its drop frames, 29.97 fps, error-prone colour phase and the rest, should be buried in soft peat and recycled as firelighters. But I digress.
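
If it helps to see the two scales side by side, here’s a tiny convenience sketch (my own, nothing official from either NLE) converting the FCPX-style percentage into Adobe’s ‘voltage’ reading:

```python
# Rough mapping from the 0-100% scale to the analogue 'voltage' metaphor
# used by Adobe's waveform monitor: 0% (black) = 0.3 V, 100% (white) = 1.0 V.
def percent_to_volts(percent):
    return 0.3 + 0.7 * (percent / 100)

print(round(percent_to_volts(0), 3))    # 0.3   -> black
print(round(percent_to_volts(100), 3))  # 1.0   -> legal white
print(round(percent_to_volts(109), 3))  # 1.063 -> super-whites poke over the top
```

Right – back to Premiere: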

[Image: the native AVCHD clip in Premiere – whites brushing the 109% limit]

Let’s get our Brightness and Contrast control out to bring the 109s down to 100:

[Image: Premiere’s Brightness & Contrast filter applied – everything above 100% clipped off]

Hold on a tick – we haven’t adjusted ANYTHING, and Premiere has run a chainsaw along the 100% line. That white detail is gone until you remove the filter – you can’t get it back whilst the Brightness & Contrast filter is there. Maybe this isn’t the right tool to use, but you’d think it would do something? Not just clip straight away?

I tried Curves:

[Image: Premiere’s Curves filter – whites pulled down, with a pattern of horizontal lines appearing in the WFM]

It’s tricky, but you can pull down the whites – it’s not pretty. Look how the WFM has a pattern of horizontal lines – that’s nastiness being added to your image. The highlights are being squashed, but you can’t bring your blacks down.

So finally, I found ‘ProcAmp’ (an old fashioned term for a Processing Amplifier – we had these in analogue video days). This simply shifts everything down to the correct position without trying to be clever:

[Image: Premiere’s ProcAmp filter – the whole signal shifted down into legal range]

At last. We have our full tonality back, and under our control.

With all these issues, and probably some misunderstanding about 109%, I can see the desire for something safe and quick – hence the new FS700 cinegammas in the form of CineGamma 2, which only allows 100% whites, and ditto Rec709 in the FS100. But forewarned is forearmed.

I donate the last 9% of my brightness range to specular highlights and the last shreds of detail in the sky, so I can have that ‘expensive film look’ of rolled off highlights. But if I didn’t haul them back into the usable range of video, all that stuff would appear as burned out blobs of white – ugly. However, I also spent a bit of time testing this out when I switched from FCP7 to FCPX, as the former took liberties with gamma so you could get away with things. The bugs in FCPX and Magic Bullet made me check and check again.

It’s been worth it.

FCPX – partying with your Flaky Friend


UPDATE: Compound Clips – specifically splitting Compound Clips and, worst of all, splitting a compound clip that has itself been compounded – increase project complexity exponentially. Thus, your FCPX project quickly becomes a nasty, sticky, crumbly mess.

Which is a shame, because Compound Clips are the way we glue audio and video together, how we manage complexity with a magnetic timeline, and how we butt disparate sections together to use transitions. Kind of vital, really.

Watch these excellent demonstration videos from T. Payton, who hangs out at fcp.co:

These refer to version 10.0.1; at the time of writing we’re on 10.0.3, but I can assure you that we STILL have this problem (I don’t think it’s a bug, I think it’s simply the way FCPX does Compound Clips). We return you to your original programming…

Okay, report from the trenches: Final Cut Pro 10? Love it – with a long rider in the contract.

I’m a short-form editor – most of my gigs are 90 seconds to 10 minutes (my record is 10 seconds and I’m proud of it). Turn up ‘Somewhere in Europe’, shoot interviews, General Views, B-Roll, get something good together either that night or very soon afterwards, publish to the web, or to the big screen, or push out to mobiles and iPads…

This is where FCPX excels. As an editorial ‘current affairs’ segment editor, it’s truly a delight. I bet you slightly overshot? Got a 45 minute take on an interview that needs to be 45 seconds? Range based favourites are awesome, and skimming lets you find needles in a haystack. Need to edit with the content specialist at your side? The magnetic timeline is an absolute joy, and don’t get me started about auditioning.

It’s true: in cutting down interviews, in throwing together segments, and especially when arguing the toss over telling a given story, I’m at least twice as fast and so much more comfortable throwing ideas around inside FCPX.

But my new Editing Friend is a ‘Flaky Friend’.

She really should be the life and soul of the party, but somehow there’s a passive aggressive diva streak in her.

There are three things she doesn’t do, and it’s infuriating:

  • She doesn’t recognise through-edits – they can’t be removed; they are, to her, like caesarean scars, tribal tattoos (or so she claims), cuts of honour. We tell her we’re cutting soup at this stage, but no. ‘Cuts are forever’ she says, like the perfect NLE she thinks she is.
  • She doesn’t paste attributes selectively – it’s all or nothing. ‘We must be egalitarian’ she croons; what is good for one is good for all, apparently. You can’t copy a perfect clip and paste only its colour correction onto another clip – you must paste EVERYTHING, destroying your sound mix, forcing extensive rework, and heaven help you if you change your mind.
  • She flatly refuses to accept that there is already a way we all do common things, and wants to do them her own kooky way. Making J and L cuts into a Tea Ceremony, the blind assumption that a visual transition needs an audio transition even if we’ve already done the groundwork on the audio… girl, the people who think you’re being cute by insisting on this are rapidly diminishing to the point you can count them on your thumbs, and we do include you in that list.

So okay, she’s a good gal at heart. Meaning the best for you. But she needs to bail out and quit every so often, especially if you’re used to tabbing between email, browser, Photoshop, Motion et al. She’ll get all claustrophobic, and you’ll be waiting 20-40 seconds with the spinning beachball of death between application switches. It’s all a bit too much like hard work. ‘I can’t cope’, she sighs – and spins a beachball like she smokes a cigarette. We stand around, shuffling our feet as she determinedly smokes her tab down to the butt. ‘Right!’ she shouts at last. ‘Let’s get going!’

And yes, it’s great when things are going right.

But put her under pressure – a couple of dozen projects at hand, some background rendering to do – and it all gets very ‘I’m going to bed with a bottle of Bolly’. I’m getting this an awful lot now. I really resent being kept hanging around whilst she takes 5 FRICKIN’ MINUTES to change a 5 word caption in a compound clip, I resent every minute of waiting for projects to open and close, and whilst it’s lovely to see her skip daintily through all that fun new footage, when it comes down to the hard work, she’s so not up to it…

I am twice as fast at editing in FCPX, but I am a quarter of the speed when doing the ‘maid of all work’ cleaning up and changes. It means that, actually, I am working twice as hard in X as I was in 7, just mopping up after this flaky friend who has a habit of throwing up in your bathtub and flashing that shit-eating grin as she raids your fridge of RAM and CPU cycles.

Well, FCPX dear, my flaky friend, you’re… FIRED.

If Apple called it iMovie Pro…

I’m very impressed with iMovie Pro. It’s very quick to edit with, there are lots of powerful controls to do things that can be tiresome in Final Cut Pro, the interface is clean and uncluttered, and there are ways to bend the way the application works into a professional workflow – and by professional, I mean an environment where you’re earning money from editing footage according to the desires and ultimate direction of a client – specifically where ‘I can’t do that’ doesn’t enter the equation unless budgets say otherwise.

The release of iMovie Pro has been somewhat mucked up by its publisher, Apple. They’ve decided to release it under the ‘Final Cut’ brand, and this has caused a backlash in their established user community. In doing so, they’ve elevated expectations: the Final Cut Pro brand is a ten-year-old product that, while creaking a bit in its old age, has a reliable and stable workflow with lots of workarounds to hide the issues of such an old product. To introduce this new package as its next generation is about as subtle and believable as a 1920s SFX shot of teleportation.

Let’s say I cut Apple some slack here: Final Cut Pro was born in the mid 1990s as a PC package, then ported over to Apple’s senescent OS9 and vintage QuickTime technologies that were approaching their own ‘End of Life’ or ‘Best Before’ dates. Nevertheless, Apple soldiered on and built a strong following in the Non Linear Editing market, excusing FCP’s little ‘ways’ like one ignores the excessive, erm, ‘venting of gas’ from a beloved Labrador.

As time goes on, Apple has to look at the painful truth that FCP is getting old. It’s just not able to easily evolve into 64 bit and new video technologies, and rewriting it from the ground up could be a long, frustrating process of ‘recreating’ things that shouldn’t be done in ‘modern’ software. After a few big efforts, it becomes painfully obvious that we can’t make a bionic Labrador.

So Apple were faced with a difficult choice: rebuild their dog, their faithful friend, warts and all, from the ground up, which will please a few but will never help the greater audience, or… and this is hard to emote: shoot it in the head, kill it quickly, and do a switcharoo with their young pup iMovie, fresh out of Space Cadet Camp, full of zeal and spunk for adventure but still a little green.

So here’s where the scriptwriter faces a dilemma. Do we do a Doctor Who regeneration sequence, or do we do a prequel reboot à la Abrams’ Star Trek? Or do we substitute an ageing star with a young turk with his own ideas on the role and hope the audience buys it?

Exactly.

Imagine if Apple said this: ‘hey guys, FCP can go no further. Enjoy it as is. From now on, we’re investing in iMovie’s technologies and will make it the best editor ever – our first version is for ‘The Next Generation’, but it’s going to grow and develop fast, it is tomorrow’s editor, it does stuff you’ll need in the future – welcome to iMovie Pro’.

Okay, so you’d have to invest $400 in this new platform, but it’s got potential. Imagine letting producers do selects on an iPad, emailing you their collections ready for you to edit. Imagine identifying interviewees (not in this release) and linking them to lower third and consent metadata, or (as would have been really useful) ‘find this person (based on this photo) in my rushes’ (again, not in this version but the hooks are there). Imagine not having to do all the grunt work of filing twiddly bits, or identifying stuff shot in Slough. This is clever. This is exciting. And skimming? Actually yes – I like that.

But if Apple tries to sell us all this sizzle as Final Cut Pro, I want my controls and my media management clarity. I want to know why I am paying $400 for an upgrade that gives me fewer features.

The new FCP-X has iMovie icons (see those little ‘stars’ on projects?), offers iMovie import, looks like iMovie, works like iMovie, has iMovie features and then some. It IS iMovie Pro, and I am happy with that. All the crap that Apple get for calling it Final Cut Pro – which it most certainly and definitely (nay, defiantly) is NOT – is fully deserved. May they be bruised and battered for their arrogance.

Apple: rename FCP-X to iMovie Pro. It’s the truth, and it’s good.

IP Videography

I’m shooting timelapse today – a build of an exhibition area. However, the brief posed some challenges that meant my usual kit would be not just inconvenient, but almost impossible to use.

The exhibition area needed to be filmed from high up, but there were no vantage points a person could film from. It meant fixing a camera to a bit of building, then running a cable. There were no convenient power outlets nearby, either. Once rigged, the cameras would be inaccessible until the show was over. The footage would be required BEFORE the cameras were taken down. There wasn’t a limitless budget either.

So… we couldn’t strap a camcorder or DSLR up – how would you get the footage? How would you change the battery? Webcams need USB or are of limited resolution. Finally, I settled on a pair of SNC-CH210 ‘IP’ cameras from Sony (supplied by Charles Turner from Network Video Systems in Manchester). These are tiny – smaller than the ‘baby mixer’ tins of tonic or cola you’d find on a plane. They can be gaffer-taped to things, slotted into little corners, flown overhead on lightweight stands or suspended on fishing line.

The idea is that these cameras are ‘Internet Protocol’ network devices. They have an IP address, they can see the internet, and if you have the right security credentials, you can see – and control – the cameras from anywhere else on the internet using a browser. The cameras drop their footage onto an FTP server (mine happens to be in the Docklands, but it could be anywhere). They have but one cable running to them – an Ethernet Cat5e cable – which also carries power (Power over Ethernet) from a box somewhere between the router and the camera. Ideal for high-end security applications, but pretty darn cool for timelapse too!

So I’m sitting here watching two JPEGs, one from each camera, land in my FTP folder every minute. I can pull them off, use the QuickTime Player Pro’s ‘Open Image Sequence’ function to then convert this list of JPEGs into a movie at 25fps to see how the timelapse is going. So far, so good.
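
That ‘pull them off’ step is easy to script, too. Here’s a minimal sketch of the idea, assuming a plain FTP server – the host name, login and folder below are placeholders rather than my actual setup:

```python
# Fetch the cameras' JPEG uploads from an FTP server so they can be fed to
# QuickTime's 'Open Image Sequence'. Host, credentials and paths are
# placeholders - substitute your own.
from ftplib import FTP
from pathlib import Path

FTP_HOST = "ftp.example.com"
REMOTE_DIR = "/camera1"                  # one folder per SNC-CH210
LOCAL_DIR = Path("camera1_frames")
LOCAL_DIR.mkdir(exist_ok=True)

ftp = FTP(FTP_HOST)
ftp.login(user="timelapse", passwd="secret")
ftp.cwd(REMOTE_DIR)

for name in sorted(ftp.nlst()):
    if not name.lower().endswith(".jpg"):
        continue
    target = LOCAL_DIR / name
    if target.exists():                  # only fetch frames we haven't got yet
        continue
    with open(target, "wb") as fh:
        ftp.retrbinary("RETR " + name, fh.write)

ftp.quit()
print(len(list(LOCAL_DIR.glob("*.jpg"))), "frames ready for the image sequence")
```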

The most difficult thing, which I had to turn to help for, was the ‘out of the box’ experience of assigning each camera an IP address. Being a Mac user with limited networking skills, the PC-only software with instructions written in Ancient Geek was of no help. A networking engineer soon had them pulling their identities from DHCP, and other than one mis-set DNS, it was a smooth process to show each camera where the FTP server was, and what to put there.

It was quite a surreal experience, sitting on the empty floor of the NEC with nothing but a wifi connection on my MacBook Pro, adjusting the cameras on a DIFFERENT network, and checking the results from my FTP server somewhere ‘in the cloud’.

The quality is okay, but not spectacular – I’d say it’s close to a cheap domestic HDV camcorder. But at a few hundred quid each, they’ll pay for themselves almost immediately, and they’ll get rolled out again and again. I doubt they would be of interest to the likes of Mr Philip Bloom et al. Notwithstanding that, I just need to sharpen my networking and cable-making skills!

Matt administering an IP camera from Wifi


Achieving ‘that video look’

Throughout the last 9 decades of cinema, Directors have been stuck with the same tired look forced upon them by the constraints of their technology. Cinematographers at the vanguard of their industry, disenchanted with the timelessness of film, are now looking to achieve that elusive ‘live’ look – video!

The world of moving pictures has gone by a number of pet names, one of which describes one of the pitfalls of having to pay for your recording medium by the half-cubit or ‘foot’ as some would say. ‘The Flicks’ were just that – flickering images in a dark room, destined to cause many a strained eye.

Whilst motion could be recorded at or above 20 frames per second, there was a problem: the human eye’s persistence of vision (that eye-blink time where a ghost of a bright image dances upon your retina) means you can perceive flicker up to about 40 flashes per second. So your movie had smooth movement at 24 or 25 frames per second, but it still flashed a bit.

Of course, clever engineers realised that you could show every frame TWICE: the lamp illuminates each frame through a revolving bow-tie cunningly pressed into service as a shutter, then the mechanism hauls the loop of film (due to mass, inertia, etc – tug the whole reel and you’d snap it) down one frame and gives that a double flash. Rinse, repeat. At 24 frames per second, two flashes per frame gives 48 flashes a second – comfortably above that 40-per-second flicker threshold.

Every student of film gets taught the special panning speed to avoid juddery images – then forgets it. Ditto the use of shutter angles beyond 180 degrees. And so we’re stuck with motion blur and the last vestiges of flicker in the eyes of an audience reared on a visual diet of 75fps video games.

A collection of film makers, some with their roots in the DV revolution of the 1990s, are looking to their true source of inspiration, trying to mimic the hallowed ‘television look’ by the simple expedient of shooting at a higher frame rate. This gives their work a sense of ‘nowness’, an eerie ‘look into the magical mirror’ feel.

As post-production 3D gains traction, Directors are taking a further leaf out of the Great Book Of Video by using a technique known as ‘deep depth of field’ – where the lens sharply records everything from the near to the far. An effect very reminiscent of the 1/3” class of DV camcorders. This will, of course, take huge amounts of lighting to achieve pinhole-like apertures in their ‘medium format’ cameras such as the Epic, Alexa and F65, but as leading lights such as James Cameron and Peter Jackson jump on the bandwagon, the whole industry can now concentrate on achieving ‘That Video Look’.

TV Soup – or how video compression really works

A little while ago, I got embroiled in a discussion about editing footage from DSLRs and why it isn’t always a good idea to edit the original camera files. I repeat a condensed version of the rant here for some light relief – but please imagine it as delivered by the inimitable Samuel L. Jackson…

When your DSLR camera records video, it needs to be space-efficient, as it has to deal with a lot of frames every second. Recording every frame in full does not leave enough time to actually capture subsequent frames and compress them nicely. It needs to do some Ninja Chops to do video.

Firstly, it does not record each frame as an image. It records a frame, and for every subsequent frame it only records the changes from the first frame. This may go on for, oooh, 15 frames or so. Then it takes a breath and records a full frame, then does the differences from THAT frame onwards.

Now imagine you are an editing application. Scooting around in that framework of real and imaginary frames means you’re spending most of your time adding up on your fingers and toes just to work out which frame you’re supposed to be displaying, let alone uncompressing that frame to display it.
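
If you like seeing that in concrete terms, here’s a toy sketch (lists of numbers standing in for frames – nothing like a real codec’s internals) of the bookkeeping involved:

```python
# Toy long-GOP storage: every 15th frame is stored whole (an I-frame),
# the rest store only the change from the previous frame. Getting at
# frame N means finding the last full frame and replaying every delta
# since - which is why scrubbing this kind of footage feels sluggish.
GOP = 15

def encode(frames):
    stream, prev = [], None
    for i, frame in enumerate(frames):
        if i % GOP == 0:
            stream.append(("I", list(frame)))                            # full frame
        else:
            stream.append(("P", [a - b for a, b in zip(frame, prev)]))   # differences only
        prev = frame
    return stream

def decode_frame(stream, n):
    start = (n // GOP) * GOP                      # last I-frame before n
    frame = list(stream[start][1])
    for _, delta in stream[start + 1 : n + 1]:    # replay the deltas
        frame = [a + b for a, b in zip(frame, delta)]
    return frame
```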

Oh yes. In order to edit, you have to DECOMPRESS frames to show them, and that takes time. It’s like making ‘packet soup’.

Your editing software is trying to snort up packet soup – dried bits of vegetable and stock. It has to add a specific amount of water to that mix, allow the dried bits of aforementioned stuff to absorb the water, then compartmentalise the soup into spoonfuls.

Lesser compressed soup (not H.264 freeze dried but ProRes/DNxHD ‘just add hot water’ concentrate) can do this quicker and better – and some say it tastes better too. If only these newfangled cameras stopped freeze-drying their soup and just stuck to boiling off the excess water like MPEG2 does, dang, that would be nicer.

So, when you take your camera originals in H.264, you have to carefully re-hydrate your freeze-dried movies, and allow them to slowly absorb their moisture in a long process called transcoding. Then gently simmer them to a stock soup concentrate, so your edit system can easily serve them up in 1-frame, 1-spoon servings, and you can edit them between the many hundreds of thousands of bowls that maketh the feast of your film.

You can have QuickTime soup. You can have Cineform soup. You can have DNxHD soup. H.264 soup is freeze dried and acquired through a straw. But H.264 soup is the size of a stock cube, and (for want of a better example) R3D is like canned soup – just requires a little reheating and a cup of cream.

Whichever way you capture and store it, we all watch soup.

Take your T2i footage, rehydrate it into the editing format you choose (can be ProRes, DNxHD, Cineform, hell, even XDCAM-EX) and then dish it up by editing and add your secret sauce to make it look/taste even finer. When you try to edit raw footage on most edit systems, you’re making soup into a condiment.

Thank you Mr Jackson.

Okay already, enough of the metaphor (and you’re spared the spatial compression stuff for now). CS5 does the ‘edit native H.264’ trick very well, so can other systems in the future, no doubt. But there is most definitely a time and a place for transcoding before editing. And I don’t think it’s going away.