Creating the Dance of the Seven Veils

Unboxing videos are an interesting phenomenon.

They don’t really count as ‘television’ or ‘film’ – in fact they’re not much more than a moving photo or even diagram. But they are part of the mythos of the launch of a new technical product.

I’ve just finished my first one – and it was ‘official’ – no pressure, then.

I first watched quite a few unboxing videos. This was, mostly, a chore. It was rapidly apparent that you need to impart some useful information to the viewer to keep them watching. Then there was the strange pleasure in ‘unwrapping’ – you have to become six years old all over again, even though – after a couple of decades of doing this – you’re more worried about what you’re going to do with all the packaging and when you can get rid of it.

So… to build the scene. My unpackable box was quite big. Too big for my usual ‘white cyclorama’ setup. I considered commandeering the dining room, but it was quite obvious that unless I was willing to work from midnight until six, that wasn’t going to happen. I have other work going on.

So it meant the office. Do I go for a nice Depth of Field look and risk spending time emptying the office of the usual rubbish and kibble? Or do I create a quiet corner of solitude? Of course I do. Then we have to rehearse the unpacking sequence.

Nothing seems more inopportune than suddenly scrabbling at something that won’t unwrap, or unfold, or look gorgeous. So, I have to unwrap with the aim of putting it all back together again – more than perfectly. I quickly get to see how I should pack things so it unpacks nicely. I note all the tricks of the packager’s origami.

So, we start shooting. One shot, live, no chance to refocus/zoom, just keep the motion going.

I practice and practice picking up bundles of boring cables and giving them a star turn. I work out the order in which to remove them. I remember every item in each tray. Over and over again.

Only two takes happened without something silly happening – and after the second ‘reasonable’ take, I was so done. But still, I had to do some closeups and some product shots. Ideally everything’s one shot, but there are times when a cutaway is just so necessary – and I wish I’d shot more.

Learning Point: Film every section as a cutaway after you do a few good all-in-one takes.

Second big thing, which I kinda worked out from the get-go: don’t try to do voiceover and actions at the same time. We’re blokes – multitasking doesn’t really work. It’s a one-take job and you just need to get the whole thing done.

Do you really need voiceover, anyway? I chickened out and used ‘callout’ boxes of text in the edit. This was because I had been asked to make this unboxing video and to stand by for making different language versions – dubbing is very expensive, transcription and translation for subtitles can be expensive and lead to lots and lots of sync issues (German subs are 50% more voluminous than English subtitles and take time to fit in).

So, a bunch of call-out captions could be translated and substituted pretty easily. Well, that’s the plan.

Finally, remember the ‘call to action’ – what do you want your viewers to do having watched the video? Just a little graphic to say ‘buy here’ or ‘use this affiliate coupon’ and so on. A nod to the viewer to thank them for their attention.

And so, with a couple of hundred views in its first few hours of life, it’s not a Fenton video, but it’s out there stirring the pot. I’d like to have got more jokes and winks in there, but the audience likes these things plain and clear. It was an interesting exercise, but I’m keen to learn the lessons from it. Feedback welcomed! What do you want from an Unboxing Video?

Thunderbolt Strikes Back

UPDATE: Writing >2GB files on SSDs >240GB with the Seagate GoFlex Thunderbolt Adaptor can cause the drive to unexpectedly dismount from your computer with an error (-50). Read the full story and the solution by Wolfgang Bauer.

Following on from my USB3 testing, I’ve finally received an interesting box – the Seagate GoFlex Thunderbolt Adaptor, now in stock for about £100.

The cool trick is that you can connect any ‘bare’ Solid State Drive (SSD) to it, and the price of SSDs is coming down quickly. A 256GB SSD can be had for under £140. Of course you then have to add your £40 cable, but assuming you can use one adaptor and cable for all your SSDs, we finally have a Thunderbolt SSD solution for editing (and archiving/backing up to USB3).

The downside is that you’ll probably want to keep the bare drive in the adaptor with some elastic bands or something – very high tech.

So, why would you want to do this?

Because it’s freaking fast. That’s why. Editing with Final Cut Pro 10 on this drive is the sort of experience we assumed it would be. No spinny beachballs of death, no stuttering, just ‘slick demo’ performance.

Drive                                    Write (MB/s)   Read (MB/s)
Slow, cheap USB2 drive                       21.6           26.2
Western Digital Passport SE (USB2)           30.1           32.8
LaCie Quadra 7200rpm 2TB on FW800            46.6           44.5
Crucial 256GB SSD on FW800                   75.0           81.6
Western Digital Passport SE (USB3)           96.3          108.4
Internal 512 GB SSD in MacBook Pro 17”       88.8          167.0
ThunderBolt with SSD                        266.3          381.3

We’re talking 5x the speed of FW800 write, 8x the speed of FW800 read. And then there’s file access times.
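In fact the headline figures are being modest – a quick back-of-envelope sketch using the MB/s numbers from the table above:

```python
# Speed-up of the Thunderbolt/SSD combination over the LaCie Quadra on
# FW800, using the MB/s figures from the table above.
fw800_write, fw800_read = 46.6, 44.5
tbolt_write, tbolt_read = 266.3, 381.3

write_ratio = tbolt_write / fw800_write
read_ratio = tbolt_read / fw800_read

print(f"Write: {write_ratio:.1f}x, Read: {read_ratio:.1f}x")  # Write: 5.7x, Read: 8.6x
```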

With the same cable and adaptor, you can purchase additional SSDs for £140, and pretty soon we’ll see half a terabyte for under £200.

For my little industry niche, that means one SSD per edit for the duration of that edit, then archived off to hard drives so the SSD can be recycled – and I’d be happy to own maybe even a dozen of them, which I couldn’t afford with the current clutch of Thunderbolt/SSD combinations.

As I edit on site a lot, this can mean little gotchas like power suddenly dipping or going off entirely. Not talking third-world style – just leaving a render running in your hotel room while Housekeeping thoughtfully removes the room key you taped into the slot to keep the power on. Or that moment after a big event when you’re backing up, and the 3-phase power you’ve been living on gets pulled for the de-rig. Which is why I’m so passionate about bus-powered drives that can work with a laptop editing computer.

And then there’s the scary ‘let’s edit this in the car’ or ‘let’s log rushes on the plane’ – with spinning disks? No. Ah – how about SSD? Fine. I’ll be archiving ‘media managed’ videos to thumb drives next.

It makes hard disks feel as old fashioned as tape.

FCPX – partying with your Flaky Friend

UPDATE: Compound Clips, specifically splitting Compound Clips, and worst of all, splitting a compounded clip that’s been compounded, increases project complexity exponentially. Thus, your FCPX project quickly becomes a nasty, sticky, crumbly mess.

Which is a shame, because Compound Clips are the way we glue audio and video together, how we manage complexity with a magnetic timeline, and butt disparate sections together to use transitions. Kind of vital, really.

Watch these excellent demonstration videos from T. Payton who hangs out at fcp.co:

These refer to version 10.0.1; at the time of writing we’re at 10.0.3, but I can assure you that we STILL have this problem (I don’t think it’s a bug, I think it’s the way FCPX does Compound Clips). We return you to your original programming…

Okay, report from the trenches: Final Cut Pro 10? Love it – with a long rider in the contract.

I’m a short-form editor – most of my gigs are 90 seconds to 10 minutes (my record is 10 seconds and I’m proud of it). Turn up ‘Somewhere in Europe’, shoot interviews, General Views, B-Roll, get something good together either that night or very soon afterwards, publish to the web, or to the big screen, or push out to mobiles and iPads…

This is where FCPX excels. As an editorial ‘current affairs’ segment editor, it’s truly a delight. Bet you slightly overshot? Got a 45 minute take on an interview that needs to be 45 seconds? Range-based favourites are awesome, and skimming lets you find needles in a haystack. Need to edit with the content specialist at your side? The magnetic timeline is an absolute joy, and don’t get me started about auditioning.

It’s true: in cutting down interviews, in throwing together segments, and especially when arguing the toss over telling a given story, I’m at least twice as fast and so much more comfortable throwing ideas around inside FCPX.

But my new Editing Friend is a ‘Flaky Friend’.

She really should be the life and soul of the party, but somehow there’s a passive aggressive diva streak in her.

There are three things she doesn’t do, and it’s infuriating:

  • She doesn’t recognise through-edits – they can’t be removed; they are, to her, like caesarean scars, tribal tattoos (or so she claims), cuts of honour. We tell her we’re cutting soup at this stage, but no. ‘Cuts are forever’ she says, like the perfect NLE she thinks she is.
  • She doesn’t paste attributes selectively – it’s all or nothing. ‘We must be egalitarian’ she croons. What is good for one is good for all, apparently. You can’t copy a perfect clip and apply only its colour correction to the pasted clip – you must paste EVERYTHING, destroying your sound mix and demanding extensive rework, and heaven help you if you change your mind.
  • She flatly refuses to accept that there is already a way we all do common things, and wants to do it her own kooky way. Making J and L cuts into a Tea Ceremony, blindly assuming that a visual transition needs an audio transition even if we’ve already done the groundwork on the audio… girl, the people who think you’re being cute by insisting on this are rapidly diminishing to the point you can count them on your thumbs, and we do include you in that list.

So okay, she’s a good gal at heart. Meaning the best for you. But she needs to bail out and quit every so often, especially if you’re used to tabbing between email, browser, Photoshop, Motion et al. She’ll get all claustrophobic, and you’ll be waiting 20-40 seconds with the spinning beachball of death between application switches. It’s all a bit too much like hard work. ‘I can’t cope’, she sighs – and spins a beachball like she smokes a cigarette. We stand around, shuffling our feet as she determinedly smokes her tab down to the butt. ‘Right!’ she shouts at last. ‘Let’s get going!’

And yes, it’s great when things are going right.

But put her under pressure, with a couple of dozen projects at hand, some background rendering to do, it all gets very ‘I’m going to bed with a bottle of bolly’. I’m getting this an awful lot now, and I really resent being kept hanging around whilst she changes a 5 word caption in a compound clip that takes 5 FRICKIN’ MINUTES to change, I resent every minute of waiting for projects to open and close, and whilst it’s lovely to see her skip daintily through all that fun new footage, when it comes down to the hard work, she’s so not up to it…

I am twice as fast at editing in FCPX, but I am a quarter of the speed when doing the ‘maid of all work’ cleaning up and changes. It means that, actually, I am working twice as hard in X as I was in 7, just mopping up after this flaky friend who has a habit of throwing up in your bathtub and doing that shit-eating grin as she raids your fridge of RAM and CPU cycles.

Well, FCPX dear, my flaky friend, you’re… FIRED.

Tonalizer – a scalpel, not a bullet

Now we’re all shooting flat, how do we get our rushes looking their best? By grading. From the giddy high end of DaVinci down to the humble color board in FCPX, grading is the price we pay for creamy highlights, rich shadows and that ‘expensive’ cinematic look. And I’m in love with a new tool.

We shoot flat because the tonal range of traditional ‘video’ is far narrower than what modern CMOS sensors can handle – by modifying the mapping of brightness to the camera’s curve, we can squeeze in a couple of extra exposure stops if we’re careful.

Of course that makes the pictures look a little different. Highlights are pushed down a bit, shadows are pulled up a bit, and we get ‘flat’ looking pictures. They need to be graded in post to ‘roll off’ the last stop or two of highlights – brightening the highlights again, but without the awful cut-off of a ‘blown’ highlight: the ‘Tipp-Ex’ effect on foreheads, for example. Similarly, the shadows can be tamped down, but because they started life in a brighter realm, as they’re pushed down to more shadowy levels we retain the details without the boiling mass of noise we used to associate with them.

Of course, because EVERY shot you’ve taken has this flat profile, EVERY shot needs work in post before mortal humans can enjoy it. You can apply a ‘Look Up Table’ (a long list of ‘if it’s this level of brightness, make it that level of brightness’) if somebody has been kind or commercial enough to make one for your Non Linear Edit system (e.g. Technicolor profiles), but if you’re hand-rolling, you’re on your own.
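If ‘long list’ sounds glib, that really is all a 1D LUT is. Here’s a toy sketch – the curve is an invented gamma-style lift for illustration, not any real Technicolor profile:

```python
# A toy 1D LUT: for each of the 256 possible 8-bit input levels, the list
# holds the output level. This curve is invented for illustration only.
lut = [round(255 * (i / 255) ** 0.8) for i in range(256)]

def apply_lut(pixels, lut):
    """Map each pixel's brightness through the look-up table."""
    return [lut[p] for p in pixels]

# A 'flat' row of pixels: shadows get lifted, black and white points stay put.
print(apply_lut([0, 64, 128, 192, 255], lut))
```

A real profile LUT is built the same way, just with a curve measured from the camera rather than a formula.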

I’ve traditionally used Magic Bullet Looks and Colorista to do my conversion from flat to full, but with the transition to FCPX, we had to wait until fairly recently to get back this functionality. The Color Board doesn’t provide that kind of delicacy in curve manipulation, so other plug-in manufacturers have stepped in.

Personally, I preferred Magic Bullet Looks because of familiarity and the general ‘one filter to control them all’ approach, but it feels heavy going for FCPX – and for some reason it feels slower and heavier than it did in FCP7.

Then along comes Tonalizer.

If you’re used to the interface glitz of MBLII, Tonalizer’s dour set of sliders seems a little limiting. No faux colour balls, no pretty graphs, some curt labels, and that’s it.

But what it actually does is wonderful – it’s as if there are thousands of subtle adjustments it can make, all tamed down to a few sliders. FCPX may have a brightness slider, and you can watch the whole waveform get shifted up and down the IRE scale, ensuring your image is only correct at one minute point in the slider’s travel. Watch the brightness control in Tonalizer affect your waveform, though, and see how it’s nipping and tucking things at the top and bottom end of the scale, subtly redistributing the tonality over a very pleasant curve – whose shape you can then change with another slider.

Then there’s the ‘highlight rescue’ and ‘shadow boost’ that file down the sharp edges in highlights and shadows, with a form of contrast that subtly increases around areas of brightness transition and gives the merest hint of ‘phwoar’ (a UK idiom that I hope travels well). Of course, if you wind everything up to 11, your footage ends up like a dog’s dinner, but Tonalizer can handle subtlety.

It’s all very neat and handleable, it’s all very focused on footage that’s been shot on flat profiles, and tellingly, it’s got all the little things we need day to day:

  • Adaption will pull flat ‘log’ style rushes in to shape
  • Tint is good at removing the green pollution in Fluorescent lighting
  • Warmth simply nudges the colour temperature (won’t correct the WRONG colour temp, but handy in mixed conditions)
  • Protect Skin Tones will ring fence that little collection of tones so your lit interview is fine, but the green pool of background is improved

And then there’s the Detail Sharpener.

Sharpening is anathema to Narrative shooters, but in Corporates, sharp colourful pictures sell. Period. Not oversharpened ‘black ring around white objects’ horrible ‘in camera’ sharpening. Tonalizer just wafts some magic over the image and helps the camera’s inherent sharpness. You have turned the sharpening circuits off, haven’t you? Cameras don’t sharpen well as they have to do it in real time and it sharpens all the noise and crud. If you do it in post, the computer spends a little more time and care (with the appropriate software).

So Tonalizer lifts and separates, adds a bit of definition, respects skin tone, and even has little niceties like an ‘assist’ mode that flags clipped detail, plus a ‘safe range’ that gracefully protects your picture from harm when winding up the controls to higher values.

For FCPX users, there are two versions – one specifically set for Technicolor CineStyle favoured by many DSLR shooters. This shoots incredibly flat, and takes the DSLR brightness range to the very edge, producing clay-like skin tones and such milky images that it takes time and skill to bring back (but the results are worth it if you do have the time).

Shooting ultra flat does have some disadvantages – more noise in the shadows, but Tonalizer has a Noise Reduction function to help mitigate that. Another issue is that you are spreading a lot of info over an 8 bit image, and aggressive manipulation will degrade the image as your carefully spaced data gets pulled and pushed and bits fall between the ‘8-bit gaps’ and disappear for ever. Start yanking the sliders of any grading plug-in, and watch the waveform monitor, looking for the fine horizontal lines (gaps in the info) appear.
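You can watch those ‘8-bit gaps’ appear with a few lines of Python – a hypothetical flat image occupying levels 60–179, stretched out to the full 0–255 range:

```python
# Stretching a narrow 8-bit range to full range: count how many of the 256
# output levels actually get used afterwards.
flat = list(range(60, 180))   # a flat-profile image squeezed into 120 levels
stretched = [round((v - 60) * 255 / 119) for v in flat]

used = set(stretched)
print(len(flat), "input levels ->", len(used), "output levels used")
# The unused output levels are the fine horizontal gaps that appear on the
# waveform monitor after aggressive grading: there are only ever 120
# distinct values, however far apart you spread them.
```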

So there’s a ‘Comfy Camp’ of Picture Profile users who want enough brightness range in the highlights to do the roll-off thing, and enough tonality in the shadows, to create an ‘expensive’ if not completely ‘cinematic’ final image. This is where Tonalizer is a great tool for getting the look you want. It certainly floats my boat for corporate/commercial work, and has already supplanted MBL2 for my ‘jobbing’ work – quicker to get right, quicker to render, and I have to say I rather like its singleminded approach.

Am I giving up on Magic Bullet? Absolutely not. It goes further, does more… but it’s slower and easy to ‘overbake’ (I habitually dial back my MBL grades to 66%). For well exposed stuff that just needs to be clean, clear and smartened up, MBL is overkill and, used indiscriminately, can do more damage than good. Tonalizer is perfect for just a little bit of zip and starch.

It’s been out a while, but I didn’t really bother much because i) I had Magic Bullet Looks, and was happy with it, and ii) I thought it was quite expensive for what it was offering. I then saw Tonalizer demonstrated at the February MacVideo Live event (where 10 lucky attendees walked away with a copy – not me, alas) and managed to have a quick play with it. You will have to try it out on your own footage to realise its worth. I do note, however, that there have been a number of promotions, and fcp.co currently has a 35% discount going.

This is a blog, not a review, and I’m not particularly keen to get involved with promoting anything, but have been much enthused by Tonalizer, and for FCPX users, it’s well worth checking out, even if you do already have a plethora of level/curve based plug-ins.

The Light Fantastic

Just back from a manic week, shooting in Beirut, Cairo, then to Cambridge and finally to Edinburgh. We were shooting documentary style, interviews and GVs (General Views) or B-Roll, and Cutaways. The schedules were fluid, the locations unseen, and everything needed to be shot at NTSC frame rates. Immediately, my favourite camera for this sort of job (Sony’s FS100) was out. Secondly, we needed a flexible lighting kit, but all kit needed to be portable, flexible and light.

Even in these days of extremely sensitive cameras, lighting is still an essential part of video work. Even if it’s a bit of addition with a reflector or subtraction with a black drape, you’re adapting the light to reveal shape and form and directing the viewer’s eye to what’s important to your story.

Of course, we can’t all travel with a couple of 7-Tonne lighting trucks full of HMI Brutes and Generators, or even a boxful of Blondes and Redheads. I’ve had a little interview kit of Dedos, Totas and a softbox with an egg-crate, but then these create a separate box of cables, dimmers, plugs, RCDs and stands, and whilst easy to throw in the boot of the car, it’s not exactly travel friendly.

I recently invested in a couple of 1×1 style LED panels, run off V-Lock batteries. These have been a revelation – the freedom to light ‘wirelessly’, and with enough brightness to do a dual-key two-up interview with three cameras has been great. I’ve got the entire kit into a Pelicase with stands, reflector, batteries and charger – but at a gnat’s under 30 Kg, it attracts ‘heavy’ surcharges when flown (and eye-rolls from check-in staff). Then add a tripod bag, then spare a thought for the sartorial and grooming needs of Yours Truly, and the prices go up, as do the chances of something going missing. Also, a stack of pelicases and flight cases lets everyone know that the Media Circus is in town. Such attention isn’t always welcome – especially from those in uniform.

So I’ve been shopping.

I’ve found some little LED lamps on eBay that clip together and run off the same batteries as my FS100. Add a couple of lightweight stands, and the Safari tripod, add a few yards of bubblewrap and a ‘Bag For Life’ full of clothing, all thrown into an Argos cheapie lightweight suitcase. I reckon the case is probably good for three, maybe four trips when reinforced with luggage straps, but getting three bags into one, and doing so under 20 Kg, is a very neat trick. No excess baggage charges, no additional overweight baggage charges, no trips to oversize baggage handling, no solo struggling with four bags…

Entire shoot kit including tripod and 3-head lighting.

The six LED lamps and three stands allowed for basic 3 point lighting, and their native daylight balance meant that, for the best part, we were augmenting the available light in our locations. Even outdoors, 3 LED lamps bolted together, about 1.5 meters from the subject (and a foot or so above his eyeline), produced a beautiful result. Without the lamp we’d have ‘just another voxpop’; with it – with the ability to bring his face up one f-stop from the background – we had a very slick shot. And because it’s all battery driven, we could do this outdoors, we could run around to different locations, and never have to worry about bashing cables – or even finding a power point that worked.

Now, there’s LED, and there’s LED. These were not Litepanels lamps, and there is a little bit of the ‘lime’ about the light. CRI was below 90, which isn’t very good. However, this was easy to cheer up using FCP-X’s colour board, and quite frankly most humans would not see the green tinge unless I carefully pointed it out and did a ‘before/after’ – and even then, my clients weren’t in the slightest bit bothered; they just thought I was being a bit of an ‘Artiste’.

We shot on my Canon 550D using the Canon 17-55 f2.8 IS zoom and a Sigma 50mm 1.4 in some of the smaller locations (to really throw the background out of focus). For GVs and B-Roll, the Image Stabilisation was essential for getting shots where we couldn’t take a tripod, or for working so fast a tripod would have been a liability. You’ll have to imagine standing at the edge of Cairo traffic, or wandering through back street markets – or filming buildings next to razor wire blockades guarded by soldiers…

So, the camera could be thrown in a backpack with three lenses, a Zoom recorder, a couple of mics, batteries, charger, a little LitePanels Micro ‘eye-light’ and of course the Zacuto Z-Finder. Everything else, including tripod, stands, lamps and chargers, plus clothing, go in the suitcase.

I really prefer the Pelicase, I love my 1x1s, I’m so glad to be back on the Sachtler head and using an FS100, but I’ve got my ‘low profile’ kit together now. And with the little panels using NP-F batteries (or 5x AAs), clipping together to make a key, or staying separate for background lighting, it’s a very flexible kit.

Two little quotes come to mind. At a MacVideo event a while back, Dedo Weigert (the DoP of Dedo lamp fame) asserted that lighting is not about quantity, but about quality. On a recent podcast, DoP Shane Hurlbut stated, in reaction to the idea that sensitive cameras ‘don’t need extra lighting’, that it is a DoP’s duty to control light rather than to accept what’s already there. I’ve taken both of these to heart with portable LED lamps, as there’s no longer an excuse to shoot without.

PS: I’ll be doing some further tests with the lamps, and intend to make a video from the results.

If Apple called it iMovie Pro…

I’m very impressed with iMovie Pro. It’s very quick to edit with, there are lots of powerful controls to do things that can be tiresome in Final Cut Pro, the interface is clean and uncluttered, and there are ways to bend the way the application works into a professional workflow – and by professional, I mean an environment where you’re earning money from editing footage according to the desires and ultimate direction of a client – specifically, where ‘I can’t do that’ doesn’t enter the equation unless budgets say otherwise.

The release of iMovie Pro has been somewhat mucked up by its publisher, Apple. They’ve decided to release it under the ‘Final Cut’ brand, and this has caused a backlash in their established user community. In doing so, they’ve elevated expectations: the FCP brand belongs to a ten-year-old product that, while creaking a bit in its old age, has a reliable and stable workflow with lots of workarounds to hide the issues of such an old product. To introduce this new package as its next generation is about as subtle and believable as a 1920s SFX shot of teleportation.

Let’s say I cut Apple some slack here: Final Cut Pro was born in the mid 1990s as a PC package, then ported over to Apple’s senescent OS9 and vintage QuickTime technologies that were approaching their own ‘End of Life’ or ‘Best Before’ dates. Nevertheless, Apple soldiered on and built a strong following in the Non Linear Editing market, excusing FCP’s little ‘ways’ like one ignores the excessive, erm, ‘venting of gas’ from a beloved Labrador.

As time goes on, Apple has to look at the painful truth that FCP is getting old. It’s just not able to easily evolve into 64 bit and new video technologies, and rewriting it from the ground up could be a long, frustrating process of ‘recreating’ things that shouldn’t be done in ‘modern’ software. After a few big efforts, it becomes painfully obvious that we can’t make a bionic Labrador.

So Apple were faced with a difficult choice: rebuild their dog, their faithful friend, warts and all, from the ground up, which will please a few but will never help the greater audience, or… and this is hard to emote: shoot it in the head, kill it quickly, and do a switcharoo with their young pup iMovie, fresh out of Space Cadet Camp, full of zeal and spunk for adventure but still a little green.

So here’s where the scriptwriter faces a dilemma. Do we do a Doctor Who regeneration sequence, or do we do a prequel reboot à la Abrams’ Star Trek? Or do we substitute an ageing star with a young turk with his own ideas on the role, and hope the audience buys it?

Exactly.

Imagine if Apple said this: ‘hey guys, FCP can go no further. Enjoy it as is. From now on, we’re investing in iMovie’s technologies and will make it the best editor ever – our first version is for ‘The Next Generation’, but it’s going to grow and develop fast, it is tomorrow’s editor, it does stuff you’ll need in the future – welcome to iMovie Pro’.

Okay, so you’d have to invest $400 in this new platform, but it’s got potential. Imagine letting producers do selects on an iPad, emailing you their collections ready for you to edit. Imagine identifying interviewees (not in this release) and linking them to lower third and consent metadata, or (as would have been really useful) ‘find this person (based on this photo) in my rushes’ (again, not in this version but the hooks are there). Imagine not having to do all the grunt work of filing twiddly bits, or identifying stuff shot in Slough. This is clever. This is exciting. And skimming? Actually yes – I like that.

But if Apple tries to sell us all this sizzle as Final Cut Pro, I want my controls and my media management clarity. I want to know why I am paying $400 for an upgrade that gives me fewer features.

The new FCP-X has iMovie icons (see those little ‘stars’ on projects?), offers iMovie import, looks like iMovie, works like iMovie, has iMovie features and then some. It IS iMovie Pro, and I am happy with that. All the crap Apple get for calling it Final Cut Pro – which it most certainly and definitely (nay, defiantly) is NOT – is fully deserved. May they be bruised and battered for their arrogance.

Apple: rename FCP-X to iMovie Pro. It’s the truth, and it’s good.

IP Videography

Sony SNC CH210

I’m shooting timelapse today – a build of an exhibition area. However, the brief posed some challenges that meant my usual kit would not just be inconvenient, but almost impossible to use.

The exhibition area needed to be filmed from high up, but there were no vantage points a person could film from. It meant fixing a camera to a bit of building, then running a cable. There were no convenient power outlets nearby, either. Once rigged, the cameras would be inaccessible until the show was over. The footage would be required BEFORE the cameras were taken down. There wasn’t a limitless budget, either.

So… we couldn’t strap a camcorder or DSLR up – how would you get the footage? How would you change battery? Webcams need USB or are of limited resolution. Finally, I settled on a pair of SNC-CH210 ‘IP’ Cameras from Sony (supplied by Charles Turner from Network Video Systems in Manchester). These are tiny, smaller than the ‘baby mixer’ tins of tonic or cola you’d find on a plane. They can be gaffer taped to things, slotted into little corners, flown overhead on lightweight stands or suspended on fishing line.

The idea is that these cameras are ‘Internet Protocol’ network devices. They have an IP address, they can see the internet, and if you have the right security credentials, you can see the cameras – control them – from anywhere else on the internet using a browser. The cameras drop their footage onto an FTP server (mine happens to be in the Docklands, but it could be anywhere). They have but one cable running to them – an Ethernet Cat5e cable – which also carries power from a box somewhere in between the router and the camera. Ideal for high end security applications, but pretty darn cool for timelapse too!

So I’m sitting here watching two JPEGs, one from each camera, land in my FTP folder every minute. I can pull them off, use the QuickTime Player Pro’s ‘Open Image Sequence’ function to then convert this list of JPEGs into a movie at 25fps to see how the timelapse is going. So far, so good.
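The arithmetic of the thing is pleasingly brutal – a quick sketch (the one-frame-per-minute interval is as configured on the cameras; the ten-hour build day is my illustrative assumption):

```python
# One JPEG per minute, played back at 25 fps: how much does real time
# compress? The shoot length here is illustrative.
capture_interval_s = 60    # the cameras drop one frame per minute
playback_fps = 25
shoot_hours = 10           # assumed length of a build day

frames = shoot_hours * 3600 // capture_interval_s
playback_seconds = frames / playback_fps
speedup = capture_interval_s * playback_fps

print(f"{frames} frames -> {playback_seconds:.0f} s of video ({speedup}x real time)")
```

So a whole day of rigging and forklifts collapses into about half a minute of footage.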

The most difficult thing, which I needed help with, was the ‘out of the box’ experience of assigning each camera an IP address. Being a Mac user with limited networking skills, the PC-only software with instructions written in Ancient Geek was of no help. A networking engineer soon had them pulling their identities off DHCP, and other than one mis-set DNS, it was a smooth process to show each camera where the FTP server was, and what to put there.

It was quite a surreal experience, sitting on the empty floor of the NEC with nothing but a wifi connection on my MacBook Pro, adjusting the cameras on a DIFFERENT network, and checking the results from my FTP server somewhere ‘in the cloud’.

The quality is okay, but not spectacular – I’d say it’s close to a cheap domestic HDV camcorder. But at a few hundred quid each, they’ll pay for themselves almost immediately, and they’ll get rolled out again and again. I doubt they would be of interest to the likes of Mr Philip Bloom et al. Notwithstanding that, I just need to sharpen my networking and cable-making skills!

Matt administering an IP camera from Wifi

Achieving ‘that video look’

Throughout the last 9 decades of cinema, Directors have been stuck with the same tired look forced upon them by the constraints of their technology. Cinematographers at the vanguard of their industry, disenchanted with the timelessness of film, are now looking to achieve that elusive ‘live’ look – video!

The world of moving pictures has gone by a number of pet names, one of which describes one of the pitfalls of having to pay for your recording medium by the half-cubit or ‘foot’ as some would say. ‘The Flicks’ were just that – flickering images in a dark room, destined to cause many a strained eye.

Whilst motion could be recorded at or above 20 frames per second, there was a problem in that the human eye’s persistence of vision (that eye-blink time where a ghost of a bright image dances upon your retina) means you can perceive flicker up to about 40 frames per second. So your movie had smooth movement at 24 or 25 frames per second, but it still flashed a bit.

Of course, clever engineers realised that if you showed every frame TWICE – the lamp illuminating each frame through a revolving bow-tie cunningly pressed into service as a shutter – you could then haul the loop of film down one frame (you work from a loop because of mass, inertia, etc – tug the whole reel and you’d snap it) and give that frame a double flash too. Rinse, repeat.

Every student of film will get taught the special panning speed to avoid juddery images – then forget it. Ditto the use of shutter speeds beyond 180 degrees. And so we’re stuck with motion blur and the last vestiges of flicker in the eyes of an audience reared on a visual diet of 75fps video games.

A collection of film makers, some with their roots in the DV revolution of the 1990s, are looking to their true source of inspiration, trying to mimic the hallowed ‘television look’ by the simple expedient of shooting at a higher frame rate. This gives their work a sense of ‘nowness’, an eerie ‘look into the magical mirror’ feel.

As post-production 3D gains traction, Directors are taking a further leaf out of the Great Book Of Video by using a technique known as ‘deep depth of field’ – where the lens sharply records everything from the near to the far. An effect very reminiscent of the 1/3” class of DV camcorders. This will, of course, take huge amounts of lighting to achieve pinhole-like apertures in their ‘medium format’ cameras such as Epic, Alexa and F65, but as leading lights such as James Cameron and Peter Jackson jump on the bandwagon, the whole industry can now concentrate on achieving ‘That Video Look’.

TV Soup – or how video compression really works

A little while ago, I got embroiled in a discussion about editing footage from DSLRs and why it wasn’t always a good idea to edit the original camera files. I repeat a condensed version of the rant here for some light relief – but please imagine it delivered by the inimitable Samuel L. Jackson…

When your DSLR camera records video, it needs to be space efficient as it has to deal with a lot of frames every second. Merely recording every frame does not leave enough time to actually capture subsequent frames and compress them nicely. It needs to do some Ninja Chops to do video.

Firstly, it does not record each frame as an image. It records a frame, and for every subsequent frame it only records the changes from the first frame. This may go on for, oooh, 15 frames or so. Then it takes a breath and records a full frame, then does the differences from THAT frame onwards.
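That keyframe-plus-differences scheme can be sketched in a few lines of toy Python. This is a deliberately simplified model – real codecs predict from the previous *decoded* frame with motion compensation and much else – but the shape of the idea is the same. Frames here are just lists of numbers standing in for pixel values:

```python
# Toy GOP-style encoder: a full "I" (keyframe) every 15 frames, and only
# per-pixel differences ("P" frames) for the frames in between.

GOP_SIZE = 15

def encode(frames):
    stream = []
    for i, frame in enumerate(frames):
        if i % GOP_SIZE == 0:
            stream.append(("I", list(frame)))            # full keyframe
        else:
            delta = [a - b for a, b in zip(frame, frames[i - 1])]
            stream.append(("P", delta))                  # differences only
    return stream

def decode(stream):
    frames, current = [], None
    for kind, data in stream:
        if kind == "I":
            current = list(data)                         # start fresh
        else:
            current = [a + b for a, b in zip(current, data)]
        frames.append(list(current))
    return frames

frames = [[i, i * 2] for i in range(30)]
assert decode(encode(frames)) == frames   # round-trips losslessly
```

Notice that the "P" entries make no sense on their own – each one only means something relative to the frame before it, which is exactly why this scheme saves space and exactly why it makes editors sweat.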

Now imagine you are an editing application. Scooting around in that framework of real and imaginary frames means you’re spending most of your time adding up on your fingers and toes just to work out which frame you’re supposed to be displaying, let alone uncompressing that frame to display it.
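The finger-and-toe counting the editor does can itself be written down. To display an arbitrary frame, you must walk back to the nearest keyframe and decode forward from there – a small hypothetical helper shows how the work grows with distance from the keyframe:

```python
# Sketch of the cost of random access in a GOP-compressed stream:
# to show frame n you must decode the preceding keyframe plus every
# difference frame between it and n.

GOP_SIZE = 15

def decode_work_for_frame(n, gop_size=GOP_SIZE):
    """Number of frames that must be decoded to display frame n."""
    keyframe = (n // gop_size) * gop_size
    return n - keyframe + 1   # the keyframe itself, plus each delta up to n

print(decode_work_for_frame(0))    # 1  (it IS a keyframe)
print(decode_work_for_frame(37))   # 8  (keyframe at 30, then 7 deltas)
print(decode_work_for_frame(14))   # 15 (worst case: the whole GOP)
```

Scrub backwards one frame at a time and you hit the worst case over and over – which is why long-GOP footage feels so treacly in an editor.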

Oh yes. In order to edit, you have to DECOMPRESS frames to show them, and that takes time. It’s like making ‘packet soup’.

Your editing software is trying to snort up packet soup – dried bits of vegetable and stock – it has to add a specific amount of water to that mix, allow the dried bits of aforementioned stuff to absorb the water, then compartmentalise the soup into spoonfuls.

Lesser compressed soup (not H.264 freeze dried but ProRes/DNxHD ‘just add hot water’ concentrate) can do this quicker and better – and some say it tastes better too. If only these newfangled cameras stopped freeze-drying their soup and just stuck to boiling off the excess water like MPEG2 does, dang, that would be nicer.

So, when you take your camera originals in H.264, you have to carefully re-hydrate your freeze-dried movies, and allow them to slowly absorb their moisture in a long process called transcoding. Then gently simmer them to a stock soup concentrate, so your edit system can easily serve them up in 1-frame, 1-spoon servings, and you can edit them between the many hundreds of thousands of bowls that maketh the feast of your film.

You can have QuickTime soup. You can have Cineform soup. You can have DNxHD soup. H.264 soup is freeze dried and acquired through a straw. But H.264 soup is the size of a stock cube, and (for want of a better example) R3D is like canned soup – just requires a little reheating and a cup of cream.

Whichever way you capture and store it, we all watch soup.

Take your T2i footage, rehydrate it into the editing format you choose (can be ProRes, DNxHD, Cineform, hell, even XDCAM-EX) and then dish it up by editing and add your secret sauce to make it look/taste even finer. When you try to edit raw footage on most edit systems, you’re making soup into a condiment.

Thank you Mr Jackson.

Okay already, enough of the metaphor (and you’re spared the spatial compression stuff for now). CS5 does the ‘edit native H.264’ trick very well, as other systems no doubt will in the future. But there is most definitely a time and a place for transcoding before editing. And I don’t think it’s going away.

Sweating the Petty Stuff

I’m putting the finishing touches on a simple set of ‘talking head’ videos destined for a corporate intranet to introduce a new section of content. Nothing particularly earth shaking or ground breaking. It certainly won’t win any awards, but it’s the kind of bread and butter work that pays bills.

However, there is a wrinkle. The client’s intranet is actually hosted and run by a separate company – a service provider. This service provider has set various limits to prevent silly things from happening, and these limits are hard-wired. If you have a special requirement, ‘the computer says no’.

One particular limit, which I will rant and rave about being particularly idiotic, pathetic and narrow minded, is that all video clips that users can upload to the system are limited to (get this) 12 Megabytes. That’s it. Any video, regardless of duration, cannot be any larger than 12 Megabytes. Period.

Another mark of bad programming in this system is that videos must be one fixed dimension – no bigger, no smaller. That might be fair if correctly implemented, but no. The fixed size is a stupid, hobbled size and, worse still, is not exactly 4:3, not exactly 16:9, and not exactly anything really. So everything looks crap, though some things look crapper than others.

Finally, the real evidence that the developers don’t actually understand video, and don’t care about it either: the dimensions are not divisible by 8, chucking the whole macroblock thing in the khazi. Digital video compression tends to divide the image up into blocks of 8 pixels and work out, within those blocks, how to divide things up further. If your video dimensions are not divisible by 8, you get extra problems with quality, performance and the like. It’s like designing car parks around the width of an Austin-Healey Sprite, not caring that the people who park can’t actually open their doors without bumping into other cars.
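If you're handed one of these daft fixed sizes, it's worth checking block alignment before you encode. A small sketch (the helper names are mine, not from any encoder) that tests divisibility and nudges a dimension to the nearest block-friendly multiple:

```python
# Sketch: check whether video dimensions are macroblock-friendly, and
# round a dimension to the nearest multiple of the block size if not.

BLOCK = 8  # many codecs carve the image into 8- or 16-pixel blocks

def is_block_aligned(width, height, block=BLOCK):
    """True when the codec won't be left with partial blocks at the edges."""
    return width % block == 0 and height % block == 0

def round_to_block(value, block=BLOCK):
    """Nearest multiple of the block size (rounding half up)."""
    return ((value + block // 2) // block) * block

print(is_block_aligned(640, 360))   # True  - a sane 16:9 size
print(is_block_aligned(637, 355))   # False - partial blocks at the edges
print(round_to_block(637), round_to_block(355))   # 640 352
```

Of course, when the intranet insists on its own un-divisible size, no amount of rounding on your end will save you – which is rather the point of the rant.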

But the nurse says I must rest now. Rant over.

So, I’ve got to make all my talking head videos 12 Megabytes or less. How do you ensure this?

Well, method 1 is to monkey around with various settings in your compression software until you find something that sort of works.

Method 2 requires a pocket calculator, but saves a lot of time. You need to work out the ‘bitrate’ of your final video, how many bits are going to be used per second of video – if 500k bits per second are used, and the video is 10 seconds long, then 500k times 10 seconds is 5,000k or 5 Mbits.

Aha! But these are bits, the units of the internot. Not BYTES, and there are 8 bits in a Byte – believe me, I’ve counted them. We’ll leave aside the other nerdy thing, that there are actually 1024 bits in a Kilobit, not 1000 (ditto KiloBytes to MegaBytes) – enough already.

So basically, 5 Megabits are divided by 8 to get the actual MegaBytes that the file will occupy on the hard disk: 0.625 in this case, or 625 Kilobytes.

So let’s say I have a 6 minute video, which has to be shoehorned into 12 MBytes. What bitrate do I need to set in Compressor/Episode/MPEGstreamclip/whatever?

6 minutes = 360 seconds. Our answer, in the language of spreadsheets, is

((Target_size_in_MegaBytes x 8) x 1024) divided by duration of video in seconds

So

=((12*8)*1024)/360

which equals roughly 273 kilobits per second, which is not a lot, because that has to be divvied up between the video AND the audio, so give the audio at least 32 kilobits of that, and you’re down to about 240 for the video.

But if you have a 60 second commercial,

=((12*8)*1024)/60

which is about 1.6 Megabits per second – far studlier: 640×360, a 128k soundtrack, and room to spare!

So the 12 Megabyte limit is fine for commercials – but for nothing of substance. The quality drops off a cliff once the final duration passes two minutes.

But at least we have an equation which means you can measure twice and compress once, and not face another grinding of pips for 3 hours trying to get your magnum opus below 12.78 MBytes.
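For anyone who would rather not reach for the pocket calculator each time, the spreadsheet formula above wraps neatly into a couple of Python helpers (using the same 1024 convention as the formula):

```python
# The "measure twice, compress once" formula as a helper: given a hard
# file-size cap and a duration, what total bitrate can the encode use?

def max_bitrate_kbps(size_mbytes, duration_seconds):
    """Total (video + audio) bitrate budget in kilobits per second."""
    return (size_mbytes * 8 * 1024) / duration_seconds

def video_bitrate_kbps(size_mbytes, duration_seconds, audio_kbps=32):
    """What's left for the video after reserving an audio track."""
    return max_bitrate_kbps(size_mbytes, duration_seconds) - audio_kbps

# The two worked examples from the text:
print(round(max_bitrate_kbps(12, 360)))   # 273 kbit/s for the 6-minute video
print(round(max_bitrate_kbps(12, 60)))    # 1638 kbit/s for the 60s commercial
print(round(video_bitrate_kbps(12, 360))) # 241 kbit/s left for the pictures
```

Feed it your duration before you press Encode, and you'll know immediately whether you're making a commercial or a slideshow.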