If Apple called it iMovie Pro…

I’m very impressed with iMovie Pro. It’s very quick to edit with, there are lots of powerful controls for things that can be tiresome in Final Cut Pro, the interface is clean and uncluttered, and there are ways to bend the way the application works into a professional workflow – and by professional, I mean an environment where you’re earning money from editing footage according to the desires and ultimate direction of a client, specifically one where ‘I can’t do that’ doesn’t enter the equation unless budgets say otherwise.

The release of iMovie Pro has been somewhat mucked up by its publisher, Apple. They’ve decided to release it under the ‘Final Cut’ brand, and this has caused a backlash in their established user community. In doing so, they’ve elevated expectations: the FCP brand is a ten-year-old product that, while creaking a bit in its old age, has a reliable and stable workflow with lots of workarounds to hide the issues of such an old product. To introduce this new package as its next generation is about as subtle and believable as a 1920s SFX shot of teleportation.

Let’s say I cut Apple some slack here: Final Cut Pro was born in the mid-1990s as a PC package, then ported over to Apple’s senescent OS9 and vintage QuickTime technologies that were approaching their own ‘End of Life’ or ‘Best Before’ dates. Nevertheless, Apple soldiered on and built a strong following in the non-linear editing market, excusing FCP’s little ‘ways’ like one ignores the excessive, erm, ‘venting of gas’ from a beloved Labrador.

As time goes on, Apple has to look at the painful truth that FCP is getting old. It’s just not able to easily evolve into 64 bit and new video technologies, and rewriting it from the ground up could be a long, frustrating process of ‘recreating’ things that shouldn’t be done in ‘modern’ software. After a few big efforts, it becomes painfully obvious that we can’t make a bionic Labrador.

So Apple were faced with a difficult choice: rebuild their dog, their faithful friend, warts and all, from the ground up, which will please a few but will never help the greater audience, or… and this is hard to emote: shoot it in the head, kill it quickly, and do a switcheroo with their young pup iMovie, fresh out of Space Cadet Camp, full of zeal and spunk for adventure but still a little green.

So here’s where the scriptwriter faces a dilemma. Do we do a Doctor Who regeneration sequence, or do we do a prequel reboot à la Abrams’ Star Trek? Or do we substitute an ageing star with a young turk with his own ideas on the role and hope the audience buys it?


Imagine if Apple said this: ‘hey guys, FCP can go no further. Enjoy it as is. From now on, we’re investing in iMovie’s technologies and will make it the best editor ever – our first version is for ‘The Next Generation’, but it’s going to grow and develop fast, it is tomorrow’s editor, it does stuff you’ll need in the future – welcome to iMovie Pro’.

Okay, so you’d have to invest $400 in this new platform, but it’s got potential. Imagine letting producers do selects on an iPad, emailing you their collections ready for you to edit. Imagine identifying interviewees (not in this release) and linking them to lower third and consent metadata, or (as would have been really useful) ‘find this person (based on this photo) in my rushes’ (again, not in this version but the hooks are there). Imagine not having to do all the grunt work of filing twiddly bits, or identifying stuff shot in Slough. This is clever. This is exciting. And skimming? Actually yes – I like that.

But if Apple tries to sell us all this sizzle as Final Cut Pro, I want my controls and my media management clarity. I want to know why I am paying $400 for an upgrade that gives me fewer features.

The new FCP-X has iMovie icons (see those little ‘stars’ on projects?), offers iMovie import, looks like iMovie, works like iMovie, has iMovie features and then some. It IS iMovie Pro, and I am happy with that. All the crap Apple gets for calling it Final Cut Pro – which it most certainly and definitely (nay, defiantly) is NOT – is fully deserved. May they be bruised and battered for their arrogance.

Apple: rename FCP-X to iMovie Pro. It’s the truth, and it’s good.

IP Videography

Sony SNC CH210

I’m shooting timelapse today – a build of an exhibition area. However, the brief posed some challenges that meant my usual kit would not just be inconvenient, but almost impossible to use.

The exhibition area needed to be filmed from high up, but there were no vantage points a person could film from. It meant fixing a camera to a bit of building, then running a cable. There were no convenient power outlets nearby, either. Once rigged, the cameras would be inaccessible until the show was over. The footage would be required BEFORE the cameras were taken down. There wasn’t a limitless budget either.

So… we couldn’t strap a camcorder or DSLR up – how would you get the footage? How would you change battery? Webcams need USB or are of limited resolution. Finally, I settled on a pair of SNC-CH210 ‘IP’ Cameras from Sony (supplied by Charles Turner from Network Video Systems in Manchester). These are tiny, smaller than the ‘baby mixer’ tins of tonic or cola you’d find on a plane. They can be gaffer taped to things, slotted into little corners, flown overhead on lightweight stands or suspended on fishing line.

The idea is that these cameras are ‘Internet Protocol’ network devices. They have an IP address, they can see the internet, and if you have the right security credentials, you can see the cameras – control them – from anywhere else on the internet using a browser. The cameras drop their footage onto an FTP server (mine happens to be in the Docklands, but it could be anywhere). They have but one cable running to them – an Ethernet Cat5e cable – which also carries power from a box somewhere in between the router and the camera. Ideal for high end security applications, but pretty darn cool for timelapse too!

So I’m sitting here watching two JPEGs, one from each camera, land in my FTP folder every minute. I can pull them off, use the QuickTime Player Pro’s ‘Open Image Sequence’ function to then convert this list of JPEGs into a movie at 25fps to see how the timelapse is going. So far, so good.
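For the curious, the sum behind that sanity check is easy to sketch in Python (the filenames below are made-up examples, not what the cameras actually produce):

```python
# A rough sketch: given the minute-interval JPEGs pulled from the FTP
# server, sort them into sequence order and work out how long the
# resulting 25fps timelapse will run.

def timelapse_runtime(jpeg_names, fps=25):
    """Return (frame_count, runtime_seconds) for a sorted image sequence."""
    frames = sorted(n for n in jpeg_names if n.lower().endswith(".jpg"))
    return len(frames), len(frames) / fps

# One frame per minute over a 10-hour rig day = 600 frames,
# which plays back as 24 seconds of timelapse at 25fps.
names = ["cam1_%04d.jpg" % i for i in range(600)]
count, seconds = timelapse_runtime(names)
```

Handy for working out how long to leave the rig up for a given final running time.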

The most difficult thing, which I had to turn to help for, was the ‘out of the box’ experience of assigning each camera an IP address. Being a Mac user with limited networking skills, the PC-only software with instructions written in Ancient Geek was of no help. A networking engineer soon had them pulling their identities off DHCP, and other than one mis-set DNS, it was a smooth process to show each camera where the FTP server was, and what to put there.

It was quite a surreal experience, sitting on the empty floor of the NEC with nothing but a wifi connection on my MacBook Pro, adjusting the cameras on a DIFFERENT network, and checking the results from my FTP server somewhere ‘in the cloud’.

The quality is okay, but not spectacular – I’d say it’s close to a cheap domestic HDV camcorder. But at a few hundred quid each, they’ll pay for themselves almost immediately, and they’ll get rolled out again and again. I doubt they would be of interest to the likes of Mr Philip Bloom et al. Notwithstanding that, I just need to sharpen my networking and cable-making skills!

Matt administering an IP camera from Wifi

Achieving ‘that video look’

Throughout the last 9 decades of cinema, Directors have been stuck with the same tired look forced upon them by the constraints of their technology. Cinematographers at the vanguard of their industry, disenchanted with the timelessness of film, are now looking to achieve that elusive ‘live’ look – video!

The world of moving pictures has gone by a number of pet names, one of which describes one of the pitfalls of having to pay for your recording medium by the half-cubit or ‘foot’ as some would say. ‘The Flicks’ were just that – flickering images in a dark room, destined to cause many a strained eye.

Whilst motion could be recorded at or above 20 frames per second, there was a problem in that the human eye’s persistence of vision (that eye-blink time where a ghost of a bright image dances upon your retina) means you can perceive flicker up to about 40 frames per second. So your movie had smooth movement at 24 or 25 frames per second, but it still flashed a bit.

Of course, clever engineers realised that you could show every frame TWICE: the lamp illuminated each frame through a revolving bow-tie cunningly pressed into service as a shutter, then the loop of film (due to mass, inertia, etc – tug the whole reel and you’d snap it) was hauled down one frame and that got a double flash too. Rinse, repeat.
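The arithmetic behind the double-flash trick is trivial enough to sketch in a few lines of Python (the 40-flashes-per-second threshold is the rough perceptual figure from above, not a precise constant):

```python
# Back-of-envelope sketch: the eye perceives flicker up to roughly
# 40 flashes per second, so a two-bladed shutter showing each frame
# twice lifts 24fps film above that threshold.
FLICKER_FUSION = 40  # approximate perceptual threshold, flashes/second

def flash_rate(fps, flashes_per_frame=2):
    """Flashes per second seen by the audience."""
    return fps * flashes_per_frame

assert flash_rate(24) == 48                 # double-flashed 24fps film
assert flash_rate(24) > FLICKER_FUSION      # smooth, flicker-free
assert flash_rate(24, 1) < FLICKER_FUSION   # single flash: visible flicker
```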

Every student of film will get taught the special panning speed to avoid juddery images – then forget it. Ditto the use of shutter speeds beyond 180 degrees. And so we’re stuck with motion blur and the last vestiges of flicker in the eyes of an audience reared on a visual diet of 75fps video games.

A collection of film makers, some with their roots in the DV revolution of the 1990s, are looking to their true source of inspiration, trying to mimic the hallowed ‘television look’ by the simple expedient of shooting a higher frame rate. This gives their work a sense of ‘nowness’, an eerie ‘look into the magical mirror’ feel.

As post-production 3D gains traction, Directors are taking a further leaf out of the Great Book Of Video by using a technique known as ‘deep depth of field’ – where the lens sharply records everything from the near to the far. An effect very reminiscent of the 1/3” class of DV camcorders. This will, of course, take huge amounts of lighting to achieve pinhole-like apertures in their ‘medium format’ cameras such as Epic, Alexa and F65, but as leading lights such as James Cameron and Peter Jackson jump on the bandwagon, the whole industry can now concentrate on achieving ‘That Video Look’.

TV Soup – or how video compression really works

A little while ago, I got embroiled in a discussion about editing footage from DSLRs and why it wasn’t always a good idea to desire editing the original camera files. I repeat a condensed version of the rant here for some light relief – but please can you imagine it as delivered by the inimitable Samuel L. Jackson…

When your DSLR camera records video, it needs to be space efficient as it has to deal with a lot of frames every second. Merely recording every frame does not leave enough time to actually capture subsequent frames and compress them nicely. It needs to do some Ninja Chops to do video.

Firstly, it does not record each frame as an image. It records a full frame, and for every subsequent frame it only records the changes from the frame before. This may go on for, oooh, 15 frames or so. Then it takes a breath and records another full frame, then does the differences from THAT frame onwards.

Now imagine you are an editing application. Scooting around in that framework of real and imaginary frames means you’re spending most of your time adding up on your fingers and toes just to work out which frame you’re supposed to be displaying, let alone uncompressing that frame to display it.
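If you fancy seeing the fingers-and-toes arithmetic for yourself, here’s a toy sketch in Python of the keyframe-plus-differences idea (real codecs are vastly more sophisticated – motion vectors, B-frames and so on – so treat this purely as illustration):

```python
# Toy GOP encoder: a full 'key' frame every 15 frames, and only the
# changed pixels in between. Frames are just lists of pixel values.
GOP = 15

def encode(frames):
    encoded = []
    for i, frame in enumerate(frames):
        if i % GOP == 0:
            encoded.append(("key", frame))          # full frame
        else:
            prev = frames[i - 1]
            diff = [(j, v) for j, (u, v) in enumerate(zip(prev, frame)) if u != v]
            encoded.append(("diff", diff))          # changes only
    return encoded

def decode_frame(encoded, n):
    """To show frame n, walk forward from the last keyframe - this walk
    is the 'adding up on fingers and toes' the editor has to do."""
    start = (n // GOP) * GOP
    frame = list(encoded[start][1])
    for kind, diff in encoded[start + 1 : n + 1]:
        for j, v in diff:
            frame[j] = v
    return frame

# Build 20 test frames, each changing one pixel from the last
frames = [[0, 0, 0, 0]]
for i in range(1, 20):
    f = list(frames[-1])
    f[i % 4] = i
    frames.append(f)

enc = encode(frames)
assert decode_frame(enc, 17) == frames[17]  # rebuilt from keyframe 15 + 2 diffs
```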

Oh yes. In order to edit, you have to DECOMPRESS frames to show them, and that takes time. It’s like making ‘packet soup’.

Your editing software is trying to snort up packet soup – dried bits of vegetable and stock – it has to add a specific amount of water to that mix, allow the dried bits of aforementioned stuff to absorb the water, then compartmentalise the soup into spoonfuls.

Lesser compressed soup (not H.264 freeze dried but ProRes/DNxHD ‘just add hot water’ concentrate) can do this quicker and better – and some say it tastes better too. If only these newfangled cameras stopped freeze-drying their soup and just stuck to boiling off the excess water like MPEG2 does, dang, that would be nicer.

So, when you take your camera originals in H.264, you have to carefully re-hydrate your freeze-dried movies, and allow them to slowly absorb their moisture in a long process called transcoding. Then gently simmer them to a stock soup concentrate, so your edit system can easily serve them up in 1-frame, 1-spoon servings so you can edit them between the many hundreds of thousands of bowls that maketh the feast of your film.

You can have QuickTime soup. You can have Cineform soup. You can have DNxHD soup. H.264 soup is freeze dried and acquired through a straw. But H.264 soup is the size of a stock cube, and (for want of a better example) R3D is like canned soup – just requires a little reheating and a cup of cream.

Whichever way you capture and store it, we all watch soup.

Take your T2i footage, rehydrate it into the editing format you choose (can be ProRes, DNxHD, Cineform, hell, even XDCAM-EX) and then dish it up by editing and add your secret sauce to make it look/taste even finer. When you try to edit raw footage on most edit systems, you’re making soup into a condiment.

Thank you Mr Jackson.

Okay already, enough of the metaphor (and you’re spared the spatial compression stuff for now). CS5 does the ‘edit native H.264’ trick very well, so can other systems in the future, no doubt. But there is most definitely a time and a place for transcoding before editing. And I don’t think it’s going away.

Sweating the Petty Stuff

I’m putting the finishing touches on a simple set of ‘talking head’ videos destined for a corporate intranet to introduce a new section of content. Nothing particularly earth shaking or ground breaking. It certainly won’t win any awards, but it’s the kind of bread and butter work that pays bills.

However, there is a wrinkle. The client’s intranet is actually hosted and run by a separate company – a service provider. This service provider has set various limits to prevent silly things from happening, and these limits are hard-wired. If you have a special requirement, ‘the computer says no’.

One particular limit, which I will rant and rave about being particularly idiotic, pathetic and narrow minded, is that all video clips that users can upload to the system are limited to (get this) 12 Megabytes. That’s it. Any video, regardless of duration, cannot be any larger than 12 Megabytes. Period.

Another mark of bad programming in this system is that videos should measure a certain dimension, no bigger, no smaller. That may be fair if correctly implemented, but no. The fixed size is a stupid hobbled size and worse still, is not exactly 4:3 and not exactly 16:9, and not exactly anything really. So everything looks crap, though some look crapper than others.

Finally, the real evidence that the developers don’t actually understand video and don’t care about it either: the dimensions are not divisible by 8, therefore chucking the whole Macroblock thing in the khazi – digital video compression tends to divide things up into blocks of 8 pixels and work out within those blocks what to do about dividing it up further. If your video dimensions are not divisible by 8, you get more issues with quality, performance and the like. It’s like designing car parks using the width of an Austin-Healey Sprite, not caring about the fact that people who park can’t actually open their doors without bumping into other cars.
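To make the gripe concrete, here’s a little Python sketch (the helper and the example dimensions are mine, not the intranet system’s) showing how you’d check a frame size against the 8-pixel grid and find the nearest legal size:

```python
# Check a frame size against the 8-pixel macroblock grid, and snap
# awkward dimensions down to the nearest multiple of 8.

def is_macroblock_friendly(width, height, block=8):
    """True if both dimensions sit on the block grid."""
    return width % block == 0 and height % block == 0

def snap_to_macroblock(width, height, block=8):
    """Round dimensions down to the nearest multiple of the block size."""
    return (width // block) * block, (height // block) * block

assert is_macroblock_friendly(640, 360)       # proper 16:9, divisible by 8
assert not is_macroblock_friendly(427, 319)   # a hypothetical 'hobbled' size
assert snap_to_macroblock(427, 319) == (424, 312)
```

A one-line check like this in the upload validator would have saved everyone a world of macroblock misery.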

But the nurse says I must rest now. Rant over.

So, I’ve got to make all my talking head videos 12 Megabytes or less. How do you ensure this?

Well, method 1 is to monkey around with various settings in your compression software until you find something that sort of works.

Method 2 requires a pocket calculator, but saves a lot of time. You need to work out the ‘bitrate’ of your final video, how many bits are going to be used per second of video – if 500k bits per second are used, and the video is 10 seconds long, then 500k times 10 seconds is 5,000k or 5 Mbits.

Aha! But these are bits, the units of the internot. Not BYTES, and there are 8 bits in a Byte – believe me, I’ve counted them. We’ll leave aside another nerdy thing that there’s actually 1024 bits in a Kilobit, not 1000 (ditto KiloBytes to MegaBytes) – enough already.

So basically, 5 Megabits are divided by 8 to get the actual MegaBytes that the file will occupy on the hard disk: 0.625 in this case, or 625 Kilobytes.

So let’s say I have a 6 minute video, which has to be shoehorned into 12 Mbytes. What bitrate do I need to set in Compressor/Episode/MPEGstreamclip/Whatever?

6 minutes = 360 seconds. Our answer, in the language of spreadsheets, is

(Target_size_in_MBytes x 8 x 1000) divided by Duration_of_video_in_seconds

which equals 266 kilobits per second, which is not a lot, because that has to be divvied up between the video AND the audio, so give the audio at least 32 kilobits of that, and you’re down to around 234 for the video.

But if you have a 60 second commercial,

(12 x 8 x 1000) / 60 = 1600 kilobits per second

which is 1.6 Megabits per second, which is far studlier – 640×360, 128k soundtrack, room to spare!

So the 12 Megabyte limit is fine for commercials – but nothing of substance. The quality drops off a cliff after 2 minutes final duration.

But at least we have an equation which means you can measure twice and compress once, and not face another grinding of pips for 3 hours trying to get your magnum opus below 12.78 MBytes.
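Here’s that equation wrapped up as a tiny Python calculator, using the same decimal convention (1 Megabit = 1000 kilobits) as the worked example:

```python
# Measure twice, compress once: kilobits per second that fit a target
# file size, minus an audio allowance, leaving the video bitrate.

def target_bitrate_kbps(size_mbytes, duration_secs, audio_kbps=32):
    """Video bitrate (kbps) for a given file size budget and duration."""
    total_kbps = size_mbytes * 8 * 1000 / duration_secs
    return total_kbps - audio_kbps

# The 6-minute talking head: ~266 kbps total, ~234 left for video
video_kbps = target_bitrate_kbps(12, 360)

# The 60-second commercial: 1600 kbps total, plenty of room
commercial_kbps = target_bitrate_kbps(12, 60, audio_kbps=128)
```

Punch the numbers into Compressor/Episode/MPEGstreamclip and the file lands under the limit first time.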

H.264 Marathon Man – ‘is it safe?’

Now that the MPEGLA has confirmed that H.264 is going to be safe from legal gotchas for the foreseeable future, I guess we’re all back to improving our H.264 game after kicking the tyres of Google’s VP8. Loads of choice to encode to H.264, but which one has the speed? Which one has the quality?

The Elgato Turbo.264 is a great little USB dongle that accelerates your compression so a 5 minute video takes about 5 minutes to encode. That’s right. Try it! And quality is pretty good. Just open up your movie, Export the movie using the Elgato hardware, and – as Mr Jobs would say – ‘boom’.

But it’s not the best H.264 in the world. It’s not that smooth-as-butter expensive movie trailer feel. At some point, you have to trade speed for quality. And without that magic little dongle (which, quite frankly could deliver all the quality you need), things get very slow indeed.

Compressor, still my Go-To choice for DVD, has been left out in the cold as it doesn’t do H.264 that well for all the time it takes.

So there’s Telestream’s Episode. This will do a pretty reasonable job, and it will encode pretty much anything into anything else with enough control to get good results. Then there’s the exotica – both hardware and software from the likes of Digital Rapids and Ateme, designed for major encoding jobs. They carry suitably major price tags too.

And then there’s the Open Source crowd. Sometimes the nicest encoders come with a very attractive price tag. X264 is one of them. Very much a pro tool, but tamed from its command line interface with apps such as FFmpegX. However, x264 can be driven from most QT apps (including Compressor – for batch work and scaling – and even direct from the FCP timeline if you’re not using the computer for a day or two).

MyComet is a conduit framework from the QT interface to the x264 engine. Although it’s a bit geeky and intense, this is not virgin territory. A lot of pro compressionists are using it thanks to its great results. So rather than wax lyrical here, I will simply point you at some resources for you to dive in and check it out.

The x264 compressor

MyComet framework for use in QT apps

A useful intro to its use in Compressor

But I’m NOT giving up my Elgato Turbo.264 yet!

Take me where, Boris?

Boris in a taxi

I hope you all enjoy the journey you can find here:


Yes, it’s what is known as a viral campaign, and I will stop right there because it’s got a job to do. Enjoy.

But, dear readers, I thought I’d pass on a few notes on the actual shooting and editing of it. It will spoil the whole experience as you learn how we gutted the chicken you found so tasty, but it’s all about the learning.

IT’S ALL ABOUT THE IDEA: First off… viral video – don’t get too ambitious in the cinematic camp. Keep it real. Don’t do full on production values, as people (your audience) get suspicious. Evan was pulling us back from the big picture and pushing us forward in keeping it real and light – and ‘happy’.

MAKING HAY WHILST THE SUN SHINES: We had booked an hour with Boris. He’s a busy chap, has a city to run and all that. We planned well beforehand, booked a spot near his office with lots of different vistas. A private road near some modern offices that overlook the Thames, with a little tree-lined avenue, a patch of grass and a whole vista of ‘Traditional London’, plus some utilitarian spots.

Well, didn’t quite go to plan. On the day, Boris had too many commitments, our filming spot was shifted 1 hour earlier (which turned into 20 minutes, of which 15 was spent reading the script). We ended up sharing our spot with 3 coachloads of children having a really great picnic lunch in the sunshine. We tried to ask the supervising staff for a little consideration, but they were children playing in the sunshine. Whilst the Producer in me wishes for a non-lethal Kid-EMP weapon, the dad in me says ‘mic Boris closer’ so we switched from Radio mic to a COS-11 as close as I dare to Boris’s mouth.

SOUND ISOLATION: if faced with loud surrounding noise, put the mic as close to the sound as possible. It’s just like lighting – the inverse square law is your friend. That’s why pop vocalists hold the mic to their mouth. There’s no way any other sound is going to get a look in. Their sound will be 100:1 louder than anything else on stage. But people don’t like looking at microphones – it breaks the fourth wall and feels a bit ‘keen’.
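The inverse square law sum, for the sceptical, sketched in Python (the 50cm-to-5cm distances are my illustrative numbers, not measurements from the shoot):

```python
import math

# Inverse square law: halving the mic-to-mouth distance quadruples the
# intensity. Moving a mic from 50cm to 5cm boosts the wanted voice
# 100x relative to fixed background noise.

def intensity_gain(far_cm, near_cm):
    """Relative intensity gain from moving a mic closer to the source."""
    return (far_cm / near_cm) ** 2

def gain_db(far_cm, near_cm):
    """Same gain expressed in decibels."""
    return 10 * math.log10(intensity_gain(far_cm, near_cm))

assert intensity_gain(50, 5) == 100.0   # the '100:1' ratio
assert round(gain_db(50, 5)) == 20      # ~20 dB advantage over the picnic
```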

DISTRACTING BACKGROUNDS: We only got one location, and it had a bright green privet hedge in it, which really didn’t sit well with shots from the many locations we needed to shoot in (no way was Boris going to be driven round London for this). So the simple solution was to pick ‘privet green’ in the FCP colour corrector as our ‘selected colour’ and desaturate it fully. Suddenly, Boris pops out of the background which we never even see. People wander past, but it’s just set dressing. It’s not Schindler’s List, merely a trompe l’oeil.

PLENTY OF LOCATION SOUND: Everything was shot with the engine off, and we carefully recorded plenty of engine-on – both interior and exterior – with ‘taxi pulls in’ and ‘taxi pulls out’, plus door opens and closes. Even things like brake squeals, car horns (it’s my car horn you hear, recorded days later). All this covers up a multitude of sins and makes the whole thing believable. Especially because we recorded interior taxi noises, but nobody believed the sound of ‘int taxi’, so we had to trickle ‘ext taxi’ over Boris’s lines as that’s what the audience wants to hear, not what they would have heard if they were there. RECORD ON-SITE ATMOS SFX!

MAKE A LOOK AND STICK TO IT: We shot the external taxi shots one day, and Boris the next. Of course, the weather decided to go from ‘inside a tupperware sandwich box’ grey sky to brilliant sunshine (with occasional storm cloud) the next. So we had to overdrive the dull outside shots to make them really colourful and almost like toytown, then scale back Boris (and that wretched hedge) so he’d fit ‘inside’ the external shots. Colorista helps even out the exposure range of the Canon 550D to the Sony EX1, and Magic Bullet does all the colour, grads and vignetting.

SHOOTING IRON: We shot the external GVs on a basic Canon 550D. We didn’t have time or paperwork in some locations, and so we had to be as non-threatening as possible. You can shoot from public areas, but this fact is sometimes lost on Community Support Officers working for the Police. So, no BFG Zacuto Rigs, matte boxes, tripods, monopods or follow focus systems, just a Tokina 11-16 on a Canon 550D with a Zacuto Z-Finder. I could have really done with a Zacuto Target Shooter support on this one, but fate was agin me and mine is now on order. ‘Just In Time’ ordering sucks rocks through straws.

Of course the main shooting was done on Sony PMW-EX1s – looks like a Z1, better picture than a 570. Everything shot 720p – the Canon stuff was downsampled to 720p too.

EDITING: All done in FCP, pretty quick thanks to Evan’s tight storyboard and requirements. Lots of little nit-picking changes over a few days, changes that you will not see or care about, but they had to be done. What made my life sane was the Elgato Turbo264HD USB dongle that accelerates H.264 encoding. Each iteration of the movie required a web and a YouTube version, and we might go through 3 iterations per day across 7 movies, so if left to Compressor, I’d be, well, dead. But because the Elgato unit does H.264 in pretty-much-real-time, I was able to turn around changes pretty quickly.

So… Keep it real, use extreme limitations to do your best, remember the basic rules, go for consistency, simple solutions work, and edit for your audience even if your client needs you to change things.

Photos by: Sean Barnes

Adventures in the land of Grass Valley

Just back from a conference job in Rome – usual brief: a big conference has keynote presentations filmed, these need to be captured and edited down to their bare essence for viewing on the web.

Conferences at this level go on for days. Picture the scenario: a conference may last four days, with 25% of the time in ‘keynote’ mode: four cameras record a presentation given to 2,500 people in an auditorium, whilst presenters do their stuff either solo or in groups. For the other 75% of the time, the 2,500 delegates split into, maybe, 25 groups. Every hour, there are 25 presentations happening, and this lasts for three days, eight hours a day.

So let us leave aside the ‘breakout’ presentations and concentrate on the eight hours of keynote presentations, each of which includes a presenter or two, quite a lot of powerpoint slides, and maybe a video or a software demonstration.

Ideally, as soon as a presentation finishes, the highlights are made available, but that can’t happen.

The impression I am trying to create is of a machine that generates huge amounts of content which need digesting and cutting down before sharing on the web. How can a month’s worth of presentations be broken down to provide an accurate summary of the benefit of attending such an event?

Welcome to the world of conference video.

So the bottom line is that, in the old days, we’d record a vision-mixed feed onto tape. Somebody who understood the content and its politics would sit with somebody who knew what timecode was and how important it is to note the in- and out-words of a good sound bite, and together they would furiously concentrate on creating a shot list to hand over to an editor, who would take the tape, shuttle through, pick the sound bites out and cut it all together.

But of course it takes time to shuttle through a 180 minute tape, and errors invariably crop up in timecode, or – worse still – you get a list of ‘he said something interesting after he talked about penguins’. That’s when you realise you’ll have to shuttle through a 60 minute presentation at double speed at least twice to find out what they’re talking about whilst you survive on a drip feed of espresso.

So wouldn’t it be nice if we ditched tape and went for a hard disk solution? Enter the Grass Valley Turbo – it records very high quality video to hard disk, and it can handle huge amounts. They have been cropping up on lots of the events I cover. You could run a television station using just two of these beasts. They can record and play back at the same time, you can shuffle playlists during playout, they are serious toys.

But for an editor, they are an expletive nightmare. Sure they record high quality, but it’s all 8 megabit MPEG2 in GXF format, which means your Mac won’t play it in anything other than VLC. You can only play video from certain points, which may be minutes away from each other. You can only note down the approximate timecodes of the bit you want. Then you need to open up the whole GXF movie in something like Episode Pro, and convert the bits you want using the TimeCode in/out settings. Get it wrong, and Episode will have a hissy fit.

So you get the Turbo files into a format you can edit, and you realise that you need more than the bit you want. So rinse and repeat. Or the client wants to see more of it, so you need to load up the GXF in VLC and tell them to go get a coffee whilst you do the unmentionable.

The Turbo will convert to DV, but the exchange format is usually a USB disk, so you can get an 8 gig file in GXF format, or a 24 gig file after a LENGTHY process in DV format off a Turbo. So we get the MPEG files.

Quite frankly, I think I prefer the bloody tape.

So this time, we had a new toy: the AJA KiPro.

The Ki Pro is basically a tape deck without tape, recording to hard disks using Apple ProRes. The disks are special, in that they push into the deck like a tape, and when you pull them out, you find a little FireWire 800 connector in the back, which means you can start editing straight away (it’s bus powered too). Or you can copy them off to your editing hard disk at FW800 speeds.

It was quick, direct, and easy – three words I DO NOT associate with Grass Valley Turbo.

It was even higher quality. The KiPro was set to up-rez the Standard Def it was fed via SDI to 720p, which it did marvellously.

The best bit is that a Ki Pro, even with lots of those special disks, costs a LOT less than an ultra-broadcast Grass Valley Turbo. It all happens at ProRes and FW800 rather than MPEG2 and USB.

I doubt Grass Valley see the Ki Pro as competition, but I’ll want one of these puppies for conference record in the future.

Gone in a flash

So Steve Jobs doesn’t like Flash.

Flash has always had a chorus of catcalls and boos from off-stage, way before Mr Jobs started his campaign. It dates back fifteen years, in fact: http://www.useit.com/alertbox/9512.html and http://www.useit.com/alertbox/20001029.html

Nevertheless, the reason why Flash became so popular in the Corporate video world was that Mac based video generators found WMV a hard format to publish in, and WMV wasn’t the nicest progressive download format around. QuickTime was a bit of a no-no at the time, with a 40 MB download and cumbersome install (from the viewpoint of conservative IT departments). Flash played nice on both Mac and PC, and was ‘as standard’ on corporate PCs.

Now… imagine a world where Microsoft adopted QuickTime (that’s never going to happen, but just imagine), would we be messing around with Flash? Sure, Flash works, but the playback is prone to stuttering and feels gritty in all but perfect playback environments. And even then, a dropped frame would never occur in the same place.

I used to use QuickTime for web based work. It was easy to integrate, provided smooth playback, looked great and worked well on the PC – so long as you installed QuickTime, which went from 7 MB to 42 MB (mandatory iTunes install) in the days before ubiquitous broadband. So QuickTime was out for client-facing stuff.

I adopted Flash, learned to like it and to use it, because the alternative was so unappealing (convincing Corporates, NGOs and the like to adopt QuickTime).

Well, hell’s closed for skiing and formation pig-flying:


“We think H.264 is an excellent format. In its HTML5 support, IE9 will support playback of H.264” — Microsoft.

Flash gave lots of us video guys a solid foundation on getting video on the web as a reliable, easy standard that any website could benefit from.

Then Mr Jobs comes along, starts a war, and it’s out with Flash, in with HTML5 if you want to play in his little iGarden.

Don’t get me wrong – Flash is going to be around for some time yet. Many corporates do not use HTML5 compatible browsers, but give it a couple of years and Flash for video publishers will fade to black.

So it’s time to get good at H.264. For those of us publishing corporate video, we’ve got to get to know new settings, new wrinkles, new ‘chops’ that get even better results. New gamma, new keyframes. Maybe new software, or new plug-ins. New workflows.
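To make those ‘new settings’ concrete, here’s a minimal sketch in Python of the kind of H.264 encode a corporate web publisher might run through ffmpeg. The specific values – CRF, preset, keyframe interval, audio bitrate – are my illustrative starting points, not settings taken from this article, so treat them as assumptions to tune for your own footage:

```python
# Sketch: build an ffmpeg command line for a web-friendly H.264 encode.
# The flag values below are illustrative assumptions, not recommendations
# from the article - tune CRF, preset and keyframe spacing to taste.

def h264_web_encode_cmd(src, dst, fps=25):
    """Return an ffmpeg argument list for a progressive-download H.264 file."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",          # encode video with x264
        "-preset", "slow",          # trade encode time for quality
        "-crf", "22",               # constant quality; lower = better/larger
        "-g", str(fps * 2),         # a keyframe every ~2 seconds for scrubbing
        "-pix_fmt", "yuv420p",      # broadest player compatibility
        "-c:a", "aac", "-b:a", "128k",
        "-movflags", "+faststart",  # moov atom up front, so playback can
                                    # begin before the download finishes
        dst,
    ]

cmd = h264_web_encode_cmd("master.mov", "web.mp4")
print(" ".join(cmd))
```

To actually run it you would hand `cmd` to `subprocess.run(cmd, check=True)` with ffmpeg on your PATH – the point here is just how many knobs a ‘simple’ H.264 web encode exposes.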

And, more importantly, new hardware. H.264 is not a quick codec to encode to. Whether it’s raw horsepower from an octo-core Mac or a mid-range solution like the Matrox MXO2 or an Elgato Turbo264 HD, we’ll need hardware help for a while yet. It’s not like encoding to the On2 codec!

And there’s a transition period. Remember, H.264 works in Flash now, and that’s pretty much the bleeding edge as corporate web video goes. The safe route has been On2’s Flash 8 codec, but I for one will be moving on to being H.264-based.

Until, of course, the next great codec comes along.

The Cambrian explosion of videography

A long time ago, nature fetched a really strong cup of morning coffee and decided things needed a little shaking up on the evolution front, and life on earth went through a period of experimentation. It left behind an interesting and sometimes confusing fossil record that had many biologists overwhelmed and sighing ‘Oh no, not another freakin’ Phylum’. Were these legs or internal organs? Which way up was this creature? How did that thing actually move?

As I build up my Canon 550D into a usable camcorder, I wonder if we will look back at this period of camcorder technology in much the same way. Every day, people publish photos of their rigs, some small and dainty, others fashioned out of scaffolding poles, bristling with brackets and cable ducts and cages. There are spiders and snipers and stereoscopic shooters…

The world of videography went mad, a little while ago, for clever little boxes that let you attach photographic lenses at one end and your video camera at the other, so you could enjoy the thin depth of field, wide choice of lens and general filmic look afforded to photographers. Sure, they were expensive, took quite a bit of setting up, lost quite a bit of light in the process and required some girders to lay it all out on. But at least you got a familiar way of recording what you shot, somewhere to plug your microphones in, a way of hearing and visually checking what you’ve got and a fighting chance of supplying it to the editor.

And now we have so many codecs, so many workflows, so many resolutions and so many frame rates to worry about. But that’s nothing compared to the sudden lack of things like holding a camera steady. Suddenly you need a follow focus, a matte box, a remote start-stop button (I’ve yet to see a motorised zoom controller), and if you want Image Stabilisation (which – at these focal lengths – is sine qua non for most), you’ll have to buy the expensive glass.

Ah yes, the glass. My EX1 has one lens. Oh, and a little flat wide adaptor for those special wide times. My DSLR has three: a W-I-D-E to wide, a wide to portrait, and a portrait to long lens – and even then it doesn’t quite reach as far as the EX1. And at f2.8, they are as slow as my EX1’s zoom at full reach. I could add some exotica – a 50 f1.4, an 80 f2 and so on. I could REALLY roll things out with a Noktor.

Now, that lot isn’t cheap, it isn’t compact, it’s not actually very easy to use, and we haven’t started accessorising it yet with LCD Viewfinders, extra power supplies, separate monitors (and their power requirements).

And because good glass is the crucial first point of the video process, people are starting to pull apart their Canon 7Ds, surgically removing the mirror box and retro-fitting PL mounts for Cine lenses.

This is a lovely little world for the Cine boys to play in, but for the rest of us? Time out, guys! Before you spend thousands, nay tens of thousands if you get really excited, just check out the CMOS jelly motion, the moiré on stripey things, the aliasing on diagonal things. I am not being curmudgeonly – I’ve been bitten by the DSLR bug too. But I am trying to keep my infatuation under control.

Because, while the DV revolution got so much started, and the tapeless revolution is pretty much done and dusted, the HD revolution isn’t quite finished yet, the RED revolution is still happening, the Video DSLR revolution is now fully under way, we’re waiting for the Scarlet Modular revolution to kick off, and stereoscopic video is beginning to wake up… and I’m thinking… oh no, not another video Phylum…