Preparing Setups with Shot Designer

Following on from their line of successful filmmaking tutorials for directors, Per Holmes and the Hollywood Camera Work team have launched their new app for iOS/Android and Mac/Windows – Shot Designer.

This is a ‘blocking’ tool – a visual way of mapping out ‘who or what goes where, does what and when’ in a scene, and where cameras should be to pick up the action. For a full intro to the craft of blocking scenes, from interviews to action scenes, check out the DVDs; but whilst blocking diagrams can be – and often are – scribbled out on scraps of paper, Shot Designer makes things neat, quick, sharable via Dropbox, and *animated*. A complex scene on paper can become a cryptic mashup of lines and circles, but Shot Designer shows character and camera moves in real time or in steps.

You can set up lighting diagrams too – using common fittings including KinoFlos, 1x1s, large and small fresnels, and populate scenes with scenery, props, cranes, dollies, mic booms and so on – all in a basic visual language familiar to the industry and just the sort of heart-warming brief that crews like to see before they arrive on set.

Matt's 2-up setup

My quick example (taking less time than it would to describe over the phone) is a simple 2-up talking head discussion. The locked-off wide is matched with two cameras which can either get a single closeup on each or, if shifted, a nice over-shoulder shot. A couple of 800W fresnels provide key and back-light, but need distance and throw to make this work (if they're too close to the talent, the ratio of backlight to key will be too extreme), so the DoP I send this to may recommend HMI spots – which will mean the 4-lamp Kino in front will need daylight bulbs. So we'll probably set up width-wise in the as-yet-un-recced room – but you get the idea: we have a plan.

Operationally, Shot Designer is quick to manipulate and ruthlessly designed for tablet use, but even sausage fingers can bash together a lighting design on an iPhone. There's a highlighter mode so you can temporarily scribble over your diagram whilst explaining it. The software is smart too – you can link cameras so that you don't ‘cross the line’, and cameras can ‘follow’ targets… It builds a shot list from your moves so you can check your coverage before you wrap and move to the next scene.

Interestingly, there's a ‘Director's Viewfinder’ that's really handy: Shot Designer knows the camera in your device (and if it doesn't, you can work it out), so you can pinch and zoom to get your shot size and read off the focal length for anything from an AF101 or 5D Mk 3 to an Arri Alexa – other formats (e.g. EX1R or Black Magic Cinema Camera) will be added to the list over time. Again, this is an ideal recce tool, letting you know in advance about lens choice and even camera choice.

This really is not a storyboard application – Per Holmes goes to great lengths to stress that storyboarding can push you down a prescribed route in shooting and can be cumbersome when things change, whereas the ‘block and stage’ method of using multiple takes or multiple cameras gives you far more to work with in the ‘third writing stage’ of editing. You can incorporate your storyboard frames, or any images, even ones taken on your device, and associate them with cameras. Again, that’s handy from a recce point of view right up to a reference of previous shots to match a house style, communicating the oft-tricky negative space idea, keeping continuity and so on. However, future iterations of Shot Designer are planned to include a 3D view – not in the ‘Pre-viz’ style of something like iClone or FrameForge but a clear and flexible tool for use whilst in production.

There is a free ‘single scene’ version, and a $20 license for unlimited scenes across all platforms – but check their notes on store policy: buyers should purchase the mobile version first to get a cross-over license to the desktop app, as store rules mean that if you buy the desktop app first, you'll still be forced to buy the mobile version.

Shot Designer may appear to be for narrative filmmaking, but the block and stage method helps set up for multicam, and a minute spent on blocking and staging any scene, from wedding to corporate to indie production, is time well spent. The ability to move from Mac or PC app to iPad or Android phone via Dropbox to share diagrams and add notes is a huge step forward from the paper napkin or ‘knocked up in PowerPoint’ approach. It will even be a great ‘shot notebook’ to communicate what the director wants to achieve.

Just for its sharability and speed at knocking up lighting and setup diagrams, Shot Designer is well worth a look, even at $20 for the full featured version. If you combine it with the Blocking and Staging aspect and its planning capabilities, it’s a great tool for the Director, DoP and even (especially) a Videographer on a recce.

Edit: For those of us who haven’t bought an iPad yet – this might be the ‘killer app’ for the iPad mini…


Commercial building sites (and other locations) require PPE – Personal Protective Equipment: a hard hat, steel-capped boots and a high-visibility jacket at a minimum. It's a code: you can tell a trade or function by the colour of a helmet, and you can tell if someone's safe in an environment by the colour of their overalls. Sometimes it's a bit more relaxed; on some sites, it's vital to be dressed accordingly…

So, maybe about four times a year, I’ll be filming on a building site (or similar). It’s exciting work, I love it – it’s like getting an anatomy lesson in architecture, the people you meet are so NOT media but share a passion for what they do, and it’s a great antidote to Corporate Head Office Syndrome.

But today – a recce – was interesting. Half of our motley crew could not visit the site because they’d not brought their ‘PPE’. It brought back memories of school and not bringing the right PE kit. With the thankful exception that the site manager would not make us do our job in our underwear, unlike many PE teachers.

Okay, so luckily Pete the Lobster had some spares in his van and if the truth be told, Mr AirCon’s big DMs could pass as Steel Capped Boots (steelies), but a couple of chaps would have to pass on the tour.

I had to pass on a message to a fellow shooter about this, and suddenly realised – heck, who would even think about this unless they've been through the ritual humiliation before? Some poor chap dragged from his duties to dig up a pair of unloved and overused boots for you in a size that will hopefully avoid permanent toe damage; the location of a Hi Viz vest that's decidedly lo-vis and almost ‘Camo’ thanks to a community of bacterial life forms based on a gene swap between Lichen and Thrush; a hard hat that conspires to provide both whiplash and a medicine ball for your head whilst transferring arcane versions of transmissible dermatitis.

Dude, you go through this once or twice and suddenly, you buy your own kit. It then sits in your car for a year, untouched.

Then you go on site visits, recces, shoots, and each time you avoid brushing your Hi Vis jacket against tar, soil, sand, cement, glue or anything. Your boots are protected from the worst of the elements by architectural pebbles and galvanised walkways; your hard hat never contacts anything more onerous than the plastic storage bag you received it in.

A couple of years later, you're out on a site visit and your PPE is still in showroom condition. You suddenly want a ‘distressing service’ to distress your day-glo jacket and shiny boots, to avoid the glares from the engineers around you who already resent the fact that you're here to commit their labours to video.

For what it’s worth, it can cost you less than £50 to get your hat, gloves, boots and vest, which you can pack into a bag and let sit in the car for ever and a day. For those of us in this community that will never need to film on a building site, no worries. But believe me, over 10 years, it’s nothing. I’m very glad to have it in the car, and suddenly a job comes up and that PPE kit will save your bacon.

Or even your life.

The Light Fantastic

Just back from a manic week: shooting in Beirut and Cairo, then on to Cambridge and finally Edinburgh. We were shooting documentary style – interviews plus GVs (General Views, aka B-Roll) and cutaways. The schedules were fluid, the locations unseen, and everything needed to be shot at NTSC frame rates. Immediately, my favourite camera for this sort of job (Sony's FS100) was out. Secondly, we needed a lighting kit that was portable, flexible and light.

Even in these days of extremely sensitive cameras, lighting is still an essential part of video work. Even if it’s a bit of addition with a reflector or subtraction with a black drape, you’re adapting the light to reveal shape and form and directing the viewer’s eye to what’s important to your story.

Of course, we can’t all travel with a couple of 7-Tonne lighting trucks full of HMI Brutes and Generators, or even a boxful of Blondes and Redheads. I’ve had a little interview kit of Dedos, Totas and a softbox with an egg-crate, but then these create a separate box of cables, dimmers, plugs, RCDs and stands, and whilst easy to throw in the boot of the car, it’s not exactly travel friendly.

I recently invested in a couple of 1×1 style LED panels, run off V-Lock batteries. These have been a revelation – the freedom to light ‘wirelessly’, and with enough brightness to do a dual-key two-up interview with three cameras has been great. I’ve got the entire kit into a Pelicase with stands, reflector, batteries and charger – but at a gnat’s under 30 Kg, it attracts ‘heavy’ surcharges when flown (and eye-rolls from check-in staff). Then add a tripod bag, then spare a thought for the sartorial and grooming needs of Yours Truly, and the prices go up, as do the chances of something going missing. Also, a stack of pelicases and flight cases lets everyone know that the Media Circus is in town. Such attention isn’t always welcome – especially from those in uniform.

So I’ve been shopping.

I’ve found some little LED lamps on eBay that clip together and run off the same batteries as my FS100. Add a couple of lightweight stands, and the Safari tripod, add a few yards of bubblewrap and a ‘Bag For Life’ full of clothing, all thrown into an Argos cheapie lightweight suitcase. I reckon the case is probably good for three, maybe four trips when reinforced with luggage straps, but getting three bags into one, and doing so under 20 Kg, is a very neat trick. No excess baggage charges, no additional overweight baggage charges, no trips to oversize baggage handling, no solo struggling with four bags…

Entire shoot kit, including tripod and 3-head lighting.

The six LED lamps and three stands allowed for basic 3-point lighting, and their native daylight balance meant that, for the best part, we were augmenting the available light in our locations. Even outdoors, 3 LED lamps bolted together, about 1.5 meters from the subject (and a foot or so above his eyeline), produced a beautiful result. Without the lamp we'd have ‘just another voxpop’, but with the lamp – with the ability to bring his face up one f-stop from the background – we had a very slick shot. And because it's all battery driven, we could do this outdoors, we could run around to different locations, and never had to worry about bashing cables – or even finding a power point that worked.

Now, there's LED, and there's LED. These were not Litepanels lamps, and there is a little bit of the ‘lime’ about the light. CRI was below 90, which isn't very good. However, this was easy to cheer up using FCP-X's colour board, and quite frankly most humans would not see the green tinge unless I carefully pointed it out and did a ‘before/after’ – and even then, my clients weren't in the slightest bit bothered; they just thought I was being a bit of an ‘Artiste’.

We shot on my Canon 550D using the Canon 17-55 f2.8 IS zoom and a Sigma 50mm 1.4 in some of the smaller locations (to really throw the background out of focus). For GVs and B-Roll, the Image Stabilisation was essential for getting shots where we couldn’t take a tripod, or for working so fast a tripod would have been a liability. You’ll have to imagine standing at the edge of Cairo traffic, or wandering through back street markets – or filming buildings next to razor wire blockades guarded by soldiers…

So, the camera could be thrown in a backpack with three lenses, a Zoom recorder, a couple of mics, batteries, charger, a little LitePanels Micro ‘eye-light’ and of course the Zacuto Z-Finder. Everything else, including tripod, stands, lamps and chargers, plus clothing, go in the suitcase.

I really prefer the Pelicase, I love my 1x1s, I’m so glad to be back on the Sachtler head and using an FS100, but I’ve got my ‘low profile’ kit together now. And with the little panels using NP-F batteries (or 5x AAs), clipping together to make a key, or staying separate for background lighting, it’s a very flexible kit.

Two little quotes come to mind. At a MacVideo event a while back, Dedo Weigert (the DoP of Dedo lamp fame) asserted that lighting is not about quantity, but about quality. On a recent podcast, DoP Shane Hurlbut stated, in reaction to the idea that sensitive cameras ‘don't need extra lighting’, that it is a DoP's duty to control light rather than to accept what's already there. I've taken both of these to heart with portable LED lamps, as there's no longer an excuse to shoot without.

PS: I’ll be doing some further tests with the lamps, and intend to make a video from the results.

Blade and a J-cut, two bits!

Final Cut Pro X doesn't do J-cuts. It doesn't do them at all, and whilst I am not an aggressive or violent person, I feel the need to sit on a naughty step for thinking what I'd like to do to this bit of software if it were something tangible.

What am I talking about? Any editor will tell you that in ‘How To Edit 102’ we learn about the J-cut. Very simply, it's when a simple cut between two shots has the audio of the second clip start just a fraction before its picture. Or, to put it another way, the second shot starts with new audio over the old shot, then the video cuts to the new shot.

Let’s imagine a string of 3 comments by 3 different people.

We edit the comments so that they flow. But the magic of the J-cut is that as we look at the first person, we hear the second person starting to talk – just as we do in a discussion round a table in real life – and then (AND ONLY THEN) we look at them. There's about half a second, maybe a bit less, between hearing them and actually looking at them. So we start the audio where it should start, and the video follows between 7 and 12 frames afterwards (that's a quarter to half a second – we're being subtle here!)

When we see this in television and film, it mimics our every day experience, and it feels very natural. Comfortable.

It’s an editing ‘condiment’. Like adding a bit of salt to food, it’s not clean and pure, but it feels right.

So looking at the clips in the timeline, there’s an offset between when the audio cuts, and when the picture cuts.

It works both ways: if sound cuts before picture, it's a J-cut (see how the tail of the J points to the left, indicating the lower (audio) track starts first in our left-to-right scan). If the pictures cut first and then the audio cuts, we get an L-cut – visually speaking.

So we cut our first take of a sequence, and we're really trying to get what people are saying into a logical order. Let's not worry about pictures and cutaways now; let's get the ‘radio programme with pictures’ version done. Sometimes it gets messy and we're cutting little bits of words and half-words together so that a parenthetical comment can stand alone. So long as it sounds right, we'll cover the messy pictures, with their jump cuts, with a cutaway.

The reason for my ire is that this mainstay of professional editing, this 1-step operation in FCP7, this ‘thing you can sum up in a letter’ is performed thusly in FCPX:

It’s like trying to put hospital corners on a duvet: it can be done, but that’s an awful lot of effort for something that should be quick and simple.

After all, when firing off a bunch of edited interviews for a client, hands up those who, in FCP7, Avid or PPro, would perhaps slip a few edits to add a little polish? Then unslide them back again to continue editing? Exactly.

Well, now and again I find a really good reason to switch from PPro or FCP7 into FCPX, but then spend an afternoon bumping my shins and grazing my scalp whilst climbing through its ‘little ways’. Well, I lost my temper big-time over the whole J-cut thing and turned to good friend Rick Young for solace. He's writing a book on FCP-X; he'll know how to do it.

And he did.

“Basically, detach all your interview clips’ audio so they’re separate from the video, and use the T tool to slip the video. Simple!”

But that's quite an odd thing for an app that boasts you'll never suffer bad sync – dangle your audio off your video for ever? Deal with a double track for every clip that could be in a J-cut? In a modern bit of software destined for the next 10 years of editing? That's madness! Actually, I think I put it a little stronger than that.

It sounded like being told the solution for getting my pet dog through his dog door was to cut him in half and re-attach him with velcro once he's through – then live for ever more with a dog that has to be cared for in case his velcro join comes apart. Yes, I do have funny feelings about my footage, but if you were to spend so much time with it, you'd go funny too.

And here’s the conclusion: Rick’s method works – it works fine. It works great, in fact. Give it a try, drop that beastly Apple method.

But here’s my finishing salvo: Apple’s FCPX team shouldn’t feel ‘oh that’s all right then’ and not implement an offset tool. It’s so simple: apply an Option key behaviour on the T trim tool. Thanks for XML and all that, I’m sure multicam will be great too. Just finish off your tool with a way to turn radio edits into J cuts *Just* *like* *you* *used* *to*. Put the Pro back into FCPX!

The ‘science’ of ‘awesome’?

What is it about manflu and training DVDs? Once again, I am confined to duvet, lines of lemsip cut with vitamin C ready for snorting, and I am watching the latest instalment of Per Holmes’ Magnum Opus – “Hot Moves – the Science of Awesome”. And once again, it’s an amazing watch.

This 115 minute long DVD/MP4 feature is an ‘addendum’ to the ‘Master Course In High-End Blocking & Staging’ course – a 6 DVD set of mindbending info, but rather than cover the mechanics of telling a story, or covering a scene so it will cut well, this DVD is about getting the trailer shots – as the narrator puts it, ‘awesome for the sake of being awesome’.

In his usual style, Per and his team hose you with information. It comes thick and fast – though I detect a slight slowing of tempo in this iteration, though that could be the lemsip. You know an iconic shot when you see it, but the team demonstrate how and why these shots work. And variations that don't.

Funnily enough, the audience for this production is probably a lot wider than previous titles, not only because it's great for low budget indie movie makers, but because it taps into the virtual world. This is a must-have for 3D animators and motion graphics designers looking for a movie style.

But even if you’re just going to invest in a slider or even tape a GoPro Hero to a broom stick, you’re going to get some great ideas and solid learning from the title.

It’s ‘required reading’ (watching) if you already have the ‘Visual Effects for Directors’ series, and a fun intro to the style of Per Holmes if you’re thinking about jumping in, but remember that this is the fun bit. You’ll still have to learn the footwork with Blocking & Staging.

Any peeves? The download version is a DVD image which really wants you to use Firefox extensions. I'd much prefer a smaller, straightforward MP4, preferably HD for my AppleTV. But that's such a minor thing, and I believe HCW may be going MP4 soon.

In conclusion, this is yet another solid training title from HCW that rewards repeated viewing and pulls no punches in delivering high quality and high quantity learning material.

Sweating the Petty Stuff

I’m putting the finishing touches on a simple set of ‘talking head’ videos destined for a corporate intranet to introduce a new section of content. Nothing particularly earth shaking or ground breaking. It certainly won’t win any awards, but it’s the kind of bread and butter work that pays bills.

However, there is a wrinkle. The client’s intranet is actually hosted and run by a separate company – a service provider. This service provider has set various limits to prevent silly things from happening, and these limits are hard-wired. If you have a special requirement, ‘the computer says no’.

One particular limit, which I will rant and rave about being particularly idiotic, pathetic and narrow minded, is that all video clips that users can upload to the system are limited to (get this) 12 Megabytes. That’s it. Any video, regardless of duration, cannot be any larger than 12 Megabytes. Period.

Another mark of bad programming in this system is that videos must be exactly a certain pixel size – no bigger, no smaller. That might be fair if correctly implemented, but no. The fixed size is a stupid, hobbled size and, worse still, is not exactly 4:3, not exactly 16:9, and not exactly anything really. So everything looks crap, though some look crapper than others.

Finally, the real evidence that the developers don't actually understand video – and don't care about it either – is that the dimensions are not divisible by 8, chucking the whole macroblock thing in the khazi. Digital video compression tends to divide the frame into blocks of 8 pixels and work out, within those blocks, how to divide things up further. If your video dimensions are not divisible by 8, you get more issues with quality, performance and the like. It's like designing car parks around the width of an Austin-Healey Sprite, not caring that people who park can't actually open their doors without bumping into other cars.
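If you ever do get a say in the dimensions, snapping them to macroblock-friendly values is trivial. A quick sketch (the function name and the example frame size are mine, not from any particular tool):

```python
def round_to_macroblock(dimension, block=8):
    """Round a pixel dimension down to the nearest multiple of the block size."""
    return dimension - (dimension % block)

# A hypothetically awkward frame size like 638x357 snaps to 632x352:
print(round_to_macroblock(638), round_to_macroblock(357))
```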

But the nurse says I must rest now. Rant over.

So, I’ve got to make all my talking head videos 12 Megabytes or less. How do you ensure this?

Well, method 1 is to monkey around with various settings in your compression software until you find something that sort of works.

Method 2 requires a pocket calculator, but saves a lot of time. You need to work out the ‘bitrate’ of your final video – how many bits will be used per second of video. If 500 kilobits are used per second and the video is 10 seconds long, then 500k times 10 seconds is 5,000k, or 5 Megabits.

Aha! But these are bits, the units of the internot. Not BYTES, and there are 8 bits in a Byte – believe me, I've counted them. We'll leave aside another nerdy thing, that there are actually 1024 Bytes in a KiloByte, not 1000 (ditto KiloBytes to MegaBytes) – enough already.

So basically, 5 Megabits divided by 8 gives the actual MegaBytes that the file will occupy on the hard disk: 0.625 in this case, or 625 KiloBytes.

So let's say I have a 6 minute video which has to be shoehorned into 12 MBytes. What bitrate do I need to set in Compressor/Episode/MPEG Streamclip/whatever?

6 minutes = 360 seconds. Our answer, in the language of spreadsheets, is

(Target_size_in_MegaBytes × 8 × 1000) divided by Duration_of_video_in_seconds

which equals 266 kilobits per second. That is not a lot, because it has to be divvied up between the video AND the audio – so give the audio at least 32 kilobits of that, and you're down to around 235 for the video.
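To save reaching for the pocket calculator every time, the same sum fits in a few lines of Python – a sketch, using the 1,000-not-1,024 convention and defaulting to a 32k audio allowance:

```python
def target_bitrate_kbps(target_megabytes, duration_seconds, audio_kbps=32):
    """Kilobits per second available for the video track, given a file
    size budget and an allowance for the audio track."""
    total_kilobits = target_megabytes * 8 * 1000  # MBytes -> Megabits -> kilobits
    total_kbps = total_kilobits / duration_seconds
    return total_kbps - audio_kbps

# 6 minutes squeezed into 12 MBytes leaves roughly 235 kbps for the video:
print(target_bitrate_kbps(12, 360))
```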

But if you have a 60 second commercial, the same sum gives (12 × 8 × 1000) divided by 60 – that's 1,600 kilobits, or 1.6 Megabits, per second. Far studlier: 640×360, a 128k soundtrack, room to spare!

So the 12 Megabyte limit is fine for commercials – but nothing of substance. The quality drops off a cliff after about 2 minutes of final duration.

But at least we have an equation which means you can measure twice and compress once, and not face another grinding of pips for 3 hours trying to get your magnum opus below 12.78 MBytes.

The Delights of Electric String

The thing about shooting and editing video is that there's just so much data created. Heaps of the stuff.

As I write, I'm sitting on a pot of about 48 Terabytes of data, and this is growing at about 1-2 Terabytes per month. Every project sits on a disk; each disk is mirrored and, when full, ‘retired’ and put off-site. Certain jobs are archived off to Blu-ray data discs, other jobs get copied to USB drives and handed over to the client.

So I have a Mac that spends most of its time copying. Just sucking bits off one drive and blowing them onto another.

But a little experience shook me out of my rut recently. I was doing a ‘crash edit’ job, taking rushes of a conference and editing them down into a summary – fast turnaround stuff. The conference was being recorded to DVCAM tape, and ‘in the olden days’ somebody would take note of the time when something interesting came up; the DVCAM decks would record ‘Time Of Day’ code, and therefore I could suck in ‘just the good bits’.

Many shows now get recorded to the Grass Valley Turbo – a beast of a hard disk recorder that records in an MPEG2 variant. That means it's half or even a quarter of the size of DV, but cannot be edited natively, so you have to transcode it (which takes longer than real-time – so why bother, stick to tape). Rick and I looked at the KiPro recently, which was great…


Imagine this: a Mac’s recording DVCpro50 (near-as-dammit DigiBeta) to hard disk. At the end of each 90 minute session, it leaves a 40 Gigabyte QT movie, edit ready, on its internal hard drive. The file gets copied to the edit computer’s hard disk in about 15-20 minutes (as a backup – could even edit off the original drive over the network), editing starts immediately. Output is rendered from Final Cut Pro to hard disk ready for immediate playout.

All this is done over Gigabit ethernet. It's just the usual Cat-5e network cable: it can be run long distances, patched through a well-designed facility's built-in network, and – get this – IT IS FASTER THAN FIREWIRE 800. It leaves USB for dead.

Trouble is, you need a good network engineer to configure the gigabit switches and ensure a private network (so you don't slow down the rest of the building, or have them share your precious bandwidth). But it was truly a delight to work with, and I'll be trying to get this on all similar jobs.

So, back home, backing up yet another Terabyte drive and noting with newfound dissatisfaction that it would take the usual 7 hours, I reminisced about the speed of Gigabit ethernet – and the fact that one can edit from a networked drive – and wondered if and how I could implement it for myself.

Going Gigabit would require a bit of investment.

However, let's start with a simple test: 1 Terabyte of mixed data (big and small files) on a standard LaCie hard drive. How long to dump that lot onto another hard drive?

USB2 – 16 hours.
FireWire 800 – 7 hours.
Gigabit ethernet – 4.5 hours (extrapolated from 40 GB files)
eSata PCIexpress – 3 hours
LTO-5 tape backup – 2 hours (assuming uncompressed) via iSCSI
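Those times extrapolate straightforwardly from sustained throughput. As a sanity check, here's the arithmetic in a few lines – the MB/s figures below are my rough assumptions for each interface, not measured benchmarks:

```python
def transfer_hours(data_gb, throughput_mb_per_s):
    """Hours to copy data_gb gigabytes at a sustained throughput in MB/s."""
    seconds = (data_gb * 1024) / throughput_mb_per_s
    return seconds / 3600

# Assumed sustained rates (MB/s) for copying 1 TB (1024 GB) of mixed data:
for name, rate in [("USB2", 18), ("FW800", 40), ("GigE", 65), ("eSATA", 95)]:
    print(f"{name}: {transfer_hours(1024, rate):.1f} hours per Terabyte")
```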

So I bought a LaCie eSata card for the MacBook Pro to better than halve the time it takes to back up a drive.

Gigabit ethernet, though, remains a wonderful technology when on-site, or where ingest and edit are split apart by more than a few meters.

So why not USB-3? Well, all my main drives have eSATA sockets on them, so USB-3 – whilst being a great technology – isn't quite prime time for me yet. When the next round of Mac laptops comes out, they should have USB-3 compliant ports, which will encourage more drives to be released in this format, and so the ecology will generally drift that way. The LaCie Rugged USB3 is a good start.

Why not LTO-5? Well, the base drive is £3k, and the tapes cost more than bare hard drives. I can get a Samsung Terabyte bare drive for £50, and an LTO-5 tape for £70. They’re easier to store, but they don’t ‘unarchive’ easily, and if I want higher capacity, I have to pension off the expensive drive.

So, right now, I have 30 drives with eSATA as well as FW800 ports, and for the minuscule investment of £40 for a dual channel SATA card for my MacBook Pro, I get to halve my duplication time. If a hard drive goes south, I can pop the backup in the ‘toaster’ and continue until my replacement drive arrives, then copy across.

But of course, I have to pull out the eSATA card if I want to use my SxS or SDHC cards… (Drums fingers) And if I didn't have a 17″ MacBook Pro, all this would be theoretical. (Drums fingers again) Come on, Apple, get with the USB-3 equipped MacBook Pro…

PS: You do need a dual channel card if you’re using SATA to back up – unlike FireWire, you can’t daisy-chain, and whilst there are ‘SATA Backplanes’ that work a little like USB hubs, it is not really the same thing.

Level Up!

As we're all aware, you can build a business from videography, and there will be times when you invest in equipment. There will be times when you divest yourself of equipment. The hope is that you divest when prices are high, and invest when prices are low. At all times, you bear in mind that equipment must pay back its original capital (what you paid for it) over time, even though some kit can't be a ‘line item’ (something you explicitly charge for).

So, you may buy a camera, and allocate a portion of your daily rate to pay for that camera. In a year or eight, it will have generated enough income to cover your ownership (the capital cost, the interest on any loans, the maintenance cost of keeping it working and the insurance cost of, well, insuring it), and whatever the accountant says to ‘write it off’.

But do you do that to your tripod?

Another way of looking at this is to get an idea of how much it costs to hire the kit you use on a daily basis. Well, maybe not all of it, but a full camera bag (including batteries, stock, a few accessories), a couple of microphones, and some sticks to put it all on, and some cans to hear it all on. That hire cost can be saved by owning your own kit, but the cost of owning your own kit must be recouped by charging for your own kit as if you had to hire it.
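That recouping is simple back-of-envelope arithmetic. A sketch of the sort of sum involved – every number in the example is hypothetical, not my actual rate card:

```python
def daily_kit_charge(capital_cost, shoot_days_per_year, payback_years=3,
                     annual_running_cost=0.0):
    """What to fold into each day rate so the kit pays itself back
    (capital plus running costs) over the payback period."""
    total_cost = capital_cost + annual_running_cost * payback_years
    return total_cost / (shoot_days_per_year * payback_years)

# A hypothetical £4,000 camera with £400/year insurance and maintenance,
# used 100 shoot days a year and paid back over 3 years – about £17/day:
print(daily_kit_charge(4000, 100, 3, 400))
```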

Now, having established that any purchases you make MUST be a revenue generator in a direct or indirect sense, what happens when you sell some kit that’s been written off, been a revenue generator and has since become a dust generator? Whoopee, free money.

It’s a bit like one of the many ‘FaceBook Farming Games’ you will have heard about. You’ve ‘levelled up’ and have been awarded a sack full of coins to invest in your farm/kingdom/videography business. Watchoo gonna doobout dat?

It would be lovely to go out and splurge on something you've always desired – that Steadicam system you always dreamed about, a full-on DSLR system with ALL the glass, or whatever. But really, the adult in all of us has to ask: what will generate enough cash, or enough ‘experience points’ (client goodwill/stickability/attractability) or enough ‘skill points’ (your own awesomeness/speed/capability), to pay for this quickly and earn enough to buy yet more toys?

Just like lottery winners, you need to know that a pot of cash needs to be invested in such a way that it returns enough profit to pay for its generation cost, AND keep its value over time (so it beats inflation) AND then generate an income for you on top of that. The inflation proof income generation of a million quid may be quite modest. You can tell I married an accountant. It makes great pillow talk.

And so here I am, having levelled up because I sold my Z1s and all their accessories, not willing to put the coins into the bigger pot, but to dedicate it to getting more experience/skill points. Okay, that’s a really nice position to be in, and I really hope you find yourself in that position too. But, then how does one ‘not screw it up’?

Okay, so ignoring all the toys… (I wanted Canon L series glass), what will your AUDIENCE see?

– Upgrading SD cards to SxS: speeds up your acquisition in time-critical situations. I doubt this situation affects many, but it would get me from end of shoot to warm bed quicker on every job. Very expensive though, and nobody will see the difference.

– Upgrading to daylight-running fluorescent lamps. Sigh – how often are you asked to do an interview in mixed tungsten and daylight, trying to get the outside without burning it out, having dimmed the puny little tungsten lamps you bought so you don’t fry your subject? Clients will see (and feel) this difference, sort of, but they probably won’t pay for it over standard tungsten.

– Getting into DSLR – now, there’s an investment for the modern videographer. Trouble is, you’re going to expose yourself to a whole new world of want. Clients will see the difference, but you’re going to have to do a whole lot more work for it, AND you are going to need really silly expensive stuff: LCD viewfinder (£250), shoulder stock (£350), batteries (£100), lenses (at least £1500), new bag, software, training – it will end up the same price as a brand new pro camera. But the pictures are worth it. Honest. Buy a 550D and a Tokina 11-16 and find out.

– Invest in a few high end plug-ins. I’ve already managed to get a job to pay for Magic Bullet, and I’ve been with Colorista for a long time. DVmatte Pro has made chromakey a joy, and FxFactory has done great things for me. They will for you, so long as you buy them for a job based on how many hours they save you. Clients don’t pay for plugins – not directly, anyway. But they’ll like the expensive look you can make (‘expensive’ is subtle – use the Magic Bullet waveform monitors to stop things oversaturating or blowing out, and explore the curves to add richness).

– Buy a Steadicam – get the shots you can only dream about as the camera floats around your scene. However, the learning curve is steep and requires arms like Popeye unless you get an arm and vest. You’re not going to get usable results in the first three months. You’re not going to get good enough until there’s a year of it under your belt. You’ll get lucky now and again, with shots that make the show, but you’re never going to be a full-time Steadicam operator (OTOH we may not want to be).

– Get a bunch of crash cams, including the GoPro Hero HD and a little DSLR. With this setup, you’re going to get shots that you will never ever get any other way. Put a GoPro on the end of a broom handle or three, and pretend it’s a PoleCam. Put a DSLR in the corner of the room and shoot timelapse like there’s no tomorrow. Clients love these shots, but you’re signing up to a whole lot more kit in your kit box.

Or just calm down and mix and match.

Microphones, tripods and lamps don’t go out of date, and will last a long time. I think I’ll level up a lamp or two (a Kino and a dedo spot), add a 50mm f1.4 lens and get a slider from the Z1 cash. Each one of those will be seen by clients. Will I earn any more on a daily rate? No. Will I get repeat bookings? Will I get fans? Will I be proud of the new work? Yes. That will generate the extra income, be it ever so small. But over time it adds up.

Oh, yes, and I need a GoPro Hero. And a 24-80mm f2.8. And a Steadicam. I really want a Steadicam. And a MacPro. And Adobe CS5. And Boris Continuum. And most of the Foundry plugins.

Oh dear…

And another thing…

Just back from the Broadcast show (BVE2010?), where I participated on a panel hosted by Rick Young about the future of video, alongside luminaries Larry Jordan, Christina and David Fox. We debated various topics and I hope it will be up on MacVideo.TV soon.

But as always in these situations, I’ve come out of the room, had a little time to reflect on what we said, and am now spending the evening slapping my forehead and muttering ‘should have said that’, ‘should have mentioned this’ and ‘why did I open my stupid gob about the other?’. So, to end this circle of ‘oh, and another thing’ grumblings, here’s what I wanted to add now that I’ve thought about it properly.


So we were talking about what we should be shooting on and editing with over time. Generally, shooting formats do not make good editing formats, and editing formats do not make good delivery formats. So choose the right codec for the job, and think about devices like the NanoFlash which separate your codec choice from your camera choice. Weigh up the time taken to do a ‘virtual telecine’ of footage into an editable format against the instant gratification – but sometimes long-term toil – of editing in your shooting format. Flash was once king of web delivery but – shock, horror – it isn’t supported by iPhone and iPad. Be ready to convert assets to H.264, or even Ogg at this rate. Or Dirac. Or maybe even Microsoft’s VC-1, the codec almost, but not entirely, unlike MPEG-4.

Above all, codecs are like vegetables – they come in and go out of season. Keep an eye on this and keep high quality masters of stuff you want to keep. If you want to keep them for decades, consider lossless or near lossless (like PhotoJPEG) and be prepared to transcode in the future.
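That ‘be ready to convert’ step might look something like this – a sketch that assembles a typical ffmpeg command line for turning a high-quality master into an H.264 web deliverable. The CRF and preset values are assumptions to tune to taste, and the command is only printed here, not run:

```python
# Sketch of the 'convert assets to H.264' step: build a typical ffmpeg
# command line for turning a high-quality master into a web deliverable.
# CRF/preset values are illustrative assumptions, and nothing is executed.

def h264_transcode_cmd(master, output, crf=20, preset="slow"):
    """Return an ffmpeg argument list for an H.264/AAC web encode."""
    return [
        "ffmpeg",
        "-i", master,        # the high-quality master (e.g. PhotoJPEG)
        "-c:v", "libx264",   # H.264 for web/iPhone/iPad delivery
        "-crf", str(crf),    # quality-based rate control
        "-preset", preset,   # slower preset = better compression
        "-c:a", "aac",       # AAC audio to match
        output,
    ]

cmd = h264_transcode_cmd("master.mov", "web_deliverable.mp4")
print(" ".join(cmd))
```

Keeping the master untouched and re-running a recipe like this is what lets you re-transcode when the next codec season rolls around.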


There seemed to be a bit of apathy about live events coming to internet video. Which I find odd.

If we’re to roll over and say that the idea of tuning in at 8PM to watch a show is ‘over’ and everyone is using PVRs, then surely the concept of Broadcaster is dead?

Of course not, and don’t forget the big things Broadcast does well: Live. Leaving aside whether it’s good or not, live broadcast TV does things like News, Sport, Spectacle and Entertainment unlike anything else. It has to be live, it has to be big and of course it’s hard to ignore the ads.

Has anyone tried to use the web during a big news event? It grinds to a halt, no video works, everything is clogged up. So if the internet is to perform the same trick, it needs to know how to do broadcast: video on the web using a broadcast protocol. Yes, yes, this is already possible in closed Microsoft networks, but not on the internet. Not yet, anyway.

Serendipity: a post on a BBC blog with an interesting point of view.


Web video will probably be people’s first taste of ‘High Definition’, now that YouTube, Vimeo and the rest are firmly ensconced in 720p land. Better than broadcast (well, okay, not really, but stick with me for a while), on demand, wide range of stuff. So why is it so hard to watch on your lovely expensive living room TV?

Sure, Apple TV was supposed to fix that but somehow never got there. Storage manufacturers are having a go, but of course you have to crack your DRM to use it. But for Pete’s sake, check out how quickly any desire to watch stuff – originally on your computer – on your big plasma turns into a High School Engineering project.

HDMI has been hobbled to prevent you using a computer to play DVDs and BluRay discs, or you find the sound is missing, or it’s the wrong resolution or shape, or it’s being scaled when it’s not necessary. Then there’s the dumbness over HD – buy a movie from iTunes and you get something of lower resolution and lower functionality than if you’d bought the DVD, and often it costs more than Amazon. Like that’s going to build an industry.


Don’t get me started. Fibre is as fast as the boxes at either end. Peer to peer (even if within the ISP level) could be the next intelligent proxy service for large files, but because P2P = piracy = end of the world, your bandwidth is being shaped. Oh no, not up… Down. Throttles occur. Want those throttles lifted? Pay. This is divisive. Companies are rationing out, not implementing more. Growth builds business, but growth doesn’t mean dividing the cake into ever smaller units. But really, I am ranting now.


Technology suffers greatly from hype. The Cheap DV Revolution had a huge dose of hype, everyone got bored of it and suddenly DVCAM ousted BetaSP whilst nobody was looking.

The same thing happened with HD – it was going to change the world as we knew it, then fell a bit flat and nothing more was said in the consumer world. Suddenly there’s a million HD subscribers (though we’re still talking the test phase).

That big initial hit of enthusiasm, followed by a rapid tailing-off of interest into a trough of disillusionment, hides the slow and steady growth of a technology until it reaches ubiquity and invisibility – finally delivering the promises made at the peak of the hype cycle.

So we’ve started that slow climb out of disillusionment with HD; I predict the plunge of DSLRs into that trough sometime soon, and 3D is still climbing up that cycle.

We’ve done digital, gone tapeless. Still a lot of work to do on HD. We’re nowhere near done on that. DSLRs will have a brief moment in the sun like their DoF Adaptors before them, but will they remain indie film maker tools once Scarlet and a new Hybrid hit the market?

3D is in its infancy. Will it come? Sure, along with VHD, but unless you have big big bucks, I’d still shoot long-life material in good HD rather than half-finished 3D formats. But we need to play and to test – how will motion graphics or even simple tummy tags work in 3D?

And these are just three of my little rumblings from the afternoon’s panel – I’ll probably chew it over for ages.

I hope, at least, the audience got a basketful of things to think about.


Where’s yo’ head at?

I’ve been restricted to quarters due to Man Flu recently, and have kept some rather odd company, in the form of the boxed DVD set of ‘Visual Effects for Directors’.

Over 7 intensive DVDs, the Hollywood Camera Work team takes you through the basics and the not so basics of working with 3d software, compositing, match moving, a deep dive into chromakey (from painting a studio to planning shots in a small cyc studio), and dealing with simulations that overlay your movies – explosions, collisions, hard/soft body interaction, particles.

All this is from the point of view of an Indie film maker with an HVX200 or something similar, non-esoteric 3d and compositing software running on desktop computers, and a big vision.

It’s not a course in how to use 3D or compositing software, though it pulls no punches on giving you very detailed information. Rather, it’s there to give the director or producer an understanding of the process, so they can fully comprehend the unfolding workflow when ‘we’ll comp that in post’, and how to plan a chromakey shot that tracks round a subject so they can be inserted into a CGI scene.

Like the other product in HCW’s stable – “High End Blocking & Staging”, this is not an easy watch. You’ll be ‘drinking from the firehose’ so to speak. Info comes thick and fast, and you’ll benefit from repeated watching. There’s over 10 hours of stuff in there, spread over 7 DVDs, and there’s no time for tourists. Buckle up, take notes, and there’s coursework for you to test yourself on hosted at the HCW website.

These courses are sometimes called a ‘film school in a box’, and that’s a pretty good description. It’s 25 years since I last sweated through intense lectures and came up gasping for air. But then, I find that sort of thing an enjoyable experience.

It’s not going to be suitable for every videographer. It’s aimed squarely at indie film production of the high-tech type (Blocking & Staging is much more general and recommended for all ‘film makers’). The price, $329, is a bargain for what you’re getting. A wise investment. But since I bought my set, HCW are now offering you an option to download images of the DVDs, and they will post you a box and some labels.

Why? Because I had to pay VAT and import duty on my set, suffering delays and surcharges along the way. This way, you download the DVD images, and burn your own disks – the official labels are valued at $3 so do not attract surcharges and duty.

Besides which, this is the sort of thing that’s great to dip in and out of on a small screen as well as the home setup. There is SO much information, it needs repeated viewings to allow all that great knowledge to become part of your own mental toolset. It may not be as instant as Neo’s upload – “I know kung fu…” – but you’ll empathise with the intensity of the experience.