Preset white settings – Danger, Will Robinson!

Many cameras allow us to dial in a specific colour temperature for a shot, rather than setting it manually with a white or grey card. It sounds convenient, but it can deliver alarming results.

Consider these two images shot under different lighting systems. In both cases, I’ve put a daylight Dedo DLED-4 over the shoulder as a sort of 3/4-backlight-kicker thing. It’s a look I like, and it’s the constant in both setups. I like to light to 4500 K, which gives me wriggle room over colour temperature and is a piece of cake with a bi-colour device like the Dedo Felloni. I had to shift the camera to 5600 K to match the lighting sources with my cheaper LED panel lamps.

The camera was a Sony PMW F3 with Nikkor 35-70 at around f4.

Immediately, you can see that the Non-brand LED panels are green. Not just a little green – they are Incredible Hulk green. Note the daylight highlight on Rick’s temple – it’s about the same in both images, though I did use 5600 K on the F3 for the Non-brand LEDs. I tried using half CTO, but the results were absolutely hideous.

Both images are from LED sources and are untouched in terms of grading. The Fellonis are neutral, accurate and appear to all intents and purposes to be full spectrum. I also find the diffusion and fill tweaks to be particularly nice, considering the cramped location and speed at which we had to work.

So it’s plain: be careful when setting a colour temperature in-camera – it works well with continuous spectrum lighting, but looks horrible with more restrictive sources – especially LED devices from the lower end of our budget, which output a very restricted spectrum.
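Incidentally, gels and camera presets don’t behave linearly in Kelvin: gel corrections are specified in mired (1,000,000 divided by the colour temperature), which is why the same gel shifts a daylight source much further in Kelvin than a tungsten one. A quick sketch – the +80 mired figure is only roughly a half CTO, not a manufacturer’s datasheet value:

```python
def mired(kelvin):
    """Convert colour temperature to mired (micro reciprocal degrees)."""
    return 1_000_000 / kelvin

def apply_gel(kelvin, mired_shift):
    """Colour temperature after adding a gel of the given mired shift."""
    return 1_000_000 / (mired(kelvin) + mired_shift)

# the same warming gel moves a daylight source much further in Kelvin
print(round(apply_gel(5600, 80)))  # daylight source: -> 3867
print(round(apply_gel(3200, 80)))  # tungsten source: -> 2548
```

A ~1700 K shift on daylight versus ~650 K on tungsten, from the same piece of gel – which is part of why gelling restricted-spectrum sources by eye goes wrong so easily.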

But that’s not the whole story.

Let’s do what we’re supposed to do: let’s white-set on a known white reference (not just a bit of photocopier paper). Let’s re-light with our non-brand LED panels. At first glance, hey! It looks good!

Let’s compare with the more ‘continuous spectrum’ Fellonis on the right. Note that Rick’s skin tone on the left is far flatter with a hint of grey and yellow. Note also that the pure daylight source behind him is now casting a MAGENTA light over his hair and shirt – all that green has been neutralised, leaving a nasty Magenta odour hanging about. If we try and cancel that out, it will bring the green back in. Meanwhile, the brighter reds and oranges have been tempered by removing so much green.

The result? There’s an ashen look to the skin. It’s a bit dull. It lacks life. On the right, there’s some flush to the face around and under the eyes. The backlight and his shirt pick up the fresh daylight from the three-quarter back. It’s natural, rather than made up.

But bear in mind that if I were using the Non-brand LEDs in a mixed environment, trying to blend them with existing daylight or tungsten or – egad, even worse – both, the results would be just awful. That green tinge comes back, and it really doesn’t sit with anything else. I remember vividly a shoot trying to use these No-name panels in a mixed lighting situation, pinning half-CTO and diff over them to try and calm them, and still seeing the green tinge seep through.

  • Take home 1: be careful using pre-set Kelvin settings as not all lighting is full spectrum. You’re choosing a compromise. It can be the best decision, but it can also be wrong.
  • Take home 2: a proper white-set is the way to go in difficult situations, but strong corrections will impact other lighting sources (ambient, backlight, fill, etc).
  • Take home 3: unless shooting raw, correcting for white balance issues can only take away data from your image and reduce its quality.

Note: look out for the full story (and more) on moviemachine.tv soon.

C100 noise – the fix

The Canon C100 is an 8 bit camera, so its images have ‘texture’ – a sort of electronic grain reminiscent of film. Most of the time this is invisible, or a pleasant part of the picture. In some situations, it can be an absolute menace. Scenes that contain large areas of gently grading tone pose a huge problem to an 8 bit system: areas of blue sky, still water, or in my case, a boring white wall of the interview room.

Setup

Whilst we set up, I shot some tests to help Alex with tuning his workflow for speed. It rapidly became obvious that we’d found the perfect shot to demonstrate the dangers of noise – and in particular, the C100’s occasional issue with a pattern of vertical stripes:

Click the images below to view the image at 1:1 – this is important – and for some browsers (like Chrome) you may need to click the image again to zoom in.

So, due to the balance of the lighting (couldn’t black the room out, couldn’t change rooms), we were working at 1250 ISO – roughly equivalent to adding 6dB of gain. I was expecting a little noise, but not much.

Not that much. And remember, this is a still – in reality, it’s boiling away and drawing attention to itself.

It’s recommended to run an Auto Black Balance at the start of every shoot, or whenever the camera changes temperature (e.g. indoors to outdoors). Officially, one should Auto Black Balance after every ISO change. An Auto Black Balance routine identifies the ‘static’ noise to the camera’s image processor, which can then do a better job of hiding it.

So, we black balanced the camera, and Alex took over the role of lit object.

There was some improvement, but the vertical stripes could still be seen. It’s not helped by being a predominantly blue background – we’re seeing noise mostly from the blue channel, and blue is notorious for being ‘the noisy weak one’ when it comes to video sensors. Remember that when you choose your chromakey background (see footnote).

The first thought is to use a denoiser – a plugin that analyses the noise pattern and removes it. The C100 uses some denoising in-camera for its AVCHD recordings, but in this case even the in-camera denoiser was swamped. Neat Video is a great noise reduction plug-in, available for many platforms and most editing software. I tried its quick and simple ‘Easy Setup’, which dramatically improved things.

But it’s not quite perfect – there’s still some mottling. In some respects, it’s done too good a job at removing the speckles of noise, leaving some errors in colour behind. You can fettle with the controls in advanced mode to fine tune it, but perversely, adding a little artificial monochrome noise helped a lot:

We noticed that having a little more contrast in the tonal transition seemed to strongly alter the noise pattern – less subtlety to deal with. I hung up my jacket as a makeshift cucoloris to see how the noise was affected by sharper transitions of tone.

So, we needed more contrast in the background – which we eventually achieved by lowering the ambient light in the room (two translucent curtains didn’t help much). But in the meantime, we tried denoising this, and playing around with vignettes. That demonstrated the benefit of more contrast – although the colour balance was hideous.

However, there’s banding in this – and when encoded for web playback, those bands will be ‘enhanced’ thanks to the way lossy encoding works.

We finally got the balance right by using Magic Bullet Looks to create a vignette that raised the contrast of the background gradient, did a little colour correction to help the skin tones, and even some skin smoothing.

The Issue

We’re cleaning up a noisy camera image and generating a cleaner output. Almost all of my work goes up on the web, and as a rule, nice clean video makes for better viewing than drab noisy video. However, super-clean denoised video can do odd things once encoded to H.264 and uploaded to a service such as Vimeo.

Furthermore, not all encoders were created equal. I tried three different types of encoder: the quick and dirty Turbo264, the MainConcept H.264 encoder that works fast with OpenCL hardware, and the open source but well respected X264 encoder. The latter two were processed in Episode Pro 6.4.1. The movies follow the above story; you can ignore the audio – we were just ‘mucking around’ checking stuff.

The best results came from Episode using X264.

Here’s the same master movie encoded via MainConcept – although optimised for OpenCL, it actually took 15% longer than X264 on my MacBook Pro, and to my eyes seems a little blotchier.

Finally Turbo264 – which is a single pass encoder aimed at speed. It’s not bad, but not very good either.

Finally, a look at YouTube:

This shows that each service tunes its encoding to its target audience. YouTube seems to cater for noisy video, but doesn’t like strong action or dramatic tonal changes – as befits its more domestic uploads. Vimeo is trying very hard to achieve a good quality balance, but can be confused by subtle gradation. Download the uploaded masters and compare if you wish.

In Conclusion:

Ideally, one would do a little noise reduction, then add a touch of film grain to ‘wake up’ the encoder and give it something to chew on – flat areas of tone seem to make the encoding ‘lazy’. I ended up using Magic Bullet Looks yet again, pepping up the skin tones with Colorista, a little bit of Cosmo to cater for any dramatic makeup we may come across (no time to alter the lighting between interviewees), a vignette to hide the worst of the background noise, and a subtle amount of film grain. For our uses, it looked great both on the ProRes projected version and the subsequent online videos.

Here’s the MBL setup:

What’s going on?

There are, broadly speaking, three classes of camera recording: 8 bits per channel, 10 bits per channel and 12 bits per channel (yes, there are exotic 16 bit systems and beyond). There are three channels – one each for Red, Green and Blue. In each channel, the tonal range from black to white is split into steps. A 2 bit system allows 4 ‘steps’, as you can make 4 numbers by mixing up 2 ‘bits’ (00, 01, 10 and 11 in binary). So a 2 bit image would have black, dark grey, light grey and white. To make an image in colour, you’d have red, green and blue versions stacked on top of each other.

8 bit video has, in theory, 256 steps each for red, green and blue. For various reasons, the first 16 steps are used for other things, and peak white happens at step 235, leaving the top 20 steps for engineering uses. So there are only about 220 steps between black and white. If that covers, say, 8 stops of brightness range, then a 0.5 stop difference in brightness is described by only about 14 steps. That can create visible bands.

So, there’s a trick. Just like in printing, we can diffuse the edges of each band very carefully by ‘dithering’ the pixels like an airbrush. The Canon Cinema range perform their magic in just an 8 bit space by doing a lot of ‘diffusion dithering’ and that can look gosh-darn like film grain.
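The diffusion-dithering idea is easy to demonstrate: quantise a smooth ramp to a handful of levels and you get hard bands, but add a little noise before quantising and the local average tracks the original gradient far better. A toy sketch (eight levels rather than ~220, purely to make the effect obvious):

```python
import random

random.seed(42)
N = 1000
LEVELS = 8                                # a deliberately coarse tonal scale
step = 1 / (LEVELS - 1)
ramp = [i / (N - 1) for i in range(N)]    # smooth gradient from black to white

def quantize(x):
    """Snap a value to the nearest of the LEVELS available tonal steps."""
    return round(x / step) * step

hard = [quantize(x) for x in ramp]        # plain quantisation: hard bands
dithered = [quantize(min(1.0, max(0.0, x + random.uniform(-step / 2, step / 2))))
            for x in ramp]                # noise added *before* quantising

def block_means(seq, w=25):
    """Average small blocks, crudely mimicking how the eye blurs pixels."""
    return [sum(seq[i:i + w]) / w for i in range(0, len(seq) - w + 1, w)]

err_hard = sum(abs(a - b) for a, b in zip(block_means(hard), block_means(ramp)))
err_dith = sum(abs(a - b) for a, b in zip(block_means(dithered), block_means(ramp)))
print(err_dith < err_hard)  # dithering tracks the gradient better
```

The dithered version uses exactly the same eight levels, but scatters the transitions so the eye averages them back into a smooth ramp – which is why it can pass for film grain.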

Cameras such as the F5 use 10 bits per channel – so there are 1024 steps rather than about 220, and therefore handle subtlety well. Alexa, BMCC and Epic operate at 12 bits per channel – 4096 steps between black and white for each channel. This provides plenty of space – or ‘data wriggle room’ to move your tonality around in post, and deliver a super-clean master file.
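The step counts above reduce to a couple of lines of arithmetic. Spreading the steps evenly across the stops is a simplification (real transfer curves are not linear), but it reproduces the figures used in this piece:

```python
def code_values(bits):
    """Total code values per channel at a given bit depth."""
    return 2 ** bits

legal_8bit = 235 - 16   # 8 bit video: black at 16, white at 235 -> ~220 steps
stops = 8               # the example brightness range from the text
per_half_stop = legal_8bit / stops / 2

print(code_values(8), code_values(10), code_values(12))  # 256 1024 4096
print(legal_8bit, round(per_half_stop))                  # 219 14
```

At 10 or 12 bits, the same half-stop transition gets four to sixteen times as many steps to play with – hence the ‘data wriggle room’.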

But as we’ve seen from the uploaded video – if web is your delivery, you’re faced with 4:2:0 colour and encoders that are out of your control.

The C100’s 8 bit AVCHD codec does clever things, including some noise reduction, and this may have skewed the results here. I’ll need to repeat the test with a 4:2:2 ProRes-type recorder, where no in-camera noise reduction is applied – other tests I’ve done have demonstrated that NeatVideo prefers noisy 10 bit ProRes over half-denoised AVCHD. But I think this will just lead to a cleaner image, and that doesn’t necessarily help.

As perverse as it may seem, my little seek-and-destroy noise hunt has led to finding the best way to ADD noise.

Footnote: Like most large sensor cameras, the Canon C100 has a Bayer pattern sensor – pixels are arranged in groups of four in a 2×2 grid. Each group contains a red pixel sensor, a blue pixel sensor and two green ones. Green has twice the effective data, making it the better choice for chromakey. But perhaps that’s a different post.

Turbo.264 HD – a quick and dirty guide for Mac based editors

Turbo.264 HD by Elgato is a Mac application sold as a consumer solution to help transform tricky formats like AVCHD into something more manageable. Rather than deal with professional formats like Apple ProRes, it uses H.264, a widely accepted format that efficiently stores high quality video in a small space. For given values of ‘quality’ and ‘small’, that is.

For the professional video editor, a common requirement is to create a version of their project to be uploaded to the web for use in services like Vimeo and YouTube. Whilst this can be achieved in-app with some edit software, not all do this at the quality that’s required, and often tie up the computer until the process is complete. This can be a lengthy process.

So, enter Turbo.264 HD – a ‘quick and dirty’ compressor that can do batches of movies and gives you access to the important H.264 controls that are key to making Vimeo/YouTube movies that stay in sync and perform well. It’s very simple in operation. The following guide will help you make your own presets for use with Vimeo and YouTube.

A quick and dirty guide for editors and videographers

First steps

Two Quicktime movies have been dropped onto the panel. Both are using custom presets created earlier. Click on the Format popup to select a preset, or add your own.


Vimeo/YouTube preset for Client Previews

Lots of presets have been built already in this copy of Turbo.264 HD – not just for the web but for iPad and iPhone use, even portrait (9:16) video. This guide will concentrate on two in particular.

Firstly, the Vimeo 720p version for client previews. This assumes that your master video will be in a high quality HD format such as 1080p ProRes, with 48 kHz audio and progressive scan.

Clicking the ‘+’ button bottom left makes a new profile you can name. There’s a base Profile to work from that you select from the Profile pop-up at the top on the right hand side. For the Vimeo preset, the ‘HD 720p’ profile is used.

Next, adjust the settings as indicated. We don’t want to use the Upload service (as privacy settings may need individual attention), and the Audio settings can stay at automatic. The Other tab has basic switches for subtitles, chapters and Dolby sound if they are part of the movie, and can be left alone.


Sending HD video via the internet

The second preset is useful when you need to send high quality material via the internet in an emergency. File formats such as ProRes are ideal for editing, but use a large amount of space. H.264 can incorporate very high quality in a much smaller file size, but the files are difficult to edit or play back in this state. However, they can be transcoded back to ProRes for editing.


The benefits and drawbacks of sending H.264 over ProRes

This preset does lower the quality by an almost imperceptible amount, and the original files should be sent via hard disk if possible. However when you need a quick turnaround under challenging circumstances (for example, a wifi internet connection in a hotel or coffee shop), this preset can help.

For example, a 2 minute 42 second ProRes clip uses 2.6 GB of disk space. The original clip shot on AVCHD at 1080p25 was 462 MB. However, using the H.264 settings below, the result was 101 MB with virtually no visible loss of quality. Over a 2 Mbps internet connection, the ProRes file would take almost three hours, the AVCHD file about half an hour, and the H.264 file under 7 minutes.
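The arithmetic behind those transfer times is worth having to hand – this sketch assumes a sustained 2 Mbps (2,000,000 bits per second) with no protocol overhead, so real uploads will be slower:

```python
def transfer_minutes(size_bytes, link_mbps=2.0):
    """Minutes to move a file over a link of link_mbps megabits per second."""
    return size_bytes * 8 / (link_mbps * 1_000_000) / 60

print(round(transfer_minutes(2.6e9)))     # ProRes master: 173 min, nearly 3 hours
print(round(transfer_minutes(462e6)))     # AVCHD original: 31 min
print(round(transfer_minutes(101e6), 1))  # H.264 version: 6.7 min
```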

And finally…

Hitting the start button starts the batch, and processed movies retain the original file name with a .mp4 extension. You can see that this 25fps 1080p movie is encoding at almost 28 fps, so a little faster than real time. The minutes remaining starts a little crazily then settles down. You can leave it running while you edit, but it will slow a little. When there’s no resizing and little compression, it can run twice as fast as real time (depends on the speed of your Mac).

Remember, this is just a quick and dirty method of turning around client previews quickly – I often have ‘batches’ to do, 6-12 movies of 3 mins each, or a couple of 20-30 min interview select reels with burned in timecode. I pump them all through Turbo264 rather than Episode Pro as – due to the high bitrate – you’re not going to see much difference.
When it comes to the final encode, a professional encoding solution such as Telestream Episode, with the X264 codec as a replacement H.264 encoder, will generate the best results.

Creating the Dance of the Seven Veils

Unboxing videos are an interesting phenomenon.

They don’t really count as ‘television’ or ‘film’ – in fact they’re not much more than a moving photo or even diagram. But they are part of the mythos of the launch of a new technical product.

I’ve just finished my first one – and it was ‘official’ – no pressure, then.

I first watched quite a few unboxing videos. This was, mostly, a chore. It was rapidly apparent that you need to impart some useful information to the viewer to keep them watching. Then there was the strange pleasure in ‘unwrapping’ – you have to become six years old all over again, even though – after a couple of decades of doing this – you’re more worried about what you’re going to do with all the packaging and when you can get rid of it.

So… to build the scene. The box I had to unpack was quite big. Too big for my usual ‘white cyclorama’ setup. I considered commandeering the dining room, but it was quite obvious that unless I was willing to work from midnight until six, that wasn’t going to happen. I have other work going on.

So it meant the office. Do I go for a nice Depth of Field look and risk spending time emptying the office of the usual rubbish and kibble? Or do I create a quiet corner of solitude? Of course I do. Then we have to rehearse the unpacking sequence.

Nothing seems more inopportune than suddenly scrabbling at something that won’t unwrap, or unfold, or look gorgeous. So, I have to unwrap with the aim of putting it all back together again – more than perfectly. I quickly get to see how I should pack things so it all unpacks nicely. I note all the tricks of the packager’s origami.

So, we start shooting. One shot, live, no chance to refocus/zoom, just keep the motion going.

I practise and practise picking up bundles of boring cables and giving them a star turn. I work out the order in which to remove them. I remember every item in each tray. Over and over again.

Only two takes happened without something silly happening – and after the second ‘reasonable’ take, I was so done. But still, I had to do some closeups, and some product shots. Ideally, everything’s one shot, but there are times when a cutaway is just so necessary, and I wish I’d shot more.

Learning Point: Film every section as a cutaway after you do a few good all-in-one takes.

Second big thing, which I kinda worked out from the get-go. Don’t try and do voiceover and actions. We’re blokes, multitasking doesn’t really work. It’s a one taker and you just need to get the whole thing done.

Do you really need voiceover, anyway? I chickened out and used ‘callout’ boxes of text in the edit. This was because I had been asked to make this unboxing video and to stand by for making different language versions – dubbing is very expensive, transcription and translation for subtitles can be expensive and lead to lots and lots of sync issues (German subs are 50% more voluminous than English subtitles and take time to fit in).

So, a bunch of call-out captions could be translated and substituted pretty easily. Well, that’s the plan.

Finally, remember the ‘call to action’ – what do you want your viewers to do having watched the video? Just a little graphic to say ‘buy here’ or ‘use this affiliate coupon’ and so on. A nod to the viewer to thank them for their attention.

And so, with a couple of hundred views in its first few hours of life, it’s not a Fenton video, but it’s out there stirring the pot. I’d like to have got more jokes and winks in there, but the audience likes these things plain and clear. It was an interesting exercise, but I’m keen to learn the lessons from it. Feedback welcomed! What do you want from an Unboxing Video?

Roll on the dead cats

Looks like I’m in the market for a couple of dead cats for my stick mics.

Interesting feedback from filming voxpops this week – especially from the women. I paraphrase only slightly:

“Why isn’t yours fluffy? I don’t like that one, it’s too black and stubby. I want one I can stroke. Don’t point that at me, it’s not nice.”

Now, on a minor technical point, stuffing your 416 or CS-1 in a dead cat when indoors is a technical faux-pas. An audio tautology. When you see it happen, you think ‘Film Students’, or a gauche attempt to appear ‘Pro’. Whilst we can discuss the use of a Sennheiser 416 indoors over more suitable short shotgun microphones on one hand, and chuckle at the sort of gut reactions above on the other, I’m a bit ashamed, to be honest.

I’ve never really thought of the situation from the voxpopper’s position – specifically, someone who isn’t used to the gear we use. We call them ‘gun’ mics, ‘rifle’ mics, it’s all a bit wrapped in that male viewpoint, and when somebody pokes something ever so slightly alien at you, resplendent in its anodised smooth black metal, it can be… well, intimidating.

It can also be confusing. I didn’t have a ‘reporter mic’ with me when we suddenly had a need to do a ‘friendly chat’ between three people, so the participants (to some degree media trained) took my short shotgun Sanken CS-1 (crumbs, here we go again) and used it more like a vocalist’s mic (close to the mouth), to the degree where the mic was dealing with uncomfortably loud source material (never mind the audio circuits in the camera). The next participant would take over and use the mic at the correct distance for a reporter mic. Lots of scrabbling with audio levels, application of the limiter in camera and compression in post rescued the shoot.

But I digress. The learning point from that is that, given a mic, media-trained folk will tend to shove reporter mics in people’s faces (including their own) ‘just like on TV’. But there is a sort of mic they KNOW should be wafted out of shot – that’s right, the big fluffy ones. You really can’t stuff that in somebody’s face.

So, here’s the deal. I will get a ‘Dead Cat’ windjammer for my hypercardioid (okay, short shotgun) mics when doing voxpops and accept a little less from them. Yes, it’s funny and unnecessary and, to techie crews, ‘poserish’ – but it’s also funny for the interviewee, and that relaxes them. And they’ll keep the mics away from their face.

So roll on the Dead Cats.

Preparing Setups with Shot Designer

Following on from their line of successful filmmaking tutorials for Directors, Per Holmes and the Hollywood Camera Work team have launched their new app for iOS/Android and Mac/Windows – Shot Designer.

This is a ‘blocking’ tool – a visual way of mapping out ‘who or what goes where, does what and when’ in a scene, and where cameras should be to pick up the action. For a full intro to the craft of blocking scenes from interviews to action scenes, check out the DVDs. Blocking diagrams can be – and often are – scribbled out on scraps of paper, but Shot Designer makes things neat, quick, sharable via Dropbox, and *animated*. A complex scene on paper can become a cryptic mashup of lines and circles, but Shot Designer shows character and camera moves in real time or in steps.

You can set up lighting diagrams too – using common fittings including KinoFlos, 1x1s, large and small fresnels, and populate scenes with scenery, props, cranes, dollies, mic booms and so on – all in a basic visual language familiar to the industry and just the sort of heart-warming brief that crews like to see before they arrive on set.

Matt's 2-up setup

My quick example (taking less time than it would to describe over a phone) is a simple 2-up talking head discussion. The locked-off wide is matched with two cameras which can either get a single closeup on each or, if shifted, a nice over-shoulder shot. A couple of 800W fresnels provide key and back-light, but need distance and throw to make this work (if too close to the talent, the ratio of backlight to key will be too extreme), so the DoP I send this to may recommend HMI spots – which will mean the 4-lamp Kino in front will need daylight bulbs. So, we’ll probably set up width-wise in the as yet un-recced room – but you get the idea: we have a plan.

Operationally, Shot Designer is quick to manipulate and is ruthlessly designed for tablet use, but even sausage fingers can bash together a lighting design on an iPhone. There’s a highlighter mode so you can temporarily scribble over your diagram whilst explaining it. The software is smart too – you can link cameras so that you don’t ‘cross the line’, and cameras can ‘follow’ targets. It builds a shot list from your moves so you can check your coverage before you wrap and move to the next scene.

Interestingly, there’s a ‘Director’s Viewfinder’ that’s really handy: Shot Designer knows the camera in your device (and if it doesn’t, you can work it out), so you can pinch and zoom to get your shot size and read off the focal length for anything from an AF101 or 5D Mk 3 to an Arri Alexa – other formats (e.g. EX1R or Blackmagic Cinema Camera) will be added to the list over time. Again, this is an ideal recce tool, letting you know in advance about lens choice and even camera choice.
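Under the hood, a director’s viewfinder is essentially the pinhole relation between sensor width, focal length and horizontal field of view. A rough sketch – the sensor widths are approximate published figures used only for illustration, not values taken from Shot Designer itself:

```python
import math

def focal_for_fov(sensor_width_mm, hfov_deg):
    """Focal length (mm) giving a horizontal field of view on a given sensor."""
    return sensor_width_mm / (2 * math.tan(math.radians(hfov_deg) / 2))

# the same framing (~30 degrees horizontal) needs a different lens on each sensor
for name, width in [("Micro 4/3 (AF101)", 17.3),
                    ("Super 35 (Alexa)", 23.8),
                    ("Full frame (5D Mk 3)", 36.0)]:
    print(name, round(focal_for_fov(width, 30.0)), "mm")
```

Roughly 32 mm, 44 mm and 67 mm respectively for the same shot size – which is exactly the kind of lens-choice answer you want before a recce.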

This really is not a storyboard application – Per Holmes goes to great lengths to stress that storyboarding can push you down a prescribed route in shooting and can be cumbersome when things change, whereas the ‘block and stage’ method of using multiple takes or multiple cameras gives you far more to work with in the ‘third writing stage’ of editing. You can incorporate your storyboard frames, or any images, even ones taken on your device, and associate them with cameras. Again, that’s handy from a recce point of view right up to a reference of previous shots to match a house style, communicating the oft-tricky negative space idea, keeping continuity and so on. However, future iterations of Shot Designer are planned to include a 3D view – not in the ‘Pre-viz’ style of something like iClone or FrameForge but a clear and flexible tool for use whilst in production.

There is a free ‘single scene’ version, and a $20 license for unlimited scenes over all platforms – but check their notes due to store policy: buyers should purchase the mobile version to get a cross-over license to the desktop app, as rules say if you buy the desktop app first you’ll still be forced to buy the mobile version.

Shot Designer may appear to be for Narrative filmmaking, but the block and stage method helps set up for multicam, and a minute spent on blocking and staging any scene from wedding to corporate to indie production is time well spent. The ability to move from Mac or PC app to iPad or Android phone via Dropbox to share diagrams and add notes is a huge step forward from the paper napkin or ‘knocked up in PowerPoint’ approach. It will even be a great ‘shot notebook’ to communicate what the director wants to achieve.

Just for its sharability and speed at knocking up lighting and setup diagrams, Shot Designer is well worth a look, even at $20 for the full featured version. If you combine it with the Blocking and Staging aspect and its planning capabilities, it’s a great tool for the Director, DoP and even (especially) a Videographer on a recce.

Edit: For those of us who haven’t bought an iPad yet – this might be the ‘killer app’ for the iPad mini…

Steelies

Commercial Building Sites (and other locations) require PPE – Personal Protection Equipment. A hard hat, steel capped boots and a high visibility jacket at a minimum. It’s a code: you can tell a trade or function by the colour of helmet, you can tell if someone’s safe in an environment by the colour of their overalls. Sometimes it’s a bit more relaxed, on some sites, it’s vital to be dressed accordingly…

So, maybe about four times a year, I’ll be filming on a building site (or similar). It’s exciting work, I love it – it’s like getting an anatomy lesson in architecture, the people you meet are so NOT media but share a passion for what they do, and it’s a great antidote to Corporate Head Office Syndrome.

But today – a recce – was interesting. Half of our motley crew could not visit the site because they’d not brought their ‘PPE’. It brought back memories of school and not bringing the right PE kit. With the thankful exception that the site manager would not make us do our job in our underwear, unlike many PE teachers.

Okay, so luckily Pete the Lobster had some spares in his van and if the truth be told, Mr AirCon’s big DMs could pass as Steel Capped Boots (steelies), but a couple of chaps would have to pass on the tour.

I had to pass on a message to a fellow shooter about this, and suddenly realised – heck, who would even think about this unless they’ve been through the ritual humiliation before? Some poor chap dragged from his duties to dig up a pair of unloved and overused boots for you in a size that will hopefully avoid permanent toe damage, the location of a Hi Viz vest that’s decidedly lo-vis and almost ‘Camo’ thanks to a community of bacterial life forms based on a gene swap between Lichen and Thrush. A hard hat that conspires to provide both whiplash and a medicine ball for your head whilst transferring arcane versions of transmissible dermatitis.

Dude, you go through this once or twice and suddenly, you buy your own kit. It then sits in your car for a year, untouched.

Then you go on site visits, recces, shoots, and each time, you avoid brushing your Hi Vis jacket against tar, soil, sand, cement, glue or anything. Your boots are protected from the worst of the elements by architectural pebbles and galvanised walkways; your hard hat never contacts anything more onerous than the plastic storage bag you received it in.

A couple of years later, you’re out on a site visit and your PPE is still in showroom condition. You suddenly want a ‘distressing service’ to weather your day-glo jacket and shiny boots to avoid the glares from the engineers around you, who already resent the fact that you’re here to commit their labours to video.

For what it’s worth, it can cost you less than £50 to get your hat, gloves, boots and vest, which you can pack into a bag and leave in the car for ever and a day. For those in this community who will never need to film on a building site, no worries. But believe me, over 10 years, it’s nothing. I’m very glad to have it in the car: a job suddenly comes up, and that PPE kit will save your bacon.

Or even your life.

The Light Fantastic

Just back from a manic week, shooting in Beirut, Cairo, then Cambridge and finally Edinburgh. We were shooting documentary style: interviews, GVs (General Views, also known as B-Roll) and cutaways. The schedules were fluid, the locations unseen, and everything needed to be shot at NTSC frame rates. Immediately, my favourite camera for this sort of job (Sony’s FS100) was out. Secondly, we needed a lighting kit, and all of it had to be portable, flexible and light.

Even in these days of extremely sensitive cameras, lighting is still an essential part of video work. Even if it’s a bit of addition with a reflector or subtraction with a black drape, you’re adapting the light to reveal shape and form and directing the viewer’s eye to what’s important to your story.

Of course, we can’t all travel with a couple of 7-Tonne lighting trucks full of HMI Brutes and Generators, or even a boxful of Blondes and Redheads. I’ve had a little interview kit of Dedos, Totas and a softbox with an egg-crate, but then these create a separate box of cables, dimmers, plugs, RCDs and stands, and whilst easy to throw in the boot of the car, it’s not exactly travel friendly.

I recently invested in a couple of 1×1 style LED panels, run off V-Lock batteries. These have been a revelation – the freedom to light ‘wirelessly’, and with enough brightness to do a dual-key two-up interview with three cameras has been great. I’ve got the entire kit into a Pelicase with stands, reflector, batteries and charger – but at a gnat’s under 30 kg, it attracts ‘heavy’ surcharges when flown (and eye-rolls from check-in staff). Then add a tripod bag, then spare a thought for the sartorial and grooming needs of Yours Truly, and the prices go up, as do the chances of something going missing. Also, a stack of pelicases and flight cases lets everyone know that the Media Circus is in town. Such attention isn’t always welcome – especially from those in uniform.

So I’ve been shopping.

I’ve found some little LED lamps on eBay that clip together and run off the same batteries as my FS100. Add a couple of lightweight stands and the Safari tripod, plus a few yards of bubblewrap and a ‘Bag For Life’ full of clothing, and throw the lot into a cheap lightweight Argos suitcase. I reckon the case is probably good for three, maybe four trips when reinforced with luggage straps, but getting three bags into one, and doing so under 20 kg, is a very neat trick. No excess baggage charges, no additional overweight baggage charges, no trips to oversize baggage handling, no solo struggling with four bags…

Entire shoot kit, including tripod and three-head lighting.

The six LED lamps and three stands allowed for basic three-point lighting, and their native daylight balance meant that, for the most part, we were augmenting the available light in our locations. Even outdoors, three LED lamps bolted together, about 1.5 metres from the subject (and a foot or so above his eyeline), produced a beautiful result. Without the lamp we’d have ‘just another voxpop’, but with the lamp, and the ability to bring his face up one f-stop from the background, we had a very slick shot. And because it’s all battery driven, we could do this outdoors, we could run around to different locations, and never have to worry about bashing cables – or even finding a power point that worked.
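As a back-of-envelope aside (the one-stop figure is from the shoot; the arithmetic here is mine), the ratio behind keying a face ‘one f-stop up’ is just a power of two – each stop is a doubling of light:

```python
# Illustrative sketch only: each f-stop difference corresponds to a
# doubling (or halving) of light on the subject.
def stop_ratio(stops):
    """Light ratio for a given difference in stops."""
    return 2 ** stops

print(stop_ratio(1))   # face one stop above background: a 2:1 ratio
print(stop_ratio(2))   # two stops up would be a moodier 4:1 split
```

A 2:1 ratio is enough to lift the subject cleanly off the background without looking artificial.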

Now, there’s LED, and there’s LED. These were not Litepanels lamps, and there is a little bit of the ‘lime’ about the light – the CRI was below 90, which isn’t very good. However, this was easy to cheer up using FCP-X’s colour board, and quite frankly most humans would not see the green tinge until I carefully pointed it out and did a ‘before/after’ – and even then, my clients weren’t in the slightest bit bothered; they just thought I was being a bit of an ‘Artiste’.
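For what it’s worth, the ‘cheer up’ step is conceptually just rebalancing channel gains – pull green down, nudge red and blue up. Here’s a minimal sketch of the idea; the gain numbers are invented for illustration, not measured from these lamps, and FCP-X’s colour board does something rather more sophisticated:

```python
def apply_gains(rgb, gains):
    """Scale each 8-bit channel by a gain and clamp to the 0-255 range."""
    return tuple(min(255, max(0, round(c * g))) for c, g in zip(rgb, gains))

# A skin tone with a lime tinge (hypothetical values): reduce green slightly,
# lift red and blue a touch to compensate.
greenish = (200, 190, 150)
neutralised = apply_gains(greenish, (1.02, 0.92, 1.02))
print(neutralised)
```

The catch, as the white-balance examples earlier show, is that a global gain tweak drags every source in the frame with it – which is exactly why mixed lighting from low-CRI panels is such a headache.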

We shot on my Canon 550D using the Canon 17-55 f2.8 IS zoom and a Sigma 50mm 1.4 in some of the smaller locations (to really throw the background out of focus). For GVs and B-Roll, the Image Stabilisation was essential for getting shots where we couldn’t take a tripod, or for working so fast a tripod would have been a liability. You’ll have to imagine standing at the edge of Cairo traffic, or wandering through back street markets – or filming buildings next to razor wire blockades guarded by soldiers…

So, the camera could be thrown in a backpack with three lenses, a Zoom recorder, a couple of mics, batteries, charger, a little LitePanels Micro ‘eye-light’ and of course the Zacuto Z-Finder. Everything else, including tripod, stands, lamps and chargers, plus clothing, go in the suitcase.

I really prefer the Pelicase, I love my 1×1s, and I’m so glad to be back on the Sachtler head and using an FS100 – but I’ve now got my ‘low profile’ kit together. And with the little panels using NP-F batteries (or 5x AAs), clipping together to make a key or staying separate for background lighting, it’s a very flexible kit.

Two little quotes come to mind: at a MacVideo event a while back, Dedo Weigert (the DoP of Dedo lamp fame) asserted that lighting is not about quantity, but about quality. On a recent podcast, DoP Shane Hurlbut stated, in reaction to the idea that sensitive cameras ‘don’t need extra lighting’, that it is a DoP’s duty to control light rather than to accept what’s already there. I’ve taken both of these to heart with portable LED lamps, as there’s no longer an excuse to shoot without.

PS: I’ll be doing some further tests with the lamps, and intend to make a video from the results.

Blade and a J-cut, two bits!

Final Cut Pro X doesn’t do J-cuts. It doesn’t do them at all, and whilst I am not an aggressive or violent person, I feel the need to sit on the naughty step for thinking what I’d like to do to this bit of software if it were something tangible.

What am I talking about? Any editor will tell you that in ‘How To Edit 102’ we learn about the J-cut. Very simply, it’s when, in a simple cut between two shots, the audio of the second clip starts just a fraction before its picture. Or, to put it another way, the second shot starts with new audio over the old shot’s picture, then the video cuts to the new shot.

Let’s imagine a string of three comments by three different people.

We edit the comments so that they flow. But the magic of the J-cut is that as we look at the first person, we hear the second person starting to talk – just as we do in a discussion round a table in real life – and then (AND ONLY THEN) we look at them. There’s about half a second, maybe a bit less, between hearing them and actually looking at them. So we start the audio where it should start, and the video follows between 7 and 12 frames afterwards (that’s roughly a quarter to half a second – we’re being subtle here!).
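To make the arithmetic concrete (my own sketch – the frame rates are just the usual PAL and NTSC values), here’s that quarter-to-half-second audio lead converted into frames:

```python
def jcut_lead_frames(seconds, fps):
    """Whole frames of audio lead for a J-cut, truncating partial frames."""
    return int(seconds * fps)

for fps in (25, 29.97):   # PAL and NTSC frame rates
    print(fps, jcut_lead_frames(0.25, fps), jcut_lead_frames(0.5, fps))
```

At 25 fps the quarter-to-half-second window works out at 6 to 12 frames, which is where the 7-to-12-frame figure lives; at NTSC rates it stretches to around 7 to 14.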

When we see this in television and film, it mimics our every day experience, and it feels very natural. Comfortable.

It’s an editing ‘condiment’. Like adding a bit of salt to food, it’s not clean and pure, but it feels right.

So looking at the clips in the timeline, there’s an offset between when the audio cuts, and when the picture cuts.

It works both ways: if the sound cuts before the picture, it’s a J-cut (see how the tail of the J points to the left, indicating that the lower (audio) track starts first in our left-to-right scan). If the pictures cut first and then the audio, we get an L-cut – visually speaking.

So we cut our first take of a sequence, really trying to get what people are saying into a logical order. Let’s not worry about pictures and cutaways yet; let’s get the ‘radio programme with pictures’ version done. Sometimes it gets messy and we’re cutting little bits of words and half-words together so that a parenthetical comment can stand alone. So long as it sounds right, we’ll cover the messy pictures, with their jump cuts, with a cutaway.

The reason for my ire is that this mainstay of professional editing, this one-step operation in FCP7, this ‘thing you can sum up in a letter’, is performed thusly in FCPX:

http://help.apple.com/finalcutpro/mac/10.0/#ver1632d82c

It’s like trying to put hospital corners on a duvet: it can be done, but that’s an awful lot of effort for something that should be quick and simple.

After all, when firing off a bunch of edited interviews for a client, hands up those who, in FCP7, Avid or PPro, would perhaps slip a few edits to add a little polish? Then slip them back again to continue editing? Exactly.

Well, now and again I find a really good reason to switch from PPro or FCP7 into FCPX, but then spend an afternoon bumping my shins and grazing my scalp whilst climbing through its ‘little ways’. This time I lost my temper big-time over the whole J-cut thing and turned to good friend Rick Young for solace. He’s writing a book on FCP-X – he’d know how to do it.

And he did.

“Basically, detach all your interview clips’ audio so it’s separate from the video, and use the T tool to slip the video. Simple!”

But that’s quite an odd thing for an app that touts that you’ll never suffer bad sync – dangle your audio off your video for ever? Deal with a double track for every clip that could be in a J-cut? In a modern bit of software destined for the next 10 years of editing? That’s madness! Actually, I think I put it a little more strongly than that.

It sounded like being told the solution for getting my pet dog through his dog door was to cut him in half and re-attach him with velcro once he’s through – then live for ever more with a dog that has to be cared for in case his velcro join comes apart. Yes, I do have funny feelings about my footage, but if you were to spend this much time with it, you’d go funny too.

And here’s the conclusion: Rick’s method works – it works fine. In fact, it works great. Give it a try, and drop that beastly Apple method.

But here’s my finishing salvo: Apple’s FCPX team shouldn’t feel ‘oh, that’s all right then’ and fail to implement an offset tool. It’s so simple: add an Option-key behaviour to the T trim tool. Thanks for XML and all that, and I’m sure multicam will be great too. Just finish off your tool with a way to turn radio edits into J-cuts *just* *like* *you* *used* *to*. Put the Pro back into FCPX!

The ‘science’ of ‘awesome’?

What is it about manflu and training DVDs? Once again I am confined to the duvet, lines of Lemsip cut with vitamin C ready for snorting, and I am watching the latest instalment of Per Holmes’ magnum opus, “Hot Moves – The Science of Awesome”. And once again, it’s an amazing watch.

This 115-minute DVD/MP4 feature is an ‘addendum’ to the ‘Master Course In High-End Blocking & Staging’ – a six-DVD set of mind-bending info – but rather than cover the mechanics of telling a story, or covering a scene so it will cut well, this DVD is about getting the trailer shots: as the narrator puts it, ‘awesome for the sake of being awesome’.

In his usual style, Per and his team hose you with information. It comes thick and fast – though I detect a slight slowing of tempo in this iteration, though that could be the Lemsip. You know an iconic shot when you see it, but the team demonstrate how and why these shots work – and show variations that don’t.

Funnily enough, the audience for this production is probably a lot wider than for previous titles: not only is it great for low-budget indie movie makers, it also taps into the virtual world. This is a must-have for 3D animators and motion graphics designers looking for a movie style.

But even if you’re just going to invest in a slider or even tape a GoPro Hero to a broom stick, you’re going to get some great ideas and solid learning from the title.

It’s ‘required reading’ (watching) if you already have the ‘Visual Effects for Directors’ series, and a fun intro to the style of Per Holmes if you’re thinking about jumping in, but remember that this is the fun bit. You’ll still have to learn the footwork with Blocking & Staging.

Any peeves? The download version is a DVD image which really wants you to use Firefox extensions. I’d much prefer a smaller, straightforward MP4 – preferably HD for my AppleTV. But that’s such a minor thing, and I believe HCW may be going MP4 soon.

In conclusion, this is yet another solid training title from HCW that rewards repeated viewing and pulls no punches in delivering high quality and high quantity learning material.