MacBook Pro 15″ Retina 2014 for FCPX

MacBook Pro 15″ Retina buyer’s remorse: I paid for extra GHz; should I have invested in a bigger SSD?

I’ve finally got my MacBook Pro 15″ 2014 BTO – I went for the 2.8 GHz processor, which led to a 12 day wait as it was shipped to the UK from Shanghai. Was it worth the wait? Did I get the best bang for the buck?

It replaces my Early 2011 MacBook Pro 17″ with 8GB RAM, which has been an excellent machine (with the £800 SSD option, quite frankly the most expensive Mac I’ve purchased), but it now has a dicky GPU and needs to be ‘baked’ now and then to reset it. That’s tolerable for an ‘at home’ machine, but no good ‘on location’. Hence the new machine.

According to the 64 bit Geekbench tests, the new 15″ MBP with the 2.8 GHz processor is about 40% faster than the 2011 MBP 17″, achieving Geekbench scores (and this is not a rigorous speed test, just a ‘run it and see’) of 3895/15215 against the MBP 17″’s 2866/10655. My previous upgrades have been stellar, but this one was a bit, hmm – ‘okay’.
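Taking those score pairs as single-core/multi-core results (an assumption, though it’s how Geekbench reports them), the ‘about 40%’ works out like this:

```python
# Geekbench scores from the post: assumed (single-core, multi-core)
old = {"single": 2866, "multi": 10655}  # Early 2011 MBP 17"
new = {"single": 3895, "multi": 15215}  # 2014 MBP 15", 2.8 GHz

for core in old:
    gain = (new[core] / old[core] - 1) * 100
    print(f"{core}-core: {gain:.0f}% faster")  # ~36% and ~43%
```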

Lest we forget, my laptop is for editing first. However, it must also be my computer for everything else too.

I have been happy(ish) with the 500GB internal SSD, which was super fast. I did no actual work on it (external SSDs via Thunderbolt were the way to go), but apps did not ‘launch’ – they ‘decloaked’, just appearing, versus the wait and wait of an internal spinning hard drive. This was the big bonus – SSD for the system and apps is definitely the way to go. Do not consider anything less.

The biggest issue for us FCPX editors could be the lack of FW800 ports. I have >75 FW800 drives (mostly LaCie Quadras) and need to access their contents. So I used the Blackmagic Disk Speed Test app to measure performance ‘before and after’ – I already had a Belkin Thunderbolt dock to provide USB3 on my old MacBook, and I checked this out on the new MacBook too, as it could provide FW800.

So, the old MacBook Pro could drive three USB3 ports on the Belkin Thunderbolt Dock. It could also do FireWire 800 and Gigabit Ethernet, whilst passing through the Thunderbolt connection to, say, my Blackmagic UltraStudio Mini Monitor (HD-SDI output from Thunderbolt – yay!).

But what of the disk performance? The new MacBook Pro does USB3 natively (two ports) but can only do FW800 with a Thunderbolt adaptor, and that soaks up one valuable Thunderbolt port. No loop through. The Belkin does USB and FW800 – AND it has a Thunderbolt loop through.

Here’s my rough findings. These are not optimised results, they’re just what happens when I connect my various drives through the options I have available to me:

MBP15″ 2.8GHz (Write/Read, MB/s)

  • Direct USB3 – 161W/165R
  • Dongled FW800 – 75W/72R (counterintuitive, but hey)
  • Belkin Dock FW800 – 68W/69R
  • Belkin Dock USB3 – 94W/97R (that’s surprising)

but then two years ago I did similar tests on the OLD MBP17″ and…

  • Internal FW800 bus – 46W/44R
  • Internal USB2 bus – 32W/33R
  • Internal SSD eSATA – 88W/167R
  • CalDigit USB3 PCI ExpressCard – 96W/138R

So all my older FW800 drives happen to have USB3 interfaces, and I think I’ll be using THAT in the future. FW800 does appear to be dead in the water.

Okay, what these numbers do NOT say is the punch line. The internal SSD does the following – read and weep:

  • Internal SSD – 549W/726R

FCPX users, for the love of your favourite deity, invest in SSD, not GHz. Partition your drive into two volumes – a working volume and a boot volume. The cost I bore waiting for the extra GHz does not make a huge difference in the Geekbench scores. The difference a 500GB scratch volume with those numbers makes is an immense kick up the backside cache.

Everything about the Mac OS, everything about the future of FCPX, is all about SSD. If you’re into mobile editing, if you’re into smaller projects with sub-10 minute timelines, invest in SSD, not GHz. I wish I’d doubled my SSD rather than bought 15% more CPU performance.

Ingesting P2 for FCPX – some alternatives

I’ve had some bad luck with MXF ingest into FCP – the Canon C300 variety needed a bit of voodoo. This weekend, I’m playing with images of Panasonic’s P2 media, copied onto an NTFS-formatted USB3 drive.

FCPX couldn’t see anything. It knew there was a P2 card there, just didn’t see anything. Okay, moving on.

I’ve recently ditched Adobe Creative Cloud for being too expensive to maintain for an FCPX editor, but I still kept Adobe CS6 as there are some things (Audition, Encore, Photoshop and Illustrator) that I need – if not the latest versions thereof.

So, surprise, surprise, Adobe Premiere Pro and Adobe Prelude could both see the P2 card. I started a Transcode from the MXF files to ProRes 422.

If we skip the issues that cropped up trying to make that happen reliably, I also fired up Final Cut Pro 7 – which has a ‘Log and Transfer’ mode that also saw the P2 card images and willingly imported them whilst transcoding to ProRes.

And here’s the catch: FCP7 did 90 mins of P2 rushes in about 45 minutes. Adobe Prelude did the same in about 90 minutes.

So, we’d expect the Prelude transcodes to be better than the FCP7 transcodes – it took longer, the software is newer. Stands to reason, right?

The two versions look visually identical. Flipping between them, there’s no visible difference.

We can take one version, import the second version and overlay it on the first, then use the ‘Difference’ composite mode. This highlights the difference between the two – supposedly identical – frames. What you get is a murky-black composition which tells you nothing. What you need to do is group the two together, then boost the contrast to buggery.

One of the versions has a sort of ‘flicking’ nature – maybe for a frame, maybe for a second or so. I lined the originals up on top of each other to mark where the difference composite flicked, then examined each version with a waveform monitor. What you see is this:

Compare this frame:
unknown-2014-07-26-19-21.jpeg

With this frame:
unknown-2014-07-26-19-21.jpeg

You may have to do this side by side. It’s actually a big difference. Check out her hair.

The Adobe Media Encoder version has barely visible jumps in luminance – barely visible on a monitor, but about 1–2 IRE. The FCP7 transcode versions do not. They are ‘cleaner’.
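If you’d rather script the same difference check than build it in a timeline, here’s a minimal sketch using Python and Pillow, assuming you’ve exported a matching frame from each transcode (the filenames are hypothetical):

```python
from PIL import Image, ImageChops, ImageOps

# Matching frame exports from the two transcodes (hypothetical filenames)
a = Image.open("fcp7_frame.png").convert("RGB")
b = Image.open("prelude_frame.png").convert("RGB")

diff = ImageChops.difference(a, b)     # per-pixel |a - b|: the 'Difference' mode
boosted = ImageOps.autocontrast(diff)  # boost the contrast to buggery
boosted.save("difference_boosted.png")

# A numeric read on the worst jump, instead of eyeballing a murky-black frame
print("max difference:", diff.convert("L").getextrema()[1])
```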

Yes, I obsess (!) about this – because I’m chromakeying the results, and ‘bumps’ in luminance can upset the keying settings.

So, I’d recommend FCP7 over Adobe for ingesting P2 cards for measurable speed and quality reasons. I wish FCPX would ingest P2 direct from disk, but my installation doesn’t work (it didn’t work with C300 for a while, until I found the fix).

So there you go. I know Adobe Media Encoder gets a good write-up, but in this case I have to hand it to FCP7. I wonder if I’m missing a secret folder for P2 ingest in FCPX?

MXF to FCPX not working? A possible fix

Ingesting C300 rushes using the Canon FCPX plug-in
Canon provide a free plug-in to enable the C300’s MXF files to import directly into FCPX without the need to transcode to ProRes. Many users report that they have no problems with the installation and it ‘just works’. However, other users with similar setups report that they cannot import C300 rushes into FCPX, though it works through Log and Transfer in FCP7; additionally, Adobe Premiere successfully imports C300 MXF. Only FCPX seems affected, and only for a limited subset of FCPX users.

C300 rushes don’t work

Here’s the typical scenario: having run the xpfm211 installer, FCPX sees the folder structure, even the MXF files themselves, but does not recognise either. This is as far as some users get. The installer has apparently completed successfully, we can see files, but nothing imports. De-installing and re-installing brings the user back to the same situation. Very frustrating.

After installing
Trying to track the activity of the installer, we see two new plug-ins highlighted in the MIO/RAD/Plugins folder – CanonE1.RADPlug and CanonXF.RADPlug. The latter would appear to be the ‘magic smoke’ for the MXF format. However, this isn’t working. There’s a second, empty RADPlugins folder below – should the plugins be in there?
Moving the plugins
Whilst it may seem a bit ‘cargo cult’ to shift the contents from a RAD/Plugins hierarchy to RADPlugins, it was worth a shot. No, it didn’t work.
Comparing folders with a working configuration
Here’s where it got interesting. I was able to confer with another editor who had a system that did import MXF successfully. The key difference was that he had a CanonXF64.RADPlug folder – not XF, XF64. I could not find a similar folder, nor could I make the installer create one. In the end, he just sent me a copy of that folder, and I dragged and dropped it into the same folder I had.
C300 rushes now appear normally
And it worked! It’s pretty obvious because you can see the clips, but also note that the MXF folder hierarchy has gone, replaced simply with the usual list of clips on a card or archive.
The Secret Sauce of C300 Import
So this folder appears to be the missing link. Depending on your system, the installer either creates this folder or it doesn’t. Both of us had the XDCAMFormat.RADPlugin; removing both did not make my installer create this folder – the only way was to use somebody else’s copy. It would be useful to provide this folder as a download to those who need it, but licence agreements seem to forbid this sort of activity – probably for good reason.
It comes down to an issue with the installer, which isn’t written by Canon staff, and so it’s difficult to work out who to alert to the situation. However, as seen here, access to the CanonXF64.RADPlug folder cures the problem for now.

C100 noise – the fix

The Canon C100 is an 8 bit camera, so its images have ‘texture’ – a sort of electronic grain reminiscent of film. Most of the time this is invisible, or a pleasant part of the picture. In some situations, it can be an absolute menace. Scenes that contain large areas of gently grading tone pose a huge problem to an 8 bit system: areas of blue sky, still water, or in my case, a boring white wall of the interview room.

Setup

Whilst we set up, I shot some tests to help Alex with tuning his workflow for speed. It rapidly became obvious that we’d found the perfect shot to demonstrate the dangers of noise – and in particular, the C100’s occasional issue with a pattern of vertical stripes:

Click the images below to view the image at 1:1 – this is important – and for some browsers (like Chrome) you may need to click the image again to zoom in.

So, due to the balance of the lighting (couldn’t black the room out, couldn’t change rooms), we were working at 1250 ISO – roughly equivalent to adding 6dB of gain. So, I’m expecting a little noise, but not much.

Not that much. And remember, this is a still – in reality, it’s boiling away and drawing attention to itself.

It’s recommended to run an Auto Black Balance at the start of every shoot, or whenever the camera changes temperature (e.g. moving from indoors to outdoors). Officially, one should Auto Black Balance after every ISO change. An Auto Black Balance routine identifies the ‘static’ noise to the camera’s image processor, which can then do a better job of hiding it.

So, we black balanced the camera, and Alex took over the role of lit object.

There was some improvement, but the vertical stripes could still be seen. It’s not helped by being a predominantly blue background – we’re seeing noise mostly from the blue channel, and blue is notorious for being ‘the noisy weak one’ when it comes to video sensors. Remember that when you choose your chromakey background (see footnote).

The first thought is to use a denoiser – a plugin that analyses the noise pattern and removes it. The C100 uses some denoising in-camera for its AVCHD recordings, but in this case even the in-camera denoiser was swamped. Neat Video is a great noise reduction plug-in, available for many platforms and most editing software. I tried its quick and simple ‘Easy Setup’, which dramatically improved things.

But it’s not quite perfect – there’s still some mottling. In some respects, it’s done too good a job of removing the speckles of noise, leaving some errors in colour behind. You can fettle with the controls in advanced mode to fine-tune it, but perversely, adding a little artificial monochrome noise helped a lot:

We noticed that having a little more contrast in the tonal transition seemed to strongly alter the noise pattern – less subtlety to deal with. I hung up my jacket as a makeshift cucoloris to see how the noise was affected by sharper transitions of tone.

So, we needed more contrast in the background – which we eventually achieved by lowering the ambient light in the room (two translucent curtains didn’t help much). But in the meantime, we tried denoising this, and playing around with vignettes. That demonstrated the benefit of more contrast – although the colour balance was hideous.

However, there’s banding in this – and when encoded for web playback, those bands will be ‘enhanced’ thanks to the way lossy encoding works.

We finally got the balance right by using Magic Bullet Looks to create a vignette that raised the contrast of the background gradient, did a little colour correction to help the skin tones, and even some skin smoothing.

The Issue

We’re cleaning up a noisy camera image and generating a cleaner output. Almost all of my work goes up on the web, and as a rule, nice clean video makes for better video than drab noisy video. However, super-clean denoised video can do odd things once encoded to H.264 and uploaded to a service such as Vimeo.

Furthermore, not all encoders were created equal. I tried three different encoders: the quick and dirty Turbo264, the MainConcept H.264 encoder that works fast with OpenCL hardware, and the open source but well respected X264 encoder. The latter two were processed in Episode Pro 6.4.1. The movies follow the above story; you can ignore the audio – we were just ‘mucking around’ checking stuff.

The best results came from Episode using X264

Here’s the same master movie encoded via MainConcept – although optimised for OpenCL, it actually took 15% longer than X264 on my MacBook Pro, and to my eyes seems a little blotchier.

Then Turbo264 – a single-pass encoder aimed at speed. It’s not bad, but not very good either.

Finally, a look at YouTube:

This shows that each service tunes its encoding to its target audience. YouTube seems to cater for noisy video, but doesn’t like strong action or dramatic tonal changes – as befits its more domestic uploads. Vimeo is trying very hard to achieve a good quality balance, but can be confused by subtle gradation. Download the uploaded masters and compare if you wish.

In Conclusion:

Ideally, one would do a little noise reduction, then add a touch of film grain to ‘wake up’ the encoder and give it something to chew on – flat areas of tone seem to make the encoding ‘lazy’. I ended up using Magic Bullet Looks yet again: pepping up the skin tones with Colorista, a little Cosmo to cater for any dramatic makeup we might come across (no time to alter the lighting between interviewees), a vignette to hide the worst of the background noise, and a subtle amount of film grain. For our uses, it looked great both in the ProRes projected version and the subsequent online videos.

Here’s the MBL setup:

What’s going on?

There are, broadly speaking, three classes of camera recording: 8 bits per channel, 10 bits per channel and 12 bits per channel (yes, there are exotic 16 bit systems and beyond). There are three channels – one each for Red, Green and Blue. In each channel, the tonal range from black to white is split into steps. A 2 bit system allows 4 ‘steps’, as you can make 4 numbers from 2 ‘bits’ (00, 01, 10 and 11 in binary). So a 2 bit image would have black, dark grey, light grey and white. To make an image in colour, you’d have red, green and blue versions stacked up on top of each other.

8 bit video has, in theory, 256 steps each for red, green and blue. For various reasons, the first 16 steps are used for other things, and peak white happens at step 235, leaving the top 20 steps for engineering uses. So there are only about 220 steps between black and white. If that spans, say, 8 stops of brightness range, then a 0.5 stop difference in brightness has only 14 steps within it. That would create bands.
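That arithmetic, as a quick sketch (the 8 stop range is illustrative, as above):

```python
# How many code values does 8 bit video give us per half-stop?
legal_steps = 235 - 16          # ~219 usable code values between black and white
stops = 8                       # assumed scene brightness range
print(legal_steps / stops / 2)  # ~13.7 -> about 14 steps per half-stop

# And the level counts for the recording classes mentioned above:
for bits in (2, 8, 10, 12):
    print(f"{bits} bit: {2 ** bits} levels per channel")
```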

So, there’s a trick. Just like in printing, we can diffuse the edges of each band very carefully by ‘dithering’ the pixels like an airbrush. The Canon Cinema range perform their magic in just an 8 bit space by doing a lot of ‘diffusion dithering’ and that can look gosh-darn like film grain.

Cameras such as the F5 use 10 bits per channel – 1024 steps rather than about 220 – and therefore handle subtlety well. Alexa, BMCC and Epic operate at 12 bits per channel: 4096 steps between black and white for each channel. This provides plenty of space – ‘data wriggle room’ – to move your tonality around in post and deliver a super-clean master file.

But as we’ve seen from the uploaded video – if web is your delivery, you’re faced with 4:2:0 colour and encoders that are out of your control.

The C100, with its 8 bit AVCHD codec, does clever things including some noise reduction, and this may have skewed the results here. I will need to repeat the test with a 4:2:2 ProRes-type recorder, where no noise reduction is used – other tests I’ve done have demonstrated that Neat Video prefers noisy 10 bit ProRes over half-denoised AVCHD. But I think this will just lead to a cleaner image, and that doesn’t necessarily help.

As perverse as it may seem, my little seek-and-destroy noise hunt has led to finding the best way to ADD noise.

Footnote: Like most large sensor cameras, the Canon C100 has a Bayer pattern sensor – pixels are arranged in groups of four in a 2×2 grid. Each group contains a red pixel sensor, a blue pixel sensor and two green ones. Green has twice the effective data, making it the better choice for chromakey. But perhaps that’s a different post.

C100 Chroma Subsampling – the fix

The C100’s AVCHD is a little odd – you may see ‘ghost interlace’ around strong colours in PsF video. AVCHD is 4:2:0 – the resolution of the colour is a quarter of the resolution of the base image. Normally, our eyes aren’t so bothered about this, and most of the time nobody’s going to notice. However, stronger colours found in scenes common to event videography, and colours ‘amplified’ during grading, all draw attention to this artefact.
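A quarter, because 4:2:0 subsampling halves the chroma resolution in both axes. A quick illustration:

```python
# Chroma plane size for 1080p 4:2:0 video
w, h = 1920, 1080
luma_pixels = w * h
chroma_pixels = (w // 2) * (h // 2)  # per chroma plane (Cb or Cr)
print(chroma_pixels / luma_pixels)   # 0.25 -> a quarter of the base resolution
```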

Note that this problem is completely separate from the ‘Malign PsF’ problem discussed in another post, but as the C100 is the only camera that generates this particular problem in its internal recordings, I suspect that this is where the issue lies. I’ve never seen this in Panasonic or Sony implementations of AVCHD.

This is a 200% frame of some strongly coloured (but natural) objects, note the peculiar pattern along the diagonals – not quite stair-stepped as you might imagine.

Please click the images to view them at the correct size:

There are stripes at the edge of the red peppers, and their length denotes interframe movement. These artefacts illustrate that there’s some interlace going on even though the image is progressive.

Like ‘true’ interlacing artefacts, these stripey areas add extra ‘junk information’ which must be encoded and compressed when delivering video in web-ready formats. They waste bitrate and rob the image of crispness and detail. Reds are most affected, but these issues crop up in any area of strong chrominance, including fabrics, graphics and stage/theatrical lighting.

Some have pointed the finger of blame at edit software, specifically Final Cut Pro X. I wondered if it was the way FCPX imported the .MTS files, so I rewrapped them in ClipWrap from Divergent Media. In version 2.6.7, I’ve yet to experience the problems I experienced in earlier versions, but the actual results seem identical to FCPX:

For the sake of completeness, I took the footage through ClipWrap’s transcode process – still no change:

So the only benefit would be to older computers that don’t like handling AVCHD in its natural state.

To isolate the problem to the recording format rather than the camera, I also shot this scene on an external recorder using the Canon’s 4:2:2 HDMI output, recorded in ProRes 422 HQ. The colour information is far better, but note the extra noise in the image (the C100 applies noise reduction to its AVCHD recordings to help the efficiency of its encoding).

This is the kind of image one might expect from the Canon C300 which records 4:2:2 in-camera at 50 Mbits per second. Adding an external recorder such as the Atomos Ninja matches the C300’s quality. But let’s say you don’t have the option to use an external recorder – can the internal recordings be fixed?

RareVision make 5DtoRGB – an application that post-processes footage recorded internally in the 4:2:0 based H.264 and AVCHD codecs, and goes one further step by ‘smoothing’ (not just blurring) the chroma to soften the blockiness. In doing so, it fixes the C100’s AVCHD chroma interlace problem:

The results are a very acceptable midway point between the blocky (stripey) AVCHD and the better colour resolution of the ProRes HQ. Here are the settings I use – I’ll cover 5DtoRGB fully in a separate post.

The only key change is a switch from BT.601 to BT.709 (the former is for Standard Definition, the latter for all HD material; a newer standard, BT.2020, covers 4K).
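The two matrices weight the colour channels differently when converting between RGB and Y′CbCr, which is why decoding with the wrong one shifts the colours. A small illustration, using the published luma weights:

```python
# Luma weights for the two matrices
BT601 = (0.299, 0.587, 0.114)
BT709 = (0.2126, 0.7152, 0.0722)

def luma(rgb, weights):
    return sum(c * w for c, w in zip(rgb, weights))

pure_red = (1.0, 0.0, 0.0)
print(luma(pure_red, BT601))  # 0.299
print(luma(pure_red, BT709))  # 0.2126 -> same pixel, different Y' value
```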

So why should you NOT process all your C100 rushes through 5DtoRGB?

It takes time. Processing a 37 second clip took 159 seconds (2 mins 39 seconds) on my i7 2.3 GHz MacBook Pro. Compare that with 83 seconds for ClipWrap to transcode, and only 6 seconds to rewrap (similar to Final Cut Pro’s import).
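In multiples of real time, those figures work out as follows:

```python
clip_secs = 37  # length of the source clip
timings = {"5DtoRGB transcode": 159, "ClipWrap transcode": 83, "ClipWrap rewrap": 6}
for name, secs in timings.items():
    print(f"{name}: {secs / clip_secs:.1f}x real time")
# 5DtoRGB ~4.3x, ClipWrap transcode ~2.2x, rewrap ~0.2x
```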

You will have to judge whether the benefit of shooting internally, with the significant transcode time, outweighs the cost of an external recorder and the inconvenience of using it. You may wish to follow my pattern: for the majority of my non-chromakey, fast-turnaround work, I’ll shoot internally, and only when I encounter difficult situations will I opt to transcode those files via 5DtoRGB.

I’ve also been investigating the use of a ‘denoiser’. It’s early days in my tests, but I’ve noticed that the ‘interlaced chroma’ stripe pattern is effectively hidden:

This is not a panacea. Denoising is even more processor intensive – taking a long time to render. My early testing shows that you can under- and over-do it with unpleasant results, and that the finished result – assuming that you’re not correcting a fault, but preparing for a grade – doesn’t compress quite as well. It’s too slick, and therefore perversely needs some film grain on top. But that’s another post.

Canon C100 PsF – the fix


The Canon C100 produces a very nice, very detailed image just like its bigger brother, the C300. However, the C100 uses AVCHD as its internal codec and Canon have chosen (yet again) a slightly odd version of this standard that creates problems in Non Linear Edit software such as Premiere Pro and Final Cut Pro X (excellent article by Allan Tépper, ProVideo Coalition).

Unless you perform a couple of extra steps, you may notice that the images have aliasing artifacts – stair steps on edges and around areas of detail.

PP6 – Edges before:

Here’s an example of the problem from within Adobe Premiere Pro, set to view the C100’s AVCHD footage at 200%. Note the aliasing around the leaves in the centre of the picture (click it to see a 1:1 view). Premiere has interpreted the progressive video as interlaced, and is ‘deinterlacing’ it by removing alternate lines of pixels and then ‘papering over the cracks’. It’s not very pretty.

PP6 – Interpret footage:

To cure this, we must tell Premiere that each 25PsF clip from the C100 really is progressive scan, and that it should lay off trying to fix something that isn’t broken. Control-click your freshly imported C100 clips, select ‘Modify’ from the pop-up menu, then select ‘Interpret Footage…’

Alternatively, with your clips selected, choose ‘Interpret Footage…’ from the ‘Clip –> Modify’ menu.

Modify Clip

In the ‘Modify Clip’ dialog, the ‘Interpret Footage’ pane is automatically brought to the front. Click on the ‘Conform to:’ button and select ‘No Fields (Progressive Scan)’ from the pop-up:

PP Edges after

Now your clips will display correctly at their full resolution.

Final Cut Pro X – before:

The initial situation looks much worse in FCPX, which seems to have a bit of an issue with C100 footage, even after the recent update to version 10.1.

Select imported clips

The key to the FCPX fix is to let FCPX completely finish importing AVCHD before you try to correct the interlace problem. If you continue with these steps whilst the footage is still importing, changes will not ‘stick’ – clicking off the clips to select something else will show that nothing has really changed. Check that all background tasks have completed before progressing.

First, select all your freshly imported C100 clips. Eagle-eyed readers may wonder why the preview icon is so bright and vivid whilst the example clips are tonally calmer: the five clips use different Custom Picture profiles.

Switch to Settings in Info tab

Bring up the Inspector if hidden (Command-4), and select the Info tab. In the bottom left of the Inspector, there’s a pop-up to show different Metadata views. Select Settings.

Change Field Dominance Override to Progressive

In the Settings view of the Info pane, you’ll find the snappily titled ‘Field Dominance Override’, where you can force FCPX to interpret footage as Progressive – which is what we want. Setting it as Upper First will cater for almost all interlaced footage except DV, which is Lower First. Setting it back to ‘None’ lets FCPX decide. We want ‘Progressive’.

Final Cut Pro X – after:

Now the video displays correctly.

The before & after:


Turbo.264 HD – a quick and dirty guide for Mac based editors

Turbo.264 HD by Elgato is a Mac application sold as a consumer solution to help transform tricky formats like AVCHD into something more manageable. Rather than deal with professional formats like Apple ProRes, it uses H.264, a widely accepted format that efficiently stores high quality video in a small space. For given values of ‘quality’ and ‘small’, that is.

For the professional video editor, a common requirement is to create a version of a project to upload to the web for services like Vimeo and YouTube. Whilst this can be achieved in-app with some edit software, not all of it does so at the required quality, and it often ties up the computer until the process is complete. This can be lengthy.

So, enter Turbo.264 HD – a ‘quick and dirty’ compressor that can do batches of movies and gives you access to the important H.264 controls that are key to making Vimeo/YouTube movies that stay in sync and perform well. It’s very simple in operation. The following guide will help you make your own presets for Vimeo and YouTube.

A quick and dirty guide for editors and videographers

First steps

Two QuickTime movies have been dropped onto the panel. Both use custom presets created earlier. Click on the Format pop-up to select a preset, or add your own.

First steps

Vimeo/YouTube preset for Client Previews

Lots of presets have been built already in this copy of Turbo.264 HD – not just for the web but for iPad and iPhone use, even portrait (9:16) video. This guide will concentrate on two in particular.

Firstly, the Vimeo 720p version for client previews. This assumes that your master video is in a high-quality HD format such as 1080p ProRes, with 48 kHz audio and progressive scan.

Clicking the ‘+’ button at bottom left makes a new profile you can name. There’s a base Profile to work from, selected from the Profile pop-up at the top right. For the Vimeo preset, the ‘HD 720p’ profile is used.

Next, adjust the settings as indicated. We don’t want to use the Upload service (as privacy settings may need individual attention), and the Audio settings can stay at Automatic. The Other tab has basic switches for subtitles, chapters and Dolby sound if they are part of the movie, and can be left alone.

Vimeo/YouTube preset for Client Previews

Sending HD video via the internet

The second preset is useful when you need to send high quality material via the internet in an emergency. File formats such as ProRes are ideal for editing, but use a large amount of space. H.264 can incorporate very high quality in a much smaller file size, but the files are difficult to edit or play back in this state. However, they can be transcoded back to ProRes for editing.

Sending HD video via the internet

The benefits and drawbacks of sending H.264 over ProRes

This preset does lower the quality by an almost imperceptible amount, and the original files should be sent via hard disk if possible. However when you need a quick turnaround under challenging circumstances (for example, a wifi internet connection in a hotel or coffee shop), this preset can help.

For example, a 2 minute 42 second ProRes clip uses 2.6 GB of disk space. The original clip, shot as AVCHD at 1080p25, was 462 MB. However, using the H.264 settings below, the result was 101 MB with virtually no visible loss of quality. Over a 2 Mbit/s internet connection, the ProRes file would take nearly three hours, the AVCHD file half an hour, and the H.264 file under 7 minutes.
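The arithmetic behind those estimates (assuming a steady 2 Mbit/s link and ignoring protocol overheads):

```python
link_mbps = 2  # assumed uplink speed, as in the example above
files_mb = {"ProRes": 2600, "AVCHD original": 462, "H.264 preset": 101}
for name, megabytes in files_mb.items():
    seconds = megabytes * 8 / link_mbps  # MB -> megabits, then divide by Mbit/s
    print(f"{name}: {seconds / 60:.0f} minutes")
# ProRes ~173 min, AVCHD ~31 min, H.264 ~7 min
```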

And finally…

Hitting the start button starts the batch, and processed movies retain the original file name with a .mp4 extension. You can see that this 25fps 1080p movie is encoding at almost 28 fps, so a little faster than real time. The ‘minutes remaining’ starts a little crazily, then settles down. You can leave it running while you edit, but it will slow a little. When there’s no resizing and little compression, it can run twice as fast as real time (depending on the speed of your Mac).

And finally...
Remember, this is just a quick and dirty method of turning around client previews – I often have ‘batches’ to do: 6-12 movies of 3 mins each, or a couple of 20-30 min interview select reels with burned-in timecode. I pump them all through Turbo264 rather than Episode Pro as – due to the high bitrate – you’re not going to see much difference.
When it comes to the final encode, a professional encoding solution such as Telestream Episode, with the X264 codec as a replacement H.264 encoder, will generate the best results.

HD-SDI Embedding

On a recent job, I had a chance to work with the Atomos Samurai – a recorder that creates either ProRes or DNxHD files from HD-SDI video, rather than the more consumerist (but just as good) HDMI signals I usually deal with. I have, for the last few years, eschewed the extra expense of HD-SDI kit in favour of ‘that will do nicely’ HDMI, but I think I’ve found a good business case for re-thinking that.*

The job was to record the output from a vision mixed feed from an Outside Broadcast truck, filming an awards ceremony. We had, in fact, each of the 5 cameras recording to AJA KiPros, but there was a need for two copies of the finished programme to go to two separate editors (myself and Rick, as it happens, working on two entirely separate edits) as soon as the event finished – even the time spent copying from the KiPro drive to another disk would have taken too long. So we added Rick’s Samurai to the chain.

We learned a couple of interesting things in preparation for the job.

The first is ‘how to reliably power a Samurai’ – its neat little case doesn’t have a mains adaptor in it, although it will happily run for hours on Sony NP-F style batteries (you can A-B roll the batteries too, swapping one whilst it runs off the second). However, I didn’t want to have to think about checking batteries – I wanted to switch it to record, then switch it off at the end of the gig, as I had other things to worry about (cutting 5 cameras, after shooting ‘Run & Gun’ style all day).


The Samurai (and Ninja) can be powered off a Sony ‘dummy battery’ supplied with Sony battery chargers and some camcorders. Plug the dummy battery in, connect it to the charger, switch the charger to ‘Camera’ mode and behold – one mains-powered Samurai.

The second point is thanks to Thomas Stäubli (OB truck owner) and Production Manager Arndt Hunschok who set up the audio in a very clever way which gave me a unique opportunity to fix the edit’s music tracks.

Unlike HDMI, HD-SDI has 8 audio tracks embedded in the signal. The sound engineer kindly split his mix into 4 stereo groups: a mixed feed, audio from the presenter microphones, audio from directional microphones pointing at the audience (but away from the PA speakers), and a clean music feed.
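Laid out as a channel map, the embedder setup looked something like this – the pair numbering is my assumption; the post only specifies the four stereo groups:

```python
# HD-SDI embeds 8 audio channels; on this job they arrived as 4 stereo pairs
embedded_audio = {
    (1, 2): "full mixed feed",
    (3, 4): "presenter microphones",
    (5, 6): "audience directional mics (pointed away from the PA)",
    (7, 8): "clean music feed",
}
```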

The practical upshot was that I was able to edit several versions of the 90 minute awards ceremony (30, 8 and 3 minute versions) without the music, then re-lay the music stings (from the clean feed, or replaced with licensed alternatives for the DVD version) where appropriate, producing a very slick result and saving a lot of time and hair-pulling (or sad compromises) in the edit suite.

Technically, the Samurai footage came straight in and ready to edit with its 8 audio tracks in frame accurate sync (of course). I was able to slice it up and do a pre-mix of the required tracks.

In the past, this has been a bit of a nightmare. This time, it was easy to take audio from the stage and play with the timings for music cues.

A short technical note: be it HDMI or HD-SDI, your picture is made up of 1s and 0s, so there’s no technical difference in quality if fed from the same source**. The audio, however, is interesting. Most of the time, shooting indie films or simple corporates, you’re not going to need lots of separated tracks. When it comes to live performances or panel debates, however, the 8 tracks of HD-SDI can significantly offset the extra cost of the technology by saving time in the edit suite. It’s well worth a conversation with your Technical Director or supplier to sort out the ‘sub-mixes’ (separating your audio feeds into channels) and ‘embedding’ (entwining the audio channels into the HD-SDI feed).

It’s odd that this hasn’t occurred to me before – the facility has been there, but perhaps it’s that last bit of kit – the ‘HD-SDI Audio Embedder’ available from suppliers like Blackmagic Design and AJA – that’s been hiding its light under a bushel. As such, it is probably the least sexy item on one’s shopping list. Not the sort of thing that crops up for the journeyman videographer, but just the sort of thing to specify on larger jobs with rental kit.

So, note to self: when dealing with complex audio, remember HD-SDI Audio Embedders and HD-SDI recorders.

And again, my thanks to Thomas Stäubli and Arndt Hunschok for their assistance and patience.


* One of the main business cases for HD-SDI (and good old SDI before that) was that it uses the standard BNC connector that has been the main ‘video’ connector in the broadcast industry. The BNC connector has a rotating cuff around the plug that locks it into the socket so it doesn’t accidentally get pulled out (like XLRs). HDMI – and its horrible mutated midget bastard offspring ‘Mini-HDMI’ – can work its way loose and pop out of a socket with sickening ease, thus any critical HDMI-connected kit usually has a heavily guarded ‘exclusion zone’ round it where no mortals are allowed to tread, and sometimes bits of gaffer tape just to make sure – in fact there is a portion of the ‘aftermarket video extras’ industry that makes brackets designed to hold such cables into cameras and recorders. And, at risk of turning a footnote into an article, SDI/HD-SDI travels over ordinary 75 Ohm coax over long distances, unlike the multicore short lengths of overpriced HDMI cables. So, yes, HD-SDI makes sense purely from a connector point of view.

** Notwithstanding the 4:4:4:4 recorders from Convergent Design and now Sound Devices. Basically, a 1.5G HD-SDI signal carrying a 10 bit 4:2:2 output will be indistinguishable from an HDMI signal carrying a 10 bit 4:2:2 signal, and many cameras with both HDMI and HD-SDI output 4:2:2 8 bit video signals anyway. But HDMI only does 2 channel audio whereas HD-SDI does 8. Back to the story…

Dealing with 109% whites – the footage that goes to 11

Superwhites are a quick way of getting extra latitude and preventing the video tell-tale of burned-out highlights, by allowing brighter shades to be recorded above the ‘legal’ 100% of traditional video. However, it’s come to my attention that some folk may not be reaping the advantages of superwhites – or are even finding footage ‘blown out’ in the highlights where the cameraman is adamant that the zebras said it was fine.

So, we have a scale where 0% is black and 100% is white. 8 bit video assigns numbers to brightness levels, but contains wriggle room: given the Magic Numbers of Computing, 0-255, you’d assume black starts at 0 and white ends at 255. Alas not. Black starts at 16, and white ends at 235. Superwhites use the extra room from 235 to 255 to squeeze in a little more latitude, which is great.
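In code, the mapping looks like this – and it’s where the 109% figure comes from:

```python
def percent(code, black=16, white=235):
    """Map an 8 bit code value to the waveform monitor's 0-100% scale."""
    return (code - black) / (white - black) * 100

print(percent(16))   # 0.0   -> black
print(percent(235))  # 100.0 -> legal white
print(percent(255))  # ~109  -> superwhite headroom
```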

But that’s on the camera media. Once you get into the edit software, you need to make sure you obey the 100% white law. And that’s where things go a bit pear shaped.

If you can excuse my laundry, here’s a shot with 109% whites – note them peeping up above the 100% line in the waveform monitor:

(Click the images below to get a full view)

01-fs100_into_fcpx-2012-07-12-13-55.jpg

Note also that the fluffy white clouds are blown – there’s ugly detail-snapping from pale blue to white around them. Although I shot this so it just reached 109%, the monitor shows us the 100% view, so it’s overexposed as far as the editor’s concerned.

So in my NLE – in this case, Final Cut Pro X – I drop the exposure down, and everything sits nicely on the chart. I could pull up the blacks if necessary…

02-fcpx_drops_luma-2012-07-12-13-55.jpg

But I’ve been told about an app called 5DtoRGB, which pre-processes your 109% superwhite footage to 100% as it converts to ProRes:

03-5dtorgb_into_fcpx-2012-07-12-13-55.jpg

Note that whilst the whites are indeed brought down to under 100%, the blacks are still quite high and will, in my opinion, require pulling down. 5DtoRGB takes a lot longer to process its ProRes files – I’ve had reports of 10x longer than FCP7 Log & Transfer, but I’ve not tested this myself.

I did some tests in Adobe Premiere CS6, which demonstrates the problem. We start with our NATIVE AVCHD clip, with whites happily brushing the 109% limit as before. These are just 1s and 0s, folks – it should look identical, and it does. Info in the Waveform Monitor, blown whites in the viewer.

Another technical note: the FCPX Waveform Monitor talks about 100% and 0%, but Adobe’s WFM uses the ‘voltage’ metaphor – analogue video signals were ‘one volt high’, but 0.3 volts were given over to timing signals, so 0.7 volts were used to go from black (0.3 volts) to white (1 volt). So 0.3 = black in Adobe’s WFM. And another thing – I’m from a PAL country, and never really got used to NTSC in analogue form. If I remember correctly, blacks weren’t exactly at 0.3 volts (also known as IRE=0) – they were raised for some reason to IRE=7.5, thus proving that NTSC, with its drop frames, 29.97 fps, error-prone colour phase and the rest, should be buried in soft peat and recycled as firelighters.
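If the metaphor helps, here’s the conversion as a sketch (PAL-style levels, ignoring NTSC’s 7.5 IRE setup):

```python
def wfm_volts(percent):
    # 0% (black) sits at 0.3 V; 100% (white) at 1.0 V
    return 0.3 + 0.7 * percent / 100

print(wfm_volts(0))    # 0.3  -> black
print(wfm_volts(100))  # 1.0  -> white
print(wfm_volts(109))  # ~1.06 -> superwhites poke above 1 V
```

But I digress. Premiere: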

06-premier_start-2012-07-12-13-55.jpg

Let’s get our Brightness and Contrast control out to bring the 109s down to 100:

08-premier_bright-2012-07-12-13-55.jpg

Hold on a tick, we haven’t adjusted ANYTHING, and Premiere has run a chainsaw along the 100% line. That white detail has been removed until you remove the filter – you can’t get it back whilst the Brightness & Contrast filter is there. Maybe this isn’t the right tool to use, but you’d think it would do something? Not just clip straight away?

I tried Curves:

09-premier-curve-2012-07-12-13-55.jpg

It’s tricky, but you can pull down the whites – it’s not pretty. Look how the WFM has a pattern of horizontal lines – that’s nastiness being added to your image. The highlights are being squashed, but you can’t bring your blacks down.

So finally, I found ‘ProcAmp’ (an old fashioned term for a Processing Amplifier – we had these in analogue video days). This simply shifts everything down to the correct position without trying to be clever:

10-premier-procamp-2012-07-12-13-55.jpg

At last. We have our full tonality back, and under our control.

With all these issues, and probably some misunderstanding about 109%, I can see the desire for something safe and quick using the new FS700 cinegammas in the form of CineGamma 2, which only allows 100% whites – ditto Rec709 in the FS100. But forewarned is forearmed.

I donate the last 9% of my brightness range to specular highlights and the last shreds of detail in the sky, so I can have that ‘expensive film look’ of rolled-off highlights. But if I didn’t haul them back into the usable range of video, all that stuff would appear as burned-out blobs of white – ugly. However, I also spent a bit of time testing this out when I switched from FCP7 to FCPX, as the former took liberties with gamma, so you could get away with things. The bugs in FCPX and Magic Bullet made me check and check again.

It’s been worth it.

FCPX – partying with your Flaky Friend


UPDATE: Compound Clips – specifically splitting Compound Clips, and worst of all, splitting a compounded clip that’s been compounded – increase project complexity exponentially. Thus, your FCPX project quickly becomes a nasty, sticky, crumbly mess.

Which is a shame, because Compound Clips are the way we glue audio and video together, how we manage complexity with a magnetic timeline, and butt disparate sections together to use transitions. Kind of vital, really.

Watch these excellent demonstration videos from T. Payton who hangs out at fcp.co:

These refer to version 10.0.1; at the time of writing we’re at 10.0.3, but I can assure you that we STILL have this problem (I don’t think it’s a bug, I think it’s the way FCPX does Compound Clips). We return you to your original programming…

Okay, report from the trenches: Final Cut Pro 10? Love it – with a long rider in the contract.

I’m a short-form editor – most of my gigs are 90 seconds to 10 minutes (my record is 10 seconds, and I’m proud of it). Turn up ‘Somewhere in Europe’, shoot interviews, General Views, B-Roll, get something good together either that night or very soon afterwards, publish to the web, or to the big screen, or push out to mobiles and iPads…

This is where FCPX excels. As an editorial ‘current affairs’ segment editor, it’s truly a delight. I bet you slightly overshot? Got a 45 minute take on an interview that needs to be 45 seconds? Range-based favourites are awesome, and skimming lets you find needles in a haystack. Need to edit with the content specialist at your side? The magnetic timeline is an absolute joy, and don’t get me started about auditioning.

It’s true: in cutting down interviews, in throwing together segments, and especially when arguing the toss over telling a given story, I’m at least twice as fast and so much more comfortable throwing ideas around inside FCPX.

But my new Editing Friend is a ‘Flaky Friend’.

She really should be the life and soul of the party, but somehow there’s a passive aggressive diva streak in her.

There are three things she doesn’t do, and it’s infuriating:

  • She doesn’t recognise through-edits – they can’t be removed; they are, to her, like caesarean scars, tribal tattoos (or so she claims), cuts of honour. We tell her we’re cutting soup at this stage, but no. ‘Cuts are forever,’ she says, like the perfect NLE she thinks she is.
  • She doesn’t paste attributes selectively – it’s all or nothing. ‘We must be egalitarian,’ she croons. What is good for one is good for all, apparently. You can’t copy a perfect clip and apply only its colour correction to the pasted clip – you must paste EVERYTHING, destroying your sound mix, requiring extensive rework of your audio, and heaven help you if you change your mind.
  • She flatly refuses to accept that there is already a way we all do common things, and wants to do it her own kooky way. Making J and L cuts into a Tea Ceremony, blindly assuming that a visual transition needs an audio transition even if we’ve already done the groundwork on the audio… girl, the people who think you’re being cute by insisting on this are rapidly diminishing to the point where you can count them on your thumbs, and we do include you in that list.

So okay, she’s a good gal at heart. Meaning the best for you. But she needs to bail out and quit every so often, especially if you’re used to tabbing between email, browser, Photoshop, Motion et al. She’ll get all claustrophobic, and you’ll be waiting 20-40 seconds with the spinning beachball of death between application switches. It’s all a bit too much like hard work. ‘I can’t cope’, she sighs – and spins a beachball like she smokes a cigarette. We stand around, shuffling our feet as she determinedly smokes her tab down to the butt. ‘Right!’ she shouts at last. ‘Let’s get going!’

And yes, it’s great when things are going right.

But put her under pressure, with a couple of dozen projects at hand and some background rendering to do, and it all gets very ‘I’m going to bed with a bottle of Bolly’. I’m getting this an awful lot now. I really resent being kept hanging around whilst she changes a 5 word caption in a compound clip – a change that takes 5 FRICKIN’ MINUTES – and I resent every minute of waiting for projects to open and close. Whilst it’s lovely to see her skip daintily through all that fun new footage, when it comes down to the hard work, she’s so not up to it…

I am twice as fast at editing in FCPX, but a quarter of the speed when doing the ‘maid of all work’ cleaning up and changes. It means that, actually, I am working twice as hard in X as I was in 7, just mopping up after this flaky friend who has a habit of throwing up in your bathtub and doing that shit-eating grin as she raids your fridge of RAM and CPU cycles.

Well, FCPX dear, my flaky friend, you’re… FIRED.