Thursday, January 14, 2016

Editing Eight-Millimeter Film Versus Non-Linear Editing

My ol' Dad got me started thinking about making films because he found enormous entertainment in making eight-millimeter movies.  Making such movies was relatively simple if your only purpose was to shoot the film and then show it later.  It gets demented when you want to edit multiple bits of film, splice them together, put a sound stripe on them, etc.

Splicing eight-millimeter film isn't something I will forget because you take two pieces of film and mount them on a device which holds them in position for splicing.  Then you apply a bit of glue with a tiny brush and put a transparent patch over the joint to hold them together.  That, fifty years ago, was how one did the things I now do with non-linear editing in Final Cut Pro.

The sound stripe for eight-millimeter film was a custom thing because standard eight-millimeter cameras and projectors couldn't handle sound.  Normally, eight-millimeter film has the frames and, to one side, the sprocket holes so the camera and projector mechanisms can pull the film through the mechanical transport and display it.  The sound stripe is an addition to that: a strip of magnetic material applied along the edge of the film which can carry the audio.  Good luck getting a good sync between the audio and the film; because this method doesn't let you see the waveforms, it's more dumb luck than anything else.


So that was fifty years ago and Final Cut Pro is now.  Non-linear editing is the approach to video in which you can edit the front, the back, anywhere you like in your film project, and the software will ensure all the other clips hold their positions relative to whichever one you are working on.  That doesn't exactly make editing easier, but it's a far cry from what it was like with eight millimeter.

It's the same with audio: the sync was almost hopeless back then, but now I can see the waveforms for each of the audio tracks.  This may sound like incredibly technical knowledge, but I'm not a sound engineer.  Once you've seen a waveform, it's much easier to appreciate how distinctive each one is and how readily the patterns in them can be picked out.

For example, when recording I will usually do something simple right up front, such as saying 'video sync ... one ... two ... three', and that shows up in the waveform as a distinct peak for each word.  Once those peaks are aligned with one another, I know the video frames are aligned as well, and thank you, non-linear editor.

Note:  I zoom to the maximum resolution of the waveform to get the sync.  It's blindingly minute work, but the result is high precision.
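
That peak-matching can even be sketched computationally.  Here's a minimal Python sketch, purely an illustration and not anything Final Cut Pro exposes: it estimates the offset between two recordings of the same sync words by cross-correlating their waveforms.  The file names and the assumption of a shared sample rate are mine.

# Minimal sketch: estimate the offset between two recordings of the same
# sync words by cross-correlating their waveforms.  File names are made up.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def load_mono(path):
    """Read a WAV file, fold stereo to mono, return (rate, float samples)."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    return rate, data.astype(np.float64)

rate_a, ref = load_mono("reference_camera.wav")    # hypothetical file names
rate_b, other = load_mono("second_camera.wav")
assert rate_a == rate_b, "this sketch assumes both tracks share one sample rate"

corr = correlate(ref, other, mode="full")
lag = int(np.argmax(corr)) - (len(other) - 1)      # offset in samples

# lag > 0 means the sync words appear later in the reference, so the other
# clip started recording later and belongs that far to the right on the timeline.
print(f"offset: {lag} samples = {lag / rate_a:.3f} s")

The lag with the strongest correlation is exactly the shift you'd find by lining up the 'one ... two ... three' peaks by eye.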


For multiple video tracks, the best camera becomes the reference track for audio and video.  There is also a higher-quality audio track recorded separately, but it has no video.  The first task, therefore, is to sync that high-quality audio to the reference track; once that's done I can throw away the reference camera's audio and use only the high-quality track.  From there, the other video tracks, with their own audio, are synced back to the high-quality audio, and I have full confidence the video will be in sync as well.  Then I can throw out all those other audio tracks and keep four or five video tracks in near-perfect sync with each other and the high-quality audio.
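
As a rough illustration of that order of operations, here is a hedged Python sketch, with made-up file names and an assumed 30 fps frame rate, and not how Final Cut Pro works internally: each camera's scratch audio is matched against the one high-quality recording, and the resulting offset is all you need to slide its video into place before the scratch audio is discarded.

# Hedged sketch of the multicamera sync pass described above: every camera's
# scratch audio is matched against the one high-quality recording, and the
# offsets are what keep the video tracks lined up afterwards.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

FPS = 30.0                       # assumed project frame rate

def load_mono(path):
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    return rate, data.astype(np.float64)

rate, hi_q = load_mono("high_quality_audio.wav")     # the keeper track

for cam in ["camera_1.wav", "camera_2.wav", "camera_3.wav", "camera_4.wav"]:
    cam_rate, scratch = load_mono(cam)               # throwaway camera audio
    assert cam_rate == rate, "sketch assumes one sample rate throughout"

    corr = correlate(hi_q, scratch, mode="full")
    lag = int(np.argmax(corr)) - (len(scratch) - 1)  # offset in samples
    seconds = lag / rate

    # Slide this camera's video by the offset; its scratch audio can then go.
    print(f"{cam}: shift by {seconds:+.3f} s ({seconds * FPS:+.1f} frames)")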

And that's when the real editing starts.  Five video tracks need to merge down to one, so the question becomes which viewpoint to use at any given moment.  This can take considerable time, not so much from difficulty but because this part is almost painting ... and it would have been a nightmare with eight-millimeter film.


Sometimes there's a lament that we didn't get the future we were promised in The Jetsons, but for editing film we definitely did.  If my ol' Dad had ever seen Final Cut, he would have been hooked for life.
