One of the more esoteric bits of our pipeline has always been the interface between our render pipeline and our editing systems. Back in the dark ages, the only efficient way to get material into an editing system was to capture it off of a video tape. Everyone did it. Even feature films. They would use a process called telecine, which would basically use a video camera to take a picture of each frame of film and record it onto one frame of video. The fun part comes when you bring in the nature of video. It's not like film, where each frame is essentially a single photograph. Because of the technical limitations our forebears had to ingeniously engineer their way around to make television transmission work at all, they couldn't just send a single full-frame picture for each frame of video. They had to break each frame into subframes, which they ended up calling fields. To get some semblance of a picture transmitted and displayed as fast as was possible back then, instead of breaking the frame into the top half and then the bottom half, or the left then the right, they broke the frame into lines and then made fields out of every other line. So the first field would have the first line, the third, the fifth, and so on, and the second field would have the second line, the fourth, the sixth, etc.
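If it helps to picture it, splitting a frame into fields is just slicing its scan lines by parity. Here's a minimal Python sketch; the line data and the function name are made up for illustration, not anything from our actual pipeline:

```python
def split_into_fields(frame_lines):
    """Split a frame's scan lines into odd and even fields.

    Lines are numbered from 1, so lines 1, 3, 5, ... make up the
    odd field and lines 2, 4, 6, ... make up the even field.
    """
    odd_field = frame_lines[0::2]   # 1st, 3rd, 5th line, ...
    even_field = frame_lines[1::2]  # 2nd, 4th, 6th line, ...
    return odd_field, even_field

frame = ["line1", "line2", "line3", "line4", "line5", "line6"]
odd, even = split_into_fields(frame)
print(odd)   # ['line1', 'line3', 'line5']
print(even)  # ['line2', 'line4', 'line6']
```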
For obvious reasons, these fields were called Even and Odd. And depending on who made the camera being used, the VCR, the TV, etc., which field came first changed. So some equipment expected to receive the even field first, and other equipment expected the odd field first. As one would guess, this kept things very easy and simple to understand. Fast forward to the digital age, the non-linear editing system, and the telecine process. To understand the true joy of telecine and video fields, one needs to understand that film is usually shot at a frame rate of 24 fps, meaning the camera takes 24 pictures every second. Video instead runs at approximately 30 frames per second, or really 60 fields per second. Why 60? Because that was the frequency at which electricity flows through the country's power grid, which made it an easy way to synchronize everyone's TVs to the same approximate timing. This also means that when converting a film sequence to video, you not only had to convert the 24 frames to video fields, you also had to change the number of frames per second. This is done with a process called a 3/2 pulldown. Now comes Math. I apologize in advance.
So, if film is at 24 frames per second, that would be the same as 48 fields per second. Since video is 60 fields per second, we need 12 more fields. The easy way to do that is to just copy a field every once in a while. Say, every fourth field for example. For a better explanation, check out:
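The math above can be sketched in a few lines of Python. This assumes the common 3/2 pattern where film frames alternately contribute three fields and two fields; the function name is my own invention:

```python
def three_two_pulldown(film_frames):
    """Expand film frames into video fields using a 3/2 cadence.

    Film frames alternately contribute 3 fields and 2 fields, so
    every 4 film frames become 10 video fields -- which is exactly
    how 24 frames per second turns into 60 fields per second.
    """
    fields = []
    for i, frame in enumerate(film_frames):
        copies = 3 if i % 2 == 0 else 2
        fields.extend([frame] * copies)
    return fields

# 24 film frames (one second of film) -> 60 video fields (one second of video).
one_second = list(range(24))
print(len(three_two_pulldown(one_second)))  # 60
```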
Which fields you decide to copy is called the cadence. And, it turns out, what cadence you use can have a very big impact on how your material looks after it's been converted. Do it wrong and things look weird and jaggy and stuttery.
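One crude way to see (or recover) the cadence of already-converted material is to look for where fields repeat, since the duplicated fields land in a pattern that repeats every 10 fields, and the phase of that pattern is the cadence. A toy sketch, emphatically not our actual tooling:

```python
def find_duplicate_fields(fields):
    """Return the indices where a field is a repeat of the previous one.

    For 3/2 pulldown material, these indices fall into a pattern that
    repeats every 10 fields; where the pattern starts tells you the cadence.
    """
    return [i for i in range(1, len(fields)) if fields[i] == fields[i - 1]]

# Fields produced by a 3/2 pulldown of film frames 0..3:
fields = [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
print(find_duplicate_fields(fields))  # [1, 2, 4, 6, 7, 9]
```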
So, why am I telling you any of this? Because cadence has been an issue for us almost from the very beginning. As I mentioned WAY back at the beginning, since the fastest way to get video onto an editing system back when we started was to capture it from a video tape, we had to find a way to get our CG-generated frames onto video tape. Luckily, we were not the only people trying to do this kind of thing. The digital effects industry was just starting to boom, so there was a lot of demand for something that could take CG frames and display them in realtime on a video system. We used something called a DDR, or Digital Disk Recorder. This allowed us to render our animation at film frame rate (which is what all our animators were used to animating to) and just copy those frame files to the DDR hard drive, and it would play back those frames, automatically doing the 3/2 pulldown conversion from film to video. It could also pretend to be a video tape deck, which our editing systems could capture from. Thus we were able to get our animation onto an editing system as quickly as possible.
Alas, now that technology has reached the point that video fields are considered on the way out if not obsolete, we still have to deal with them as we re-produce our older episodes in HD. That's right, re-produce. A lot of shows that were airing during the HD conversion process just took their video signal and scaled it up. Since HD is a different aspect ratio from regular video, this results in black bars to either side of the image. That sucks. Since we still have all the source files from when we made an episode the first time, we are able to reload them into our animation software and recreate the image in proper HD format. This makes us rather unique in the animation industry. It also means we've had to dedicate a small team of people to this process, because the change in aspect ratio creates a ton of issues. Our animators would use the framing of the shot to tell when they could stop animating a character, for example. Since an HD frame is wider than a regular video frame, this results in a lot of shots where a character would seem to just stop moving instead of walking off the screen.
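For the curious, the size of those black bars is easy to compute. This assumes a standard 1920x1080 HD frame and a 4:3 standard-definition source; those are my numbers for illustration, not necessarily what any particular broadcaster used:

```python
def pillarbox_bar_width(hd_width, hd_height, src_aspect_w=4, src_aspect_h=3):
    """Width of the black bar on each side when a 4:3 picture is
    scaled to fill the height of a widescreen HD frame."""
    scaled_width = hd_height * src_aspect_w // src_aspect_h
    return (hd_width - scaled_width) // 2

# A 4:3 image scaled to 1080 high is 1440 wide, leaving 240 black
# pixels on each side of a 1920-wide HD frame.
print(pillarbox_bar_width(1920, 1080))  # 240
```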
Another of the issues we have to deal with comes from the cheat we did back then using that wonderful little DDR. Since we were just dumping frames onto it and letting it handle the conversion to video, things like cadence changed on a shot-by-shot basis. This is very, very bad if for some reason you have to re-edit that material years later, maybe when changing it to use HD shots instead of regular video shots, since there's no way to match the cadence originally used unless you do it by eye. And that is why our HD editor can be a very cranky man. And why we've spent a long time tweaking and tuning our render queue to output files in different cadences for different episodes.
Our HD team just started working on episode 6 of season 1. And yet again we've found the cadence has changed. And we also discovered that the new version of editing software they are now using is even more fussy about field cadence and frame rate than the previous version. So we've spent the last few weeks going back and forth with them trying to dial in our render queue output again. And we finally got it sorted. So, I am recording what we are using now here for posterity, since I am sure I will forget as soon as I move on to the next problem. And I REALLY don't want to go through this process again. So, for the record:
Field dominance = ODD
Field cadence = BC
In Shake (which we STILL use for frame to quicktime file conversion), that would look like:
FIPullUpDown(MainFramesIn, "Pulldown", 1, 0);
And, just for completeness' sake, here are the QT hashes Shake needs to output the right format QuickTime file:
Pre season 12.5 (29.97 fps episodes):
Post season 12.5 (23.976 fps episodes):
No, this information has no use to anyone other than me. What's your point?