I’ve come up with a very affordable and efficient workflow for my cartoon, Man Comics, and I’d like to show it off a little bit here. My entire cartoon is being shot on an iPhone and a Kinect. That’s it. Trust me, I would love to own a Canon for HD filming, but I don’t have one yet. Since I’m just going to convert the background footage into a vector image, my 720p iPhone with a few attachable lenses and a Kinect have proven to be just enough for my purposes with this project. So let’s get into it.
I am currently practicing a seven-step process.
Step 1: Brainstorming and writing.
Step 2: Film background plates.
Step 3: Record actors’ dialog and MOCAP performances.
Step 4: Editing and storyboarding of background footage with dialog audio.
Step 5: Camera tracking and vectorization of background footage.
Step 6: Character orientation, MOCAP skeleton attachment, mouth and eye animation, and additional character fixes.
Step 7: Rendering, compositing, final editing, and audio and image adjustments.
Here’s the breakdown…
I think Step 1 is pretty self-explanatory. It’s brainstorming and writing. You’ll just have to wait for the cartoon on this one.
Since I usually carry my phone on me, I can choose to capture and use any location I visit, preferably a location that I have easy access to in case I need to reshoot, and hopefully without too many people around. Let’s say my script or scene has two characters with simple blocking. This is where I have to consider all of the angles and shots I will need for the edit. I learned most of this process while working on films in L.A. I usually try to capture my basic editing angles such as the wide shot, medium wides, mediums, close-ups, overs, and any insert shots I think I might need with any props or scenery. It makes it a little difficult when I don’t have actual actors yet: I have to imagine where they will be standing, how tall they might be, and what type of props they will be interacting with. I try to shoot more footage and angles than I’ll end up using, but it’s better to have too much than not enough. I also learned that it’s better to record at least a solid minute on each angle so I don’t have to reuse footage if I use that angle often.
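To make the coverage idea concrete, here’s a minimal sketch of that checklist as a bit of Python. Everything in it is hypothetical (the angle names and durations are made up for illustration), but it captures the two rules above: shoot every basic editing angle, and record at least a solid minute on each.

```python
# Hypothetical shot-list helper -- just a sketch of the coverage
# checklist described above, not part of any real production tool.

MIN_SECONDS = 60  # record at least a solid minute per angle

# Basic editing angles for a two-character scene (all values illustrative)
coverage = {
    "wide": 75,
    "medium_wide": 68,
    "medium_A": 90,
    "medium_B": 30,      # too short -- would need a reshoot
    "close_up_A": 61,
    "close_up_B": 62,
    "over_A": 70,
    "over_B": 66,
    "insert_prop": 45,   # also short
}

def needs_reshoot(shots, minimum=MIN_SECONDS):
    """Return the angles whose recorded duration falls under the minimum."""
    return [name for name, seconds in shots.items() if seconds < minimum]

print(needs_reshoot(coverage))  # -> ['medium_B', 'insert_prop']
```

Since reshoots mean another trip to the location, flagging the short angles before leaving is the whole point of the check.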
After I get all my shots, it’s time to record the acting and dialog. This is where my Kinect comes in. I mic my actors or myself with a Lectrosonics lavalier for the vocals, and the Kinect is responsible for recording MOCAP (motion capture) data into Blender. If you’ve ever seen behind-the-scenes footage of a special effects shot where the actors are wearing ping pong balls all over their bodysuits, it’s the same idea minus the funny-looking suit. Gollum from The Lord of the Rings is a good example: even though the character is completely computer generated, the voice and body movement were recorded from an actual acting session. For $70 on eBay, the Kinect has proven to be a very powerful and affordable asset to my projects.
Now that I have the skit recorded in both audio and MOCAP, I bring the audio into Final Cut and edit my iPhone footage around the skit. This editing stage has been very helpful for me. I learn a lot about which camera angles and edits I need to tell my story, and any footage that doesn’t work or isn’t there gets found out at this stage of production. Editing my footage to the dialog also serves as a fairly accurate and complete storyboard and shot list. This is extremely helpful in letting me know what camera angle I’m on or which character we are seeing during a certain line of dialog or action. Then I can concentrate on rendering only the angles and frames I need instead of wasting lots and lots of time rendering and perfecting unused footage. I can’t even begin to explain how important this step of my workflow is. It’s where I’m learning the most about my strengths and weaknesses as a filmmaker, and it has taught me the most about directing and editing. I cherish the lessons learned, and it has been great practice for me. Luckily, since I don’t have any characters in my frame yet, there is a lot I can fix without any major setbacks or hurdles. In fact, Step 4 can jump back to Steps 2 and 3 intermittently. A lot of the creative power takes form during these collaborative steps.
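The render-only-what-you-need idea can be sketched in a few lines of Python. This is purely illustrative (the angle names and frame numbers are invented, and a real edit list would come out of Final Cut), but it shows the logic: collapse the clips in the edit into per-angle frame ranges, merging overlaps so no frame renders twice and unused frames never render at all.

```python
# Illustrative only: collapse an edit (clips with in/out frames per angle)
# into the frame ranges that actually need rendering. All names and
# numbers below are made up for the example.

edit = [
    ("wide",     1, 48),
    ("close_up", 10, 34),
    ("wide",     40, 72),   # overlaps the first wide clip
    ("over",     100, 148),
]

def frames_to_render(edit_list):
    """Merge per-angle clip ranges so overlapping frames render only once."""
    by_angle = {}
    for angle, start, end in edit_list:
        by_angle.setdefault(angle, []).append((start, end))
    merged = {}
    for angle, ranges in by_angle.items():
        ranges.sort()
        out = [list(ranges[0])]
        for start, end in ranges[1:]:
            if start <= out[-1][1] + 1:          # touching or overlapping
                out[-1][1] = max(out[-1][1], end)
            else:
                out.append([start, end])
        merged[angle] = [tuple(r) for r in out]
    return merged

# The two wide clips (1-48 and 40-72) merge into one 1-72 range.
print(frames_to_render(edit))
```

Even done by hand rather than in code, this is the bookkeeping that makes the storyboard edit pay off at render time.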
After my scene is edited and storyboarded, I bring the background footage from my iPhone into a program called PFTrack, where I can do two things. First, I can straighten out my lens distortion. I do this for still shots and motion shots, and it is especially needed when I use my wide-angle lens. Then, if it’s a motion shot, I track the perspective and camera movement so the camera in my 3D program, Blender, has the exact same perspective and movement in my 3D scene that my iPhone captured.
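For the curious, here’s the rough idea behind lens-distortion correction. This is not PFTrack’s actual math, just a sketch of the common Brown radial model with made-up coefficients: each point is pushed toward or away from the image center by a polynomial in its distance from the center, which is why a wide-angle lens bends straight lines near the edges of frame the most.

```python
# Sketch of the Brown radial distortion model (coefficients k1, k2 are
# illustrative, not values from any real lens profile).

def undistort_point(x, y, k1=-0.12, k2=0.01, cx=0.0, cy=0.0):
    """Apply r' = r * (1 + k1*r^2 + k2*r^4) to one point in normalized
    image coordinates, measured from the image center (cx, cy)."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# A point at the image center never moves; points near the edge move most.
print(undistort_point(0.0, 0.0))  # -> (0.0, 0.0)
print(undistort_point(0.8, 0.6))
```

A tracker solves for coefficients like these per lens, then applies the correction to every frame before solving the camera move.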
After I have tracked my camera and imported the information into my scene, I have to place and size my characters in 3D space to match that shot. I also mask any object from the footage that would be in front of them, such as a table that the character would be sitting behind or trees in the foreground.
I might be getting ahead of myself.
I’m actually using a custom build of Blender that allows me to do NPR (non-photorealistic) rendering. This is what gives my characters that line-drawing look instead of a standard shaded 3D look. To achieve the same look as my characters rendered from Blender, I batch-render my iPhone frames in Adobe Illustrator. This is sort of interesting because I’m converting the image from raster to vector. Not only does this give my background camera footage a lined look to match my character renders and cartoon style, I can also resize the image to any dimension without losing any quality whatsoever. I could technically scale my film to the size of an entire building without any pixelation at all if I chose to. So my 720p iPhone footage can actually be rescaled to 1080p during this step of my process.
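The arithmetic behind that rescale claim is trivially simple, which is the whole appeal of vector art: scaling is a pure ratio, with no resampling and no pixels to blur. A quick sketch:

```python
# Vector art scales by a pure ratio: going from 720p to 1080p is just
# a 1.5x enlargement, with no resampling step and no quality loss.

def scale_factor(src_height, dst_height):
    """Uniform enlargement factor between two frame heights."""
    return dst_height / src_height

print(scale_factor(720, 1080))  # -> 1.5
print(scale_factor(720, 2160))  # -> 3.0 (a 4K UHD frame)
```

A raster upscale at those same factors would have to invent the missing pixels; the vectorized frames just redraw their curves at the new size.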
Finally, I line up my MOCAP and my dialog with the respective camera angles in Blender and animate my character’s mouth, eyes, and fingers. I can also constrain certain animation elements of my rig to the MOCAP data, or fix any movement I feel is necessary, such as re-recording and swapping out body movements or keying during this stage.
Lastly, the scene is rendered and brought into Nuke for compositing, and then into Final Cut, where it can be watched in context as a whole and scrutinized closely. I may also do slight editing, but carefully, so as not to cause any unnecessary headaches. That’s it in a nutshell. Other than rendering, all of this can be completed within a couple of days: one day to shoot the footage and maybe a day or two for the lip sync and animation. In my opinion, with a little help, a 30-second short could be completed from start to finish within one day rather easily. That makes me happy. I apologize for the wordiness of this post. There is still so much to show! Hope you enjoyed the sneak peek behind the scenes of Man Comics!