Sunday, 31 July 2011

Mova uses FACS and Pixel Liberation Front creates virtual camera systems to make the Green Lantern shine


Many people understand at least the basics of what is involved with animating characters, blowing up buildings, and expanding sets to include wild virtual vistas. But what happens before the FX artists are handed their tasks? While Green Lantern was still just a glow in Warner Bros.' eye, Mova and Pixel Liberation Front were already at work building the foundation of the film.

Reality capture FX house Mova contributed facial capture to create the facial rig for the three actors Mark Strong (Sinestro), Temuera Morrison (Abin Sur), and Ryan Reynolds (Green Lantern), while previsualization FX house Pixel Liberation Front (PLF) handled a mix of previs, postvis and final visual effects.


Mova
Mova, the company founded by OnLive CEO Steve Perlman, produced the data needed to create believable facial animation through two specific steps: the capture process and the data delivery. The work was overseen by Motion Capture Supervisor and General Manager Greg LaSalle and Head of Motion Research Ken Pearce. The first step in the Mova process is melding with the client's pipeline to accommodate the massive amount of information.

"We create more facial data than people are used to getting," said Pearce. "In markers you can get maybe 100 or 150 data points tops, and we are giving people hundreds of thousands of points of data," requiring the FX house to come up with a process that can use the exceptional amount of information and Mova to decide what data needs to be sent.


How do they produce so much more information than marker-based systems? It starts with applying a film of phosphorescent makeup to the actor's face, and clothing when required.

The random-pattern ultraviolet makeup is applied to the actor's face using an airbrush and is totally invisible to the naked eye. However, the Mova camera system does see it, and is able to capture the makeup information through the synchronized camera system.


Mova's capture process uses a number of cameras that are calibrated in space so the system knows their locations. The Mova system captures the information using white light and UV light that strobe back and forth faster than the naked eye can perceive. Color cameras pick up only the face textures, while the alternating UV lights flash on and off, exciting the makeup so it can be picked up by the black-and-white cameras.
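
For readers who think in code, here is a minimal sketch of that strobing idea, assuming an interleaved capture in which even frames are lit with white light and odd frames with UV. The frame rate and naming below are illustrative assumptions, not Mova's actual specification.

```python
# Toy illustration (not Mova's code) of separating an interleaved white-light /
# UV capture into texture frames and phosphor (geometry) frames.

CAPTURE_FPS = 120  # assumed sensor rate; white and UV frames alternate

def split_streams(frames):
    """Split an interleaved capture into color (texture) and UV (makeup) streams."""
    texture_frames = frames[0::2]   # white-light frames -> color cameras, face texture
    phosphor_frames = frames[1::2]  # UV-lit frames -> black-and-white cameras, makeup pattern
    return texture_frames, phosphor_frames

frames = [f"frame_{i:04d}" for i in range(CAPTURE_FPS)]
texture, phosphor = split_streams(frames)
print(len(texture), len(phosphor))  # 60 of each per second at an assumed 120 fps
```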

They capture multiple streams of video over the course of several hours. That information is delivered to the client, who selects what parts they want to process. "We found expressions for each actor that we wanted to use as a neutral expression, the ground zero for all of the emotion," said Pearce.


"The scan that come out of our system are like scans you would get from a laser scanner except we are scanning over time, so we've got 24 scans of motion per second." To handle registration, they selected a frame with an appropriate expression, and the client sets the character model to the same expression.

That becomes the starting point for how the data drives the character animation. While Mova's technology is able to capture textures and performance at the same time, "for this project," said LaSalle "we didn't capture textures since it is retargeted data. In this case the primary goal was to get the face shapes so an animation rig could be built."
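
The registration step can be pictured as choosing one scan as the rest pose and expressing every other scan as offsets from it. The toy sketch below assumes NumPy arrays of vertices and an arbitrary vertex count; it illustrates the idea only and is not Mova's solver.

```python
# Rough illustration of registering per-frame scans against a chosen neutral frame:
# offsets from the neutral are what end up driving the character rig, which the
# client has posed to the same neutral expression.
import numpy as np

SCAN_FPS = 24  # "24 scans of motion per second"

def deltas_from_neutral(scans, neutral_index):
    """Return per-frame vertex offsets relative to the chosen neutral scan."""
    neutral = scans[neutral_index]
    return [scan - neutral for scan in scans]

# toy data: two seconds of scans, 2,500 vertices per frame (an arbitrary count)
scans = [np.random.rand(2500, 3) for _ in range(2 * SCAN_FPS)]
offsets = deltas_from_neutral(scans, neutral_index=0)
```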


Some of the Green Lantern actors required black and white dots painted on their faces to be used as tracking reference, such as around Reynolds's eyes to show where the edge of the mask would be.

"We did various tests to make sure our makeup wasn't going to interfere with what they were doing and that their dot makeup wasn't going to interfere with our processes. That proved to be completely problem free." Another first was capturing facial prosthetics. "That worked fine too," said LaSalle.


The scans result in a tracking mesh with far more data than marker-based capture systems can deliver. In the case of Green Lantern, that was around 2,500 points. Pearce explains, "We gave them a tracking mesh with a vertex that matched every dot for their dot set.

So in one data file they would have the tracking mesh data, the dot data as geometry and images from the shoot that were texture mapped on to the scans so they could also see the dots as a texture map on the actor's face. That was something we hadn't done before and all worked very smoothly."
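
One way to picture that single delivery is as a container holding the three pieces Pearce describes. The structure below is hypothetical, the field names and dot count are mine, and it is not Mova's actual delivery format.

```python
# Hypothetical container for the per-take delivery described above: animated
# tracking mesh, the painted dots as geometry, and shoot images mapped onto the scans.
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class CaptureDelivery:
    tracking_mesh: np.ndarray   # (frames, ~2500 vertices, 3) animated mesh positions
    dot_geometry: np.ndarray    # (frames, n_dots, 3) one vertex per painted dot
    texture_frames: List[str]   # paths to shoot images texture-mapped onto the scans

delivery = CaptureDelivery(
    tracking_mesh=np.zeros((48, 2500, 3)),
    dot_geometry=np.zeros((48, 60, 3)),          # 60 dots is an invented count
    texture_frames=[f"shoot/frame_{i:04d}.jpg" for i in range(48)],
)
```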


Mova did a mix of FACS (Facial Action Coding System) poses, a series of facial expressions, a series of transitions and motions from one expression to another, and lines of dialogue. Said Pearce, "This film really made us start to automate our pipeline for solving FACS poses more. Everything pushes you to evolve a little bit in some direction, that's one of the things that came out of this job." Mova delivers the job in common file formats. A software system builds the motion files in Maya and the FACS poses are delivered as OBJs.
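
As a rough idea of how such OBJ poses could be used downstream, the sketch below wires imported OBJ meshes up as blend-shape targets on a neutral head inside Maya. The file paths and node names are assumptions; the article does not describe the client's actual rig-building tools.

```python
# Hedged sketch: load delivered FACS pose OBJs and connect them as blend-shape
# targets on an assumed neutral head mesh already in the Maya scene.
import glob
import maya.cmds as cmds

BASE_MESH = "face_neutral"  # hypothetical neutral head in the scene

# import each FACS pose OBJ and collect the resulting transforms
targets = []
for path in sorted(glob.glob("deliveries/facs/*.obj")):
    new_nodes = cmds.file(path, i=True, returnNewNodes=True)
    targets += cmds.ls(new_nodes, type="transform")

# build a blendShape deformer so each FACS pose becomes a dialable shape
facs_blend = cmds.blendShape(*(targets + [BASE_MESH]), name="facsShapes")[0]
```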


PLF
PLF had its hands full with nearly 2,000 shots of previs and thousands of shot revisions, all under the supervision of Kyle Robinson. Working in Maya, they matched the DP's lens package and proper film back to ensure the resulting images "looked exactly the way they were supposed to look once you looked through a real-life camera.

If you don't get that piece of information correct, everything you do is wrong." This step provides set information such as how much blue or green screen is needed, how much set extension needs to be filled in digitally, or how many CG characters are in the background.
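
A minimal sketch of that lens-matching step, assuming Maya and a Super 35-style film back; the focal lengths and aperture values here are placeholders, since the article does not list the production's actual lens package.

```python
# Hedged sketch: build previs cameras whose focal lengths and film back mirror the
# real camera package, so framing in Maya matches framing on set.
import maya.cmds as cmds

LENS_PACKAGE_MM = [18, 25, 35, 50, 85]   # assumed prime set, not the film's actual lenses
FILM_BACK_IN = (0.980, 0.735)            # Super 35-style aperture in inches (Maya's unit)

for focal in LENS_PACKAGE_MM:
    cam_transform, cam_shape = cmds.camera(
        focalLength=focal,
        horizontalFilmAperture=FILM_BACK_IN[0],
        verticalFilmAperture=FILM_BACK_IN[1],
    )
    cmds.rename(cam_transform, f"previs_{focal}mm")
```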

Next, the characters, vehicles, environments and locations are constructed digitally and the sequences are animated to real-world specs. Once all the elements are assembled, the sequence is edited with sound in Final Cut Pro and shown to the director "to fine tune the edit, trimming frames, swapping shots, changing the angle. That is handed to every department who then does a breakdown" listing what locations need how many shots, how many shots can be done in a day, how many days each location needs, how many background extras are required, and how many days are needed for special effects. All before anyone even begins shooting.
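
The breakdown itself is simple arithmetic once the previs edit exists. A toy example with invented locations and an assumed shooting pace:

```python
# Toy illustration of the per-location breakdown the previs edit feeds each
# department: shots per location divided by an assumed shots-per-day pace.
import math

shots_per_location = {"location_A": 42, "location_B": 18, "location_C": 27}  # invented
SHOTS_PER_DAY = 12  # assumed pace

for location, shot_count in shots_per_location.items():
    days = math.ceil(shot_count / SHOTS_PER_DAY)
    print(f"{location}: {shot_count} shots -> {days} shooting day(s)")
```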


As with all films, Green Lantern required specific tinkering to get the proper look. One stunt required a speed study. PLF built the location and vehicle. "We did different miles per hour tests to see how things would react when the vehicle went through the set."

Testing the speed in increments and reviewing how the set responded, they ran the simulation through various camera angles until the second unit director decided on the one he liked best. "Then we'd go out to the set and they would rig the gag up, run it at the MPH we had figured out.


That was a pretty interesting challenge. The true essence of previsualization is to figure out a stunt or effect, work on it, decide on something you like, bring it to the real world, and work with the crew to get it to exactly match what the second unit director wanted." It's not as easy as it sounds.


This was one challenge that kept Robinson up at night. The set was only so big, and anything going through at 30 MPH would cross it in seconds, so PLF needed to find ways to stretch the real estate and make the set appear bigger than it was.
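
The math behind that worry is straightforward: at 30 MPH a vehicle covers roughly 44 feet per second, so even a generous set is crossed in a handful of seconds. A quick sketch with assumed set lengths:

```python
# Back-of-the-envelope check of screen time for a vehicle crossing a set.
MPH_TO_FPS = 5280 / 3600  # miles per hour -> feet per second (~1.47)

def seconds_to_cross(set_length_ft, speed_mph):
    """How long a vehicle at speed_mph takes to cross a set of the given length."""
    return set_length_ft / (speed_mph * MPH_TO_FPS)

for length in (100, 200, 300):  # hypothetical set lengths in feet
    print(f"{length} ft at 30 MPH: {seconds_to_cross(length, 30):.1f} s on screen")
```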


Luckily, Green Lantern didn't require all things to be limited to real-world specs, but the message still had to deliver. The constructs created by the Green Lantern's imagination are a good example of this. A construct is the materialization of the Green Lantern's thought, his power.

The ring gives him the ability to materialize whatever is in his imagination, constructed from the green energy of willpower. It manifests itself as a material object. If he needs a gun, he has one, and it's a working piece of machinery until he stops thinking of it and it disappears.

The Green Lantern usually makes a construct when he's in combat, and it's almost instantaneous. "How do you communicate the whole philosophy of what a construct is in eight frames of action?" asked Robinson.


PLF helped the Art Department do R&D and design development on how the constructs were made. "The Art Department had a crystal-clear idea of how they wanted it to happen.

There were different stages where the construct came out and they wanted to see that fleshed out in animation" to define the dynamic energy of the constructs coming to life. "We went through several iterations of different styles on how the constructs were made, the speed, how transparent, how green," said Robinson.

"That was fun because we got to do a bunch of shaders, dynamics, and completely go off the conceptual end of things and just try wacky stuff." While this area isn't always handled in previs, often they are working hand in hand with the art department and can turn over ideas quickly, speeding up creative decisions by handing over a visual. "Of course that doesn't mean this is what the final idea is," Robinson clarified, "but that is the hand-off from the art department to the visual effects department" giving them a strong visual reference to what the art department is looking for.


PLF uses a virtual camera environment system for virtual production, a process that helps the art department, production designers, or DPs visualize a set space with a virtual camera.

"Motion capture cameras will set up a volume," explains Robinson. "From that, we have a Wacom Cintiq tablet which is getting feedback from the computer. It's attached to a camera in a 3D software package. When you turn in real life, it turns the camera in the virtual environment.


So the art director can build a beautiful world in the set, then show the director still images of it." It's an idea they picked up from Avatar, but the system is proprietary in-house engineering. "What Cameron had was around 20 guys on set who would do real time animation.

They would update the set for him on the spot. We don't have the team to do that, but we had enough so that the director could say move this element there, scale this wall, put a window here. Basically allow him to experience the space and kind of dictate adjustments to the environment while he's in it and long before it's built."
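
Conceptually, each tracked sample from the volume just has to be copied onto the virtual camera's transform, so that turning the physical rig turns the camera in the scene. The sketch below assumes a Maya scene and a hypothetical callback fired by the tracking client; it illustrates the idea and is not PLF's proprietary system.

```python
# Hedged sketch: apply tracked tablet/camera samples from a mocap volume to a
# virtual camera so the operator's real-world moves drive the CG view.
import maya.cmds as cmds

VIRTUAL_CAM = "previs_virtualCam"  # assumed camera transform already in the scene

def on_tracker_update(position, rotation_euler):
    """Copy one tracked sample (world-space translate, rotate in degrees) onto the camera."""
    cmds.xform(VIRTUAL_CAM, worldSpace=True, translation=position)
    cmds.xform(VIRTUAL_CAM, worldSpace=True, rotation=rotation_euler)

# a tracking client would call this at interactive rates, e.g.:
on_tracker_update(position=(0.0, 170.0, 250.0), rotation_euler=(0.0, 180.0, 0.0))
```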


PLF used that same motion capture system for the stunt team in a fight sequence. The stunt actors donned motion capture suits and acted out the choreographed fight sequence. PLF retargeted that information on to their digital characters. "Once we have the characters doing the motion capture augmented in virtual reality with some key framing animation, we drop them into the set the director and production designer just scouted with the virtual camera.

Now you have the characters animated doing the choreographed fight sequence in the environment that was just approved. Now, long before they get to the set, to shooting, to the blue screen setup, the director can go in with the same virtual camera and do maneuvers around them and see their action. This way there is a very clear vision of what is expected by all the actors, stunt players, and crew."
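
Retargeting, at its simplest, means mapping the performer's recorded joint motion onto the character's corresponding joints. The sketch below is a heavily simplified, hypothetical illustration with invented joint names, not PLF's actual retargeting setup.

```python
# Toy illustration of mocap retargeting: per-frame joint rotations captured on the
# stunt performer's skeleton are remapped to the digital character's joint names.
JOINT_MAP = {
    "mocap_hips": "hero_pelvis",
    "mocap_spine": "hero_spine_01",
    "mocap_l_shoulder": "hero_l_clavicle",
}

def retarget_frame(mocap_rotations):
    """Map one frame of {mocap joint: (rx, ry, rz)} onto character joint names."""
    return {JOINT_MAP[j]: rot for j, rot in mocap_rotations.items() if j in JOINT_MAP}

frame = {"mocap_hips": (2.0, 35.0, 0.5), "mocap_spine": (4.0, 0.0, 1.0)}
print(retarget_frame(frame))
```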

After previs, there is postvis. "That is where it gets technical. You start tracking empty plates and adding your low-res models in there so that you can get shots with elements to the editors so they can start cutting the movie." Again, Green Lantern's requirements made this no small task. When they were doing postvis and making elements and shots to be edited for the studio temp screening, they delivered around 800 temp shots with a team of just six people.
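
A rough sketch of that postvis step, assuming Maya: bring in the camera solved from the empty plate, drop in a low-res stand-in, and playblast the shot for editorial. All file and node names here are hypothetical, not PLF's actual tools or naming.

```python
# Hedged sketch of a postvis pass: tracked camera + low-res proxy -> temp movie for editorial.
import maya.cmds as cmds

# import the solved camera for this plate (hypothetical path and node names)
cmds.file("tracks/shot_0450_camera.ma", i=True)

# bring in a low-res stand-in and place it roughly where the final asset will be
proxy_nodes = cmds.file("assets/gun_proxy_lowres.ma", i=True, returnNewNodes=True)
proxy_root = cmds.ls(proxy_nodes, assemblies=True)
cmds.xform(proxy_root, worldSpace=True, translation=(0, 150, -400))

# view through the tracked camera and write a temp movie for the editors
cmds.lookThru("plateTrack_camShape")
cmds.playblast(format="qt", filename="postvis/shot_0450_temp", percent=100)
```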

SOURCE: CGSociety.
