(image credit: ScreenDaily.com)

Artificial Realism:

Two noteworthy industry publications were covering the Abu Dhabi Film Festival, and had the foresight to attend the Masterclass on 3D Film Production in the 21st Century. They each came away with different headlines for their reports.

Screen Daily said “DeSouza brings 3D motion capture to the masses”, while The Hollywood Reporter said: “3D Expert Warns of Motion Capture Abuse” (subscription may be required for the THR article).

Both articles were interesting to me, as an example of what the audience may have taken away from the two-and-a-half-hour masterclass. Designing the masterclass was itself a challenge, as I did not know the composition of the audience. I had to account for the possibility of directors, producers, film students, animators and, as I learned later from a few people, IT technicians looking at a career switch.

I’ll use some lines from the two reviews and explain in a bit more detail, from my point of view, what was happening on-stage. Many topics were covered, and I took 3D to mean both regular 3D CG and, of course, Stereoscopic 3D.

This article will be in parts, as the title shows. We start with the enormous strides being made in bringing Digital Actors to life. The clip below was one of my opening slides:

The video, no… “creation”, is the work of the extremely talented Jorge Jimenez. Here’s the best part: for the masterclass, I had downloaded the EXE file from his site. The exe starts by playing the video (actually a realtime render with a camera path), until the stunned audience saw me take the mouse and move the head around, in all its SSSS glory, in realtime. All this on my HP HDX laptop with a three-year-old Nvidia 9600 GT chip!

The holy grail of CG human rendering is to achieve “realism”, but read the second paragraph on Jorge’s site and you will see that cinematic realism could actually be a lot easier to reach: we “dirty down” the otherwise pristine CG rendering, and use methods such as Jorge’s separable SSS to speed up render time on today’s GPUs.
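To make the “separable” idea concrete, here is a minimal numpy sketch of my own (purely illustrative: not Jorge’s actual shader or diffusion kernels). A 2D Gaussian blur of the irradiance, standing in for subsurface diffusion, can be computed as two cheap 1D passes instead of one expensive 2D pass:

```python
import numpy as np

def gaussian_kernel_1d(radius, sigma):
    """Normalized 1D Gaussian kernel of width 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def convolve_1d(img, kernel, axis):
    """Naive 1D convolution along one axis ('same' size, zero-padded)."""
    pad = len(kernel) // 2
    padded = np.pad(img, [(pad, pad) if a == axis else (0, 0)
                          for a in range(img.ndim)])
    out = np.zeros_like(img)
    for i, w in enumerate(kernel):
        sl = [slice(None)] * img.ndim
        sl[axis] = slice(i, i + img.shape[axis])
        out += w * padded[tuple(sl)]
    return out

def separable_sss(irradiance, radius=4, sigma=1.5):
    """Approximate 2D diffusion with a horizontal then a vertical 1D pass."""
    k = gaussian_kernel_1d(radius, sigma)
    h = convolve_1d(irradiance, k, axis=1)  # horizontal pass
    return convolve_1d(h, k, axis=0)        # vertical pass
```

Because a Gaussian is separable, the two 1D passes reproduce the full 2D blur at a fraction of the per-pixel cost, which is why this runs in realtime even on modest GPUs.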

“Remember”, I said to the audience, “we are not looking for real-time realism the way game developers must. For cinema you can take the process offline, but still use the giant strides that pixel shaders and other real-time graphics technologies now allow us”.

For the more technically inclined, here’s a paper from Nvidia on the subject of rendering more realistic human skin in CG.

The main takeaways from the first 30 minutes of my talk were:

  • Game Engines and CG technology are going to drive Cinematic production henceforth.
  • Hybrid Cinema Production is what 3D filmmaking will be all about: the mixing of live action and CG, for both actors and the environment or “Stage”.
  • I made it a point to repeatedly explain the difference between vanilla 3D and Stereoscopic 3D.
Let’s get into some of the sensational headlines that The Hollywood Reporter generated from the talk:
>> Angelina Jolie going at it with Daniel Craig? That’s the scenario that Dubai-based CGI and 3D expert Clyde DeSouza painted for his Masterclass audience at Abu Dhabi Film Festival. DeSouza said that once an actor’s performance is motion captured for a big-budget Hollywood movie, it stays on servers forever.
The last part of the sentence is what I did say on-stage! The former part was from the post-interview talk (-:
But getting back…
Digital Surrogate Ownership:
Who owns the Digital Surrogate of an actor? The actor themselves? The studio? Or the mo-cap/performance capture house?…
Let’s see, step by step, what really goes into creating a Digital Surrogate:
  • The person (actor) undergoes a whole body scan.
  • An even more detailed scan of the facial skin texture, and maybe the hands.
  • This skin texture is “baked”, let’s say on Oct 15th 2012, when the person (actor) is 35 years old.
  • Facial expressions are captured using algorithms unique to each facial capture package. Through CG morph targets and CG face rigs, almost any expression can then potentially be synthesized. See the FACS explanation for more technical detail on Facial Action Coding.
  • Facial expressions unique to an actor can also be captured as a “macro”, rather than re-created via FACS.
  • Finally, motion capture. Though not strictly needed, since motion can be retargeted from any actor, in certain cases the best mo-cap performance for a certain “style” is best acquired from the original “Human”.
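The morph-target step above can be sketched in a few lines. This is a hypothetical toy rig (four 2D vertices and two invented targets), not any particular package’s format; real face rigs blend dozens of FACS-style shapes across thousands of vertices, but the arithmetic is the same:

```python
import numpy as np

# Hypothetical 4-vertex face patch; a real scan has thousands of vertices.
base = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

# Morph targets store a full pose of the mesh; the rig blends their deltas.
targets = {
    "smile":      base + [[0.0, 0.1], [0.0, 0.1], [0.0, 0.0], [0.0, 0.0]],
    "brow_raise": base + [[0.0, 0.0], [0.0, 0.0], [0.0, 0.2], [0.0, 0.2]],
}

def synthesize(weights):
    """Linear blendshapes: base mesh plus weighted morph-target deltas."""
    mesh = base.copy()
    for name, w in weights.items():
        mesh += w * (targets[name] - base)
    return mesh
```

With enough targets, almost any expression becomes a point in this weight space, which is exactly why an expression “macro” captured once can be replayed, scaled, or mixed later.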
Think of John Travolta’s “strut” in the final scene of Staying Alive, back in 1983. Can he “strut” the same way now as he could back then? …ummm
What if a studio were to decide to remake Staying Alive? If you had the original actor’s mo-capped “strut”, you could simply apply it to the backside of a Digital Actor in leather trousers. A photorealistic New York is available today.


How about Michael Jackson’s dance style and the moonwalk? He’s often imitated, but in the end, the nuances were his. Was his moonwalk ever mo-capped? I do not know. Can it be mo-capped with 100% accuracy from a video file? I’m not certain of that either.
A Digital Surrogate Actor is Born:
An actor’s, or any person’s, Digital Surrogate is “born” when the above steps are completed. Here’s the interesting part…
>> On the other hand, said DeSouza, stars could insist [on] their CG likenesses remaining permanently young. “Tom Cruise could insist on being young forever in every movie from now on,” he joked.
What I meant was that as an actor ages, he or she can make the call that in all future movies they want to look as they did at, say, 35.
If they get a yearly scan done at 36, 37 and so on, they could pick their Digital Surrogate by year. It certainly costs less than cosmetic surgery, with no side effects whatsoever!


On a more serious note: who “owns” this Digital Surrogate today? A full digital replica of some actors is lying around on servers in some VFX house. With a little retargeting, this performance capture can be spliced and edited much like video; it can even be broken down to sentence- and phrase-level facial expressions, then mapped onto any other digital character, without the actor’s consent.
(image credit: flickr/deltamike)


Example: do you need Jim Carrey’s exaggerated facial expressions for “MASK 3”, but Jim Carrey is asking too much for the film? Let’s search our servers and see if we’d mo-capped and performance-captured him.
OK, let’s retarget it to a digital character, who might even bear a striking resemblance (though that’s not required) to the original actor. Along with advances like the Jimenez realistic CG face rendering above, we now have a completely digital actor within our budget.
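That splice-and-retarget scenario is alarmingly simple in data terms. Here is a minimal sketch (every name and the frame format are hypothetical, purely for illustration): a capture take is just a list of per-frame values, so cutting out a phrase and re-keying it onto another rig’s joints is ordinary list and dictionary work:

```python
# Hypothetical capture take: each frame maps joint names to rotations (degrees).
take = [
    {"src_jaw": 5.0, "src_brow_l": 0.0},
    {"src_jaw": 8.0, "src_brow_l": 2.0},
    {"src_jaw": 2.0, "src_brow_l": 6.0},
]

# Retargeting map: source rig joints -> target rig joints.
joint_map = {"src_jaw": "tgt_jaw", "src_brow_l": "tgt_brow_left"}

def splice(clip, start, end):
    """Cut a phrase-level segment out of a take, much like editing video."""
    return clip[start:end]

def retarget(clip, mapping):
    """Re-key every frame onto the target rig's joint names."""
    return [{mapping[j]: v for j, v in frame.items()} for frame in clip]
```

Real retargeting also compensates for differing bone lengths and rest poses, but the point stands: once the data exists on a server, nothing technical prevents its reuse.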


So this is something for actors to think about. There is both a good and a bad side to the technology, but the right to decide should rest fully with the human owner of the surrogate.
At this point we are still talking vanilla 3D, but once you have a 3D scan and the Digital Surrogate, it’s a no-brainer to render it in Stereoscopic 3D. When the depth channel, as I call it, kicks in… you have total immersion in Artificial Reality.


From the THR report:
>>”The biggest challenge for filmmakers remains putting emotion into computer-generated people. Filmmaking is about emotion — even in artificial reality films.” 
We’ll save that for Part 2.


  • Ramachandra Babu

    Good that you have started the interesting debate about who owns the copyright of the “Digital Surrogate”. The right to resell downloaded music is being challenged in a US court, while an EU court has already ruled that downloaded games and software can be resold. Sites like ReDigi are reselling pre-owned digital music, and there are plans to expand the resale business to games and e-books too. Buying and selling of media needs to be redefined in the digital world!