Build your own Hollywood AVATAR movie camera rig

By: Clyde DeSouza

AVATAR the movie has been hailed as a revolution in filmmaking and as an entertainment blockbuster, and rightly so. It created many firsts, for example being one of the longest new-era 3D films, at approximately 160 minutes, that did not give audiences a headache. Far from it: the movie was actually engrossing and used stereoscopic 3D very effectively in its storytelling.

There were many other advances as well, from the technology used in motion capture of real actors, whose facial expressions drove their animated “avatars”, to the lighting and CGI technology that rendered the realistic skin textures of the animated characters.

This article, however, is about the revolutionary method used to film the CGI world in Avatar. Hollywood had never before “shot” CG movies in as intuitive a manner as Cameron did with his Avatar Virtual Camera.

Disclaimer: I do not know for certain the exact details of the now much-talked-about Virtual Camera that was in Cameron's hands as he roamed the performance set and CG world. But from my understanding, and guessing at how it was done, here are some insights into creating a Virtual Camera rig that can be used in what I call “Hybrid Cinematography” – a mix of CGI and live talent.

Real Time Virtual Worlds:

Watch the video above and pause frequently. Some observations worth mentioning:

  • Ignore all references to the Xbox and PlayStation; the engine runs at max spec and realism on the PC.
  • Everything in the CG world is being created live and in real time.
  • The frame update is well over the required 24 fps (or 48 fps for stereo).
  • The demonstrator is playing God, creating a fantasy world in real time at photorealistic quality.
  • A demonstrator / creator / Director then steps into this CG world with a “camera” and can do any conceivable cinematographic move: dolly, long shot, over-the-shoulder (OTS), etc.
  • The world itself is animated – waves are lapping on the shore, leaves are swaying – while the Director is setting up framing and “blocking” a shot.
  • Time of day follows real-world rules, as do atmospherics; all of these can be tweaked to create any existing real-world setting or fantasy planet.
  • Vegetation can be grown in real time and follows rules of existence.
  • Physics simulation is built in.

The above observations are not meant to sell this excellent virtual cinematography toolkit, the game engine called CryEngine 3, but to demonstrate how far real-time filmmaking has evolved since the art first came to be named Machinima.

It has been noted that mega-corporations such as Microsoft and HP played important roles, from building custom software that could, for instance, track every blade of grass in the Avatar world, to creating custom hardware to render the masterpiece at cinema-realistic quality.

Obviously such budgets are beyond the reach of the average Hollywood filmmaker and production house, so here are some tips for making an Avatar-like film on a lower budget. By using cutting-edge, readily available technology with sufficient innovation and a bit of hacking and tweaking, we may even surpass what Cameron did!

Building a custom AVATAR Virtual Camera or “Simulcam” rig:

While licensing an engine such as CryEngine 3 is not exactly inexpensive, it is much more cost-effective than budgeting for a similar realtime exercise in CGI cinematography by hiring heavyweights to supply hardware and software. CryEngine's secret is something aptly called a “sandbox”. This is where you create your world, or import an already created base world from popular 3D modelling packages, populate it with vegetation and characters, and then direct your epic – all in real time, as the above video shows.

The great thing is that there is a huge, passionate community of freelance “world designers”, programmers and plugin developers who can be hired to assist in creating a custom version of the standard toolkit – at a much lower cost. If the actual output of CryEngine is not being used in the final film, a custom license deal can probably be worked out with the creators as well.

The Virtual Camera can be a plugin that slaves a CG CryEngine camera (there can be more than one CG camera in the Sandbox) to a physical device such as an LCD screen fitted with an accelerometer, compass and gyroscope. Such devices can be designed, or hacked together from off-the-shelf existing hardware – a sort of glorified iPhone with more sensors. A more precise alternative is a tracking system such as that from Vicon, or from providers such as Xsens. Download the PDF here. In effect, the device becomes the “mouse” that controls the regular CG camera in virtual space.
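
To make the idea concrete, here is a minimal sketch of that “mouse” loop in Python. Everything in it is an assumption for illustration: read_tracker() stands in for whatever the sensor or tracking SDK actually provides, and CGCamera stands in for the engine-side plugin object.

```python
# Minimal sketch: slave a CG camera to a tracked handheld device.
# read_tracker() and CGCamera are hypothetical stand-ins for a real
# tracker SDK (Vicon, Xsens, IMU fusion) and an engine plugin API.
import time

class CGCamera:
    """Toy stand-in for the engine's camera object."""
    def __init__(self):
        self.x = self.y = self.z = 0.0
        self.yaw = self.pitch = self.roll = 0.0

def read_tracker():
    """Stub: return (x, y, z, yaw, pitch, roll) of the physical device.
    A real rig would fuse gyro/compass/accelerometer readings here, or
    query an optical tracking system directly."""
    return (0.0, 0.0, 1.6, 0.0, 0.0, 0.0)

def slave_camera(cam, smoothing=0.25, fps=48):
    """Per-frame loop: copy the tracked pose onto the CG camera, with
    simple exponential smoothing to damp sensor jitter."""
    while True:
        x, y, z, yaw, pitch, roll = read_tracker()
        cam.x += smoothing * (x - cam.x)
        cam.y += smoothing * (y - cam.y)
        cam.z += smoothing * (z - cam.z)
        cam.yaw += smoothing * (yaw - cam.yaw)
        cam.pitch += smoothing * (pitch - cam.pitch)
        cam.roll += smoothing * (roll - cam.roll)
        time.sleep(1.0 / fps)
```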

Hollywood says this is groundbreaking, but the fact is that in the gaming and Machinima world, virtual cameras are a staple when doing “cut scenes”: during gameplay, control is wrested away from the gamer and a cinematic cut-scene is played out before control is returned to the player.

The virtual camera's lens, depth of field, field of view, etc. can be set to mimic everything from a fisheye to a 35 mm real-world camera. Finally, the scale of the CGI world can also be set to real-world units such as inches, feet and miles, so that the Director, looking at the viewfinder (LCD), sees a view of the CG world as if it were real.
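
As a concrete example of matching the CG camera to a real lens, the field of view follows directly from the focal length and the gate (sensor) width via the standard pinhole relation; the gate widths in the sketch below are assumed defaults you would swap for your own camera's.

```python
import math

def horizontal_fov_deg(focal_length_mm, gate_width_mm=36.0):
    """Horizontal field of view from focal length via the pinhole model.
    36 mm is the gate width of a full-frame 35 mm still camera; use
    roughly 24.9 mm for a Super 35 motion-picture gate."""
    return math.degrees(2.0 * math.atan(gate_width_mm / (2.0 * focal_length_mm)))

# A 50 mm lens on a full-frame gate gives ~39.6 degrees horizontal FOV,
# which is the value you would dial into the CG camera.
print(round(horizontal_fov_deg(50.0), 1))
```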

Now, using established camera moves such as dollying and zooming, the real-time world can be seen on the LCD screen (the viewfinder) of the physical device. The XYZ data of this physical camera's movement drives the CGI camera, and its motion can be recorded as a camera path. With a suitable conversion hack or export plugin, this data can be exported to industry-standard “superior” offline rendering software for final output of the movie.
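
A recorded camera path is really just timestamped pose keyframes. The sketch below records them and dumps CSV purely for illustration; an actual pipeline would more likely export FBX, Collada or the offline renderer's native camera format.

```python
import csv

class CameraPathRecorder:
    """Record the tracked camera pose once per frame so the take can be
    replayed in the engine or handed off to an offline renderer."""
    def __init__(self):
        self.keys = []  # (time, x, y, z, yaw, pitch, roll)

    def record(self, t, x, y, z, yaw, pitch, roll):
        self.keys.append((t, x, y, z, yaw, pitch, roll))

    def export_csv(self, path):
        # CSV keeps the sketch simple; swap in an FBX/Collada writer
        # for a real offline-rendering handoff.
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["time", "x", "y", "z", "yaw", "pitch", "roll"])
            writer.writerows(self.keys)

rec = CameraPathRecorder()
rec.record(0.0, 0.0, 0.0, 1.6, 0.0, 0.0, 0.0)
rec.record(1.0 / 24, 0.05, 0.0, 1.6, 1.5, 0.0, 0.0)
rec.export_csv("camera_path.csv")
```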

(Above: realtime mo-cap in the Unreal 3 engine)

Motion capture and facial capture data from a live actor wearing sensors can also be sent to the CryEngine characters. At this point I have not done enough research to know whether this data can be piped “live” to a character in the Sandbox editor, but I am fairly convinced that it can.
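
If live piping is possible, it would most likely take the shape of a network stream of bone transforms. The sketch below shows the general form of such a feed; the packet layout, port and engine-side listener are all invented for illustration – Vicon, Xsens and the engines each define their own streaming protocols and SDKs.

```python
import socket
import struct

def send_bone(sock, addr, bone_id, x, y, z, qx, qy, qz, qw):
    """One bone per datagram: integer id, position, rotation quaternion.
    The layout is made up for this sketch, not a real protocol."""
    sock.sendto(struct.pack("<i7f", bone_id, x, y, z, qx, qy, qz, qw), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
engine = ("127.0.0.1", 9000)  # assumed address of an engine-side listener
# Stream the head bone (id 0) at its rest pose, 1.7 m up:
send_bone(sock, engine, 0, 0.0, 1.7, 0.0, 0.0, 0.0, 0.0, 1.0)
```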

**Update** According to an article in Variety, even Cameron only saw “crude” models in the Simulcam. Suffice it to say, game engines such as CryEngine and Unreal 3 would give you equally good realtime animated avatars.

One system worth mentioning, brought to attention on the ongoing Cinematography.net discussion list, is Previzion from LightCraft. A great deal of time and effort has clearly gone into it: the system is completely portable and ready to be used or adapted for indie hybrid movie making and big-budget previsualization.

**Update 2** Listen to the hour-plus audio interview on the making of the film and the realtime Simulcam. Much emphasis is placed on the ability to hand-place every cloud and tree in the CG set and manipulate them in realtime. A lot of the talk is about the Modo software, though.  http://www.luxology.com/modcast/audio.aspx?id=103

**Update 3** You may also want to check out this product: the InterSense Virtual Camera tracking system.

**Update 4** (March 20th 2010) I just found out about KontrolFreax

The Previzion system from LightCraft

Infinite ‘Takes’ in Mixed Reality:

If the motion capture is done once to satisfaction and pre-mapped to the virtual characters, the live actors are free to go home, while the Director proceeds to do infinite “takes” to his or her satisfaction. The only time the actors need to be present is during “hybrid scenes”, where live human talent mixes with “avatars”. The moment the Director shouts “Action”, the scene can be re-animated and the Director goes in with his virtual camera to take the shot. I'm guessing this is even how Cameron did it – or he could have had the luxury of using Autodesk MotionBuilder for realtime facial-expression takes if he so wanted.

Once the CGI scene has been set up, the Director is free to “shoot” in an intuitive manner, using his background, knowledge and style as a filmmaker to capture and cut to appropriate close-ups, over-the-shoulder shots and crane-type shots as he would in the real world.

Avatar the movie was also shot stereoscopically. This can be done in the CryEngine Sandbox too, by slaving a second CG camera set at whatever interocular base is desired. The output of the Sandbox can then be visualized in stereo 3D using DirectX calls that are intercepted by software from third-party manufacturers such as iZ3D, or using an Nvidia-based 3D solution, for previewing on ANY size of stereoscopic 3D display. Realtime rendering in stereo can be accomplished on a single Intel i7-based PC with a high-end Nvidia card.
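
Slaving that second camera amounts to offsetting it along the master camera's right vector by the interocular base. A minimal sketch, assuming a simple coordinate convention (camera looks along +Y at zero yaw, yaw rotates about +Z) and a 65 mm default base:

```python
import math

def stereo_pair(x, y, z, yaw_deg, interocular_m=0.065):
    """Left/right camera positions for a given master camera pose.
    65 mm approximates human eye separation; in a CG world the base
    can be set to whatever the shot's depth calls for."""
    yaw = math.radians(yaw_deg)
    # right vector when the camera looks along +Y and yaw spins about +Z
    rx, ry = math.cos(yaw), math.sin(yaw)
    half = interocular_m / 2.0
    left = (x - rx * half, y - ry * half, z)
    right = (x + rx * half, y + ry * half, z)
    return left, right

left_cam, right_cam = stereo_pair(0.0, 0.0, 1.6, yaw_deg=30.0)
```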

This will give the Director the ability to look for anomalies such as stereo window violations, correct spacing of 3D objects, assigning and visualizing a depth budget, maintaining inter-scene “cut” depth, etc. – all in realtime! At this moment I am not sure that even Cameron had this luxury on his system!
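
Checking a depth budget in realtime boils down to computing on-screen parallax per object. The sketch below uses the standard shifted-sensor approximation; the gate size, screen size and the ±25 mm budget are assumptions you would replace with your own rig's and delivery format's numbers.

```python
def screen_parallax_mm(obj_dist_m, conv_dist_m, base_m,
                       focal_mm, gate_width_mm, screen_width_mm):
    """Approximate on-screen parallax of a point at obj_dist_m for a
    parallel (shifted-sensor) rig converged at conv_dist_m.
    Positive = behind the screen plane, negative = in front of it."""
    sensor_disparity_mm = focal_mm * base_m * (1.0 / conv_dist_m - 1.0 / obj_dist_m)
    return sensor_disparity_mm * (screen_width_mm / gate_width_mm)

# Flag objects that blow an assumed +/- 25 mm budget on a 2 m wide screen:
for z in (1.0, 3.0, 50.0):
    p = screen_parallax_mm(z, conv_dist_m=3.0, base_m=0.065,
                           focal_mm=35.0, gate_width_mm=24.9,
                           screen_width_mm=2000.0)
    print(z, round(p, 1), "VIOLATION" if abs(p) > 25.0 else "ok")
```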

Mixing Live Action with CGI – Real Time

Hopefully this brief article provides enough “seed ideas” to kick-start a new wave of creative filmmaking. CryEngine 3 is known for its ultra-realistic realtime environment and outdoor-scene rendering, the output quality of which can, in certain cases, be used as finished video! There are other good engines as well, such as Unreal Engine 3 and even the budget-priced Unity3D engine, from which to build a realtime Virtual Camera system.

Looking at more advanced areas – mixing green-screen (chroma-keyed) live actors into this real-time CG world and viewing them with a Virtual Camera – one idea would be to take a live-action stereo camera rig, either side-by-side or beamsplitter-based, and simply “stick” our Virtual Camera (LCD) to the back of the physical stereo rig.

The output of the real-world stereo rig would then go through conventional green-screen keyer hardware and be overlaid on the CGI world. This composite image is what the Director sees on the Virtual Camera LCD. Now, as he aims the physical stereo camera at the human talent on a performance stage, he sees the complete end scene! Foreground CGI layers can be hidden and moved out of the scene if they occlude the human talent, and then brought back into place, to make sure they fit the rules of stereoscopic 3D production.
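
The compositing order is the part worth pinning down: key the live feed, then lay it over the CG render. The toy keyer below (NumPy, float images in [0, 1]) exists only to illustrate that order; hardware keyers do the same job far better and in realtime.

```python
import numpy as np

def composite_over_cg(live_rgb, cg_rgb, green_strength=1.2):
    """Crude green-screen key: a pixel counts as 'screen' when its green
    channel dominates red and blue, and is replaced by the CG pixel.
    live_rgb and cg_rgb are float arrays of shape (h, w, 3) in [0, 1]."""
    r, g, b = live_rgb[..., 0], live_rgb[..., 1], live_rgb[..., 2]
    screen = g > green_strength * np.maximum(r, b)
    matte = screen[..., None].astype(live_rgb.dtype)
    # where the matte is 1 (green screen), show the CG world instead
    return live_rgb * (1.0 - matte) + cg_rgb * matte
```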

Some considerations are offsetting the pivot point of the Virtual Camera (LCD) being tracked, and adjusting the CGI-world camera's “lens” to mimic the focal length, depth of field, etc. of the real-world camera. This is an area where further research towards perfection can be carried out.
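
The pivot-point offset is a fixed transform: the tracker reports the LCD's pose, but the CG camera should sit where the real lens's nodal point sits. A yaw-only sketch of applying that offset, using the same coordinate convention as the stereo example above (forward along +Y at zero yaw):

```python
import math

def nodal_point(track_x, track_y, track_z, yaw_deg, lens_offset_m):
    """Shift the tracked LCD position forward, in the device's own frame,
    by the fixed distance between the LCD and the lens's nodal point.
    A full solve would rotate the offset by the complete orientation
    matrix (yaw, pitch and roll); yaw alone keeps the sketch short."""
    yaw = math.radians(yaw_deg)
    fx, fy = -math.sin(yaw), math.cos(yaw)  # forward vector, yaw about +Z
    return (track_x + fx * lens_offset_m,
            track_y + fy * lens_offset_m,
            track_z)
```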

Big-budget Hollywood producers will no doubt balk at the idea of using a game engine as the core of a Virtual Reality Camera, but even they stand to benefit from the advances in technology that are happening month by month. The ability to rapidly set up entire scenes, prove concepts and rehearse hybrid CGI/human-talent films does not need to cost millions of dollars anymore; it can be done at a fraction of that cost. In some instances, because everything is running in real time, hardware is getting cheaper and post-production filters are within reach, entire scenes can realistically be produced in real time! Watch the YouTube video below in HD mode for an example, and remember that it is rendered in real time on a single modern PC.

Refresh this page if the three YouTube videos do not load. First video: every cloud, tree and rock can be moved in realtime while animated, and output in full stereoscopic 3D. For further reading on re-creating the real-world environment in CG, read a previous post here – Game Engines for Realtime Visualization.

- Authored by Clyde DeSouza,

Technology Advisor – Real Vision