
Shooting in VR on Digital Sets:

For the past four days we’ve been shooting the opening scene of ‘Dirrogate:DeepVR’ at Hazelwood Terrace. The reason the scene’s taking so long isn’t that Maya is a bad actor; in fact, she’s been really co-operative, going through multiple wardrobe changes, hairstyles and contact lenses without fuss. The reason is learning the ropes of filming on Digital Sets in Virtual Reality, and discovering a new language of storytelling along the way.

We’ll get to some of the grammar and vocabulary soon; ‘Deep VR and Root VR’ is the title of this article, after all. First, let’s talk a bit more about Digital Sets and working in this nascent industry.

Above: How a Director or DP might do scene blocking while within VR.

Hazelwood Terrace isn’t a real-world location, as you’ve probably guessed by now. The location exists in VR, created by PolyBox, and was prominently featured by Timoni West during her presentation at the Vision Summit 2016. As VR filmmaking matures, there is every reason to believe it will spawn a whole industry, just like its real-world counterpart: Digital Sets and VR Studio backlots, Set designers, Costume and makeup, Digital actors – a virtual Hollywood.

However, there are caveats. We aren’t living in the Matrix, yet. The reason we at RealVision can film Dirrogate:DeepVR is only because Hazelwood Terrace is optimized for the current generation of underpowered VR platforms. Maya, our lead actress, had to shed a few hundred thousand polygons to be able to perform in the scene.

The upshot is: we predict mass employment and job opportunities for many real-world film industry professionals – provided they update their skill sets for Volleywood.

Above: Scenes from “221B Baker Street” by Elliott Lambert show what real-world DPs and Set designers might come across when evolving their careers to VR filmmaking.

To lend credence to our prediction, let’s see what we need:

  • A DP (Director of Photography) who understands Forward rendering, Deferred rendering, Gamma for VR displays and baking lightmaps, along with their traditional real-world skills of three-point lighting etc.
  • Camera Person – who understands the nuances of camera movement and Framing (yes, there is such a thing) despite the lack of a proscenium, as we’ll see later on in this article.
  • Set and Costume Designers – who will need to familiarize themselves with creating digital replicas of real-world materials, texture atlases and “substance designing” for digital satin, wood, leather…
  • Audio engineers and Music Producers – who will need to upgrade their knowledge to take in ambisonics, real-time environmental audio propagation and occlusion, and – because of the intimacy that a VR headset (theater) offers – ASMR in particular. Pioneers exploring this field, such as Nick Venden, are worth following.
  • A Director – who will need to know how to leverage the limitations of the medium and throw the rule book out the window. (S)he might also demand that the location or set have ‘Root VR’ and Deep VR capability.

It’s important to have a firm grasp of what’s involved in Cinematic VR filmmaking, and of how to create an experience that surpasses the reason audiences currently go to the cinema – wanting to get away from the rigors of real life to experience a virtual one.

It’s part of the evolution of Cinema as we know it, where filmmaking will be hybrid – a mix of real-world photography and the total immersion that is possible when merged with synthetic reality.

Which brings us to discussing the main topics of the article…


Note: Her lips don’t move in this video – a limitation of capturing the Oculus stream from the Unity game engine. Use the mouse to look around.

What is ROOT VR?

While shooting at a VR location such as Hazelwood Terrace, the Director might ask if everything is Root VR ready. Let’s see what that entails.

  • Is the physics in place? Do the sun-lounger and terrace furniture, for instance, have mass, as their real-world counterparts do?
  • In the video above, what happens if the audience – the person wearing the VR headset – climbs up on the rail around the terrace?
  • Where is this place – does the story/narrative dictate that the location be tied to a real-world latitude and longitude and weather system?
  • Does it have a functional time-of-day system? The passage of time in VR can be manipulated to greater effect than is ever possible in traditional Cinema. This feature has to be wielded responsibly.

These are some of the fundamentals of what we’d call “Root VR” – a term taken from computing terminology – giving the Director “super user” status to craft the narrative. When offering VR locations on lease for filmmaking, a VR Location Scout might look for these assets to be tabulated.
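
To make two of those checks concrete, here is a minimal sketch in Unity C# (the engine we shoot in) that gives the set dressing real mass and wires up a simple Director-controlled time-of-day system. Every name in it – the sun light, the sunLounger, the 18 kg figure, the 120-second day – is our own illustration, not part of the Hazelwood Terrace asset.

```csharp
using UnityEngine;

// A minimal "Root VR" sketch: real mass on the furniture, plus a
// compressed day-night cycle driven by rotating the directional light.
// All names and values here are illustrative assumptions.
public class RootVRSetup : MonoBehaviour
{
    public Light sun;                     // the scene's directional light
    public Rigidbody sunLounger;          // terrace furniture with physics enabled
    public float dayLengthSeconds = 120f; // one full day-night cycle, compressed

    void Start()
    {
        // Real-world mass (in kilograms) so the lounger reacts believably when nudged.
        sunLounger.mass = 18f;
    }

    void Update()
    {
        // Rotate the "sun" through 360 degrees per compressed day: the
        // Director can stretch or shrink the passage of time at will.
        sun.transform.Rotate(Vector3.right, (360f / dayLengthSeconds) * Time.deltaTime);
    }
}
```

Tying the location to a real-world latitude, longitude and weather feed would build on the same hooks.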

Above: In case it’s not obvious – that is a completely synthetic outdoor location.

Unlike in the real world, “Golden hour” in Hybrid VR filmmaking, along with the weather, can be manufactured by the Director.

Image: nav-mesh markup for Root VR filmmaking.

What is DEEP VR?

While Root VR allows access to the edge-rail at Hazelwood Terrace, the Director, using his/her discretion and depending on whether the narrative is multi-threaded, might decide not to allow the audience the liberty of committing VR suicide. The blue areas are what the Director has marked as “navigation safe” in this scene, both for the actors and for the audience present in the movie.

In a larger scene, such as a Hazelwood VR backlot or Digital town, an entire city block could be nav-meshed to allow crowds of standee VR actors to walk around autonomously. A normal Cinematic VR film (shot with a 360 camera) is by its nature more challenging to ‘direct’, given that the audience is free to turn and look around. Things go up a notch or two on the directorial scale for a Hybrid VR film, where the audience might be allowed locomotion on the set.
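
As a sketch of how those autonomous standees might work – assuming Unity’s navigation system (UnityEngine.AI in recent versions), with a nav-mesh already baked over the Director-approved blue areas – a background actor needs only a few lines of logic. The class name and wander radius below are our own:

```csharp
using UnityEngine;
using UnityEngine.AI;

// A "standee" background actor that wanders autonomously, but only on
// the baked nav-mesh (i.e. the Director's "navigation safe" areas).
[RequireComponent(typeof(NavMeshAgent))]
public class StandeeActorWander : MonoBehaviour
{
    public float wanderRadius = 10f;  // how far afield the actor may roam
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        PickNewDestination();
    }

    void Update()
    {
        // Once the actor arrives, pick another point on the baked mesh.
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
            PickNewDestination();
    }

    void PickNewDestination()
    {
        Vector3 candidate = transform.position + Random.insideUnitSphere * wanderRadius;
        NavMeshHit hit;
        // SamplePosition clamps the random point to the nearest spot on the
        // nav-mesh, so the actor can never leave the "navigation safe" areas.
        if (NavMesh.SamplePosition(candidate, out hit, wanderRadius, NavMesh.AllAreas))
            agent.SetDestination(hit.position);
    }
}
```

Because SamplePosition always snaps the target back onto the baked mesh, a whole city block of standees stays inside the blue areas with no per-actor scripting beyond this.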

How Deep can/should Deep VR go?

Image: the Deep VR nav-mesh in the living-room scene.

In the scene above, the nav-mesh shows that the audience won’t be able to reach the book-shelf. If they could, the Director might want to make it so every book can be picked up and contains readable material.

Consider this:

  • In Dirrogate:DeepVR, ideally the Terrace scene would end with Maya saying, “It’s getting late… let’s go inside,” at which point she walks into the living room. The audience is free to follow her, or to walk around the terrace exploring the area, take in the night-scape, or watch aircraft lights blinking as they come in for a landing. None of this is important to the narrative, but it builds atmosphere; a subliminal form of that coveted word in VR – Presence.

So… how do we pick up the narrative?

  • The moment the audience does enter the Living room, they are allowed a few steps before they ‘trip’ a scene-change trigger: an invisible 3D object or barrier that, when entered, fades to black and moves to the next scene in the narrative – the bedroom and Maya’s internal monologue from the original Dirrogate VR film, seen at 1:44 into the movie. (A sketch of such a trigger follows this list.)
  • At this point the audience does not have access to the living-room study and the bookshelf. However, after Maya’s interior monologue, they could (we haven’t decided how deep the Deep VR should be) leave the bedroom, walk to the book shelf and pick up a book that should have real rendered text on its pages. How deep? The Director might ask that 40 books be on the shelf and contain real literature. The audience can read… in VR, till dawn. The scene would then fade out, eventually, to the next scene as in the original narrative.
  • If the physics were left intact by the Director, the cushions could be picked up and tossed around, the books scattered across the floor… but in the name of all things holy in filmmaking… we digress.
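
Here is the scene-change trigger sketch promised above – a hypothetical Unity snippet, with the scene name invented and the fade left as a stub, since any full-screen fade would do. (For the trigger to fire, the camera rig needs its own collider and one of the two objects a Rigidbody.)

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// An invisible barrier that fades to black and moves to the next scene.
// Attach to a collider (with "Is Trigger" enabled) inside the doorway.
public class SceneChangeTrigger : MonoBehaviour
{
    public string nextScene = "Bedroom_Monologue"; // hypothetical scene name
    public float fadeSeconds = 1.5f;

    void OnTriggerEnter(Collider other)
    {
        // Only the audience's head (the VR camera rig) should trip the change.
        if (other.CompareTag("MainCamera"))
            StartCoroutine(FadeAndLoad());
    }

    IEnumerator FadeAndLoad()
    {
        // Fade to black here (e.g. animate a full-screen quad's alpha)...
        yield return new WaitForSeconds(fadeSeconds);
        SceneManager.LoadScene(nextScene);
    }
}
```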

The point is, in a DeepVR experience we are aiming at total immersion. To that end, even the texture of the fabric on the cushions, the wood paneling and the paint texture of the wall count. This can be a blend of real-world photography and synthetic conversion.

Image: Deep VR detail in Cinematic VR films.

A testament to the finesse of the Hazelwood Loft set is its fine level of detail. The picture above does not do justice to the feeling, and the urge, it gives rise to when experiencing the room via a VR headset – that of wanting to reach out and touch the wall and fabric. Where Polybox have excelled is in having this set run even on the GearVR. No mean feat.

What is Hybrid VR?

A debate rages on whether Cinematic VR is really VR. One camp has gamers who, by nature, inhabit a completely synthetic world. They have experienced the joy of “positional tracking” – something that’s not currently possible even with stereoscopic 360 capture of a scene.

While everyone who can currently afford a 360 camera rig claims they are producing “VR”, we’ve always been of the opinion that, at minimum, an experience needs to be stereoscopic 360 to fool the brain into ‘immersion’. 2D 360, with no sense of scale – where people look 20 feet tall and sports cars look like giant UFOs – is hardly what can be classified as a Cinematic VR experience.

Yet even we have to admit that the way forward is to blend real-world (stereoscopic) 360 photography with CG to qualify for the VR label. From now on, we will only be producing narrative VR experiences that are either entirely synthetic reality or a mix of stereoscopic 360 photography and CG. Eventually, developments such as Lightfield technology will afford a more advanced option. Today’s Lidar and photogrammetry already allow for bringing the real world into the realm of synthetic reality, albeit not in real time.

Take a look at what people like Simon Che and RealityVirtual are doing: documenting real-world locations and preserving history, digitally. It’s a practice we’ve always been keen to see adopted, ever since stereo 3D movies had a renaissance.

Dirrogate:DeepVR:

Earlier in the article we mentioned a need to understand camera placement. While it’s true that many of the rules of traditional cinematography either do not apply or need to be re-written, as it currently stands there is a need to know about camera placement in a Hybrid VR film.

Two currently well-known engines for Hybrid VR filmmaking are Unity and Unreal Engine 4. CryEngine V is a formidable new contender, and worth investigating.

We have settled on Unity. The thing with game engines and current VR hardware comes down to two limitations: polygon count and frame rate.

As it stands, even with just Maya in the scene, creating Dirrogate:DeepVR as a film for mobile VR is ruled out. The frame rate crawls on the GearVR, and optimizing the geometry destroys Maya’s ‘realism’ even further. On the Oculus Rift, we barely manage the recommended 90 frames per second with the terrace, the living room and Maya visible in the shot.
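
For anyone curious how one keeps an eye on this while blocking shots, a frame-rate probe is trivial to script in Unity. This one is purely illustrative, smoothed so the readout doesn’t flicker:

```csharp
using UnityEngine;

// A tiny frame-rate probe to drop into a scene while blocking shots.
public class FrameRateProbe : MonoBehaviour
{
    private float smoothedDelta = 1f / 90f; // start at the 90 fps target

    void Update()
    {
        // Exponential smoothing over the unscaled frame time.
        smoothedDelta = Mathf.Lerp(smoothedDelta, Time.unscaledDeltaTime, 0.05f);
    }

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 200, 20), "FPS: " + (1f / smoothedDelta).ToString("F1"));
    }
}
```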

This is where camera placement comes in. Take a look at the video above. The green lines show the area the audience can see, with everything else removed in real time by the engine. It might be hard to read, but keep an eye on the “SetPass calls” and Tris (triangles) counts in the statistics panel and watch them change as the camera view changes.

  • The way the scene is staged puts Maya in the corner of the terrace, with us (the camera) aimed at her and the far-off buildings in the background. The buildings have a very low polygon count.
  • If, for instance, the scene were blocked such that Maya stood in the doorway near the curtains, and we had our backs to the terrace rails – looking at her and taking in the whole terrace and the living room – the polygon count of the scene would be at its maximum, thereby dropping the frame rate.
  • If we now turn the camera around to view the living room, Maya falls outside the camera’s field of view and so is not rendered by the Game Engine.
  • Not maintaining at least 90 frames per second – the frame rate recommended (indeed mandated) by Oculus for a good VR experience – can lead to strain and an unpleasant experience for audiences.

The solution? In the Unity Engine, “occlusion culling” – a term that VR filmmakers and DPs may soon be spouting – is one weapon. In plain language: the game engine skips rendering any geometry that is hidden (occluded) by a larger object in front of it. In addition, there is frustum culling (anything out of the field of view of the camera is not rendered).
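
Occlusion and frustum culling are handled by the engine itself (occlusion data is baked in the Unity editor, not scripted), but one culling weapon that is scriptable is per-layer culling distances: small props can stop rendering long before the far clip plane. The layer names below are our own assumptions about how a set might be organized:

```csharp
using UnityEngine;

// Per-layer culling distances: distant small props are dropped early,
// while the skyline stays visible. Assumes "SmallProps" and "Buildings"
// layers exist in the project -- both are hypothetical names.
[RequireComponent(typeof(Camera))]
public class SetCullingDistances : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        float[] distances = new float[32]; // one entry per Unity layer; 0 = use far clip plane

        distances[LayerMask.NameToLayer("SmallProps")] = 15f;  // books, cushions vanish early
        distances[LayerMask.NameToLayer("Buildings")] = 500f;  // keep the skyline visible

        cam.layerCullDistances = distances;
        cam.layerCullSpherical = true; // measure radially: less popping when the head turns
    }
}
```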

Intelligent camera placement and scene blocking is a skill needed in Cinematic VR filmmaking, as we are discovering… at least until such time as hardware catches up and the difference between Reality and Virtual Reality is blurred further.

We haven’t talked about the ethics involved with Digital actors. Questions such as these come to mind:

  • How close should the Director allow the audience to get to the Digital actors?
  • In Dirrogate:DeepVR we will not be enabling physics on the clothing of the actors. This is important because audiences with VR input devices – such as a Leap Motion controller, a Kinect, or the (ironically named) Touch controller – could stray too far from the intent and narrative.
  • When the audience gets too close to Maya, she either turns away or, if you are playing Dan (yes, there is head-hopping in the film), she might reciprocate with a smile or another friendly gesture – behaviour sketched in code after this list.
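
A sketch of that proximity behaviour, assuming Unity: Maya’s actual rig isn’t public, so the animation triggers, the comfort radius and the audienceIsDan flag are all hypothetical.

```csharp
using UnityEngine;

// Fires a one-shot reaction when the audience enters the actor's
// personal space: a friendly gesture for "Dan", a turn-away otherwise.
public class ProximityReaction : MonoBehaviour
{
    public Transform audienceHead;      // the VR camera rig's head position
    public float comfortRadius = 0.75f; // metres of personal space
    public bool audienceIsDan = false;  // head-hopping: is the viewer "playing Dan"?

    private Animator animator;
    private bool reacted;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        float distance = Vector3.Distance(transform.position, audienceHead.position);

        if (distance < comfortRadius && !reacted)
        {
            // Hypothetical Animator triggers on Maya's rig.
            animator.SetTrigger(audienceIsDan ? "Smile" : "TurnAway");
            reacted = true; // react once per approach
        }
        else if (distance > comfortRadius)
        {
            reacted = false; // reset once the audience backs off
        }
    }
}
```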

Hybrid VR filmmaking might sound intimidating. However, it’s not essential to master every aspect unless you are an indie filmmaker. Filmmaking is still a collaborative effort that brings together talent specializing in different fields. There’s no need to learn coding or computer graphics per se, but awareness and creative-tech knowledge are important – and that is what the RealVision VR filmmaking masterclasses encourage.


Dirrogate:DeepVR is currently in production, and we are interested in speaking to VR investors.