[Image: the anaglyph frame under discussion, with circled areas claimed to show wrong parallax]

Anaglyph images – A Stereographer’s Swiss Army knife:

This will be a slightly more technical article than what’s usually found on the RealVision site, and it will seek both to teach and to answer questions as we move from traditional “framed” stereoscopic movies to what can be considered the minimum to qualify as video-based “VR”: recording the ‘depth channel’.

But first, a disclaimer, acknowledgement and notice: the footage, screengrabs and imagery are from the InstaPRO 360 site, freely available for download :here: The footage used was noted by the manufacturers as coming from a pre-release camera, but now that the camera is commercially available, at least some of the observations in this article are still noticeable.

Anaglyphs are used by stereographers to quickly scan a scene for stereo “sweetness,” but reading an anaglyph is part art, part science. I won’t delve into the more closely guarded tricks/tips of anaglyph reading in this article, but much of what will be talked about should benefit the reader and the VR-motivated filmmaker.
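For context, a red-cyan anaglyph is trivially simple to construct, which is part of its charm as a diagnostic tool: take the red channel from the left eye and the green and blue (cyan) channels from the right eye, and the colored “fringes” you’ll read about below are the disparity itself. Here is a minimal sketch using NumPy and Pillow (the filenames are placeholders):

```python
import numpy as np
from PIL import Image

# Minimal red-cyan anaglyph: red channel from the left eye,
# green and blue (the cyan pair) from the right eye.
left = np.asarray(Image.open("left.png").convert("RGB"))    # placeholder inputs
right = np.asarray(Image.open("right.png").convert("RGB"))

anaglyph = right.copy()
anaglyph[..., 0] = left[..., 0]  # swap in the left eye's red channel

Image.fromarray(anaglyph).save("anaglyph.png")
```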

A word of advice:

Don’t be caught coming off as a novice by uttering these immortal words:

“Anaglyph isn’t the way to display 3D.” Yes, stereographers know that.

[Image: anaglyph street scene – train at wrong parallax from temporal error]

Reading Equirectangular projection Anaglyphs:

The correct way to read anaglyphs, after first viewing them without red-cyan glasses, is to then wear a pair (for the image above: red filter over the left eye). Next, place the mouse cursor over the image to find zero parallax – the screen plane. (Click the image for a larger version.)
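If you’d rather measure than eyeball it, the cursor trick has a simple programmatic analogue: at a chosen point, find the horizontal shift that best re-aligns the red (left-eye) channel with the green channel (a stand-in for the right eye’s cyan). A shift of zero means zero parallax – the screen plane. The sketch below is illustrative only; the filename, probe coordinates, window size and search range are made-up assumptions, and it assumes the probe point sits well inside the image:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("anaglyph.png").convert("RGB")).astype(np.float64)
left_ch, right_ch = img[..., 0], img[..., 1]  # red vs. green as L/R proxies

def disparity_at(y, x, win=16, search=40):
    """Horizontal shift (in pixels) that best aligns L and R around (y, x)."""
    patch = left_ch[y - win:y + win, x - win:x + win]
    best_shift, best_err = 0, np.inf
    for s in range(-search, search + 1):
        cand = right_ch[y - win:y + win, x - win + s:x + win + s]
        err = np.mean((patch - cand) ** 2)  # mean squared error over the window
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift  # 0 = screen plane; sign tells in front of / behind it

print(disparity_at(400, 600))  # probe coordinates are made up
```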

Observations:

  1. You’ll quickly, or eventually (it gets better with experience), find zero parallax to be on the woman in the white coat crossing the street with a cellphone and handbag. An experienced stereographer will know this without wearing glasses, because that is the area where the red-cyan “fringes” converge. Actually, to be precise, she has a little bit of negative parallax (out-of-screen depth), and zero parallax is just slightly behind her, as the mouse cursor will confirm.
  2. Now – and this is what prompted this technical article – in a Facebook group discussion, an opinion was voiced that the parallax in this scene is wrong or “off”. See the circles on the first image of this article. The argument being: how can areas adjacent to each other have varying depth?

Here’s how you find out if this is true:

Wearing anaglyph glasses, slowly run the mouse cursor from the woman in white, at near zero parallax, upward. Notice how the “depth”, or hollowness, in the scene increases? That’s correct parallax (obviously), but what happens as we keep going higher, toward the sky (the zenith)? Notice how the cursor now seems to read almost back at zero parallax? Try it again – this time starting at the left, at the man in black on the street in line with the curved-pillar building (with the LCD screen below) – and it will be more obvious. Why does the parallax read “wrong” as you go higher?

The answer is – or can be – the nature of the optics of fisheye lenses, and the diminishing or graduated stereo that occurs as you move toward the outer edges of a fisheye lens.

[Image: Angenieux Optimo 3D-ready lens package]

Back in the day – circa 2010 – when stereoscopic 3D films were all the rage (to be fair, most blockbuster films are still 3D, only no one speaks about it), the Angenieux Optimo lenses were a stereo DP’s and stereographer’s prized possession. They came complete with a carry case, making one look even more serious if one arrived on set wearing a trench coat… But why were they important?

[Image: depth skew in a 2010 FIFA World Cup 3D broadcast]

Take a look at the above image from the 2010 FIFA World Cup. The “skew” that you see in the left image toward the left of the frame (use anaglyph glasses), and the skew in the right image toward the right of the frame, can be attributed to mismatched (telephoto) lenses.

Imagine what kind of optical artifacts can emanate from mismatched fisheye lenses not manufactured to stringent standards.

If that were not enough, also be aware that with fisheye lenses, stereoscopic disparity decreases as one moves toward the perimeter of the image.

This is why you see “depth” graduating toward zero parallax as you move toward the zenith. It can still catch an experienced stereographer off-guard because, after all, equirectangular projection, stereo and VR are still new; everyone is learning and there are no experts.
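To get a feel for the magnitude of this falloff, here is a back-of-the-envelope sketch. The baseline, the subject distance and the cosine model itself are simplifying assumptions rather than measurements of any particular camera: the horizontal baseline’s component perpendicular to the viewing ray shrinks as you look up, and the angular disparity shrinks with it.

```python
import numpy as np

BASELINE_M = 0.065   # assumed inter-lens baseline, roughly interocular
DISTANCE_M = 20.0    # assumed distance to a building facade

for elev_deg in (0, 30, 60, 85):
    elev = np.radians(elev_deg)
    b_eff = BASELINE_M * np.cos(elev)   # effective baseline at this elevation
    disparity_rad = b_eff / DISTANCE_M  # small-angle approximation
    print(f"elevation {elev_deg:2d}°: disparity ≈ {np.degrees(disparity_rad):.3f}°")
```

By 85° of elevation the disparity has collapsed to under a tenth of its horizon value, which matches what the mouse cursor shows near the zenith.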

[Image: zero parallax almost on the taxi’s rear “A” beam]

Reader Exercise:

Here’s one more image from the scene. Try to figure out where zero parallax is.

It’s almost at the taxi’s rear “A” beam (for lack of a better description: behind the small triangular rear window). But by now you’re already reading anaglyphs like a pro – the “A” beam is also where there appear to be the fewest red-cyan fringes, so it’s a safe bet to home in on.

As you move the mouse cursor (wearing 3D glasses) you’ll notice the correct depth of the scene, but as you move higher, you’ll notice the graduated depth falloff, coming back toward zero parallax. We now know why.

To its credit, the InstaPRO 360 camera has a lot… a lot of stereo real estate working for it, compared even to – dare I say – the Jaunt camera.

Don’t believe me? Take a look at a screen grab from the Jaunt below (including the ‘Jaunt crater’ at the nadir). See just the sliver of 3D that’s visible? If you look even more closely, you’ll notice the optical flow artifacts around the boy’s arm.

[Image: Jaunt screen grab (brothers_keeper) – crater at the nadir, optical flow “rubber stamp” artifacts (@cly3d)]

So what’s stopping the InstaPRO 360 from being the de facto stereoscopic VR camera, and at that gorgeous price? To me (and this is strictly my personal opinion, based on footage I’ve seen, without having had access to the camera), it’s that the cameras are not scanline (or genlock) synced.

The image below explains a bit more of why scanline-level sync (in CMOS sensors, at least) is important.

[Image: train at wrong depth – temporal sync error]

Observations:

  • The train is at the wrong depth – why? That’s temporal mis-sync at work. It’s not enough to have frame-level sync in stereoscopic video-based VR; you need true scanline-level sync if the scene is to appear at correct depth throughout. Here, depth conflicts appear because the train is in motion.
  • Another example below: (again, click the images for larger versions)

[Image: feet and pedestrians at wrong depth near the Tsutaya building – temporal error]

See the feet of the man and the woman with the white bag on the zebra crossing – it sort of hurts the eye. Also notice the “depth” conflict on the group of people under the “Tsutaya” building, in the region featuring the white base of the streetlight. These people were in motion when the still image was grabbed, and they show the depth (parallax / disparity) conflict.
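To put rough numbers on why this matters, here is a minimal sketch; the frame rate, the assumed half-frame sync offset and the pedestrian speed are illustrative assumptions, not measurements of the InstaPRO 360:

```python
# Spurious disparity from temporal mis-sync; all numbers are illustrative.
FPS = 30.0
SYNC_OFFSET_S = 0.5 / FPS       # assume the two eyes expose half a frame apart
WALK_SPEED_PX_PER_S = 240.0     # assume a pedestrian crosses ~240 px/s in the pano

# Horizontal motion captured at two different instants reads as disparity:
spurious_disparity_px = WALK_SPEED_PX_PER_S * SYNC_OFFSET_S
print(f"spurious disparity: {spurious_disparity_px:.1f} px")  # ~4 px
```

A few pixels of false disparity can rival the scene’s real disparity, which is why moving feet and trains land at the wrong depth.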

Which brings me to another point that was being discussed on the Facebook group.

In my opinion, there is a huge difference between parallax errors caused by un-synced cameras and the messy stitching of equirectangular stereo imagery done just to “get the stitch right”. An example of a hot mess of a stereo scene (I don’t have access to a larger-resolution image) is the one below:

[Image: swan scene from the Z Cam V1 PRO – red marks bad parallax, blue marks expected parallax]

The argument made with the image above (shot by another 360 VR camera – the ZCam V1 PRO, which does have good genlock/hardware sync) is:

  • The depth/parallax in the stitch shows inconsistent depth. I fully agree.
  • What I do not agree with is equating this “mess of a stereo stitch” with the stitched stereo output of the InstaPRO 360 camera images in this article.
  • The argument put forward was that the InstaPRO 360 camera exhibits the same characteristics as the image above (see the first image in this article for the circled areas in question).

Now that we know how to read anaglyph equirectangular images both without and with anaglyph glasses on, we know why the skyscrapers in the InstaPRO 360 footage show seemingly no parallax: it is, or could well be, the effect of the optics and the nature of stereo around the perimeter of fisheye lenses toward the zenith. (Around the horizon, there’s overlap between the fields of view of adjacent lenses to mitigate the loss of stereo.)
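As a rough sanity check on that overlap, consider the sketch below. The lens count and the field of view are assumptions for illustration, not manufacturer specifications: with N lenses evenly spaced around the horizon, each adjacent pair shares roughly FOV − 360/N degrees of view, which is why stereo survives at the horizon even as it thins toward the poles.

```python
# Rough overlap arithmetic for an N-lens ring; both numbers are assumptions.
N_LENSES = 6            # assumed ring of six fisheyes
FISHEYE_FOV_DEG = 200   # assumed per-lens horizontal field of view

spacing = 360 / N_LENSES                # 60° between adjacent optical axes
overlap = FISHEYE_FOV_DEG - spacing     # view shared by neighboring lenses
print(f"adjacent lenses share ~{overlap:.0f}° of horizontal view")
```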

Secondly, we now know that in the case of the InstaPRO 360 camera, not having full genlock sync (based on viewing footage), temporal or time-multiplexed stereo depth conflicts will occur –

which is not the same as the parallax anomalies seen in the swan image from the Z Cam V1 PRO above, which was stitched by pixel manipulation to get rid of stitch lines at the expense of stereoscopic accuracy (red shows bad parallax, blue shows expected parallax). The swan image was provided by the manufacturers themselves, acknowledging that it’s not the way to do good stereo, but also equating it to the stereo output of the InstaPRO 360 imagery in this article.

[Image: optical flow warping and stereo errors]

The Shortcomings of Optical Flow Stitching for Stereoscopic Video-Based VR:

Optical flow stitching for seamless equirectangular output from VR cameras is popular, and Facebook, at least, has made its algorithms for achieving it open source so that third parties can implement them. However – and again, this is my opinion – I’ve not seen optical flow do any favors whatsoever for stereoscopic footage in third-party implementations.

I’d earlier alerted two VR camera manufacturers (ZCam and InstaPRO 360) to the anomalies in optical-flow-stitched stereoscopic output from their cameras, in a review :here:

So… what’s wrong with optical flow for stereo VR? Take a look at the image above from the swan video clip posted on Facebook. It’s only on closer inspection that you notice the irregularities.

  • The man’s head is warped (read: this leads to what stereographers term a form of ‘retinal rivalry’). This is the optical flow algorithm ‘borrowing’ pixels as it sees fit to achieve a smooth stitch. As you can tell, this will hurt viewers in stereo.
  • The shape of the child seat on the bicycle is distorted.
  • Rubber-banding or cookie-cutter artifacts – these manifest as ‘stretching’ or embossing effects, usually (but not only) in areas where there is space between foreground and background objects.

An even more severe form of stereoscopic error manifests itself: because optical flow algorithms borrow pixels from up to one frame of video (it’s configurable) to smooth out seam lines, they introduce “temporal artifacts” even if the cameras are genlock synced! So you will see anomalies in the feet of people walking, the wheels of cars, etc., putting these moving objects at the wrong depth and leading to possible eye strain.

The only effective use of optical flow with negligible anomalies I’ve seen was in the Google Jump system. The rig does use up to 16 cameras, which helps to a great extent. So while I did notice the slight anomalies behind Don Cheadle in the bar scene, overall the video was stellar. See a good optical flow implementation below:

You can read my review of that excellent piece of Cinematic VR filmmaking :here:

Conclusions:

  • The InstaPRO 360 camera – I’m torn between wanting to covet it and keeping an eye on its development. It ticks all the right boxes: sharp all-around stereo, and the price. The caveat: I’m hoping the sync gets fixed, or else, to me, it’s a good monoscopic 360 camera.
  • The ZCam V1 Pro – expensive, but within the range of the Nokia OZO (which I still don’t consider a true VR camera, because it does only about 180–200 degrees of stereo; I’m hoping a v2 comes out). The Z Cam V1 Pro does have the option of stitching stereo traditionally, which will yield good results, and there’s every reason to believe tweaking optical flow parameters can give better results depending on the scene.
  • There’s also the hope that optical flow algorithms will get smarter, or that a different optical flow algorithm could be used.
  • The InstaPRO 360, at that affordable price(!), and if the sync is fixed, just might lead to multicam narrative VR films becoming the norm – yes, we know another camera will be ‘seen’ if it’s in the ‘frame’; I’m referring to multiple cameras, nodal stereo 360 and quick-deployment scenarios.

[Image: recommended JPEG save settings for anaglyphs]

One more tip on Anaglyphs:

There are some other tricks/tips to reading and understanding anaglyph images that I’ve not gotten into, but here’s one more… By their nature, anaglyphs are prone to JPEG color-subsampling artifacts when saved. The way to avoid these is to always save an anaglyph image with a quality setting of 11 or 12 (in the older Photoshop settings), so as to preserve the quality of the image and not get ghosting or embossed halos around edges. PNG or BMP are also better options, at the cost of file size.
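For readers working outside Photoshop, here is a minimal sketch of the same idea using Python and the Pillow library (the filenames are placeholders): force 4:4:4 chroma by passing subsampling=0 at high quality, or sidestep lossy compression entirely.

```python
from PIL import Image  # Pillow

img = Image.open("anaglyph_source.png")  # placeholder source file

# JPEG: high quality plus 4:4:4 chroma (subsampling=0) keeps the red-cyan
# fringes crisp; the default 4:2:0 subsampling smears them into ghosted halos.
img.save("anaglyph_q95.jpg", quality=95, subsampling=0)

# Or skip lossy compression altogether, at the cost of file size.
img.save("anaglyph_lossless.png")
```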