5 ways, and then some, to shoot better 3D for Sports and Golf

By: Clyde DeSouza

3D camera rotation error on left one-third of screen - vertical misalignment (click for larger image)

Masters Golf in 3D:

The ongoing Masters Golf Tournament is being captured in stereoscopic 3D, webcast in impressive HD (720p) and broadcast to select venues, where it can be viewed on the new crop of 3D television sets now on the market. The live webcast is itself a first, giving the event a worldwide audience. In this article we take an open look at what makes watching sports in 3D so compelling, how to enhance the experience for viewers, and some of the possible errors witnessed on the first or second day of the event's 3D coverage. The event is being shot and produced by a few leading companies and organizations.

Camera Rotation Errors when shooting Live Stereoscopic 3D:

3D camera vertical misalignment on right side - possible keystone correction artifact?

In the first screen grab of the article and in the image above, both from the live webcast, it is evident that one of the cameras is rotated. This gives rise to vertical misalignment of the images, visible on the left one-third of the frame (first image of the article). The rotation could have been caused either by the physical camera rig not being mounted precisely, or by real-time image warping in hardware or software that corrects for keystoning (due to convergence, or toe-in, of the cameras). It is hard to tell which is the cause, and readers are invited to share their thoughts. Regardless, vertical misalignment between 3D cameras is one of the biggest causes of viewer fatigue when watching a 3D presentation, because our eyes never move vertically independently of each other.
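
As a rough illustration of why a small roll error shows up mostly towards the edges of frame, here is a minimal Python sketch (the roll angle and pixel positions are made-up numbers, not measurements from the broadcast): a pure roll of one camera displaces a feature vertically by roughly its horizontal distance from centre times the sine of the roll angle.

```python
import math

def vertical_disparity_from_roll(x_px, y_px, roll_deg):
    """Vertical offset (in pixels) introduced by rolling one camera, for a feature
    at (x_px, y_px) measured from the image centre. A pure roll maps
    (x, y) -> (x*cos(t) - y*sin(t), x*sin(t) + y*cos(t)), so the vertical error
    grows with horizontal distance from centre - i.e. it is worst near the frame edges."""
    t = math.radians(roll_deg)
    y_rotated = x_px * math.sin(t) + y_px * math.cos(t)
    return y_rotated - y_px

# A feature 600 px left of centre in a 1280x720 frame, with one camera rolled 0.5 degrees:
print(round(vertical_disparity_from_roll(-600, 0, 0.5), 1))  # ~ -5.2 px of vertical misalignment
```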

3D Cardboarding from using Telephoto and Zoom lenses:

"Cardboard" like 3D effect from using zoom and telephoto lens (click for larger image)

"Cardboard" like 3D effect from using zoom and telephoto lens (click for larger image)

Also known as the "binoculars effect," this is mostly undesirable in a 3D presentation. It is similar to watching a sporting event through a pair of binoculars: there is a sense of 3D, but the different "layers" that make up the scene look like cardboard cut-outs. Although everyday audiences may not consciously notice the difference, most stereographers and 3D professionals advise against using zoom or telephoto lenses when shooting 3D.
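
To put a rough number on the cardboarding, here is a minimal Python sketch with assumed figures (65 mm interaxial, a golfer about half a metre "thick", and two framings of the same subject size: a 35 mm lens at 5 m versus a 350 mm lens at 50 m; none of these values come from the actual rigs). It compares the on-sensor disparity spread across the subject's own depth in each case.

```python
def disparity_mm(f_mm, interaxial_mm, distance_m):
    """On-sensor disparity (mm) of a point at distance_m, relative to infinity,
    for a parallel side-by-side pair: d = f * b / Z."""
    return f_mm * interaxial_mm / (distance_m * 1000.0)

def depth_spread(f_mm, interaxial_mm, near_m, far_m):
    """Disparity range across a subject spanning near_m..far_m - a rough proxy
    for how 'round' that subject will look on screen."""
    return disparity_mm(f_mm, interaxial_mm, near_m) - disparity_mm(f_mm, interaxial_mm, far_m)

b = 65.0  # assumed interaxial in mm (roughly eye spacing)

# A golfer ~0.5 m 'thick', framed at the same size two ways:
close_wide = depth_spread(f_mm=35.0,  interaxial_mm=b, near_m=5.0,  far_m=5.5)   # 35 mm lens at 5 m
far_tele   = depth_spread(f_mm=350.0, interaxial_mm=b, near_m=50.0, far_m=50.5)  # 350 mm lens at 50 m

print(f"{close_wide:.3f} mm vs {far_tele:.3f} mm")  # ~0.041 mm vs ~0.005 mm: roughly 9x less depth across the telephoto subject
```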

Realistically, it is not always possible to get well-rounded 3D in every situation, especially if the camera rig is a long way from the subject of interest. Some suggestions to overcome or balance this out:

  • Use a side-by-side camera rig. Forget the notion that a beam-splitter rig with "converge on focus" is the be-all and end-all of a good 3D rig.
  • Judge the distance that will be shot with the side-by-side rig and *increase* the interaxial of the two cameras as needed when zoomed in on a far subject, then pull interaxial back as you zoom out – do not pull convergence via toe-in (see the sketch after this list). Shooting parallel also reduces the chance of vertical misalignment that can occur and go unnoticed in a live 3D shoot.
  • The online editor could start the "cut" on the already zoomed-in image and have the camera operator perform a slow zoom out – a sort of "reveal" of foreground elements that gives a sense of scene depth.
  • Get creative – add a black-border "virtual binoculars" mask to signal that it is a zoomed shot; additional information such as playing-field distances, ball speed etc. can be superimposed to add value to the scene.
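
Here is a minimal sketch of the interaxial arithmetic behind the second point above, assuming parallel cameras and a chosen on-sensor parallax budget (the 0.3 mm budget, focal lengths and distances below are illustrative assumptions, not broadcast specifications). It shows why a distant, long-lens framing asks for considerably more interaxial than a close, wide one.

```python
def interaxial_for_budget(f_mm, near_m, far_m, sensor_parallax_budget_mm):
    """Interaxial (mm) that spends a chosen on-sensor parallax budget between the
    nearest and farthest points of interest, for parallel cameras:
    budget = f * b * (1/Znear - 1/Zfar)  =>  b = budget / (f * (1/Znear - 1/Zfar))."""
    return sensor_parallax_budget_mm / (f_mm * (1.0 / (near_m * 1000) - 1.0 / (far_m * 1000)))

# Assumed, purely illustrative numbers:
print(round(interaxial_for_budget(f_mm=35.0,  near_m=3.0,   far_m=60.0,  sensor_parallax_budget_mm=0.3)))  # ~27 mm for a close, wide shot
print(round(interaxial_for_budget(f_mm=300.0, near_m=100.0, far_m=500.0, sensor_parallax_budget_mm=0.3)))  # ~125 mm for a distant telephoto shot
```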

The next image explains the last point a bit more…

Camera Framing for 3D scenes:

Compressed 3D depth framing - no foreground 3D cues (click for larger image)

In the screen grab above, proper camera framing would make all the difference. If the 3D camera framed this scene from a lower angle, additional perspective cues would kick in: the undulations of the greens, the water, and foreground elements such as taller grass or flowers. These would lead the spectator's eye from the foreground to the subjects of interest – the players on the far side of the turf.

On cue, the camera operator could then initiate a very slow zoom, adjusting interaxial *if* necessary, while the live online editor switches to a camera closer to the subjects. This no doubt takes practice and experience, but that is the luxury that high-profile events and budgets have – and should demand.

Another bad practice (in the author's opinion) is the flattening of 3D seen in this scene, where the convergence is on the subjects – i.e. the subjects sit "on the screen plane" – and no foreground or background depth is visible, leading to an almost 2D-like image. From the screen grab it can be seen that even if projected on a 20-foot screen there would be little depth in such a scene.

Converge on Focus = Mismanagement of 3D Depth Budget:

Converge on Focus = flat 3D, bad use of depth budget (click for larger image)

Converging on an area of interest, and/or toeing-in the cameras, is a debate between two camps of stereographers. One camp is completely against the practice and emphasizes that 3D cameras should shoot parallel, with any manipulation of horizontal parallax done via Horizontal Image Translation (HIT) of the left and right images – by hardware in a live environment or by software in post. The other camp believes that converging on an area of interest is "natural" and is how our eyes function in real life; the premise is that the viewer should not have to do extra work to converge and focus, but should simply sit back and enjoy the story being told.
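
For readers who have not seen HIT in practice, here is a minimal sketch of the idea using NumPy arrays as stand-in frames (a real pipeline would do this in dedicated hardware, with proper re-scaling or matting of the cropped edges; the 12 px shift below is arbitrary).

```python
import numpy as np

def horizontal_image_translation(left, right, shift_px):
    """Crude Horizontal Image Translation (HIT): crop shift_px columns off opposite
    edges of the left/right frames so the zero-parallax (screen) plane moves,
    without any toe-in and therefore without keystone distortion.
    Expects shift_px >= 0; a positive shift pushes the scene back behind the screen.
    Both inputs are HxWx3 arrays from a parallel rig."""
    if shift_px == 0:
        return left, right
    l = left[:, shift_px:, :]    # drop columns from the left edge of the left eye
    r = right[:, :-shift_px, :]  # drop columns from the right edge of the right eye
    return l, r

# Toy usage with blank frames standing in for a camera pair:
left  = np.zeros((720, 1280, 3), dtype=np.uint8)
right = np.zeros((720, 1280, 3), dtype=np.uint8)
l, r = horizontal_image_translation(left, right, shift_px=12)
print(l.shape, r.shape)  # (720, 1268, 3) (720, 1268, 3) - re-scale or matte afterwards
```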

While this is a nice argument, it often fails the moment the viewer tries to look at any area of the scene *other* than the one the online editor or director – whatever their credentials as storytellers – has decided on. Most directors and cinematographers still carry over rules of camera work, frame composition and so on from the world of 2D movie making and sports coverage, and these do not translate well to stereoscopic 3D imagery.

It leads to "flat" images as in the screen grab above. That still frame was taken during a camera pan while convergence was being manipulated. We can postulate that the online editor / stereographer was trying to avoid excessive positive Z depth, which would force the eyes to diverge and cause headaches if viewers tried to fuse the spectators in the background.
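
The divergence concern is easy to quantify. As a minimal sketch, assuming an eye separation of about 65 mm (real broadcast guidelines are usually more conservative), the largest positive parallax a screen can show before the eyes must diverge is simply the eye separation expressed in that screen's pixels:

```python
def max_positive_parallax_px(screen_width_m, horizontal_res_px, eye_separation_mm=65.0):
    """Largest positive (behind-screen) parallax, in pixels, before the eyes would have
    to diverge: the on-screen separation must stay below the interocular distance.
    The ~65 mm eye separation is an assumed average, not a standard."""
    mm_per_px = screen_width_m * 1000.0 / horizontal_res_px
    return eye_separation_mm / mm_per_px

print(round(max_positive_parallax_px(1.2, 1920)))   # living-room 3D TV (~1.2 m wide): ~104 px
print(round(max_positive_parallax_px(12.0, 1920)))  # ~12 m cinema screen: ~10 px - a far tighter budget
```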

He or she therefore rightly decided to pull convergence (and with it, the audience's attention) to an area in the foreground. However, this logic, when combined with poor camera framing – i.e. having nothing of interest in the foreground – leads to boring, flat 3D. The scene does show the ball lying in the foreground, but that only calls for better framing; a lower camera angle would make for a more immersive shot. Which leads us to one of the most powerful yet least understood ways of creating compelling stereo 3D imagery – mise en scène.

Mise en scène and the Psychology of using 3D as a medium:

Mise en scène - "immerses" the viewer in the scene in 3D (click for larger image)

A sporting event such as golf is an ideal candidate for the "immersion" that only 3D imagery can provide – whether to a viewer at home watching with 3D glasses and a 3D TV, or to an audience watching the event streamed live in a cinema. The very nature of a large cinema screen already provides much of the sense of "being there"; combined with 3D, it can be an unparalleled experience, second only to physically attending the event in person.

Yet most of the event seems to have been covered with a 2D mindset – a traditional approach to covering a golf event. Some observations, followed by suggestions:

  • Most shots looked like they were from fixed locations on cranes and jibs, relying on zooms and telephoto lenses, with little variation in camera angle – no doubt down to the budget for camera rigs. One suggestion would be to mix camera rigs: non-proprietary (less expensive) ones alongside the proprietary beam-splitter rigs.
  • Were there any shots that showed the turf, or that "immersed" the audience for brief moments in rich close-ups of nature on the golf course? This was hard to tell, as we did not have uninterrupted streaming (poor broadband connection).
  • More creative framing of scenes – handheld 3D rig shots showing a POV, creating a "virtual audience" view that "teleports the viewer" on location so that they can see other people around them *and* in focus. The viewer could then choose whether to converge their eyes on the people and expensive golf attire at the event, or on the ball and the golfer preparing for the swing. If such shots were present, please mention them in the comments.
  • When using the medium of 3D, directors should get into the minds of the viewers and allow them enough leeway to "live the story" as it unfolds, rather than being entirely passive subjects guided by the camera as in the world of 2D storytelling. This is true for feature presentations as well as live events such as sports and concerts.

No doubt this looks like an idealistic wish list of things that are simply not possible in a live, unscripted event. However, there would have been time for practice sessions with cameras and crew before such a high-profile event – if not this time, then for future 3D sporting events.

Something else worth mentioning, though it needs further investigation: many of the scenes covered in 3D at the Masters seem to have been shot parallel but without any HIT applied (in real time via hardware). This led to shots where the depth budget appeared to start at the screen plane and project only forward (negative Z space), with very little positive Z space used. There is no screen grab to show this, but it is worth looking into; the effect may also have occurred on scenes shot with telephoto lenses.
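
As a minimal sketch of how much HIT that would take, assuming an illustrative rig (a sensor about 8.8 mm wide, 65 mm interaxial, 25 mm lens, 1280 px across – none of these figures are from the actual production), the shift that moves the zero-parallax plane of a parallel rig from infinity to a chosen convergence distance is f·b/Zc on the sensor, converted to pixels:

```python
def hit_shift_px(f_mm, interaxial_mm, converge_at_m, sensor_width_mm, h_res_px):
    """Horizontal image translation (pixels) that moves the zero-parallax plane of a
    parallel rig from infinity to converge_at_m: shift = f*b/Zc on the sensor,
    converted to pixels. With no shift at all, everything sits at or in front of the screen."""
    shift_on_sensor_mm = f_mm * interaxial_mm / (converge_at_m * 1000.0)
    return shift_on_sensor_mm * h_res_px / sensor_width_mm

# Assumed rig: 8.8 mm wide sensor, 65 mm interaxial, 25 mm lens, converging at 15 m:
print(round(hit_shift_px(25.0, 65.0, 15.0, 8.8, 1280), 1))  # ~15.8 px of HIT needed
```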

With the Masters still ongoing, hopefully any of these observations that hold true can be applied in time for the last two days of 3D coverage.

Disclaimer:

This is an open discussion, and the article will be edited (with credit given) to reflect any corrections contributed in the comments by professionals reading it. We hope to learn what to avoid, and to gather suggestions on what to add to the mix, for more effective stereoscopic 3D presentations of live events, particularly sports. In no way is this article meant to belittle the excellent, pioneering effort of shooting the ongoing Masters Golf Tournament in live stereo 3D.