{"id":289,"date":"2013-10-08T13:24:33","date_gmt":"2013-10-08T18:24:33","guid":{"rendered":"http:\/\/homepages.uc.edu\/~yaozo\/wordpress\/?p=289"},"modified":"2013-10-08T13:24:33","modified_gmt":"2013-10-08T18:24:33","slug":"a-beginners-guide-to-shooting-stereoscopic-3d","status":"publish","type":"post","link":"https:\/\/zhuoyao.net\/index.php\/2013\/10\/08\/a-beginners-guide-to-shooting-stereoscopic-3d\/","title":{"rendered":"A Beginner\u2019s Guide to Shooting Stereoscopic 3D"},"content":{"rendered":"<h1>A Beginner\u2019s Guide to Shooting Stereoscopic 3D<\/h1>\n<div>May 1, 2010<\/div>\n<div>\n<h5>by Tim Dashwood \u00a0(revised September 10, 2011)<\/h5>\n<p>3D is back in style again and it seems like everyone, from Hollywood producers to wedding videographers, is interested in producing stereoscopic 3D content.<\/p>\n<p>So how can you get involved by shooting your own 3D content?\u00a0 It\u2019s actually quite easy to get started and learn the basics of stereoscopic 3D photography.\u00a0 You won\u2019t be able to sell yourself as a stereographer after reading this beginner\u2019s guide (it literally takes years to learn all the aspects of shooting and build the necessary experience to shoot good stereoscopic 3D) but I guarantee you will have some fun and impress your friends.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Pair of XF305 camcorders\" alt=\"Pair of XF305 camcorders\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/Pair-of-XF305-camcorders.png\" width=\"276\" height=\"213\" \/>The basic principle behind shooting stereoscopic 3D is to capture and then present two slightly different points of view and let the viewer\u2019s own visual system determine stereoscopic depth.\u00a0 It sounds simple enough but the first thing any budding stereographer should learn is some basic stereoscopic terminology.\u00a0 These few terms may seem daunting at first but they will form the basis of your stereoscopic 
knowledge.<\/p>\n<p><strong><em>Terminology<\/em><\/strong><\/p>\n<p><strong><em>Stereoscopic 3D a.k.a. \u201cStereo3D,\u201d \u201cS-3D,\u201d or \u201cS3D\u201d<br \/>\n<\/em><\/strong>\u201c3D\u201d means different things to different people.\u00a0 In the world of visual effects it primarily refers to CGI modeling.\u00a0\u00a0This is why stereographers refer to the craft specifically as \u201cstereoscopic 3D\u201d or simply \u201cS3D\u201d to differentiate it from 3D CGI.<\/p>\n<p><strong><em>Interaxial (a.k.a. \u201cStereo Base\u201d) &amp; Interocular (a.k.a. \u201ci.o.\u201d) separation<\/em><\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Interaxial Separation\" alt=\"Interaxial Separation\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/interaxial-separation.png\" width=\"297\" height=\"198\" \/><\/p>\n<p>Interaxial Separation between lenses<\/p>\n<p>The\u00a0<em>interocular\u00a0<\/em>separation (or interpupillary distance) technically refers to the distance between the centers of the human eyes.\u00a0 This distance is typically accepted to be an average of 65mm (roughly 2.5 inches) for a male adult.<\/p>\n<p><em>Interaxial\u00a0<\/em>separation is the distance between the centers of two camera lenses (specifically the\u00a0<a title=\"Entrance Pupil Definition\" href=\"http:\/\/en.wikipedia.org\/wiki\/Entrance_pupil\" target=\"_blank\" rel=\"noopener\">entrance pupils<\/a>.) 
The human interocular separation is an important constant stereographers use to make calculations for interaxial separation.\u00a0 Beware that interaxial separation is often incorrectly referred to as \u201cInterocular\u201d and vice versa.\u00a0 In the professional world of stereoscopic cinema it has become the norm to refer to interaxial separation as \u201ci.o.\u201d even though it is the incorrect term.<\/p>\n<p><strong><em>Binocular Vision, Retinal Disparity and Parallax<\/em><\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Convergence of Eyeballs\" alt=\"Eye convergence\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/Converged-Eyeballs.png\" width=\"161\" height=\"375\" \/><\/p>\n<p>Eyeballs converged on center object<\/p>\n<p>Binocular Vision simply means that two eyes are used in the vision system. \u00a0 Binocular Vision is very important to most mammals (including humans) because it allows us to perceive depth at close range.<br \/>\nTry this:\u00a0 Hold one finger next to your ear.\u00a0 Now stretch your other arm out straight and hold up another finger.\u00a0 Now bring your two fingers together and touch the tips together. 
\u00a0 It was easy, right?\u00a0 Now repeat the same procedure but close one eye.\u00a0 Were you able to touch your fingers together on the first try?\u00a0 Now you know how important binocular vision is at close range.<br \/>\nWhen we look around at objects at different distances from us the images of those objects will be projected on our retinas in slightly different locations for each eye.\u00a0 Our brain can interpret this \u201cRetinal Disparity\u201d and help us determine depth.<br \/>\nWhen we shoot 3D with two cameras from slightly different positions the same thing happens;\u00a0 each camera\u2019s sensor registers the objects in the scene in slightly different horizontal positions.\u00a0 We call this difference \u201cparallax.\u201d<\/p>\n<p><strong><em>Convergence &amp; Divergence<br \/>\n<\/em><\/strong>Binocular Vision and Parallax are the primary visual tools animals use to perceive depth at close range.\u00a0 The wider an animal\u2019s eyes are apart (its\u00a0<em>interocular<\/em>\u00a0distance) the deeper its binocular depth perception or \u201c<em>depth range<\/em>.\u201d<\/p>\n<p>At greater distances we start to use monocular depth cues like perspective, relative size, occlusion, shadows and relation to horizon to perceive how far away objects are from us.<br \/>\nOf course it would be difficult to look at double images all day so instead our eyes naturally angle in towards the object of interest to make it a single image.\u00a0 This is called\u00a0<em>convergence<\/em>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"converged eyes\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/converged-eyes.png\" width=\"160\" height=\"90\" \/><\/p>\n<p>Converged Eyes<\/p>\n<p>Here\u2019s an example of how your eyes use convergence in the real world.\u00a0 Hold a pen about one foot in front of your face and look directly at it.\u00a0 You will feel your eyes both angle towards the pen in order to converge on 
it, creating a single image of the pen.\u00a0 What you may not immediately perceive is that everything behind the pen appears as a double image (diverged.)\u00a0 Now look at the background behind the pen and your pen will suddenly appear as two pens because your eyes are no longer converged on it.\u00a0 This \u201cdouble-image\u201d is\u00a0<em>retinal disparity<\/em>\u00a0at work and it is helping your brain determine which object is in front of the other.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"diverged eyes\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/diverged-eyes.png\" width=\"160\" height=\"90\" \/><\/p>\n<p>Diverged Eyes<\/p>\n<p>What\u00a0never\u00a0happens to your eyes in the natural world is\u00a0<em>divergence<\/em>, which would mean that your eyes would angle outward.\u00a0 This is because the furthest point you could possibly attempt to look at is at infinity and even infinity would only require that your eyes be angled perfectly parallel to each other. \u00a0 This is why stereographers should avoid background parallax values in their scene that may require the eyes to diverge when viewed.\u00a0 This is easy to keep in check through some simple math but we will cover that a little later.<\/p>\n<p><strong><em>Stereo Window, the Screen Plane and Negative, Zero or Positive Parallax<br \/>\n<\/em><\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Perceived Position relative to Stereo Window\" alt=\"Perceived Position relative to Stereo Window\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/Perceived-Position-relative-to-Stereo-Windo.png\" width=\"314\" height=\"241\" \/><\/p>\n<p>Perceived Position relative to Stereo Window<\/p>\n<p>Simply put, the \u201cStereo Window\u201d refers to the physical display surface. 
You will be able to visualize the concept if you think of your TV screen as a real window that allows you to view the outside world.\u00a0 Objects in your stereoscopic scene can be <em>behind<\/em>\u00a0or outside the window (<em>positive parallax<\/em>,)\u00a0<em>on<\/em>\u00a0the window (the\u00a0<em>Screen Plane<\/em>\u00a0or\u00a0<em>zero parallax<\/em>,) or inside, between you and the window (<em>negative parallax<\/em>.) \u00a0 In the same way objects appear in different horizontally offset locations on our retina to create parallax separation, stereoscopically recorded and displayed objects will appear to have different horizontal offsets (parallax) depending on their depth in the scene.\u00a0 If an object has no perceivable amount of parallax then we consider it to appear on the screen surface just as the star in the illustration.\u00a0 This is why converging on an object will make it appear to be at the screen.\u00a0 This can be done by converging the cameras on the objects while shooting, or by sliding the images horizontally in opposite directions during post production.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Left Eye View and Right Eye View\" alt=\"Left Eye View and Right Eye View\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/left-eye-view-vs-right-eye-view.png\" width=\"1320\" height=\"425\" \/><\/p>\n<p>Left Eye Presented View versus Right Eye Presented View (exaggerated separation for demonstration only)<\/p>\n<p>If an object\u2019s left image is to the left of the corresponding right image then that object has positive parallax and will appear to be behind the screen.<\/p>\n<p>If an object\u2019s left image is to the right of the right image then it has negative parallax and will cause your eyes to cross, which will suggest to your brain that the object is in front of the screen.<\/p>\n<p>This is the basic principle behind stereoscopic shooting and emulating human binocular vision with two 
cameras.<\/p>\n<p><strong><em>Respecting the Stereo Window<br \/>\n<\/em><\/strong>We discussed briefly how the display screen represents a window and objects can be behind, at or in front of the window.\u00a0 If you want an object to appear in front of the window it cannot touch the left or right edge of the frame.\u00a0 If it does the viewer\u2019s brain won\u2019t understand how the parallax is suggesting the object is in front of the screen, but at the same time it is being occluded by the edge of the screen.\u00a0 When this contradiction happens it is referred to as a\u00a0<em>window violation<\/em>\u00a0and it should be avoided.\u00a0\u00a0 Professional stereographers have a few tricks for fixing window violations with lighting or soft masks but it is best for beginners to simply obey this rule.<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/overlayed_with_disparities.png\"><img loading=\"lazy\" decoding=\"async\" title=\"overlayed_with_disparities\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/overlayed_with_disparities-300x167.png\" width=\"300\" height=\"167\" \/><\/a><\/p>\n<p>Rotational and Vertical Disparities in Source Footage<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/disparities_corrected.png\"><img loading=\"lazy\" decoding=\"async\" title=\"disparities_corrected\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/disparities_corrected-300x163.png\" width=\"300\" height=\"163\" \/><\/a><\/p>\n<p>Disparities corrected so all straight lines are parallel<\/p>\n<p><strong><em>Unwelcome Disparities (Geometric, lens and temporal)<br \/>\n<\/em><\/strong>Disparity is a \u201cdirty word\u201d for stereographers.\u00a0 In fact the only \u201cgood\u201d type of disparity in S3D is horizontal disparity between the left and right eye images.\u00a0 As mentioned before, this is known as parallax.<br \/>\nAny other type 
of disparity in your image (vertical, rotational, zoom, keystone or temporal) will cause the viewer\u2019s eyes to strain to accommodate. This can break the 3D effect and cause muscular pain in the viewer\u2019s eyes or even nausea.\u00a0 Every stereographer will strive to avoid these disparities on set by carefully calibrating the stereoscopic rig and it will be tweaked even further in post production through the use of 3D mastering software.<\/p>\n<p><strong><em>Ortho-stereo, Hyper-stereo &amp; Hypo-stereo<br \/>\n<\/em><\/strong>I already mentioned that the average interocular of humans is considered to be about 65mm (2.5 inches.)\u00a0 When this same distance is used as the interaxial distance between two shooting cameras then the resulting stereoscopic effect is typically known as \u201cOrtho-stereo.\u201d\u00a0 Many stereographers choose 2.5\u201d as a stereo-base for this reason.\u00a0 If the interaxial distance used to shoot is smaller than 2.5 inches then you are shooting \u201cHypo-stereo.\u201d\u00a0 This technique is common for theatrically released films to accommodate the effects of the big screen.\u00a0 It is also used for macro stereoscopic photography.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"photos8_mouse\" alt=\"photos8_mouse\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/photos8_mouse-300x199.jpg\" width=\"300\" height=\"199\" \/><\/p>\n<p>Hypo-stereo &amp; Gigantism: Imagine how objects look from the P.O.V. of a mouse. Photo courtesy photos8.com<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"photos8_elephant\" alt=\"photos8_elephant\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/photos8_elephant-300x199.jpg\" width=\"300\" height=\"199\" \/><\/p>\n<p>Hyper-stereo &amp; Dwarfism: Imagine how objects look from the P.O.V. of an elephant. 
Photo courtesy photos8.com<\/p>\n<p>Lastly, Hyper-stereo refers to interaxial distances greater than 2.5 inches.\u00a0 As I mentioned earlier the greater the interaxial separation, the greater the depth effect.\u00a0\u00a0 An elephant can perceive much more depth than a human, and a human can perceive more depth than a mouse.\u00a0 However, using this same analogy, the mouse can get close and peer inside the petals of a flower with very good depth perception, and the human will just go \u201ccross-eyed.\u201d\u00a0\u00a0 Therefore decreasing the interaxial separation between two cameras to 1\u201d or less will allow you to shoot amazing macro stereo-photos and separating the cameras to several feet apart will allow great depth on mountain ranges, city skylines and other vistas.<\/p>\n<p>The trouble with using hyper-stereo is that scenes with gigantic objects in real-life may appear as small models.\u00a0 This phenomenon is known as\u00a0<em>dwarfism<\/em>\u00a0and we perceive it this way because the exaggerated separation between the taking lenses allows us to see around big objects much more than we do in the real world. Our brain interprets this as meaning the object must be small. \u00a0The opposite happens with hypo-stereo, where normal sized objects appear gigantic. (<em>Gigantism<\/em>.)<\/p>\n<p>If one attempts to shoot with two cameras configured in a side-by-side stereoscopic mount the smallest interaxial distance available will be the width of the camera.\u00a0 In most cases the width of the camera will be around 6 inches. \u00a0This might seem like a big limiting factor, but other specialized equipment is available to achieve small interaxial distances with almost any sized camera. 
\u00a0 (More on that in the \u201cSelecting your Gear\u201d segment.)<\/p>\n<p><strong><em>Viewing 3D: Passive Polarization, Active Shutter Glasses, Anaglyph &amp; Autostereo<br \/>\n<\/em><\/strong>There are three basic types of glasses used for presenting stereoscopic 3D material.\u00a0 In most of the theatres in North America the common method is passive polarized glasses with either circular or linear polarizers.\u00a0 There are a few consumer and professional HD 3D monitors that use the same passive method. However, most of the consumer 3DTVs on the market use some form of active shutter glasses to flicker the left and right images on and off at 120Hz.\u00a0\u00a0 Autostereoscopic displays use lenticular lenses or parallax barrier technologies to present stereoscopic material without the use of glasses.<br \/>\nAnaglyph glasses will work with almost any display but use color filters to separate the left and right images.\u00a0\u00a0 The most common configurations are red\/cyan, blue\/amber, and green\/magenta.<\/p>\n<h2><strong><em>The Quick Math &amp; Some Rules to Remember<\/em><\/strong><\/h2>\n<p><strong><em>Stereoscopic Parallax Budget (sometimes called Depth Budget) vs Depth Bracket<br \/>\n<\/em><\/strong>The Depth Bracket of your scene refers to the actual distance between your closest object in the frame and the furthest object.\u00a0 The Parallax Budget refers to your calculated maximum positive parallax and desired maximum negative parallax represented in percentage of screen width.\u00a0 For example if I determine through a simple calculation that my positive parallax should never exceed 0.7% of screen width and I have determined that my negative parallax should not exceed 2% of screen width, then my total Parallax Budget is 2.7%. 
\u00a0 The Depth Bracket must fit within the Parallax Budget.\u00a0 There are many algebraic formulas to determine the proper interaxial distance to achieve this.<\/p>\n<p><strong><em>Native Parallax for final display size<br \/>\n<\/em><\/strong>The native parallax for a given screen size simply refers to what percentage of screen width will equal the human interocular.\u00a0 If you are using 2.5 inches as the baseline interocular and you know your presentation screen will be 30 feet wide (360 inches) then just divide 2.5 by 360.\u00a0\u00a0<strong>2.5<\/strong>\u00a0<strong>\u00f7 360 = 0.007 or 0.7%<\/strong>\u00a0 Therefore the Native Parallax of a 30 foot screen is 0.7%, so we should make sure to keep our maximum positive parallax under 0.7% of screen width if we plan to show our footage on a 30 foot wide screen.\u00a0 If we shoot for a 65\u201d 3DTV, then we can get away with over 3% positive parallax.<\/p>\n<p><strong><em>The 1\/30th Rule<br \/>\n<\/em><\/strong>The 1\/30 rule refers to a commonly accepted rule that has been used for decades by hobbyist stereographers around the world.\u00a0 It basically states that the interaxial separation should only be 1\/30th\u00a0of the distance from your camera to the closest subject.\u00a0 In the case of ortho-stereoscopic shooting that would mean your cameras should only be 2.5\u201d apart and your closest subject should never be any closer than 75 inches (about 6 feet) away.<\/p>\n<p><strong>Interaxial x 30 = minimum object distance<br \/>\nor<br \/>\nMinimum object distance \u00f7 30 = Interaxial<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Side by Side Rig in use\" alt=\"Side by Side Rig in use\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/side-by-side-rig.png\" width=\"226\" height=\"342\" \/>If you are using a couple of standard 6\u2033 wide camcorders in a side by side rig as close as they will fit together then the calculation would look like: 
6\u201d x 30 = 180 inches or 15 feet.\u00a0 That\u2019s right\u2026 15 feet!<\/p>\n<p>But does the 1\/30 rule apply to all scenarios?\u00a0 No, the 1\/30 rule certainly does not apply to all scenarios. \u00a0In fact, in feature film production destined for the big screen we will typically use a ratio of 1\/60, 1\/100 or higher. \u00a0The 1\/30 rule works well if your final display screen size is less than 65 inches wide, your cameras were parallel to each other, and your shots were all taken outside with the background at infinity. \u00a0When you are ready to take the next step to becoming a stereographer you will need to learn about parallax range and the various equations available to calculate maximum positive parallax (the parallax of the furthest object,) which will translate into a real-world distance when you eventually display your footage.<\/p>\n<p>Remember the earlier illustration of the eyes pointing outward (diverging)? \u00a0Well it isn\u2019t natural for humans to diverge and therefore the maximum positive parallax when displayed should not exceed the human interocular of 2.5 inches (65mm.) 
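The rules of thumb above (the 1/30 rule and the native-parallax percentage) reduce to one-line formulas. Here is a minimal Python sketch of those calculations; the function names are illustrative, not from any real stereography library:

```python
# Rules-of-thumb calculators for stereo base and native parallax,
# following the formulas in this guide. All distances are in inches.

INTEROCULAR_INCHES = 2.5  # commonly accepted average adult interocular


def min_object_distance(interaxial_inches, ratio=30):
    """1/30 rule: closest subject should be at least 30x the interaxial."""
    return interaxial_inches * ratio


def max_interaxial(closest_distance_inches, ratio=30):
    """Inverse form: interaxial = minimum object distance / 30."""
    return closest_distance_inches / ratio


def native_parallax_percent(screen_width_inches):
    """Percentage of screen width that equals the human interocular."""
    return INTEROCULAR_INCHES / screen_width_inches * 100


print(min_object_distance(6))    # 180 inches (15 feet) for 6" side-by-side camcorders
print(max_interaxial(75))        # 2.5" interaxial for a subject 75 inches away
print(round(native_parallax_percent(360), 1))  # 0.7 (% of a 30-foot-wide screen)
```

The printed values match the worked examples in the text: a 6-inch side-by-side rig pushes the closest subject out to 15 feet, and a 30-foot screen has a native parallax of 0.7% of screen width.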
\u00a0 You can readjust the convergence point and bring the maximum positive parallax within the limits of the native display parallax (2.5 inches) but that will also increase your negative parallax.<\/p>\n<h2><strong><em>Selecting Your Gear<\/em><\/strong><\/h2>\n<p><strong><em>Side by Side Rig vs Beam-Splitter Rig<br \/>\n<\/em><\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Side by Side Rig\" alt=\"Side by Side Rig\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/side-by-side-rig-on-white.png\" width=\"252\" height=\"165\" \/><\/p>\n<p>Side by Side Rig<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Beamsplitter Rig\" alt=\"Beamsplitter Rig\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/beansplitter-rig-on-white.png\" width=\"211\" height=\"240\" \/><\/p>\n<p>Beamsplitter Rig<\/p>\n<p>Interaxial separation is an important factor when shooting S3D, so the width of your two cameras will determine the minimum interaxial separation in a side by side rig. 
Such minimum interaxial distances are far too wide for any application other than hyper-stereo shots of landscapes, mountain ranges, helicopter shots, etc.<\/p>\n<p>In order to shoot subjects in close range (within 15 or 20 feet) you will require a beamsplitter rig.<br \/>\nBeam-splitters use a 50\/50 or 60\/40 mirror (similar to teleprompter glass) that allows one camera to shoot through the glass and the other to shoot the reflection.\u00a0\u00a0 The interaxial can be brought down to as little as 0mm (2D) with beamsplitter rigs.<br \/>\nThere are over 20 different beamsplitter rigs on the market ranging from $2500 USD to $500,000.\u00a0 However, many other types of disparity can be introduced when shooting through the glass (polarization effect, dust contamination, color cast, etc.)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Panasonic's H-FT012\" alt=\"Panasonic's H-FT012 for micro4\/3 cameras\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/h-ft012.jpg\" width=\"250\" height=\"209\" \/><\/p>\n<p>Panasonic&#8217;s H-FT012 for micro4\/3 cameras<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Loreo3D_9005\" alt=\"Loreo3D lens\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/Loreo3D_9005-300x263.jpg\" width=\"240\" height=\"210\" \/><\/p>\n<p>Loreo3D Attachment for DSLR cameras<\/p>\n<p><strong><em>Special Stereoscopic Lenses<\/em><\/strong><\/p>\n<p>There are special stereoscopic lenses on the market designed for various digital SLR cameras.\u00a0 These lenses will work with a single camera but capture a left and right point of view in the same frame.\u00a0\u00a0 The concept is intriguing but the lenses are very slow (F\/11 \u2013 F\/22), they use a smaller portion of the image sensor for each eye, they are usually made from plastic optics instead of glass and (in the case of the Loreo) the aspect ratio is vertically oriented.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" 
title=\"Fujifilm_W1\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/Fujifilm_W1-300x225.jpg\" width=\"300\" height=\"225\" \/><\/p>\n<p>Fujifilm\u2019s W1 S3D Camera<\/p>\n<p><strong>Purpose-built Stereoscopic cameras<\/strong><br \/>\nStereoscopic film cameras have existed for decades.\u00a0\u00a0 I personally own a Kodak Stereo camera from the early 50\u2019s that I\u2019ve shot hundreds of 3D slides with and I love the simplicity. Recently manufacturers like Fujifilm, Panasonic, Sony and JVC have recognized the demand for digital versions of these cameras and released new products to market. \u00a0Some can record to separate left and right files or side-by-side format files for easy workflows in most non-linear editing systems (and easy compatibility with\u00a0<a title=\"Stereo3D Toolbox LE\" href=\"http:\/\/www.dashwood3d.com\/stereo3dtoolboxle.php\">Stereo3D Toolbox<\/a>) but many of the new systems record the two streams into a self-contained Multiview Video Coding (MVC) file that requires specific editing software (currently only Sony Vegas 10 on Windows) or a demuxing stage to separate the MVC into discrete left and right files (as with JVC\u2019s bundled Mac\/PC software.)<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"GY-HMZ1U 3D Camcorder\" alt=\"GY-HMZ1U 3D Camcorder\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/gyhmz1u_300.jpg\" width=\"408\" height=\"208\" \/><\/p>\n<p>JVC&#8217;s GY-HMZ1U 3D camcorder can record side by side AVCHD (60i) or MVC (60i &amp; 24p) and ships with Mac\/PC demuxing software<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Tri-Level Sync Generator (Top) &amp; Stereo3D Muxer\" alt=\"Tri-Level Sync Generator and Stereo3D Muxer\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/trilevel-sync-generator-e1315635810643.jpg\" width=\"199\" height=\"300\" \/><\/p>\n<p>AJA&#8217;s Gen10 Tri-Level Sync Generator and Hi5-3D 
Muxer<\/p>\n<p><strong><em>Genlock capability<\/em><\/strong><br \/>\n<a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/trilevel-sync-generator-e1315635810643.jpg\"><br \/>\n<\/a>If you plan to shoot stereoscopic video with any action then it will be beneficial to use two cameras that can be genlocked together.\u00a0 Cameras that cannot be genlocked will have some degree of temporal disparity.\u00a0 However using the highest frame rate available (60p for example) will reduce the chance of detrimental temporal disparity.\u00a0 There are also some devices capable of synchronizing cameras that use LANC controllers.<\/p>\n<p><strong><em>Interlace vs. Progressive<\/em><\/strong><br \/>\nEvery frame of interlaced video will inherently have some degree of temporal disparity between the fields.\u00a0 It is recommended to shoot with progressive formats whenever possible.<\/p>\n<p><strong><em>Lens &amp; Focal Length selection<\/em><\/strong><br \/>\nWider lenses will be easier to shoot with for the beginner and will also lend more \u201cdimensionality\u201d to your subjects.\u00a0\u00a0 Telephoto lenses will compress your subjects flat so they appear as cardboard cutouts.\u00a0\u00a0 Stay away from \u201cfisheye\u201d lenses because the distortion will cause many geometrical disparities.<\/p>\n<p>OK, so you\u2019ve learned your terminology and selected your gear.\u00a0\u00a0 Now what?\u00a0 It\u2019s time to get out there and shoot.\u00a0\u00a0 We haven\u2019t discussed the various calculations or the rules of S3D but I encourage you to shoot now so you can learn from your mistakes.<\/p>\n<p>&nbsp;<\/p>\n<p><strong><em>Turn off Image Stabilization<br \/>\n<\/em><\/strong>If you are using video cameras with image stabilization you must turn the feature off or the cameras\u2019 optical axes will move independently of each other in unpredictable ways.\u00a0\u00a0 As you can imagine this will make it impossible to tune out 
disparities.<\/p>\n<p><strong><em>Manually Set White Balance<br \/>\n<\/em><\/strong>Use a white card, chart or 18% gray card to set the manual white balance of both cameras.\u00a0 On beamsplitter rigs it is not advisable to use preset white balance settings because the mirror glass introduces its own tint to the image on each camera.\u00a0 \u00a0 Set the WB switch to either A or B and press and hold the AWB button to execute the white balance sequence.<\/p>\n<p><strong><em>Gain<br \/>\n<\/em><\/strong>It is best to shoot on 0dB gain when possible.\u00a0 The noise and grain patterns at high gain levels will be unique on each camera for each frame and therefore will be a visual disparity.<\/p>\n<p><strong><em>Use identical settings on both cameras<\/em><\/strong><br \/>\nIt is very important to use the same type of camera, same type of lens and exactly the same camera settings (white balance, shutter speed, aperture, frame rate, resolution, zoom, codec, etc.) on both cameras.\u00a0\u00a0 Any differences will cause a disparity.\u00a0 It is also a good idea to use manual focus and set it to the hyperfocal distance or a suitable distance with a deep depth of field.<\/p>\n<p><strong><em>Proper configuration for CMOS shutters<br \/>\n<\/em><\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" title=\"Beamsplitter Rig side view\" alt=\"Beamsplitter Rig Side View\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/05\/side-by-side-rig-side-view.png\" width=\"317\" height=\"268\" \/><\/p>\n<p>CMOS sensor cameras in proper configuration<\/p>\n<p>The CMOS sensors in cameras like the Sony F3, Red, Canon XF105 or XF305 use a rolling shutter that requires a particular mounting configuration in a beamsplitter rig. \u00a0 The tops of the frames must match so there is no rolling shutter disparity between the sensors. 
\u00a0 If the mirror in your rig faces the ground and the upright camera mounts underneath then the camera can be mounted on the plate normally.\u00a0 If your mirror faces up and the upright camera points down then the camera must be mounted upside down so that the top-bottom orientation of the sensors match.<\/p>\n<p><strong><em>Use a clapper or synchronize timecode<\/em><\/strong><br \/>\nIf your cameras are capable of genlock and TC slave then by all means use those features to maintain synchronization.\u00a0 If you are using consumer level cameras it will be up to you to synchronize the shots in post.\u00a0 In either case you should use a slate with a clapper to identify the shot\/takes and easily sync them.<\/p>\n<p>If your cameras have an IR remote start\/stop it is handy to use one remote to roll &amp; cut both cameras simultaneously.\u00a0\u00a0 If you are shooting stills with DSLRs there are ways to connect the cameras with an electronic cable release for synchronized shutters.<\/p>\n<p><strong><em>Slow down your pans<\/em><\/strong><br \/>\nHowever fast you are used to panning in 2D, cut the speed in half for 3D.\u00a0 If you are shooting in interlace then cut the speed in half again.\u00a0\u00a0 Better yet, avoid pans altogether unless your cameras are genlocked.\u00a0 Whip pans should be OK with genlocked cameras.<\/p>\n<p><strong><em>Label your media \u201cLeft\u201d and \u201cRight\u201d<\/em><\/strong><br \/>\nThis might seem like a simple rule to remember but the truth is that most instances of inverted 3D are the result of a mislabeled tape or clip.\u00a0 Good logging and management of clips is essential with stereoscopic post production.<\/p>\n<p><strong><em>To Converge or Not Converge\u2026 That is the question.<\/em><\/strong><br \/>\nOne of the most debated topics among stereographers is whether to \u201ctoe-in\u201d the cameras to converge on your subject or simply mount the cameras perfectly parallel and set convergence in 
post-production.\u00a0 Converging while shooting requires more time during production but, one would hope, less time in post-production.\u00a0 However \u201ctoeing-in\u201d can also create keystoning issues that need to be repaired later.\u00a0\u00a0 My personal mantra is to always shoot perfectly parallel and I recommend the same for the budding stereographer.<\/p>\n<h2><strong><em>Post<\/em><\/strong><\/h2>\n<p>So you\u2019ve shot your footage and now you want to edit and watch it.\u00a0If you work with After Effects, Motion or Final Cut Pro on the Mac please watch some of the tutorials on this website to learn more about how Stereo3D Toolbox can help you master your S3D content.<\/p>\n<p><strong><em>Fixing Disparity and Setting Convergence<\/em><\/strong><br \/>\nMost stereoscopic post software has sliders to adjust vertical, rotational, zoom, color &amp; keystone disparities.\u00a0\u00a0 Fixing these disparities requires skill and practice but my recommendation is to start with rotation and make sure any straight lines are parallel to each other and then adjust zoom to make sure objects are the same apparent size.\u00a0\u00a0 Next, adjust the vertical disparity control to make sure all corresponding objects are vertically aligned with each other.\u00a0\u00a0 Finally adjust the horizontal convergence to perfectly align the object you wanted to be on the stereo window.<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/stereo3dtoolbox_interface.png\"><img loading=\"lazy\" decoding=\"async\" title=\"stereo3dtoolbox_interface\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/stereo3dtoolbox_interface-244x300.png\" width=\"244\" height=\"300\" \/><\/a><\/p>\n<p>Stereo3D Toolbox Interface<\/p>\n<p><strong><em>Native Pixel Parallax<\/em><\/strong><\/p>\n<p>There is one last thing you should check after aligning each shot.\u00a0 You must make sure that your background doesn\u2019t exceed the Native Pixel Parallax of your display screen 
or your audience\u2019s eyes will diverge (which is bad.)\u00a0\u00a0 The idea here is that the maximum positive parallax (the parallax of your deepest object\/background) must not exceed the human interocular distance when presented.<\/p>\n<p>You can determine the Native Pixel Parallax (a.k.a. NPP) by dividing 2.5 inches by the display screen\u2019s width in inches and then multiplying the result by the number of horizontal pixels (i.e. 1920 for 1080p or 1280 for 720p.)<\/p>\n<p>I present my S3D material on JVC\u2019s 46\u201d 3DTV.\u00a0 Its screen is 42 inches wide and 1920 pixels across, so the calculation is 2.5\/42\u00d71920 = 114 pixels.\u00a0\u00a0 This means that the parallax of the background should not exceed 114 pixels.<\/p>\n<p>In Stereo3D Toolbox you can enter your screen width and the filter will automatically calculate the NPP and display a grid.\u00a0\u00a0 If the parallax in your background does exceed this limit, adjust your convergence to move the depth range back away from the viewer.<\/p>\n<p><strong><em>Share your S3D Masterpiece on YouTube with the yt3d tag<\/em><\/strong><br \/>\nNow that you have finished editing and mastering your S3D movie it is time to share it with the world.\u00a0 YouTube has added the capability to dynamically present S3D content in any anaglyph format.\u00a0\u00a0 All you have to do is export your movie file as \u201cside by side squeezed\u201d and encode it as H.264 with Compressor.\u00a0 I recommend using 1280x720p rather than 1080p for S3D content on YouTube.\u00a0 The workload of rendering the anaglyph result is handled by the viewer\u2019s computer, so 1080p will decrease the frame rate on most laptops.<\/p>\n<p>Upload your movie file to YouTube and then add the tag \u201c<strong>yt3d:enable=true<\/strong>\u201d to enable YouTube 3D mode.\u00a0\u00a0 If your footage is 16\u00d79 aspect ratio, also add the tag \u201c<strong>yt3d:aspect=16:9<\/strong>\u201d. 
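As a quick sanity check of the NPP arithmetic described above, here is a minimal sketch. The function name is my own invention; the 2.5-inch figure is the average adult interocular distance used throughout this article.

```python
# Sketch of the Native Pixel Parallax (NPP) calculation: divide the average
# interocular distance (2.5 inches) by the screen width in inches, then
# multiply by the number of horizontal pixels.

def native_pixel_parallax(screen_width_inches, horizontal_pixels):
    """Maximum background parallax, in pixels, before the viewer's eyes diverge."""
    return 2.5 / screen_width_inches * horizontal_pixels

# The example from the text: a 3DTV with a 42-inch-wide, 1920-pixel screen.
print(round(native_pixel_parallax(42, 1920)))  # -> 114
```

The same function gives the 720p limit for that screen (1280 horizontal pixels) if you swap in 1280 for the pixel count.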
YouTube 3D expects cross-view formatted side-by-side video, so if you exported as side-by-side parallel instead of cross-view you will need to add the tag \u201c<strong>yt3d:swap=true<\/strong>\u201d to ensure the left and right eyes are presented correctly.<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/sbs_squeeze_sample1.png\"><img loading=\"lazy\" decoding=\"async\" title=\"sbs_squeeze_sample1\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/sbs_squeeze_sample1-300x167.png\" width=\"300\" height=\"167\" \/><\/a><\/p>\n<p>Output as Side by Side Squeeze<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/youtube-tags.png\"><img loading=\"lazy\" decoding=\"async\" title=\"youtube tags\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/youtube-tags-300x81.png\" width=\"300\" height=\"81\" \/><\/a><\/p>\n<p>Add YouTube 3D tags<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/youtube-modes.png\"><img loading=\"lazy\" decoding=\"async\" title=\"youtube modes\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/youtube-modes-300x152.png\" width=\"300\" height=\"152\" \/><\/a><\/p>\n<p>YouTube 3D Display Modes<\/p>\n<p><a href=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/anaglyph_sample1.png\"><img loading=\"lazy\" decoding=\"async\" title=\"anaglyph_sample1\" alt=\"\" src=\"http:\/\/www.dashwood3d.com\/blog\/wp-content\/uploads\/2010\/06\/anaglyph_sample1-300x160.png\" width=\"300\" height=\"160\" \/><\/a><\/p>\n<p>Anaglyph Display of finished movie<\/p>\n<p>I think I\u2019ve covered the basics of shooting &amp; posting stereoscopic 3D, but we\u2019ve really just scratched the surface of what a professional stereographer needs to know.\u00a0\u00a0 If you want to continue your education in this area I recommend you pick up Bernard 
Mendiburu\u2019s\u00a0<em>3D Movie Making<\/em>\u00a0or search your library for the \u201cbible\u201d of stereoscopic 3D, Lenny Lipton\u2019s classic \u201c<em>Foundations of the Stereoscopic Cinema: A Study in Depth<\/em>.\u201d<\/p>\n<p>Remember\u2026 stereoscopic 3D cinematography is a craft that can take years to master, and one where even the \u2018experts\u2019 are still learning new techniques.\u00a0 As the popularity of S3D continues to rise there will be many demands on inexperienced videographers to provide stereoscopic services.\u00a0 It is important to remember that 2D can\u00a0<em>look<\/em>\u00a0bad, but 3D can\u00a0<em>feel<\/em>\u00a0bad.\u00a0 The last thing any producer wants is to physically hurt the audience.\u00a0 Therefore, extensive practice and testing are advised before producing content to be viewed by anyone other than the stereographer.\u00a0 Trial and error is the best way to learn this particular craft.<\/p>\n<h6>Tim Dashwood is the founder of\u00a0<a href=\"http:\/\/www.dashwood3d.com\/\">Dashwood Cinema Solutions<\/a>, a stereoscopic research, development &amp; consultancy division of his Toronto-based production company\u00a0<a href=\"http:\/\/www.stereo3dunlimited.com\/\">Stereo3D Unlimited<\/a>. 
Dashwood is an accomplished director\/cinematographer &amp; stereographer and a member of the Canadian Society of Cinematographers.\u00a0 His diverse range of credits includes music videos, commercials, feature films and 3D productions for Fashion Week, CMT, Discovery Channel and the National Film Board of Canada.\u00a0 He also consults on and previsualizes fight\/stunt action scenes for productions such as Kick-Ass and Scott Pilgrim vs the World.\u00a0 Dashwood is the creator of the award-winning Stereo3D Toolbox plugin suite and the Stereo3D CAT calibration and analysis system.<br \/>\n\u00a92011 Tim Dashwood<\/h6>\n<p>&nbsp;<\/p>\n<\/div>\n","protected":false}}