Original Software Application 
 
The original software application I used for my experiments is Bryce 5.0, a 3D modeling and landscape visualization software I have used for 12 years now (in four versions), and the application I am most proficient with. I chose this application because my proficiency allowed me to quickly and effectively experiment with many 3D model alternatives, test variables, and explore options or ideas. In the boujou software, by comparison, I am slowed by the learning curve of getting acquainted with that application's tools and workflow.
 
The Bryce software was more than sufficient for the development of the digital site model, but if I were to create an animation of a digital subject walking through the digital site model, I would use one of the high-end animation packages like Maya or Lightwave.
 
 
Coordinate System 
 
The coordinate system I used is a world coordinate setting. With a camera set at zero rotation on all axes, the X coordinate runs left to right, the Y coordinate runs up and down, and the Z coordinate runs nearer to or farther from the camera. If you are using software that has view coordinates as its default (3D Studio Max, as one example), you may consider setting the system to world coordinates instead before entering the data I have listed.
 
Also note, for Lightwave users, that Lightwave uses aircraft-style rotation concepts (pitch, yaw, and bank) instead of simple X, Y, Z rotations, so you may have to calculate the appropriate conversions. I use Modeler primarily, so I don't deal much with the rotation issues in Layout.

Lens Field of View Issues
 
There is some confusion about lens field of view angles and specifications because, in basic principle, a lens has a circular field of view, and that circle must be large enough to reach the corners of a rectangular camera or render field along its diagonal. So the true field of view angle is not the same as the horizontal or vertical viewing angle through the camera's aperture opening.
 
The calculated horizontal view angle for a 15mm lens on a 16mm camera with a standard aperture width of 0.402" is 37.8 degrees. But because the true (diagonal) FOV is greater, the setting used for FOV in the Bryce camera attributes is FOV: 47.40.
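These angles can be checked with a little pinhole-lens trigonometry. The sketch below is mine, not part of the original workflow: the 0.292" aperture height is an assumed nominal 16mm camera value, and the simple diagonal calculation with these nominal dimensions lands a couple of degrees below the 47.40 Bryce setting, so Bryce's FOV parameter evidently reflects its own internal definition or slightly different aperture values.

```python
import math

def horizontal_fov(focal_mm, aperture_width_in):
    """Horizontal angle of view (degrees) for a lens of the given focal
    length on a camera gate of the given aperture width."""
    half_width_mm = aperture_width_in * 25.4 / 2.0
    return 2.0 * math.degrees(math.atan(half_width_mm / focal_mm))

def diagonal_fov(focal_mm, aperture_w_in, aperture_h_in):
    """Diagonal (true circular) angle of view in degrees."""
    half_diag_mm = math.hypot(aperture_w_in, aperture_h_in) * 25.4 / 2.0
    return 2.0 * math.degrees(math.atan(half_diag_mm / focal_mm))

# 15mm lens on a 0.402" gate: ~37.6 degrees, matching the ~37.8 above
# once rounding of the aperture/focal values is allowed for.
print(round(horizontal_fov(15.0, 0.402), 1))
# 25mm lens for comparison (the ASC Manual lists ~23 degrees): ~23.1
print(round(horizontal_fov(25.0, 0.402), 1))
# Diagonal for 15mm with an assumed 0.292" gate height: ~45.6
print(round(diagonal_fov(15.0, 0.402, 0.292), 1))
```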
 
So Step One in creating or replicating the site model is calibrating your camera's horizontal view angle to 37.8 degrees. The method I used for the calibration involved creating two bars radiating out from the same XZ position as the camera (but slightly below it, so the bars did not block the camera view): one bar rotated 18.9 degrees on the Y axis, the other rotated -18.9 degrees on the Y axis. Then posts were added to each bar as true vertical markers, and the camera FOV was adjusted until those two vertical markers sat exactly at the left and right edges of the rendered image. This ensured the digital camera was capturing a horizontal angle of view of 37.8 degrees.
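The geometry behind this calibration can be sanity-checked numerically. This is a sketch of my own (the pinhole-projection model and the normalized screen coordinate are assumptions, not part of the Bryce workflow): a marker sitting at plus or minus 18.9 degrees off the camera axis should project exactly onto the frame edges when, and only when, the horizontal FOV is 37.8 degrees.

```python
import math

def screen_x(point_angle_deg, h_fov_deg):
    """Normalized horizontal screen coordinate of a point at a given
    angle off the camera axis, under a simple pinhole projection.
    -1.0 is the left frame edge, +1.0 the right frame edge."""
    return (math.tan(math.radians(point_angle_deg))
            / math.tan(math.radians(h_fov_deg / 2.0)))

# Markers on the bars rotated +/-18.9 degrees land exactly on the edges:
print(round(screen_x( 18.9, 37.8), 3))  # 1.0
print(round(screen_x(-18.9, 37.8), 3))  # -1.0
# With a wider FOV the same markers fall inside the frame (|x| < 1),
# which is the cue to keep tightening the camera setting.
print(screen_x(18.9, 47.4) < 1.0)  # True
```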
 
I would advise anyone using a 3D application to calibrate the camera view the same way, to ensure the horizontal viewing angle is 37.8 degrees.
 
You may, of course, test the site model with any other lens angle, including the angle for a 25mm lens if you like (horizontal angle of view of 23 degrees, according to the ASC Manual), and see if you can find camera positions that effectively replicate the film frames (same position and scale for the site objects). I tested the model with a 25mm lens specification for two months, and it failed to solve every time, though there may be alternatives I did not try (given that there is an infinite number of alternatives). I can say conclusively that the 15mm lens angles and positions do produce an excellent match for the real film.

Render Aspect Ratio
 
Use a 4:3 render aspect ratio (4 wide, 3 high) to match the 16mm film aspect ratio of standard 16mm cameras, including the K-100 used by Patterson.
 
Background Image
 
If your software supports displaying a background image behind the scene objects, the frames from the film showing the seven camera positions can be used as background images. Each frame is labeled with text identifying which camera position it corresponds to.
 
Black Border on Film Frames
 
The image frames from the film which are used to test the 3D model have a black border around them. This is because true full-frame versions of the film are almost non-existent for general research purposes. I had to go to John Green's location with a portable scanning device to make these frames, and they are a copy of a film version which was printed on an optical printer at the same time as the more commonly seen zoomed-in version, the F352 freeze-frame segment, and the slow-motion segment (all done on an optical printer, not a contact printer). But an optical printer acts like a projector, and that requires an intermittent shutter and pulldown movement; on the copy film stock side, there is a film gate with an aperture opening, like a camera's. This aperture on the copying side masked off a small portion of the true full frame, reducing the visible frame to about 96% of true full frame.
 
The following image shows a true "full frame," and the reddish border is what was missing from the scans I made of John Green's full-frame copy. This missing section was accounted for with the black border added around the frames I used.
 
A contact print, by comparison, just puts the copy film stock on a roller, puts the source film on top of it, and shines a light through them both as they roll continuously through the printer. You get true full frame with such a copy process.
 
The result of using the optical printer is that even the "full frame" version I scanned is actually only about 96% of true full frame (compared to some still-frame prints that are true full frame), so the black border reconstructs the true full-frame size in relation to the image, which is necessary for a photogrammetry analysis.
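The arithmetic for restoring the frame can be sketched as follows. This is my own sketch, under two assumptions not stated above: that the 96% figure applies linearly (not by area), and that the optical printer's crop was symmetric on all sides.

```python
def padded_size(visible_px, visible_fraction=0.96):
    """Given the pixel size of a scanned (cropped) frame dimension and the
    fraction of true full frame it represents, return the reconstructed
    full-frame size and the black border to add on each side."""
    full_px = round(visible_px / visible_fraction)
    border_px = (full_px - visible_px) // 2
    return full_px, border_px

# A hypothetical 960-pixel-wide scan at 96% of full frame needs a
# 20-pixel black border on each side to restore a 1000-pixel full frame.
print(padded_size(960))  # (1000, 20)
```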
 
 
Making Trees for the Site Model
 
If you are attempting to build a site model in a 3D visualization/CAD software, here is some advice on making the trees. In Bryce, there is a wonderful object option called a symmetrical lattice, which is in effect a mirrored pair of mesh objects with their bases joined and the mirrored half clipped to be invisible. I use a displacement map to shape the tree, and an image texture map applied to the object's top section (since the mesh innately comes as a top and bottom section) for the texture. The object size notes are for the actual full rectangular mesh, including the clipped invisible part.
 
But this is a rare object type for many of the other 3D visualization applications, so you may want to make the trees as simple image planes.
 
Make a 2D image plane (a single polygon with height and width, but no depth) and apply the image texture map I've provided, using simple front planar mapping. Then use the included alpha channel map for the transparency setting, and it will remove everything but the actual tree shape. Use my object position and rotation coordinates, assuming your software puts the point of origin at the exact center of the polygon. That should give you the tree in correct position and scale relative to the site.
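For software that needs explicit geometry rather than a placed primitive, the center-origin convention above can be sketched as follows. This is my own illustration, not part of the original instructions; the axis and sign conventions (Y up, rotation about the vertical axis) follow the world coordinate description earlier, but your application's handedness may differ, so verify against a known object.

```python
import math

def plane_corners(cx, cy, cz, width, height, y_rot_deg):
    """World-space corners of a 2D image plane whose point of origin is
    its exact center, rotated about the vertical (Y) axis and translated
    to the given position. Returns four (x, y, z) tuples."""
    c = math.cos(math.radians(y_rot_deg))
    s = math.sin(math.radians(y_rot_deg))
    corners = []
    for dx, dy in ((-width / 2, -height / 2), (width / 2, -height / 2),
                   (width / 2,  height / 2), (-width / 2,  height / 2)):
        # Rotate the local X offset about Y, then translate to position.
        corners.append((cx + dx * c, cy + dy, cz - dx * s))
    return corners

# An unrotated 2x2 plane at the origin has corners one unit out each way:
print(plane_corners(0.0, 0.0, 0.0, 2.0, 2.0, 0.0))
```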
 
For each tree, there are three image maps on the page. Each has an image texture map, an alpha channel map, and a displacement map. For an Image Polygon, just use the texture and alpha. If your modeling program supports displacement mapping on meshes, use the displacement map and texture.
 
Use the coordinate data on the "Image Plane Data Form" to assemble the model.